API Security

Autodesk: Enabling New Revenue with the Apigee Platform

New case study

Autodesk makes software for people who build things. Founded in 1982, the company has been synonymous with industry-leading 3D design software for desktops, and it has used APIs for decades. But as the world moved to mobile devices and cloud, the need to transform from desktop technology to cloud offerings took center stage.

That’s why Autodesk has committed itself to digital reinvention, not only moving its award-winning software to the cloud but also investing in APIs to enable new, data-driven revenue streams.

Creating an ecosystem became a key part of achieving this goal.

“We are trying to drive a movement to the cloud in the industries we serve,” said Shawn Gilmour, Autodesk’s director of PaaS strategy. “To really be successful, we need to build an ecosystem. We really need partners and data sharing and integrations to do this—and that’s where APIs come in.”

Internally, modern APIs and the Apigee API platform enabled Autodesk to empower its development teams to easily and securely build on the company's long-standing software applications and resources for new applications and to create customized, connected workflows. It also opened doors to untapped markets beyond Autodesk's bread-and-butter customer base of professional designers.

Read this new case study to learn how the Apigee platform helped Autodesk attract new customers and provided new levels of flexibility and scalability to the company's development efforts.

How Moving Apigee Sense to GCP Reduced Our “Data Litter”

In the year-plus since Apigee joined the Google Cloud family, we’ve had the opportunity to deploy several of our services to Google Cloud Platform (GCP). Most recently, we completely moved Apigee Sense to GCP to use its advanced machine learning capabilities. Along the way, we also experienced some important performance improvements as judged by a drop in what we call “data litter.”

In this post, we explain what data litter is, and our perspective on how various GCP services keep it at bay. Through this account, you may come to recognize your own application, and come to see data litter as an important metric to consider.

First, let’s take a look at Apigee Sense and its application characteristics. At its core, Apigee Sense protects APIs running on Apigee Edge from attacks and unwanted exploitation. Those attacks are usually performed by automated processes, or "bots," which run without the permission of the API owner. Sense is built around a four-element "CAVA" cycle: collect, analyze, visualize and act. It enhances human vigilance with statistical machine learning algorithms.

We collect a lot of traffic data as a by-product of billions of API calls that pass through Apigee Edge daily. The output end of each of the four elements in the CAVA cycle is stored in a database system. Therefore, the costs, performance and scalability of data management and data analysis toolchains are of great interest to us.

Read the whole story on the Google Cloud Platform blog

Apigee’s Top API Editorials of 2017

2017 was a big year for APIs.

They continued to solidify their position as the mechanism through which value is exchanged in modern economies, with literally quadrillions of API calls connecting apps, data, and systems throughout the world each day.

Apigee experts published dozens of editorials last year, both externally and via our Medium publication, to help developers, IT architects, and business leaders understand how to maximize the value of APIs and keep pace with constant technological change.

Here are some of our top articles from 2017, organized by some of the year’s biggest themes. Thank you to all of our readers, and stay tuned for more in 2018!

API management best practices

The nitty gritty details of API management can be challenging, but Apigee experts are here to help with their observations from the field. Be sure to check out “KPIs for APIs and Digital Programs: A Comprehensive Guide” by Michael Leppitsch and “Building an Outside-In Approach to APIs” by Chris Von See.

APIs and digital transformation

Virtually all companies understand the digital transformation imperative: if you don’t continually use technology to evolve your business, you’ll go out of business.

John Rethans explains why APIs are central to this imperative in his Forbes article, “APIs: Leverage for Digital Transformation.” And to explore why the technologies that businesses have been using for years are simply no longer good enough, read Brian Pagano’s “Legacy IT: Like a Horse on the Autobahn.”

To maximize the leverage John discusses in Forbes, APIs must be managed as products that empower developers—not as middleware. For details, see my article “How APIs Become API Products,” which includes real-world examples from Apigee customers Pitney Bowes, Walgreens, and AccuWeather.  

To appreciate the full scope of an API-first business evolution, check out “Lessons from Magazine Luiza’s Digital Transformation,” in which John interviews the CTO of one of South America’s hottest companies. And to understand where multicloud strategies fit into the mix, read David Feuer’s “Multicloud: Taming the Rookery.”

Caught up on how APIs are used today? For a glimpse into the future of digital transformation and the role APIs will play as new technologies emerge, don’t miss our article in Business Insider, “How APIs are Key to Successful Digital Transformation.”

Security

New software vulnerabilities and attacker techniques emerge on a daily basis, so security remained a leading concern for enterprises in 2017. David Andrzejek wrote two of our top articles on the topic. “Using Behavior Analysis to Solve API Security Problems” in Help Net Security examines how user behavior can be monitored in near-real time to identify suspicious behavior and block malicious actors, and “Grinch Bots are out to Spoil the Holidays” in VentureBeat explains how businesses can stop a trend that plagued many online shoppers last year: attackers who use bots to buy up the most in-demand, supply-constrained items.

Digital ecosystems

To adapt to shifts in customer behavior and the competitive landscape, a business doesn’t need to become a platform company, invent new machine learning technologies, or build loads of new software in-house. Instead, it should leverage what others have built to complement its own capabilities, reach new user groups, and explore adjacent markets.

Anant Jhingran and I discuss these ideas in our CIO.com articles “APIs, Ecosystems, and the Democratization of Machine Intelligence” and “Do You Really Want to be a Platform?” For a deep look at these ecosystem dynamics, including a set of simulations, check out Anant and Prashanth Subrahmanyam’s CIO.com article, “3 Golden Rules for Winning in Software-Driven Ecosystems.”

Industry trends

APIs are playing into business strategies in virtually all industries, but there are still scores of specific trends, use cases, and regulatory requirements from one vertical to the next. Some of our top industry-specific stories from 2017 included David Andrzejek’s “Why Haven’t More Banks Embraced Digital Platforms?” in The Financial Brand and Aashima Gupta’s “Voice Interfaces Will Revolutionize Patient Care” in VentureBeat.

Image: Flickr Creative Commons/Jlm Ronan

Apigee Sense: API Protection with Intelligent Behavior Detection

New explainer video

APIs are everywhere. With their pervasiveness comes a whole new set of security threats. They can come in the form of automated software programs that commit brute force attacks, information scraping, and account abuse. They can probe for API security weaknesses and skew analytics.

What’s worse, these threats can be difficult to detect because they blend in with normal API traffic.

That’s where Apigee Sense comes in. Apigee Sense detects, collects, analyzes, and mitigates API attacks, and is purpose-built to protect APIs.

Learn more in this two-minute video.


And visit the Apigee Sense page for details.

How Secure Are Your APIs?

Webcast replay

APIs have revolutionized how companies build new marketing channels, access new customers, and create ecosystems. Enabling all this requires the exposure of APIs to a broad range of partners and developers—and potential threats.

Watch this webcast replay to learn more about the latest API security issues. We discuss:

  • The API threat landscape
  • API security best practices
  • How Apigee is helping customers protect their APIs

API Security: It's More than Web Security

Since the advent of smartphones over a decade ago, enterprises and internet companies have focused on building apps to reach customers and provide a richer, more interactive experience. This shift to apps has also changed the threat landscape, with bots and malicious actors moving from attacking websites to attacking public APIs.

When Apigee’s customers expose critical business functions through APIs, they tempt a variety of bad actors, who try to break in by probing login interfaces or scraping data from catalogs, among other methods.      

These threats also exist in web interfaces, of course. But the move toward apps has changed the basic interaction metaphor between the client and server. Apps can store more data and preserve more state between sessions. This, in turn, has changed security paradigms, and new approaches to securing the application using features like an API key have emerged.    

More than a firewall

A new security approach is required: one based at the application layer that understands the structure of APIs, rather than one at the networking layer that relies mostly on traffic volume.

But API security shouldn’t be considered merely an extension of web security. A common method of providing web security is a WAF (web application firewall), which enables one to set rules based on IP addresses and traffic volume, among other things. The intention is to block individual IP addresses from attacking the system.   

These rules can be effective in warding off simple attacks, but applying these same techniques to the API layer doesn’t take advantage of the richer data available for analysis and action in the application layer.

More than IP addresses

API traffic analysis can go beyond IP addresses; API keys and access tokens can be scrutinized. If the analysis determines that a combination of the IP address and API proxy is under attack, the action taken (blocking or tagging of the IP) can also be fine-tuned to the appropriate scope.    

In other words, API security enables a finer degree of control to block specific bad actors within the appropriate context rather than blocking all traffic from an IP address. The API security layer should be part of a multi-layered approach to protecting back ends against malicious behavior.

Because of its placement in the data stream, an API platform presents an opportunity to monitor for and detect anomalies in API traffic. But the possibilities go beyond detection—and they have to, in order to protect an organization’s backend.

Built on the Apigee Edge API platform, Apigee Sense is a full-cycle API security solution that was purpose-built for APIs. It detects bad behavior patterns at the API level, and blocks bad actors from access based on administrator specifications. Identifying behavior patterns enables a finer grain of control for blocking, so the system doesn’t have to block all users from a particular geography or service provider, which could potentially block normal behavior.

Joy Thomas is a data scientist on Google Cloud’s Apigee team. He is coauthor of “Elements of Information Theory.”

How To Submit Security Tokens to an API Provider, Pt. 2

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity–especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge, WebSphere DataPower, and 3Scale by Red Hat.

In a previous post, I discussed a variety of considerations regarding how bearer tokens should be passed from an API consumer to an API provider. I explored two approaches to client-side bearer token storage: cookies and HTML5 local storage. Here, I’ll look at the implications that these two approaches pose for native mobile apps, traditional web apps, and single page applications (SPAs).

Of course, there are plenty of things that don’t fall perfectly into one of these categories (for a more detailed analysis of the evolution of web applications, see this), but mobile, web, and SPAs comprise a large proportion of the use cases.

Most concerns come down to XSS vulnerabilities and cross-site request forgery (CSRF) attacks. Of course, if the device or server-side components have been compromised in some way, then this entire discussion is moot.

SPAs

If HTML5 local storage is used, then the token is passed in the authorization header. Unlike with cookies, information stored in local storage is not automatically transmitted to the server (this does place an additional burden on the developer, but can be mitigated with supporting libraries).

Attack vectors include:

  • XSS: If an attacker successfully gets valid JavaScript inserted into input to your API, then later, when that input is retrieved, it could be interpreted and executed by the browser’s JavaScript engine, which in turn could access the bearer token in local storage (because the script was technically loaded from your site and API endpoint). The standard defenses against XSS attacks should be sufficient here: escape all characters that are used to delimit JavaScript in HTML (&, <, >, “, ’, /) with their HTML entity encodings (&amp;, &lt;, &gt;, &quot;, &#x27;, &#x2F;) on input, and follow the other steps recommended by OWASP. Not all of these are relevant for API endpoints; for example, APIs typically don’t generate HTML. Most modern web frameworks have built-in support for these recommendations.
  • CSRF: Conventional wisdom suggests that this is not an issue because the attacking code would not have access to the JWT in the HTML5 local storage for the API endpoint. I’m not going to debate this or dissect the notion here, but let’s assume that all interaction between the SPA and its backend API use standard anti-CSRF patterns as described by OWASP. They describe two steps that should be taken: verify the same origin with standard headers (or a known origin), and require some type of randomly generated value be presented with each request that wouldn’t be known to the CSRF instigator. In a stateless REST API, both of these can be challenging. But many of the major frameworks provide support for implementing this functionality.
  • JavaScript from site A accessing data stored in local storage for site B: Not possible based on the security model implemented in browsers. This includes access by code on sub-domains.
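The entity-encoding defense above can be sketched with Python's standard library. `html.escape` covers five of the six characters; the `/` character, which OWASP also recommends escaping, is handled with an extra replacement. This is a minimal illustration, not a complete output-encoding layer.

```python
import html

def escape_untrusted(value: str) -> str:
    # html.escape(quote=True) encodes & < > " ' ;
    # OWASP additionally recommends escaping "/" as &#x2F;
    return html.escape(value, quote=True).replace("/", "&#x2F;")
```

In practice you would rely on your web framework's templating engine, which applies this encoding automatically.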

If HTTP cookies are used for storage and transportation, then the attack vectors of concern are:

  • XSS: The token value stored in a cookie cannot be accessed by JavaScript injected as bad input and later interpreted in the browser. Nevertheless, use the same mitigation patterns described above.
  • CSRF: This is possible. Any request that is made (regardless of what’s triggering it) to the API from a browser session will include all cookies that are defined for the API endpoint. In this case, the mitigation strategies described above are absolutely imperative.
  • JavaScript code from site A accessing cookies from site B: Not possible under the browser security model. Note, however, that cross-origin requests made with credentials (e.g., CORS requests with cookies enabled) can cause the browser to attach site B’s cookies to the request, even though the script still can’t read them—which is precisely why CSRF defenses matter.
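One common anti-CSRF pattern referenced above is the double-submit token: a random value is set as a cookie and must also be echoed back in a custom header or hidden field, which a cross-site attacker cannot do because it can't read the cookie. A minimal server-side sketch (function names are illustrative, not from any specific framework):

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    # Random per-session value, delivered both as a cookie and to the page
    # (hidden form field or custom request header).
    return secrets.token_urlsafe(32)

def verify_csrf(cookie_value: str, submitted_value: str) -> bool:
    # Constant-time comparison of the two copies of the token.
    return hmac.compare_digest(cookie_value, submitted_value)
```

Most major web frameworks ship an equivalent of this out of the box, as the article notes.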

Mobile apps

A native mobile application doesn’t run in a browser. It likely uses a library that either acts as a user agent to interact with the identity provider or launches the system browser to handle the IdP login workflow. The library that handles IdP interactions and login shouldn’t be a general-purpose JavaScript engine (although the system browser it may launch is one). The login workflow the browser interacts with should be a self-contained entity that doesn’t rely on external JavaScript sources.

Likewise, a native mobile app isn’t going to have local session storage. The use of cookies is limited to the HTTP client library or framework used to make API calls. The best option for securely storing a bearer token on each mobile platform is beyond the scope of this article.

Let’s assume that the API the mobile application is interacting with is the same one that is utilized by a SPA application and other API consumer actors. Attack vectors include:

  • XSS: If the native mobile app is using a component that has a JavaScript interpreter, then this is possible. In this case, the mitigation strategies described in the previous section should be utilized.
  • CSRF: This is only possible if a library is being used that includes cookies automatically. So, it depends. As always, though, the mitigation strategies described in the previous section should be used.

Traditional web applications

For our purposes, the difference between the SPA above and the traditional web application hinges on whether the server returns full HTML pages or JSON objects (or maybe XML). For the most part, the information provided in the SPA section applies here with the following exceptions.

If HTTP cookies are used for storage and transportation, then the attack vectors of concern are:

  • XSS: Same patterns described above should be used.
  • CSRF: Synchronizer (CSRF) token patterns can be used with the stateful security model.
  • JavaScript code from site A accessing cookies from site B: Comments in the previous section apply here.

So which approach should be used?

As I’ve mentioned in other posts, I always fall back on a standards-based approach to security. This implies that, per RFC 6750, the bearer token should be placed in the HTTP Authorization request header for each API call, and the token stored in HTML5 local or session storage (for browser-based applications). Many IdPs provide libraries that abstract these details away.

Is this the only way of accomplishing the desired effect? Obviously, no. But in my attempts to implement standards-based security solutions, it‘s the approach I recommend. Appropriate defense strategies for XSS and CSRF must be used, and, thankfully, can be largely accomplished with functionality in popular frameworks.

2016: The Year in Review

2016 was a great year for Apigee, and, more importantly, our customers. We introduced more than 90 new features to Apigee Edge and issued over 150 bug fixes via 35 public cloud and three private cloud releases. We open-sourced our mobile application performance monitoring solution. We added new solution accelerators. We processed over one billion API calls per day, and maintained 99.99% uptime. We even received some high praise from Gartner and Forrester.

Here’s a quick look at many of the new features our customers employed to accelerate their digital businesses.

Security

We introduced several features to help customers tighten down the security screws on their API programs.

Two-factor authentication

At the API administration level, Edge now provides two-factor authentication in both the UI and the management API. Additionally, you can lock down management API calls with OAuth 2.0 (using acurl), making it easy to invoke management APIs without repeatedly requiring credentials.

Encrypted KVMs

We've also added important security features at the messaging and API proxy development layer. Encrypted key value maps (KVMs) let you securely persist sensitive data, retrieve data at runtime with variables, and keep sensitive values from appearing in trace and debug sessions. See this October 2016 blog post for details.

Adaptive bot detection and protection

Apigee Sense provides protection from a number of different bot patterns. The new Sense Protection feature completes the “CAVA” (collect, analyze, visualize, and act) lifecycle. It enables an Apigee Sense customer to act on detected abuse and selectively stop abusive API traffic.

Productivity improvements
  • Logs sent to third-party message logging services including Splunk, Loggly, or Sumo (using the message logging policy) can now be securely sent over TLS/SSL.

  • API credentials, developers, and developer apps can now be managed through the management UI. Users can generate multiple key/secret credentials for an app, control key expiration, and assign different keys to different products—all in a single screen. This simplifies API key rotation, where a newer API key replaces an older API key set to expire.

  • Users can also revoke credentials using a cascading model. For example, you can deactivate a developer, revoke a developer app, or revoke individual API credentials.

  • When controlling access to specific API resources through API products, users now have more flexibility when defining valid resource paths with wildcards.
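Wildcard resource paths of the kind mentioned above are commonly defined with single-segment and multi-segment wildcards. The sketch below illustrates that idea (assuming `*` matches exactly one path segment and `**` matches any remainder); it is a simplified model, not the product's actual matcher.

```python
import re

def path_matches(pattern: str, path: str) -> bool:
    # '*' matches exactly one path segment; '**' matches any remainder.
    regex = re.escape(pattern).replace(r"\*\*", ".*").replace(r"\*", "[^/]+")
    return re.fullmatch(regex, path) is not None
```

For example, `/v1/*/orders` would admit `/v1/123/orders` but not a deeper path, while `/v1/**` admits everything under `/v1/`.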

Governance

We added some powerful capabilities to cater to our customers’ governance and compliance requirements. To enable standardized governance of API proxy functionality, shared flows enable executing a group of policies (OAuth, spike arrest, and message logging, for example) consistently across all proxies. Flow hooks let you reference those operational behaviors before or after the proxy execution in the request and response. (See this October 2016 blog post for details).

Reliability and scale

We added several continuous reliability and performance improvements under the hood. We switched to the Nginx router for better API traffic performance (for both public and private cloud deployments).

For public cloud deployments, in 2016 we began releasing product updates using "blue/green" deployments, where a small amount of traffic is initially routed to the updated product so that we can monitor for potential issues (read more in this September 2016 blog post).

We also added support for automatic scaling in Apigee Edge Cloud. This helps maintain availability and enables customers to scale capacity up or down automatically based on policies. This has helped us deliver a more predictable API platform.

Developer productivity

In 2016, we spent a lot of time working to make API lifecycle management more intuitive and powerful—from design to development to publishing to analytics.

Integrated OpenAPI editor and spec repository

"New Edge," released in October, offers a new model for API proxy development and documentation. You can use the integrated editor to create an OpenAPI specification that defines your API, without leaving the Edge UI. You can generate an API proxy directly from the spec, create an API product, generate API documentation, and immediately publish it to the New Edge developer portal. The new spec repository enables collaboration on OpenAPI specs and fosters team-based, iterative API development. Read more in this November 2016 blog post.

New API proxy editor

The API proxy editor in the management UI became easier to use by including full XML views of API proxy configuration, search, more options for adding policies, endpoints, and scripts, as well as an analytics dashboard that shows proxy performance. Regarding proxies that interact with SOAP services, the proxy builder evolved to provide even stronger support for SOAP passthrough messages by hosting the service WSDL in Edge, as well as more reliably generating policies that handle RESTful calls to backend SOAP services.

Proxy chaining and policy enhancements

Another cool enhancement we delivered is called proxy chaining. It lets you call one proxy from another proxy directly without having to call it via its HTTP/S URL. The platform does it for you. This saves a lot of time, particularly when the proxy being referred to changes.

Other notable proxy development enhancements include refactored policy error codes, deploy-time validation of proxy bundles to catch issues before runtime, new JavaScript crypto functions, providing more control over converting XML to JSON arrays, and improved rendering of JSON payloads generated by policies such as Assign Message and Raise Fault.

On-demand, lightweight developer portal

With New Edge, there's virtually no lag time between creating your API proxies and giving developers API documentation. A new lightweight portal framework lets you instantly provision multiple developer portals, including API documentation that's automatically generated from your OpenAPI specs. You can use HTML/Markdown to create pages and add CSS styles on the fly for complete control over styling and layout. And we provide a new type of samples framework that lets users browse different types of Edge samples, deploy them, and learn more about them without leaving the UI.

Self service

Several customers wanted a more holistic view into their adoption and usage of the platform, so we delivered a broad set of information via Apigee 360. It offers a view of account information accessible through the Edge single sign-on, including monthly API traffic volume, statistics for apps and developers, availability percentages, Edge features used and purchased, support cases and statistics, and server information.

We also rolled out a new mechanism, Apigee Advisory, to display messages in the Edge management UI. These advisories inform customers of availability and security issues that could impact their APIs.

Our web site, apigee.com, also underwent a significant redesign that provided clearer, more comprehensive information about Apigee products and solutions, as well as improved discoverability of our thought leadership content.

Business impact and reporting

A modern and scalable analytics platform was launched in 2016 built on big data technologies. This new architecture makes it easy to handle high traffic throughput, enable a variety of data queries (by time, tenants, applications, developers, clients, plans, and products), and provides flexibility to build new data-driven applications.

There was also a fundamental change introduced in the means of delivering the daily email digest. Rather than pushing out an email with all the report content, users now receive short summaries along with links back to the full report.

Finally, for customers who have APIs that record custom attributes using the Statistics Collector policy, they can request the creation of custom aggregation tables that can improve the query performance for those custom metrics if they are used on a regular basis to generate analytics reports.

For customers using monetization, several enhancements provide more control over charging models and notifications when users get close to (or exceed) their plan limits.

These enhancements include:

  • A new adjustable rate notification plan that enables a user to set different plan limits per app developer
  • Support for webhooks to notify developers and companies when they near or exceed their plan totals, as well as support for several different conditions under which notifications are triggered, including a new criterion based on combined transaction totals
  • A tool that migrates developers into the monetization framework (for users with an existing non-monetized developer ecosystem who later decide to use monetization)
  • A new API that lets users suspend and unsuspend developers (to support stronger control of developer participation)

Edge Private Cloud

The on-premises version of Edge got several improvements, including a simpler, RPM-based code-with-config installation and upgrade framework that enables easier product installation and upgrades with fewer errors.

A new monitoring tool lets on-premises customers understand the health of various components (routers, message processors, ZooKeeper, Cassandra) as well as HTTP error codes for various orgs and environments in their deployments. The tool lets customers take a snapshot of their dashboard data and share it with Apigee to help resolve support incidents.

Partner ecosystem

We continue to demonstrate our commitment to multi-cloud and cloud native deployments. Integration with Pivotal Cloud Foundry was a big focus area for Apigee in 2016.

The first new enhancement was Pivotal Cloud Foundry integration with Apigee Edge (public or private cloud) using the route services feature, which enables developers to use Apigee Edge as a Pivotal Cloud Foundry Service. The Apigee Edge service broker (see more details in this May 2016 blog post) approach brings simplicity and consistency to the range of services that customers typically use when developing apps.

More recently we announced the general availability of Apigee Edge Microgateway on Pivotal Cloud Foundry. This complements the previous release by providing a hybrid deployment option which is suitable for low-latency use cases.

We also announced Edge integrations with Amazon AWS (this enables users to proxy AWS apps and services such as AWS Lambda), Microsoft Azure (this enables users to deploy the Edge Private Cloud) and Google Cloud Platform (this enables GCP customers to use Edge Cloud for their API management needs).

Community and learning

The Apigee Community continues to be very active. We’ve received great reviews from developers about our 4mv4d (four-minute videos for developers), which demonstrate how to use Edge policies, implement error handling, and much more.

Our product documentation received several additions and enhancements, notably a set of documentation for the New Edge release. The Private Cloud documentation also emerged from behind the firewall and joined our publicly accessible cloud docs.

Our docs team added a deeper set of API development samples, redesigned tutorials for speed and ease of use, upgraded navigation and search for easier content discovery, and translated key sections of the cloud docs into Japanese. You can see more detailed lists of doc enhancements throughout the year in the Apigee Community.

Apigee Edge got to where it is today thanks in large part to our community and customers. As many of you know, we became part of the Google family. We look forward to an exciting 2017 and expect to do more amazing things for our customers as part of the Google Cloud Platform team.

Join us at our Adapt or Die World Tour stops in Sydney on Feb. 8 and London on Feb. 23, and in San Francisco at Google Cloud Next '17, March 8-10.

Apigee and Okta Partner for API Security

Apigee is excited to announce a partnership with Okta, a leader in the Identity-as-a-Service space.

A critical reason customers use Apigee is to secure their APIs. And one of the most critical aspects of security is the authentication and authorization of the APIs.  

Apigee’s and Okta’s offerings complement one another to solve the AuthN/AuthZ problem for customers. Almost every customer using Apigee needs to integrate Apigee Edge with a central identity/SSO store. Enterprises traditionally have used legacy, on-premises identity providers.

But customers increasingly are choosing Apigee and Okta together as two critical platforms for digital transformation and their cloud journey.

There are two common use cases for this integration:

  • Customers want to use Apigee’s API management capabilities and use Okta for identity management
  • Customers use Okta as their OAuth/OpenID connect provider across the organization

As part of the partnership, we’ve created an out-of-the-box solution for the first scenario (learn about it here). The solution is intended as a reference implementation and is available as an open source solution.

Scenario 1: Apigee provides OAuth

In the first use case, Apigee acts as the OAuth provider while Okta provides the identity store and handles authentication of the identities.

Let’s take a quick look at what we’ve built here:

In the diagram above, Apigee provides the client_id and client_secret. The pre-work steps below aren't part of the runtime API security flow; they're performed beforehand:

  • The Apigee admin registers Apigee as a valid client with Okta; Apigee receives the public key from Okta and stores it in its KVM or vault
  • A developer self registers in the Apigee developer portal
  • The developer registers an app in the Apigee dev portal; Apigee issues a client ID and client secret to the app

In the runtime flow:

  • The client makes a call to Apigee with the client ID and secret, which Apigee validates
  • Apigee redirects the client app to Okta
  • Okta collects the user credentials and authenticates and authorizes the user, with support for multi-factor authentication if needed
  • If the user is validated, then Okta returns an id_token back to Apigee
  • Apigee issues an access token against the id_token and stores a map of access_token to id_token in the key storage
  • When a secured API is invoked, it is invoked with the access token, which Apigee validates; depending on requirements, it either sends the whole id_token or part of it to the backend
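
The access_token-to-id_token mapping at the heart of this flow can be sketched in a few lines of Python. This is a minimal, in-memory stand-in for Apigee's key storage, not the actual Edge implementation; the function names and store are hypothetical:

```python
import secrets

# Hypothetical in-memory stand-in for Apigee's key storage (KVM/vault).
token_store = {}

def issue_access_token(id_token: str) -> str:
    """Issue an opaque access token and map it to the Okta id_token."""
    access_token = secrets.token_urlsafe(32)
    token_store[access_token] = id_token
    return access_token

def validate_and_resolve(access_token: str) -> str:
    """Validate the access token and return the stored id_token
    (or a subset of its claims) to forward to the backend."""
    id_token = token_store.get(access_token)
    if id_token is None:
        raise PermissionError("invalid or expired access token")
    return id_token
```

The key design point this illustrates: the client only ever sees the opaque access token, while the id_token (which carries user identity) stays inside the gateway until it's selectively forwarded to the backend.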

Scenario 2: Okta acts as the OAuth provider

This is another common integration pattern, but we don't currently provide an out-of-the-box solution for it. When Okta is the enterprise-wide identity platform (for web, mobile, or APIs), the customer is likely to use Okta as the OAuth provider as well.

To follow this integration pattern, it’s important to keep a few things in mind.

For an API program, a developer portal is crucial. This is where developers register themselves as well as their apps. Once the apps are registered, developers receive their API key or client_id/client_secret from the dev portal.

Okta can be used as an OAuth provider, but Apigee must have knowledge of the client_id and client_secret because:

  • Apigee uses them for throttling
  • They are crucial for analytics and API debugging
  • They are used for app lifecycle management
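
To illustrate why the gateway still needs the client_id in this scenario, here is a minimal Python sketch that pulls the app's client id out of a JWT payload (Okta access tokens carry it in the `cid` claim). Note this deliberately skips signature verification, which a real gateway must still perform against Okta's JWKS:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature.
    This only shows where the claims live; production code must
    verify the signature against the provider's keys."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def client_id_of(token: str) -> str:
    """Okta access tokens carry the app's client_id in the 'cid' claim,
    which the gateway can then use for throttling and analytics."""
    return jwt_claims(token)["cid"]
```

Once the gateway can recover the client_id per request, throttling, analytics, and app lifecycle decisions can key off it exactly as they would in Scenario 1.
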

We hope you've found this walk-through useful, and look forward to working with Okta to develop new integrations. To learn more about Okta’s OAuth and OpenID Connect support, visit https://developer.okta.com/docs/api/resources/oauth2.html.

 

The Banks Have Been Hacked. What Now?

APIDays in London last week encompassed two full days of banking and APIs. The event was full of excellent talks and useful content, but I keep thinking back to a talk by Stevie Graham about his company Teller.io.

Stevie explained how he’s hacked all the major banks’ mobile apps, reverse-engineering them to get at the underlying APIs that power them, and then exposing those APIs to developers. In a highly charged, “thumb your nose at the man” talk, Stevie also explained how he was deploying "anti-anti-hacking techniques" to thwart banks’ attempts to stop him from turning their mobile apps into open APIs.

Reflecting on the talk, a few things struck me.

Realize it or not, banks actually do have external APIs

Every mobile app is powered by APIs. Those APIs were not built to be accessed by third parties, but they exist! In fact, everything you can do in a mobile app—check balances, transfer money, make a payment—is an API call. The “balance check” button initiates an API call, which may itself initiate several others, to fetch your bank balance. The fact that these APIs exist and power external experiences makes them effectively external, and thus vulnerable.
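
As an illustration (the endpoint and token below are entirely hypothetical), the request behind such a button looks like any other authenticated HTTPS API call:

```python
from urllib import request

# Hypothetical request a banking app's "balance check" button might send.
# The URL and bearer token are made up for illustration only.
req = request.Request(
    "https://api.examplebank.com/v1/accounts/12345/balance",
    headers={"Authorization": "Bearer <mobile-app-access-token>"},
)
# request.urlopen(req) would fetch the balance as JSON; the point is that
# the "private" mobile backend is reachable as an ordinary API endpoint.
```

Anyone who can observe and replay requests like this one can, in principle, script against the bank's backend just as the official app does.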

Banks need to invest in API security

Stevie has vividly illustrated this (no kidding, you say). Regardless of whether you plan to open your APIs, having first-class tools to secure, monitor, and manage them is a must. As always in security, defense in depth (layers of security to ensure API clients are doing what they are supposed to be doing) is the right approach.

This includes sophisticated behavior analysis like what we are doing with Apigee Sense, to detect anomalies in traffic patterns. Just because an API client might look like your home-grown iOS app doesn’t mean that it is!

Developers want access to banking data APIs

PSD2 regulation aside, with over 3,000 developers on the waiting list for Teller.io, it’s clear that demand for API access to banking data already exists. Whether and how banks choose to capitalize on this interest remains to be seen, but where there is unmet demand, there’s a threat of disruption, as upstart challenger banks like Starling and Monzo illustrate. By building their banks API- and mobile-first, they are clearly moving to fill the void left open by the established brands.

With mobile ubiquity and APIs being used as competitive weapons in digital-natives’ strategies, building first-class competency in API security and operations has never been more important.  

The unmet demand for programmable access to banking functions and data suggests, at best, a lack of execution among many banks. At worst, it reveals a fundamental lack of understanding of how today’s API economy works. 

Image: Flickr Creative Commons/Ofer Deshe