Demo: Apigee Edge OAuth2 Debugging

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity–especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge and WebSphere DataPower.

As I’ve described in previous posts, OAuth2 and OpenID Connect (OIDC) have emerged as the de facto standards for securing APIs (for authentication and authorization). Apigee Edge provides an out-of-the-box OAuth2 implementation.

It’s a common pattern to “wrap” a third-party identity provider (IdP) with the Apigee OAuth2 functionality. In other words, Apigee Edge can act as an OAuth2 provider while an external IdP is employed to provide end-user authentication services. In this post, we’ll explore assumptions and requirements for a real-world application.

We are going to introduce two Apigee API proxies: one that implements our OAuth2 provider as a wrapper around a third-party IdP using the OIDC spec, and one that protects an API with OAuth2 security. Then, we’ll use my OAuth2 + OIDC Debugger to demonstrate the authorization code grant and refresh token call.

To keep the size of this post at a reasonable length, I’ve put most of the technical details into Github repositories and supporting blog posts:

This last link is the entry point for setting up a working example. While not specific to Apigee, I added refresh token support to the OAuth2 + OIDC Debugger while putting together this blog post.

If you want to move on to those technical details, go ahead and look at the steps outlined in the fourth link.

Follow the specs

Apigee Edge provides the building blocks of an OAuth2 authorization server that are meant to be assembled by a skilled practitioner in whatever configuration is needed for the given use case.

While flexible, this gives one a lot of rope with which to hang oneself; as always, any real-world application of this technology should adhere to the relevant specifications and undergo proper penetration testing.

I’ve already built some non-spec-defined features into the debugger to deal with implementation details of other OAuth2 (and OIDC) implementations. All of those implementations closely followed the specs in these respects:

  • User-agent (browser) interacts with the authorization endpoint as defined in the specification. The call will include all required parameters (as well as some optional parameters, and maybe some proprietary parameters).
  • Application (client) interacts with the token endpoint as defined in the specification. The call will include all required parameters (and some optional parameters, and, again, perhaps some proprietary parameters).
  • The login sequence is initiated by the user-agent by making the call to the authorization endpoint. This endpoint either does a redirect to a separate authentication workflow endpoint or returns a login form. The exact details of how this works are beyond the scope of the specification.
  • There is some type of trust relationship established between the authorization endpoint and authentication workflow endpoint—typically implemented with a security-session tracking cookie.
  • For the OAuth2 authorization code grant, OAuth2 implicit grant, and all OIDC authentication flows, the IdP serves the authentication workflow.
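
The first two bullets can be made concrete with a small sketch. The endpoint URL and client details below are placeholders, not values from the example proxies; the point is simply which parameters a spec-compliant authorization request carries:

```python
from urllib.parse import urlencode

def build_authorization_url(authorize_endpoint, client_id, redirect_uri,
                            scope="openid profile", state=None):
    """Assemble a spec-compliant OAuth2/OIDC authorization request.

    response_type=code selects the authorization code grant; state is
    an opaque value the client uses to correlate the redirect callback.
    """
    params = {
        "response_type": "code",   # authorization code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,            # including "openid" makes this an OIDC request
    }
    if state is not None:
        params["state"] = state
    return authorize_endpoint + "?" + urlencode(params)

url = build_authorization_url(
    "https://example.org/oauth2/authorize",     # hypothetical endpoint
    client_id="my-client",
    redirect_uri="https://app.example.org/callback",
    state="abc123",
)
```

Proprietary parameters, where an implementation needs them, would simply be added alongside the required ones.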

The Apigee OAuth2 examples that involve end-user authentication generally involve Apigee Edge acting as an OAuth2 provider and a third-party IdP handling the end-user authentication.

There are several ways of integrating these two concepts, and we can’t cover them all here. Apigee’s official example of the authorization code grant is this one, discussed in further detail here. In that example, the (server-side) client application makes an initial request directly to the third-party identity provider (simulated by an Apigee API proxy). The flow is similar to the OAuth2 protocol, but isn’t spec-compliant.

An OIDC example

So I looked around for another OAuth2 authorization code grant example from Apigee that looked a bit more like what I was used to seeing. I found this example that has the same high-level pattern (only OIDC authorization code flow) with Apigee as the OAuth2 provider and a third-party IdP (PingFederate) as the IdP.

This OIDC example wraps the third-party IdP response in an Apigee-issued OAuth2 access token that is returned to the calling client application. It accomplishes this by impersonating the original client application at the API proxy (Apigee Edge layer) during its interaction with the third-party IdP.

This implementation has the initial spec-defined OAuth2 authorization code grant call to the authorization endpoint, but the registered endpoint in the third-party IdP is the Apigee API proxy. The end result of this is that the client application doesn’t actually make the call to the token endpoint. One could imagine how someone would view this as beneficial and easier from the perspective of the client app developer.

Actually, what this example is doing is similar to one of the OIDC hybrid flow variants (no interaction with the token endpoint from the client’s perspective), but that doesn’t match up with the response_type in use. This, too, falls short of the OIDC and OAuth2 spec compliance we need in the example we are going to use.

Key design principles

So, I created my own implementation that used the following design principles:

  • A third-party IdP is responsible for authenticating the end user and applications. In our example, Red Hat SSO v7.1 is acting as the IdP responsible for authenticating end users and the applications.
  • All clients send OAuth2 requests to an API proxy that wraps interaction with the third-party IdP.
  • The third-party IdP has no concept of the API gateway (or proxy) that is acting as an intermediary. The most secure implementation of this pattern would be to ensure the third-party IdP has a clear understanding of the API gateway and the apps. However, modeling the delegation rules between these actors is far more complex than pretending that the API gateway doesn’t exist in the IdP. So, for now, we’re not going to worry about that detail.
  • The authorization endpoint on Apigee returns a redirect to the third-party IdP authorization endpoint (using the same query parameters). This doesn’t explicitly hide the third-party IdP from the client application, which would likely be preferable in most situations—let’s call this the author taking a shortcut that could be easily resolved if properly motivated.
  • The authorization codes, refresh tokens, and access tokens issued to the client applications are generated by Apigee Edge, but issued after validating user and application credentials against the third-party IdP.
  • The OIDC protocol is used to integrate with the Red Hat SSO for the authorization code grant. This gives us access to the UserInfo endpoint to retrieve information about the user.
  • The OAuth2 protocol is used to integrate with Red Hat SSO for the client credentials grant.
  • The cached refresh token on Apigee is only held for eight hours. After this, the user would have to start a new session by logging in again.
  • There are numerous other timeout considerations across access tokens, refresh tokens, and sessions on the IdP that should be considered for real-world usage. That’s beyond the scope of this post (and the given example).
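
The refresh-token caching principle above can be sketched as a simple TTL cache. This is an illustration only, not the Apigee cache policy itself; the eight-hour TTL matches the example’s configuration:

```python
import time

class RefreshTokenCache:
    """Minimal sketch of the proxy-side cache for IdP-issued refresh
    tokens, keyed by the Apigee-issued refresh token. Entries expire
    after a fixed TTL (eight hours in the example), after which the
    user must log in again."""

    def __init__(self, ttl_seconds=8 * 3600):
        self.ttl = ttl_seconds
        self._entries = {}  # apigee_refresh_token -> (idp_refresh_token, expiry)

    def put(self, apigee_token, idp_token, now=None):
        now = time.time() if now is None else now
        self._entries[apigee_token] = (idp_token, now + self.ttl)

    def get(self, apigee_token, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(apigee_token)
        if entry is None or now >= entry[1]:
            self._entries.pop(apigee_token, None)  # expired: force a new login
            return None
        return entry[0]

cache = RefreshTokenCache()
cache.put("apigee-rt-1", "idp-rt-1", now=0)
```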

We also assume that the authorization code grant and client credentials grant have been implemented in Apigee for the purposes of this example. The other OAuth2 grants can be implemented easily enough using the building blocks from these two.

Given all of that, we arrive at this API proxy (available on GitHub) that wraps a third-party, OIDC-compliant IdP. There is a pre-built API proxy bundle available here if you want to get started very quickly.

Likewise, I created this API proxy that will protect a backend API using Apigee’s out-of-the-box OAuth2 access token validation.

The details of how to build, deploy, and test these proxies can be found here.

The basic interaction between the actors is described by this diagram:

The steps are:

  1. Load OAuth2 + OIDC debugger UI.
  2. Send request to the Apigee OAuth2 authorization endpoint (advertised by the OAuth2 wrapper API proxy) to kick off the authorization code grant.
  3. User is redirected to third-party IdP OAuth2 authorization endpoint.
  4. User authenticates against IdP (involves interaction with IdP login workflow).
  5. Authorization code is returned via redirect to redirect URI; results in authorization code being available to debugger UI.
  6. The debugger UI makes a call to its backend with the token endpoint parameters that must be given to the Apigee OAuth2 token endpoint.
  7. The debugger backend sends a request to the Apigee OAuth2 token endpoint (advertised by the OAuth2 wrapper API proxy).
  8. The API proxy makes a call to the IdP OAuth2 token endpoint to validate the authorization code and obtain an IdP-issued access token and refresh token.
  9. The API proxy makes a call to the IdP OIDC UserInfo endpoint to obtain user profile information for the authenticated user.
  10. The API proxy caches the IdP-issued refresh token for later lookup and generates an Apigee-issued access token and refresh token. These tokens are returned to the debugger backend and then to the debugger UI.
  11. Using the Swagger UI (or something similar) and the access token that was just obtained, an API call can be made to an API proxy that is protecting the backend API with OAuth2.
  12. The OAuth2-protected API proxy extracts and validates the access token (using out-of-the-box functionality).
  13. The API proxy removes the access token from the request and forwards the request to the backend API. In the real world, there would be some type of trust relationship established between the API gateway and the backend API (mutually authenticated SSL, shared key, username and password, or something similar).
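
Steps 8 through 10 can be sketched as follows. FakeIdP and the token-issuing lambda are invented stand-ins for the third-party IdP and Apigee’s token generation policies; this is the shape of the logic, not the proxy implementation itself:

```python
def wrap_token_request(auth_code, idp, cache, issue_apigee_tokens):
    """Sketch of steps 8-10: redeem the authorization code at the IdP,
    fetch the user profile, cache the IdP refresh token, and return
    Apigee-issued tokens to the caller."""
    idp_tokens = idp.exchange_code(auth_code)           # step 8
    profile = idp.userinfo(idp_tokens["access_token"])  # step 9
    apigee = issue_apigee_tokens(profile)               # step 10
    cache[apigee["refresh_token"]] = idp_tokens["refresh_token"]
    return apigee

class FakeIdP:
    """Stand-in for the third-party IdP's token and UserInfo endpoints."""
    def exchange_code(self, code):
        return {"access_token": "idp-at", "refresh_token": "idp-rt"}
    def userinfo(self, access_token):
        return {"sub": "alice"}

cache = {}
result = wrap_token_request(
    "good-code", FakeIdP(), cache,
    lambda profile: {"access_token": "apigee-at",
                     "refresh_token": "apigee-rt",
                     "sub": profile["sub"]},
)
```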

The detailed interaction between these actors is described here.

A test of the authorization code grant is shown at the end of the Apigee OAuth2 configuration post.

A description of how to use the refresh token with the debugger is available in this post.

There are many ways Apigee’s OAuth2 implementation can be used. This is just an example, but I encourage you to follow the design principles laid out here. Ordinarily, this level of detail can be abstracted away from application developers by authentication libraries, but an Apigee developer who is implementing a similar pattern will need to know these details.

Identity Propagation in an API Gateway Architecture

The power of end-to-end user security context with APIs


As enterprises continue to expand their usage of APIs, the need to keep those APIs secure increases as well. One way to bolster security and improve auditing and authentication is the transmission of an authenticated user’s security context from the front end of a request pipeline, beyond the API gateway, and all the way to the back-end implementation of an API or service.

End-to-end transmission of an authenticated user’s security context benefits API-based systems by enhancing overall security, eliminating the use of generic (privileged) accounts, providing a secure audit mechanism (of traffic traversing the system), and supporting advanced authentication use cases.

This pattern can in some instances cause some complexity, but, as you’ll read in this post, it provides some valuable benefits.

End-to-end identity propagation

In most architectures, the propagation of the end-user security context tends to stop at the API gateway layer, and a more generic security mechanism such as mutually authenticated SSL or a service account (with basic authentication) steps in to secure the API provider layer.

However, there are several benefits to propagating the authenticated user security context all the way to the API provider:

  • Support for processing Attribute Based Access Control (ABAC) authorization decisions based upon the end-user identity at the API provider.
  • Avoidance of generic service accounts with super-user privileges to connect to the backend.
  • Provision of a secure audit mechanism for the user who initiated requests that hit each layer of the infrastructure.
  • Maintenance of certain compliance requirements.
  • Value in SOA and microservice architecture implementations.
  • Support for authentication protocol translation.

Token exchange functionality

If there is an intermediary between the front-end (such as a native mobile app or single-page web app) and the API implementation, such as an API gateway, this pattern becomes more complex, but is still possible.

This is where token exchange functionality comes in, such as that defined by the OAuth2 Token Exchange spec (grant_type=urn:ietf:params:oauth:grant-type:token-exchange with a subject_token parameter; some earlier drafts and implementations used the jwt-bearer grant type with requested_token_use=on_behalf_of before "on_behalf_of" was changed to "subject_token"), the WS-Trust spec (active clients with On-Behalf-Of and ActAs), and the Windows Kerberos extensions (Protocol Transition and Constrained Delegation).

It’s important to note that this activity requires the intermediary to play an active role in making this specific request; the identity provider must grant explicit rights to the intermediary to be allowed to make this request.

This token exchange functionality essentially enables an actor to exchange a token (such as SAML2 or JWT) with one audience (presumably its own) for a token that describes the same user, but with a different audience from an identity provider. This is done in a secure manner, meaning there are significant limits placed upon which audience the new token may reference. 

Two common forms of this token exchange are delegation, where the protocol provides a concept of the intermediary that is making the call (such as the WS-Trust ActAs), and impersonation, where the protocol provides no concept of the intermediary (such as WS-Trust On-Behalf-Of). The OAuth2 Token Exchange spec defines similar concepts for OAuth2 and OpenID Connect (OIDC); however, it uses the terms to mean the opposite of how they are used here.
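
A token exchange request per the final spec (RFC 8693) might be formed as below. The parameter values are illustrative, and the delegation/impersonation labels in the comments follow the spec’s own terminology:

```python
def token_exchange_request(subject_token, actor_token=None,
                           audience="api-provider"):
    """Form parameters for an OAuth2 Token Exchange grant (RFC 8693).

    With only subject_token, the new token carries no record of the
    intermediary (impersonation, in the spec's terms); adding
    actor_token records the party acting on the user's behalf
    (delegation, in the spec's terms).
    """
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "audience": audience,   # where the new token will be accepted
    }
    if actor_token is not None:
        params["actor_token"] = actor_token
        params["actor_token_type"] = "urn:ietf:params:oauth:token-type:jwt"
    return params

delegated = token_exchange_request("user-jwt", actor_token="gateway-jwt")
```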

The API gateway scenario

This pattern is an evolution of a similar pattern used in the SOA world. In 2011, I did a presentation about using this pattern for SOA implementations with SOAP actors, SOAP web services, SAML2, WS-Trust, and WS-Security. The emergence of APIs and API gateways merged the mechanisms front ends (such as native mobile applications and SPA applications) use to communicate with the backend with those backend intermediaries (ESBs) used to communicate with the service implementation layer (SOAP web services).

Now, the entire request/response pipeline, from the front-end consumer to the backend implementation, can use RESTful APIs.

With an API gateway intermediary in place, the whole model will look something like this:

Let's make some assumptions and set some requirements for this scenario:

  • The access token is a digitally signed bearer token (JWT).
  • All system actors are part of the same security realm (in other words, they use the same identity provider). This identity provider can authenticate all end users of the front-end application.
  • Every system actor (API gateway, API provider, and, possibly, the datastore) must validate the identity token (authenticate the request) attached to an incoming request.
  • This token validation includes audience restriction enforcement, which further ensures the token is only used where it is supposed to be (the granularity of the audience depends on several factors, including desired reusability of the token, resource authorization policy, and organization security standards).
  • All communication occurs over TLS v1.2 or better.
  • All communication occurs with RESTful APIs using JSON as the payloads (where appropriate).
  • All security details will be implemented with spec-based security (more on that below).
  • For added protection, use mutually authenticated SSL between actors where the server component only accepts connections from a single actor (between the API gateway and API provider, for example). This assumes that a single client certificate describes a single system or possibly a single node within a system. There shouldn’t be sharing of certificates across systems.
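
The per-actor token validation assumed above can be sketched as a claims check. Signature verification, which needs the IdP’s public key and a JOSE/crypto library, is deliberately out of scope in this sketch; the claim names follow the JWT spec:

```python
import time

def validate_claims(claims, expected_issuer, expected_audience, now=None):
    """Check the issuer, audience restriction, and expiry of a decoded
    JWT claims set, as each system actor must do for incoming requests."""
    now = time.time() if now is None else now
    if claims.get("iss") != expected_issuer:
        return False
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        return False  # token was minted for a different actor
    return now < claims.get("exp", 0)

claims = {"iss": "https://idp.example.org", "aud": ["api-gateway"],
          "exp": 2_000_000_000, "sub": "alice"}
```

Note how the audience check is what stops a token issued for the API gateway from being replayed against the API provider.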

OAuth2 and OpenID Connect

So, what specs can be used to implement this security model? If you've read my earlier posts, then you might guess that the answer lies within OAuth2 and OpenID Connect—and you’d be correct.

In particular, OpenID Connect can be used to perform the initial authentication of the end user signing into the mobile application as described in the blog posts here and here.

As outlined in the diagram above, the result of that initial authentication includes an OIDC ID token and OAuth2 access token. The ID token must be validated per the spec, then information contained in it can be relied upon to describe the authenticated user.

If additional information is needed, the mobile app can make a call to the IdP OIDC UserInfo Endpoint to obtain more information about that user. Additionally, the OAuth2 access token that is provided should be cached on the mobile app and included with the authorization header of each API request per the OAuth2 Bearer Token Usage spec. When the mobile app needs data from the backend, it makes an API request to the API gateway.

Now, if the OAuth2 access token is also a JWT, that makes the downstream authentication (access token validation by the API gateway) easier. If it is an opaque token, then the system actor must be able to pass the access token to an IdP OAuth2 Introspection Endpoint to validate it.
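
The dispatch between the two validation paths might look like the sketch below; both validators are supplied by the caller here, standing in for a local JWT check and an introspection call respectively:

```python
def validate_access_token(token, validate_jwt_locally, introspect):
    """Pick the validation path: a JWT (three dot-separated segments)
    can be checked locally against the IdP's signing key; anything
    else is treated as opaque and sent to the introspection endpoint."""
    if token.count(".") == 2:
        return validate_jwt_locally(token)
    return introspect(token)

# stand-in validators for illustration
result = validate_access_token(
    "hdr.payload.sig",
    validate_jwt_locally=lambda t: ("local", t),
    introspect=lambda t: ("introspected", t),
)
```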

All of this is discussed in detail in my OAuth2 blog post and OIDC series. Regardless of the validation process, the token must be validated; if validation succeeds, the claims provided through that access token (JWT) are available to the system for authorization decisions.

Securing each step of the request’s journey

The mobile app will make API calls to the API gateway; identity information will be passed from the mobile app to the API gateway layer. When the request arrives at the API gateway, the access token must be extracted from the request and validated. This validation step could be performed locally, if the token is a JWT, or could be done by querying the identity provider’s OAuth2 Introspection Endpoint or another identity provider-specific mechanism. If necessary, a call can be made to the OIDC UserInfo Endpoint.

Some type of coarse-grained authorization decision will likely be made at this point, the details of which are beyond the scope of this conversation. Finally, the access token that was passed into the API gateway with the request needs to be replaced with an access token whose scope matches the downstream actor’s (API provider’s) scope.

This is accomplished with a Token Exchange Grant call defined by the OAuth2 Token Exchange spec. The new access token is placed into the API request (replacing the existing token) and that request is routed to the API provider layer.
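
The token swap at the gateway might look like the following sketch, where `exchange` stands in for the actual Token Exchange grant call (stubbed out here) and the header handling is the illustrative part:

```python
def reissue_for_downstream(headers, exchange):
    """Swap the inbound bearer token for one scoped to the API
    provider, as the gateway does before routing the request."""
    scheme, _, token = headers["Authorization"].partition(" ")
    assert scheme == "Bearer"             # per OAuth2 Bearer Token Usage
    downstream = dict(headers)            # leave the inbound request untouched
    downstream["Authorization"] = "Bearer " + exchange(token)
    return downstream

out = reissue_for_downstream(
    {"Authorization": "Bearer gw-token", "Accept": "application/json"},
    exchange=lambda t: "provider-token",  # stubbed Token Exchange grant call
)
```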

When the request arrives at the API layer, the access token is extracted and validated in the same fashion as the input token at the API gateway layer. Likewise, the UserInfo endpoint can be queried for additional information, a fine-grained authorization decision can be enforced, and possibly a new downstream token obtained for the datasource tier. For the last component, this assumes that the datastore layer can process an OAuth2 access token. 

The request that is sent to the datastore is almost certainly going to look different than the one that was sent to the API provider. There may also be multiple datasource queries for a single API request. Implementing the OAuth2 access token check at the datastore will likely require a custom authentication module on the target datastore in order to implement the desired behavior.

An example of a database that supports something like this is DB2 with DB2 Connect using Kerberos authentication. More details on that implementation can be seen here.

The authentication and identity propagation model described here isn’t applicable if:

  • the end user's security context isn’t extended to the backend
  • generic service accounts are used
  • OAuth2 Client Credential Grant is used
  • any of the other assumptions listed earlier in this post are not true 

While extending the end user’s security context back to the API provider introduces complexity to the authentication pattern, it brings valuable benefits, including elimination of privileged service accounts on publicly facing systems (such as the API gateway), secure auditing, and efficient support for attribute-based access control (ABAC) decisions.

Furthermore, using a spec-based approach to implement a complex identity pattern such as end-to-end identity propagation provides a layer of insulation between your solution and the identity provider vendor, making it easier to change out the identity provider later if needed.

Read other posts that Robert has contributed to the Apigee blog.


Identity and Access Considerations for Public & Private Clouds

APIs are everywhere. With organizations of all sizes connecting their customers, partners, and developers through APIs, identity and access management becomes a key consideration. But different use cases have different requirements.

Some organizations rely on SAML, while others might employ OAuth, OpenID Connect (OIDC), or JSON Web Tokens (JWT).

Apigee’s API platform, which can be deployed as a private cloud in a customer’s own data centers or can be leveraged from the Apigee Cloud, incorporates several different components, each with different security capabilities:

  • Apigee Edge admin UI: The admin UI used by the API team.
  • Apigee Developer Portal: The web portal to expose your APIs, educate developers about your APIs, sign up developers, and let developers register apps.
  • Apigee Management API: Used primarily for CI/CD processes.
  • Apigee runtime APIs: The customer APIs proxied by Apigee Edge.

Here, we’ll explore how Apigee Edge addresses the identity and access needs for both private and public cloud deployments, as well as considerations for runtime security, developer portal security, and management UI/API security.

Enabling SSO for users logging in to Apigee Edge

Many enterprises need to support single sign on (SSO) for their employees to log in to vendor applications. The SSO providers are typically connected to the enterprise directory services, so that they’re kept in sync when employees join or leave the organization. 

How SSO can be enabled on different Apigee components

* Direct LDAP connectivity from Apigee Edge is not supported for Apigee public cloud. It’s generally considered a security risk to expose LDAP services over the internet, so Apigee recommends a SAML SSO provider (like Okta, Ping Identity, or SiteMinder, for example) for integration.

Leveraging Apigee to implement OAuth for APIs

OAuth has become the de facto standard for securing APIs. Apigee Edge provides a fully compliant OAuth 2.0 authorization server (AS). Apigee Edge also acts as a resource server.

In addition to managing the lifecycle of access tokens (generate, expire, and revoke), Apigee Edge also provides the ability to manage application credentials (client IDs and secrets).

Users can generate (and rotate) client IDs, set expirations, and revoke and approve access to APIs.
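
As a loose illustration of what minting a credential pair involves (Apigee generates these for you; the function below is purely a stand-in), the essential property is that secrets come from a cryptographically secure random source:

```python
import secrets

def new_client_credentials():
    """Sketch of generating an application credential pair. Secrets
    must come from a CSPRNG (Python's secrets module here), never
    from predictable sources like timestamps or counters."""
    client_id = secrets.token_urlsafe(16)      # public identifier
    client_secret = secrets.token_urlsafe(32)  # confidential; store hashed
    return client_id, client_secret

cid, csecret = new_client_credentials()
```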

There are several capabilities provided for OAuth across public, private, and hybrid cloud scenarios. Microgateway is a lightweight gateway which can be deployed within a trusted network in close proximity to back-end target services.

This helps to achieve a hybrid deployment model by leveraging the Apigee Cloud for all public APIs and managing private APIs using the microgateway, while maintaining a single point of control.

Capabilities provided for OAuth across public, private, and hybrid cloud scenarios

Access an external identity provider for APIs

API access often requires integration with identity providers for authentication (and sometimes authorization) of users. As part of an OAuth flow (Auth code, password, or implicit grant types), Edge often needs to integrate with external identity providers. 

Edge comes with an OAuth authorization server, but it requires integration with an existing identity provider to authenticate users. For example, before completing an “auth code” or “password” grant OAuth flow, the user (the resource owner) must be authenticated against an identity provider.

There are multiple ways for Apigee Edge to integrate with an external identity provider. The first step is to establish a trusted and secure communication between Apigee Edge and the identity provider.

This is generally achieved using mutual transport layer security (TLS). The second step is for the identity provider to communicate the identity of the end user in a trusted format. This trusted format is generally a SAML assertion or a JWT claim, signed by the identity provider. Apigee Edge has the public key to verify the assertion or claim, then extracts the identity from the identity token.
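
For the JWT case, extracting the identity once trust is established amounts to decoding the payload segment. The signature verification step described above (checking against the IdP’s public key) needs a JOSE/crypto library and is deliberately omitted from this sketch; never trust claims from an unverified token in practice:

```python
import base64
import json

def decode_jwt_payload(token):
    """Parse the claims out of a JWT's payload segment. This only
    decodes; it does NOT verify the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# construct a sample (unsigned) token purely for illustration
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"alice"}').rstrip(b"=").decode()
claims = decode_jwt_payload(header + "." + payload + ".")
```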

Apigee can integrate with external identity providers (like Okta, Ping Identity, CA SiteMinder, and Active Directory, for example) for user authentication via one of the following options:

  • A signed token (SAML or JWT) containing the identity assertion, issued by the identity provider: Apigee validates the token and extracts the identity.
  • An API call: If mutual trust can be established between the identity provider and Apigee Edge (two-way TLS, for example), then Apigee can make a REST/API call to the identity provider for authentication.
  • Custom libraries: If the identity provider provides custom libraries in Java or JavaScript, then those libraries can be leveraged to communicate with the identity provider within Apigee.
  • LDAP: Apigee can communicate with identity providers directly via LDAP(S).

Here are the different ways to implement the integrations:


Leverage external OAuth authorization server for APIs

Some enterprises already have an OAuth AS, like Active Directory Federation Services (ADFS) or Okta. These businesses might want to continue to leverage these authorization servers with Apigee Edge.

Apigee Edge supports a couple of integration patterns that involve an external AS.  

In the first pattern, the external AS also acts as the identity provider. Apigee Edge acts only as the resource server. Access token generation and life cycle management is performed by the AS. Client IDs and secrets are managed by the AS.

The advantages of this integration pattern include:

  • Token (web and API) lifecycle management happens in a centralized AS
  • Integration with existing solutions

The cons include:

  • Performance impact (gateway makes sideways calls to validate token for each API call)
  • Requires developer portal customization; the application registration process (a developer portal capability) must integrate with the external AS.
  • Does not work with Apigee API products.
  • Loss of application-based analytics; Apigee analytics is based on client ID, and because this is managed/maintained only externally, analytics will be limited.
  • Functionality of out-of-the-box developer and app lifecycle management is restricted.

In the second integration pattern, the external OAuth server acts both as the identity provider and the AS. Edge is also the AS and resource server. Access tokens are generated by the external AS and are imported into Edge. In the case of JSON web tokens, Apigee can validate the external token based on public keys.

Access token generation and lifecycle management is handled by the external AS and Apigee. Client IDs and secrets are managed by the external AS and a copy is imported into and stored in Edge.

The advantages of this integration pattern include:

  • Performance (no sideways calls to validate tokens)
  • Integrates with existing solutions/landscape
  • No loss of functionality from Apigee.

The cons include:

  • Developer portal customization is required
  • Token lifecycle management is required in both Apigee and the external AS

Implement lightweight security

Apigee manages API programs of all sizes and of all kinds of enterprises. In some cases API programs are run by small teams inside large organizations. In other cases, the platform might belong to startups or small and medium-sized businesses. In these cases, if the data isn’t very sensitive, customers might initially want to adopt a lightweight security model to get the API program rolling.

Customers can start implementing basic authentication or API key-based security mechanisms on the runtime. The out-of-the-box username:password mechanism for developers to log into developer portals or for an API team to log into the management UI/APIs makes it easy to get started. This enables customers to rapidly launch their API program and add advanced security features later on.

Apigee is widely used by organizations of all sizes, with a variety of security requirements, so the platform has evolved to provide a wide variety of choices and options. Our documentation contains a wealth of details on the topics covered in this post, including using OAuth with the Apigee Edge management UI, enabling SAML authentication for Edge, configuring TLS on Edge, and defining user roles.


An Alternative to Delegated Access in the Enterprise

Extending OAuth2 and OpenID Connect as the enterprise standard for API security


OAuth2 and OpenID Connect (OIDC) have their origins in the concept of delegated access—think three-legged OAuth. These protocols are designed around the notion that the resource owner is an end user; however, for the enterprise, the business may own the data and be responsible for determining when access should be granted. This post explores how an enterprise IT organization may go about defining policies around such access.

What’s delegation?

Delegated access is the act of one entity (a person or system) granting access to a resource that they own (or control), to another person or entity that has a need for access. The concept of delegation comes up over and over in identity and access management. The Kerberos protocol that Microsoft Windows security is built upon has delegation as a core concept.

Within three-legged OAuth (and OIDC) use cases, the credential validation mechanism is not specifically defined (not to be confused with the OIDC ID token, which conveys details of the authentication to the application). Yet it is generally assumed to include a consent-granting phase wherein the end user (who is being authenticated and owns a resource the third-party application wants to access) specifically grants permission, or consents, to the third-party application accessing some resource. For our purposes, let’s assume this resource is an API that provides data the end user owns.

The evolving standard for API security

Ironically, these protocols, designed for a very specific use case (three-legged OAuth and delegated access), grew into the de facto standard for authentication and authorization of API access, mobile applications, single-page applications (SPAs), and more. To that end, OAuth2 (together with additions like JWT for the access token and additional security mechanisms to provide a minimal authentication protocol, which many of the enterprise IdP vendors already supply) and OIDC should be the starting point for any identity conversation in those spaces. There was a point, three years ago, when I would have said that OAuth2 should only be used in situations where delegated access is needed.

But the world of modern identity needs a standards-based approach to identity just as much as last decade’s SOA and SOAP web services did. No other standard had the industry’s major players backing it (and using it) while providing the requisite functionality. That’s why I call OAuth2 the de facto standard of API and mobile security. Of course, that is evolving into OIDC being the de facto standard.

So, OAuth2 and OIDC are being used for a variety of use cases across SPA, mobile, API access, and traditional web applications that they were not originally targeting. This bothered me in the beginning; I still interact with people today who do not really like this approach. But the ship has sailed, so let’s make the best of it.

When the business owns the accessed resource

We live in a world where OAuth2 and OIDC are the standard for API security (at least where end-user authentication and identity propagation are concerned). These protocols assume a consent-granting step during the login workflow and assume that the user being authenticated owns the resource being accessed. What happens when the business owns the resource being accessed, instead of the user?

In the enterprise, this is a common scenario; especially in the business-to-employee (B2E) space. It’s possible in the B2B space, and in the B2C space it would depend on the jurisdiction and the exact circumstances. For the discussion here, let’s stick with the B2E space.

If the enterprise owns the data, then it owns the authorization decision regarding who and what can access that data. These decisions will be driven by the data classification assigned to the data in question. A typical data classification policy in a Fortune 100 company would be created by information security, legal, and representatives from the business side. The categories of data sensitivity could look something like this:

  • Public: information that is meant for distribution to the public, including store location information, hours, and anything on the public corporate web site
  • Internal business use: information that is used in regular business processes; it generally isn’t meant for public disclosure, but if it were disclosed, it would not seriously harm the business, business partners, or customers
  • Confidential: sensitive information that should not be disclosed; if it is disclosed, destroyed, tampered with, or manipulated, it could cause significant damage to the business, business partners, or customers

An effective data classification policy will ensure that sensitive data is handled appropriately, based on the level of risk associated with it. This drives a data-loss prevention program, which forms one of the pillars of an enterprise authorization policy definition process. That, plus an understanding of the expected audience, will drive the security semantics of the authorization policy of an API that is exposed to internal or external actors. The same is true for any IT system or app that provides access to data (most of them do), but here we’re interested in APIs, so let’s focus on them.

Determining access

In the enterprise, with the possible exception of personally identifiable information (PII), the data most employees, contractors, and business partners interact with is owned by the business, not individuals. Depending on the country, the business’s right to access the data may also be written into contracts. Additional steps may be necessary to obtain this level of access if data residency requirements are in play, but that is beyond the scope of this discussion.

The data classification policy drives the semantics of an API authorization policy. Before the semantics can be codified into a policy (whatever technical implementation that takes), the syntax of that policy must be defined. To that end, something like eXtensible Access Control Markup Language (XACML) could be used to author the authorization policy. The implementation of such policies is, again, beyond the scope of this post.
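To make this concrete, here is a minimal sketch (in Python rather than XACML) of a classification-driven access check. The classification levels and role-to-clearance mappings are hypothetical examples, not part of any standard:

```python
# Illustrative classification levels, ordered by sensitivity.
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}

# Highest classification each (hypothetical) role is cleared to read.
ROLE_CLEARANCE = {
    "anonymous": "public",
    "employee": "internal",
    "finance-analyst": "confidential",
}

def can_read(role, data_classification):
    """Permit access only if the role's clearance meets the data's level."""
    clearance = ROLE_CLEARANCE.get(role, "public")
    return CLASSIFICATION_RANK[clearance] >= CLASSIFICATION_RANK[data_classification]

print(can_read("employee", "internal"))       # True
print(can_read("employee", "confidential"))   # False
```

A real policy engine would evaluate far richer attributes (subject, resource, action, environment), but the shape of the decision is the same.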

So if the data is owned by the business and not the individual, it follows that the business will be responsible for determining who (and what) is allowed to access and manipulate that data. The mechanisms used to accomplish this by the IT department have also been discussed (data classification policies and API authorization policies).

Decisions, decisions

We have arrived at a place where we are using a delegated access (authorization) protocol (OAuth2 and OIDC) in a general case where the end users being authenticated don't own the data they are trying to access. As I mentioned before, we’re going to continue to use OAuth2/OIDC because that’s the best standard that has emerged in the API and mobile spaces.  

To this end, the consent granting phase of the login workflow on the identity provider (the OpenID provider or authorization server) may be unnecessary. Most enterprise-grade identity providers will have features to support this. For example, Azure Active Directory provides a concept called administrator consent that allows permission to be granted on resources for all users in a tenant.

At least two different forms of authorization have been mentioned in this discussion: authorization decisions made during token issuance (authentication time) and authorization decisions made at runtime during resource access (API invocation).

By its very nature, the initial decision, made on the identity provider at the time of authentication and token issuance, is more coarse-grained than the decision that can be made at the time of API invocation on the API provider (or possibly the API gateway). This is because there is far more information available at runtime during the API invocation than during token issuance about the context surrounding exactly what the end user or application is attempting to do.

As an example, at one large client, the decision was made to use the information available at the time of token issuance to scope the token (the JSON Web Token, or JWT) to a known API audience. This audience information, combined with strict audience enforcement at the API provider (or API gateway), ensured that tokens were not reused between environments (unit test, QA test, load test, and production, for example).

Taken one step further, the audience information can be scoped to a particular API or set of API functions. It can even implement a basic RBAC (role-based access control) model to protect the API. Yet a mechanism still has to exist on the API endpoint to correctly map the audience (role) to the resource—this is common functionality in an API gateway. Many other authorization schemes are possible.

How closely the consent and delegation can be integrated between the IdP and the API provider depends upon the system and its features. But, for most of my enterprise clients, in the B2E space the consent step for the end user isn’t present and, instead, there is an evolved authorization policy concept defined at the API provider layer.

In order for this to happen, enough information has to be present in the token (the JWT-based access token or the ID token) to make authorization decisions: LDAP attributes, group information, role information, and context-specific information about the environment or application, beyond what is defined by the JWT, OIDC, and JWS specs.

The OAuth2 and OIDC specs are now employed in use cases they weren’t originally meant for, but the industry needed a standards-based, interoperable approach to application security. This is why they’ve become the de facto standard for API authentication and authorization. Moving forward, these specifications (especially OIDC) will be the standard that enterprise identity is built upon.


Apigee and Okta Partner for API Security

Apigee is excited to announce a partnership with Okta, a leader in the Identity-as-a-Service space.

A critical reason customers use Apigee is to secure their APIs. And one of the most important aspects of security is the authentication and authorization of those APIs.

Apigee’s and Okta’s offerings complement one another to solve the AuthN/AuthZ problem for customers. Almost every customer using Apigee needs to integrate Apigee Edge with a central identity/SSO store. Enterprises traditionally have used legacy, on-premises identity providers.

But customers increasingly are choosing Apigee and Okta together as two critical platforms for digital transformation and their cloud journey.

There are two common use cases for this integration:

  • Customers want to use Apigee’s API management capabilities and use Okta for identity management
  • Customers use Okta as their OAuth/OpenID connect provider across the organization

As part of the partnership, we’ve created an out-of-the-box solution for the first scenario (learn about it here). The solution is intended as a reference implementation and is available as an open source solution.

Scenario 1: Apigee provides OAuth

In the first use case, Apigee acts as the OAuth provider while Okta provides the identity store and handles authentication of the identities.

Let’s take a quick look at what we’ve built. In this integration, Apigee provides the client_id and client_secret. The following pre-work steps aren’t part of runtime API security; they’re performed beforehand:

  • The Apigee admin registers Apigee as a valid client with Okta; Apigee receives the public key from Okta and stores it in its KVM or vault
  • A developer self-registers in the Apigee developer portal
  • The developer registers an app in the Apigee dev portal; Apigee issues a client ID and client secret to the app

In the runtime flow:

  • The client makes a call to Apigee with the client ID and secret, which Apigee validates
  • Apigee redirects the client app to Okta
  • Okta collects the user credentials and authenticates and authorizes the user, with support for multi-factor authentication (if needed)
  • If the user is validated, then Okta returns an id_token back to Apigee
  • Apigee issues an access token against the id_token and stores a map of access_token to id_token in the key storage
  • When a secured API is invoked, it is invoked with the access token, which Apigee validates; depending on requirements, it either sends the whole id_token or part of it to the backend
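The access_token-to-id_token mapping in the last two steps can be sketched as follows. This in-memory dictionary is only an illustrative stand-in for Apigee’s key-value map, not its actual API:

```python
import secrets

# Illustrative in-memory stand-in for Apigee's token store (KVM/vault).
_token_map = {}

def issue_access_token(id_token):
    """Mint an opaque access token and record which id_token it maps to."""
    access_token = secrets.token_urlsafe(32)
    _token_map[access_token] = id_token
    return access_token

def resolve_id_token(access_token):
    """At API invocation time, look up the id_token behind the access token."""
    return _token_map.get(access_token)

at = issue_access_token("example-id-token")
print(resolve_id_token(at) == "example-id-token")  # True
```

The point of the indirection is that only the opaque access token ever travels with API calls; the id_token (and whatever claims the backend needs from it) stays server-side until the gateway chooses to forward it.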

Scenario 2: Okta acts as the OAuth provider

This is another common integration pattern, though we currently don’t provide an out-of-the-box solution for it. When Okta is the enterprise-wide identity platform (for web, mobile, or APIs), the customer is likely to leverage Okta as the OAuth provider as well.

To follow this integration pattern, it’s important to keep a few things in mind.

For an API program, a developer portal is crucial. This is where developers register themselves as well as their apps. Once the apps are registered, developers receive their API key or client_id/client_secret from the dev portal.

Okta can be used as an OAuth provider, but Apigee must have knowledge of the client_id and client_secret because:

  • It uses those IDs for throttling
  • The IDs are crucial for analytics and API debugging
  • IDs are used for app lifecycle management

We hope you've found this walk-through useful, and look forward to working with Okta to develop new integrations. To learn more about Okta’s OAuth and OpenID Connect support, visit https://developer.okta.com/docs/api/resources/oauth2.html.


API Best Practices: Security

How to ensure your APIs aren't naked

Over the last few years, we’ve witnessed hundreds of enterprises launching API initiatives. This post is the first in a series that aims to distill our learnings from all these customer engagements and share best practices on a wide variety of topics, from deployment models to API design to microservices.

Three-quarters of mobile apps fail standard security tests—and most cyber attacks target the app layer, according to Gartner.

Organizations have used web application firewalls and DDoS protection solutions to secure their web apps. In the world of mobile, cloud, and microservices, where enterprise data is accessed with APIs in a zero-trust environment, what’s needed is deep API security. At risk are hundreds of thousands of sensitive customer records or millions of dollars.

There are known threats. The Open Web Application Security Project (OWASP), an online community of application security researchers, publishes a regularly updated list of the top 10 security threats enterprises face. But there are also potential attacks from “unknown” threats: software that constantly scans for vulnerabilities in application infrastructures.

For protection against all kinds of external threats, organizations should create proxies in front of their APIs with an API management platform and enforce a set of consistent security policies at the API proxy layer. So, let’s review typical types of cyber threats and how organizations secure their APIs.

Injection threats

These are common. Attackers trick an application into divulging sensitive information by including malicious commands as part of data input. For example, by sending a string like "' OR 1=1 --" as data input, hackers could bypass user authentication, because the injected clause always evaluates to true and the trailing comment cuts off the password check:

select user_id FROM customer_data WHERE username = '' OR 1=1 -- ' AND customer_passwd = 'abcd';

To prevent this kind of attack (and related JSON and XML bomb threats), organizations use built-in policies like regular expression protection (Regex) and XML/JSON threat protection policies available in API platforms. These API input validation policies harden your APIs and applications against injection threats.
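A simplified illustration of what such a regular-expression screen might look like is below. The patterns here are examples only; Apigee’s Regular Expression Protection policy ships with its own configurable patterns:

```python
import re

# Illustrative patterns for common SQL-injection probes (not exhaustive).
INJECTION_PATTERNS = [
    re.compile(r"(?i)\b(union|select|insert|delete|drop)\b.*\b(from|table|into)\b"),
    re.compile(r"(?i)\bor\b\s+\d+\s*=\s*\d+"),  # e.g. OR 1=1
    re.compile(r"--"),                          # SQL comment used to truncate a query
]

def is_suspicious(value):
    """Flag input that matches any known injection pattern."""
    return any(p.search(value) for p in INJECTION_PATTERNS)

print(is_suspicious("' OR 1=1 --"))  # True
print(is_suspicious("alice"))        # False
```

Pattern screening is a hardening layer, not a substitute for parameterized queries in the backend; both belong in a defense-in-depth posture.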

Broken authentication and session management attacks

These attacks steal session IDs or tokens from your app user’s device and enable hackers to take over a user account. To mitigate such attacks, enterprises use out-of-the-box two-way TLS and OAuth2 policies to implement standard OAuth2 and enforce two-factor authentication.

Cross-site scripting (XSS)

Cross-site scripting involves a hacker executing malicious code on behalf of an innocent user by hoodwinking the user into clicking compromised URLs.

Instead of http://example.com/account?item=543, for example, the hacker could get an innocent user to execute a URL whose query string embeds a malicious script tag pointing at an attacker-controlled server.

When the mobile client using the API executes this script, an attacker could steal the user’s authorization token or cookie. This vulnerability exists in many web apps and mobile apps that use webviews. Hackers can use the innocent user’s token or cookie to log into your systems and steal sensitive data.

To prevent XSS threats, you can enforce strict API input validation with out-of-the-box regular expression protection (Regex) policies in the API proxy layer.

Insecure direct object reference and missing function-level access control

With each of these attack vectors, the hacker modifies an existing API request pattern to either access objects they shouldn’t or request access at a different level (user versus admin) in your apps.

For example, realizing the API is of the form https://api.awesomeretailer.com/user/account/1234, hackers could attempt a series of API calls by changing the account number (for example, https://api.awesomeretailer.com/user/account/5432) to access other accounts. Or they could try to gain admin-level access using https://api.awesomeretailer.com/admin/account/5432 instead.

With OAuth2 and the right scopes set for APIs, organizations can easily reduce the attack surface and mitigate risk without refactoring existing applications.
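A minimal sketch of scope enforcement at the proxy layer follows; the endpoint-to-scope mapping and scope names are hypothetical:

```python
# Hypothetical mapping: each endpoint declares the OAuth2 scope it requires.
REQUIRED_SCOPES = {
    ("GET", "/user/account"): "account.read",
    ("POST", "/admin/account"): "account.admin",
}

def scope_allows(token_scopes, method, path):
    """token_scopes is the space-delimited scope string from the access token."""
    required = REQUIRED_SCOPES.get((method, path))
    if required is None:
        return False  # unknown endpoint: deny by default
    return required in token_scopes.split()

print(scope_allows("account.read openid", "GET", "/user/account"))    # True
print(scope_allows("account.read openid", "POST", "/admin/account"))  # False
```

A token issued with only user-level scopes simply cannot reach the admin endpoint, regardless of what account number the attacker substitutes into the URL.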

Sensitive data exposure

When web and mobile applications store sensitive data like credit card information and passwords insecurely, problems arise. By building secure API proxies for your sensitive APIs, you create a secure facade in front of your applications. Using a PCI- and HIPAA-compliant API platform, organizations create secure proxies and also ensure API keys are stored in encrypted mode.

Cross-site request forgery (CSRF)

In these attacks, hackers exploit the trust that an application has in a user’s identity after the user initially logs into the app (a banking app, for example). The hacker sends the innocent user an HTML email with a tag like

<img src="https://www.somebank.com/move?amount=5000&destination=favhacker">

If this “image” is loaded automatically, the user’s client makes a transfer request with the user’s own IP address and session cookies or token; it appears as if the innocent user made the call. From the app’s perspective, it looks like a legitimate request, and it sends $5,000 to the hacker’s account.

This kind of attack is mitigated by adding special CSRF tokens, reducing the attack window with expiring tokens, and using two-way TLS to avoid token leakage. Organizations have implemented these mitigations with a set of secure API proxies for all their sensitive APIs.
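A minimal sketch of a session-bound CSRF token using an HMAC, assuming a per-deployment secret key (real web frameworks bundle equivalent machinery):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # per-deployment signing key (illustrative)

def csrf_token(session_id):
    """Derive a CSRF token cryptographically bound to the user's session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def csrf_valid(session_id, presented):
    """Constant-time comparison of the presented token against the expected one."""
    return hmac.compare_digest(csrf_token(session_id), presented)

tok = csrf_token("session-42")
print(csrf_valid("session-42", tok))       # True
print(csrf_valid("session-42", "forged"))  # False
```

Because the forged img tag in the email cannot know this token, the transfer request it triggers fails validation even though the session cookie rides along.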

Beyond the above known OWASP threats, organizations also need to protect themselves from volumetric attacks like denial of service (DoS) attacks. Enterprises use spike arrest and rate-limiting policies at the API proxy layer to mitigate risk from such volumetric attacks.
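A spike-arrest policy can be approximated with a simple fixed-window counter. This sketch is illustrative and far cruder than the traffic smoothing a real gateway applies:

```python
import time

class SpikeArrest:
    """Minimal fixed-window rate limiter, loosely in the spirit of a
    spike-arrest policy; real gateways smooth traffic per interval."""

    def __init__(self, limit, window_seconds=1.0):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        """Admit the request if the current window still has capacity."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start = now  # new window: reset the counter
            self.count = 0
        self.count += 1
        return self.count <= self.limit

arrest = SpikeArrest(limit=3)
print([arrest.allow() for _ in range(5)])  # [True, True, True, False, False]
```

Rejected calls never reach the backend, which is what keeps a volumetric attack from exhausting downstream capacity.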

Adaptive threats

Besides the various known threats, increasingly we need to tackle “unknown” threats, where automated software programs called “bots” can constantly scan for security vulnerabilities in your app infrastructure.

Unlike web apps, APIs are programmable, making it easier for attackers to target APIs using bots. Bot traffic can probe for weaknesses in APIs, abuse guest accounts with brute force attacks, use customer API keys to access private APIs, abuse loyalty programs, or scrape pricing data for competitors via APIs. Bad bots like these comprise 10% to 15% of internet traffic today.

A sophisticated API platform continuously monitors your web and API traffic. It identifies bots by using API access behavior patterns, rather than IP addresses. As a result, bots can be tracked even when they change location.

Advanced API platforms use sophisticated machine learning algorithms on data aggregated across multiple customers. By analyzing billions of API calls, such a platform can distinguish legitimate human traffic more effectively than would be possible with a single source of data.

Once the platform identifies an API call pattern with a “bot-like” signature, it flags it. You can then specify the action to take for each identified bot signature, and the platform automatically takes the appropriate action, such as blocking, throttling, or honeypotting.

It is pretty easy to stand up a set of APIs for your mobile app teams or partners, but it takes a lot more to make sure you are not exposing a set of naked APIs.

For protection against all kinds of threats, organizations create proxies in front of their APIs and enforce a set of consistent security policies at the API proxy layer with API management platforms.


How to Make Your Apigee Edge-Okta Integration Seamless

Okta is a popular identity and access management solution that shares several customers with Apigee. Often, customers whose API calls flow through Apigee and rely on Apigee as the OAuth provider for all of their apps also want to use Okta for end-user authentication. How can they make this a seamless process that adheres to standards as much as possible?

In this short yet comprehensive demonstration, we walk through integrating Apigee and Okta and discuss topics including OAuth and the resource owner password grant flow.

All API traffic (the runtime API and the authentication/token API) is proxied through Apigee Edge. Apigee delegates the end-user authentication to Okta and generates an access token that can be used to access any API that is protected using OAuth policies.

Apigee supports custom attributes that can be associated with an access token very easily. This is helpful when you have to reference some values/IDs returned by the identity provider and, more importantly, when you have to pass them along to the back-end APIs at runtime. Apigee Edge makes it simple with out-of-the-box policies.

Apigee also supports external access tokens (those not generated within Apigee; in this case, from Okta). You’ll see two variations of the password grant: an access token minted and persisted within Apigee, and an access token minted in Okta (in the demo, the session ID token returned from Okta serves as the access token) but persisted and recognized within Apigee as if it had been generated there.

There are several benefits to using this delegated model approach, including:

  • It’s standards-based (OAuth 2.0)
  • It enables security enforcement at the edge
  • It enables responsive and secure APIs
  • It provides end-to-end visibility into authentication traffic in addition to your run-time traffic
  • It creates a seamless experience for app developers

We hope you found this useful. If you have any questions or want to discuss this, please visit community.apigee.com.

Why Enterprise Security Needs APIs

From time to time, we will have guest authors weigh in on specific topics here. While I’m interested in all topics, and have opinions on almost all things related to APIs and data :), sometimes it’s better to let the experts have a say. So here is the first guest post. This topic is especially important because by now people realize that digital transformation requires APIs. And we know that APIs require security. But does security need APIs? I discussed this in a previous post; here's the first in a series of posts that elaborates on the topic.

Enterprise security incidents surged by 38% last year, and malicious attacks on enterprise data continue to have significant impact, with costs climbing from over $500 billion today to an estimated $2 trillion by 2019. Enterprises are adopting APIs and API facades to secure access to valuable enterprise data from both internal and external users and apps.

Although many new apps use REST APIs, many enterprises still employ web applications that make direct calls to backend systems. To date, enterprises have used a layered security model to thwart external threats at the app layer. By mandating the use of APIs to access critical enterprise data, enterprises can create effective security control points and gain several security advantages.

Smaller attack surface

Today, when there’s a security breach in an application, all the backend systems that provide data to the app can be exposed. By ensuring the apps only use APIs to access data on backend systems, you can automatically restrict access to a smaller set of resources and data by passing the requests through an API gateway. This minimizes the attack surface should a breach occur.

Smaller attack window

Web apps that use simple usernames and passwords are challenged to securely store these credentials. Rotating them frequently is also difficult and therefore rarely implemented. In the event of a compromise, the enterprise is severely exposed as there is no time dimension to the credentials (they don’t generally expire).

By using APIs and OAuth, you limit the opportunity for attackers by using continuously rotating credentials that expire frequently.
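The time dimension that API credentials add can be illustrated with a short-lived token check; the TTL value and field names here are arbitrary examples:

```python
TOKEN_TTL_SECONDS = 300  # short-lived credential; illustrative value

def issue(now):
    """Issue a credential with an absolute expiry timestamp."""
    return {"token": "opaque-value", "expires_at": now + TOKEN_TTL_SECONDS}

def still_valid(credential, now):
    """A stolen credential is useless once its window closes."""
    return now < credential["expires_at"]

cred = issue(now=1000.0)
print(still_valid(cred, now=1200.0))  # True  (within the 5-minute window)
print(still_valid(cred, now=1400.0))  # False (expired; attacker's window closed)
```

Contrast this with a static username/password, which stays exploitable indefinitely after a leak unless someone notices and rotates it manually.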

Defense in depth

By using API management, enterprise security teams can deploy an effective, layered security framework. You can enforce a set of security policies across all your enterprise APIs (spike arrest and injection threat protection, for example) and, if required, enforce additional security policies, such as OAuth2, on select APIs.

Think of an API management platform as the internet-exposed layer of your applications environment that provides a defensive capability in front of your backend systems. 


With APIs and API management, you have the flexibility to downgrade or upgrade your API versions seamlessly as part of remediation efforts against security breaches. You can also roll out new API versions to select audiences before wider rollout to minimize security risks.

Virtual patching

An API management platform enables virtual patching and quick remediation against an identified vulnerability in a downstream system without having to change the source code of the system. By applying new security policies to limit potentially malicious input in the API gateway, you can significantly mitigate the impact of zero-day exploits and unpatched systems.

Security metrics

Most enterprises have limited visibility into the vulnerabilities of the apps accessing their enterprise data. With APIs and API management, you automatically gain granular visibility into which backend systems are accessed insecurely.

API analytics provide traffic visibility that enables an enterprise to discern between bot traffic and legitimate traffic, transport layer security (TLS) versus non-TLS traffic, and authenticated versus unauthenticated requests. Security metrics like these enable you to fix vulnerabilities and improve overall application security.

By mandating the use of APIs to access any critical enterprise data, you create effective security control points and your enterprise assets become more secure. Enterprises that are going digital and are concerned about their assets should consider an API-first approach to their digital transformations.

Coming up, we’ll take a more in-depth look into the benefits of using APIs to secure enterprise data.

Black Friday: Protect Your APIs from Cyber Threats

Webcast replay

Bad bots can scrape your inventory and pricing information, and steal consumer credentials. Bots can also put stress on backend services and impact your SLA to customers and partners, especially around events like Black Friday.

To protect your APIs, you need a new, data-driven approach to identify and stop bad bots automatically.

In this webcast replay, Apigee's Subra Kumaraswamy and David Andrzejek discuss:

  • the nature of bot attacks and typical use cases
  • how to intelligently detect bad bots while letting good bots in
  • how to implement technologies in your security stack to protect against bad bots



Security in the Digital Age: Deep-Dive Webcast

What are the biggest cyber threats facing financial and healthcare entities today and in the near future? How can organizations embrace innovation and agile development culture while balancing the time to market goals with risk management?

Join Jason Kobus, director of API banking at Silicon Valley Bank, and Apigee's head of security Subra Kumaraswamy in this webcast replay as they discuss how an effective API program, combined with a secure API management platform, can:

  • provide visibility into all security threats targeting their backend services
  • control access to sensitive data, end-to-end
  • enable developers to build secure apps with secure APIs
  • facilitate secure access with partners and developers