Apigee's Top API Editorials of 2018

Apigee experts published dozens of editorials in 2018 to help developers, IT architects, and business leaders understand how to maximize the value of APIs and keep pace with constant technological change.

With literally quadrillions of daily API calls connecting apps, data, and systems throughout the world, 2018 saw APIs reassert their position at the center of almost every digital use case. Though APIs are not a new concept, the ways in which organizations leverage them continue to expand: from APIs used within the enterprise to manage microservices and enable faster, more agile development methodologies, to monetized APIs that open new business models and extend an enterprise’s digital capabilities to new partners.

Here are some of our top articles from 2018, organized by some of the year’s biggest themes. Thank you to all of our readers, and stay tuned for more in 2019!


API security

APIs are crucial to the automated connecting of data, applications, and systems—and when companies make automation easier for partners and customers, they often inadvertently make it easier for bad actors, too. Several organizations and their customers suffered through high-profile data breaches in 2018 thanks to API security lapses—which is why we dedicated several articles to helping enterprises make their APIs more secure. Some of our top security articles include:

Managing APIs as products

2018 saw more enterprise leaders recognize that APIs are not just an integration technology but also software products that help developers to more quickly and easily leverage and reuse digital assets. Enterprises should apply full lifecycle management and a customer-centric mindset to their API efforts. Some of the articles we wrote to help include:

Digital transformation, IT modernization, and digital ecosystem best practices

The digital economy moves faster than many legacy businesses are used to—and the constant change has meant that to compete, enterprises that lack API expertise have had to get up to speed quickly. From exploring why both external-facing and internal-facing APIs should be managed as products to detailing how to plan effective ecosystem participation and API monetization, we looked at many aspects of the digital transformation puzzle:


Microservices

Because of the speed, scale, and agility they promise, microservices-based architectures continued in 2018 to be one of enterprise IT’s hottest topics. But despite the enthusiasm, microservices remain complicated to manage. To understand why APIs are an important part of the mix, check out Demystifying Microservices by Ruth Gantly in APIs and Digital Transformation.

APIs and banking

With new open banking requirements unrolling across many regions and fintech startups gaining traction around the world, 2018 was a disruptive year for bankers. From satisfying regulations to innovating faster and adding new ecosystem partners, APIs play vital roles in helping financial institutions to debut and iterate new services and helping legacy banks to compete in an increasingly fast-moving market. Some of our top banking articles from 2018 include:

The Top 5 API Editorials of the Quarter


APIs span a dizzying variety of constantly evolving use cases, with literally quadrillions of daily API calls connecting apps, data, and systems throughout the world. From improving internal efficiencies and participating in digital ecosystems to fortifying security and building machine intelligence into new products, the ways enterprises leverage APIs are changing and expanding all the time.

Apigee experts write dozens of editorials each year to help developers, IT architects, and business leaders understand how to maximize the value of APIs and keep pace with constant technological change. Here are our five most popular editorials from the most recently completed quarter.

APIs, Ecosystems, and the Democratization of Machine Intelligence

This article explores how productized APIs, as-a-service infrastructure, and other recent advances are democratizing machine intelligence and enabling enterprises to participate in ecosystems of intelligent software without having to build all the prohibitively expensive and complicated pieces themselves.

“Machine intelligence is becoming democratized,” author Anant Jhingran, who leads API management products at Google, says. “Organizations are increasingly pursuing ecosystems strategies in which they build digital products by combining their software, typically via APIs, with software from other companies.

“If a developer wants to build an app with mapping and navigational capabilities, for example, she doesn’t need to build the functionality herself—she can use APIs from companies such as Waze or Google ... Just as any developer today can easily build rich navigational functionality using APIs, they’ll soon be able to do similar things with [machine intelligence].”

Read the full article in CIO.

How APIs Become API Products

Using real-world examples from companies including Pitney Bowes, AccuWeather, and Walgreens, this article debunks the enduring bias that APIs are middleware. Rather, APIs can be the mechanisms through which value is exchanged in modern business, something the entire C-suite, not just IT, should understand.

Maximizing the value of an API requires designing, marketing, and managing that API as a product that empowers developers.

Read the full article in our Medium publication, “APIs and Digital Transformation.”

Using Behavior Analysis to Solve API Security Problems

Security has always been a top enterprise concern, but as the complexity of systems has increased, a company’s ability to protect its data and thwart attackers has become even more critical. In this article, Apigee’s head of vertical solutions David Andrzejek uses airport security as a metaphor to explain how behavior-based analysis is critical to protecting a company's APIs.

"Similar to airports, every enterprise has a multitude of entry points, including web apps, mobile apps, and partner integrations,” he writes. “The CIO also should be mindful of the expectation of low latency and continuous uptime, to provide the best customer experience and remain competitive.

“Like a traveler going through airport screenings, all incoming client credentials are validated for every request to enterprise systems,” Andrzejek continues. “All payloads are scanned for XML bombs, SQL injection, mutated or nested data forms, and the like. However, just as terrorists do not carry terrorist ID cards, hackers do not sign API requests with 'hacker credentials.'"

Read the full article in Help Net Security.

Lessons from Magazine Luiza’s Digital Transformation

Magazine Luiza, one of Brazil’s top retailers, is on a remarkable hot streak; after a scorching 2016 during which its stock price increased over 400%, the company has continued its momentum in 2017, achieving a 25.6% YoY increase in gross revenue and 55.4% YoY growth in eCommerce in Q2.

The company’s success has coincided with its execution of an aggressive digital transformation strategy. Comprehensive in scope, the strategy has touched virtually every aspect of the company: switching from monolithic applications to microservices; leveraging its API platform to create an ecosystem of mobile apps that span in-store sales, shipping and logistics, and even credit for customers; launching an online marketplace that boosted Magazine Luiza’s e-commerce inventory from 35,000 SKUs to over 1 million; and much more.

This article, based upon an interview with Magazine Luiza CTO André Fatala, is an in-depth exploration of the company’s transformation.

Read the full article in our Medium publication, “APIs and Digital Transformation.”

When Innovation Centers Don’t Innovate

Many enterprises launch innovation centers in response to the pressures of digital transformation—but not all of these centers actually lead to innovation. Apigee strategist John Rethans explains why innovation has to involve the core business, writing, “A bold bet that takes years to pay off is fine, but it has to be surrounded by a variety of other innovation projects that are both tied to core businesses and on clear paths to the market.”

One key, Rethans explains, is to empower creative thinkers to innovate the main business by giving them controlled and monitored access to core systems and services via APIs.

Read the full article in Forbes.

Security and Compliance Update: September 2017

We’re constantly updating our products to better protect your APIs and help you comply with relevant industry privacy and security standards. We’ve published a wealth of information to help you maintain security and compliance; here’s a quick guide to the latest tips and updates.

  • Our security best practice guide describes the actions you can take to ensure your API is secure before you put it into production.
  • Per PCI Council recommendations, PCI applications must use TLS 1.1 or higher by June 2018. If you need to enable only TLS 1.1 or higher for your APIs, contact Apigee Support.
  • We encourage you to periodically scan and penetration-test your APIs. Contact Apigee Support prior to your planned scan or penetration test.
  • Are you a PCI or HIPAA customer looking for recommended configurations and product features to help maintain compliance? Check out the PCI or HIPAA configuration guides.
  • Want tighter controls on user onboarding/offboarding and user authentication? Apigee now offers SAML single sign-on (SSO) options for user authentication and recommends all customers take advantage of the user controls it offers. Visit our documentation on enabling SAML authentication for details.

How To Submit Security Tokens to an API Provider, Pt. 1

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity–especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge and WebSphere DataPower.

There are many blog posts out there about the pros and cons of using a stateful security session tracking mechanism (a cookie, in most cases) versus using a stateless bearer token that’s submitted with each request to the server (the preferred model for apps that use APIs). This post isn’t one of those. Here, I’ll share my thoughts regarding a question I was asked recently by someone at a client regarding how the bearer token should be passed from API consumer to API provider.

In web application SSO, it is typical that a bearer token (e.g., SAML2 or JWT) is used to initially establish a user’s security session on a system, but then a cookie (probably with an opaque value) is used to track the user’s security session. This represents a stateful security session tracking mechanism that was initialized by an SSO protocol.

This post assumes that a bearer token (most likely a JWT token acting as an OAuth2 access token) is cached on the API consumer and passed in every interaction (API call) between the client and server, as is common with modern single page applications (SPAs) and native mobile applications.

The difference between these two approaches is this: statelessness employs a bearer token submitted on every API request, while statefulness involves a cookie to track security session state. One of the goals of the REST architectural style is statelessness; a major focus recently has been on how to achieve stateless security with API consumers for use with SPAs, traditional web applications, or native mobile applications.

Let’s focus on SPA and native mobile applications here and assume that a user has been authenticated with OpenID Connect or OAuth2.

The allure of token-based authentication

In our scenario, the advantages of bearer-token-based authentication include (at the API layer):

  • It’s stateless
  • It’s scalable
  • A bearer token, such as JWT, can be validated locally without an external call to the identity provider (IdP)
  • Cross-domain access controls can be enabled using Cross-origin resource sharing (CORS)
  • Claims are stored in the token (this enables the statelessness of the security context)
  • It’s more flexible
  • Expiration functionality is built in
  • There’s no need to ask users for “cookie consent” or to deal with blocked cookies
  • It works better on mobile devices
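
To make the local-validation bullet concrete, here is a minimal sketch of HS256 JWT verification using only the Python standard library. The shared secret is hypothetical; production systems should prefer an asymmetric algorithm such as RS256 and a vetted JWT library.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_jwt(claims: dict, secret: bytes) -> str:
    """Create an HS256-signed JWT (for demonstration only)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(claims).encode())}"
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(signature)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Validate signature and expiry locally; no call to the IdP required."""
    signing_input, _, signature = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(signature)):
        raise ValueError("signature check failed")
    claims = json.loads(_b64url_decode(signing_input.split(".")[1]))
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return claims
```

This same local check is what lets a server authenticate each request without a per-call round trip to the identity provider.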

It is generally recommended that the Implicit Grant (OAuth2) or Implicit Flow (OIDC) be used with SPAs. These don’t include a refresh token, which is needed to obtain a new access token when it expires. So the access token must either be valid for as long as the user’s session must last, or an alternative approach must be used, such as the Authorization Code Grant (OAuth2) or Authorization Code Flow (OIDC) with a public client (check out the discussion in this post for more).

Depending on how long the user must stay logged in, extending the validity of the access token may not be recommended. There are important security implications to doing these things that go beyond the scope of this post.

The token hand-off

To start, we will look at the ways that the access token can be passed from API consumer to API provider. To keep things simple, let’s assume a scenario with just an API consumer and an API provider (no API gateway). In many real-world SPAs, there’s a dedicated API backend built on top of a traditional web application framework that likely still uses a cookie to track a stateful security session.

When an API gateway is placed in front of that backend API and proxies traffic for multiple APIs or is outside the control of the API owner, then the access token passing patterns described here should be used.

If we are working with REST APIs, then the HTTP 1.1 RFC provides several options for transporting the OAuth2 access token. The access token can be transported as a:

  • Query parameter
  • Form parameter
  • Message body field
  • Cookie
  • HTTP header (let’s assume authorization header)

Using query parameters (with an HTTP GET) to pass sensitive values is generally not advised per best practices. Those tend to get logged—especially on older systems—and everybody gets queasy when security-related information is displayed in the browser address bar. It should be noted that if you control the system end-to-end and can ensure no sensitive information is logged, then it could be okay. But, let’s skip this option for now, as end-to-end system control is rare with most mobile apps.

Form parameters could be used, but they don’t work for every type of HTTP operation (GET, for example). Similarly, not every HTTP operation has a message body, so message body fields don’t work everywhere either. Neither is a workable option for a general-purpose security token propagation mechanism.

That leaves us with cookies and HTTP headers (let’s assume the authorization header is being used).
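
To illustrate the header option, here is a short sketch using Python’s standard library; the URL and token value are hypothetical:

```python
import urllib.request

def authorized_request(url: str, access_token: str) -> urllib.request.Request:
    """Attach the bearer token in the Authorization header, per RFC 6750."""
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Bearer {access_token}")
    return request

# No network traffic happens until urlopen() is called on the request
request = authorized_request("https://api.example.com/v1/orders", "example-access-token")
```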

Should you keep tokens in cookies or in local storage?

There are two patterns for client-side storage of bearer tokens: cookies and HTML5 local storage. If cookies are being used to transmit the bearer token from client to server, then cookies would also be used to store the bearer token on the client side. Likewise, if the authorization header is used to transmit the token, then HTML5 local storage (or session storage) would have to be used to store the bearer token.

Another way of phrasing this discussion is: should the bearer tokens be stored on the client side in cookies or in local storage? The answer to this question will then answer our original question of how best to transmit the token. I’ve heard strong arguments in favor of each, which we will walk through.

There tends to be a misconception that using cookies implies that state must be maintained on the server side. This is not true if all claims needed to recreate the security session are available in a bearer token that is submitted via cookie. For the purpose of this post, I’m assuming that isn’t being done. As a result, there are unique benefits to both headers and cookies/local storage.

In favor of the HTTP authorization header and local storage:

  • The relevant specification, RFC 6750, states that bearer tokens (including OAuth2 access tokens) should be passed between API actors in the Authorization HTTP request header (with “Bearer ” prepended to the value)
  • Local storage cannot be accessed across domains
  • Token size is less of a concern: local storage can hold larger JWTs than a cookie, which browsers typically cap at about 4 KB

In favor of using HTTP cookies:

  • The browser prevents cookies for site A from being read by site B. By default, the browser will not allow JavaScript code from domain A to see responses from domain B (CORS can override this behavior, however).
  • The HTTP State Management RFC provides an HttpOnly flag for cookies that prevents JavaScript in the browser from reading the cookie contents; in theory, this helps protect against cross-site scripting (XSS) vulnerabilities.
  • Cookies are attached to requests automatically by the browser, so no application code is needed to transmit the token.
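
As an illustration of the HttpOnly flag in practice, a server might issue the token in a hardened cookie like this (a sketch; the cookie name and value are hypothetical):

```python
from http.cookies import SimpleCookie

def token_cookie(token: str) -> str:
    """Build a Set-Cookie value that keeps the token away from page JavaScript."""
    cookie = SimpleCookie()
    cookie["access_token"] = token
    morsel = cookie["access_token"]
    morsel["httponly"] = True      # not readable via document.cookie
    morsel["secure"] = True        # only transmitted over TLS
    morsel["samesite"] = "Strict"  # basic cross-site request mitigation
    return morsel.OutputString()

header_value = token_cookie("opaque-token-value")
```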

There are a host of potential implications for native mobile apps, traditional web apps, and SPAs that stem from these two approaches. We’ll cover those in an upcoming post.

Read other posts that Robert has contributed to the Apigee blog.

Image: Flickr Creative Commons/T-eresa

Grow Bigger by Thinking Smaller: Getting Started with Microservices

How to clear security, visibility, and dependency hurdles when implementing microservices

It sounds contradictory, but if your enterprise plans to scale in today’s digital-first world, it’s time to start thinking smaller.

Today, many of the most innovative enterprises are scaling up their applications by breaking them into smaller pieces. This approach to IT architecture—microservices, as it’s commonly known—is a way of restructuring applications into component services that can be scaled independently (depending on whether a team needs more compute resources, memory, or IO), and then having them talk to each other via API service interfaces.

Using microservices, companies reap not only the benefits of agility and speed when building software, but also the ability to easily share and reuse services across the enterprise and beyond. In effect, these smaller services make it possible to achieve both simplicity and complexity at the same time.

According to one recent survey of over 1800 IT professionals, nearly 70% of organizations are either using or investigating microservices, with nearly one-third of organizations using them in production. At Netflix, one of the earliest adopters of microservices, roughly 30 independent teams have delivered over 500 microservices. Amazon, another long-time champion of microservices, has employed the technique to ensure effective communication within teams and enable hundreds of code deployments per day. Numerous other examples, from the open-source Kubernetes project to the Walgreens digital platform strategy, speak to this growing momentum.

But just as microservices present new opportunities for organizational efficiency and growth, they also pose common stumbling blocks—chief among them security, usage and performance visibility, and agility/reuse.

Security: Managing microservices in a zero-trust environment

The microservices architectural model has been both successful and challenging—for many of the same reasons. In essence, developers often build APIs and microservices without the kind of centralized oversight that once existed, and then they deploy them more widely than ever. This can lead to inconsistent levels of security—or no security at all.

When developers deploy microservices in the public cloud and neglect to deploy common API security standards or consistent global policies, they expose the enterprise to potential security breaches. Companies therefore must assume a zero-trust environment. As research firms have noted, a well-managed API platform can help enterprises overcome these threats by enabling the implementation of security and governance policies like OAuth2 across all of their microservices APIs.

Reliability: Delivering performance and enforcing SLAs

Microservices introduce dependencies among your software components; each microservice may depend on many of the others. By extension, this means there are interdependency problems not unlike those that exist for SOA.

There are many ways to stress-test the reliability of microservices infrastructure, but visibility is one of the best. Which services are talking to which other services? Which ones are dependent on which other ones? These are important questions to answer—especially when microservices are used by disparate teams in a large enterprise, or by partners and customers.

Echoing the previous section, one way to answer these questions is to implement a management platform for microservices APIs. API management platforms provide the analytics and reporting capabilities that enable enterprises to measure microservices’ usage and adoption, developer and partner engagement, traffic composition, total traffic, throughput, latency, errors, and anomalies.

Armed with this information, companies can iterate quickly, reinforcing components with promising usage trends and fixing interdependency problems as they’re identified. This speed and agility are important: stress-testing and optimization can cause a company to lose momentum as it examines unlikely theoretical scenarios—which is deeply problematic, given that for many enterprises, microservices and APIs are valuable because they can dramatically shorten a new service’s time to market.

With real-time insight into API behavior, companies can balance speed, scale, and reliability by launching new services, collecting analytics, and implementing a broad range of improvements after only a few weeks of development sprints.

Adaptability: Building agile microservices for clean reuse

Many existing and legacy services are not built for modern scale. Consequently, many enterprises are replacing monolithic applications with microservices that adapt legacy resources to modern architectures. In most cases, however, other applications still take advantage of services from the monoliths. This means the transition from monolith to microservices must be seamless—in other words, it should be invisible to the other applications and developers using the monolith services.

Furthermore, microservices are typically purpose-built for particular use cases. But as soon as a microservice is shared outside the “two-pizza team,” developers need the ability to adapt it for wider use. And what’s a service that’s meant to be shared and reused across teams and even outside of your company? It’s an API.

An API platform serves as an API facade, delivering modern APIs (RESTful, cached, and secured) for the legacy SOAP services of the monolith apps, and exposing the new microservices. This makes it possible for mobile and web app developers to continue consuming an enterprise’s services without needing to worry about the heterogeneous environment or any transitions from monolith app to microservices by the service provider.

The way forward

As microservices become increasingly popular throughout the enterprise, more and more of them are being shared—both internally and externally. And sharing services comes down to APIs.

As a result, companies are increasingly looking to API management platforms to provide the security, reliability, visibility, and adaptability they need to properly run microservices architecture. Also known as “managed microservices,” this deployment model provides enterprises with a single window for managing all microservices APIs across microservices stacks and clouds—and it’s transforming enterprises far and wide.

To learn more, read the Apigee eBook, "Maximizing Microservices."

Image: Wikimedia Commons

Identity Propagation in an API Gateway Architecture

The power of end-to-end user security context with APIs

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity–especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge and WebSphere DataPower.

As enterprises continue to expand their usage of APIs, the need to keep those APIs secure increases as well. One way to bolster security and improve auditing and authentication is the transmission of an authenticated user’s security context from the front end of a request pipeline, beyond the API gateway, and all the way to the back-end implementation of an API or service.

End-to-end transmission of an authenticated user’s security context benefits API-based systems by enhancing overall security, eliminating the use of generic (privileged) accounts, providing a secure audit mechanism (of traffic traversing the system), and supporting advanced authentication use cases.

This pattern can add complexity in some instances, but, as you’ll read in this post, it provides some valuable benefits.

End-to-end identity propagation

In most architectures, the propagation of the end user security context tends to stop at the API gateway layer, and a more generic security mechanism such as Mutual Auth SSL or a service account (with basic authentication) steps in to secure the API provider layer.  

However, there are several benefits to propagating the authenticated user security context all the way to the API provider:

  • Support for processing Attribute-Based Access Control (ABAC) authorization decisions based upon the end-user identity at the API provider
  • Avoidance of generic service accounts with super-user privileges to connect to the backend
  • Provision of a secure audit mechanism for the user who initiated requests that hit each layer of the infrastructure
  • Maintenance of certain compliance requirements
  • Value in SOA and microservices architecture implementations
  • Support for authentication protocol translation

Token exchange functionality

If there is an intermediary between the front-end (such as a native mobile app or single-page web app) and the API implementation, such as an API gateway, this pattern becomes more complex, but is still possible.

This is where token exchange functionality comes in, such as that defined by the OAuth2 Token Exchange protocol (grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer with requested_token_use=on_behalf_of; in the most recent spec drafts, "on_behalf_of" was changed to "subject_token"), the WS-Trust spec (active clients with On-Behalf-Of and ActAs), and the Windows Kerberos extensions (Protocol Transition and Constrained Delegation).

It’s important to note that this activity requires the intermediary to play an active role in making this specific request; the identity provider must grant explicit rights to the intermediary to be allowed to make this request.

This token exchange functionality essentially enables an actor to exchange a token (such as SAML2 or JWT) with one audience (presumably its own) for a token that describes the same user, but with a different audience from an identity provider. This is done in a secure manner, meaning there are significant limits placed upon which audience the new token may reference. 

Two common forms of this token exchange are delegation, where the protocol provides a concept of the intermediary that is making the call (such as the WS-Trust ActAs), and impersonation, where the protocol provides no concept of the intermediary (such as WS-Trust On-Behalf-Of). The OAuth2 Token Exchange spec defines similar concepts for OAuth2 and OpenID Connect (OIDC); however, it uses the terms to mean the opposite of how they are used here.

The API gateway scenario

This pattern is an evolution of a similar pattern used in the SOA world. In 2011, I did a presentation about using this pattern for SOA implementations with SOAP actors, SOAP web services, SAML2, WS-Trust, and WS-Security. The emergence of APIs and API gateways merged two sets of concepts from the SOA world: what was used to enable front ends (such as native mobile applications and SPAs) to communicate with the backend, and the SOAP web services used by backend intermediaries (ESBs) to communicate with the service implementation layer.

Now, the entire request/response pipeline, from the front-end consumer to the backend implementation, can use RESTful APIs.

With an API gateway intermediary in place, the whole model will look something like this:

Let's make some assumptions and set some requirements for this scenario:

  • The access token is a digitally signed bearer token (JWT).
  • All system actors are part of the same security realm (in other words, they use the same identity provider). This identity provider can authenticate all end users of the front-end application.
  • Every system actor (API gateway, API provider, and, possibly, the datastore) must validate the identity token (authenticate the request) attached to an incoming request.
  • This token validation includes audience restriction enforcement, which further ensures the token is only used where it is supposed to be (the granularity of the audience depends on several factors, including desired reusability of the token, resource authorization policy, and organization security standards).
  • All communication occurs over TLS v1.2 or better.
  • All communication occurs with RESTful APIs using JSON payloads (where appropriate).
  • All security details are implemented with spec-based security (more on that below).
  • For added protection, use mutually authenticated SSL between actors where the server component only accepts connections from a single actor (between the API gateway and API provider, for example). This assumes that a single client certificate describes a single system or possibly a single node within a system. There shouldn’t be sharing of certificates across systems.
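
The audience-restriction assumption above can be sketched as a simple claim check; the claim layout follows the JWT spec, but the audience URIs here are hypothetical:

```python
def enforce_audience(claims: dict, expected_audience: str) -> None:
    """Reject tokens whose aud claim doesn't name this actor."""
    aud = claims.get("aud")
    # Per the JWT spec, aud may be a single string or a list of strings
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        raise PermissionError(f"token not intended for {expected_audience}")

# A gateway-scoped token passes the check at the gateway...
enforce_audience({"sub": "alice", "aud": "https://gateway.example.com"},
                 "https://gateway.example.com")
```

...but the same token would be rejected at the API provider, which is why the intermediary must obtain a token scoped to the downstream audience before routing the request onward.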

OAuth2 and OpenID Connect

So, what specs can be used to implement this security model? If you've read my earlier posts, then you might guess that the answer lies within OAuth2 and OpenID Connect—and you’d be correct.

In particular, OpenID Connect can be used to perform the initial authentication of the end user signing into the mobile application as described in the blog posts here and here.

As outlined in the diagram above, the result of that initial authentication includes an OIDC ID token and OAuth2 access token. The ID token must be validated per the spec, then information contained in it can be relied upon to describe the authenticated user.

If additional information is needed, the mobile app can make a call to the IdP OIDC UserInfo Endpoint to obtain more information about that user. Additionally, the OAuth2 access token that is provided should be cached on the mobile app and included with the authorization header of each API request per the OAuth2 Bearer Token Usage spec. When the mobile app needs data from the backend, it makes an API request to the API gateway.

Now, if the OAuth2 access token is also a JWT, that makes the downstream authentication (access token validation by the API gateway) easier. If it is an opaque token, then the system actor must be able to support passing the access token into an IdP OAuth2 Introspection Endpoint to validate the token.
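
For the opaque-token path, the introspection call is a client-authenticated form POST to the IdP, as defined by the OAuth2 Token Introspection spec (RFC 7662). Here is a sketch with hypothetical endpoint and client credentials; nothing is sent until urlopen() is called:

```python
import base64
import urllib.request
from urllib.parse import urlencode

def introspection_request(endpoint: str, token: str,
                          client_id: str, client_secret: str) -> urllib.request.Request:
    """Build an RFC 7662 token introspection request."""
    body = urlencode({"token": token, "token_type_hint": "access_token"}).encode()
    req = urllib.request.Request(endpoint, data=body, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    # Authenticate the caller to the IdP with HTTP Basic credentials
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    req.add_header("Authorization", f"Basic {basic}")
    return req

req = introspection_request("https://idp.example.com/oauth2/introspect",
                            "opaque-token-value", "gateway-client", "s3cret")
```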

All of these details are covered in my OAuth2 blog post and OIDC series. Regardless of the validation process, the token must be validated; if validation succeeds, the claims provided through that access token (JWT) are available to the system for authorization decisions.

Securing each step of the request’s journey

The mobile app will make API calls to the API gateway; identity information will be passed from the mobile app to the API gateway layer. When the request arrives at the API gateway, the access token must be extracted from the request and validated. This validation step could be performed locally, if the token is a JWT, or by querying the identity provider's OAuth2 Introspection Endpoint or another identity provider-specific mechanism. If necessary, a call can be made to the OIDC UserInfo Endpoint.
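The gateway's choice between the two validation paths can be sketched as a simple dispatch: a token shaped like a JWT (three dot-separated segments) can be checked locally, while an opaque token must go to the introspection endpoint. The function name and token strings below are illustrative placeholders.

```python
# Minimal sketch of the gateway's validation dispatch described above.
def classify_token(token: str) -> str:
    """Return which validation path the gateway should take for a token."""
    # A compact JWT is always header.payload.signature — exactly two dots.
    return "local-jwt" if token.count(".") == 2 else "introspection"

print(classify_token("eyJhbGciOi.eyJzdWIiOi.c2ln"))  # → local-jwt
print(classify_token("8c2f1d0a-opaque-reference"))   # → introspection
```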

Some type of coarse-grained authorization decision will likely be made at this point, the details of which are beyond the scope of this conversation. Finally, the access token that was passed into the API gateway with the request needs to be replaced with an access token whose scope matches the downstream actor’s (API provider’s) scope.

This is accomplished with a Token Exchange Grant call defined by the OAuth2 Token Exchange spec. The new access token is placed into the API request (replacing the existing token) and that request is routed to the API provider layer.
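The shape of that token-exchange call can be sketched as follows. This is an illustrative sketch of the form body defined by the OAuth2 Token Exchange spec (RFC 8693); the scope value is a hypothetical example, and a real gateway would POST this body to the IdP's token endpoint with client authentication.

```python
# Sketch of the form parameters for an OAuth2 Token Exchange request
# (RFC 8693), swapping the inbound access token for one scoped to the
# downstream API provider.
from urllib.parse import urlencode

def token_exchange_body(subject_token: str, downstream_scope: str) -> str:
    """Build the application/x-www-form-urlencoded token-exchange body."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": downstream_scope,
    })

body = token_exchange_body("inbound-access-token", "orders.read")
print("subject_token=inbound-access-token" in body)  # → True
```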

When the request arrives at the API layer, the access token is extracted and validated in the same fashion as the input token at the API gateway layer. Likewise, the UserInfo endpoint can be queried for additional information, a fine-grained authorization decision can be enforced, and possibly a new downstream token obtained for the datasource tier. This last step assumes that the datastore layer can process an OAuth2 access token.

The request that is sent to the datastore is almost certainly going to look different than the one that was sent to the API provider. There may also be multiple datasource queries for a single API request. Implementing the OAuth2 access token check at the datastore will likely require a custom authentication module on the target datastore in order to implement the desired behavior.

An example of a database that supports something like this is DB2 with DB2 Connect using Kerberos authentication. More details on that implementation can be seen here.

The authentication and identity propagation model described here isn’t applicable if:

  • the end user's security context isn’t extended to the backend
  • generic service accounts are used
  • OAuth2 Client Credential Grant is used
  • any of the other assumptions listed earlier in this post are not true 

While extending the end user’s security context back to the API provider adds complexity to the authentication pattern, it brings valuable benefits, including the elimination of privileged service accounts on publicly facing systems (such as the API gateway), secure auditing, and efficient support for attribute-based access control (ABAC) decisions.

Furthermore, using a spec-based approach to implement a complex identity pattern such as end-to-end identity propagation provides a layer of insulation between your solution and the identity provider vendor, making it easier to change out the identity provider later if needed.

Read other posts that Robert has contributed to the Apigee blog.


Identity and Access Considerations for Public & Private Clouds

APIs are everywhere. With organizations of all sizes connecting their customers, partners, and developers through APIs, identity and access management becomes a key consideration. But different use cases have different requirements.

Some organizations rely on SAML, while others might employ OAuth, OpenID Connect (OIDC), or JSON Web Tokens (JWT).

Apigee’s API platform, which can be deployed as a private cloud in a customer’s own data centers or can be leveraged from the Apigee Cloud, incorporates several different components, each with different security capabilities:

  • Apigee Edge admin UI: the administration UI used by the API team
  • Apigee Developer Portal: the web portal used to expose your APIs, educate developers about them, sign up developers, and let developers register apps
  • Apigee Management API: used primarily for CI/CD processes
  • Apigee runtime APIs: the customer APIs proxied by Apigee Edge

Here, we’ll explore how Apigee Edge addresses the identity and access needs for both private and public cloud deployments, as well as considerations for runtime security, developer portal security, and management UI/API security.

Enabling SSO for users logging in to Apigee Edge

Many enterprises need to support single sign on (SSO) for their employees to log in to vendor applications. The SSO providers are typically connected to the enterprise directory services, so that they’re kept in sync when employees join or leave the organization. 

How SSO can be enabled on different Apigee components

* Direct LDAP connectivity from Apigee Edge is not supported for Apigee public cloud. It’s generally considered a security risk to expose LDAP services over the internet, so Apigee recommends a SAML SSO provider (like Okta, Ping Identity, or SiteMinder, for example) for integration.

Leveraging Apigee to implement OAuth for APIs

OAuth has become the de facto standard for securing APIs. Apigee Edge provides a fully compliant OAuth 2.0 authorization server (AS). Apigee Edge also acts as a resource server.

In addition to managing the lifecycle of access tokens (generate, expire, and revoke), Apigee Edge also provides the ability to manage application credentials (client IDs and secrets).

Users can generate (and rotate) client IDs, set expirations, and revoke and approve access to APIs.

There are several capabilities provided for OAuth across public, private, and hybrid cloud scenarios. Microgateway is a lightweight gateway that can be deployed within a trusted network, in close proximity to back-end target services.

This helps to achieve a hybrid deployment model by leveraging the Apigee Cloud for all public APIs and managing private APIs using the microgateway, while maintaining a single point of control.

Capabilities provided for OAuth across public, private, and hybrid cloud scenarios

Access an external identity provider for APIs

API access often requires integration with identity providers for authentication (and sometimes authorization) of users. As part of an OAuth flow (Auth code, password, or implicit grant types), Edge often needs to integrate with external identity providers. 

Edge comes with an OAuth authorization server, but user authentication requires integration with an existing identity provider. For example, before completing an “auth code” or “password” grant OAuth flow, the user (the resource owner) must be authenticated against an identity provider.

There are multiple ways for Apigee Edge to integrate with an external identity provider. The first step is to establish a trusted and secure communication between Apigee Edge and the identity provider.

This is generally achieved using mutual transport layer security (TLS). The second step is for the identity provider to communicate the identity of the end user in a trusted format, generally a SAML assertion or a JWT claim signed by the identity provider. Apigee Edge uses the public key to verify the assertion or claim, then extracts the identity from the identity token.

Apigee can integrate with external identity providers (like Okta, Ping Identity, CA SiteMinder, and Active Directory, for example) for user authentication via one of the following options:

  • A signed token (SAML or JWT) containing the identity assertion, issued by the identity provider: Apigee validates the token and extracts the identity.
  • An API call: if mutual trust can be established between the identity provider and Apigee Edge (two-way TLS, for example), then Apigee can make a REST/API call to the identity provider for authentication.
  • Custom libraries: if the identity provider supplies custom libraries in Java or JavaScript, those libraries can be leveraged to communicate with the identity provider within Apigee.
  • LDAP: Apigee can communicate with identity providers directly via LDAPS.

Here are the different ways to implement the integrations:


Leverage external OAuth authorization server for APIs

Some enterprises already have an OAuth AS, like Active Directory Federation Services (ADFS) or Okta. These businesses might want to continue to leverage these authorization servers with Apigee Edge.

Apigee Edge supports a couple of integration patterns that involve an external AS.  

In the first pattern, the external AS also acts as the identity provider. Apigee Edge acts only as the resource server. Access token generation and life cycle management is performed by the AS. Client IDs and secrets are managed by the AS.

The advantages of this integration pattern include:

  • Token (web and API) lifecycle management happens in a centralized AS
  • Integration with existing solutions

The cons include:

  • Performance impact: the gateway makes sideways calls to validate the token on each API call
  • Developer portal customization is required: the application registration process (a developer portal capability) must integrate with the external AS
  • Does not work with Apigee API products
  • Loss of application-based analytics: Apigee analytics is keyed on the client ID, and because client IDs are managed and maintained only externally, analytics will be limited
  • Out-of-the-box developer and app lifecycle management functionality is restricted

In the second integration pattern, the external OAuth server acts both as the identity provider and the AS. Edge is also the AS and resource server. Access tokens are generated by the external AS and are imported into Edge. In the case of JSON web tokens, Apigee can validate the external token based on public keys.

Access token generation and lifecycle management is handled by the external AS and Apigee. Client IDs and secrets are managed by the external AS and a copy is imported into and stored in Edge.

The advantages of this integration pattern include:

  • Performance (no sideways calls to validate tokens)
  • Integrates with existing solutions/landscape
  • No loss of Apigee functionality

The cons include:

  • Developer portal customization is required
  • Token lifecycle management is required in both Apigee and the external AS

Implement lightweight security

Apigee manages API programs of all sizes and of all kinds of enterprises. In some cases API programs are run by small teams inside large organizations. In other cases, the platform might belong to startups or small and medium-sized businesses. In these cases, if the data isn’t very sensitive, customers might initially want to adopt a lightweight security model to get the API program rolling.

Customers can start by implementing basic authentication or API key-based security mechanisms on the runtime. The out-of-the-box username:password mechanism for developers to log into developer portals, or for an API team to log into the management UI/APIs, makes it easy to get started. This enables customers to rapidly launch their API program and add advanced security features later on.

Apigee is widely used by organizations of all sizes, with a variety of security requirements, so the platform has evolved to provide a wide variety of choices and options. Our documentation contains a wealth of details on the topics covered in this post, including using OAuth with the Apigee Edge management UI, enabling SAML authentication for Edge, configuring TLS on Edge, and defining user roles.


Embrace the Hybrid Cloud

If you’re still thinking “private data center” and don’t have an ironclad regulatory reason for doing so, you should really ask yourself: Why?

Fewer and fewer companies are holding out. They’ve had to navigate many wants and needs, from security and regulations to enabling digital platform strategies to extending the life of existing infrastructure investments. Analysts predict that most will settle on hybrid clouds as the end game that balances the pieces.

But with so many enterprises adopting hybrid architectures, the question rears its head again: If you’re not tethered to a private data center by an immovable regulatory requirement, what’s holding you back? The obstacles may be more easily surmounted than you think.

In my work with companies navigating transitions to the cloud, I’ve found that when companies worry about hybrid, hesitancy often boils down to two main areas: data security, and latency and performance.

Latency: Quick solutions to slow clouds

Latency is a legitimate problem—users won’t tolerate slow apps!—but more and more, if you can identify potential sources of latency, you can also find a hybrid cloud option that architects around them.

Consider proximity. Digital assets exist in virtual space—but the machines hosting those assets sit in the physical world, and the bigger the distance between machines, the higher the potential for latency problems. An organization can often reduce latency simply by choosing a cloud service provider whose facilities are in the same region as the organization’s data center.

A more technical example: Suppose an enterprise wants to keep backends and APIs in its own data center while running management and analytics services in the cloud. While more cost-effective than doing everything in-house, this approach can introduce latency, as round-tripping the APIs to the cloud and back can slow down operations. For this problem, many enterprises are employing lightweight, federated cloud gateways that keep API runtimes in the data center while asynchronously pushing analytics data to the management service provider’s cloud.
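The federated-gateway pattern above can be sketched with a tiny in-process model: API traffic is served synchronously in the local data center, while analytics records are buffered on a queue and shipped by a background worker, off the request path. This is an illustrative sketch only; the handler, record shape, and flush target are hypothetical, and a real gateway would batch, retry, and upload to the management service's cloud endpoint.

```python
# Illustrative sketch: serve API calls locally, push analytics asynchronously.
import queue, threading

analytics_q: "queue.Queue" = queue.Queue()
pushed = []  # stand-in for records delivered to the cloud management service

def handle_request(path: str) -> str:
    """Serve the API call in the local data center, then enqueue analytics."""
    analytics_q.put({"path": path, "status": 200})  # off the request path
    return "response"

def flush_worker():
    """Drain the queue in the background; None is the shutdown sentinel."""
    while True:
        record = analytics_q.get()
        if record is None:
            break
        pushed.append(record)  # a real worker would upload this batch

worker = threading.Thread(target=flush_worker)
worker.start()
handle_request("/v1/orders")
analytics_q.put(None)  # shut the worker down for this demo
worker.join()
print(len(pushed))  # → 1
```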

Security: Focus on internal threats

So, if latency is a largely manageable issue, that means concern about hybrid clouds becomes mostly about data security. According to Gartner, “38 percent of companies who don’t plan to use the public cloud cited security and privacy as the main reasons.”  

But in a July webinar presentation, Gartner Research vice president Jay Heiser noted “no evidence indicates that [cloud service providers] have performed less securely than end user organizations. Quite the opposite.” He added, “Generally speaking, public cloud computing represents a more secure starting point than in-house implementations.”

This “more secure” starting point may mitigate some concerns—but it isn’t absolute. Some clouds are more equipped than others to extend corporate data centers while meeting regulatory and security requirements. It’s important to consider SLAs and redundancy agreements, and whether the provider submits to third-party audits and has been awarded industry certifications.

That’s just the tip of the iceberg, of course—you should also consider whether the provider has expertise delivering to your needs and goals, whether it ever directly accesses your data, and so on. But the point is, a lot of threat mitigation in the hybrid cloud world involves proper vetting of partners.

This vetting extends to physical security, too. Our fears are often defined by remote attackers who hack into a network—but in practice, many threats involve close-proximity attacks rather than network break-ins executed from afar. Think how Stuxnet, the malware associated with knocking Iranian centrifuges offline, is believed to have spread primarily through USB flash drives, contractors, and equipment compromised while in transit.

Against these sorts of threats, cloud providers often possess technical resources and data center management expertise that would be difficult and expensive for many private enterprises to cultivate internally. Many top clouds boast a variety of physical, on-site security mechanisms, such as data center access limited to very few individuals via biometrics, and machines custom-built to detect whether they’re booting appropriate software.

Threat mitigation also requires that companies take seriously the role of internal threats. Some of these involve nefarious intent, such as an ex-employee stealing intellectual property, but many stem from simple user error, such as sloppy password management or susceptibility to phishing scams. Indeed, Gartner predicts that through 2020, 95 percent of cloud security failures “will be the customer’s fault.”

The takeaway is that many of the threats perceived around cloud have little or nothing to do with vulnerabilities intrinsic to the technology itself—and much more to do with a company’s internal security and governance processes.

CSPs run on user trust

A final note: Successful cloud service provider models are almost necessarily built on a foundation of preserving user trust—an unspoken contract that manifests in CSP reliability and security investments whose scale exceeds what most in-house operations are capable of.

As cloud providers have continued to earn this user trust, they’ve demonstrated that many of potential customers’ most pressing fears involve abstract anxiety as much as (or more than) demonstrated dangers.

The majority of organizations are now embracing the cloud, especially hybrid flavors, as they’ve realized key concerns may be more manageable than anticipated—which again raises the question for those still holding out: Why?

This post originally appeared in

Image: Flickr Creative Commons/theaucitron

Western Union and Apigee: Building a Bot

How the Money Bot for Facebook Messenger was delivered with security, stability, and speed

We at Western Union were very excited when we struck a strategic partnership in March with Facebook to build a money transfer bot for the social network’s popular Messenger instant messaging service. But the real excitement began when we realized we had only three weeks to release and beta test the product.

Why the rush? We wanted to release the bot at the rapidly approaching F8 Facebook Developer Conference, because of the amazing opportunity to showcase the new product to our customers.

We made the deadline, launched on time, and got a lot of media coverage at F8 and in the press. But how did we do it?

With the Apigee API platform's rapid deployment and configuration tools, the development environment was ready in an instant. It was simple to create Dev, QA, and Prod environments and move the Bot through the software development lifecycle.

The environment was also very stable, which ensured that we moved the product from Dev to Production on time without missing a beat on infrastructure-related issues.

Security is an important consideration for Western Union. The APIs servicing the Bot traffic needed to meet our strict corporate information security standards.

All of the required functionality came out-of-the-box so we didn't need to worry about certificates or API keys; we just had to configure the functionality and control the access.

Multiple other out-of-the-box Apigee services came to the rescue as we worked to develop Facebook Bot functionalities, including in-memory cache and key/value management for various environments, on a very tight deadline.

The Apigee API platform removed the burden of maintaining the infrastructural pieces like security, caching, and environment management and helped us focus on the business logic needed to launch the Bot on a very tight deadline.

We’re looking forward to building upon the momentum and putting the Apigee platform to work in many other use cases in the future.

Vijaya Kouru is a director of front-end engineering at Western Union Digital in San Francisco. He has led multiple transformations of front-end platforms at Western Union and helped grow and expand web and mobile into 40+ send countries and 200+ receive countries.

Design Principles for Seamless User Authentication

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity–especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge and WebSphere DataPower. This is the second post in a two-part series about securing APIs when users are part of multiple communities.

In a previous post, we discussed the recommended approaches to dealing with multiple user communities. To recap, they are (in order of preference):

  • Common technology in an identity stack with a single federation server and a separate user repository for each user community
  • A single federation server that can communicate across a variety of user repository instances and technologies (LDAP and Active Directory, for example)
  • A single federation server that has established federation relationships across other federation servers that can access the user repository(ies) for the target user communities
  • Out-of-the-box functionality that allows for accessing multiple LDAP repositories or active directory domains
  • Functionality implemented through custom code

The rise of the standards-based approach

If I were writing this ten years ago, most of the conversation would have centered on the LDAPS protocol. Since then, WS-Trust and OAuth2/OIDC have come along to provide a standards-based approach for any system actor to communicate with the identity stack. Even a decade ago, SPNEGO, SAML2 Browser Profile, and similar mechanisms existed, but they had nowhere near the market penetration and standardization in product feature sets that they do today, and they tended to be focused on web app SSO only.

In 2005, the gold standard for web app SSO was SPNEGO, a Microsoft-developed standard supported by Internet Explorer. On Jan. 1, 2005, IE had 90.31% market share among web browsers—just about the peak of IE market penetration :)—so this generally worked for most parties at the time. Still, SPNEGO tended to have interoperability problems outside of the Microsoft technology stack.

Token transformation

If an API provider can't accept the end user's identity in the form of an identity token, then it's likely that some type of generic service account is provided that has the appropriate permissions to perform whatever tasks are required on behalf of the original calling identity. Replacing the caller's identity token with the service account's username and password in the HTTP authorization header is an example of a very common security integration pattern called token transformation.
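As a rough sketch of that token-transformation pattern, the snippet below swaps the caller's bearer token for a service account's Basic credentials before the request is forwarded. The account name and password are hypothetical, and a real implementation would pull them from a secrets store rather than hard-coding them.

```python
# Minimal sketch of token transformation: replace the inbound bearer
# token with the service account's Basic credentials (RFC 7617 encoding).
import base64

def transform_headers(headers: dict, svc_user: str, svc_password: str) -> dict:
    """Return a copy of the headers with the Authorization header replaced."""
    creds = base64.b64encode(f"{svc_user}:{svc_password}".encode()).decode()
    out = dict(headers)  # leave the inbound request's headers untouched
    out["Authorization"] = f"Basic {creds}"  # replaces the caller's token
    return out

inbound = {"Authorization": "Bearer caller-token", "Accept": "application/json"}
outbound = transform_headers(inbound, "svc-orders", "s3cret")
print(outbound["Authorization"].startswith("Basic "))  # → True
```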

In this situation, the service account will have to be placed into one of the user repositories described above. If the API provider only processes requests from a single user community, then the natural decision is to place the service account into that user repository. If the API provider processes requests from multiple user communities, then a judgment call is needed—here, consistency is probably the most important rule, though overriding information security standards within your organization may dictate the design decision. Of course, not every API provider has to support every user community.

In a similar fashion, an API consumer likely has a specific user community that it targets. If the same application serves users across multiple communities, that works too. Likewise, not every API consumer will support every user community.

Multiple communities, the same APIs

It gets interesting when users from multiple communities begin using applications that invoke the same APIs. The identity mechanisms on the API gateway and identity stack must then authenticate and authorize API requests from all of those communities in a seamless fashion. Here, it’s important to adhere to the following design principles:

  • Each system should have a single identity provider it communicates with. Stated another way, the API gateway should not have to determine what type of user it just received a request from in order to make the authentication call to the correct identity provider.
  • The API gateway should trust a single identity provider. This keeps the identity concerns for this non-identity system simple, relatively speaking. The identity provider should have preexisting trust (federation) relationships with other identity providers necessary to support the legitimate use cases of system actors within the security realms if it does not have direct access to the user repositories. These federation relationships will likely reflect the needs of supporting B2E, B2B, and B2C user communities for the system. The API gateway should have a single endpoint (on that single identity provider) to communicate with for authentication of users—not a separate endpoint for each type of user community.
  • For authorization, a role abstraction layer should prevent runtime components from having to do something different between members of the disparate user communities.

Not every company experiences the B2C, B2B, and B2E user communities all at once. But many do. Some will have more granular definitions of user communities. Regardless of how these communities are split up in your organization or how many of them there are, the same basic pattern discussed here can be used. The API gateway described in my SAML2 vs JWT series must be able to authenticate and authorize users from each of the desired communities against a single identity provider that meets the qualifications described here.

Image: Flickr Creative Commons/"T"eresa