API Security

Apigee Sense: API Protection with Intelligent Behavior Detection

New explainer video

APIs are everywhere. With their pervasiveness comes a whole new set of security threats. These threats can come in the form of automated software programs that mount brute-force attacks, scrape information, and abuse accounts. They can probe for API security weaknesses and skew analytics.

What’s worse, these threats can be difficult to detect because they blend in with normal API traffic.

That’s where Apigee Sense comes in. Apigee Sense detects, collects, analyzes, and mitigates API attacks, and is purpose-built to protect APIs.

Learn more in this two-minute video.

And visit the Apigee Sense page for details.

How Secure Are Your APIs?

Webcast replay

APIs have revolutionized how companies build new marketing channels, access new customers, and create ecosystems. Enabling all this requires the exposure of APIs to a broad range of partners and developers—and potential threats.

Watch this webcast replay to learn more about the latest API security issues. We discuss:

  • The API threat landscape
  • API security best practices
  • How Apigee is helping customers protect their APIs



API Security: It's More than Web Security

Since the advent of smartphones over a decade ago, enterprises and internet companies have focused on building apps to reach customers and to provide a richer, interactive experience. This shift to apps has also changed the threat landscape, with bots and malicious actors shifting from attacking websites to attacking public APIs.

When Apigee’s customers expose critical business functions through APIs, they tempt a variety of bad actors, who try to break in by probing login interfaces or scraping data from catalogs, among other methods.      

These threats also exist in web interfaces, of course. But the move toward apps has changed the basic interaction metaphor between the client and server. Apps can store more data and preserve more state between sessions. This, in turn, has changed security paradigms, and new approaches to securing the application using features like an API key have emerged.    

More than a firewall

A new security approach is required, one based at the application layer that understands the structure of APIs, rather than security at the networking layer based mostly on traffic volume.

But API security shouldn’t be considered merely an extension of web security. A common method of providing web security is a WAF (web application firewall), which enables one to set rules based on IP addresses and traffic volume, among other things. The intention is to block individual IP addresses from attacking the system.   

These rules can be effective in warding off simple attacks, but applying these same techniques to the API layer doesn’t take advantage of the richer data available for analysis and action in the application layer.

More than IP addresses

API traffic analysis can go beyond IP addresses; API keys and access tokens can be scrutinized. If the analysis determines that a combination of the IP address and API proxy is under attack, the action taken (blocking or tagging of the IP) can also be fine-tuned to the appropriate scope.    
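To make the idea of fine-tuned scope concrete, here is a minimal sketch of evaluating block rules keyed to a (client IP, API proxy) pair rather than an IP alone. The rule shape and function names are illustrative assumptions, not an Apigee API:

```javascript
// Sketch: a rule may block an IP everywhere (no proxy field) or only for
// one API proxy that is under attack. Names are hypothetical.
function isBlocked(call, rules) {
  return rules.some((rule) =>
    rule.ip === call.ip &&
    (rule.proxy === undefined || rule.proxy === call.proxy)
  );
}

// An IP attacking only the "login" proxy is blocked there, while its
// traffic to other proxies continues to flow.
const rules = [{ ip: '203.0.113.7', proxy: 'login' }];
```

The narrower scope is the point: the same client is rejected on the proxy under attack but allowed elsewhere.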

In other words, API security enables a finer degree of control to block specific bad actors within the appropriate context rather than blocking all traffic from an IP address. The API security layer should be part of a multi-layered approach to protecting back ends against malicious behavior.

Because of its placement in the data stream, an API platform presents an opportunity to monitor for and detect anomalies in API traffic. But the possibilities go beyond detection—and they have to, in order to protect an organization’s backend.


Built on the Apigee Edge API platform, Apigee Sense is a full-cycle API security solution that was purpose-built for APIs. It detects bad behavior patterns at the API level, and blocks bad actors from access based on administrator specifications. Identifying behavior patterns enables a finer grain of control for blocking, so the system doesn’t have to block all users from a particular geography or service provider, which could potentially block normal behavior.

Joy Thomas is a data scientist on Google Cloud’s Apigee team. He is coauthor of “Elements of Information Theory.”


How To Submit Security Tokens to an API Provider, Pt. 2

Robert Broeckelmann is a principal consultant at Levvel. He focuses on API management, integration, and identity, especially where these three intersect. He’s worked with clients across the financial, healthcare, and retail fields. Lately, he’s been working with Apigee Edge, WebSphere DataPower, and 3Scale by Red Hat.

In a previous post, I discussed a variety of considerations regarding how bearer tokens should be passed from an API consumer to an API provider. I explored two approaches to client-side bearer token storage: cookies and HTML5 local storage. Here, I’ll look at the implications that these two approaches pose for native mobile apps, traditional web apps, and single page applications (SPAs).

Of course, there are plenty of things that don’t fall perfectly into one of these categories (for a more detailed analysis of the evolution of web applications, see this), but mobile, web, and SPAs comprise a large proportion of the use cases.

Most concerns come down to XSS vulnerabilities and cross-site request forgery (CSRF) attacks. Of course, if the device or server-side components have been compromised in some way, then this entire discussion is moot.


If HTML5 local storage is used, then the token is passed in the authorization header. Unlike with cookies, information stored in local storage is not automatically transmitted to the server (this does place an additional burden on the developer, but can be mitigated with supporting libraries).
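A minimal sketch of what that developer burden looks like: the token must be read from local storage and attached to the Authorization header explicitly on every call. The function name and storage key are illustrative assumptions:

```javascript
// Sketch: unlike cookies, a token in local storage is never sent
// automatically; the client code must attach it itself.
// `storage` stands in for window.localStorage in a browser.
function withBearer(headers, storage) {
  const token = storage.getItem('access_token'); // hypothetical key
  return token ? { ...headers, Authorization: `Bearer ${token}` } : headers;
}

// In a browser this would typically wrap fetch(), e.g.:
// fetch('/api/orders', { headers: withBearer({}, window.localStorage) });
```

Supporting libraries usually hide exactly this step behind an HTTP client wrapper.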

Attack vectors include:

  • XSS: If an attacker successfully gets valid JavaScript inserted as input to your API, then later, when that input is retrieved and rendered, it could be interpreted and executed by the browser’s JavaScript engine, which in turn could read the bearer token from local storage (because the script was technically loaded from your site and API endpoint). The standard defenses against XSS attacks should be sufficient in this situation: escape the HTML characters used to delimit JavaScript in HTML (&, <, >, ", ', /) with their HTML entity encodings (&amp;, &lt;, &gt;, &quot;, &#x27;, &#x2F;), and follow the other steps recommended by OWASP (see OWASP recommendations). Not all of these are relevant for API endpoints; for example, APIs typically don’t generate HTML. There is good support for most of these recommendations built into modern web frameworks.
  • CSRF: Conventional wisdom suggests that this is not an issue, because the attacking code would not have access to the JWT in the HTML5 local storage for the API endpoint. I’m not going to debate or dissect that notion here, but let’s assume that all interaction between the SPA and its backend API uses standard anti-CSRF patterns as described by OWASP. OWASP describes two steps that should be taken: verify the same origin with standard headers (or a known origin), and require that some randomly generated value, unknown to the CSRF instigator, be presented with each request. In a stateless REST API, both of these can be challenging, but many of the major frameworks provide support for implementing this functionality.
  • JavaScript from site A accessing data stored in local storage for site B: Not possible based on the security model implemented in browsers. This includes access by code on sub-domains.
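The entity encoding recommended in the XSS bullet above can be sketched as a single replace over the six delimiter characters; doing it in one pass sidesteps the classic bug of double-escaping `&`:

```javascript
// Sketch of OWASP-style HTML entity encoding for untrusted input,
// covering the six characters called out above.
const ENTITIES = {
  '&': '&amp;', '<': '&lt;', '>': '&gt;',
  '"': '&quot;', "'": '&#x27;', '/': '&#x2F;',
};

function escapeHtml(input) {
  // One regex pass: each dangerous character is replaced exactly once.
  return input.replace(/[&<>"'/]/g, (ch) => ENTITIES[ch]);
}
```

In practice you would rely on your framework’s built-in encoder rather than rolling your own; this only shows what that encoder is doing.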

If HTTP cookies are used for storage and transportation, then the attack vectors of concern are:

  • XSS: The token value stored in a cookie cannot be accessed by JavaScript injected as bad input and later interpreted in the browser. Nevertheless, use the same mitigation patterns described above.
  • CSRF: This is possible. Any request that is made (regardless of what’s triggering it) to the API from a browser session will include all cookies that are defined for the API endpoint. In this case, the mitigation strategies described above are absolutely imperative.
  • JavaScript code from site A accessing cookies from site B: By default, not possible, but CORS can override this.

Mobile apps

A native mobile application doesn’t run in a browser. It likely uses a library that either acts as a user agent to interact with the identity provider or launches the system browser to handle the IdP login workflow interaction. The library used to handle IdP interactions and login shouldn’t be a general purpose JavaScript engine (although it is likely launching the system browser that is). The login workflow that the browser interacts with should be a self-contained entity that isn’t relying upon external JavaScript sources.

Likewise, a native mobile app isn’t going to have HTML5 local or session storage. The use of cookies is limited to the HTTP client library or framework used to make API calls. The best option for securely storing a bearer token on each mobile platform is beyond the scope of this article.

Let’s assume that the API the mobile application is interacting with is the same one that is utilized by a SPA application and other API consumer actors. Attack vectors include:

  • XSS: If the native mobile app is using a component that has a JavaScript interpreter, then this is possible. In this case, the mitigation strategies described in the previous section should be utilized.
  • CSRF: This is only possible if a library is being used that sends cookies automatically. So, it depends. As always, though, the mitigation strategies described in the previous section should be used.

Traditional web applications

For our purposes, the difference between the SPA above and the traditional web application hinges on whether the server returns full HTML pages or JSON objects (or maybe XML). For the most part, the information provided in the SPA section applies here with the following exceptions.

If HTTP cookies are used for storage and transportation, then the attack vectors of concern are:

  • XSS: Same patterns described above should be used.
  • CSRF: Synchronizer (CSRF) token patterns can be used with the stateful security model.
  • JavaScript code from site A accessing cookies from site B: Comments in the previous section apply here.

So which approach should be used?

As I’ve mentioned in other posts, I always fall back on a standards-based approach to security. That implies that, per RFC 6750, the bearer token should be placed in the HTTP Authorization request header for each API call, and the token stored in HTML5 local or session storage (for browser-based applications). Many IdPs provide libraries that abstract these details away.

Is this the only way of accomplishing the desired effect? Obviously not. But in my attempts to implement standards-based security solutions, it's the approach I recommend. Appropriate defense strategies for XSS and CSRF must be used, and, thankfully, can largely be implemented with functionality in popular frameworks.

2016: The Year in Review

2016 was a great year for Apigee, and, more importantly, our customers. We introduced more than 90 new features to Apigee Edge and issued over 150 bug fixes via 35 public cloud and three private cloud releases. We open-sourced our mobile application performance monitoring solution. We added new solution accelerators. We processed over one billion API calls per day, and maintained 99.99% uptime. We even received some high praise from Gartner and Forrester.

Here’s a quick look at many of the new features our customers employed to accelerate their digital businesses.


We introduced several features to help customers tighten down the security screws on their API programs.

Two-factor authentication

At the API administration level, Edge now provides two-factor authentication in both the UI and the management API. Additionally, you can lock down management API calls with OAuth 2.0 (using acurl), making it easy to invoke management APIs without repeatedly supplying credentials.

Encrypted KVMs

We've also added important security features at the messaging and API proxy development layer. Encrypted key value maps (KVMs) let you securely persist sensitive data, retrieve data at runtime with variables, and keep sensitive values from appearing in trace and debug sessions. See this October 2016 blog post for details.

Adaptive bot detection and protection

Apigee Sense provides protection from a number of different bot patterns. The new Sense Protection feature completes the “CAVA” (collect, analyze, visualize, and act) lifecycle. It enables an Apigee Sense customer to act on detected abuse and selectively stop abusive API traffic.

Productivity improvements
  • Logs sent to third-party message logging services including Splunk, Loggly, or Sumo (using the message logging policy) can now be securely sent over TLS/SSL.

  • API credentials, developers, and developer apps can now be managed through the management UI. Users can generate multiple key/secret credentials for an app, control key expiration, and assign different keys to different products—all in a single screen. This simplifies API key rotation, where a newer API key replaces an older API key set to expire.

  • Users can also revoke credentials using a cascading model. For example, you can deactivate a developer, revoke a developer app, or revoke individual API credentials.

  • When controlling access to specific API resources through API products, users now have more flexibility when defining valid resource paths with wildcards.


We added some powerful capabilities to cater to our customers’ governance and compliance requirements. To enable standardized governance of API proxy functionality, shared flows let you execute a group of policies (OAuth, spike arrest, and message logging, for example) consistently across all proxies. Flow hooks let you attach those shared behaviors before or after proxy execution, on the request and the response. (See this October 2016 blog post for details.)

Reliability and scale

We added several continuous reliability and performance improvements under the hood. We switched to the Nginx router for better API traffic performance (for both public and private cloud deployments).

For public cloud deployments, in 2016 we began releasing product updates using "blue/green" deployments, where a small amount of traffic is initially routed to the updated product so that we can monitor for potential issues (read more in this September 2016 blog post).

We also added support for automatic scaling in Apigee Edge Cloud. This helps maintain availability and enables customers to scale capacity up or down automatically based on policies. This has helped us deliver a more predictable API platform.

Developer productivity

In 2016, we spent a lot of time working to make API lifecycle management more intuitive and powerful—from design to development to publishing to analytics.

Integrated OpenAPI editor and spec repository

"New Edge," released in October, offers a new model for API proxy development and documentation. You can use the integrated editor to create an OpenAPI specification that defines your API, without leaving the Edge UI. You can generate an API proxy directly from the spec, create an API product, generate API documentation, and immediately publish it to the New Edge developer portal. The new spec repository enables collaboration on OpenAPI specs and fosters team-based, iterative API development. Read more in this November 2016 blog post.

New API proxy editor

The API proxy editor in the management UI became easier to use, with full XML views of API proxy configuration, search, more options for adding policies, endpoints, and scripts, and an analytics dashboard that shows proxy performance. For proxies that interact with SOAP services, the proxy builder evolved to provide even stronger support for SOAP passthrough messages by hosting the service WSDL in Edge, and to more reliably generate policies that handle RESTful calls to backend SOAP services.

Proxy chaining and policy enhancements

Another cool enhancement we delivered is called proxy chaining. It lets you call one proxy from another proxy directly without having to call it via its HTTP/S URL. The platform does it for you. This saves a lot of time, particularly when the proxy being referred to changes.

Other notable proxy development enhancements include refactored policy error codes, deploy-time validation of proxy bundles to catch issues before runtime, new JavaScript crypto functions, providing more control over converting XML to JSON arrays, and improved rendering of JSON payloads generated by policies such as Assign Message and Raise Fault.

On-demand, lightweight developer portal

With New Edge, there's virtually no lag time between creating your API proxies and giving developers API documentation. A new lightweight portal framework lets you instantly provision multiple developer portals, including API documentation that's automatically generated from your OpenAPI specs. You can use HTML/Markdown to create pages and add CSS styles on the fly for complete control over styling and layout. And we provide a new type of samples framework that lets users browse different types of Edge samples, deploy them, and learn more about them without leaving the UI.

Self service

Several customers wanted a more holistic view into their adoption and usage of the platform, so we delivered a broad set of information via Apigee 360. It offers a view of account information accessible through the Edge single sign-on, including monthly API traffic volume, statistics for apps and developers, availability percentages, Edge features used and purchased, support cases and statistics, and server information.

We also rolled out a new mechanism, Apigee Advisory, to display messages in the Edge management UI. These advisories inform customers of availability and security issues that could impact their APIs.

Our web site, apigee.com, also underwent a significant redesign that provided clearer, more comprehensive information about Apigee products and solutions, as well as improved discoverability of our thought leadership content.

Business impact and reporting

A modern and scalable analytics platform, built on big data technologies, launched in 2016. The new architecture makes it easy to handle high traffic throughput, enables a variety of data queries (by time, tenant, application, developer, client, plan, and product), and provides the flexibility to build new data-driven applications.

There was also a fundamental change introduced in the means of delivering the daily email digest. Rather than pushing out an email with all the report content, users now receive short summaries along with links back to the full report.

Finally, customers whose APIs record custom attributes using the Statistics Collector policy can request the creation of custom aggregation tables, which improve query performance for custom metrics that are regularly used to generate analytics reports.

For customers using monetization, several enhancements provide more control over charging models and notifications when users get close to (or exceed) their plan limits.

These enhancements include:

  • A new adjustable rate notification plan that enables a user to set different plan limits per app developer
  • Support for webhooks to notify developers and companies when they near or exceed their plan totals, as well as support for several different conditions under which notifications are triggered, including a new criterion based on combined transaction totals
  • A tool that migrates developers into the monetization framework (for users with an existing non-monetized developer ecosystem who later decide to use monetization)
  • A new API that lets users suspend and unsuspend developers (to support stronger control of developer participation)

Edge Private Cloud

The on-premises version of Edge got several improvements, including a simpler, RPM-based, code-with-config installation and upgrade framework that enables easier product installation and upgrades with fewer errors.

A new monitoring tool lets on-premises customers understand the health of various components (routers, message processors, ZooKeeper, Cassandra) as well as HTTP error codes for various orgs and environments in their deployments. The tool lets customers take a snapshot of their dashboard data and share it with Apigee to help resolve support incidents.


Partner ecosystem

We continue to demonstrate our commitment to multi-cloud and cloud native deployments. Integration with Pivotal Cloud Foundry was a big focus area for Apigee in 2016.

The first new enhancement was Pivotal Cloud Foundry integration with Apigee Edge (public or private cloud) using the route services feature, which enables developers to use Apigee Edge as a Pivotal Cloud Foundry Service. The Apigee Edge service broker (see more details in this May 2016 blog post) approach brings simplicity and consistency to the range of services that customers typically use when developing apps.

More recently we announced the general availability of Apigee Edge Microgateway on Pivotal Cloud Foundry. This complements the previous release by providing a hybrid deployment option which is suitable for low-latency use cases.

We also announced Edge integrations with Amazon AWS (this enables users to proxy AWS apps and services such as AWS Lambda), Microsoft Azure (this enables users to deploy the Edge Private Cloud) and Google Cloud Platform (this enables GCP customers to use Edge Cloud for their API management needs).

Community and learning

The Apigee Community continues to be very active. We’ve received great reviews from developers about our 4mv4d (four-minute videos for developers), which demonstrate how to use Edge policies, implement error handling, and much more.

Our product documentation received several additions and enhancements, notably a set of documentation for the New Edge release. The Private Cloud documentation also emerged from behind the firewall and joined our publicly accessible cloud docs.

Our docs team added a deeper set of API development samples, redesigned tutorials for speed and ease of use, upgraded navigation and search for easier content discovery, and translated key sections of the cloud docs into Japanese. You can see more detailed lists of doc enhancements throughout the year in the Apigee Community.

Apigee Edge got to where it is today thanks in large part to our community and customers. As many of you know, we became part of the Google family. We look forward to an exciting 2017 and expect to do more amazing things for our customers as part of the Google Cloud Platform team.

Join us at our Adapt or Die World Tour stops in Sydney on Feb. 8 and London on Feb. 23, and in San Francisco at Google Cloud Next '17, March 8-10. 


Apigee and Okta Partner for API Security

Apigee is excited to announce a partnership with Okta, a leader in the Identity-as-a-Service space.

A critical reason customers use Apigee is to secure their APIs. And one of the most critical aspects of security is the authentication and authorization of the APIs.  

Apigee’s and Okta’s offerings complement one another to solve the AuthN/AuthZ problem for customers. Almost every customer using Apigee needs to integrate Apigee Edge with a central identity/SSO store. Enterprises traditionally have used legacy, on-premises identity providers.

But customers increasingly are choosing Apigee and Okta together as two critical platforms for digital transformation and their cloud journey.

There are two common use cases for this integration:

  • Customers want to use Apigee’s API management capabilities and use Okta for identity management
  • Customers use Okta as their OAuth/OpenID connect provider across the organization

As part of the partnership, we’ve created an out-of-the-box solution for the first scenario (learn about it here). The solution is intended as a reference implementation and is available as an open source solution.

Scenario 1: Apigee provides OAuth

In the first use case, Apigee acts as the OAuth provider while Okta provides the identity store and handles authentication of the identities.

Let’s take a quick look at what we’ve built here:

In the diagram above, Apigee provides the client_id and client_secret. The pre-work steps aren’t part of runtime API security; they’re performed beforehand:

  • The Apigee admin registers Apigee as a valid client with Okta; Apigee receives the public key from Okta and stores it in its KVM or vault
  • A developer self registers in the Apigee developer portal
  • The developer registers an app in the Apigee dev portal; Apigee issues a client ID and client secret to the app

In the runtime flow:

  • The client makes a call to Apigee with the client ID and secret, which Apigee validates
  • Apigee redirects the client app to Okta
  • Okta collects the user credentials and authenticates and authorizes the user, with support for multi-factor authentication (if needed)
  • If the user is validated, then Okta returns an id_token back to Apigee
  • Apigee issues an access token against the id_token and stores a map of access_token to id_token in the key storage
  • When a secured API is invoked, it is invoked with the access token, which Apigee validates; depending on requirements, it either sends the whole id_token or part of it to the backend

Scenario 2: Okta acts as the OAuth provider

This is another common integration pattern, but currently we are not providing an out-of-the-box solution for this. Often, when Okta is the enterprise-wide identity platform (for web, mobile, or APIs), the customer is likely to leverage Okta as the OAuth provider.

To follow this integration pattern, it’s important to keep a few things in mind.

For an API program, a developer portal is crucial. This is where developers register themselves as well as their apps. Once the apps are registered, developers receive their API key or client_id/client_secret from the dev portal.

Okta can be used as an OAuth provider, but Apigee must have knowledge of the client_id and client_secret because:

  • It uses those IDs for throttling
  • The IDs are crucial for analytics and API debugging
  • IDs are used for app lifecycle management

We hope you've found this walk-through useful, and look forward to working with Okta to develop new integrations. To learn more about Okta’s OAuth and OpenID Connect support, visit https://developer.okta.com/docs/api/resources/oauth2.html.


The Banks Have Been Hacked. What Now?

APIDays in London last week encompassed two full days of banking and APIs. The event was full of excellent talks and useful content, but I keep thinking back to a talk by Stevie Graham about his company Teller.io.

Stevie explained how he’s hacked all the major banks’ mobile apps, reverse-engineering them to get at the underlying APIs that power them, and then exposing those APIs to developers.  In a highly charged, “thumb your nose at the man” talk, Stevie explained how he was also deploying "anti-anti-hacking techniques" to thwart banks’ attempts to stop him from turning their mobile apps into open APIs.  

Reflecting on the talk, a few things struck me.

Realize it or not, banks actually do have external APIs

Every mobile app is powered by APIs. Those APIs weren’t built to be accessed by third parties, but they exist! In fact, everything you can do in a mobile app—check balances, transfer money, make a payment—is backed by APIs. The “balance check” button initiates an API call, which may initiate several other API calls, to fetch your bank balance. The fact that these APIs exist and power external experiences makes them essentially external, and thus vulnerable.

Banks need to invest in API security

Stevie has vividly illustrated this (no kidding, you say). Regardless of whether you plan to open your APIs, having first-class tools to secure, monitor, and manage them is a must. As always in security, defense in depth (layers of security to ensure API clients are doing what they are supposed to be doing) is the right approach.

This includes sophisticated behavior analysis like what we are doing with Apigee Sense, to detect anomalies in traffic patterns. Just because an API client might look like your home-grown iOS app doesn’t mean that it is!

Developers want access to banking data APIs

PSD2 regulation aside, with over 3,000 developers on the waiting list for Teller.io, it’s clear that demand for API access to banking data is already there. Whether and how banks want to exploit this interest remains to be seen, but where there is unmet demand, there’s the threat of disruption, as upstart challenger banks like Starling and Monzo illustrate. By building their banks API- and mobile-first, they are clearly moving to fill the void left open by the established brands.

With mobile ubiquity and APIs being used as competitive weapons in digital-natives’ strategies, building first-class competency in API security and operations has never been more important.  

The unmet demand for programmable access to banking functions and data suggests, at best, a lack of execution among many banks. At worst, it reveals a fundamental lack of understanding of how today’s API economy works. 

Image: Flickr Creative Commons/Ofer Deshe

Apigee Sense Protection: Act on Threats to Your APIs

Our customers expose critical business functions via APIs, so API security is a significant concern. They want the ability to take action without delay when APIs are abused. Apigee Sense collects and analyzes API traffic, detects abuse, and provides a dashboard showing abuse of APIs managed using Apigee Edge.

The Sense Protection feature completes the “CAVA” (collect, analyze, visualize, and act) lifecycle. It enables an Apigee Sense customer to act on detected abuse and to selectively stop abusive API traffic.

Why did we do this?

The protection feature represents significant progress within the Apigee API platform, and an important step in our effort to deliver data-driven API management. It shortens the lifespan of an attack and decreases the operational cost of dealing with one. Consequently, it reduces any economic incentive to mount automated attacks on APIs.


How did we do this?

As part of this feature, we added the ability to trap and shape API traffic to Apigee Edge. As a Sense customer, you’ll now see an “Act” button whenever an attack is detected. Clicking on it will bring you to a quick workflow that will enable you to block traffic from the identified attacker, and, if you choose, other similar bots.

Every API call passes through a pre-proxy Apigee Sense layer. The logic checks whether the call originates from a host that’s been blocked. If this is the case, then the rest of the chain is aborted and an HTTP 403 response is returned. Otherwise, the call continues on to the normal execution path.

In addition to blocking, we also offer the capability to flag API calls. A flagged call is passed on to the back-end system with an additional header field.
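The pre-proxy decision described above can be sketched as a single check that runs before the normal proxy chain. The field and header names here are illustrative assumptions, not Apigee’s actual implementation:

```javascript
// Sketch: blocked callers get an HTTP 403 before the proxy chain runs;
// flagged callers pass through with an extra header for the backend.
function senseCheck(call, blocklist, flaglist) {
  if (blocklist.has(call.host)) {
    return { action: 'block', status: 403 }; // abort the rest of the chain
  }
  if (flaglist.has(call.host)) {
    // hypothetical header name; the call continues on the normal path
    return { action: 'flag', headers: { 'X-Sense-Flagged': 'true' } };
  }
  return { action: 'allow' };
}
```

Flagging lets the backend apply its own policy (extra logging, degraded responses) without dropping the traffic outright.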

The challenge is in keeping the large blacklist for all the orgs we serve current across our distributed API-serving infrastructure. To ensure that we don’t add significant latency to the API call path, this blacklist check has to be done in an entirely “local” way.

To do this, we cache (and actively refresh) a segment of the blacklist on every message processor. The choice of the segment is coordinated with the routing infrastructure so that each segment of the blacklist is available where it will be needed and only where it will be needed.

We have also built an internal service that maintains and serves the blacklist segments to the API gateway. It’s our intention that over time, this service will provide the additional context necessary for each API call. For example, it provides the state machine for slowly switching traffic from one version of logic to the next during a deployment cycle while continually monitoring availability data.

Data-driven improvements

Our vision is that, over time, much of the system change and the context for managing API calls will become data-driven, mediated by the services we’ve built to enable Sense and Sense Protection.

This means that Sense Protection activity takes place in a separate path, outside of the proxy handling workflow of Apigee Edge. The deployment for protection is independent of that of the API proxy. Therefore, any change to the API proxy doesn’t compromise the ability to act against any attacker, and a change to the protection logic will not change the essential behavior of the API proxy.

Finally, blocked traffic never hits critical logic in the API handling path, nor does it cause any back-end load. Because of this, any customer with blocked robot traffic will see improved performance both at the API and, more importantly, within their back-end systems, thanks to the reduction in traffic that had been generated by those abusing the API.

Learn more about Apigee Sense.

Harish Deshmukh, Prithpal Bhogill, and Vibhor Sonpar contributed to this post.


You're Thinking Cloud, But Not APIs? Seriously?

The principles of cloud computing might be decades old, but it wasn’t until 2006, when Amazon entered the cloud market, that the modern cloud era really began. Like the processor virtualization market a decade earlier, cost, benefit, and flexibility are providing the basis for an ever-increasing pace of enterprise cloud adoption.

Cloud is ultimately about three things:

  1. Building cloud native applications that scale

  2. Moving and modernizing existing applications

  3. Repartitioning your enterprise architecture flexibly between public and private clouds and operating all clouds as one

But if you’re not thinking about APIs as you embark upon your cloud journey, you’re thinking wrong.

Build for the cloud

Cloud is about self-service. It’s about your developers (and you) not standing in line for resources. How is self-service delivered? APIs, of course. IaaS APIs, PaaS APIs, and service APIs.

Cloud is about agility. It’s about your developers (and you) not standing in line because of dependencies on other teams. How is this independence delivered? APIs, of course. Microservices APIs.

Cloud is about scale. It’s about your application breaking free from the limits of a single VM. How is this liberation achieved? APIs, of course. PaaS APIs.

Cloud is about a set of services. It’s about services that you consume so that you can focus on your value-add. How are those services delivered to you? APIs, of course. Others make their services friendly through consumption-centric APIs.

Cloud is about serverless. It’s about writing code and letting the cloud run it. How is the code run? APIs, of course. You build your logic, provide an API to it, and enable it to be used by everyone else.

Move to the cloud

Cloud is about moving key applications. It’s about apps and workloads that move seamlessly to the cloud and can be modernized easily. How do you move applications? APIs, of course. You wrap your key application with APIs and route calls transparently to the right location.

Cloud is about modernization. It’s about growing your business through new digital use cases. How is this modernization achieved? APIs, of course. You make your services developer-friendly through consumption-centric APIs.

Cloud is about no lock-in. It’s about enabling applications to move to the best provider. How do you enable this optionality? APIs, of course. Applications and key components powered by APIs can move freely between clouds.

Operate all your cloud assets uniformly

Cloud is about security. It’s about ensuring that your applications, irrespective of location, are more secure than ever. How do you achieve this security? APIs, of course. When APIs power your key applications, your applications are more secure, because your security architects can see, manage, and control all of them without compromising the agility of your developers.

Cloud is about cost. It’s about applications that deliver value getting more resources. It’s Darwinism in your application landscape. How do you measure value? APIs, of course. Analytics on APIs help you understand usage and decide which location and resources make sense for each particular application.

Cloud is about operations. It’s about applications residing wherever they do, but operating as one. How do you operate everything as one? APIs, of course. APIs that manage the applications, the infrastructure, the network, and the services.

Don’t think of cloud with one eye shut

Cloud is central to your business. So why handicap yourself? Think about APIs in the same breath as you think about cloud.

Why Enterprise Security Needs APIs

From time to time, we will have guest authors weigh in on specific topics here. While I’m interested in all topics, and have opinions on almost all things related to APIs and data :), sometimes it’s better to let the experts have a say. So here is a first guest post. This topic is especially important because by now people realize that digital transformation requires APIs. And we know that APIs require security. But does security need APIs? I discussed this in a previous post; here's the first in a series of posts that elaborates on the topic.

Enterprise security incidents surged by 38% last year, and malicious attacks on enterprise data continue to have significant impact, costing over $500 billion today and estimated to reach $2 trillion by 2019. Enterprises are adopting APIs and API facades to secure access to valuable enterprise data from both internal and external users and apps.

Although many new apps use REST APIs, many enterprises still employ web applications that make direct calls to backend systems. To date, enterprises have used a layered security model to thwart external threats at the app layer. By mandating the use of APIs to access critical enterprise data, enterprises can create effective security control points and gain several security advantages.

Smaller attack surface

Today, when there’s a security breach in an application, all the backend systems that provide data to the app can be exposed. By ensuring that apps only use APIs to access data on backend systems, you automatically restrict access to a smaller set of resources and data, because every request passes through an API gateway. This minimizes the attack surface should a breach occur.

Smaller attack window

Web apps that use simple usernames and passwords are challenged to securely store these credentials. Rotating them frequently is also difficult and therefore rarely implemented. In the event of a compromise, the enterprise is severely exposed as there is no time dimension to the credentials (they don’t generally expire).

By using APIs and OAuth, you limit the opportunity for attackers by using continuously rotating credentials that expire frequently.
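The time dimension that OAuth adds can be illustrated with a minimal bearer-token sketch. This is not Apigee's OAuth implementation; the function names and in-memory token store are assumptions made for the example, and a real deployment would use a proper authorization server per OAuth 2.0.

```python
# Sketch of time-limited credentials: tokens that expire shut the attack
# window, unlike static usernames and passwords.
import secrets
import time

_tokens = {}  # token -> expiry timestamp (illustrative in-memory store)

def issue_token(lifetime_seconds=3600):
    """Mint an opaque bearer token that expires after `lifetime_seconds`."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = time.time() + lifetime_seconds
    return token

def validate_token(token):
    """Accept only known, unexpired tokens; purge anything stale."""
    expiry = _tokens.get(token)
    if expiry is None or time.time() >= expiry:
        _tokens.pop(token, None)  # expired or unknown: reject
        return False
    return True
```

A credential stolen today is worthless an hour from now, which is the core of the smaller attack window: the defender no longer has to detect the theft before the credential stops working.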

Defense in depth

By using API management, enterprise security teams can deploy an effective, layered security framework. You can enforce a set of security policies across all your enterprise APIs (spike arrest and injection threat protection, for example) and, if required, enforce additional security policies, such as OAuth2, on select APIs.
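As one concrete example of a globally enforced policy, a spike arrest limits how fast any single client can call an API. The sketch below is a simplified model, not Apigee's SpikeArrest policy; passing the clock in explicitly is a choice made here to keep the example deterministic.

```python
# Sketch of a spike-arrest policy: smooth traffic by allowing each client
# at most one call per 1/rate seconds.

class SpikeArrest:
    def __init__(self, rate_per_second):
        self.min_interval = 1.0 / rate_per_second
        self.last_seen = {}  # client id -> timestamp of last allowed call

    def allow(self, client, now):
        """Return True if this call is within the allowed rate."""
        last = self.last_seen.get(client)
        if last is not None and now - last < self.min_interval:
            return False  # too soon after the previous call: reject
        self.last_seen[client] = now
        return True
```

In a layered setup, a check like this runs on every API, while heavier policies such as OAuth2 verification are attached only to the APIs that need them.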

Think of an API management platform as the internet-exposed layer of your application environment, providing a defensive capability in front of your backend systems.


With APIs and API management, you have the flexibility to downgrade or upgrade your API versions seamlessly as part of remediation efforts against security breaches. You can also roll out new API versions to select audiences before wider rollout to minimize security risks.

Virtual patching

An API management platform enables virtual patching and quick remediation against an identified vulnerability in a downstream system without having to change the source code of the system. By applying new security policies to limit potentially malicious input in the API gateway, you can significantly mitigate the impact of zero-day exploits and unpatched systems.
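Virtual patching amounts to screening inputs at the gateway for patterns tied to a known vulnerability. The sketch below is illustrative only; the patterns shown are classic SQL-injection probes, and a real policy would be tuned to the specific downstream flaw being patched.

```python
# Sketch of a virtual patch: reject requests whose parameters match
# patterns associated with a known backend vulnerability, without
# changing the backend's source code.
import re

# Example patterns for SQL-injection probes (illustrative, not exhaustive).
SUSPICIOUS = [
    re.compile(r"('|%27)\s*(or|and)\s", re.IGNORECASE),
    re.compile(r"union\s+select", re.IGNORECASE),
]

def virtual_patch(params):
    """Return True if the request should be rejected at the gateway."""
    return any(
        pattern.search(value)
        for value in params.values()
        for pattern in SUSPICIOUS
    )
```

Because the check lives in the gateway, it can be deployed in minutes after a zero-day is disclosed, buying time until the backend itself is patched.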

Security metrics

Most enterprises have limited visibility into the vulnerabilities of apps accessing their enterprise data. With APIs and API management, you automatically gain granular visibility into which backend systems are accessed insecurely.

API analytics provide traffic visibility that enables an enterprise to discern between bot traffic and legitimate traffic, transport layer security (TLS) versus non-TLS traffic, and authenticated versus unauthenticated requests. Security metrics like these enable you to fix vulnerabilities and improve overall application security.
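The distinctions above can be sketched as a simple per-request classifier. The log-record fields used here are hypothetical stand-ins for whatever an API analytics layer actually records.

```python
# Sketch of traffic classification over per-request log records
# (field names are illustrative assumptions).

def classify(record):
    """Label one request record along the three axes described above."""
    return {
        # TLS vs. non-TLS traffic
        "tls": record.get("scheme") == "https",
        # Authenticated vs. unauthenticated requests
        "authenticated": "authorization" in record.get("headers", {}),
        # Crude bot heuristic; real systems use behavioral signals
        "likely_bot": "bot" in record.get("user_agent", "").lower(),
    }
```

Aggregating labels like these across millions of calls is what turns raw traffic into the security metrics that reveal, say, a backend still serving a meaningful share of unauthenticated or non-TLS requests.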

By mandating the use of APIs to access any critical enterprise data, you create effective security control points and your enterprise assets become more secure. Enterprises that are going digital and are concerned about their assets should consider an API-first approach to their digital transformations.

Coming up, we’ll take a more in-depth look into the benefits of using APIs to secure enterprise data.