
Field Notes: API Acceleration and Analytics ROI

mgardner
Jan 14, 2014

As Apigee has grown, we have added extremely capable and experienced people at every level of the company. With talented and senior people now in place to lead the various functional areas in engineering, I’m spending more time in the field with customers, helping ensure the best possible alignment between their needs and our support, cloud, and product offerings. This blog post marks the first of what I hope will become a series of periodic observations about industry patterns and trends, distilled from these customer discussions.

At the end of 2013, I visited a number of customers across the U.S. and Europe who had recently launched or were on the verge of launching production traffic on some new APIs. Some common patterns have emerged across these implementations.

Reaping the benefits of analytics

One common pattern: nearly all of these customers are starting to enjoy major benefits from Apigee Edge analytics. In one case, initial traffic through Edge showed some consistent 502 errors in standard Edge analytics reports. After closer examination, these turned out to be due to errors—never before detected—in an existing back-end system. So with this new diagnostic information, the API team was able to file a service request with the back-end team that will lead to an overall healthier system.
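The value here comes from slicing error counts by status code and backend target, so a recurring 502 from one system stands out from background noise. A minimal sketch of that grouping in Python; the record shape (`status`, `target` fields) is hypothetical and not Edge's actual analytics export format:

```python
from collections import Counter

def error_breakdown(records):
    """Count error responses by (status, backend target) so that a
    recurring 502 from a single back-end system stands out."""
    return Counter(
        (r["status"], r["target"])
        for r in records
        if r["status"] >= 500
    )
```

Running this over a day of traffic records immediately shows whether the 5xx responses cluster on one target, which is the signal that let the API team file a service request against the right back-end system.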

Another customer, after bringing Edge analytics online on a system newly migrated from our older 3.X architecture, quickly realized that nearly one-third of its production request volume was associated with a single internal developer key, which was sending requests every few milliseconds. An inside-the-firewall developer had evidently set up a test process and then forgotten about it, leaving a kind of self-inflicted denial of service in place. That needless load on the back end was quickly throttled. The customer was also able to identify one client app that was consistently passing null data, and to work with the app developer to clean up that flow.
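Because every request carries a developer key, throttling the runaway client is a per-key policy decision rather than a code change. Edge provides this through built-in policies; the sliding-window limiter below is just an illustrative sketch in Python, not how Edge implements it:

```python
import time
from collections import defaultdict

class KeyRateLimiter:
    """Allow at most max_requests per window_seconds for each API key."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.hits = defaultdict(list)  # key -> timestamps of recent requests

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        window_start = now - self.window_seconds
        # Keep only timestamps still inside the window.
        recent = [t for t in self.hits[api_key] if t > window_start]
        self.hits[api_key] = recent
        if len(recent) >= self.max_requests:
            return False  # over quota: reject, e.g. with HTTP 429
        recent.append(now)
        return True
```

A forgotten test loop hitting every few milliseconds exhausts its quota immediately and gets rejected, while well-behaved keys are unaffected.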

These customers described the process as “whittling the red line down”: methodically reducing needless errors so that, when real issues and system failures occur, alerting can bring them into sharper focus. The benefits of this noise reduction are clear. Yet these customers were equally enthusiastic about the business-operations and business-model insights they gain from the analytics.

The ability to instantly see the data on which APIs and API products are actually being called—and by whom—is powerful. But it’s particularly gratifying to see customers actually experience that, and begin to make business and API program plans based on the new insights and knowledge they’ve gained.

Real acceleration

One of the themes we’ve discussed a lot lately is the concept of Apigee Edge as “accelerator” to help API programs evolve faster (see the Apigee webcast "Why APIs are not SOA++"). There are many important concepts within this, including “pace gradients” or “pace layering,” in which central enterprise IT systems can change very slowly (and safely!) while evolution out towards the “skin-of-the-onion” (closer to the client apps, based on app-specific APIs) can occur at breakneck speeds. For more on this, see the "API Facade Pattern" webcast series.

There’s a related “layer” concept in APIs, as some are used for “exposure” (of back-end IT resources and systems) and others for “consumption” (by apps). This software paradigm is starting to drive changes in the organizational structure within enterprises.

For example, one company has embarked upon a significant replatforming effort to bring on a new third-party ecommerce system. APIs produced via Apigee Edge play a crucial role in this by enabling a proxy layer with which they can control the transition from the old system to the new, without having to make extensive changes to the top-level service consumers, which include apps and web pages.

APIs also give this customer the opportunity to replace the old platform subsystem by subsystem, rather than in a wholesale “big bang” cutover. In other words, the team is adding proxy facades to each crucial subsystem, and, once each facade is in place, the old subsystem behind it can be swapped for the new one.
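The mechanism behind this is simple: the facade owns the client-facing paths and maps each subsystem to whichever platform currently serves it, so flipping one entry migrates one subsystem. A minimal routing sketch; the subsystem names and backend URLs are hypothetical:

```python
# Hypothetical facade routing table: each subsystem points at either the
# legacy platform or the new ecommerce system. Cutover happens one
# subsystem at a time by flipping a single entry, while the paths that
# apps and web pages call never change.
LEGACY = "https://legacy.example.com"
NEW = "https://new-ecommerce.example.com"

BACKENDS = {
    "catalog": NEW,      # already migrated
    "cart": LEGACY,      # still on the old platform
    "checkout": LEGACY,
}

def route(path):
    """Return the backend URL for an incoming API path like '/catalog/items'."""
    subsystem = path.strip("/").split("/", 1)[0]
    base = BACKENDS.get(subsystem, LEGACY)  # unknown paths stay on legacy
    return base + path
```

Because clients only ever see the facade's paths, the routing table can change on the proxy without any coordinated release of the apps that consume it.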

As this customer progressed through this replacement and began to think through the new functionality to be built atop the new platform, it understood the need to develop an engineering social structure in which small teams create app-purposed consumption APIs that leverage the ecommerce system. In this way, the ecommerce system interface itself doesn’t need to change, though the customer can implement use-case friendly consumption APIs as rapidly and as often as necessary. This makes the actual use-case app implementers themselves that much more productive.
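A consumption API in this sense is a thin, app-purposed composition over the stable exposure-level interface: it aggregates exactly what one app screen needs, so it can be added or changed without touching the ecommerce system. An illustrative sketch; the endpoint and the `catalog_api`/`orders_api` interfaces are hypothetical:

```python
# Hypothetical consumption API: a mobile "home screen" endpoint composed
# from stable exposure-level calls. New use-case endpoints like this can
# be created as often as needed without changing the underlying system.
def home_screen(user_id, catalog_api, orders_api):
    """Aggregate what one app screen needs into a single response."""
    return {
        "featured": catalog_api.featured_items(limit=5),
        "last_order": orders_api.latest(user_id),
    }
```

The app developer gets one round trip shaped for the screen, and the small team owning this endpoint can iterate on it independently of the teams owning the exposure APIs.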

This team also realized they should use the “encapsulation” API pattern to make some additional surgical changes to a few legacy back-end systems (for example, some corporate databases) underneath the ecommerce system itself. In upcoming posts, I’ll discuss how API programs are inextricably linked with CI/CD, or "continuous integration, continuous delivery," and devops capabilities, for best effect. I’ll also discuss how organizations are leveraging proof points from the business gains they’ve derived from their initial API program deliveries.

 

image: Benjamin Deutsch/Flickr

 

 
