The Year Ahead in APIs

APIs have come a long way from the arcane geek-speak of software interfaces popularized by Win32 APIs (still among the most commonly searched API phrases). Today, APIs represent interfaces between businesses and large swaths of internal enterprise services and business units. APIs not only connect software to software but also help to create entire commercial ecosystems, and so have become integral parts of how enterprises conduct business.

Google Cloud's Apigee team has watched the API space evolve over the last decade or so, and we believe that the onwards and upwards march of “APIfication” within enterprises will take a few surprising and not-so-surprising twists and turns this year. Below are some of our predictions around the use and impact of APIs in 2019.

API standards

As APIs rapidly become the “contracts” between software systems within and outside of an enterprise, it seems natural that these contracts should be standardized. Shouldn’t there be one way to call an API to make a payment? Check a balance? Order a ticket? Such standards have proven elusive, however, because developers who write programs that call these APIs have largely been comfortable tailoring their code to various API providers.

In 2019, we believe that the momentum will begin to shift toward greater API standardization in the following areas.


Hype surrounding GraphQL will accelerate, and GraphQL will be positioned as the first technology to solve the long-standing challenges around delivering reusable services and APIs. GraphQL will be most popular for first-party APIs that are intended to be used by mobile and single-page apps, although some API providers will also adopt it as an option, alongside conventional “REST-ish” APIs. We anticipate that as GraphQL’s popularity grows, there will be a vocal community that becomes highly critical of OpenAPI and argues that it is a legacy standard analogous to WSDL in the era of SOAP.
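
To illustrate why GraphQL appeals to mobile and single-page app developers: a single query names exactly the fields the client needs, where a REST-ish design might require several round trips. The schema and field names below are hypothetical, not from any particular API:

```graphql
# One request fetches an account plus only the transaction fields
# the client actually needs—no over-fetching, no extra round trips.
query AccountSummary {
  account(id: "acct-123") {
    balance
    currency
    transactions(last: 5) {
      amount
      postedAt
      merchant {
        name
      }
    }
  }
}
```

The server returns a JSON document mirroring exactly this shape, which is much of the appeal for bandwidth-constrained mobile clients.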


We predict that gRPC will continue to see adoption in 2019 due to its importance within the Kubernetes ecosystem, and many microservices will be gRPC-based. Most APIs intended for third-party developers will continue to be REST-based, but API providers will start to offer gRPC as an option, particularly for high-throughput and low-latency scenarios.
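
For readers unfamiliar with gRPC: a service is defined once in a Protocol Buffers file, and the toolchain generates strongly typed client and server stubs in many languages. The service and message names in this sketch are hypothetical:

```protobuf
syntax = "proto3";

package payments.v1;

// A hypothetical low-latency payments service exposed over gRPC.
service Payments {
  // Unary call: one request, one response.
  rpc GetBalance (BalanceRequest) returns (BalanceReply);
  // Server streaming suits high-throughput scenarios such as
  // pushing transaction events to a consumer.
  rpc StreamTransactions (AccountRef) returns (stream Transaction);
}

message BalanceRequest { string account_id = 1; }
message BalanceReply   { int64 minor_units = 1; string currency = 2; }
message AccountRef     { string account_id = 1; }
message Transaction    { string id = 1; int64 minor_units = 2; }
```

Because the wire format is compact binary over HTTP/2, gRPC tends to outperform text-based REST in exactly the high-throughput, low-latency scenarios mentioned above.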

APIs in industries: mandates, competition, and standards

Through open banking regulations such as PSD2, financial services regulators have mandated APIs as a way to spur competition and foster innovation among banks. We expect this trend to continue sweeping the globe as more countries require banks to give third-party providers access to customer information and the ability to initiate payments via APIs, and we expect regulators to specify standards for these APIs.

While banks have little choice but to comply with these mandates, we’ve seen new industry-generated API specifications such as the Durable Data API (DDA) or the BIAN standards emerge. Banks are starting to view APIs as a way to compete in an increasingly digital world, instead of seeing them as simply a regulatory compliance issue.

We believe that in 2019, new banking API standards will emerge and will quickly gain traction because they are focused on helping banks acquire customers, lend more, and build software faster to compete in the API economy. We expect to see such standards emerge in other industries as well, presenting organizations with the challenge of deciding which standards to support.  

There is a second trend that we believe will force the standards issue. APIs will increasingly be called by machines, which often sit behind web properties such as Google Search. These machines and web properties will demand programmatic integrations, and that can only happen once some standards emerge. Schema.org is a good place for some of these standards to take shape (e.g., in parcel delivery), though other venues are entirely possible. In 2019, we expect dozens of these specifications to emerge, often expressed in the OpenAPI format, giving canonical call/response structures for various verticals.
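
A vertical specification of this kind might look like the following OpenAPI 3.0 fragment—a hypothetical canonical call/response shape for parcel tracking. The path and field names are illustrative, not drawn from any published standard:

```yaml
openapi: "3.0.0"
info:
  title: Parcel Tracking (illustrative vertical spec)
  version: "1.0"
paths:
  /parcels/{trackingId}:
    get:
      summary: Canonical call/response for parcel status
      parameters:
        - name: trackingId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current parcel status
          content:
            application/json:
              schema:
                type: object
                properties:
                  trackingId:
                    type: string
                  status:
                    type: string
                    enum: [accepted, in_transit, out_for_delivery, delivered]
                  estimatedDelivery:
                    type: string
                    format: date-time
```

Once every carrier exposes the same call and response structure, a machine caller (say, a search engine surfacing delivery status) can integrate with all of them programmatically.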

The rise of machine and AI-driven API traffic

Today, most API traffic can be attributed to some human action. A consumer browses a product—a few API calls are made. A homeowner pays a utility bill—another few API calls. Machine-generated traffic, on the other hand, is often malicious, comprising bot activity or security breaches. We expect this to continue. Whether it’s for crypto-mining or for credential stealing, we expect APIs to continue to bear the brunt of malicious machine-driven traffic, which will strain backends unless the right security is built into APIs.

That said, we are beginning to see benign programmatic API calls, generated by algorithms or machine intelligence, take off. This is driven by several trends:

  1. The rise of voice applications. Voice, in the end, needs to be heavily AI/ML driven—so a call such as “Pay my bill” needs to be understood as “Pay my utility bill from PG&E for the current month using my stored credit card.” Simple requests into the voice system result in hundreds of API calls at the backend, all driven by machine intelligence figuring things out.
  2. The rise of IoT and home automation. At the recent CES conference, communicating devices were everywhere. They integrated with one another and with voice assistants through APIs, and through recipes such as IFTTT. With hundreds of thousands of different types of devices, bespoke integrations just do not work; APIs simplify the mix and match, though they don’t necessarily alleviate the need for some deeper business logic.
  3. AI going mainstream. AI is only useful when it can be leveraged in applications. However, not every team, or every enterprise, has the capability to build AI from scratch. We will see API-driven AI, where one team, or one business, builds a very good model in some domain, and other teams leverage that work through APIs. These teams might build their own AI models, which, in turn, another team might leverage. We are already seeing examples of this—like Google’s AutoML for image and text analysis—and we expect this trend to accelerate.

API-driven ecosystems

We’ve noticed that enterprises have begun to understand the importance of developers. Of the top 100 domains (defined as those with the highest number of pages appearing in a 10-billion-page sample of the web), 94% had some developer-facing property, and among those 94%, all offered APIs described with the Swagger framework or the OpenAPI spec.

Developer offerings will become more prevalent and more API-centric

While 94% of the top 100 domains offer something for developers, in the same sample, this number falls to 9.5% for the top one million domains. We expect this skew to become less pronounced as the importance of developers becomes well understood by a larger number of domains.

REST APIs will be specified by OpenAPI

We are already seeing a trend: OpenAPI specs (formerly known as Swagger) are becoming the de facto standard for specifying APIs that enable developer self-service. Anecdotally, when we ask our customers if they use the OpenAPI spec, the typical answer we get is, “of course!” In the analysis above, we looked only for Swagger or OpenAPI patterns, and even so, the percentage was very high.

API startups will proliferate

In the wake of Twilio and SendGrid, startups that provide infrastructure services via APIs will once again be considered viable investments for venture capitalists.  

Microservices and APIs

While public-facing APIs that drive ecosystems garner a lot of press, a much larger number of APIs are found inside enterprises, serving as interfaces between software systems and teams. Many of these APIs will be called “microservices,” even though they do not fit any serious definition of “micro.”

Envoy will be increasingly popular as the open source technology for APIs

With support from many vendors, and with large enterprise implementations under its belt, Envoy is fast becoming the most popular choice for open source API gateways. We expect commercial offerings around Envoy to continue to proliferate in 2019.

A majority of enterprises will view microservices as modernized SOA

Except for a small set of enterprises who are deep into cloud-native architectures, the majority of enterprises who say they are using microservices will in fact be using the term to describe internal APIs with lightweight governance. The result will be that microservices hype will increase considerably as vendors try to market their solutions to all possible microservices projects.

Microservices will continue to have lots of different recipes

Successful microservices architecture is complex to design, build, and manage. There is a lot of experimentation and iteration, but it is early days for microservices, and a proven recipe for success has yet to emerge. Key to realizing the promise and benefits of microservices architecture will be successfully designing and building reusable, decoupled services that deliver scalability and agility for the business, with the right level of governance and lifecycle management capabilities. The availability of appropriate tooling for supporting and debugging microservices architecture will be key to its success in the enterprise.  


API security

APIs represent a way to access enterprise services. They are therefore also a convenient point of attack. While API vulnerabilities have garnered some attention, we believe that unsecured APIs will be a fresh vector of attack in 2019.

Breaches of APIs for crypto mining

The Kubernetes API vulnerability has shown that unsecured APIs can serve as a vector for taking over container orchestration platforms, yielding immediate financial gains for attackers. Every business that uses elastic cloud infrastructure can be a target for attacks that attempt to inject cryptomining code into its cloud workloads. Many businesses that believe they haven’t put anything valuable in the cloud will be caught off guard when they find themselves paying the compute bills generated by criminal crypto-miners.

Breaches because of poor API security

Developers have come to understand that their websites are vulnerable to attack, and best practices for securing them have become more and more common. External APIs are still taking off, however, and the corresponding best practices are not yet widespread. In 2019, we believe we’ll see at least three types of breaches due to poor API security, and all API management vendors will have serious conversations about API abuse with their top-traffic customers. These abuses could include:

  • DDoS attacks (high traffic rates) breaking API backends
  • Spam (APIs processing large amounts of junk content)
  • Credential abuse (reusing credentials to break into protected APIs)
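
The first line of defense against all three abuses is traffic control at the API layer. As an illustrative sketch (not tied to any particular gateway product), a per-client token bucket caps request rates while still absorbing short legitimate bursts:

```python
import time


class TokenBucket:
    """Per-client token bucket: each client may make `rate` calls per
    second on average, with bursts of up to `capacity` calls."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.buckets = {}           # client_id -> (tokens, last_checked)

    def allow(self, client_id, now=None):
        """Return True if this call is allowed, False if rate-limited."""
        if now is None:
            now = time.monotonic()
        tokens, last = self.buckets.get(client_id, (self.capacity, now))
        # Refill in proportion to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[client_id] = (tokens - 1.0, now)
            return True
        self.buckets[client_id] = (tokens, now)
        return False
```

In production this logic typically lives in the gateway or service mesh rather than application code, with the counters held in a shared store so that limits apply across all gateway instances.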


As APIs become mainstream, they offer an unprecedented opportunity to drive new business opportunities through ecosystems and new ways of rebooting enterprise architectures via microservices. APIs will support new formats, and some standardization will take root. Machine-driven API traffic (especially AI traffic) will become a new growth vector. Internal projects will leverage APIs, but dramatic new things will not happen. And security will need to be a continuous focus.

Happy 2019 from all of us in Google Cloud’s Apigee team!


Apigee's Top API Editorials of 2018

Apigee experts published dozens of editorials in 2018 to help developers, IT architects, and business leaders understand how to maximize the value of APIs and keep pace with constant technological change.

With literally quadrillions of daily API calls connecting apps, data, and systems throughout the world, 2018 saw APIs reassert their position at the center of almost every digital use case. Though APIs are not a new concept, the ways in which organizations leverage them continue to expand, from APIs used within the enterprise to manage microservices and enable faster and more agile development methodologies to monetized APIs used to open new business models and expand an enterprise’s digital capabilities to new partners.

Here are some of our top articles from 2018, organized by some of the year’s biggest themes. Thank you to all of our readers, and stay tuned for more in 2019!


API security

APIs are crucial to the automated connecting of data, applications, and systems—and when companies make automation easier for partners and customers, they often inadvertently make it easier for bad actors, too. Several organizations and their customers suffered high-profile data breaches in 2018 thanks to API security lapses—which is why we dedicated several articles to helping enterprises make their APIs more secure. Some of our top security articles include:

Managing APIs as products

2018 saw more enterprise leaders recognize that APIs are not just an integration technology but also software products that help developers to more quickly and easily leverage and reuse digital assets. Enterprises should apply full lifecycle management and a customer-centric mindset to their API efforts. Some of the articles we wrote to help include:

Digital transformation, IT modernization, and digital ecosystem best practices

The digital economy moves faster than many legacy businesses are used to—and the constant change has meant that to compete, enterprises that lack API expertise have had to get up to speed quickly. From exploring why both external-facing and internal-facing APIs should be managed as products to detailing how to plan effective ecosystem participation and API monetization, we looked at many aspects of the digital transformation puzzle:


Microservices

Because of the speed, scale, and agility they promise, microservices-based architectures continued in 2018 to be one of enterprise IT’s hottest topics. But despite the enthusiasm, microservices remain complicated to manage. To understand why APIs are an important part of the mix, check out Demystifying Microservices by Ruth Gantly in APIs and Digital Transformation.

APIs and banking

With new open banking requirements unrolling across many regions and fintech startups gaining traction around the world, 2018 was a disruptive year for bankers. From satisfying regulations to innovating faster and adding new ecosystem partners, APIs play vital roles in helping financial institutions to debut and iterate new services and helping legacy banks to compete in an increasingly fast-moving market. Some of our top banking articles from 2018 include:

How HP Transformed Its Architecture with Microservices

Traditionally, enterprises built monolithic applications that contained all of their functionality in a single program. While this approach simplified debugging and deployment, maintaining, developing, and scaling monolithic applications proved to be a serious challenge, and that challenge became a significant handicap in the digital age.

To keep pace with digital innovation, many IT teams have adopted a microservices-based architecture by designing software applications as suites of independently deployable services.

Galo Gimenez, Distinguished Technologist and Platform Architect at HP Inc., and his team went through a similar transformation journey when developing the company’s core services and infrastructure (including, for example, identity management and content management, which are shared by business units across HP). Key considerations for Gimenez's team included security and encryption, developer productivity, and cost.

After extensive research, the team decided to adopt a microservices architecture with the help of Kubernetes container orchestration.

“Many teams at HP are already adopting microservices and container orchestration technology to deliver products faster and cheaper,” Gimenez says. “We decided to adopt Kubernetes because it offered a well-structured architecture along with a seamless developer experience—the teams working on the containers didn’t need to become experts on the entire architecture to be able to build and deploy applications.”

HP isn’t alone. Enterprises are increasingly adopting microservices to enable new levels of IT agility, scale, and innovation. Today, nearly 70% of organizations claim to be either using or investigating microservices, and nearly one-third currently use them in production.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. Yet these benefits come with increased complexity—many teams struggle to connect, secure, and monitor a growing network of microservices and increase the consumption of valuable microservices beyond the teams in which they were created.

Gimenez and his team experienced this challenge firsthand.

“As monolithic applications transition towards a distributed microservice architecture, they become more difficult to manage and understand,” he says. “These architectures need basic services such as: discovery, load balancing, failure recovery, metrics and monitoring, as well as complex operational requirements: monitoring, deep telemetry, rate limiting, access control, and end-to-end authentication.”

The solution to this challenge came in the form of Istio, a service mesh that helps simplify the complexities of microservices communications. It provides a standardized way to connect, secure, monitor, and manage microservices. A vital plane for service-to-service control and reliability, the service mesh handles application-layer load balancing, routing, service authentication, and more.
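
As a sketch of what the mesh takes off developers’ plates, an Istio VirtualService declares routing, canarying, and retry behavior in configuration rather than in application code. The service name, subsets, and traffic weights below are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: content-service
spec:
  hosts:
    - content-service
  http:
    - route:
        # Canary rollout: send 10% of traffic to the new version.
        - destination:
            host: content-service
            subset: v1
          weight: 90
        - destination:
            host: content-service
            subset: v2
          weight: 10
      # Retries and timeouts are handled by the mesh, not the app.
      retries:
        attempts: 3
        perTryTimeout: 2s
```

Because the mesh's sidecar proxies enforce this policy uniformly, individual service teams need not re-implement routing, retries, or telemetry in each codebase.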

Other business units within HP can easily access these microservices-based core services and infrastructure using APIs. Sharing microservices becomes much easier when they are exposed as APIs to other teams in the organization or to external partners and developers.

But when microservices are exposed as APIs, they require API management. API management enables enterprises to extend the value of microservices both within the enterprise and to external developers, with security, visibility and control.

Gimenez and his team adopted Apigee along with Istio and Kubernetes to maximize the power of its microservices architecture.  


Demystifying Microservices: It's Complicated

Over the last few years, microservices architectures have been increasingly celebrated as a way for enterprises to become more agile, move faster, and deliver applications that keep pace with changing customer needs.

Because of their autonomous and atomic nature, microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs.

In this series of blog posts, we’ll discuss the growing complexity that organizations face as they establish and expand their microservices strategies, how a service mesh helps simplify that complexity, and why APIs and API management are a critical part of a comprehensive microservices strategy.

The rise of microservices

Small, fine-grained functions that can be independently scaled and deployed, microservices provide software development teams with a new, agile way of building applications.

As microservices architectures have become more closely associated with enterprise agility, microservices investments have accelerated across the business spectrum—not just among big companies, a majority of which are either experimenting with microservices or using them in production, but also among mid-market firms and SMBs.

Given the success stories that have accumulated, it’s easy to understand the enthusiasm. Netflix’s iterative transition from a monolith to microservices has famously helped the company to make its content available on a dizzying variety of screen sizes and device types.

South American retailer and Google Cloud customer Magazine Luiza has similarly leveraged microservices to accelerate the launch of new services, from in-store apps for employees to an Uber-like service to speed up deliveries, and help it earn praise as the “Amazon of Brazil.” Other major microservices adopters, including Airbnb, Disney, Dropbox, Goldman Sachs, and Twitter, have cut development time significantly.  

More microservices = more complexity

It’s clear that when microservices are implemented and managed well, they can deliver new levels of scale, speed, and responsiveness—the major IT ingredients a company needs to compete and delight customers.  

Implementing microservices successfully is notoriously complicated, however.

Instead of deploying all the code each time the application is updated, as is common in monolithic application architectures, enterprises can leverage microservices to deploy different pieces of an application on different schedules.

For this to work, individual teams or developers need the freedom to refactor and recombine services based on how the larger application is consumed by users. Because a microservice in an application depends on all the other microservices that compose the application, this complexity needs to be abstracted and managed so that one team’s work doesn’t break another’s.

If a business fails to recognize that complexity increases with the number of microservices it uses, the organization’s efforts are unlikely to succeed. Martin Fowler, one of the intellectual authors of the microservices movement, highlights “operational complexity” as one of the key drawbacks of the approach, and Gartner research vice president Gary Olliffe has warned that a majority of enterprises may find “microservices too complex, expensive, and disruptive to deliver a return on the investment required.”

As the use cases for microservices have expanded, so has the complexity. The original vision of microservices held that a microservice wouldn’t be shared outside the team its creator worked with. Among the things that made them “microservices,” as opposed to APIs or service oriented architecture (SOA), was the fact that developers no longer had to worry about the same level of documentation or change management that they did with a widely shared service.

But microservices are heralded as a valuable way to reuse functions and scale them to more developers, both inside and outside an organization. The granularity and agility they provide is too valuable to confine within a single team. As enterprises have attempted to extend the value of microservices to more teams and partners, many have struggled to make microservices secure, understand how microservices are used and are performing, and successfully deploy microservices beyond bespoke use cases.

So what needs to happen to surmount management problems as microservices networks grow within an organization? We'll discuss that in the next part of this series.

For more, read the Apigee eBook Maximizing Microservices.

Microservices in Action

How three companies use microservices to drive results

Technologies are often described in terms of hype cycles and reality distortion fields — and sometimes with good reason, as some heralded technologies have yet to really pan out (looking at you, 3D TVs) while others have taken decades of starts and stops to achieve widely useful real world use cases (welcome to renewed relevance, neural networks!).

For many companies, it might be hard to judge just where microservices-based architectures currently fall on the hype continuum. Because they offer small, lightweight snippets of functionality that can be independently deployed, microservices have been almost mythologized in some circles for their ability to promote speed and scale — but they’ve also prompted some analysts to predict that a majority of microservices deployments will fail under mounting complexity. Sort of mixed signals, right?

The reality is that microservices-based architectures can be very complicated—but they’re far from hype. Those complexities are becoming increasingly manageable, and companies from all industries — not just digital natives such as Netflix—are using microservices to drive real business results, not just innovation projects and bespoke use cases.

Specifically, more companies are recognizing three things:

  • As the number of microservices increases, so does complexity — but this complexity can be mitigated by employing a service mesh, so that individual developers do not have to build service-to-service communication logic into their code.
  • Though microservices were originally intended to be shared only within small teams, many microservices express valuable functions that should be shared throughout the organization.
  • This sharing can be efficiently and relatively easily facilitated by packaging and managing microservices as application programming interfaces (APIs).

To survey some of the possibilities, here are three enterprises, drawn from some of the companies that Google Cloud’s Apigee team has worked with, whose leaders are embracing microservices and APIs to drive results.

PricewaterhouseCoopers: Creating new services, innovating faster, and entering digital ecosystems

A part of “Big Four” accounting firm PricewaterhouseCoopers, PwC Australia is leveraging microservices and APIs to accelerate innovation and create new lines of business.

The organization has decades of experience in traditional professional services such as auditing, assurance, tax, legal, and management consulting, but over the last few years, PwC Australia’s Innovation and Venture group has set out to express these capabilities as software—including lightweight, single-function microservices made consumable to developers via APIs. Because these APIs abstract the complexity of microservices and other backend systems, the company can replace and update functionality without disrupting developers or users of the apps those developers build.

By enabling a faster, more granular approach to software development, microservices and APIs have helped the company to replace some of its backward-facing, reactive, and labor-intensive legacy methods with more forward-looking proactive services. Its Cashflow Coach product, for example, applies machine learning models to ledger and banking data in order to predict cash flows based not only on when invoices should be paid but also when customers have traditionally paid.

PwC’s efforts also include a microservices-based product that uses blockchain to prevent counterfeiting schemes in the meat industry, such as attempts to use fraudulent health and provenance information to sell old or sub-standard meat products that could be dangerous to consumers. The tool relies on a physical “krypto anchor” (an edible substance stamped on meat) that can be scanned at the point of unpacking in order to verify it matches a blockchain-based certificate. When it does, the meat’s data is verified.

The company is also packaging and monetizing services via APIs to open its data and technologies to new ecosystems of external partners and developers. Because these APIs are based on a microservices architecture, PwC can observe how its services are being used and responsively update particular aspects without disrupting the APIs developers use or breaking the end-user experiences those APIs enable.

“That really is the key part of our strategy,” said Trent Lund, head of PwC Australia’s Innovation and Ventures group. “We act as a middle layer. Extract, but don’t try to own the entire business ecosystem because you can’t grow fast enough.”

Walgreens: Rewarding customer loyalty

For the last several years, Walgreens has been blending the physical and digital retail experiences by extending its services to a range of partner apps powered by microservices and APIs.

“We’ve focused on making our services as light as possible and easy to develop on,” said Walgreens developer evangelist Drew Schweinfurth in an interview.

The company’s Photo Prints API, for example, has allowed external developers to build apps that let smartphone users snap a picture from their phone on their way to lunch, send the photo to a local Walgreens for processing, then drop by to pick up prints at the store on their way home.

Its Balance Rewards API, meanwhile, lets partners build Walgreens loyalty points into their health and wellness apps, allowing the apps to reward users with loyalty points when they do things such as take a jog or monitor their blood pressure. Apps that leverage the API reach hundreds of thousands of users and have distributed billions of rewards points.

Magazine Luiza: From brick-and-mortar to omnichannel

South American retailer Magazine Luiza is a strong testament to the results a company can achieve when it deploys microservices and APIs with vision and purpose. A traditional brick-and-mortar business for much of its history, the company has enjoyed soaring revenue and seen its stock become one of the hottest in Brazil as it has sharpened its focus on modernizing IT.

Just a few years ago, the company’s technology capabilities were relatively modest. Its e-commerce efforts relied on a monolithic backend built with over 150,000 lines of code. Functionality was tightly coupled, which made it easy to introduce bugs when pushing updates, limited the company’s ability to scale functionality, and entrenched silos between business and IT teams.

A small development team’s switch to API-first approaches helped turn everything around. As the team moved faster and introduced new products, the company scaled out the team’s best practices, eventually distributing development efforts across many small, independent teams. When its transformation began, the company had been delivering only eight new software deployments monthly — today, it pushes more than 40 per day.

The speed and agility that Magazine Luiza’s new architecture offered have produced big business results. Prior to its shift to microservices, for example, the company offered modest e-commerce capabilities that supported fewer than 50,000 SKUs. Today, Magazine Luiza operates a vast and growing online marketplace that enables new merchants to join via an API and includes over 1.5 million SKUs from sellers around the globe.

The company was able to scale up its efforts so dramatically, including expanding from a handful of engineers to more than one hundred in just a few years, because executive leadership not only understood the company’s digital vision but also enforced mandates to align the organization, such as using APIs as a communication interface between microservices and other systems, so that functionality could be consumed for innovation throughout the company.

It’s not about scale or speed—it’s about customers

Many of the companies whose stories are included in this article have achieved the benefits that make microservices so alluring—faster software development, greater ability to scale services and resources, and more.

But these technology capabilities alone are not why these companies have enjoyed success. Rather, they’ve been successful because they’ve deployed these capabilities with purpose to serve customer needs. APIs can help companies make their microservices more manageable and valuable—but a customer focus remains the most important ingredient to success.

This post originally appeared on Medium. For more on microservices, read our new eBook, "Maximizing Microservices."


Maximizing Microservices

New eBook

A microservices approach is a significant departure from traditional software development models in which applications are built and deployed in monolithic blocks of tightly coupled code. These legacy approaches can make updating applications time-consuming, increase the potential for updates to cause bugs, and often limit how easily and quickly an organization can share or monetize its data, functions, and applications.

Microservices, in contrast, are fine-grained, single-function component services that can be scaled and deployed independently, enabling organizations to update or add new features to an application without necessarily affecting the rest of the application’s functionality.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs. Managing the complexity of large numbers of microservices can be a serious challenge; doing so demands empowering developers to focus on what microservices do rather than how they do it. For this, enterprises are increasingly using a “service mesh”—an abstraction layer that provides a uniform way to connect, secure, monitor, and manage microservices.

The service mesh reduces many challenges associated with complexity but does not provide an easy way for enterprises to share the value of microservices with new teams or with external partners and developers. For this, an enterprise needs managed APIs. APIs and API management help expand the universe of developers who can take advantage of microservices, while giving organizations governance over how their microservices are used and shared. Whenever a microservice is shared outside the team that created it, that microservice should be packaged and managed as an API.

Put simply, if an enterprise is serious about its microservices strategy, it needs both a service mesh to help simplify the complexity of a network of microservices and API management to increase consumption and extend the value of microservices to new collaborators.

In the recently published eBook, "Maximizing Microservices," we explore:

  • The role of a service mesh in simplifying complexity intrinsic to microservices architectures
  • How APIs enable the value of microservices to be scaled and shared with additional teams, developers, and partners
  • Why an enterprise’s ability to secure, monitor the use of, and derive insights from microservices relies on properly managing the APIs that make microservices accessible
  • How a comprehensive microservices strategy combines both a service mesh and API management to manage complexity and securely increase consumption

Demystifying Microservices

It’s easy to see why microservices have become so popular.

Netflix’s shift to microservices has famously helped it to serve up its content to so many different devices. Companies such as Twitter and Airbnb have leveraged microservices to dramatically accelerate development. South American retailer Magazine Luiza’s adoption of microservices has helped it transform from a brick-and-mortar company with basic e-commerce capabilities to, as some analysts have put it, the “Amazon of Brazil.”* In an age when delivering great digital experiences is more important than ever, microservices have shown themselves to be an important part of the mix.

But microservices are also notoriously complicated—in no small part because it’s not always clear what microservices even are. Are they basically just an evolved version of service-oriented architectures (SOA)? Since a microservice must be exposed via an application programming interface (API) for an organization to scale it to new developers, does that mean managing microservices is basically the same as managing APIs?

As this article will discuss, the answer to both of these questions is a clear “no” — and companies looking to get the most from their microservices need to understand why.

What are microservices anyway?

Broadly, the term “microservices” refers to a software development methodology organized around granular business capabilities that can be recombined at scale for new apps. A microservice architecture, then, is a method of developing software applications more quickly by building them as collections of independent, small, modular services.

A primary benefit of this architecture is to empower decentralized governance that allows small, independent teams to innovate faster. Rather than working as a large team on a monolithic application, these small teams work on their own granular services, share those services with others via APIs so the services can be leveraged for more apps and digital experiences, and avoid disrupting the work of other developers or end user experiences because each microservice can be deployed independently.

Microservices can be deployed independently and easily moved across clouds because—unlike monolithic applications—microservices are deployed in containers that include all the business logic the service needs to run. Developers can use APIs to create applications composed of services from multiple clouds, and microservices in one cloud can interact via APIs with systems elsewhere.

In more specific terms, microservices are:

  • fine-grained services that deliver a unique, single responsibility crucial to an overall business goal.
  • independent and event-driven. They share nothing with other microservices and can be deployed or replaced without impacting other services.
  • deployed separately but able to communicate with other services through a well-defined interface, generally RESTful APIs that expose the service for developer consumption.
  • code components that are typically replaced wholesale, rather than versioned, and can be disposed of without affecting other components of the architecture.
  • governed in a decentralized way. Microservices architectures are generally incompatible with centralized legacy operational approaches that lock down IT assets and impose heavy governance. Microservices still require some blanket governance practices—security concepts such as OAuth should apply to all APIs used to consume microservices, for example. But it is important to let teams develop in the way that works best for them, with the best tools for their work and the autonomy to innovate.
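To make the characteristics above concrete, here is a minimal sketch of a fine-grained, single-responsibility service exposed over a RESTful interface. The SKU catalog, endpoint path, and port are illustrative assumptions, not details from any real service:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def lookup_price(sku):
    """Single responsibility: return the price for one SKU."""
    catalog = {"SKU-1001": 19.99, "SKU-1002": 5.49}  # stand-in for the service's own datastore
    if sku not in catalog:
        return {"error": "unknown sku"}
    return {"sku": sku, "price": catalog[sku]}

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /price/SKU-1001 -- the well-defined interface other teams consume
        sku = self.path.rsplit("/", 1)[-1]
        body = json.dumps(lookup_price(sku)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run the service on its own:
# HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```

Because the service shares nothing beyond this HTTP contract, it can be redeployed or replaced wholesale without touching any other component.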

How are businesses implementing microservices?

In many top businesses, the shift from a single development pipeline to multiple pipelines, and from waterfall development to automated continuous integration/continuous delivery (CI/CD) processes that facilitate rapid iteration, is already happening. Microservices are a major driver. Release cycles that were once long and arduous have now been reduced so that businesses can release daily or even multiple times in one day. Magazine Luiza, for example, leveraged microservices to accelerate from fewer than 10 deployments per month to more than 40 per day.

These changes emphasize that microservices may not be useful when a company is merely trying to bolt technology onto its status quo business model. Rather, microservices are a way for businesses to use technology to change how they operate. If a company relies on one large, slow-moving development team, microservices may not be much use until it has reorganized around dozens or even hundreds of small, fast-moving teams. That transition typically won’t happen overnight—but microservices are built to empower small, independent teams, so a company may need at least skunkworks projects to get started.

In a blog post, T-Mobile CIO Cody Sanford describes this trend well:

“Gone are the highly-coupled applications, massive waterfall deliveries, broken and manual processes, and mounting technical debt. In their place are digital architecture standards, exposed APIs, hundreds of applications containerized and in the cloud, and a passionate agile workforce.”

If an organization is prepared to make the right organizational changes, microservices can accelerate multi-cloud strategies because the services can be scaled elastically, enabling more workloads to run in the cloud and across clouds. They can be deployed to create a more flexible architecture in which individual services can be released, scaled, and updated without impact to the rest of the system.

Monolithic applications contain all of the business logic, rules, authentication, authorization, and data tied together, which typically makes them much more difficult and time-intensive to update in even relatively modest ways. Microservices architectures, in contrast, support the separation of duties into individual self-contained entities that work together to deliver the full experience—instead of a team spending months or years creating tightly-coupled code for a single app, many teams create microservices daily or weekly that can be leveraged indefinitely in changing contexts.

How should I manage microservices?

For many businesses, the best way to manage microservices will be through a combination of a mesh network such as Istio and an API management platform. It’s important not to conflate these two things. The former handles service-to-service interactions, such as load balancing, service authentication, service discovery, routing, and policy enforcement. The latter provides the tools, control, and visibility to scale microservices via APIs to new developers and connect them via APIs to new systems.

More specifically, a service mesh can provide fine-grained visibility and insights into the microservices environment, control traffic, implement security, and enforce policies for all services within the mesh. The advantage is that it all happens without making changes to application code.

API management, meanwhile, provides for easy developer consumption of microservices. Organizations need APIs so that teams can share microservices and interact with other systems. These APIs should not be designed for simple exposure but rather as products designed and managed to empower developers. An API management platform should provide mechanisms to onboard and manage developers, create an API catalog and documentation, generate API usage reporting, productize or monetize APIs, and enforce throttling, caching, and other security and reliability precautions.
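The enforcement side of API management can be sketched in a few lines. The key store, quota numbers, and function names below are hypothetical illustrations of the general technique, not Apigee's actual implementation:

```python
import time

# Illustrative key store: api_key -> (developer, requests-per-minute quota)
API_KEYS = {"abc123": ("dev-team-a", 3)}
_usage = {}  # api_key -> timestamps of recent requests

def enforce(api_key):
    """Validate the key, then enforce a per-minute quota. Returns (status, message)."""
    if api_key not in API_KEYS:
        return 401, "invalid API key"
    _, quota = API_KEYS[api_key]
    now = time.time()
    # Keep only requests inside the current 60-second window
    window = [t for t in _usage.get(api_key, []) if now - t < 60]
    if len(window) >= quota:
        return 429, "quota exceeded"
    window.append(now)
    _usage[api_key] = window
    return 200, "ok"
```

A management platform layers this kind of check, plus onboarding, documentation, analytics, and monetization, in front of every microservice exposed as an API, so individual teams don't reimplement it.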

Consequently, microservices are distinct from, but deeply connected to, APIs. Both are also distinct from legacy SOA techniques in several crucial ways: microservices thrive in more decentralized environments with more autonomous teams, and the emphasis is not just on reusing digital assets but on making them easy to consume so that developers can leverage them in new ways.

Microservices: only one piece of the digital transformation puzzle

The benefits of microservices don’t typically emerge until a business needs to scale a service, both in terms of the number of requests it needs to handle and the number of developers who need to work with it. Additionally, while companies may break some existing systems into microservices or use microservices for current development goals, many older systems, such as enterprise service bus (ESB) or mainframe systems, will remain part of an enterprise’s overall topology. These heterogeneous systems typically communicate with one another via APIs, emphasizing again that though APIs are central to microservices, what can be done with a microservice and what can be done with an API are not the same.

Microservices will remain challenging as organizations continue to implement them — but with the right understanding of how microservices fit alongside other technologies and should be managed, this complexity may be more conquerable than it appears!

This article originally appeared on Medium.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

Introducing Apigee API Management for Istio

Simplify exposing microservices as APIs both inside and outside your organization

Hundreds of companies rely on Apigee to create and deliver strong API programs to developers both inside and outside their organizations. At the same time, Istio has been gaining rapid acceptance as a way to bring control to networks of services.

Since joining Google Cloud in late 2016, the Apigee team has been working to make our products work more closely with other Google technologies. One of the first teams that we collaborated with was the Istio team.

It quickly became clear that both Istio and Apigee provide complementary capabilities to teams building APIs and services in today’s world. We agreed that our customers would benefit if we could ensure that Apigee and Istio work well together.

Today, we’re announcing the integration of API management with Istio so that microservices can be exposed as APIs and more easily shared with developers inside and outside your organization.

What Istio and Apigee bring

Istio simplifies life for organizations contemplating a “microservices” approach, or simply deploying many services that communicate with each other. Istio creates a “service mesh” that routes traffic between interrelated services in a secure and robust way, so the developers of each individual service can focus on what a service does rather than the details of how it communicates.

Apigee is built around the realization that, in order to be successful, modern organizations must create APIs and share them with other developers who might be part of the organization or who might be external, or even unknown. API teams using Apigee achieve this by combining APIs into “API products” that offer different capabilities and levels of service.

This enables them to control who consumes each API product and how much is consumed. The team can open an API to third-party developers without worrying that precious API capacity will be monopolized by a single developer without permission.

Bringing microservices and APIs together

When a group of developers builds a system composed of many individual microservices, a service mesh like Istio adds an essential layer of reliability and security to the whole mesh.

However, when those developers wish to share their services with another group, or with developers entirely outside the organization, a service mesh isn’t enough—it’s time for the service to be exposed as an API.

But a successful API needs to be easily consumed, and that's where Apigee API Management comes in. Without it, developers who use the API have no easy way to discover what it does or how to sign up and start using it, and the team producing the API has no mechanism to control how the API is used and how resources are allocated.

Adding API management to Istio

Previously, developers could add API management capabilities to Istio by simply deploying Apigee Edge outside the Istio mesh and configuring it to treat Istio like any other target service.

With this new capability, an Istio user can now expose one or more services from an Istio mesh as APIs by adding API management capabilities via Istio’s native configuration mechanism.

Furthermore, Apigee users can now take advantage of Istio to bring API management to a large set of services by adding an Istio mesh to their existing Apigee installation, and then moving services into the mesh. These users may find this to be a more scalable alternative than today’s approach of deploying a large number of Apigee Edge proxies, or deploying many instances of Apigee Edge Microgateway.

This is all possible because Istio includes a component called Mixer that runs as a central part of every Istio mesh. Mixer's plugin model enables new rules and policies to be added to groups of services in the mesh without touching the individual services or the nodes where they run.

Once Apigee integration is enabled within an Istio mesh, the operator can simply use Istio’s native configuration tools to apply Apigee's API management policies and reporting to any service. Management policies such as API key validation, quota enforcement, and JSON Web Token validation can then be easily controlled from the Apigee UI.

Likewise, the Apigee user may view and report on API analytics, just as they expect to today. There is no need to create or deploy additional API gateways or proxies—Apigee’s integration with Mixer ensures that policy configuration changes take effect across the whole mesh, without any additional steps.

Because Mixer adds API management features to the native configuration of an Istio mesh, it greatly reduces the amount of work required to turn a large number of services into APIs. For instance, with Istio it is possible to require a valid API key for a single service, for a group of services, or for all services in the mesh, all using the same configuration mechanism.

Is there more?

Apigee users are accustomed to employing a richer set of features that enable API producers to customize API requests and responses to simplify internal APIs for external consumption and help transform legacy systems into consumable APIs. None of this changes because of Istio—the existing Apigee Edge product is still powerful as a facade in front of services in an Istio mesh.

Furthermore, as the Istio community grows and the project adopts new capabilities, we hope to make some of these other Apigee features equally straightforward to add to an Istio mesh, so that we can bring the best of both products to our customers.

Learn more about Apigee API Management for Istio here.

Thanks to Scott Ganyo and Will Witman for their invaluable help with this post.




API Management for Microservices

Apigee Microgateway for Pivotal Cloud Foundry is now in public beta

We’re excited to announce the public beta of Apigee Microgateway for Pivotal Cloud Foundry, a new model for deploying a microgateway inside Cloud Foundry apps to provide integrated API management for microservices.

Service broker

This new feature is delivered as a service broker tile and deployed as a service inside Cloud Foundry. It ships with a new microgateway decorator buildpack, which leverages the meta buildpack to inject the microservices API runtime (Apigee Microgateway) into the app container as a sidecar proxy. (Learn more about buildpacks here.)

Apigee Microgateway

Apigee Microgateway is a lightweight, secure, HTTP-based message processor designed especially for microservices. It processes requests and responses to and from backend services securely while asynchronously pushing valuable API execution data to Apigee Edge, where it’s consumed by the analytics system. The microgateway provides vital functions, including traffic management (protection from spikes in traffic and usage quota enforcement, for example), security (OAuth and API key validation) and API analytics.
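The spike-protection idea can be illustrated with a minimal sketch that smooths traffic by enforcing a minimum spacing between requests. The class name and rate below are hypothetical and show the general technique, not how Apigee Microgateway implements it internally:

```python
class SpikeArrest:
    """Allow at most rate_per_sec requests by enforcing a minimum gap between them."""

    def __init__(self, rate_per_sec):
        self.min_interval = 1.0 / rate_per_sec
        self.last_allowed = None  # timestamp of the last request let through

    def allow(self, now):
        """Return True if a request arriving at time `now` should be let through."""
        if self.last_allowed is None or now - self.last_allowed >= self.min_interval:
            self.last_allowed = now
            return True
        return False
```

Unlike a quota, which caps total usage over a window, a spike arrest flattens bursts so a sudden surge cannot overwhelm the backend service.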

Microgateway decorator buildpack

A meta buildpack is a Cloud Foundry buildpack that enables decomposition and reordering (execution order) of buildpacks. Using this approach, the provided microgateway decorator buildpack gets inserted inside the app container during the app build process and “decorates” or frontends all calls made to the underlying app or API.
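Conceptually, the decorator frontends the app much like a wrapper function frontends a request handler: every call passes through the gateway logic before the app sees it. The sketch below illustrates only the pattern; the request shape, key value, and policy check are hypothetical:

```python
def app(request):
    """The underlying microservice: echoes a greeting."""
    return {"status": 200, "body": "hello, " + request.get("user", "anonymous")}

def microgateway(inner):
    """Sidecar-style wrapper: all ingress runs through this before reaching the app."""
    def handler(request):
        if request.get("api_key") != "expected-key":  # illustrative policy check
            return {"status": 401, "body": "unauthorized"}
        response = inner(request)
        # the real microgateway would also push analytics data to Apigee Edge here
        return response
    return handler

decorated_app = microgateway(app)
```

Because the wrapping happens at build time inside the app container, the app's own code never changes and never sees unpoliced traffic.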

Container sidecar 

Apigee Microgateway’s lightweight deployment model makes it easy to run coresident, as a container sidecar, with apps and microservices developed in Cloud Foundry.

The coresident model provides the following general benefits and features:

  • No routing overhead: Apigee Microgateway is embedded in the app container, so this option is great for apps with low-latency requirements.
  • Simplified lifecycle management and automatic scaling: Because Apigee Microgateway is part of the app container, it participates in the app lifecycle and scales up automatically as the app is scaled.
  • Consistent and pervasive protection for service implementations: All ingress to the service runs through Apigee Microgateway.
  • Analytics across all API exposures: Consolidated analytics is valuable across all internal and external consumption patterns. Apigee Microgateway provides a lightweight mechanism for capturing analytics for all use cases.
  • Consistent access to APIs: Common infrastructure is leveraged for API keys, tokens, and API product definitions.
  • Self-service: Internal and external API consumers are served through a single developer portal.

That’s all great, but we wanted to make it even simpler for Cloud Foundry app developers to start using this new feature. So we built a series of Cloud Foundry CLI plugins. These make pushing, binding, and unbinding the Apigee Service Broker more streamlined and easier to use.

These new capabilities eliminate friction for app developers and make it easy to enable API management for their microservices, so the services can be secured and managed while the relevant stakeholders get out-of-the-box, end-to-end visibility.

It’s easy to get started.

We’d love to hear from you; please reach out to us at the Apigee Community.



State of Microservices: Are You Prepared to Adopt?

Webcast replay

The allure of microservices is clear: shorter development time, continuous delivery, agility, and scalability are characteristics that all IT teams can appreciate. But microservices can increase complexity and require new infrastructure—in other words, they can lead teams into uncharted territory.

Join Gartner’s Anne Thomas and Google Cloud’s Ed Anuff as they present an in-depth look at the state of microservices.

They discuss:

  • what a microservice is and what it isn’t
  • trends in microservices architecture
  • the relationship between microservices, APIs, and SOA architecture
  • connecting, securing, managing, and monitoring microservices

Watch the webcast replay now.
