Demystifying Microservices: Understanding the Microservices Management Stack
In a previous post, we explored the role of a service mesh and APIs in a successful microservices strategy. Now we’ll wrap up this series with a more detailed look at the microservices management stack.
Microservices are typically deployed in containers, using technologies such as Docker, that package everything the service needs to run. Containers are a significant architectural departure from legacy applications that ran on purpose-configured hardware, and the agility they provide is one of the key reasons microservices can accelerate multi-cloud strategies: microservices can dynamically scale their resources up or down, and apps can draw from services spread across many clouds.
“We can run our workloads anywhere. Microservices and Kubernetes give us freedom. It’s been especially helpful with our multi-cloud strategy,” Magazine Luiza CTO André Fatala said in an interview.
But the very reasons for this architectural model's success also present some challenges. Today's developers build APIs and microservices without the kind of centralized oversight that once existed. Because an application might rely on calls to many services, it can be an enormous challenge to manage which services are allowed to communicate and how calls should be routed to maintain a good end-user experience.
The service mesh
In modern decentralized application architectures, containers offer the first important layer of control and resiliency. Typically, when enterprises deploy containers, they apply an orchestration layer such as Kubernetes to abstract the underlying hardware and enable the services to be exposed to developers via an API. The orchestration layer facilitates several important infrastructure scaling functions as well as transport layer load balancing and health checks.
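To make that concrete, here is a minimal sketch of a Kubernetes Deployment and Service for a hypothetical microservice (all names, images, and values are illustrative, not drawn from this article): the orchestration layer keeps the declared number of replicas running, uses the health check to route traffic only to ready instances, and load-balances requests across them.

```yaml
# Hypothetical Kubernetes manifest for a containerized microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # illustrative service name
spec:
  replicas: 3                   # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # illustrative image
        ports:
        - containerPort: 8080
        readinessProbe:                 # health check gating traffic
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service                   # transport-layer load balancing
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080            # distributes requests across healthy pods
```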
A service mesh such as Istio constitutes the next layer in the microservices management stack. Its responsibilities include application layer load balancing, service authentication, policy enforcement, routing, telemetry reporting, and other important aspects of service-to-service control and reliability.
In essence, the service mesh lets developers decouple network functions from their service code. Developers don’t need to implement code for these resiliency and management functions in their services—they can focus on what the service does rather than the complexities of how it communicates in the underlying network.
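A sketch of an Istio VirtualService illustrates this decoupling: retry behavior and traffic routing live in mesh configuration rather than in application code (the service name, subsets, and weights here are hypothetical).

```yaml
# Hypothetical Istio VirtualService: traffic policy lives in the mesh,
# not in the service code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
  - orders
  http:
  - retries:                # resiliency handled by the mesh
      attempts: 3
      perTryTimeout: 2s
    route:
    - destination:
        host: orders
        subset: v1
      weight: 90            # send 90% of traffic to v1...
    - destination:
        host: orders
        subset: v2
      weight: 10            # ...and 10% to a canary v2
```

Changing the retry policy or shifting canary traffic is then a configuration update, with no redeploy of the service itself.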
The API layer
APIs sit above the service mesh and extend the reach of microservices to more developers, inside and outside an organization. Though APIs are necessary to expose microservices, APIs and microservices are not the same. APIs can expose systems and digital assets beyond microservices, for example, and they support deeper levels of management functionality.
API management is vital to enforcing policies, and potentially upholding service-level agreements (SLAs), around the use of those microservices.
As companies open up access to microservices and other digital assets via APIs, they must assume they are operating in a zero-trust environment. When developers deploy microservices in the public cloud and neglect to include common API security standards or consistent global policies, they expose the enterprise to potential security breaches.
An API management platform enables enterprises to implement security and governance policies, such as OAuth2 and bot detection, across all of their microservices APIs. It also provides a plane for analytics and reporting, granting visibility into and control over how microservices and other digital assets are used.
API management helps companies not only secure their APIs but also make them more consumable and useful to developers. The service mesh typically includes a registry of microservices to facilitate service-to-service communication, for example, but the API management platform includes discoverability tools that help developers avoid re-creating APIs that are already available and so help the enterprise avoid development bloat.
A robust API platform should also facilitate onboarding of developers via a self-service portal, include documentation resources, and offer tools for API monetization and productization.
“We want to expose more than just data [with our APIs]. We also want to expose functional elements for our developers to accelerate what they could create and imagine,” Trent Lund, head of Google Cloud customer PwC Australia’s Innovation and Ventures group, said in an interview.
Ultimately, a microservices management stack should include both a service mesh, to keep microservices connected and secure while freeing developers from service-management distractions, and an API management platform, to provide security, control, and visibility for all of a company's APIs.
This relationship between a service mesh and API management is so important that some of today’s most popular solutions have begun to bake aspects of both into their offerings. For example, Google Cloud’s Apigee API management platform is now natively integrated with the Istio service mesh. This integration enables microservices to be easily exposed as APIs, while taking advantage of Apigee’s robust API management capabilities.
Remember—it’s all about customers
To get the most leverage from a microservices approach, a business needs both a service mesh to manage microservices networks and API management to maintain security, control, and visibility as microservices are extended as APIs to more partners, teams, and developers. With this combination, companies are equipped to reduce the complexity that has hamstrung many microservices efforts, and to accelerate developer innovation by increasing consumption of valuable microservices via APIs.
But beyond the technology itself, it’s important to remember microservices and APIs aren’t just about scale, agility, or any other IT buzzword—they’re about creating better experiences for customers. That’s the reason microservices have become so popular.
The number of new updates that a development team pushes, the number of services in a given compute cluster, and the number of developers consuming an API are all important—but only because these factors have helped companies to continue engaging and delighting customers.
To learn more, read the Apigee eBook “Maximizing Microservices.”