Istio

How HP Transformed Its Architecture with Microservices

Traditionally, enterprises built monolithic applications that contained all of their functionality in a single program. While this approach simplified debugging and deployment, maintaining, developing, and scaling monolithic applications proved to be a significant challenge, and in the digital age that challenge became a serious handicap.

To keep pace with digital innovation, many IT teams have adopted a microservices-based architecture by designing software applications as suites of independently deployable services.

Galo Gimenez, Distinguished Technologist and Platform Architect at HP Inc., and his team went through a similar transformation journey when developing the company’s core services and infrastructure (including, for example, identity management and content management, which are shared by business units across HP). Key considerations for Gimenez's team included security and encryption, developer productivity, and cost.

After extensive research, the team decided to adopt a microservices architecture with the help of Kubernetes container orchestration.

“Many teams at HP are already adopting microservices and container orchestration technology to deliver products faster and cheaper,” Gimenez says. “We decided to adopt Kubernetes because it offered a well-structured architecture along with a seamless developer experience—the teams working on the containers didn’t need to become experts on the entire architecture to be able to build and deploy applications.”

HP isn’t alone. Enterprises are increasingly adopting microservices to enable new levels of IT agility, scale, and innovation. Today, nearly 70% of organizations claim to be either using or investigating microservices, and nearly one-third currently use them in production.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. Yet these benefits come with increased complexity—many teams struggle to connect, secure, and monitor a growing network of microservices, and to extend consumption of valuable microservices beyond the teams that created them.

Gimenez and his team experienced this challenge firsthand.

“As monolithic applications transition towards a distributed microservice architecture, they become more difficult to manage and understand,” he says. “These architectures need basic services such as discovery, load balancing, failure recovery, metrics and monitoring, as well as complex operational requirements: monitoring, deep telemetry, rate limiting, access control, and end-to-end authentication.”

The solution to this challenge came in the form of Istio, a service mesh that helps simplify the complexities of microservices communications. It provides a standardized way to connect, secure, monitor, and manage microservices. A vital plane for service-to-service control and reliability, the service mesh handles application-layer load balancing, routing, service authentication, and more.

Other business units within HP can easily access these microservices-based core services and infrastructure using APIs. Sharing microservices is much easier this way; they can be exposed as APIs to other teams in the organization or to external partners and developers.

But when microservices are exposed as APIs, they require API management. API management enables enterprises to extend the value of microservices both within the enterprise and to external developers, with security, visibility and control.

Gimenez and his team adopted Apigee along with Istio and Kubernetes to maximize the power of their microservices architecture.

 

Demystifying Microservices: Understanding the Microservices Management Stack

In a previous post, we explored the role of a service mesh and APIs in a successful microservices strategy. Now we’ll wrap up this series with a more detailed look at the microservices management stack.

Containers

Microservices are typically deployed in containers, such as Docker containers, that provide everything needed for the service to run. Containers are a significant architectural departure from legacy applications that ran on purpose-configured hardware, and the agility they provide is one of the key reasons microservices can accelerate multi-cloud strategies: microservices can dynamically scale the resources they need up or down, and apps can draw from services spread across many clouds.
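As a hedged illustration of this packaging model (the base image, port, and entry point below are assumptions, not a prescribed setup), a Dockerfile for a small service might look like:

```dockerfile
# Illustrative only: a minimal image for a hypothetical Node.js microservice.
FROM node:18-alpine
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# The service listens on an illustrative port.
EXPOSE 8080
CMD ["node", "server.js"]
```

The image carries the runtime, dependencies, and code together, which is what lets the same service run unchanged on any cloud that can schedule containers.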

“We can run our workloads anywhere. Microservices and Kubernetes give us freedom. It’s been especially helpful with our multi-cloud strategy,” Magazine Luiza CTO André Fatala said in an interview.

But the very reasons for the success of this architectural model also present some challenges. Today’s developers build APIs and microservices without the kind of centralized oversight that once existed. Because an application might rely on calls to many services, it can be an enormous challenge to manage which services are allowed to communicate and how calls should be routed to maintain excellent end user experiences.  

The service mesh

In modern decentralized application architectures, containers offer the first important layer of control and resiliency. Typically, when enterprises deploy containers, they apply an orchestration layer such as Kubernetes to abstract the underlying hardware and enable the services to be exposed to developers via an API. The orchestration layer facilitates several important infrastructure scaling functions as well as transport layer load balancing and health checks.
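As a sketch of what that orchestration layer looks like in practice (the service name, image, and port are illustrative assumptions), a Kubernetes Deployment plus Service declares the desired replica count, a health check, and transport-layer load balancing:

```yaml
# Illustrative only: a hypothetical "catalog" microservice on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 3                 # the orchestrator keeps three healthy copies running
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example.com/catalog:v1   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:                 # orchestration-layer health check
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  selector:
    app: catalog
  ports:
  - port: 80
    targetPort: 8080          # transport-layer load balancing across the replicas
```

The Deployment handles scaling and restarts; the Service gives other workloads a stable name and load-balanced endpoint.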

A service mesh such as Istio constitutes the next layer in the microservices management stack. Its responsibilities include application layer load balancing, service authentication, policy enforcement, routing, telemetry reporting, and other important aspects of service-to-service control and reliability.

In essence, the service mesh lets developers decouple network functions from their service code. Developers don’t need to implement code for these resiliency and management functions in their services—they can focus on what the service does rather than the complexities of how it communicates in the underlying network.
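For example, resiliency settings such as retries and timeouts can be declared in Istio configuration rather than implemented in service code. This is a minimal sketch, assuming a hypothetical "catalog" service running in the mesh:

```yaml
# Illustrative only: retries and a timeout applied to a hypothetical
# "catalog" service via Istio configuration, with no change to service code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    retries:
      attempts: 3             # the sidecar proxies retry failed calls automatically
      perTryTimeout: 2s
    timeout: 10s              # fail fast instead of leaving callers hanging
```

The service's own code never mentions retries or timeouts; the mesh's sidecar proxies enforce them.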

The API layer

APIs sit above the service mesh and enable microservices to scale to more developers, inside and outside an organization. Though APIs are necessary to expose microservices, APIs and microservices are not the same. APIs can expose systems and digital assets beyond microservices, for example, and APIs support deeper levels of management functionality.

API management is vital to enforcing policies, and potentially upholding service-level agreements (SLAs), around the use of those microservices.

API management

As companies open up access to microservices and other digital assets via APIs, they must assume they are operating in a zero-trust environment. When developers deploy microservices in the public cloud and neglect to include common API security standards or consistent global policies, they expose the enterprise to potential security breaches.

An API management platform enables enterprises to implement security and governance policies, such as OAuth2 and bot detection, across all of their microservices APIs. It also provides a plane for analytics and reporting, granting visibility into and control over how microservices and other digital assets are used.
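As a hedged sketch of what such a policy looks like in practice (the policy name is illustrative), an Apigee Edge proxy can require a valid OAuth2 access token on every request with a single declarative policy attached to the proxy flow:

```xml
<!-- Illustrative only: an Apigee Edge OAuthV2 policy that rejects any
     request that does not carry a valid access token. -->
<OAuthV2 name="Verify-Access-Token">
  <Operation>VerifyAccessToken</Operation>
</OAuthV2>
```

Because the check lives in the management layer, every microservice behind the proxy gets the same enforcement without any service-side code.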

API management helps companies not only secure their APIs but also make them more consumable and useful to developers. The service mesh typically includes a registry of microservices to facilitate service-to-service communication, for example, but the API management platform includes discoverability tools that help developers avoid re-creating APIs that are already available and so help the enterprise avoid development bloat.

A robust API platform should also facilitate onboarding of developers via a self-service portal, include documentation resources, and offer tools for API monetization and productization.

“We want to expose more than just data [with our APIs]. We also want to expose functional elements for our developers to accelerate what they could create and imagine,” Trent Lund, head of Google Cloud customer PwC Australia’s Innovation and Ventures group, said in an interview.

Ultimately, a services management stack should include both a service mesh to keep microservices connected and secure while freeing the developers from service management distractions, and an API management platform to provide security, control, and visibility for all a company’s APIs.

This relationship between a service mesh and API management is so important that some of today’s most popular solutions have begun to bake aspects of both into their offerings. For example, Google Cloud’s Apigee API management platform is now natively integrated with the Istio service mesh. This integration enables microservices to be easily exposed as APIs, while taking advantage of Apigee’s robust API management capabilities.

Remember—it’s all about customers

To get the most leverage from a microservices approach, a business needs both a service mesh to manage microservices networks and API management to maintain security, control, and visibility as microservices are extended as APIs to more partners, teams, and developers. With this combination, corporations are equipped to reduce the complexity that has hamstrung many microservices efforts, and to accelerate developer innovation by increasing consumption of valuable microservices via APIs.

But beyond the technology itself, it’s important to remember microservices and APIs aren’t just about scale, agility, or any other IT buzzword—they’re about creating better experiences for customers. That’s the reason microservices have become so popular.

The number of new updates that a development team pushes, the number of services in a given compute cluster, and the number of developers consuming an API are all important—but only because these factors have helped companies to continue engaging and delighting customers.  

To learn more, read the Apigee eBook “Maximizing Microservices.”

Demystifying Microservices: What Happens When You Share

In the previous post in this series, we discussed how organizations are expanding their use of microservices, and how that leads to various struggles and complexity. Here, we take a look at the roles that APIs and service meshes play in successful microservices strategies.

Shared microservices are packaged as APIs

Microservices present management challenges from two angles—the complexity that arises from a growing network of microservices and the intricacies that result from sharing microservices as APIs with new teams and developers.   

To conquer the first challenge, enterprises must recognize that as they increase the number of microservices in production, the complexity of managing service-to-service communications within even a single team can dramatically increase. It’s important that developers be able to focus on the functionality microservices provide rather than on managing the complexities of how they interact.

For this, businesses are increasingly using a service mesh such as the open source Istio project. A service mesh provides a uniform way to connect, secure, manage, and monitor microservices without forcing developers to (probably inconsistently) bake these features into their service code.
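To make "without baking these features into service code" concrete: in recent Istio releases, for instance, mutual TLS can be required for all service-to-service traffic with a single resource. A minimal sketch:

```yaml
# Illustrative only: require mutual TLS for all service-to-service traffic
# in the mesh, without changing any service code (current Istio API).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applies mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT
```

Every sidecar proxy then encrypts and authenticates traffic on behalf of its service, uniformly and consistently.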

Moving to the second challenge, as enterprises share microservices, they need APIs to package them for easy developer consumption. Any time a microservice is shared outside the team that created it, the microservice should be presented as an API.

APIs need to be managed

Without management, an organization can’t gauge how reliable its systems are and how developers are adopting APIs and microservices. Without API management, the organization cannot determine how easy it is for developers to consume APIs, control who consumes APIs, and dictate how much traffic each API consumer uses. The organization has no assurance developers are implementing security precautions properly—or at all.

When a business wants to scale microservices to new teams, partners, and developers, API management must become a cornerstone of its strategy.

Doing microservices well, then, means two things: applying a service mesh to maintain resilience and security while freeing developers from having to implement these solutions into their code; and using well-managed APIs to extend the value of these microservices beyond the teams in which they were created.

In the next post, we’ll discuss how all of these elements fit together into the microservices management stack.

For more, read the Apigee eBook “Maximizing Microservices.”

Demystifying Microservices: It's Complicated

Over the last few years, microservices architectures have been increasingly celebrated as a way for enterprises to become more agile, move faster, and deliver applications that keep pace with changing customer needs.

Because of their autonomous and atomic nature, microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs.

In this series of blog posts, we’ll discuss the growing complexity that organizations face as they establish and expand their microservices strategies, how a service mesh helps simplify that complexity, and why APIs and API management are a critical part of a comprehensive microservices strategy.

The rise of microservices

Small, fine-grained functions that can be independently scaled and deployed, microservices provide software development teams with a new, agile way of building applications.

As microservices architectures have become more closely associated with enterprise agility, microservices investments have accelerated across the business spectrum—not just among big companies, a majority of which are either experimenting with microservices or using them in production, but also among mid-market firms and SMBs.

Given the success stories that have accumulated, it’s easy to understand the enthusiasm. Netflix’s iterative transition from a monolith to microservices has famously helped the company to make its content available on a dizzying variety of screen sizes and device types.

South American retailer and Google Cloud customer Magazine Luiza has similarly leveraged microservices to accelerate the launch of new services, from in-store apps for employees to an Uber-like service to speed up deliveries, and help it earn praise as the “Amazon of Brazil.” Other major microservices adopters, including Airbnb, Disney, Dropbox, Goldman Sachs, and Twitter, have cut development time significantly.  

More microservices = more complexity

It’s clear that when microservices are implemented and managed well, they can deliver new levels of scale, speed, and responsiveness—the major IT ingredients a company needs to compete and delight customers.  

Implementing microservices successfully is notoriously complicated, however.

Instead of deploying all the code each time the application is updated, as is common in monolithic application architectures, enterprises can leverage microservices to deploy different pieces of an application on different schedules.

For this to work, individual teams or developers need the freedom to refactor and recombine services based on how the larger application is consumed by users. Because each microservice in an application may depend on many of the other microservices that compose it, this complexity needs to be abstracted and managed so that one team’s work doesn’t break another’s.

If a business fails to recognize that complexity increases with the number of microservices it uses, the organization’s efforts are unlikely to succeed. Martin Fowler, one of the intellectual authors of the microservices movement, highlights “operational complexity” as one of the key drawbacks of the approach, and Gartner research vice president Gary Olliffe has warned that a majority of enterprises may find “microservices too complex, expensive, and disruptive to deliver a return on the investment required.”

As the use cases for microservices have expanded, so has the complexity. The original vision of microservices held that a microservice wouldn’t be shared outside the team that created it. Among the things that made them “microservices,” as opposed to APIs or service-oriented architecture (SOA), was the fact that developers no longer had to worry about the same level of documentation or change management that they did with a widely shared service.

But microservices are heralded as a valuable way to reuse functions and scale them to more developers, both inside and outside an organization. The granularity and agility they provide are too valuable to confine within a single team. As enterprises have attempted to extend the value of microservices to more teams and partners, many have struggled to make microservices secure, understand how microservices are used and are performing, and successfully deploy microservices beyond bespoke use cases.

So what needs to happen to surmount management problems as microservices networks grow within an organization? We'll discuss that in the next part of this series.

For more, read the Apigee eBook “Maximizing Microservices.”

Maximizing Microservices

New eBook

A microservices approach is a significant departure from traditional software development models in which applications are built and deployed in monolithic blocks of tightly coupled code. These legacy approaches can make updating applications time-consuming, increase the potential for updates to cause bugs, and often limit how easily and quickly an organization can share or monetize its data, functions, and applications.

Microservices, in contrast, are fine-grained, single-function component services that can be scaled and deployed independently, enabling organizations to update or add new features to an application without necessarily affecting the rest of the application’s functionality.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs. Managing the complexity of large numbers of microservices can be a serious challenge; doing so demands empowering developers to focus on what microservices do rather than how they are doing it. For this, enterprises are increasingly using a “service mesh”—an abstraction layer that provides a uniform way to connect, secure, monitor, and manage microservices.

The service mesh reduces many challenges associated with complexity but does not provide an easy way for enterprises to share the value of microservices with new teams or with external partners and developers. For this, an enterprise needs managed APIs. APIs and API management help expand the universe of developers who can take advantage of microservices, while giving organizations governance over how their microservices are used and shared. Whenever a microservice is shared outside the team that created it, that microservice should be packaged and managed as an API.

Put simply, if an enterprise is serious about its microservices strategy, it needs both a service mesh to help simplify the complexity of a network of microservices and API management to increase consumption and extend the value of microservices to new collaborators.

In the recently published eBook, "Maximizing Microservices," we explore:

  • The role of a service mesh in simplifying complexity intrinsic to microservices architectures
  • How APIs enable the value of microservices to be scaled and shared with additional teams, developers, and partners
  • Why an enterprise’s ability to secure, monitor the use of, and derive insights from microservices relies on properly managing the APIs that make microservices accessible
  • How a comprehensive microservices strategy combines both a service mesh and API management to manage complexity and securely increase consumption.

Introducing Apigee API Management for Istio

Simplify exposing microservices as APIs both inside and outside your organization

Hundreds of companies rely on Apigee to create and deliver strong API programs to developers both inside and outside their organizations. At the same time, Istio has been gaining rapid acceptance as a way to bring control to networks of services.

Since joining Google Cloud in late 2016, the Apigee team has been working to make our products work more closely with other Google technologies. One of the first teams that we collaborated with was the Istio team.

It quickly became clear that both Istio and Apigee provide complementary capabilities to teams building APIs and services in today’s world. We agreed that our customers would benefit if we could ensure that Apigee and Istio work well together.

Today, we’re announcing the integration of API management with Istio so that microservices can be exposed as APIs and more easily shared with developers inside and outside your organization.

What Istio and Apigee bring

Istio simplifies life for organizations contemplating a “microservices” approach, or simply deploying many services that communicate with each other. Istio creates a “service mesh” that routes traffic between interrelated services in a secure and robust way, so that the developers of each individual service can focus on what a service does rather than the details of how it communicates.

Apigee is built around the realization that, in order to be successful, modern organizations must create APIs and share them with other developers who might be part of the organization or who might be external, or even unknown. API teams using Apigee achieve this by combining APIs into “API products” that offer different capabilities and levels of service.

This enables them to control who consumes each API product, and how much is consumed. The team gets the ability to open an API to third-party developers without worrying that precious API capacity will be monopolized by a single developer.

Bringing microservices and APIs together

When a group of developers builds a system composed of many individual microservices, a service mesh like Istio adds an essential layer of reliability and security to the whole mesh.

However, when those developers wish to share their services with another group, or with developers entirely outside the organization, a service mesh isn’t enough—it’s time for the service to be exposed as an API.

But a successful API needs to be easily consumed, and that’s where Apigee API Management comes in. Without it, developers who use the API have no easy way to discover what an API does, or how to sign up and start using it. The team producing the API has no mechanism to control how the API is used and how resources are allocated.

Adding API management to Istio

Previously, developers could add API management capabilities to Istio by simply deploying Apigee Edge outside the Istio mesh and configuring it to treat Istio like any other target service.

With this new capability, an Istio user can now expose one or more services from an Istio mesh as APIs by adding API management capabilities via Istio’s native configuration mechanism.

Furthermore, Apigee users can now take advantage of Istio to bring API management to a large set of services by adding an Istio mesh to their existing Apigee installation, and then moving services into the mesh. These users may find this to be a more scalable alternative than today’s approach of deploying a large number of Apigee Edge proxies, or deploying many instances of Apigee Edge Microgateway.

This is all possible because Istio includes a component called Mixer that runs as a central part of every Istio mesh. Mixer's plugin model enables new rules and policies to be added to groups of services in the mesh without touching the individual services or the nodes where they run.

Once Apigee integration is enabled within an Istio mesh, the operator can simply use Istio’s native configuration tools to apply Apigee’s API management policies and reporting to any service. Management policies such as API key validation, quota enforcement, and JSON Web Token validation can then be easily controlled from the Apigee UI.

Likewise, the Apigee user may view and report on API analytics, just as they expect to today. There is no need to create or deploy additional API gateways or proxies—Apigee’s integration with Mixer ensures that policy configuration changes take effect across the whole mesh, without any additional steps.

Because Mixer adds API management features to the native configuration of an Istio mesh, it greatly reduces the amount of work required to turn a large number of services into APIs. For instance, with Istio it is possible to ensure that a valid API key is required for a single service, for a group of services, or for all services in the mesh, all using the same configuration mechanism.
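A hedged sketch of that mechanism follows; the handler and instance names are hypothetical placeholders, not the adapter's documented schema. The key idea is that a Mixer rule's match expression determines which services the policy covers:

```yaml
# Illustrative only: a generic Mixer rule binding a hypothetical Apigee
# handler to every service in one namespace. The real adapter's handler
# and instance definitions are documented by Apigee.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: apigee-auth
  namespace: istio-system
spec:
  # Narrow or widen this match to cover one service, a group, or the whole mesh.
  match: destination.namespace == "default"
  actions:
  - handler: apigee-handler        # hypothetical handler name
    instances:
    - apigee-authorization         # hypothetical instance name
```

Changing only the match expression rescopes enforcement, without touching the services themselves.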

Is there more?

Apigee users are accustomed to employing a richer set of features that enable API producers to customize API requests and responses to simplify internal APIs for external consumption and help transform legacy systems into consumable APIs. None of this changes because of Istio—the existing Apigee Edge product is still powerful as a facade in front of services in an Istio mesh.

Furthermore, as the Istio community grows and the project adopts new capabilities, we hope to make some of these other Apigee features equally straightforward to add to an Istio mesh, so that we can bring the best of both products to our customers.

Learn more about Apigee API Management for Istio here.

Thanks to Scott Ganyo and Will Witman for their invaluable help with this post.

To learn more, read the Apigee eBook, "Maximizing Microservices."