Microservices

Demystifying Microservices: It's Complicated

Over the last few years, microservices architectures have been increasingly celebrated as a way for enterprises to become more agile, move faster, and deliver applications that keep pace with changing customer needs.

Because of their autonomous and atomic nature, microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs.

In this series of blog posts, we’ll discuss the growing complexity that organizations face as they establish and expand their microservices strategies, how a service mesh helps simplify that complexity, and why APIs and API management are a critical part of a comprehensive microservices strategy.

The rise of microservices

Small, fine-grained functions that can be independently scaled and deployed, microservices provide software development teams with a new, agile way of building applications.

As microservices architectures have become more closely associated with enterprise agility, microservices investments have accelerated across the business spectrum—not just among big companies, a majority of which are either experimenting with microservices or using them in production, but also among mid-market firms and SMBs.

Given the success stories that have accumulated, it’s easy to understand the enthusiasm. Netflix’s iterative transition from a monolith to microservices has famously helped the company to make its content available on a dizzying variety of screen sizes and device types.

South American retailer and Google Cloud customer Magazine Luiza has similarly leveraged microservices to accelerate the launch of new services, from in-store apps for employees to an Uber-like service that speeds up deliveries, helping it earn praise as the “Amazon of Brazil.” Other major microservices adopters, including Airbnb, Disney, Dropbox, Goldman Sachs, and Twitter, have cut development time significantly.

More microservices = more complexity

It’s clear that when microservices are implemented and managed well, they can deliver new levels of scale, speed, and responsiveness—the major IT ingredients a company needs to compete and delight customers.  

Implementing microservices successfully is notoriously complicated, however.

Instead of deploying all the code each time the application is updated, as is common in monolithic application architectures, enterprises can leverage microservices to deploy different pieces of an application on different schedules.

For this to work, individual teams or developers need the freedom to refactor and recombine services based on how the larger application is consumed by users. Because the microservices that compose an application depend on one another, this complexity needs to be abstracted and managed so that one team’s work doesn’t break another’s.

If a business fails to recognize that complexity increases with the number of microservices it uses, the organization’s efforts are unlikely to succeed. Martin Fowler, one of the intellectual authors of the microservices movement, highlights “operational complexity” as one of the key drawbacks of the approach, and Gartner research vice president Gary Olliffe has warned that a majority of enterprises may find “microservices too complex, expensive, and disruptive to deliver a return on the investment required.”

As the use cases for microservices have expanded, so has the complexity. The original vision of microservices held that a microservice wouldn’t be shared outside the team that created it. Among the things that made them “microservices,” as opposed to APIs or service-oriented architecture (SOA), was the fact that developers no longer had to worry about the same level of documentation or change management that they did with a widely shared service.

But microservices are heralded as a valuable way to reuse functions and scale them to more developers, both inside and outside an organization. The granularity and agility they provide are too valuable to confine within a single team. As enterprises have attempted to extend the value of microservices to more teams and partners, many have struggled to secure their microservices, understand how those microservices are used and how they are performing, and deploy them beyond bespoke use cases.

So what needs to happen to surmount management problems as microservices networks grow within an organization? We'll discuss that in the next part of this series.

For more, read the Apigee eBook Maximizing Microservices.

Microservices in Action

How three companies use microservices to drive results

Technologies are often described in terms of hype cycles and reality distortion fields — and sometimes with good reason, as some heralded technologies have yet to really pan out (looking at you, 3D TVs) while others have taken decades of starts and stops to achieve widely useful real world use cases (welcome to renewed relevance, neural networks!).

For many companies, it might be hard to judge just where microservices-based architectures currently fall on the hype continuum. Because they offer small, lightweight snippets of functionality that can be independently deployed, microservices have been almost mythologized in some circles for their ability to promote speed and scale — but they’ve also prompted some analysts to predict that a majority of microservices deployments will fail under mounting complexity. Sort of mixed signals, right?

The reality is that microservices-based architectures can be very complicated—but they’re far from hype. Those complexities are becoming increasingly manageable, and companies from all industries — not just digital natives such as Netflix—are using microservices to drive real business results, not just innovation projects and bespoke use cases.

Specifically, more companies are recognizing three things:

  • As the number of microservices increases, so does complexity — but this complexity can be mitigated by employing a service mesh, so that developers do not have to build service-to-service communication logic into their code.
  • Though microservices were originally intended to be shared only within small teams, many microservices express valuable functions that should be shared throughout the organization.
  • This sharing can be efficiently and relatively easily facilitated by packaging and managing microservices as application programming interfaces (APIs).
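The idea of moving service-to-service communication out of application code can be sketched concretely. Without a mesh, each service embeds resilience logic such as retries and timeouts; a sidecar proxy applies the same policy uniformly instead. Below is a minimal, hypothetical retry helper; the service name and failure pattern are invented for illustration:

```python
import time

def call_with_retries(request_fn, max_attempts=3, backoff_seconds=0.1):
    """Call a remote service, retrying transient failures with backoff.

    In a service mesh, a sidecar proxy applies this policy uniformly,
    so application code can be a bare `request_fn()` call instead.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))

# Simulate a flaky downstream service that fails twice, then succeeds.
attempts = []
def flaky_inventory_service():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient network failure")
    return {"sku": "ABC-123", "in_stock": True}

result = call_with_retries(flaky_inventory_service)
```

With a mesh in place, the retry loop disappears from every service and becomes a single, centrally managed policy.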

To survey some of the possibilities, here are three enterprises, drawn from some of the companies that Google Cloud’s Apigee team has worked with, whose leaders are embracing microservices and APIs to drive results.

PricewaterhouseCoopers: Creating new services, innovating faster, and entering digital ecosystems

Part of “Big Four” accounting firm PricewaterhouseCoopers, PwC Australia is leveraging microservices and APIs to accelerate innovation and create new lines of business.

The organization has decades of experience in traditional professional services such as auditing, assurance, tax, legal, and management consulting, but over the last few years, PwC Australia’s Innovation and Ventures group has set out to express these capabilities as software—including lightweight, single-function microservices made consumable by developers via APIs. Because these APIs abstract the complexity of microservices and other backend systems, the company can replace and update functionality without disrupting developers or users of the apps those developers build.

By enabling a faster, more granular approach to software development, microservices and APIs have helped the company to replace some of its backward-facing, reactive, and labor-intensive legacy methods with more forward-looking, proactive services. Its Cashflow Coach product, for example, applies machine learning models to ledger and banking data in order to predict cash flows based not only on when invoices should be paid but also on when customers have traditionally paid.

PwC’s efforts also include a microservices-based product that uses blockchain to prevent counterfeiting in the meat industry, such as attempts to use fraudulent health and provenance information to sell old or substandard meat products that could be dangerous to consumers. The tool relies on a physical “krypto anchor” (an edible substance stamped on the meat) that can be scanned at the point of unpacking to verify that it matches a blockchain-based certificate; when it does, the meat’s provenance is verified.

The company is also packaging and monetizing services via APIs to open its data and technologies to new ecosystems of external partners and developers. Because these APIs are based on a microservices architecture, PwC can observe how its services are being used and responsively update particular aspects without disrupting the APIs developers use or breaking the end-user experiences those APIs enable.

“That really is the key part of our strategy,” said Trent Lund, head of PwC Australia’s Innovation and Ventures group. “We act as a middle layer. Extract, but don’t try to own the entire business ecosystem because you can’t grow fast enough.”

Walgreens: Rewarding customer loyalty

For the last several years, Walgreens has been blending the physical and digital retail experiences by extending its services to a range of partner apps powered by microservices and APIs.

“We’ve focused on making our services as light as possible and easy to develop on,” said Walgreens developer evangelist Drew Schweinfurth in an interview.

The company’s Photo Prints API, for example, has allowed external developers to build apps that let smartphone users snap a picture from their phone on their way to lunch, send the photo to a local Walgreens for processing, then drop by to pick up prints at the store on their way home.

Its Balance Rewards API, meanwhile, lets partners build Walgreens loyalty points into their health and wellness apps, allowing the apps to reward users with loyalty points when they do things such as take a jog or monitor their blood pressure. Apps that leverage the API reach hundreds of thousands of users and have distributed billions of rewards points.

Magazine Luiza: From brick-and-mortar to omnichannel

South American retailer Magazine Luiza is a strong testament to the results a company can achieve when it deploys microservices and APIs with vision and purpose. A traditional brick-and-mortar business for much of its history, the company has enjoyed soaring revenue and seen its stock become one of the hottest in Brazil as it has sharpened its focus on modernizing IT.

Just a few years ago, the company’s technology capabilities were relatively modest. Its e-commerce efforts relied on a monolithic backend built with over 150,000 lines of code. Functionality was tightly-coupled, which made it easy to introduce bugs when pushing updates, limited the company’s ability to scale functionality, and entrenched silos between business and IT teams.

A small development team’s switch to API-first approaches helped turn everything around. As the team moved faster and introduced new products, the company scaled out the team’s best practices, eventually distributing development efforts across many small, independent teams. When its transformation began, the company had been delivering only eight new software deployments monthly — today, it pushes more than 40 per day.

The speed and agility that Magazine Luiza’s new architecture offered have produced big business results. Prior to its shift to microservices, for example, the company offered modest e-commerce capabilities that supported fewer than 50,000 SKUs. Today, Magazine Luiza operates a vast and growing online marketplace that enables new merchants to join via an API and includes over 1.5 million SKUs from sellers around the globe.

The company was able to scale up its efforts so dramatically, including expanding from a handful of engineers to more than one hundred in just a few years, because executive leadership not only understood the company’s digital vision but also enforced mandates to align the organization, such as using APIs as a communication interface between microservices and other systems, so that functionality could be consumed for innovation throughout the company.

It’s not about scale or speed—it’s about customers

Many of the companies whose stories are included in this article have achieved the benefits that make microservices so alluring—faster software development, greater ability to scale services and resources, and more.

But these technology capabilities alone are not why these companies have enjoyed success. Rather, they’ve been successful because they’ve deployed these capabilities with purpose to serve customer needs. APIs can help companies make their microservices more manageable and valuable—but a customer focus remains the most important ingredient to success.

This post originally appeared on Medium. For more on microservices, read our new eBook, "Maximizing Microservices."

 

Maximizing Microservices

New eBook

A microservices approach is a significant departure from traditional software development models in which applications are built and deployed in monolithic blocks of tightly coupled code. These legacy approaches can make updating applications time-consuming, increase the potential for updates to cause bugs, and often limit how easily and quickly an organization can share or monetize its data, functions, and applications.

Microservices, in contrast, are fine-grained, single-function component services that can be scaled and deployed independently, enabling organizations to update or add new features to an application without necessarily affecting the rest of the application’s functionality.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. But these benefits come with some costs. Managing the complexity of large numbers of microservices can be a serious challenge; doing so demands empowering developers to focus on what microservices do rather than how they are doing it. For this, enterprises are increasingly using a “service mesh”—an abstraction layer that provides a uniform way to connect, secure, monitor, and manage microservices.

The service mesh reduces many challenges associated with complexity but does not provide an easy way for enterprises to share the value of microservices with new teams or with external partners and developers. For this, an enterprise needs managed APIs. APIs and API management help expand the universe of developers who can take advantage of microservices, while giving organizations governance over how their microservices are used and shared. Whenever a microservice is shared outside the team that created it, that microservice should be packaged and managed as an API.

Put simply, if an enterprise is serious about its microservices strategy, it needs both a service mesh to help simplify the complexity of a network of microservices and API management to increase consumption and extend the value of microservices to new collaborators.

In the recently published eBook, "Maximizing Microservices," we explore:

  • The role of a service mesh in simplifying complexity intrinsic to microservices architectures
  • How APIs enable the value of microservices to be scaled and shared with additional teams, developers, and partners
  • Why an enterprise’s ability to secure, monitor the use of, and derive insights from microservices relies on properly managing the APIs that make microservices accessible
  • How a comprehensive microservices strategy combines both a service mesh and API management to manage complexity and securely increase consumption.

Demystifying Microservices

It’s easy to see why microservices have become so popular.

Netflix’s shift to microservices has famously helped it to serve up its content to so many different devices. Companies such as Twitter and Airbnb have leveraged microservices to dramatically accelerate development. South American retailer Magazine Luiza’s adoption of microservices has helped it transform from a brick-and-mortar company with basic e-commerce capabilities to, as some analysts have put it, the “Amazon of Brazil.” In an age when delivering great digital experiences is more important than ever, microservices have shown themselves to be an important part of the mix.

But microservices are also notoriously complicated—in no small part because it’s not always clear what microservices even are. Are they basically just an evolved version of service-oriented architectures (SOA)? Since a microservice must be exposed via an application programming interface (API) for an organization to scale it to new developers, does that mean managing microservices is basically the same as managing APIs?

As this article will discuss, the answer to both of these questions is a clear “no” — and companies looking to get the most from their microservices need to understand why.

What are microservices anyway?

Broadly, the term “microservices” refers to a software development methodology organized around granular business capabilities that can be recombined at scale for new apps. A microservice architecture, then, is a method of developing software applications more quickly by building them as collections of independent, small, modular services.

A primary benefit of this architecture is to empower decentralized governance that allows small, independent teams to innovate faster. Rather than working as a large team on a monolithic application, these small teams work on their own granular services, share those services with others via APIs so the services can be leveraged for more apps and digital experiences, and avoid disrupting the work of other developers or end user experiences because each microservice can be deployed independently.

Microservices can be deployed independently and easily moved across clouds because—unlike monolithic applications—microservices are deployed in containers that include all the business logic the service needs to run. Developers can use APIs to create applications composed of services from multiple clouds, and microservices in one cloud can interact via APIs with systems elsewhere.

In more specific terms, microservices are:

  • fine-grained services that each fulfill a unique, single responsibility crucial to an overall business goal.
  • independent and event-driven. They share nothing with other microservices and can be deployed or replaced without impacting other services.
  • deployed separately but able to communicate with other services through a well-defined interface, generally RESTful APIs that expose the service for developer consumption.
  • code components that are typically replaced wholesale, rather than versioned, and can be disposed of without affecting other components of the architecture.
  • decentrally governed. Microservices architectures are generally incompatible with legacy operational approaches that centralize control, lock down IT assets, and impose heavy governance. Microservices still require some blanket governance practices—security concepts such as OAuth should apply to all APIs being used to consume microservices, for example. But it is important to let teams develop in the way that works best for them, with the best tools for their work and the autonomy to innovate.
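To make these properties concrete, here is a toy sketch of a fine-grained, single-responsibility service exposed through a RESTful interface, using only the Python standard library. The service name, endpoint, and exchange rate are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical single-responsibility service: currency conversion only.
RATES = {"USD_TO_EUR": 0.5}  # illustrative rate, not real market data

class ConvertHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One fine-grained responsibility, exposed over a RESTful interface.
        if self.path.startswith("/convert"):
            body = json.dumps({"usd": 100, "eur": 100 * RATES["USD_TO_EUR"]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# Run the service on an ephemeral port and call it like any other client.
server = ThreadingHTTPServer(("127.0.0.1", 0), ConvertHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/convert"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

Because the service does exactly one thing behind a stable interface, it can be replaced, redeployed, or scaled without touching any consumer of the endpoint.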

How are businesses implementing microservices?

In many top businesses, the shift from a single development pipeline to multiple pipelines, and from waterfall development to automated continuous integration/continuous delivery (CI/CD) processes that facilitate rapid iteration, is already happening. Microservices are a major driver. Release cycles that were once long and arduous have now been reduced so that businesses can release daily or even multiple times in one day. Magazine Luiza, for example, leveraged microservices to accelerate from fewer than 10 deployments per month to more than 40 per day.

These changes emphasize that microservices may not be useful when a company is merely trying to bolt technology onto its status quo business model. Rather, microservices are a way for businesses to use technology to change how they operate. If a company uses one large, slow-moving development team, microservices may not be much use until it has reorganized around dozens or even hundreds of small, fast-moving development teams. That transition typically won’t happen overnight—but microservices are built to empower small, independent teams, so a company may need at least a few skunkworks projects to get started.

In a blog post, T-Mobile CIO Cody Sanford describes this trend well:

“Gone are the highly-coupled applications, massive waterfall deliveries, broken and manual processes, and mounting technical debt. In their place are digital architecture standards, exposed APIs, hundreds of applications containerized and in the cloud, and a passionate agile workforce.”

If an organization is prepared to make the right organizational changes, microservices can accelerate multi-cloud strategies because the services can be scaled elastically, enabling more workloads to run in the cloud and across clouds. They can be deployed to create a more flexible architecture in which individual services can be released, scaled, and updated without impact to the rest of the system.

Monolithic applications tie together all of the business logic, rules, authentication, authorization, and data, which typically makes them much more difficult and time-intensive to update in even relatively modest ways. Microservices architectures, in contrast, support the separation of duties into individual self-contained entities that work together to deliver the full experience—instead of one team spending months or years creating tightly coupled code for a single app, many teams create microservices daily or weekly that can be leveraged indefinitely in changing contexts.

How should I manage microservices?

For many businesses, the best way to manage microservices will be through a combination of a service mesh such as Istio and an API management platform. It’s important not to conflate these two things. The former handles service-to-service interactions, such as load balancing, service authentication, service discovery, routing, and policy enforcement. The latter provides the tools, control, and visibility to scale microservices via APIs to new developers and connect them via APIs to new systems.

More specifically, a service mesh can provide fine-grained visibility and insights into the microservices environment, control traffic, implement security, and enforce policies for all services within the mesh. The advantage is that it all happens without making changes to application code.

API management, meanwhile, provides for easy developer consumption of microservices. Organizations need APIs so that teams can share microservices and interact with other systems. These APIs should not be designed for simple exposure but rather as products designed and managed to empower developers. An API management platform should provide mechanisms to onboard and manage developers, create an API catalog and documentation, generate API usage reporting, productize or monetize APIs, and enforce throttling, caching, and other security and reliability precautions.
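As a rough sketch of the kinds of policies an API management layer enforces at the edge, the snippet below implements API key validation and a simple per-key quota window. The key names and limits are invented; real platforms implement these as configurable policies rather than application code:

```python
import time

# Hypothetical registry of API keys and their plans.
API_KEYS = {"key-partner-a": {"quota_per_minute": 2}}
usage = {}  # api_key -> list of request timestamps within the window

def gateway_check(api_key, now=None):
    """Return (allowed, status) for a request carrying `api_key`."""
    now = time.time() if now is None else now
    plan = API_KEYS.get(api_key)
    if plan is None:
        # Unknown key: reject before the request reaches any microservice.
        return False, "401 invalid key"
    # Keep only timestamps from the last 60 seconds, then check the quota.
    window = [t for t in usage.get(api_key, []) if now - t < 60]
    if len(window) >= plan["quota_per_minute"]:
        return False, "429 quota exceeded"
    usage[api_key] = window + [now]
    return True, "200 ok"

results = [gateway_check("key-partner-a", now=100.0) for _ in range(3)]
bad = gateway_check("no-such-key", now=100.0)
```

The point of the sketch is that none of this logic lives in the microservices themselves; the gateway applies it uniformly to every API it fronts.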

Consequently, microservices are distinct from, but deeply connected to, APIs. Both APIs and microservices are also distinct from legacy SOA techniques in several crucial ways, including that microservices thrive in more decentralized environments with more autonomous teams, and that the emphasis is not just on reusing digital assets but also on making them easy to consume so that developers can leverage them in new ways.

Microservices: only one piece of the digital transformation puzzle

The benefits of microservices don’t typically emerge until a business needs to scale a service, both in terms of the number of requests it needs to handle and the number of developers who need to work with it. Additionally, while companies may break some existing systems into microservices or use microservices for current development goals, many older systems, such as enterprise service bus (ESB) or mainframe systems, will remain part of an enterprise’s overall topology. These heterogeneous systems typically communicate with one another via APIs, emphasizing again that though APIs are central to microservices, what can be done with a microservice and what can be done with an API are not the same.

Microservices will remain challenging as organizations continue to implement them — but with the right understanding of how microservices fit alongside other technologies and should be managed, this complexity may be more conquerable than it appears!

This article originally appeared in Medium.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

Introducing Apigee API Management for Istio

Simplify exposing microservices as APIs both inside and outside your organization

Hundreds of companies rely on Apigee to create and deliver strong API programs to developers both inside and outside their organizations. At the same time, Istio has been gaining rapid acceptance as a way to bring control to networks of services.

Since joining Google Cloud in late 2016, the Apigee team has been working to make our products work more closely with other Google technologies. One of the first teams that we collaborated with was the Istio team.

It quickly became clear that both Istio and Apigee provide complementary capabilities to teams building APIs and services in today’s world. We agreed that our customers would benefit if we could ensure that Apigee and Istio work well together.

Today, we’re announcing the integration of API management with Istio so that microservices can be exposed as APIs and more easily shared with developers inside and outside your organization.

What Istio and Apigee bring

Istio simplifies life for organizations that are contemplating a “microservices” approach, or that are simply deploying many services that communicate with each other. Istio creates a “service mesh” that routes traffic between interrelated services in a secure and robust way, so the developers of each individual service can focus on what a service does rather than the details of how it communicates.

Apigee is built around the realization that, in order to be successful, modern organizations must create APIs and share them with other developers who might be part of the organization or who might be external, or even unknown. API teams using Apigee achieve this by combining APIs into “API products” that offer different capabilities and levels of service.

This enables them to control who consumes each API product, and how much is consumed. The team gets the ability to open an API to third-party developers without worrying that a single developer will monopolize precious API capacity.

Bringing microservices and APIs together

When a group of developers builds a system composed of many individual microservices, a service mesh like Istio adds an essential layer of reliability and security to the whole mesh.

However, when those developers wish to share their services with another group, or with developers entirely outside the organization, a service mesh isn’t enough—it’s time for the service to be exposed as an API.

But a successful API needs to be easily consumed, and that's where Apigee API Management comes in. Without it, developers who use the API have no easy way to discover what an API does, or how to sign up and start using it. The team producing the API has no mechanism to control how the API is used and how resources are allocated.

Adding API management to Istio

Previously, developers could add API management capabilities to Istio by simply deploying Apigee Edge outside the Istio mesh and configuring it to treat Istio like any other target service.

With this new capability, an Istio user can now expose one or more services from an Istio mesh as APIs by adding API management capabilities via Istio’s native configuration mechanism.

Furthermore, Apigee users can now take advantage of Istio to bring API management to a large set of services by adding an Istio mesh to their existing Apigee installation, and then moving services into the mesh. These users may find this to be a more scalable alternative than today’s approach of deploying a large number of Apigee Edge proxies, or deploying many instances of Apigee Edge Microgateway.

This is all possible because Istio includes a component called Mixer that runs as a central part of every Istio mesh. Mixer's plugin model enables new rules and policies to be added to groups of services in the mesh without touching the individual services or the nodes where they run.

Once Apigee integration is enabled within an Istio mesh, the operator can simply use Istio’s native configuration tools to apply Apigee's API management policies and reporting to any service. Once enabled, management policies such as API key validation, quota enforcement, and JSON web token validation can be easily controlled from the Apigee UI.
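To show what "JSON web token validation" involves mechanically, here is a hand-rolled HS256 check. In practice the gateway performs this via configured policy rather than application code; the secret and claims below are invented for illustration:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative shared secret

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_token(claims: dict) -> str:
    # header.payload.signature, each segment base64url-encoded
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + payload, hashlib.sha256).digest())
    return (header + b"." + payload + b"." + sig).decode()

def validate_token(token: str):
    """Return the claims if the signature checks out, else None."""
    header, payload, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = payload + b"=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

good = make_token({"sub": "service-a", "scope": "read"})
claims = validate_token(good)

# A forged token: someone else's claims stapled to the original signature.
sig_b = good.split(".")[2]
forged = make_token({"sub": "mallory", "scope": "admin"}).rsplit(".", 1)[0] + "." + sig_b
rejected = validate_token(forged)
```

Applying such a check as mesh-wide policy means every service gets it without any service's code changing.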

Likewise, the Apigee user may view and report on API analytics, just as they expect to today. There is no need to create or deploy additional API gateways or proxies—Apigee’s integration with Mixer ensures that policy configuration changes take effect across the whole mesh, without any additional steps.

Because Mixer adds API management features to the native configuration of an Istio mesh, it greatly reduces the amount of work required to turn a large number of services into APIs. For instance, with Istio it is possible to ensure that a valid API key is required for a single service, for a group of services, or for all services in the mesh, all using the same configuration mechanism.

Is there more?

Apigee users are accustomed to employing a richer set of features that enable API producers to customize API requests and responses to simplify internal APIs for external consumption and help transform legacy systems into consumable APIs. None of this changes because of Istio—the existing Apigee Edge product is still powerful as a facade in front of services in an Istio mesh.

Furthermore, as the Istio community grows and the project adopts new capabilities, we hope to make some of these other Apigee features equally straightforward to add to an Istio mesh, so that we can bring the best of both products to our customers.

Learn more about Apigee API Management for Istio here.

Thanks to Scott Ganyo and Will Witman for their invaluable help with this post.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

 

 

API Management for Microservices

Apigee Microgateway for Pivotal Cloud Foundry is now in public beta

We’re excited to announce the public beta of Apigee Microgateway for Pivotal Cloud Foundry, a new model for deploying a microgateway inside Cloud Foundry apps to provide integrated API management for microservices.

Service broker

This new feature is delivered as a service broker tile and deployed as a service inside of Cloud Foundry. It ships with a new microgateway decorator buildpack that leverages the meta buildpack, through which the microservices API runtime (Apigee Microgateway) can be injected into the app container as a sidecar proxy. (Learn more about buildpacks here.)

Apigee Microgateway

Apigee Microgateway is a lightweight, secure, HTTP-based message processor designed especially for microservices. It processes requests and responses to and from backend services securely while asynchronously pushing valuable API execution data to Apigee Edge, where it’s consumed by the analytics system. The microgateway provides vital functions, including traffic management (protection from spikes in traffic and usage quota enforcement, for example), security (OAuth and API key validation) and API analytics.
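As an illustration of the spike-protection idea, the sketch below implements a token bucket that smooths bursts to a configured rate. The rates and capacities are illustrative, not Apigee Microgateway's actual defaults:

```python
# A token bucket: requests spend tokens, which refill at a steady rate.
# A gateway uses this kind of logic to arrest traffic spikes.
class TokenBucket:
    def __init__(self, rate_per_second, capacity):
        self.rate = rate_per_second
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_second=1, capacity=2)
# A burst of 4 requests at t=0: only the 2 buffered tokens are spent.
burst = [bucket.allow(0.0) for _ in range(4)]
# One second later, one token has refilled.
later = bucket.allow(1.0)
```

Running this check in a sidecar keeps the backing service from ever seeing the rejected portion of a spike.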

Microgateway decorator buildpack

A meta buildpack is a Cloud Foundry buildpack that enables decomposition and reordering (execution order) of other buildpacks. Using this approach, the provided microgateway decorator buildpack is inserted into the app container during the app build process and “decorates,” or fronts, all calls made to the underlying app or API.

Container sidecar 

Apigee Microgateway’s light deployment footprint makes it easy to run coresident, as a container sidecar, with apps and microservices developed in Cloud Foundry.

The coresident model provides the following general benefits and features:

  • No routing overhead: Apigee Microgateway is embedded in the app container, so this option is great for apps with low latency requirements.
  • Simplified lifecycle management and automatic scaling: Because Apigee Microgateway is part of the app container, it participates in the app lifecycle and scales automatically as the app scales.
  • Consistent and pervasive protection for service implementations: All ingress to the service runs through Apigee Microgateway.
  • Analytics across all API exposures: Consolidated analytics is valuable across all internal and external consumption patterns. Apigee Microgateway provides a lightweight mechanism for capturing analytics for all use cases.
  • Consistent access to APIs: Common infrastructure is leveraged for API keys, tokens, and API product definitions.
  • Self-service: Internal and external API consumers are served through a single developer portal.

That’s all great, but we wanted to make it even simpler for Cloud Foundry app developers to start using this new feature. So we built a series of Cloud Foundry CLI plugins. These make pushing, binding, and unbinding the Apigee Service Broker more streamlined and easier to use.
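As an illustration, a developer workflow with these plugins looks something like the following. The command and plugin names here are indicative only; check the Apigee Service Broker documentation for the current syntax:

```
# Install the Apigee broker plugin from the community repo (plugin name is indicative)
cf install-plugin -r CF-Community "apigee-broker-plugin"

# Push the app, then bind it to the Apigee Microgateway service
cf apigee-push
cf apigee-bind-mg --app my-app --service my-apigee-service
```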

These new capabilities eliminate friction for app developers and make it easy to enable API management for their microservices, so the services can be secured and managed, and the relevant stakeholders get out-of-the-box, end-to-end visibility.

It’s easy to get started.

We’d love to hear from you; please reach out to us at the Apigee Community.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

State of Microservices: Are You Prepared to Adopt?

Webcast replay

The allure of microservices is clear: shorter development time, continuous delivery, agility, and scalability are characteristics that all IT teams can appreciate. But microservices can increase complexity and require new infrastructure—in other words, they can lead teams into uncharted territory.

Join Gartner’s Anne Thomas and Google Cloud’s Ed Anuff as they present an in-depth look at the state of microservices.

They discuss:

  • what a microservice is and what it isn’t
  • trends in microservices architecture
  • the relationship between microservices, APIs, and SOA architecture
  • connecting, securing, managing, and monitoring microservices

Watch the webcast replay now.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

Grow Bigger by Thinking Smaller: Getting Started with Microservices

How to clear security, visibility, and dependency hurdles when implementing microservices

It sounds contradictory, but if your enterprise plans to scale in today’s digital-first world, it’s time to start thinking smaller.

Today, many of the most innovative enterprises are scaling up their applications by breaking them into smaller pieces. This approach to IT architecture—microservices, as it’s commonly known—is a way of restructuring applications into component services that can be scaled independently (depending on whether a team needs more compute resources, memory, or IO), and then having them talk to each other via API service interfaces.

Using microservices, companies reap not only the benefits of agility and speed when building software, but also the ability to easily share and reuse services across the enterprise and beyond. In effect, these smaller services make it possible to achieve both simplicity and complexity at the same time.

According to one recent survey of over 1800 IT professionals, nearly 70% of organizations are either using or investigating microservices, with nearly one-third of organizations using them in production. At Netflix, one of the earliest adopters of microservices, roughly 30 independent teams have delivered over 500 microservices. Amazon, another long-time champion of microservices, has employed the technique to ensure effective communication within teams and enable hundreds of code deployments per day. Numerous other examples, from the open-source Kubernetes project to the Walgreens digital platform strategy, speak to this growing momentum.

But just as microservices present new opportunities for organizational efficiency and growth, they also pose common stumbling blocks—chief among them security, usage and performance visibility, and agility/reuse.

Security: Managing microservices in a zero-trust environment

The microservices architectural model has been both successful and challenging—for many of the same reasons. In essence, developers often build APIs and microservices without the kind of centralized oversight that once existed, and then they deploy them more widely than ever. This can lead to inconsistent levels of security—or no security at all.

When developers deploy microservices in the public cloud and neglect to apply common API security standards or consistent global policies, they expose the enterprise to potential security breaches. Companies therefore must assume a zero-trust environment. As research firms have noted, a well-managed API platform can help enterprises overcome these threats by enabling the implementation of security and governance policies like OAuth2 across all of their microservices APIs.
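For instance, with an API management layer such as Apigee Edge, enforcing OAuth2 on every call to a microservice’s API proxy can be as simple as attaching a token-verification policy. The policy below is a minimal sketch:

```
<!-- Reject any request that does not carry a valid OAuth2 access token -->
<OAuthV2 name="VerifyAccessToken">
  <Operation>VerifyAccessToken</Operation>
</OAuthV2>
```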

Reliability: Delivering performance and enforcing SLAs

Microservices introduce dependencies among your software components; each microservice may depend on many others. By extension, this means there are interdependency problems not unlike those that exist for SOA.

There are many ways to stress-test the reliability of microservices infrastructure, but visibility is one of the best. Which services are talking to which other services? Which ones are dependent on which other ones? These are important questions to answer—especially when microservices are used by disparate teams in a large enterprise, or by partners and customers.

Echoing the previous section, one way to answer these questions is to implement a management platform for microservices APIs. API management platforms provide the analytics and reporting capabilities that enable enterprises to measure microservices’ usage and adoption, developer and partner engagement, traffic composition, total traffic, throughput, latency, errors, and anomalies.

Armed with this information, companies can iterate quickly, reinforcing components with promising usage trends and fixing interdependency problems as they’re identified. This speed and agility are important: stress-testing and optimization can cause a company to lose momentum as it examines unlikely theoretical scenarios—which is deeply problematic, given that for many enterprises, microservices and APIs are valuable because they can dramatically shorten a new service’s time to market.

With real-time insight into API behavior, companies can balance speed, scale, and reliability by launching new services, collecting analytics, and implementing a broad range of improvements after only a few weeks of development sprints.

Adaptability: Building agile microservices for clean reuse

Many existing and legacy services were not built for modern scale. Consequently, many enterprises are replacing monolithic applications with microservices that adapt legacy resources to modern architectures. In most cases, however, other applications still consume services from the monoliths. This means the transition from monolith to microservices must be seamless: invisible to the applications and developers using the monolith’s services.

Furthermore, microservices are typically purpose-built for particular use cases. But as soon as a microservice is shared outside the “two-pizza team,” developers need the ability to adapt it for wider use. And what’s a service that’s meant to be shared and reused across teams and even outside of your company? It’s an API.

An API platform serves as an API facade, delivering modern APIs (RESTful, cached, and secured) for the legacy SOAP services of the monolith apps, and exposing the new microservices. This makes it possible for mobile and web app developers to continue consuming an enterprise’s services without needing to worry about the heterogeneous environment or any transitions from monolith app to microservices by the service provider.

The way forward

As microservices become increasingly popular throughout the enterprise, more and more of them are being shared, both internally and externally. And when it comes to sharing services, APIs are the answer.

As a result, companies are increasingly looking to API management platforms to provide the security, reliability, visibility, and adaptability they need to properly run microservices architecture. Also known as “managed microservices,” this deployment model provides enterprises with a single window for managing all microservices APIs across microservices stacks and clouds—and it’s transforming enterprises far and wide.

To learn more, read the Apigee eBook, "Maximizing Microservices."  

Image: Wikimedia Commons

Tutorial: Deploying Apigee Edge Microgateway

In a previous post, we discussed some of the features of Apigee Edge Microgateway and the power of hybrid API management.

Here, we’ll walk you through tutorials to deploy Apigee Edge Microgateway as a Docker container, in PaaS platforms like Cloud Foundry, and in cloud-native PaaS platforms like Google App Engine (GAE) and Azure App Services.

Recommended prerequisites

Before you adopt any of these deployment options, there are some steps to complete first:

  1. Configure Microgateway on a VM or host outside of the intended deployment pattern. This will produce a configuration YAML file that will be used in all of the following deployment options. The configuration file is of the format: {orgname}-{env}-config.yaml
  2. Enable plugins as necessary in the YAML file. Configure and set other parameters as necessary (log levels and connection settings, for example).
  3. Develop custom plugins as npm modules. The modules can be installed from the public npm registry (npmjs.com) or a private npm repo.
  4. Fork Apigee Edge Microgateway in GitHub for Azure App Services. It’s available on GitHub here. Some cloud vendors (such as Google) even provide local repositories (in which case you can load a clone of the microgateway project).
  5. Edit the config YAML to expose just a set of API proxies. For more information, check out this documentation.
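Putting steps 2 and 3 together, a {orgname}-{env}-config.yaml with a custom plugin enabled might look roughly like this; the custom plugin name and the dir value are hypothetical:

```
edgemicro:
  port: 8000
  plugins:
    dir: ../plugins          # hypothetical path to custom plugin modules
    sequence:
      - oauth                # built-in plugin
      - my-custom-plugin     # hypothetical custom plugin packaged as an npm module
```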

Build a Docker image for Microgateway

In this section we’ll show you how to build a Docker image for Microgateway.

Step 1: Clone the project

git clone https://github.com/srinandan/apigee-edgemicro-docker

Step 2: Switch the directory

cd apigee-edgemicro-docker

Step 3: Copy the {org}-{env}-config.yaml file to the current folder and edit the Dockerfile with the correct file name (see the prerequisites).

Step 4: Build the Docker image

docker build --build-arg ORG="your-orgname" --build-arg ENV="your-env"
--build-arg KEY="bx..xxx2" --build-arg SECRET="exx..x0" -t microgateway .

Step 5: Start Microgateway

docker run -d -p 8000:8000 -e EDGEMICRO_ORG="your-orgname" -e
EDGEMICRO_ENV="your-env" -e EDGEMICRO_KEY="bxx..x2" -e
EDGEMICRO_SECRET="ex..x0" -P -it microgateway

The default path for Microgateway logs is /var/tmp. You might want to consider mounting a volume to this folder so the logs are accessible from outside the Docker container.
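For example, a bind mount can expose the log folder on the host; the host path below is just a placeholder:

```
docker run -d -p 8000:8000 \
  -v /host/path/edgemicro-logs:/var/tmp \
  -e EDGEMICRO_ORG="your-orgname" -e EDGEMICRO_ENV="your-env" \
  -e EDGEMICRO_KEY="bxx..x2" -e EDGEMICRO_SECRET="ex..x0" \
  -P -it microgateway
```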

Microgateway on Google App Engine

Here we’ll walk you through deploying Microgateway as an app on Google App Engine (GAE).

Step 1: Fork or clone the Apigee Edge Microgateway GitHub repo (this is optional).

Step 2: Clone the forked (or main) repo in the gcloud shell.

git clone https://github.com/apigee-internal/microgateway.git
cd microgateway

Step 3: Copy the {org}-{env}-config.yaml file to the microgateway/config folder. 

Step 4: Review the app.yaml file.

# [START runtime]
service: microgateway
runtime: nodejs
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
env_variables:
  EDGEMICRO_KEY: 'bx..x2'
  EDGEMICRO_SECRET: 'ex..x0'
  EDGEMICRO_CONFIG_DIR: '/app/config'
  EDGEMICRO_ENV: 'env-name'
  EDGEMICRO_ORG: 'org-name'
# [END runtime]

Review the following fields:

  • The min and max instances (for auto-scaling)
  • Resources (cpu, memory)
  • Microgateway environment variables (key, secret, org and env)

Step 5: Deploy the app to GAE

gcloud app deploy --project your-project-name

Microgateway on Azure App Services

Here we'll walk you through how to deploy Microgateway as an app on Azure’s App Services platform.

In the Azure portal, perform the following steps:

Step 1: Click on “App Services.”

Step 2: Click on “+ Add.”

Step 3: Search for “node.js,” select “API App,” and click “Create.”

Step 4: Enter application details.

Step 5: Click on "Application Settings."

Step 6: Add the environment variables required for Microgateway.

Step 7: Save the settings (key and secret are obtained when Microgateway is configured to the org and env).

Step 8: Fork the Apigee Microgateway repo. Set up deployment option (for example, GitHub) and point it to the Microgateway repo.  

Step 9: Enter authentication details to the repo.

Step 10: Ensure the deployment is successful.

Microgateway on Cloud Foundry

In this section, we'll show you how to deploy Microgateway as an app on Cloud Foundry (get all the details in Pivotal’s documentation and GitHub).

Step 1: Fork the Apigee Edge Microgateway GitHub repo (this is optional).

Step 2: Clone the forked (or main) repo

git clone https://github.com/apigee-internal/microgateway.git

cd microgateway 

Step 3: Copy the {org}-{env}-config.yaml file to the microgateway/config folder.

Add the “cloud-foundry-route-service” plugin to the config file if it doesn’t exist in the plugin sequence.

edgemicro:
  port: 8000
  max_connections: 1000
  …
  plugins:
    sequence:
      - oauth
      - cloud-foundry-route-service

Step 4: Review the manifest.yml file

---
applications:
- name: edgemicro
  memory: 512M
  instances: 1
  host: edgemicro
  path: .
  buildpack: nodejs_buildpack
  env:
    EDGEMICRO_KEY: 'bx..x2'
    EDGEMICRO_SECRET: 'ex..x0'
    EDGEMICRO_CONFIG_DIR: '/app/config'
    EDGEMICRO_ENV: 'env-name'
    EDGEMICRO_ORG: 'org-name'

Review the following fields:

  • Instances (for auto-scaling)
  • Memory (min: 512M)
  • Microgateway environment variables (key, secret, org, and env)

Step 5: Deploy the app to Cloud Foundry

cf push

Step 6: Review the logs

If your Cloud Foundry instance doesn’t have internet access (to download npm modules), you must follow the instructions for using the Node.js buildpack in a disconnected environment here.

Apigee Microgateway is a great choice for microservice developers and teams who want to add API management features as close to their microservices as possible (to reduce latency), natively on the microservices platform of their choice (with no additional skills required). To learn more, read the Apigee eBook, "Maximizing Microservices."

Questions, comments, or observations? Join the conversation on the Apigee Community.

Deploying Microgateway in Docker and PaaS

How to add API management capabilities natively on the microservices stack of your choice

A lot of enterprises are exploring microservices as an architecture pattern for building or exposing new APIs. Often, a microservices strategy includes an infrastructure stack with components like Docker, Cloud Foundry, Kubernetes, and OpenShift, or cloud-native PaaS platforms like Google App Engine (GAE) and Azure App Services. Apigee Edge provides API management capabilities for microservices deployed in such an infrastructure stack. 

In this post, we’ll explain the power of Apigee Edge Microgateway and the options for deploying it. In an upcoming installment, we’ll walk you through a handful of quick tutorials to get you started deploying Microgateway as a Docker container, in PaaS platforms like Cloud Foundry, and in GAE and Azure App Services. 

These options help microservices developers and teams add API management capabilities natively on the microservices stack of their choice.

What is Apigee Microgateway?

What is Apigee Edge Microgateway, you ask? It’s a secure, HTTP-based message processor for APIs. Its main job is to process requests and responses to and from backend services securely while asynchronously pushing API execution data to the Apigee Edge API platform, where it’s consumed by the Edge analytics system.

Edge Microgateway is easy to install and deploy—you can have an instance up and running within minutes.

Typically, Edge Microgateway is installed within a trusted network, in close proximity to backend target services. It provides enterprise-grade security, and some key plug-in features including spike arrest, quota, analytics, and customer extensions, but not the full capabilities or footprint of Apigee Edge. You can install Edge Microgateway in the same data center or even on the same machine as your backend services.
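To give a sense of how quickly an instance can be stood up, the basic install-and-run flow looks like this; the org, env, and credential values are placeholders, and the exact flags are documented in the Microgateway reference:

```
npm install -g edgemicro           # install the Microgateway CLI
edgemicro init                     # create a default config on this machine
edgemicro configure -o your-org -e your-env -u you@example.com
# configure prints a key and secret; use them to start the gateway
edgemicro start -o your-org -e your-env -k <key> -s <secret>
```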

For a detailed explanation of how to install, set up, and use Apigee Edge Microgateway, check out this documentation page.

The power of hybrid API management

Microgateway enables hybrid API management, in which a user can:

  • Centrally define/author API proxies
  • Centrally define API products, developer apps, and developer catalogs, among other things
  • Distribute policy enforcement of API proxies on many gateways, which can be deployed on customer data centers or other cloud providers
  • Centrally collect and view API analytics

Multi-cloud deployment options

Microgateway lets users leverage cloud-native deployments across major cloud providers.

In the next post, we'll offer some tutorials to help you get started with various deployment options for Apigee Edge Microgateway.

To learn more about Apigee and microservices, read the Apigee eBook, "Maximizing Microservices."