How HP Transformed Its Architecture with Microservices

Traditionally, enterprises built monolithic applications that bundled all functionality into a single program. While this approach simplified debugging and deployment, maintaining, developing, and scaling monolithic applications proved to be a significant challenge, and that challenge became a real handicap in the digital age.

To keep pace with digital innovation, many IT teams have adopted a microservices-based architecture by designing software applications as suites of independently deployable services.

Galo Gimenez, Distinguished Technologist and Platform Architect at HP Inc., and his team went through a similar transformation journey when developing the company’s core services and infrastructure (including, for example, identity management and content management, which are shared by business units across HP). Key considerations for Gimenez's team included security and encryption, developer productivity, and cost.

After extensive research, the team decided to adopt a microservices architecture with the help of Kubernetes container orchestration.

“Many teams at HP are already adopting microservices and container orchestration technology to deliver products faster and cheaper,” Gimenez says. “We decided to adopt Kubernetes because it offered a well-structured architecture along with a seamless developer experience—the teams working on the containers didn’t need to become experts on the entire architecture to be able to build and deploy applications.”

HP isn’t alone. Enterprises are increasingly adopting microservices to enable new levels of IT agility, scale, and innovation. Today, nearly 70% of organizations claim to be either using or investigating microservices, and nearly one-third currently use them in production.

Microservices can help a business achieve unprecedented levels of agility, empowering development teams to innovate faster by building new features and services in parallel. Yet these benefits come with increased complexity: many teams struggle to connect, secure, and monitor a growing network of microservices, and to extend the use of valuable microservices beyond the teams that created them.

Gimenez and his team experienced this challenge firsthand.

“As monolithic applications transition towards a distributed microservice architecture, they become more difficult to manage and understand,” he says. “These architectures need basic services such as discovery, load balancing, failure recovery, metrics and monitoring, as well as complex operational requirements: monitoring, deep telemetry, rate limiting, access control, and end-to-end authentication.”

The solution to this challenge came in the form of Istio, a service mesh that helps simplify the complexities of microservices communication. It provides a standardized way to connect, secure, monitor, and manage microservices. Acting as a dedicated layer for service-to-service control and reliability, the service mesh handles application-layer load balancing, routing, service authentication, and more.
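
To make this concrete, here is an illustrative sketch (not HP's actual configuration) of the kind of routing policy a service mesh enables. The service name and version subsets below are hypothetical; the example shifts 10% of traffic for a reviews service to a new version:

# Illustrative only: an Istio VirtualService that splits traffic between two versions
# (the v1 and v2 subsets would be defined in a companion DestinationRule)
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF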

Other business units within HP can easily access these microservices-based core services and infrastructure using APIs. Sharing microservices also becomes much easier: they can be exposed as APIs to other teams in the organization or to external partners and developers.

But when microservices are exposed as APIs, they require API management. API management enables enterprises to extend the value of microservices both within the enterprise and to external developers, with security, visibility and control.

Gimenez and his team adopted Apigee along with Istio and Kubernetes to get the most out of their microservices architecture.


Amadeus: Shaping the Future of Travel with Apigee


If you’ve taken a trip in the past 30 years, then you’ve probably used Amadeus technology. Our solutions connect over 1.5 billion travellers every year to the journeys they want, linking them via travel agents, search engines, and tour operators to over 700 airlines, 110 airports, 580,000 hotel properties, 40 car rental companies, 90 railways, and more.

In 2016, over 595 million total travel agency bookings were processed using the Amadeus distribution platform. In addition, over 175 Amadeus airline customers processed over 1.3 billion passengers using Amadeus’ Passenger Service Systems. We combine an understanding of how people travel with the development of the most complex, trusted, critical systems our customers need.

A platform for scalability and speed

In today’s crowded travel marketplace, our customers want IT solutions that can scale up to match their complex needs—whether that means handling ever-increasing flight search volumes, delivering flight search results in milliseconds, or enabling “pop-up” check-in and bag drop from anywhere.

Amadeus operates at large scale with hundreds of thousands of transactions processed per second to deliver mission-critical services in travel. Having a scalable and secure platform is essential to continue driving solutions for our customers, and Apigee’s API management platform fulfills this objective.

At the same time, our customers also want solutions that can adapt quickly with new features and upgrades. We’re talking days, not weeks or months. Apigee provides on-premises gateways to securely expose our APIs to our customers, and these gateways can be scaled to deliver our APIs according to our business needs. Because Apigee gives us a rock-solid API infrastructure, we have more freedom to focus on the architectural details of the technology we create for the travel industry.

A platform for collaboration

In the fast-paced and competitive travel industry, our customers hunger for new ways of doing things. This hunger can only be met with an open and collaborative approach across the sector.

That’s why we use an open systems architecture that offers SOAP/XML and REST/JSON formatting to be entirely platform neutral. It is totally independent of language and application frameworks, making implementation fast and efficient.

But as the number of customers using our APIs grows, so does the need to shorten the time to deploy our applications to market and evolve our API strategy.

The Apigee platform is key here. For one thing, it’s always up to date with constantly evolving industry standards, in particular with security standards like OAuth.

The platform also forms the backbone for the web app development cycle for Amadeus and our customers to jointly build applications and release them in production. Ultimately, by integrating Apigee’s control plane seamlessly with our APIs, we are able to foster fully automated operations.

A platform for visibility

Understanding how our APIs are consumed is also key for us and our customers. With Apigee we can see this and provide customers with a detailed view of API analytics. In this big data era, knowing the number of transactions, the response times of APIs, or the page travellers spend the most time on in a mobile app can be invaluable for making the informed decisions that help us maintain an edge over competitors. It also serves as a great feedback tool for closely monitoring where the industry is heading.

As a leader in travel technology, we're committed to open systems. That’s why Amadeus also works with Kubernetes. We have a strong partnership with Red Hat through its OpenShift platform, which is based on Kubernetes. Amadeus Cloud Services works with this open-source system and enables us to use automated cloud methods to deploy our services in a flexible mix of private and public clouds.

We're excited to collaborate with players like Google and Apigee, because together we can pave the way for technology that makes better journeys and creates value for our customers, travellers, and society.

Olivier Richaud is senior manager, API management & web services, technology platforms & engineering, at Amadeus. Xavier Gardien is head of portfolio and product management, technology platforms & engineering, at Amadeus.

API Management with Kubernetes

Webcast replay

Developers increasingly use Kubernetes to deploy, scale, and manage their containerized applications. So how can you securely manage, and gain visibility into, the APIs deployed for these apps?

In this webcast replay, you'll learn:

  • how native integration enables app developers to easily manage application endpoints
  • how to transparently add security (OAuth, API keys) to your application containers and endpoints
  • how to transparently manage traffic and track analytics for endpoints exposed through Kubernetes



Kubernetes Authentication for the Enterprise

Using Cloud Foundry's UAA for Kubernetes authentication

At Apigee, we're using Kubernetes to automate deployment, scaling, and management of containerized apps. While working on securing access to our clusters, we wanted to use our existing single sign-on solution. In the process, we learned a few things that we felt could be useful to other Kubernetes users.

Here, we’ll discuss how we used Cloud Foundry's UAA as an OpenID Connect provider for Kubernetes authentication. If you’re not using UAA but are using another OAuth 2.0 provider for authentication, this post could be useful to you, too.

Note: This post provides background on our process and how we successfully wired things up. If you just want the steps required to use UAA (and possibly other OAuth 2.0 providers) as an OIDC provider for Kubernetes, skip to the Cliff's notes section below.

Kubernetes authentication

When we evaluated the different options Kubernetes offers for cluster authentication, only one seemed plausible: the OpenID Connect (OIDC) authentication provider. Per the OpenID Connect spec, OIDC is a "simple identity layer on top of the OAuth 2.0 protocol," and it just so happens that UAA is an OAuth 2.0 provider with limited OIDC support. Even with only limited OIDC support, this seemed like as good a place to start as any.

Our first step was to see how far we could get by configuring Kubernetes to use our UAA server as an OIDC provider. We passed the following command-line options to the Kubernetes API server (based on the OpenID Connect Tokens section of the Kubernetes authentication guide):

  • --oidc-issuer-url: this tells Kubernetes where your OIDC server is
  • --oidc-client-id: this tells Kubernetes the OAuth client application to use
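
As a rough sketch, assuming a UAA server at https://uaa.example.com and an OAuth client named kubernetes (both placeholders), the relevant portion of the API server invocation would look something like this:

# Illustrative kube-apiserver flags; the issuer URL and client ID are placeholders.
# (Append these to your existing API server options.)
kube-apiserver \
  --oidc-issuer-url=https://uaa.example.com \
  --oidc-client-id=kubernetes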

But during startup, the API server failed to start, and we saw errors like this:

Failed to fetch provider config, trying again in 3s: invalid character '<' looking for beginning of value.

After some digging in the Kubernetes sources and the go-oidc sources, we found that on startup, the Kubernetes API server expects to find a document at $OIDC_ISSUER_URL/.well-known/openid-configuration. What kind of document is this, and what are its contents? After some Googling, we learned that it is an OpenID provider metadata document, used as part of OpenID Connect discovery, which UAA itself does not support.
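
The '<' in the error suggests the API server received HTML (likely an error page) where it expected JSON. You can check what your issuer actually serves at that path with a quick request:

# Kubernetes expects an OpenID provider metadata document (JSON) at this path
curl -s "$OIDC_ISSUER_URL/.well-known/openid-configuration"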

OpenID Connect discovery

Instead of giving up, we decided to look into what it takes to implement OIDC discovery, starting with the $OIDC_ISSUER_URL/.well-known/openid-configuration document. Reading the "obtaining OpenID provider configuration information" portion of the OIDC discovery specification, we learned that this URL is used by OIDC clients to obtain the OpenID provider configuration. Once we understood the structure of the URL that Kubernetes was looking for, we needed to understand the structure of the document itself.

As expected, the OIDC discovery specification explains this in the OpenID provider metadata section. Since this post is not an OpenID Connect tutorial, I will instead point you to a public OpenID provider for reference; Google, for example, publishes its configuration at https://accounts.google.com/.well-known/openid-configuration.

Based on the OIDC discovery specification and the various examples we found online from public OIDC providers, we felt confident that we could create an OpenID provider metadata document for our UAA server. That was our next step.

Note: Since UAA does not support OIDC discovery, we had to serve the OpenID provider metadata document ourselves; you will likely need to solve this as well.
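
As a sketch of what such a document can look like, here is a minimal version with the fields Kubernetes reads. The endpoint paths below are UAA's defaults, but verify them against your own deployment before serving this:

# A minimal OpenID provider metadata document (values shown are illustrative)
cat > openid-configuration <<EOF
{
  "issuer": "$OIDC_ISSUER_URL",
  "authorization_endpoint": "$OIDC_ISSUER_URL/oauth/authorize",
  "token_endpoint": "$OIDC_ISSUER_URL/oauth/token",
  "jwks_uri": "$OIDC_ISSUER_URL/token_keys",
  "response_types_supported": ["code", "token", "id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
EOF
# Serve this file at $OIDC_ISSUER_URL/.well-known/openid-configuration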

JSON Web Tokens and signing

Once we had our OpenID provider metadata document served at $OIDC_ISSUER_URL/.well-known/openid-configuration, we restarted the API server. This time, there were no errors related to OIDC, and the API server started successfully. The next step was to get a token and attempt to authenticate to Kubernetes using said token. Of course, depending on your environment, how you get your token will change, but for UAA users, you could use the UAAC to do this:

# Set the uaac target (The UAA server location)
uaac target $OIDC_ISSUER_URL

# Get a user token from UAA
uaac token authcode get

# Print the uaac contexts
uaac contexts

The last command will print out your UAAC contexts and one should match your target server. Once you find it, the token needed is the access_token property.
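
If you'd rather script that last step, something along these lines could pull the token out. This assumes the uaac context output lists an access_token field; the output format may vary across UAAC versions:

# Grab the access_token from the current uaac context (format may vary by version)
TOKEN=$(uaac context | awk '/access_token/ { print $2 }')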

Once we have the token from UAA, we need to create or update our Kubernetes client (kubectl) context to contain the newly retrieved token, like so:

# Create a new kubectl cluster configuration
kubectl config set-cluster $CLUSTER_NAME --server=$K8S_SERVER_URL --certificate-authority=$K8S_CA_CRT

# Configure a context user (this is NOT the username used to authenticate to Kubernetes; that comes from your token)
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER_NAME

# Configure the context user to use the token we just retrieved
kubectl config set-credentials $USER_NAME --token=$TOKEN

# Configure kubectl to use the context we just created
kubectl config use-context $CONTEXT_NAME

Here’s an example:

Note: Using kube-solo-secure for all the names in the examples below is not a requirement; we did it purely to make cleaning things up simpler.

kubectl config set-cluster kube-solo-secure --server=https://kube-solo-secure --certificate-authority=/tmp/ca.crt --embed-certs

kubectl config set-context kube-solo-secure --cluster=kube-solo-secure --user=kube-solo-secure

kubectl config set-credentials kube-solo-secure --token="$TOKEN"

kubectl config use-context kube-solo-secure

Each of the commands above should output [cluster|context|user] "kube-solo-secure" set., except for kubectl config use-context, which should output switched to context "kube-solo-secure".

Once this was done, we were ready to see how much further this got us, so we ran kubectl get pods. Unfortunately, we got this error:

error: you must be logged in to the server (the server has asked for the client to provide credentials)

Looking into the API server logs, we saw this error:

Unable to authenticate the request due to an error: [oidc: failed syncing KeySet: illegal base64 data at input byte 19, crypto/rsa: verification error]

After a great deal of research and digging around, we found that the JSON Web Key document, whose location is set via the jwks_uri property in the OpenID provider metadata document, was invalid. That's when we ran into our first incompatibility with UAA's OIDC support.

UAA's incompatibility with OIDC

JSON Web Keys (JWK) are used to verify JSON Web Tokens (JWT). The JWK specification mandates that the modulus used to verify signatures be base64url encoded, but the modulus (the n property of the JWK provided by UAA) was only base64 encoded.

So UAA was not encoding JWKs appropriately per the JWK specification. This led us to file a bug and come up with a workaround. Much as we needed to host our own /.well-known/openid-configuration document alongside UAA, we also created a corrected version of the JWK document UAA serves at /token_keys, hosted it at /k8s_token_keys, and updated the jwks_uri in our OpenID provider metadata document to point to the new document.
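
The re-encoding itself is mechanical: base64url replaces + with -, / with _, and drops the = padding. Assuming jq is available and that /token_keys returns a standard JWK Set, generating the corrected document might look like this:

# Rewrite each key's modulus (n) from base64 to base64url encoding
curl -s "$UAA_SERVER/token_keys" \
  | jq '.keys[].n |= (gsub("\\+"; "-") | gsub("/"; "_") | gsub("="; ""))' \
  > k8s_token_keys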

After this was up, we re-ran kubectl get pods and this time got another error (notice the extra /oauth/token):

JWT claims invalid: invalid claim value: 'iss'. expected=$ISSUER_URL, found=$ISSUER_URL/oauth/token., crypto/rsa: verification error
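
You can inspect the iss claim yourself by decoding the token's payload (the second, base64url-encoded segment of the JWT). A small helper, assuming bash and jq:

# Decode a base64url string: JWT segments swap +/ for -_ and drop the padding
b64url_decode() {
  local s="${1//-/+}"
  s="${s//_//}"
  case $(( ${#s} % 4 )) in
    2) s="${s}==" ;;
    3) s="${s}=" ;;
  esac
  printf '%s' "$s" | base64 -d
}

# Print the iss claim from the token payload
b64url_decode "$(printf '%s' "$TOKEN" | cut -d. -f2)" | jq -r .iss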

Note: At this point, we were making progress, but we were beginning to think we would be going down this rabbit hole forever.

The good news: this error was easy to understand. Per the OpenID provider metadata documentation, the iss claim value must match the issuer value of the OpenID provider metadata document. Unfortunately, this is not something you can toggle within UAA, which led to a pull request.

The purpose of this PR was to get the ball rolling on an official fix; it contains the exact changes we made to our custom UAA server. Once we deployed the new version of UAA with those changes, lo and behold: kubectl get pods worked as expected.


Building a custom version of UAA to make it implement OIDC just for Kubernetes authentication might seem like a bit much, and it is only one of a handful of steps needed to work around UAA's lack of full OIDC support. If that sounds like too much, you have two options:

  1. Wait until UAA officially supports OIDC
  2. Use dex (a pluggable OIDC provider from the CoreOS folks), which has UAA support

Cliff's notes

The explanation above discusses how we arrived at a working deployment of UAA used for Kubernetes authentication via OIDC. To summarize, here are the required steps:

  1. Patch UAA (using this PR: https://github.com/cloudfoundry/uaa/pull/425) and rebuild to keep /oauth/token from being appended to your iss claim
  2. Create a version of $UAA_SERVER/token_keys that has the n properties base64url encoded instead of just base64 encoded
  3. Create an OpenID provider metadata document based on the OpenID provider metadata and the examples linked to above
  4. Serve your OpenID provider metadata and JWK documents at $UAA_SERVER/.well-known/openid-configuration and $UAA_SERVER/k8s_token_keys respectively (the latter URL is just an example; you can use any path you like as long as it matches the jwks_uri property in your OpenID provider metadata document)
  5. Create an OAuth client application in UAA that has the appropriate scope (openid)
  6. Update the Kubernetes API server options to have the --oidc-issuer-url option set to the $UAA_SERVER portion of the URLs mentioned in steps two and four
  7. Update the Kubernetes API server options to have the --oidc-client-id option set to the UAA OAuth client application created in step five
  8. Update the Kubernetes API server options for OIDC as needed (beyond the update options in steps six and seven)

In the end, our goals were met: we were able to use our single sign-on solution for Kubernetes authentication. While it would be ideal if UAA fully supported OIDC and we could simply point Kubernetes at UAA and call it good, the steps above are easy to repeat and safe, and they got us what we needed quickly.