API Management

APIs: The New Security Layer

Webcast replay

APIs provide an extraordinary opportunity both for building engaging customer experiences and for strengthening your relationships with key business partners. But they also provide potential openings for savvy hackers to gain unauthorized access to customer data and perhaps even to compromise your key business systems.

In this webcast replay, Apigee chief architect Greg Brail discusses:

  • API security fundamentals
  • how to proactively watch for trouble
  • protection and mitigation strategies to keep your customers and your business safe



A Proof of Concept for API Management

Get started fast with a downloadable POC requirements document

API management is a requirement for building a digital business, as it grants the ability to secure, scale, analyze, and manage your APIs. But evaluating an API management platform is no small task, as there's a lot to consider. An API management platform should:

  • enable you to build well-designed APIs from existing services
  • make API and app developers productive quickly
  • extract operational and business insights from your API and app ecosystem
  • provide the ability to monetize APIs using different rate plans

Understanding these capabilities—whether your business needs them today or in the future—is key to making an informed buying decision. 

To save you time and effort, we've built this detailed POC requirements document with the key objectives that should drive the evaluation of an API management platform. 

Download this easily customizable Word document to get started. 

Building vs. Buying Your API Infrastructure

Webcast replay

Should you buy or build your own API layer?

Apigee's Brian Pagano and Ed Anuff engage in a lively discussion in which they analyze costs and benefits of building versus buying API management.

They discuss:

  • when to build or buy—picking your battles
  • real-world experiences of companies that have tried to build
  • a checklist to help you weigh the pros and cons

Managing APIs with Apigee Edge Microgateway and NGINX

A tutorial on creating a scalable, robust proxying solution

You have a set of APIs that you want to expose to your internal or external developers, but building security, analytics, and load balancing mechanisms isn't on the schedule, so you opt to use an API gateway. It has to be lightweight and easy to deploy, without breaking the bank on spinning up new servers or cloud instances. Using Apigee Edge Microgateway in conjunction with NGINX, a web server, will achieve this and more.

Apigee Edge Microgateway (MGW) is a lightweight API gateway solution that provides developers with OAuth and API key security, analytics, spike arrest, quota, custom plugin integration, and much more, all in a simple service that takes two minutes to set up. NGINX is a web server (among many other things), but in this implementation we will be using it specifically for load balancing.

We will start by setting up MGW and creating a proxy for your application, verify that it's working correctly, and then set up load balancing across your MGW instances with NGINX. In the resulting architecture, each node runs on its own server: client traffic hits the NGINX load balancer, which distributes requests across the MGW instances, each fronting a local instance of the target application.


Microgateway configuration

On the machine that is running your target application, we’ll also run MGW. You must have Node installed (ideally version 4.2.1), and have Microgateway 1.1.2 in hand. Unzip the MGW package and navigate to the cli/bin directory. Using your Apigee Edge credentials, run the configuration command, like this:


./edgemicro configure -o testorg -e test -u test.account@example.com


This will do several things, including generating a key/secret pair that you should save for use in starting the proxy server. The pair appears in the output like this:

The following credentials are required to start edge micro

 key: 452800eab0f10ab5c95450dafe3ddc1a5b22a56d63396bc88215940a1
 secret: 6172281f8dd8ff59751a9b24efb89a1b5b4f9a1ccc8b33de8097666


With MGW successfully configured, you’ll need an “Edge Micro aware” proxy that fronts your target application. From the Apigee Edge API proxies dashboard, click the “+ API Proxy” button. We will use the Reverse Proxy setup, so continue from the Type step by pressing “next.”

In the Details step, fill in the fields based on the example below. It's important that you add the “edgemicro_” prefix to your desired name to make it Edge Micro aware.


For simplicity's sake, on the Security step, select “Pass through.” Continue through the remaining steps with the default selections and deploy the proxy. Now you have a proxy for your application!

On the machine on which you ran the ./edgemicro configure command, navigate to the agent/ directory of MGW and use your key/secret pair to start MGW.

env EDGEMICRO_KEY=452800eab0f10ab5c95450dafe3ddc1a5b22a56d63396bc88215940a1 EDGEMICRO_SECRET=6172281f8dd8ff59751a9b24efb89a1b5b4f9a1ccc8b33de8097666 npm start


The gateway should be listening on port 8000, and now we can test it by hitting MGW via curl from a separate machine.

curl -i "http://<ip of machine running MGW>:8000/blog"


This request should succeed; you have now successfully proxied your application. To replicate this setup, copy ~/.edgemicro/config.yaml from the configured machine to ~/.edgemicro/ on an unconfigured machine and start it with the same command.
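The replication step above can be sketched with standard tools. Note that `user@mgw2` and the `<key>`/`<secret>` values are placeholders for your second host and for the credentials printed by `edgemicro configure`:

```shell
# Copy the generated Microgateway config to the second, unconfigured machine
# ("user@mgw2" is a placeholder for your second host).
scp ~/.edgemicro/config.yaml user@mgw2:~/.edgemicro/

# On the second machine, start MGW from the agent/ directory with the same
# key/secret pair generated during configuration.
ssh user@mgw2 'cd microgateway/agent && \
  env EDGEMICRO_KEY=<key> EDGEMICRO_SECRET=<secret> npm start'
```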

NGINX configuration

At this point, we have two separate machines running MGW that are proxying local instances of the same application, and we need to balance the traffic between them. We will use a basic NGINX load-balancing configuration to achieve this.

On a separate machine, install NGINX and open the configuration at /etc/nginx/nginx.conf for editing (this may require sudo access).

Add the following to the http {} block, replacing example-servers with your own cluster name and the <mgwX-ip> placeholders with the addresses of your MGW instances:

upstream example-servers {
    server <mgw1-ip>:8000;
    server <mgw2-ip>:8000;
}

server {
    listen       80;
    server_name  emgw;

    location /blog {
        proxy_pass http://example-servers;
    }
}
The upstream block is the load balancing configuration; it lists the servers that can fulfill incoming requests, which in this case are calls to our target application proxied by MGW. The server block configures NGINX to listen on port 80 and pass any call to the /blog endpoint upstream to our cluster of servers.

Give your developers the address of the NGINX server with the /blog base path; it will handle all traffic. Because this path is appended to the URL specified by the proxy_pass directive, it must match the base path unique to your Edge Micro aware proxy.

Start the NGINX server using the native Linux service manager (this may require sudo access). On other operating systems, use whatever daemon manager is available.

service nginx start


Test that your setup works by hitting NGINX via curl:

curl -i "http://<ip of machine running nginx>:80/blog"


You are now load balancing multiple instances of Edge Microgateway with NGINX to proxy your application.

Customizing NGINX load balancing

The load balancing configuration we just implemented is a basic one that does not take advantage of the options NGINX provides.

First, there are several load balancing methods to choose from, including round robin (the default), least-connected, and IP hash. Round robin simply moves down the list of upstream servers, passing each one a request as requests come in.

Least-connected load balancing sends more traffic to the upstream server with the fewest active connections. IP hash load balancing maps each client IP to a server in the list, so requests from a given client always go to the same server. To use the least-connected or IP hash method, specify it in the upstream block:

upstream <server cluster name> {
    least_conn;    # or: ip_hash;
    server <mgw1-ip>:8000;
    server <mgw2-ip>:8000;
}

Second, NGINX supports weighted load balancing, configured like so:

upstream <server cluster name> {
    server <mgw1-ip>:8000 weight=3;
    server <mgw2-ip>:8000;
    server <mgw3-ip>:8000;
}

Given five requests, three will go to the first server (with weight=3) and the remaining two will be distributed between the other servers. Weights can be used with any of the three load balancing methods.
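For example, a method and weights can be combined in one upstream block. The following is a hypothetical sketch (server addresses are placeholders) of a least-connected cluster in which the first server is eligible for three times the traffic:

```nginx
upstream example-servers {
    least_conn;                        # prefer the least-busy server
    server <mgw1-ip>:8000 weight=3;    # eligible for 3x the requests
    server <mgw2-ip>:8000;
    server <mgw3-ip>:8000;
}
```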

This Edge Microgateway and NGINX proxying stack is highly customizable and quick to set up. MGW provides security options, analytics functionality, traffic management features, and customizable plugins. It’s so lightweight that it can be run on the same server as your target application, eliminating the need for a dedicated proxy server for each target application instance. NGINX provides easily configured load balancing, SSL configuration, and much more. Together, they create a highly scalable, robust proxying solution for your APIs.

Worked with NGINX or another load balancing tool? Join the conversation in the Apigee Community.


Glh: Disrupting the Hotel Model with APIs

When hotel management company Glh went live with its first API (a hotel room availability API), it expected a trickle of interest. It got a torrent.

“We partnered with a travel agency and we thought we’d get a hundred calls. We got 150,000 calls a day, seven days a week,” said Glh enterprise architect Matthew Newton. “It took us completely by surprise.”

Glh was prepared, however. Its room availability API was running on Apigee Edge, which enabled the company’s API team to instantly see the high traffic and adjust accordingly without disrupting its back-end systems, Newton said.

“Apigee provides a brilliant platform to physically run our API on,” he said during an interview at I Love APIs 2015. “That’s been instrumental in making sure that … we haven’t been pulled back by small snags.”

APIs are playing a key role in helping Glh in its push to “disrupt the traditional hotel management model … and enrich the customer experience,” Newton added. 

APIs "get so many conversations started,” he said. “They are simple enough for everyone to understand; they are technical enough for those people that know how to use them to make real inroads and developments. It answers the question of ‘How can we get all of this information to work together?’"

An RFP Template for API Management

Key criteria for selecting a digital business platform

As Forrester analyst Randy Heffner wrote in a recent report, APIs are the underpinning of digital business platforms. They help enterprises prepare for an unpredictable future. So what goes into the evaluation of an API management platform? 

It's critical to carefully define all the requirements of building an API-powered digital business platform. This is time-consuming, however; there's a lot to consider:

  • what's the vendor's track record in API management?
  • what kind of architecture and deployment options does the vendor offer?
  • how can the platform leverage your existing technology assets?
  • what kind of analytics, security, and developer portal does the platform offer?

And there's much more. We've reduced the amount of time it takes to create an RFP for API management from hours to minutes with this RFP template. We hope it helps you on the path to building your API-powered digital business platform.


Belly & Apigee: Conserving Resources with API Management

Create new APIs while minimizing work from back-end teams

In the previous two posts in this series, Belly's director of platform Darby Frey discussed why the popular rewards program needed an API management platform and how Apigee Edge helped the company customize APIs for particular apps and devices.

Here, Darby discusses how Belly used Apigee Edge to quickly assemble, from existing microservices, new APIs for a mobile app the company was about to launch. Apigee Edge helped Belly avoid expending a lot of back-end developer effort, leading to a successful launch.

“Through Apigee, we were able to build out an API using components that already existed,” Darby said.

Belly and Apigee: Building APIs for Microservice-based Implementations

Why Belly needed API management for its microservices

In our first video post, Darby Frey, director of platform at rewards program provider Belly, explained why the company needed an API management platform for its microservices.

Here, Darby delves into the benefits of using Apigee Edge to customize APIs for particular apps and devices. Each of these customized APIs talks to one or more microservices.

Belly built separate APIs for iOS and Android. Apigee Edge enabled the company to customize APIs by device, which in turn helped Belly create optimal mobile experiences for its customers. Also, the ability to firewall customized APIs reduced regression testing, Darby said.

“We saw huge benefits from Apigee, because we can make those interfaces completely custom to the products,” Darby said. “Customized mobile APIs makes it easier to sleep at night.”

In the next post in this series, we discuss how Apigee helped Belly conserve precious back-end developer resources.

API Management: A Survival Imperative

A new video on the importance of APIs and API platforms

Analysts are predicting that 2016 will be the year of the API, but why? 

It’s no secret that APIs are a key to improving customer experiences at low cost and high speed, interacting with older IT systems, and building a better Internet of Things.

Leading companies have been doing this for some time now, and have realized the importance and benefits of APIs and API platforms.

What’s happening now? The rest of the market is beginning to understand the importance of having an API program. It’s dawning on companies in financial services, retail, telecommunications, CPG, and healthcare that APIs are key to keeping up with the disruptors. Without APIs, they just can’t move fast enough.

In this short video, I sat down with my colleague Denise Persson to discuss the importance of APIs and API management, and why investing in an API platform is a “survival imperative."

How does your organization think about the business of APIs? Join the conversation on the Apigee Community.

And, for an in-depth look at the features of sophisticated API management, download the free eBook, “The Definitive Guide to API Management."

Bringing API Management to AWS-Powered Backends

Webcast replay

API management makes it easy to expose and consume APIs from services built on Amazon Web Services. In this webcast replay, Apigee's Alan Ho and AWS's Chris Munns discuss using Apigee's API management for AWS-powered backends.

Alan and Chris cover:

  • an introduction to API management
  • the benefits of API management for AWS customers
  • performance and security best practices for AWS and Apigee
  • an Apigee/AWS Lambda demo