
Managing APIs with Apigee Edge Microgateway and NGINX

A tutorial on creating a scalable, robust proxying solution
Mar 23, 2016

You have a set of APIs that you want to expose to your internal or external developers, but building security, analytics, or load balancing mechanisms isn’t on the schedule, so you opt to use an API gateway. It has to be lightweight and easy to deploy, without breaking the bank on spinning up new servers or cloud instances. Using Apigee Edge Microgateway in conjunction with NGINX, a web server, will achieve this and more.

Apigee Edge Microgateway (MGW) is a lightweight API gateway solution that provides developers with OAuth and API key security, analytics, spike arrest, quota, custom plugin integration, and much more all in a simple service that takes two minutes to set up. NGINX is a web server (among many other things), but in this implementation we will be using it specifically for load balancing.

We will start by setting up MGW and creating a proxy for your application, ensure it is working correctly, and then set up load balancing for your MGW instances with NGINX. The resulting system architecture will look something like this, with each node being its own server:


Microgateway configuration

On the machine that is running your target application, we’ll also run MGW. You must have Node installed (ideally version 4.2.1), and have Microgateway 1.1.2 in hand. Unzip the MGW package and navigate to the cli/bin directory. Using your Apigee Edge credentials, run the configuration command, like this:


./edgemicro configure -o testorg -e test -u test.account@example.com


This will do several things, including generating a key/secret pair that you should save for use in starting the proxy server. The pair appears in the output like this:

The following credentials are required to start edge micro

 key: 452800eab0f10ab5c95450dafe3ddc1a5b22a56d63396bc88215940a1

 secret: 6172281f8dd8ff59751a9b24efb89a1b5b4f9a1ccc8b33de8097666


With MGW successfully configured, you’ll need an “Edge Micro aware” proxy that fronts your target application. From the Apigee Edge API proxies dashboard, click the “+ API Proxy” button. We will use the Reverse Proxy setup, so continue from the Type step by pressing “next.”

In the Details step, fill in the fields based on the example below. It's important that you add the “edgemicro_” prefix to your desired name to make it Edge Micro aware.


For simplicity's sake, on the Security step, select “Pass through.” Continue through the remaining steps with the default selections and deploy the proxy. Now you have a proxy for your application!

On the machine on which you ran the ./edgemicro configure command, navigate to the agent/ directory of MGW and use your key/secret pair to start MGW.

env EDGEMICRO_KEY=452800eab0f10ab5c95450dafe3ddc1a5b22a56d63396bc88215940a1 EDGEMICRO_SECRET=6172281f8dd8ff59751a9b24efb89a1b5b4f9a1ccc8b33de8097666 npm start


The gateway should be listening on port 8000, and now we can test it by hitting MGW via curl from a separate machine.

curl -i "http://<ip of machine running MGW>:8000/blog"


This request should succeed; you have now proxied your application through MGW. To replicate this setup on another machine, copy ~/.edgemicro/config.yaml from the configured machine to ~/.edgemicro/ on the unconfigured one and start MGW with the same command.
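A minimal sketch of that replication step, with local temporary directories standing in for the two machines' home directories so it can run anywhere; on real hosts the copy would be an scp (host and user names below are placeholders):

```shell
# Stand-ins for ~ on the configured machine and on the new machine.
SRC_HOME=$(mktemp -d)
DST_HOME=$(mktemp -d)
mkdir -p "$SRC_HOME/.edgemicro" "$DST_HOME/.edgemicro"

# Stand-in for the config.yaml written by "edgemicro configure".
printf 'edge_config:\n  bootstrap: https://example.test/bootstrap\n' \
  > "$SRC_HOME/.edgemicro/config.yaml"

# Across real machines this cp becomes, for example:
#   scp ~/.edgemicro/config.yaml user@<new-host>:~/.edgemicro/
cp "$SRC_HOME/.edgemicro/config.yaml" "$DST_HOME/.edgemicro/config.yaml"
```

Once the file is in place, the second machine starts MGW with the same env EDGEMICRO_KEY=... EDGEMICRO_SECRET=... npm start command.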

NGINX configuration

At this point, we have two separate machines running MGW that are proxying local instances of the same application, and we need to balance the traffic between them. We will use a basic NGINX load-balancing configuration to achieve this.

On a separate machine, install NGINX and open the configuration at /etc/nginx/nginx.conf for editing (this may require sudo access).

Add the following to the http {} block, replacing example-servers with your own cluster name and each <mgwX-ip> with the address of one of your MGW instances:

upstream example-servers {
    server <mgw1-ip>:8000;
    server <mgw2-ip>:8000;
}

server {
    listen       80;
    server_name  emgw;

    location /blog {
        proxy_pass http://example-servers;
    }
}


The upstream block is the load balancing configuration; it indicates the location of servers that can fulfill the given requests, and for this case, they are calls to our target application proxied by MGW. The server block configures the NGINX server to listen on port 80 for traffic and to pass any call to the /blog endpoint upstream to our cluster of servers.

Expose the location of the NGINX server with the /blog basepath to your developers to handle all traffic. This path is appended to the URL specified by the proxy_pass property, so it should be the same as the base path unique to your Edge Micro aware proxy.

Start the NGINX server using the native Linux service manager (this might require sudo access); you can first validate the configuration syntax with nginx -t. On other operating systems, use whatever daemon manager is available.

service nginx start


Test that your setup works by hitting NGINX via curl:

curl -i "http://<ip of machine running nginx>:80/blog"


You are now load balancing multiple instances of Edge Microgateway with NGINX to proxy your application.

Customizing NGINX load balancing

The load balancing configuration we just implemented is a basic one that does not take advantage of the options NGINX provides.

First, there are a few load balancing methods to choose from: round robin (the default), least-connected, and IP hash. Round robin simply moves down the list of upstream servers, passing each one a request as requests come in.
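As a toy sketch of the default behavior (server names are placeholders, and this only illustrates the cycling, not NGINX's actual implementation):

```shell
# Requests cycle through the upstream list in order.
servers="mgw1 mgw2"
log=""
i=0
for request in req1 req2 req3 req4; do
  set -- $servers        # positional params = the upstream list
  shift $(( i % 2 ))     # advance through the list cyclically
  log="$log$request->$1 "
  i=$(( i + 1 ))
done
echo "$log"
```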

Least-connected load balancing passes more traffic to whichever upstream server has the fewest active connections. IP hash load balancing maps the client IP to a server in the list, always sending requests from a single client to the same server. To use the least-connected or IP hash method, indicate it in the upstream block.

upstream <server cluster name> {
    least_conn;    # or: ip_hash;
    server <mgw1-ip>:8000;
    server <mgw2-ip>:8000;
}


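The sticky property of IP hashing can be sketched as follows. This toy hash (a CRC via cksum, modulo the number of servers) is an assumption for illustration only; NGINX's real ip_hash uses its own hash over the client address. The point is simply that the same client always lands on the same server:

```shell
# Map a client address onto one of two upstream slots.
pick_server() {
  h=$(printf '%s' "$1" | cksum | awk '{print $1}')
  echo "server$(( h % 2 + 1 ))"
}

pick_server 10.0.0.1
pick_server 10.0.0.1   # same client -> same server, every time
```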
Secondly, the load balancing configuration allows for weighted load balancing, configured like so:

upstream <server cluster name> {
    server <mgw1-ip>:8000 weight=3;
    server <mgw2-ip>:8000;
    server <mgw3-ip>:8000;
}



Given five requests, three will go to the first server (weight=3) and the remaining two will be distributed between the other servers, one each. Weights can be combined with any of the three load balancing methods.
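A toy count of that 3/1/1 split over ten requests (NGINX actually interleaves the order across a cycle; only the per-cycle counts are modeled here, and the server names are placeholders):

```shell
# Tally how a weight=3 / weight=1 / weight=1 cluster splits requests.
a=0; b=0; c=0
for i in 0 1 2 3 4 5 6 7 8 9; do
  case $(( i % 5 )) in
    0|1|2) a=$(( a + 1 )) ;;   # three slots per cycle of five
    3)     b=$(( b + 1 )) ;;
    4)     c=$(( c + 1 )) ;;
  esac
done
echo "mgw1=$a mgw2=$b mgw3=$c"
```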

This Edge Microgateway and NGINX proxying stack is highly customizable and quick to set up. MGW provides security options, analytics functionality, traffic management features, and customizable plugins. It’s so lightweight that it can be run on the same server as your target application, eliminating the need for a dedicated proxy server for each target application instance. NGINX provides easily configured load balancing, SSL configuration, and much more. Together, they create a highly scalable, robust proxying solution for your APIs.

Worked with NGINX or another load balancing tool? Join the conversation in the Apigee Community.

