
API-First Development with Apigee-127 (and Swagger-Node)

Chapter 3: More models for building APIs
Gregory Brail
Nov 04, 2015

This is the third in a series of posts examining the considerations that went into building an API to power our intelligent API management platform solution.

In the previous installment, I explored how we started to close the loop between API design and implementation and the role Swagger played. I also discussed a framework for building APIs in which the code defines the API.

In this post, we’ll look at two more models—one in which the code is generated from the API definition and one which has no API definition at all—and we’ll tee up a third approach that gives us the advantages of both.

Generated source code: the API generates the code

This category is represented by “IDL” (Interface Definition Language) systems such as SOAP, CORBA, and the many RPC systems that have been developed over the years.

In these types of systems, the interface is formally defined in a separate file, and then used to generate client- and server-side “stubs” that connect the bits and bytes sent over the network to actual code written by a developer. Developers on both sides then incorporate their stubs into the source code that they build and ship.
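
To make the pattern concrete, here is a minimal sketch in JavaScript of how generated stubs are typically consumed. The module and function names are hypothetical stand-ins for whatever a real IDL compiler (SOAP, CORBA, or another RPC toolkit) would actually emit:

// Hypothetical module emitted by an IDL compiler from an interface file;
// the names below are illustrative, not from any real toolkit.
var stubs = require('./generated/weather-service-stubs');

// Server side: the developer plugs business logic into the generated skeleton.
stubs.createServer({
  getForecast: function (request, callback) {
    callback(null, { city: request.city, tempC: 21 });
  }
}).listen(9090);

// Client side: the generated client stub hides the wire protocol entirely.
var client = stubs.connect('localhost:9090');
client.getForecast({ city: 'Rome' }, function (err, forecast) {
  console.log(forecast.tempC);
});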

The advantages of the generated-code approach are performance and simplicity for the developer. Because the stub code is generated at compile time, the generator can take care to make it efficient and pre-compile it, and startup at runtime is fast because there is no need to parse the IDL. Furthermore, once the developer has specified the IDL, the interface they need to code to in order to have the stubs invoke their code is typically simple.

However, with generated stub code, the possibility of the code falling out of sync with the specification is still present, because the synchronization only happens when the code is re-generated. Good RPC systems make this simple, whereas more primitive systems that generate code and expect the user to modify it are much harder to maintain.

In addition, we have the same documentation problems that we had before. If we annotate the IDL to contain the docs, and then generate the “real” docs from the IDL, then how do we keep everything in sync? Do we re-generate the stubs and re-build the product just because we changed the docs? If we don’t, then will we miss the re-generation the next time we make a “real” change, and end up with clients and servers that don’t interoperate?

And dealing with generated code in a modern software development process is painful. Do you manually run the stub generator and check in the results? If you do, then once again you'd better not forget to run it, and you'd better remember never to modify the generated code by hand. Or do you make the build system run it on every build? That may mean creating and testing custom build steps for whatever build system you use.
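
In a Node.js project, for example, a custom build step might be nothing more than a small script like the sketch below, wired in with an npm "prebuild" hook. The idl-stub-gen command is hypothetical, standing in for whatever generator the RPC toolkit provides, and even then nothing stops someone from hand-editing the output in generated/.

// scripts/generate-stubs.js -- run as part of the build, for example via an
// npm "prebuild" script. The "idl-stub-gen" command is hypothetical; a real
// project would invoke whatever generator its RPC toolkit ships with.
var execSync = require('child_process').execSync;

execSync('idl-stub-gen --input api.idl --output generated/', { stdio: 'inherit' });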

One advantage of this mechanism is the performance gained by building the stubs at build time, rather than when the system starts up. That made a lot of sense in the 1990s. But today, with CPUs immensely faster, the time needed to parse and validate an IDL file and generate a run-time representation is negligible, and languages like JavaScript (and even Java) are dynamic enough to work without loads of generated code.

Node.js itself is a great example: even a simple Node.js-based server loads and compiles tens of thousands of lines of JavaScript code when it first starts, yet it's rare for a Node.js-based application to take more than a second or two to start up. (Meanwhile, even a simple Java app, fully compiled, takes seconds if not longer to start, but I digress.)

No connection at all: the code is the API

Many other server-side frameworks, especially popular Node.js-based frameworks like Express, do not have the concept of an API description at all. In the Express framework, the developer simply wires JavaScript functions to URI paths and verbs using a set of function calls, and then Express handles the rest.

Here’s a simple example of code written in the Express framework:


var express = require('express');
var app = express();

// GET method route
app.get('/', function (req, res) {
  res.send('GET request to the homepage');
});

// POST method route
app.post('/', function (req, res) {
  res.send('POST request to the homepage');
});

// Start the server (the port number is arbitrary)
app.listen(3000);

These frameworks are very nice for quickly building APIs and apps without a lot of pre-planning and configuration. However, they don't offer any model for automatically generating documentation or for sharing the API design with a larger community.

So, you might ask, why not write code like the example above, which admittedly every Node.js developer in the world knows how to write, and then annotate it with Swagger later? Because by doing that, we end up with the API defined in two places: once in Swagger and again in the code.

Which one is correct? The code, obviously! But what if the code doesn’t implement the API design that everyone agreed on after carefully following the design principles? Now we’re back to the problem that we were describing at the very beginning of this series.
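
As a concrete, hypothetical illustration, here is an Express route annotated after the fact with a Swagger comment of the kind that tools such as swagger-jsdoc scan. Nothing forces the two definitions to agree, and in this sketch they have already drifted apart:

/**
 * @swagger
 * /users/{id}:
 *   get:
 *     description: Returns a single user
 *     parameters:
 *       - name: id
 *         in: path
 *         required: true
 *         type: string
 */
// The annotation documents GET /users/{id}, but the code below actually
// registers GET /user/:id. The docs and the implementation now disagree.
app.get('/user/:id', function (req, res) {
  res.send({ id: req.params.id });
});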

Based on our experience and on experience with popular frameworks, we proposed a third approach, where the API design drives the code.
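
As a quick preview, in a design-driven framework such as swagger-node the Swagger document itself names the controller and operation for each path (via fields like x-swagger-router-controller and operationId), and the code simply exports matching functions. A minimal, hypothetical controller might look like this:

// api/controllers/hello.js -- referenced from the Swagger file rather than
// wired up with app.get(); the route, parameters, and validation all come
// from the API design.
module.exports = {
  hello: function (req, res) {
    // The framework parses and validates parameters against the Swagger
    // definition before the controller runs.
    var name = req.swagger.params.name.value || 'stranger';
    res.json({ message: 'Hello, ' + name + '!' });
  }
};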

We’ll explore this approach in the next and final installment of this series.

