API Management

A Checklist for Every API Call

What every stakeholder in the API lifecycle needs to know

Most API gateway solutions can handle basic API proxying. But APIs have become the fabric of the digital enterprise, so companies need comprehensive API management functionality that addresses the concerns and use cases of several stakeholders.

An API management solution enables the entire API lifecycle, helping the people responsible for it deliver APIs securely and effectively, in a form that's easy for app developers to use.

This new Apigee technical brief covers use cases for:

  • security architects or CISOs
  • developers or enterprise architects
  • operations engineers
  • API business or product owners

Download the brief.

Black Friday: Protect Your APIs from Cyber Threats

Webcast replay

Bad bots can scrape your inventory and pricing information, and steal consumer credentials. Bots can also put stress on backend services and impact your SLA to customers and partners, especially around events like Black Friday.

To protect your APIs, you need a new, data-driven approach to identify and stop bad bots automatically.

In this webcast replay, Apigee's Subra Kumaraswamy and David Andrzejek discuss:

  • the nature of bot attacks and typical use cases
  • how to intelligently detect bad bots while letting good bots in
  • how to implement technologies in your security stack to protect against bad bots

API-First Development with Apigee-127 (and Swagger-Node)

Chapter 4: A new philosophy

This is the fourth and final post in a series examining the considerations that went into building an API to power our intelligent API management platform solution.

In the previous post of this series, we explored frameworks for creating APIs in which the code is generated from the API definition and in which there’s no API definition at all. Here we’ll wrap up by describing a new approach which we proposed based on our experience and on experience with popular frameworks—one in which the API design drives the code.

This new approach is different from the “code-first” approach we described in an earlier post in that the API definition is created before the code may be invoked. It is also different from the code generation approach because there is no intermediate step of generated code to fall out of sync.

This idea led us to the following philosophy:

  1. The API must be designed at the start
  2. The contract that represents the API design must drive the API runtime
  3. The API design will change, so the framework must adapt without the code, design, and documentation falling out of sync

When we say that the design “drives” the code, we mean just that—the document that describes the API design is parsed and turned into a set of data structures that the runtime uses, in real time, to classify, validate, and route each incoming API call.

This is the part that most people misunderstand. This approach is not “model-driven,” in which there is some separate artifact that is generated from the code, or that is used to generate code. Rather, the API design is consumed every time the API server is started and used to decide how to process every incoming API call. If there are existing systems that work this way, we are not aware of them.

Keeping the definition and implementation in sync

We believe that there are several advantages to this mechanism, and very few costs. For instance, although the API definition is validated every time the API server is started, in our implementation the API server starts as quickly as any Node.js app—which is to say, very, very quickly.

Most importantly, with this approach it is not possible for the definition of an API and the implementation to fall out of sync. In this case, the API specification is more than a description of a possible truth. It is the definition of truth.

The resulting bit of technology is called “swagger-node.” It consists of a validation component and a runtime component.

The validation component is a Node.js module that parses and validates the Swagger 2.0 API definition and turns it into an easily navigable data structure. Since it’s written in JavaScript, the same code is used to validate the API design in the server code and in the interactive Swagger editor, which enables developers to see how their API documentation renders as they type.

The runtime component uses the validated data structure to wire the API design into whatever server the developer is using. Since there are so many servers for Node.js, swagger-node works with the most popular ones, including Express, Hapi, Restify, and Sails, as well as any Connect-based middleware.

For instance, when Express is used, swagger-node works as “middleware” that plugs into the HTTP call processing chain, validates each API call, and routes it to some Node.js code that actually handles the API call.
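
To make this concrete, here is a minimal, hand-rolled sketch of the pattern (this is not swagger-node's actual code; the file layout and helper names are invented for illustration):

var express = require('express');
var fs = require('fs');
var yaml = require('js-yaml');

// Parse the API design once, at server startup.
var api = yaml.safeLoad(fs.readFileSync('./api/swagger.yaml', 'utf8'));

// Controller functions, keyed by the operationId fields in the Swagger document.
var controllers = {
  getApplications: function (req, res) { res.json([]); }
};

var app = express();

// Middleware: classify, validate, and route every call using the parsed design.
app.use(function (req, res, next) {
  var pathItem = api.paths[req.path]; // real code would also match templated paths
  var operation = pathItem && pathItem[req.method.toLowerCase()];
  if (!operation) {
    return res.status(404).json({ error: 'not part of the API definition' });
  }
  // A real implementation would also validate parameters and bodies here.
  controllers[operation.operationId](req, res);
});

app.listen(3000);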

Although swagger-node is built for Node.js, there is no reason why the “design-driven” philosophy it uses could not be extended to any other environment that supports dynamic invocation, such as Java, Go, or even Erlang. In fact, the swagger-node project has inspired swagger-inflector (in Java) and Connexion (in Python).

Pulling it together: Apigee-127

In addition to the basic API design, the API specification can also include additional API metadata.

For instance, the Swagger 2.0 specification allows the API definition to be annotated with security information, such as what flavor of OAuth to use for particular API calls. It also allows for vendor-specific extensions. These can be used to specify additional information about the API contract, additional documentation fields, or information about policies that apply to the API traffic.

Based on these concepts, we also assembled “Apigee-127,” or “a127.” (The name comes from the IP address of “localhost”—127.0.0.1—and the idea that 127 would allow developers to easily run many of the API management functions of our Apigee Edge product on their laptops).

For instance, Apigee-127 builds upon swagger-node to allow annotations in the Swagger document for the following (a sketch follows the list):

  • OAuth authentication, with the ability to require different “scopes,” or no authentication at all, on a call-by-call basis
  • API key validation
  • Response caching
  • Spike arresting (service-level traffic limits designed to arrest out-of-control API clients)
  • Quotas (application- and user-specific traffic limits designed to address business requirements and allow APIs to be monetized)
  • API analytics (which are gathered at runtime and pushed asynchronously to the Apigee cloud)
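
A path in the Swagger document might be annotated roughly along these lines (the extension names shown here are illustrative, not the exact a127 vocabulary):

paths:
  /applications:
    get:
      operationId: getApplications
      # vendor extensions attach management policies to this call
      x-a127-apply:
        cache: {}         # response caching
        spikeArrest: {}   # service-level traffic limiting
      security:
        - oauth2:
            - read        # require the "read" OAuth scope for this call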

By using Apigee-127, a software developer for the Node.js platform can design an API, handle non-functional requirements without writing additional code, and quickly jump between the API definition and the code using the very short “compile”-edit-debug loop that Node.js enables.

Conclusion

Based on our experiences designing and implementing web APIs, we felt that there was room in the world for a different approach for defining APIs and connecting them to code that would result in higher-quality APIs.

The resulting tools—swagger-node and Apigee-127—are available on NPM and GitHub and have been used by many developers to build productive APIs. Please try them out and let us know how they can be better!

Acknowledgements

Ideas are cheap, but execution isn’t. The people who actually created Apigee-127, swagger-node, and the Swagger editor are Scott Ganyo, Jeff West, Jeremy Whitlock, and Mohsen Azimi, and they were guided by Marsh Gardiner, Ed Anuff, and myself. Special thanks also to Swagger's Tony Tam for his support and encouragement of this project along the way.

Photo: Zhang Wenjie/Flickr

Financial Services and APIs to Share the Spotlight in 2016

“APIs and open platforms will take center stage.”

That’s the first prediction Forrester makes in the recent report “Predictions 2016: Financial Services Execs Wake Up To Digital Transformation.”

The report, which details eight digital shifts the research firm expects to see in retail financial services next year (and is summarized here), aligns with trends in many industries where mobile has changed consumer expectations. But Forrester’s prediction suggests a host of new opportunities for financial services firms that master APIs in the coming year.

The power of platform

Mobile has become table stakes for financial services providers. In the Apigee Institute’s 2014 Digital Impact Survey, 72% of adult U.S. smartphone owners reported that apps had changed how they bank. Nearly all (92%) expected banks to offer key functions via apps by 2016.

But pumping out individual apps isn’t enough to meet customer demands and resist competitive pressure. The MIT Center for Information Systems Research (CISR) cuts to the chase in “Thriving with Digital Disruption: Five Propositions” with this quote: “Don’t have a digital platform? You’re cooked!”

The authors explain:

Because the digital economy makes rapid innovation possible, it also makes it essential. Often lost in the rush to innovate, however, is the fact that an underlying digitized platform is table stakes for rapid innovation. Consider the development of a new customer mobile app: if it can’t be integrated onto the platform, you end up with data that can’t easily be analyzed and transactions that take time to process (if they can be processed at all). With a digitized platform and its associated APIs, the app can plug in to the platform and immediately start delivering service, speeding rollout and resulting in great experience.

Real-time unicorns

The ability to quickly plug into a platform and immediately consume (and contribute) data that can shape a customer experience is something IDC emphasizes in no uncertain terms.  

On the one hand, real-time services are a matter of defense against disruptors like Venmo (PayPal’s peer-to-peer payment app).

“It won't be long before customers find it absolutely preposterous that it takes two to three days for their money to get where they want it to go,” observes one bank executive quoted in the IDC report.

However, when “real time goes prime time,” as Forrester describes it, financial services firms will unlock new opportunities for mobile app-driven business. For example, “combining real-time data with predictive analytics will enable a firm to offer a personalized auto loan or insurance policy to a customer about to start a test drive at a car dealership.”

The appeal of this scenario is one reason fin tech has been a hotbed for investment in digital start-ups—more than $12 billion in funding that has helped create 36 “unicorns” in this sector (each of them valued at $1 billion or more). IDC predicts that 2016 will be the year when there will be “a shakeout of winners and losers.”

APIs as matchmakers

This brings home the importance of APIs. Every startup is a potential acquisition. Every narrowly focused digital native offering a single great consumer experience is a potential asset to an established firm’s digital portfolio.

But the pace of change in the market no longer accommodates “technology integration” projects that may take 18 months or longer.

On the start-up side, the hopeful ranks of digital native would-be disruptors are largely built in an API-centric way. They’re ready to be “plugged in” to an incumbent’s platform—if the established firm has taken the initiative to create an API-centric platform that’s ready for them.

“APIs are perhaps the most important technology in digital business design,” IDC says.

If the research firm is right, financial services firms that have been slow to embrace APIs will struggle to keep up with the fin tech startup ecosystem that's been reshaping consumers' expectations.

They may also miss a singular opportunity to capitalize on a shakeout among those very same disruptors.

Image: Aha-Soft/The Noun Project

How the API Tier Trumps Monolithic Web Architecture

The evolution to today's unified app interaction channel

Previously in this video series, we discussed the differences between APIs and SOA, and how SOA simply wasn't designed for horizontal scale. 

In our latest fireside chat, I sat down with my colleague Brian Pagano to discuss how we progressed from a monolithic web architecture to today's unified app interaction channel architecture.

Each step in the evolution is not simply a bolt-on to a previous architecture; a modern API architecture requires thinking "outside in," and it requires consideration of the needs of people beyond certain trust boundaries. An API tier requires a completely different way of thinking about exposure and consumption.

For more on the evolution of the modern API architecture and the difference between APIs and SOA, download the free eBooks, "APIs are Different than Integration" and "Beyond ESB Architecture with APIs."

Edge Microgateway: Hybrid Cloud API Management

Announcing our 1.1.0 version release

We're pleased to announce a new version of Apigee Edge Microgateway, a lightweight solution that enables enterprises to manage their APIs in a hybrid deployment. 

API traffic flows through a gateway running close to the application while being managed centrally through Apigee Edge, which enables organizations to securely deliver and manage APIs, with agility at scale. Customers use Microgateway (which we released in limited availability back in July) to manage their internal APIs/microservices.

The first release had the following features:

  • Authentication and authorization using OAuth 2.0 protocols
  • Analytics
  • Quota
  • Spike arrest

Customers appreciated its simplicity and ease of installation, but also said it lacked a few features that would help them really get the most out of it. So we incorporated their feedback and released a new 1.1.0 version, with the following new features:

  • Authentication and authorization using simple API keys
  • Plugin support that enables custom code in Node.js for both request and response paths (see the sketch below)
  • Ease of installation, such that it takes one command and less than a minute to get started
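
As a rough sketch, a plugin might take a shape along these lines (the hook names and signatures here are assumptions for illustration, not taken from the product documentation):

// A hypothetical Microgateway plugin skeleton.
module.exports.init = function (config, logger, stats) {
  return {
    // runs on the request path, before the call is proxied to the target
    onrequest: function (req, res, next) {
      logger.info('incoming ' + req.method + ' ' + req.url);
      next();
    },
    // runs on the response path, before data is returned to the client
    onresponse: function (req, res, next) {
      res.setHeader('x-processed-by', 'my-plugin');
      next();
    }
  };
};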

We now have a lightweight but powerful gateway for API management in a hybrid cloud environment. Microgateway is available for all Edge Startup, Edge SMB, and Edge Enterprise customers. Please visit our community forum to discuss your use cases or ask questions.

How to Drive Adoption with Developer Portals

Webcast replay

A developer portal is the face of your API program. It provides everything that internal, partner, and third-party developers need to build new apps and experiences. In this webcast replay, Apigee's Tej Ravindra, Chris Novak, and Martin Nally discuss what makes a successful developer portal, both internal and external. Find out from developers about what it takes to help them adopt your APIs quickly—and help you innovate faster.

Tej, Chris, and Martin discuss:

  • the definition of a developer program
  • the reasons to build a developer program—both internal and external
  • what differentiates successful developer programs
  • key considerations while launching third-party developer programs

API-First Development with Apigee-127 (and Swagger-Node)

Chapter 3: More models for building APIs

This is the third in a series of posts examining the considerations that went into building an API to power our intelligent API management platform solution.

In the previous installment, I explored how we started to close the loop between API design and implementation and the role Swagger played. I also discussed a framework for building APIs in which the code defines the API.

In this post, we’ll look at two more models—one in which the code is generated from the API definition and one which has no API definition at all—and we’ll tee up a third approach that gives us the advantages of both.

Generated source code: the API generates the code

This category is represented by “IDL” (Interface Definition Language) systems such as SOAP, CORBA, and the many RPC systems that have been developed over the years.

In these types of systems, the interface is formally defined in a separate file, and then used to generate client- and server-side “stubs” that connect the bits and bytes sent over the network to actual code written by a developer. Developers on both sides then incorporate their stubs into the source code that they build and ship.
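
For readers who have not worked with one of these systems, an interface definition might look something like this (a CORBA-flavored sketch, purely illustrative):

// applications.idl: a stub generator turns this definition into
// client- and server-side code that developers then fill in.
interface Applications {
  string get_application(in string uuid);
  void create_application(in string name);
};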

The advantages of the generated-code approach are performance and simplicity for the developer. Since the stub code is generated at compile time, the generator can take care to make it efficient and pre-compile it, and startup is fast because there is no need to parse the IDL at runtime. Furthermore, once the developer specifies the IDL, the interface they need to code against to make the stubs invoke their code is typically simple.

However, with generated stub code, the possibility of the code falling out of sync with the specification is still present, because the synchronization only happens when the code is re-generated. Good RPC systems make this simple, whereas more primitive systems that generate code and expect the user to modify it are much harder to maintain.

In addition, we have the same documentation problems that we had before. If we annotate the IDL to contain the docs, and then generate the “real” docs from the IDL, then how do we keep everything in sync? Do we re-generate the stubs and re-build the product just because we changed the docs? If we don’t, then will we miss the re-generation the next time we make a “real” change, and end up with clients and servers that don’t interoperate?

And dealing with generated code in a modern software development process is painful. Do you manually run the stub generator and check in the results? If you do, then once again you'd better not forget to run it, and better remember never to modify the generated code manually. Or do you make the build system run it on every build? That may mean creating and testing custom build steps for whatever build system you use.

One advantage of this mechanism is the performance gain received by building the stubs at build time, rather than when the system starts up. That made a lot of sense in the 1990s. But today, with CPUs immensely faster, the time needed to parse and validate an IDL file and generate a run-time representation is negligible, and languages like JavaScript (and even Java) are dynamic enough that they can work without loads of generated code.

Node.js itself is a great example of this—even a simple Node.js-based server loads and compiles tens of thousands of lines of JavaScript code when it is first started, and yet it's rare for the startup of a Node.js-based application to take more than one or two seconds. (Meanwhile, even a simple Java app, fully compiled, takes seconds if not more to start—but I digress.)

No connection at all: the code is the API

Many other server-side frameworks, especially popular Node.js-based frameworks like Express, do not have the concept of an API description at all. In the Express framework, the developer simply wires JavaScript functions to URI paths and verbs using a set of function calls, and then Express handles the rest.

Here’s a simple example of code written in the Express framework:


// GET method route
app.get('/', function (req, res) {
  res.send('GET request to the homepage');
});

// POST method route
app.post('/', function (req, res) {
  res.send('POST request to the homepage');
});

These frameworks are very nice for quickly building APIs and apps without a lot of pre-planning and configuration. However, they don't offer any model for automatically generating documentation or for sharing the API design with a larger community.

So, you might ask, why not write code like the above, which admittedly every Node.js developer in the world knows how to write, and then annotate it with Swagger later? Because by doing that, we end up with the API defined in two places—once in Swagger and again in the code.

Which one is correct? The code, obviously! But what if the code doesn’t implement the API design that everyone agreed on after carefully following the design principles? Now we’re back to the problem that we were describing at the very beginning of this series.

Based on our experience and on experience with popular frameworks, we proposed a third approach, where the API design drives the code.

We’ll explore this approach in the next and final installment of this series.

Photo: Moyan Brenn/Flickr

API-First Development with Apigee-127 (and Swagger-Node)

Chapter 2: Closing the loop between API design and implementation

This is the second in a series of posts examining the considerations that went into building an API to power our intelligent API management platform solution.

In the previous installment of this series, I explored why and how Apigee set out to build an API to power its platform product. Here I’ll describe how we started to close the loop between API design and implementation, the role Swagger played, and alternative frameworks for building APIs.

We set out to create a way to build APIs that solved a broken design-to-code workflow. Design and code needed to happen in parallel, but we also needed to adhere to our central tenets of API design:

  1. API design is important; it’s the language that developers use to communicate with the API.
  2. The style of APIs that we design is centered around well-known URIs and verbs.
  3. Documentation is important and should be a first-class citizen.

We also saw an opportunity to close the gap between the API design and the implementation.

At the same time, we expanded our use of Node.js. We saw Node.js as an opportunity to quickly build APIs for a variety of situations, so we used it for internal projects and incorporated it into the Apigee product stack. We learned that while it’s not the right choice for everything, it’s ideal for quickly building network-oriented code that performs well.

Finally, and possibly most importantly, we began to engage with the Swagger community, and started to work with version 2.0 of Swagger. Swagger is a community-driven way to design APIs. A Swagger document describes, in a formal way, all the URI paths for an API, all the query parameters, the request and response bodies, and basically everything that a client needs to know in order to successfully make API calls.

Swagger grew out of a company called Wordnik, which was having similar API challenges. They created Swagger to meet their own needs, opened it up to the community, and enthusiastically worked with us to create the Swagger 2.0 working group. This led to a description format that is designed to be more “writable,” because the API may be specified in YAML as well as in JSON.
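
As a small illustration, a Swagger 2.0 definition written in YAML might begin like this (a minimal sketch, not a complete document):

swagger: "2.0"
info:
  title: Applications API
  version: "1.0"
paths:
  /applications:
    get:
      description: Return a list of the names of the applications
      responses:
        "200":
          description: A list of application names
          schema:
            type: array
            items:
              type: string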

One alternative: the code defines the API

Most existing frameworks for creating APIs fall into one of two basic categories: either the code defines the API, or the API generates the code.

The first category is represented by Java-based frameworks such as JAX-RS, and extensions such as the original Swagger for Java. In these frameworks, the developer writes code and annotates the code to specify additional attributes of each API call (such as specific names and types of various parameters, descriptions, and additional validation rules).

For instance, here's an example of an API call written in Java using the Swagger framework:


@Path("/my-resource")
@Api(value="/my-resource”,
    description="Rest api for do operations on admin",
    produces=MediaType.APPLICATION_JSON)
@Produces({ MediaType.APPLICATION_JSON })
class MyResource{
   @ApiOperation(value = "Get specific element",
     httpMethod = "GET",
     notes = "Fetch the element of the collection",
     response = Response.class)
   @ApiResponses(value = {
  @ApiResponse(code = 200, message = ”Element found"),
  @ApiResponse(code = 404, message = “Element not found"),
   @ApiResponse(code = 500, message = “Server error due to encoding"),
   @ApiResponse(code = 400, message = "Bad request: decoding error"),
   @ApiResponse(code = 412, message = ”Prereq: Required data not found")
   })
   public Response get(
  @ApiParam(value = "UUID of the element", required = true)
  @PathParam("uuid") String uuid) {

Then, a separate tool is run that introspects the code to generate documentation, client-side code, and other artifacts from the source code. Sometimes this introspection happens at runtime, as with the original Swagger, and sometimes it happens at deployment time or compile time—but the effect is the same.

The advantage of this approach is that the code and documentation are never out of sync, as long as the documentation is always re-generated (and updated on the web site, or wherever) whenever the code changes.

However, with this approach, there is no formal mechanism for the developer of the code to have a “conversation” with other parties regarding the API, other than via the source code itself.

There’s also not a great mechanism for the technical writer to tie quality documentation to the structure of the API. If the writing team wants to use the generated documentation as the basis of the “real” documentation, then that means that the tech writers need access to the code base in order to update the bits of documentation that are kept in the code.

The only real option is to have all parties collaborate on the code base itself, and to keep abreast of any changes (and, if necessary, to stop them before going live). This works particularly badly in closed-source situations, or even in open-source situations when the code that runs the API is sufficiently large or complex that the average “user” cannot be expected to keep track of it. It is similarly cumbersome if the technical writers don’t want to have to learn how to build and test the codebase in order to check in documentation changes.

Also, what if you don’t want the documentation to match the code? Perhaps there are parts of the API that you don’t want to document, at least not right away. Or perhaps there is a need to change the docs without doing a code release. All of these things end up driving us to the conclusion that, for “real products” at least, the API docs can’t simply be generated from the code and then put up on the web site for everyone to see.

The next installment in this series explores another existing framework for creating APIs, in which the API generates the code.

Photo: Matthew Powell/Flickr

API-First Development with Apigee-127 (and Swagger-Node)

Chapter 1: How and why we built an API to power our platform

This is the first in a series of posts examining the considerations that went into building an API to power our intelligent API management platform solution.

Starting in early 2011, we set out to re-design our product for managing APIs. The project had a lot of requirements based on our experience running our product on our own and in our customers’ data centers. One of the requirements was to make sure that everything it could do was powered by a well-designed, intuitive API.

Furthermore, the bar for this API was pretty high. We’ve made a big deal at this company about APIs (hence the name) and have been clear that the most successful APIs are ones that are designed in a clear and consistent way. We knew that this management API would be used by everyone who uses our platform, which we hoped would be tens of thousands of developers, if not more. We also knew that we had a reputation to maintain, so the design had to be pretty good.

That meant following certain patterns that we had seen used in other successful APIs, and following them in a consistent way. For instance:

  • To create a new “application,” POST to a URI named /applications
  • To get a particular application, GET from a URI named /applications/{name}
  • To delete an application, DELETE that same URI
  • To update that application, PUT to the same URI

There are other aspects to the pattern, but it was one that was well understood around the industry and which, if followed consistently, would result in an intuitive API that didn’t impose a huge cognitive load. Once you understood “applications,” the same patterns would work for “developers” or “messages,” and so on.

Building the API

Now the next step was to build it. We had a small but effective team located in Bangalore that was tasked with building this new platform. While they were devoted to building a great product, they didn’t have as much experience with our particular way of designing APIs as I did, nor did they feel as much passion about it. More importantly, they had many more urgent things to do than debate the finer points of the design of each API. Plus, they were 12 (and a half!) time zones away.

I felt that I had to design the API myself. So I did what generations of API designers have done, which is to open up a text editor and start typing. I ended up with little snippets like many others have written:

GET /applications Return a list of the names of the applications

GET /applications/{id} Return the application with the specified ID

POST /applications Create a new application

And so on.

The developers then did what developers do, which was to turn those high-level descriptions into APIs. We were building this API in Java, so the code looked a bit like this:

@GET
@Path("/applications")
public List<String> getApplications() {
  // Blah blah blah
}

@GET
@Produces({"application/json","application/xml"})
@Path("/applications/{id}")
public Application getApplication(@PathParam("id") String id) {
  // Blah blah blah
}

@POST
@Produces({"application/json","application/xml"})
@Consumes({"application/json","application/xml"})
@Path("/applications")
public void createApplication(Application app) {
  // Bleh bleh bleh
}

So far so good—we have an API!

The disconnect

Of course, things didn’t always work out so cleanly. For instance, a developer might want to add a method that returns all the applications with their full details, not just their names.

According to our API design, this would happen by adding a query parameter called “expand” to the “/applications” URI. A GET with “expand=true” would return more information than one that did not.
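
In the same snippet style as before, the agreed-upon design would have read:

GET /applications Return a list of the names of the applications

GET /applications?expand=true Return the full details of each application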

The developer, however, might have a different idea, and instead add a completely new URI like this:

@GET
@Path("/getApplicationsWithDetails")
public List<Application> getApplicationsWithDetails() {
  // Blah blah blah
}

(Note: Our actual developers understood our standards better than that and wouldn’t have done something that far from our design principles, but I’ve seen things like this happen elsewhere.)

As the product grew, and as the team grew, these kinds of problems kept cropping up.

We eventually grew to realize that API design is like user interface design. Just as we would not roll out a new user interface without letting our user interaction people work on it, and wouldn’t put something on the corporate web site without approval from our chief graphic designer, we would not want to put out an API without input, or even approval, from the small, passionate group of API experts on our product team.

But that didn’t happen. New APIs kept cropping up, and the PMs and others would ask questions like, “Did you know about that API? Why did they decide to call it ‘audit’ rather than ‘audittrails’? Why is that a POST and not a DELETE?” And so on.

Plus, we didn’t have an effective way to decide on the API design before the code was done and pushed to production. In lots of cases, if we changed it “now” we’d have to stop-ship the whole release, or go back to customers and tell them that we made an incompatible change to an API which would break all the scripts that they wrote, not to mention our own UI and other tools.

The documentation

What made things extra difficult was that we’d now have to go back and document these APIs. Since we had built them in Java and the truth was in the code—and nowhere else—that meant the process was expensive:

  1. Either the (expensive) engineers would have to sit down and write out, in a text doc of some sort, exactly what the API did, what all the parameters were, example inputs and outputs, error conditions, and so on. Some engineers are also great writers who love to describe what they did, but that’s not everyone. Furthermore, documenting every aspect of every parameter is no fun for anyone to have to do more than once.

  2. Or the (expensive) technical writers would have to read the code and figure those things out, which slowed them down since the answers were not always obvious and required asking more questions of the engineers.

  3. Or, someone like me would have to read the code, read the docs, and do a two-way mental diff until we felt that the docs were correct enough.

But Roy Fielding said ...

The API design principles I just described are popular in the world of APIs, and in my experience represent what nearly every developer today recognizes as an “API.” The term “REST” is often used to describe such APIs, but that’s not what “REST” is really supposed to mean.

It is popular in API design circles today to advocate for “hypermedia-driven” APIs, which are constructed just like web pages, as a series of documents containing hyperlinks. The more extreme proponents of hypermedia claim that documenting such APIs is unnecessary, because a smart client will discover what they do by following links, just as a human discovers what a web app does by looking at rendered HTML pages.

While the APIs described here are not hypermedia-based and thus aren’t actually “REST,” I believe that the principles are the same. The idea that a set of hypermedia links in an API response obviates the need to create API documentation for humans just doesn’t make sense.

At this point, we knew that we could come up with something better—something that adhered to our central tenets of API design and helped us solve some of these problems.

In the next installment of this series, I’ll describe how we started to close the loop between API design and implementation, the role Swagger played, and alternative frameworks for building APIs.

Photo: webtreats/Flickr