
What's your API's Cachiness Factor?

Jun 26, 2012

 "Cachiness factor" is the degree to which your API design supports the caching of responses. Low cachiness means that a relatively higher than optimal number of requests is forwarded to the back end for retrieving data; a high cachiness factor means that the number of requests serviced through the cache layer is reduced and optimized.

Every time a request is sent to the API provider's endpoint, the provider incurs the cost of servicing it. Investing in a good caching mechanism reduces the number of requests that hit the endpoint, leading to faster response times, lower servicing costs, and reduced bandwidth. Resources can then be spent on servicing requests that would otherwise have had to compete with cacheable ones.

Cachiness in an API design refers to understanding how a piece of retrieved data can be reused to serve other API requests. That understanding can be turned into a set of actions that store the retrieved copy of the data in an optimal form for reuse. This, coupled with insights from API usage analytics, can deliver direct benefits in app performance and operational costs.

An API proxy can be designed to do a number of things when a request arrives:

Determine the quality or fidelity of the data requested by the app or end user

This information can then be used to:
- Transform the API request to retrieve the data from the endpoint data store at the highest possible fidelity and breadth
- Save the retrieved data in the proxy cache
- Extract the fidelity and breadth the original request asked for and send that as the response to the app or end user

For example, if the request is for weather patterns in a city, the system can potentially map the response to one covering all zipcodes in that city and store it accordingly in the cache.
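A minimal sketch of this widen-and-extract pattern in proxy code, assuming a hypothetical fetch_city_weather backend call that returns forecasts for every zipcode in a city; the cache structure and TTL are illustrative:

    import time

    CACHE = {}          # city -> (expiry_epoch, {zipcode: forecast})
    TTL_SECONDS = 600   # assumed freshness window for weather data

    def fetch_city_weather(city):
        """Placeholder for the backend call; assumed to return a dict
        mapping every zipcode in the city to its forecast."""
        raise NotImplementedError

    def get_weather(city, zipcode):
        # Widen the request: fetch the whole city at once...
        entry = CACHE.get(city)
        if entry is None or entry[0] < time.time():
            entry = (time.time() + TTL_SECONDS, fetch_city_weather(city))
            CACHE[city] = entry
        # ...then extract only the breadth the caller asked for.
        return entry[1][zipcode]

Subsequent requests for any other zipcode in the same city are then served entirely from the cache.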

Pre-fetch based on temporal and spatial locality
Predict, based on usage patterns, what the next request is likely to be, and pre-fetch that data from the endpoint into the cache.

For example, given a request for browsing a list of plasma TVs on sale at a retailer, it might make sense to cache the entire response set and serve subsequent requests for more data (e.g. the next set of TVs) from the cache.
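One way to sketch this page-ahead pre-fetching, assuming a paged product endpoint; fetch_page and the cache layout are illustrative:

    from concurrent.futures import ThreadPoolExecutor

    CACHE = {}  # (query, page) -> list of items
    _pool = ThreadPoolExecutor(max_workers=2)

    def fetch_page(query, page):
        """Placeholder for the backend call returning one page of results."""
        raise NotImplementedError

    def _warm(query, page):
        if (query, page) not in CACHE:
            CACHE[(query, page)] = fetch_page(query, page)

    def get_page(query, page):
        _warm(query, page)
        # Exploit temporal/spatial locality: the next page is the
        # likeliest follow-up request, so fetch it in the background.
        _pool.submit(_warm, query, page + 1)
        return CACHE[(query, page)]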

Pre-fetch based on similarity
Use similarity between data sets to predict the next request and pull that data into the proxy cache ahead of time.

For example, for the scenario in which our user requests a list of TVs from one manufacturer, it might make sense to pre-fetch a list of TVs from another manufacturer with a similar product line and store this information in the cache.
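A sketch of similarity-driven pre-fetching; the hand-written similarity map below is a stand-in for whatever model or curation your data actually supports:

    from concurrent.futures import ThreadPoolExecutor

    # Illustrative similarity map; in practice this could be model-derived.
    SIMILAR = {"samsung": ["lg", "sony"], "lg": ["samsung", "vizio"]}

    CACHE = {}  # manufacturer -> list of TVs
    _pool = ThreadPoolExecutor(max_workers=2)

    def fetch_tvs(manufacturer):
        """Placeholder for the backend call."""
        raise NotImplementedError

    def _warm(manufacturer):
        if manufacturer not in CACHE:
            CACHE[manufacturer] = fetch_tvs(manufacturer)

    def get_tvs(manufacturer):
        _warm(manufacturer)
        # Warm the cache for brands a comparison shopper is likely to view next.
        for other in SIMILAR.get(manufacturer, []):
            _pool.submit(_warm, other)
        return CACHE[manufacturer]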

Parameters-based selection
If your API supports “select” over your data through parameters, another option for improving the cachiness of your API is to retrieve the entire data set (within certain bounds) from the back end, store it in your cache, and return only the subset appropriate to the request. Similarly, filtering can be performed at the proxy rather than at the endpoint, increasing the cachiness factor of the API.
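A sketch of serving select and filter parameters from a cached superset; the collection, field names, and query shape are assumptions for illustration:

    CACHE = {}  # collection name -> full (bounded) list of records

    def fetch_all(collection):
        """Placeholder: pulls the entire bounded data set from the back end."""
        raise NotImplementedError

    def query(collection, select=None, **filters):
        # One backend round trip populates the superset...
        if collection not in CACHE:
            CACHE[collection] = fetch_all(collection)
        rows = CACHE[collection]
        # ...then selection and filtering happen at the proxy.
        rows = [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]
        if select:
            rows = [{k: r[k] for k in select if k in r} for r in rows]
        return rows

    # e.g. query("tvs", select=["model", "price"], brand="lg")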

Using Data Analysis to improve cachiness

You can also use data analysis techniques to understand request patterns for your data and use this information to pre-fetch or over-fetch data from the endpoint to increase the cachiness factor of your API.
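A minimal sketch of mining an access log for which request tends to follow which, which could then drive pre-fetch rules; the log format is an assumption:

    from collections import Counter, defaultdict

    def follow_up_counts(log):
        """log: iterable of (user_id, timestamp, request_key) tuples,
        assumed sorted by timestamp. Returns, for each request, a count
        of the requests that followed it within a user's stream."""
        last_seen = {}
        follows = defaultdict(Counter)
        for user, _ts, key in log:
            prev = last_seen.get(user)
            if prev is not None and prev != key:
                follows[prev][key] += 1
            last_seen[user] = key
        return follows

    # follows["/tvs?page=1"].most_common(1) might suggest pre-fetching page 2.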

Caching Diffs
Another possible technique is building a mechanism whereby updated data is automatically pushed from the endpoint data store to the cache as new updates are generated in the back end. At the cache level, instead of expiring the entire data set, the part of the data set least likely to remain relevant is automatically expired, and the new “diff” is appended to the cached data set.
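A sketch of such a diff-aware cache, assuming the back end pushes (added, removed_ids) deltas and approximating "least likely to be relevant" with least-recently-read; all names are illustrative:

    from collections import OrderedDict

    class DiffCache:
        """Holds a data set keyed by id, applies backend diffs in place,
        and evicts the least recently read entries rather than expiring
        the whole set."""

        def __init__(self, max_items):
            self.max_items = max_items
            self.items = OrderedDict()  # id -> record, in LRU order

        def get(self, item_id):
            record = self.items.get(item_id)
            if record is not None:
                self.items.move_to_end(item_id)  # mark as recently relevant
            return record

        def apply_diff(self, added, removed_ids):
            for item_id in removed_ids:
                self.items.pop(item_id, None)
            for item_id, record in added.items():
                self.items[item_id] = record
                self.items.move_to_end(item_id)
            # Expire only the least relevant part of the data set.
            while len(self.items) > self.max_items:
                self.items.popitem(last=False)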

The right technique will vary from API to API. You might need to experiment with several to identify the one that makes sense for your scenario.
