Managing API Performance
This set of Learning Paths outlines common scenarios API developers face when designing and optimizing APIs for performance and describes how the API Platform helps address these scenarios.
Why manage API performance?
- Don't Have A Meltdown - Why technical and business teams need to measure traffic flow.
- Protecting Users, Apps and APIs from Abuse - Any workflow that creates or consumes content, shares content, sends or receives communications can be vulnerable to attack.
Protect your back-end services from abuse
- Throttle at the proxy level: Control Traffic Flow
- Throttle for customer, tier, IP, region, etc.: Protect your API from Traffic Spikes
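As a concrete illustration, on a gateway that supports Apigee-style policies, proxy-level throttling is typically expressed as a spike-arrest policy attached to the request flow. This is a minimal sketch; the policy name and the rate of 30 requests per second are illustrative assumptions, not values from this page:

```xml
<!-- Smooths traffic spikes by capping throughput at the proxy level.
     "30ps" means roughly 30 requests per second; use "pm" for per-minute. -->
<SpikeArrest name="SpikeArrest-ProtectBackend">
  <Rate>30ps</Rate>
</SpikeArrest>
```

Spike arrest smooths bursts rather than counting requests per client, so it complements, rather than replaces, per-consumer quotas.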
Configure service levels for API resource bundles
- Rate-limit by developer, app, and/or customer org (business consumption) - Set Up an API Product
- Rate-limit specific clients and reset your rate-limit count using the Reset Quota policy
- Enable developers to create apps that consume your APIs - Provisioning API Products, Developers and Apps
Control the rate of traffic sent to or received by an API with quotas
- Protect a system from receiving more requests than it can handle - Rate Limit API Traffic using Quotas
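To make this concrete, a quota on a gateway with Apigee-style policies can be sketched as follows. The policy name, the 1000-requests-per-hour allowance, and the use of the client ID as the counting identifier are illustrative assumptions:

```xml
<!-- Counts requests per client and rejects traffic once the allowance
     is exhausted for the current interval. -->
<Quota name="Quota-PerApp">
  <Allow count="1000"/>
  <Interval>1</Interval>
  <TimeUnit>hour</TimeUnit>
  <!-- Maintain a separate counter for each client app. -->
  <Identifier ref="client_id"/>
</Quota>
```

Unlike spike arrest, a quota enforces a business-level allowance over a longer window, which is why it is the natural mechanism for tiered service levels.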
Perform root-cause analysis to locate performance bottlenecks
- Learn how to get data on how messages are processed, from the time a call is received until the response is sent - Troubleshoot API Traffic using Trace
Learn how to use the Trace tool in the Trace and Revisions video
Configure load balancing and failover across endpoints
- Creating dynamic API flows using conditions
- Load balancing API traffic across multiple back-end servers
- Pipeline requests to back-end services to minimize network latency
- Reroute traffic based on network conditions
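Load balancing and failover of the kind described above are usually configured on the target endpoint. The sketch below assumes an Apigee-style target-endpoint schema; the server names, path, and failure threshold are illustrative:

```xml
<!-- Distributes outbound traffic across two back-end servers and
     fails over when a server repeatedly errors. -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Algorithm>RoundRobin</Algorithm>
      <Server name="backend-server-1"/>
      <Server name="backend-server-2"/>
      <!-- Take a server out of rotation after 3 consecutive failures. -->
      <MaxFailures>3</MaxFailures>
    </LoadBalancer>
    <Path>/v1/service</Path>
  </HTTPTargetConnection>
</TargetEndpoint>
```

Round-robin is the simplest algorithm; weighted or least-connection strategies are common alternatives when back-end capacity is uneven.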
- Optimize response caching based on data freshness
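Response caching is also typically expressed as a policy. This sketch assumes an Apigee-style response-cache schema; the cache key fragment and the 300-second expiry are illustrative assumptions chosen to show how freshness is controlled:

```xml
<!-- Caches back-end responses keyed by request URI so repeated calls
     are served from the gateway instead of the back end. -->
<ResponseCache name="Cache-Responses">
  <CacheKey>
    <KeyFragment ref="request.uri"/>
  </CacheKey>
  <ExpirySettings>
    <!-- Entries expire after 300 seconds, bounding data staleness. -->
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
</ResponseCache>
```

The expiry window is the main freshness lever: shorter timeouts keep data current at the cost of more back-end traffic.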