API proxy performance dashboard
The API Proxy Performance dashboard tracks how much time API proxies spend processing requests and responses. This dashboard lets you visualize the network latency between Apigee Edge and backend servers.
The API Proxy Performance dashboard
The API Proxy Performance dashboard tracks these metrics:
| Metric | Description |
| --- | --- |
| Network latency | The total response time minus the request processing latency and the response processing latency. |
| Request processing latency | The amount of time from the point that the entire request from the client app is received on Apigee Edge to the time Edge begins sending it to the target. See also "What is request and response latency?" |
| Response processing latency | The amount of time from the point that the entire response from the target is received on Apigee Edge to the time Edge begins sending it to the client app. See also "What is request and response latency?" |
| Total errors | The total number of API requests that are unsuccessful; that is, the request does not deliver a response as desired by the end user. |
| 4xx errors | The number of API requests that resulted in HTTP 4xx errors. For a complete list of HTTP status codes, see "Status Code Definitions" in the HTTP specification. |
| 5xx errors | The number of API requests that resulted in HTTP 5xx errors. For a complete list of HTTP status codes, see "Status Code Definitions" in the HTTP specification. |
| Transactions per second (TPS) | The number of API requests and resulting responses per second. |
| Cache errors | The sum of errors generated on API calls that hit the cache. |
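As a minimal sketch of how the Network latency metric relates to the two processing-latency metrics in the table above (the function name and timing values are illustrative, not part of any Apigee API):

```python
# Network latency = total response time
#                   - request processing latency
#                   - response processing latency
def network_latency_ms(total_response_ms, request_processing_ms, response_processing_ms):
    """Subtract the time spent inside the proxy from the total round trip,
    leaving the time spent on the wire between Edge and the backend."""
    return total_response_ms - request_processing_ms - response_processing_ms

# Hypothetical timings for one API call, in milliseconds:
print(network_latency_ms(250, 40, 30))  # 180
```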
You can choose to view data for all API proxies in your organization, or you can select individual ones. Hover over the API Proxy Errors summary to see a breakdown of errors by HTTP error code.
This dashboard uses standard controls, like the date and data aggregation selectors, hovering over graphs for more context, and so on. To learn more, see Using the analytics dashboards.
What is request and response latency?
Request latency refers to the amount of time it takes for a request to be processed inside Apigee Edge. It is measured from the time the entire request is received by Apigee Edge to the time Apigee Edge begins to send the request to the backend target, as the following diagram illustrates. Response latency is calculated the same way, only on the response flow.
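The calculation described above can be sketched with two hypothetical timestamps (the variable names and epoch values are illustrative only, not read from any Apigee API):

```python
# Request latency: time between Edge receiving the entire client request
# and Edge beginning to send that request to the backend target.
request_received_at = 1_700_000_000_000   # entire request received by Edge (epoch ms)
target_send_started = 1_700_000_000_045   # Edge begins sending to the target (epoch ms)

request_latency_ms = target_send_started - request_received_at
print(request_latency_ms)  # 45
```

Response latency would be computed identically, using the timestamps at which the entire target response is received and at which Edge begins sending it back to the client app.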