- Is the slowness being seen by just one app, or by multiple apps? If one app, it might be a problem with that app.
- If it is being seen by multiple users across multiple apps, and those users seem to be in the same geographic location, it could be a network issue.
- If you're not seeing either of these issues, it could be a problem on the gateway. If you recently added or updated a policy, it could be configured incorrectly.
- If the total response time is reported as high but the average endpoint response time has not changed, the problem might be inside Apigee. If the average endpoint response time is also high, the issue could be in the network between Apigee and the target server, or in the target application server itself.
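The last check above can be sketched as a small triage function. This is a hypothetical helper, not part of Apigee; the metric names echo Apigee's `total_response_time` and target response time measures, and the tolerance is a placeholder you would tune.

```python
# Hypothetical triage sketch: compare current total and endpoint (target)
# response times against their baselines to guess where latency grew.
# The 1.25 tolerance is an arbitrary placeholder, not an Apigee default.

def locate_latency(total_ms, target_ms, baseline_total_ms, baseline_target_ms,
                   tolerance=1.25):
    """Return a rough guess at where added latency is coming from."""
    total_up = total_ms > baseline_total_ms * tolerance
    target_up = target_ms > baseline_target_ms * tolerance
    if total_up and not target_up:
        # Gateway-side time grew while the backend stayed flat.
        return "apigee"
    if total_up and target_up:
        # The backend itself, or the network to it, slowed down.
        return "target or network"
    return "no significant change"

print(locate_latency(900, 300, 400, 290))  # total up, target flat
print(locate_latency(900, 800, 400, 300))  # both up
```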
- Use contextual analytics to identify which API is showing an increased response time
- Use Trace to figure out if the increased response time is happening in your live traffic
- Generate a latency report to figure out exactly which API and resource is causing the issue
Use contextual analytics to identify the issue
- Click API in the main menu.
- In the Performance section, click the Metrics menu and choose Average Response Time.
The chart will show you the response time for all your APIs. Look for spikes or gradual increases in response time. You should be able to quickly see which API is having an issue.
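"Look for spikes" can be made concrete with a quick sanity check on the numbers behind the chart. This is a minimal sketch with invented sample data: it flags any hourly average that exceeds the series median by more than 50%, a threshold chosen for illustration.

```python
# Minimal spike detector over hourly average response times.
# The 1.5x-median threshold and the sample values are assumptions.
from statistics import median

def find_spikes(avg_response_times, ratio=1.5):
    """Return the indexes of data points that look like latency spikes."""
    baseline = median(avg_response_times)
    return [i for i, v in enumerate(avg_response_times)
            if v > baseline * ratio]

hourly_avgs = [210, 205, 220, 650, 215, 208]  # ms, one point per hour
print(find_spikes(hourly_avgs))  # index 3 stands out
```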
Run a trace to identify the bottleneck
Trace tells you whether the problem being reported is happening right now. It lets you pinpoint exactly where the slowdown is occurring in your live traffic.
- Click the API that seems slower than usual (in this example we used StickerAPI). The API's details page will appear.
- Click the Trace button to set up a trace session for the API. This will help you better understand where the bottleneck is occurring.
For example, in this trace session you might see that a POST call is taking much longer than GET calls.
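A pattern like that is easy to confirm once you jot down per-request timings from the trace session. The helper and the sample timings below are invented for illustration; only the (verb, elapsed-ms) pairs would come from your actual trace.

```python
# Hypothetical sketch: average latency per HTTP verb from timings
# copied out of a trace session. The sample numbers are made up.
from collections import defaultdict

def avg_latency_by_verb(requests):
    """requests: iterable of (verb, elapsed_ms) pairs."""
    samples = defaultdict(list)
    for verb, elapsed_ms in requests:
        samples[verb].append(elapsed_ms)
    return {verb: sum(v) / len(v) for verb, v in samples.items()}

trace = [("GET", 120), ("GET", 140), ("POST", 1450), ("POST", 1600)]
print(avg_latency_by_verb(trace))
```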
Create a custom report
Create a new report to measure max latency by app and resource. This report lets you figure out if there is a pattern of service issues. You can use this as yet another data point to identify the issue.
Set up the basics and data display
- On the Analytics tab, click the + Custom Report button.
- Enter Latency Report as the Report Name.
- Enter a brief description of the report in the Report Description field.
- Select Column as the Chart type.
- Select prod from the Environment menu.
- Select Hourly as the Data Aggregation Interval.
- Select total_response_time as the Primary Measure.
Set up aggregation and Y-axis measures
- Select Max as the Aggregation Function.
- Select Total Response Time as Measure 1. This will be the primary measure for the report.
Set up chart drill down
These selections determine how you can refine the view of your data.
- Choose API Proxy as Drilldown 1. This dimension lets you see the response time of all the APIs in your org.
- Choose Request Path as Drilldown 2. This dimension breaks up the responses by the actual resources in an API, so you can see the response time by resource.
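The report settings above can be summarized as a single configuration object. The field names below are assumptions modeled loosely on Apigee's custom-report payloads, not a verified wire format; check your management API version before using anything like this programmatically.

```python
# Sketch of the Latency Report defined above as a data structure.
# Field names are assumptions, not a documented Apigee schema.
latency_report = {
    "displayName": "Latency Report",
    "comments": "Max latency by API proxy and resource",
    "chartType": "column",
    "environment": "prod",
    "timeUnit": "hour",
    "metrics": [
        {"name": "total_response_time", "function": "max"},
    ],
    # Drilldown order matters: API proxy first, then request path (resource).
    "dimensions": ["apiproxy", "request_path"],
}

print(latency_report["dimensions"])
```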
Analyze the report
The new latency report will show you the response times by API and then by each resource within an API. By combining this information with what you know about your network architecture, you can quickly find issues that may be related to your infrastructure.
- Locate the worst performing API (that is, the one with the highest latency).
- Drill down by selecting the API and view the worst performing resource.
Decide on an action
Now that you know which resource is performing badly, you can examine your network to see if there's a service issue. You can also add a third dimension, such as developer_app, to see which apps are affected by the slow resource, or developer, to identify which developers are affected.