The ConcurrentRatelimit policy enables you to throttle inbound connections to your backend services from API proxies running on Apigee Edge. In distributed environments, app traffic may be managed by many replicated API proxies. While each API proxy might be handling just a few connections, collectively, a set of replicated API proxies, all of which point to the same backend service, might swamp the capacity of the service to which they forward requests.

When the connection limit is exceeded, additional requests return HTTP response code 503:

503 Service Unavailable


<ConcurrentRatelimit name="ConnectionThrottler">
  <AllowConnections count="200" ttl="5"/>
  <Distributed>true</Distributed>
  <StrictOnTtl>false</StrictOnTtl>
  <TargetIdentifier name="MyTargetEndpoint" ref="header/qparam/flow variables"/>
</ConcurrentRatelimit>

Policy attachment

The ConcurrentRatelimit policy must be attached as a Step to three Flows on a TargetEndpoint: request, response, and DefaultFaultRule. (A validation error will be thrown at deployment time if the policy is attached to any other Flows, including any ProxyEndpoint Flows.)

Note that when an API proxy is re-deployed, the counter values are reset.

For example, to attach a ConcurrentRatelimit policy called ConnectionThrottler to a TargetEndpoint called MyTargetEndpoint, create the following TargetEndpoint configuration:

<TargetEndpoint name="MyTargetEndpoint">
  <DefaultFaultRule name="DefaultFaultRule">
    <Step>
      <Name>ConnectionThrottler</Name>
    </Step>
    <AlwaysEnforce>true</AlwaysEnforce>
  </DefaultFaultRule>
  <PostFlow name="PostFlow">
    <Response>
      <Step>
        <Name>ConnectionThrottler</Name>
      </Step>
    </Response>
  </PostFlow>
  <PreFlow name="PreFlow">
    <Request>
      <Step>
        <Name>ConnectionThrottler</Name>
      </Step>
    </Request>
  </PreFlow>
</TargetEndpoint>


Configuring a ConcurrentRatelimit policy

Configure the ConcurrentRatelimit policy using the following elements.

name
    The unique name of the policy. The characters you can use in the name are restricted to: A-Z0-9._\-$ %.
AllowConnections
    The number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time. The optional ttl attribute can be added to this element to cause the counter to decrement automatically after the configured number of seconds. This cleans up any connections that were not decremented properly in the response path.
Distributed
    A boolean that determines whether counter values are shared across instances of Apigee Edge's server infrastructure.
StrictOnTtl
    Set to true to honor the ttl attribute setting regardless of backend server throughput. Consider setting this property to true for high-throughput or low-latency backend services. Default is false.
TargetIdentifier
    The name of the TargetEndpoint to which the throttling should be applied.

Policy-specific Flow variables

A set of pre-defined Flow variables are populated each time the policy executes:

  • concurrent.ratelimit.{policy_name}.allowed.count
  • concurrent.ratelimit.{policy_name}.used.count
  • concurrent.ratelimit.{policy_name}.available.count
  • concurrent.ratelimit.{policy_name}.identifier
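
These variables can be read by other policies attached to the same flow. As an illustrative sketch only (the AssignMessage policy name and the X-Concurrency-Available header are assumptions, not part of this policy), an AssignMessage policy could surface the remaining connection capacity in a response header for a policy named ConnectionThrottler:

```xml
<AssignMessage name="AM-ExposeConcurrency">
  <AssignTo createNew="false" type="response"/>
  <Set>
    <Headers>
      <!-- Hypothetical header name; the variable follows the pattern
           above for a ConcurrentRatelimit policy named ConnectionThrottler -->
      <Header name="X-Concurrency-Available">{concurrent.ratelimit.ConnectionThrottler.available.count}</Header>
    </Headers>
  </Set>
</AssignMessage>
```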

Policy-specific error codes

The default format for error codes returned by policies is:

{
  "code" : "{ErrorCode}",
  "message" : "{Error message}",
  "contexts" : [ ]
}
ConcurrentRatelimtViolation
    ConcurrentRatelimit connection exceeded. Connection limit : {0}
InvalidCountValue
    ConcurrentRatelimit invalid count value specified.
ConcurrentRatelimitStepAttachmentNotAllowedAtProxyEndpoint
    Concurrent Ratelimit policy {0} attachment is not allowed at proxy request/response/fault paths
ConcurrentRatelimitStepAttachmentMissingAtTargetEndpoint
    Concurrent Ratelimit policy {0} attachment is missing at target request/response/fault paths
InvalidTTLForMessageTimeOut
    ConcurrentRatelimit invalid ttl value specified for message timeout
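
When the connection limit is exceeded, the ConcurrentRatelimtViolation fault is raised, and you can intercept it with a FaultRule to customize the 503 response. A minimal sketch (the FaultRule name and the AM-ThrottleResponse AssignMessage policy it references are hypothetical, not part of this policy):

```xml
<FaultRules>
  <FaultRule name="concurrent-limit-exceeded">
    <!-- Matches the fault raised when the connection limit is hit -->
    <Condition>(fault.name = "ConcurrentRatelimtViolation")</Condition>
    <Step>
      <!-- Hypothetical AssignMessage policy that builds a custom 503 body -->
      <Name>AM-ThrottleResponse</Name>
    </Step>
  </FaultRule>
</FaultRules>
```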

Policy schema

Each policy type is defined by an XML schema (.xsd). For reference, policy schemas are available on GitHub.
