
The ConcurrentRatelimit policy enables you to throttle inbound connections to your backend services from API proxies running on Apigee Edge. In distributed environments, app traffic may be managed by many replicated API proxies. While each API proxy might be handling just a few connections, collectively, a set of replicated API proxies, all of which point to the same backend service, might swamp the capacity of the service to which they forward requests.

When the connection limit is exceeded, additional requests return HTTP response code 503:

503 Service Unavailable


<ConcurrentRatelimit name="ConnectionThrottler">
  <AllowConnections count="200" ttl="5"/>
  <TargetIdentifier name="MyTargetEndpoint" ref="header/qparam/flow variables"/>
</ConcurrentRatelimit>

Policy attachment

The ConcurrentRatelimit policy must be attached as a Step to three Flows on a TargetEndpoint: request, response, and DefaultFaultRule. (A validation error will be thrown at deployment time if the policy is attached to any other Flows, including any ProxyEndpoint Flows.)

Note that when an API proxy is re-deployed, the counter values are reset.

For example, to attach a ConcurrentRatelimit policy called ConnectionThrottler to a TargetEndpoint called MyTargetEndpoint, create the following TargetEndpoint configuration:

<TargetEndpoint name="MyTargetEndpoint">
  <DefaultFaultRule name="DefaultFaultRule">
    <Step><Name>ConnectionThrottler</Name></Step>
  </DefaultFaultRule>
  <PostFlow name="PostFlow">
    <Response><Step><Name>ConnectionThrottler</Name></Step></Response>
  </PostFlow>
  <PreFlow name="PreFlow">
    <Request><Step><Name>ConnectionThrottler</Name></Step></Request>
  </PreFlow>
</TargetEndpoint>


Configuring a ConcurrentRatelimit policy

Configure the ConcurrentRatelimit policy using the following elements.

  • name: The unique name of the policy.
  • AllowConnections: The number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time. The optional ttl attribute causes the counter to decrement automatically after the configured number of seconds, which cleans up any connections that were not decremented properly in the response path.
  • isDistributed: A Boolean that determines whether counter values are shared across instances of Apigee Edge's server infrastructure.
  • TargetIdentifier: The name of the TargetEndpoint to which the throttling should be applied.
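Pulling these elements together, a policy using all of them might look like the following sketch. The concrete ref value (a header reference) is illustrative only, and the element spelling for isDistributed follows the table above; consult the policy schema for the exact form:

```xml
<ConcurrentRatelimit name="ConnectionThrottler">
  <!-- Allow at most 200 concurrent backend connections; counter entries
       that were never decremented are cleaned up after 5 seconds. -->
  <AllowConnections count="200" ttl="5"/>
  <!-- Assumption: share counter values across Edge server instances. -->
  <isDistributed>true</isDistributed>
  <!-- Throttle connections to the TargetEndpoint named MyTargetEndpoint.
       The ref attribute value here is a hypothetical header reference. -->
  <TargetIdentifier name="MyTargetEndpoint" ref="request.header.X-Client-ID"/>
</ConcurrentRatelimit>
```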

Policy-specific Flow variables

The following predefined Flow variables are populated each time the policy executes:

  • concurrent.ratelimit.{policy_name}.allowed.count
  • concurrent.ratelimit.{policy_name}.used.count
  • concurrent.ratelimit.{policy_name}.available.count
  • concurrent.ratelimit.{policy_name}.identifier
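As a sketch of how these variables might be consumed, the following AssignMessage policy, attached to the target response flow, copies the remaining capacity reported by a policy named ConnectionThrottler into a response header. The policy name and header name are illustrative, not part of the ConcurrentRatelimit policy itself:

```xml
<AssignMessage name="AddThrottleHeader">
  <AssignTo createNew="false" transport="http" type="response"/>
  <Set>
    <Headers>
      <!-- Expose the available-connection count populated by the
           hypothetical ConnectionThrottler policy instance. -->
      <Header name="X-Connections-Available">{concurrent.ratelimit.ConnectionThrottler.available.count}</Header>
    </Headers>
  </Set>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</AssignMessage>
```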

Policy-specific error codes

The default format for error codes returned by Policies is:

{
  "code" : "{ErrorCode}",
  "message" : "{Error message}",
  "contexts" : [ ]
}
  • ConcurrentRatelimtViolation: ConcurrentRatelimit connection exceeded. Connection limit : {0}
  • InvalidCountValue: ConcurrentRatelimit invalid count value specified.
  • ConcurrentRatelimitStepAttachmentNotAllowedAtProxyEndpoint: Concurrent Ratelimit policy {0} attachment is not allowed at proxy request/response/fault paths
  • ConcurrentRatelimitStepAttachmentMissingAtTargetEndpoint: Concurrent Ratelimit policy {0} attachment is missing at target request/response/fault paths
  • InvalidTTLForMessageTimeOut: ConcurrentRatelimit invalid ttl value specified for message timeout
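These error codes can be matched in fault handling. As an illustrative sketch, a FaultRule on the TargetEndpoint could route ConcurrentRatelimit violations to a step that builds a friendlier 503 body; the step name SendThrottledResponse is hypothetical:

```xml
<FaultRules>
  <FaultRule name="connection_limit_exceeded">
    <!-- Runs only when the ConcurrentRatelimit policy raises its fault. -->
    <Condition>(fault.name = "ConcurrentRatelimtViolation")</Condition>
    <Step>
      <!-- Hypothetical policy that formats a custom 503 response. -->
      <Name>SendThrottledResponse</Name>
    </Step>
  </FaultRule>
</FaultRules>
```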

Policy schema

Each policy type is defined by an XML schema (.xsd). For reference, policy schemas are available on GitHub.
