Concurrent Rate Limit policy



What

Throttles inbound connections from your API proxies running on Apigee Edge to your backend services.

Need help deciding which rate limiting policy to use? See Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies.

Where

This policy must be attached as a Step in all three of the following locations. A validation error is thrown at deployment time if the policy is attached to any other flows, including any ProxyEndpoint flows. For details, see Usage notes.

  • TargetEndpoint request flow
  • TargetEndpoint response flow
  • TargetEndpoint DefaultFaultRule

Sample

<ConcurrentRatelimit name="ConnectionThrottler" >
   <AllowConnections count="200" ttl="5" />
   <Distributed>true</Distributed>
   <StrictOnTtl>false</StrictOnTtl>
   <TargetIdentifier name="MyTargetEndpoint"  ref="header/qparam/flow variables" /> 
</ConcurrentRatelimit>

Element reference

The element reference describes the elements and attributes of the ConcurrentRatelimit policy.

<ConcurrentRatelimit async="false" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">
   <DisplayName>Concurrent Rate Limit 1</DisplayName>
   <AllowConnections count="200" ttl="5"/>
   <Distributed>true</Distributed>
   <StrictOnTtl>false</StrictOnTtl>
   <TargetIdentifier name="default"></TargetIdentifier> 
</ConcurrentRatelimit>

<ConcurrentRatelimit> attributes

<ConcurrentRatelimit async="false" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">
async

Set to true to specify that the policy should be run in a thread pool different from the pool servicing the request/response flow. Default is false.

Note: This setting is only used for internal optimization. Contact Apigee support at the Support Portal for more information.

Default: false
Presence: Optional

continueOnError

Most policies are expected to return an error when a failure occurs (for example, when a Quota is exceeded). Setting this attribute to true lets Flow execution continue on failure.

Default: false
Presence: Optional

enabled

Determines whether a policy is enforced or not. If set to false, a policy is 'turned off' and not enforced (even though the policy remains attached to a Flow).

Default: true
Presence: Optional

name

The internal name of the policy. Characters you can use in the name are restricted to: A-Z0-9._\-$ %. However, the Edge management UI enforces additional restrictions, such as automatically removing characters that are not alphanumeric.

Optionally, use the <DisplayName> element to label the policy in the management UI proxy editor with a different, natural-language name.

Default: N/A
Presence: Required

<DisplayName> element

A natural-language name that labels the policy in the management UI proxy editor. If omitted, the policy name attribute is used.

<DisplayName>Custom label used in UI</DisplayName>
Default: Policy name attribute value.
Presence: Optional
Type: String

<AllowConnections> element

Provides the number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time.

<AllowConnections count="200" ttl="5"/>
Default: N/A
Presence: Optional
Type: N/A

Attributes

count

Specifies the number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time.

Default: N/A
Presence: Optional

ttl

Include to automatically decrement the counter after the number of seconds specified. This can help to clean up any connections that were not decremented properly in the response path.

Default: N/A
Presence: Optional

<Distributed> element

Specifies whether counter values are shared across instances of Apigee Edge's server infrastructure.

<Distributed>true</Distributed>
Default: false
Presence: Optional
Type: Boolean

<StrictOnTtl> element

Set to true to honor the <AllowConnections> ttl attribute setting regardless of backend server throughput. Consider setting this property to true for high-throughput or low-latency backend services.

<StrictOnTtl>false</StrictOnTtl>
Default: false
Presence: Optional
Type: Boolean

<TargetIdentifier> element

Provides the name of the TargetEndpoint to which the throttling should be applied.

<TargetIdentifier name="default"></TargetIdentifier>
Default: N/A
Presence: Optional
Type: N/A

Attributes

name

Specifies the name of the TargetEndpoint to which the throttling should be applied.

Default: N/A
Presence: Optional

ref

A variable, such as a header, query parameter, or flow variable, that identifies the TargetEndpoint to which the throttling should be applied.

Default: N/A
Presence: Optional

Flow variables

A set of predefined flow variables is populated each time the policy executes:

  • concurrent.ratelimit.{policy_name}.allowed.count
  • concurrent.ratelimit.{policy_name}.used.count
  • concurrent.ratelimit.{policy_name}.available.count
  • concurrent.ratelimit.{policy_name}.identifier
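
For example, the available count can be surfaced to clients in a response header. The following sketch assumes a ConcurrentRatelimit policy named ConnectionThrottler (as in the sample above); the AssignMessage policy name and header name are illustrative:

<AssignMessage name="Report-Available-Connections">
  <AssignTo createNew="false" transport="http" type="response"/>
  <Set>
    <Headers>
      <Header name="X-Connections-Available">{concurrent.ratelimit.ConnectionThrottler.available.count}</Header>
    </Headers>
  </Set>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</AssignMessage>

Attach a policy like this in the ProxyEndpoint response flow, after the target has responded and the variables have been populated.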

Error codes

The default format for error codes returned by policies is:

{
  "code": "{ErrorCode}",
  "message": "{Error message}",
  "contexts": [ ]
}
Error Code Message
ConcurrentRatelimtViolation ConcurrentRatelimit connection exceeded. Connection limit : {0}
InvalidCountValue ConcurrentRatelimit invalid count value specified.
ConcurrentRatelimitStepAttachmentNotAllowedAtProxyEndpoint Concurrent Ratelimit policy {0} attachment is not allowed at proxy request/response/fault paths
ConcurrentRatelimitStepAttachmentMissingAtTargetEndpoint Concurrent Ratelimit policy {0} attachment is missing at target request/response/fault paths
InvalidTTLForMessageTimeOut ConcurrentRatelimit invalid ttl value specified for message timeout

Schemas

See our GitHub repository samples for the most recent schemas.

Usage notes

In distributed environments, app traffic may be managed by many replicated API proxies. While each API proxy might be handling just a few connections, collectively a set of replicated API proxies that all point to the same backend service can swamp the capacity of that service. When the connection limit is exceeded, additional requests receive an HTTP 503 Service Unavailable response.
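
Because the policy raises a fault when the limit is exceeded, you can also return a friendlier payload to callers. The following sketch is illustrative: the RaiseFault policy name and payload text are placeholders, and the fault name matches the ConcurrentRatelimtViolation error code listed above.

<RaiseFault name="Return-Throttled-Message">
  <FaultResponse>
    <Set>
      <StatusCode>503</StatusCode>
      <ReasonPhrase>Service Unavailable</ReasonPhrase>
      <Payload contentType="text/plain">The backend is at capacity. Please retry later.</Payload>
    </Set>
  </FaultResponse>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</RaiseFault>

Reference the policy from a FaultRule with a condition on the fault name:

<FaultRule name="ConcurrentRatelimitFault">
  <Step><Name>Return-Throttled-Message</Name></Step>
  <Condition>(fault.name = "ConcurrentRatelimtViolation")</Condition>
</FaultRule>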

Policy attachment details

The ConcurrentRatelimit policy must be attached as a Step to three Flows on a TargetEndpoint: request, response, and DefaultFaultRule. (A validation error is thrown at deployment time if the policy is attached to any other Flows, including any ProxyEndpoint Flows.) For example, to attach a ConcurrentRatelimit policy called ConnectionThrottler to a TargetEndpoint called MyTargetEndpoint, create the following TargetEndpoint configuration:

<TargetEndpoint name="MyTargetEndpoint">
  <DefaultFaultRule name="DefaultFaultRule">
    <Step><Name>ConnectionThrottler</Name></Step>
    <AlwaysEnforce>true</AlwaysEnforce>
  </DefaultFaultRule>
  <PostFlow name="PostFlow">
    <Response>
      <Step><Name>ConnectionThrottler</Name></Step>
    </Response>
  </PostFlow>
  <PreFlow name="PreFlow">
    <Request>
      <Step><Name>ConnectionThrottler</Name></Step>
    </Request>
  </PreFlow>
  <HTTPTargetConnection>
    <URL>http://api.mybackend.service.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>

Counters are reset when the API proxy is redeployed.

Related topics

Quota policy

Spike Arrest policy

Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies

