Rate limits

The API uses a number of safeguards against bursts of incoming traffic to help maximise its stability. We have several limiters in the API, including:

Standard rate limiting

This places a limit on the number of requests that can be made within a given timeframe.

With the exception of the Call connector endpoint (see Concurrency limiting below), all endpoints are rate limited at 30 requests per second, or 1,800 requests per minute.

There is limited burst capacity to allow for momentary spikes in usage (up to 50 requests per second), as long as the overall rate is not exceeded within a set window.

Treat these limits as maximums and don’t generate unnecessary load. See Handling limiting gracefully for advice on handling 429s.
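One way to stay within these limits client-side is a token bucket: it refills at the sustained rate (30 tokens per second) up to a burst capacity (50 tokens), and each request consumes one token. The sketch below is illustrative, not part of Tray's SDK; tune the rate and capacity with headroom for your own integration.

```python
import time

class TokenBucket:
    """Client-side throttle: sustained 30 req/s with bursts up to 50."""

    def __init__(self, rate=30.0, capacity=50):
        self.rate = rate          # tokens added per second (sustained rate)
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill based on time elapsed, capped at the burst capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to accrue
            time.sleep((1 - self.tokens) / self.rate)
```

Call `bucket.acquire()` before each API request; bursts drain the bucket immediately, and sustained traffic settles at the refill rate.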

If you suddenly see a rising number of rate limited requests, please contact support.

In addition to the limits above, you should also consider the rate limiting policies of the third-party services you are using in your integrations, which are separate from the limits on any of Tray's endpoints.

Please see our Rate limiting (3rd party) page for more details.

Concurrency limiting

The Call connector endpoint is NOT rate limited; instead, it uses a concurrency limit.

This limits concurrent calls to 1,000, i.e. at most 1,000 requests can be in flight at any given time.

It is very unlikely that you will run into this limit unless you are making a large number of long-lived requests at the same time.

If you need to request an increased concurrency limit, please contact your customer success representative.
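A simple way to respect a concurrency limit client-side is a semaphore that caps the number of in-flight requests. This is a hypothetical sketch: `call_connector` stands in for your own HTTP call to the Call connector endpoint, and the cap of 100 is an arbitrary value chosen to leave headroom under the 1,000 limit.

```python
import asyncio

MAX_IN_FLIGHT = 100  # headroom below the 1,000 concurrency limit
semaphore = asyncio.Semaphore(MAX_IN_FLIGHT)

async def call_connector(payload):
    # Placeholder for the real HTTP request (e.g. via httpx or aiohttp).
    await asyncio.sleep(0.01)
    return {"ok": True, "payload": payload}

async def limited_call(payload):
    # Waits here whenever MAX_IN_FLIGHT requests are already active
    async with semaphore:
        return await call_connector(payload)

async def run_all(payloads):
    return await asyncio.gather(*(limited_call(p) for p in payloads))
```

However many tasks you schedule, the semaphore ensures no more than `MAX_IN_FLIGHT` are ever active at once.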

Event delivery from the Trigger API

Once a subscription is created, the delivery of events to your endpoint is not rate limited.

Tray will deliver events as quickly as possible.

If you have set a rate limit on the number of events your endpoint can accept, Tray will retry the delivery of rejected events with exponential backoff.

Thus, you will not lose any events even if your endpoint is rate limited.

Handling limiting gracefully

A basic technique for integrations to gracefully handle limiting is to watch for 429 status codes and build in a retry mechanism.

The retry mechanism should follow an exponential backoff schedule to reduce request volume when necessary. We'd also recommend building some randomness (jitter) into the backoff schedule to avoid a thundering herd effect, where many clients retry at exactly the same moment.
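The approach above can be sketched as follows. The function name and signature are illustrative, not part of Tray's API: `send` is any zero-argument callable returning an object with a `status_code` attribute (such as a `requests` response).

```python
import random
import time

def request_with_backoff(send, max_retries=5, base_delay=0.5):
    """Retry on HTTP 429 using exponential backoff with full jitter."""
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break
        # Exponential backoff: 0.5s, 1s, 2s, ... capped at 30s,
        # with full jitter to spread out retries across clients
        delay = min(base_delay * (2 ** attempt), 30.0)
        time.sleep(random.uniform(0, delay))
    raise RuntimeError("Still rate limited after retries")
```

Full jitter (sleeping a random duration between zero and the current backoff ceiling) desynchronises clients that were all limited at the same instant, which a fixed schedule would not.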