4. Future Consideration

Request to Increase Secure Listener Rate Limit

Related products: Agent Studio
  • December 30, 2025
  • 3 replies
  • 18 views


Your documentation notes that Secure Listeners are rate-limited to “up to 5 requests per second.” This is a constraint for our workflows where we need to notify 100+ users concurrently for a single daily use case.

Under the current rate limit:

  • While we can delay individual webhook calls from a large batch of triggers (a rough sketch of this pacing is at the end of this post), doing so delays the overall communication and degrades the user experience.
  • We can also attempt to load balance webhook calls using multiple listeners to stay within the limit, but this adds unnecessary complexity to our architecture and maintenance.
  • In many cases, we cannot easily control or predict how many webhooks will be fired within a single second, especially for event-driven workflows that generate bursts of activity.

It would be very helpful to significantly increase this rate limit or provide higher-throughput options so we can better support high-volume workflows without additional overhead.
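
For reference, here is a minimal sketch of what the first workaround (pacing calls from a batch) looks like in practice; the listener URL and payload shape are placeholders, not our actual configuration:

```python
import time
import requests

LISTENER_URL = "https://example.invalid/hooks/notify"  # placeholder Secure Listener endpoint
MAX_RPS = 5  # documented Secure Listener limit

def send_batch(payloads):
    """Send webhook calls one by one, pausing so we never exceed MAX_RPS."""
    interval = 1.0 / MAX_RPS
    for payload in payloads:
        requests.post(LISTENER_URL, json=payload, timeout=10)
        time.sleep(interval)

# A 300-user daily burst sent this way takes roughly a minute to drain.
send_batch([{"user_id": i, "message": "Your check-in reminder"} for i in range(300)])
```

The pacing is exactly what stretches a single burst out and delays the last recipients, which is the user-experience problem described above.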

3 replies

  • Community Manager
  • January 6, 2026

Hi @glamb, thank you for submitting this idea! To help us evaluate the right solution, a couple of clarifying questions:

  1. Use case shape: Is this primarily a daily batch-style notification (e.g., one job notifying 100+ users at roughly the same time), or are you seeing frequent bursty traffic throughout the day?

  2. Target throughput: When you say “significantly increase,” what order of magnitude would unblock you? For example, would ~25 rps or ~50 rps per listener be sufficient, or do you anticipate needing higher sustained bursts?

  3. Delivery expectations: Is near-simultaneous delivery critical for the user experience, or is a short fan-out window (e.g., a few seconds) acceptable?

This context will help us determine the right path. Appreciate the detailed write-up -- it’s very helpful!


  • Author
  • Inspiring
  • January 6, 2026

Hi @dhorvath, thanks again for the follow-up. Here’s an overview:

Use case shape  

  • We see both batch and bursty patterns:
    • Hourly “check-in” reminders tied to reservations, where the number of webhook triggers can vary from zero at 2 AM to hundreds firing around 8 AM.
    • Several use cases with a single large burst per day, where a workflow can generate on the order of 50–300 webhook triggers at roughly the same time.
  • Several workflows are event-driven: a single upstream sync or update can cause many items to change at once, each resulting in its own webhook call. This is where we regularly run into the 5 rps listener limit. 
  • We previously designed these workflows with Events in mind, so with Webhooks we may need a different approach, such as queuing calls on our end before triggering (a rough sketch below).
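
To make the event-driven case concrete, here is a rough sketch (endpoint and field names are illustrative only) of what queuing on our end could look like: changes get enqueued as they arrive, and a worker drains them at the listener limit.

```python
import queue
import threading
import time
import requests

LISTENER_URL = "https://example.invalid/hooks/item-updated"  # placeholder
MAX_RPS = 5

pending = queue.Queue()

def on_upstream_sync(changed_items):
    """Producer: a single upstream sync can change hundreds of items at once."""
    for item in changed_items:
        pending.put({"item_id": item["id"], "status": item["status"]})

def drain_worker():
    """Consumer: sends queued webhook calls at a bounded rate to respect the listener limit."""
    while True:
        payload = pending.get()
        requests.post(LISTENER_URL, json=payload, timeout=10)
        pending.task_done()
        time.sleep(1.0 / MAX_RPS)

threading.Thread(target=drain_worker, daemon=True).start()
```

This works, but a large burst still takes on the order of a minute to drain, which is the latency we’d like to avoid.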

Target throughput & delivery  

  • Based on what we’re seeing today, increasing to something in the 25–50 rps per listener range would likely help a lot with typical bursts, and 100+ rps would better cover our larger spikes — but we’d want to validate this against real traffic patterns.
  • A short delivery window (a few seconds, via a queue or similar on your side) is totally fine. The main concern is avoiding stretches into tens of seconds or minutes for a single burst; at the current 5 rps, a 300-trigger burst takes roughly a minute to drain serially, while 50 rps would bring that down to about 6 seconds.

Architecture considerations  

  • If Moveworks increases throughput (say up to ~100 rps, or offers higher burst/queueing per listener), we can keep a simple design.
  • If limits are higher but still constrained, we can adjust our architecture to rate limit or load balance on our side (a rough sketch below), but we’d prefer not to: the workaround adds complexity on our end, and 5 rps is generally too slow for our common burst patterns.
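
For completeness, the load-balancing fallback could look roughly like the sketch below (listener URLs are placeholders); note that each listener would still need its own pacing to stay within 5 rps, which is exactly the extra moving part we’d rather not maintain.

```python
import itertools
import requests

# Placeholder endpoints; each Secure Listener has its own 5 rps budget.
LISTENER_URLS = [
    "https://example.invalid/hooks/notify-a",
    "https://example.invalid/hooks/notify-b",
    "https://example.invalid/hooks/notify-c",
]

def fan_out(payloads):
    """Round-robin webhook calls across listeners to raise effective throughput."""
    targets = itertools.cycle(LISTENER_URLS)
    for payload in payloads:
        requests.post(next(targets), json=payload, timeout=10)
```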

Please let me know if you’d like any more information! Happy to provide more context on our technical architecture privately via email.


Ajay Merchia
  • Community Manager
  • January 8, 2026

Thanks for those details, @glamb. We’re actively working on some platform scalability upgrades so we can accommodate requests like these.

While this isn’t immediately on our roadmap, we’re going to keep it top of mind and revisit it once the infrastructure work it depends on is done.

One thing to keep in mind: a SINGLE webhook call can send MANY different messages. So you could have a scheduled job take over these bursty workflows, or generate the dependent/linked events from a primary event in the interim.
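
As a rough illustration (the payload shape is just an example, not a prescribed Agent Studio schema), a scheduled job could bundle all pending reminders into a single webhook call instead of one call per user:

```python
import requests

LISTENER_URL = "https://example.invalid/hooks/checkin-reminders"  # placeholder

def run_hourly_job(reservations_due):
    """One scheduled webhook call carrying many recipients, instead of one call per user."""
    payload = {
        "notifications": [
            {"user_id": r["user_id"], "message": f"Check-in opens at {r['checkin_time']}"}
            for r in reservations_due
        ]
    }
    # A single request, so the per-listener rate limit stops being the bottleneck.
    requests.post(LISTENER_URL, json=payload, timeout=10)
```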