Implement rate limiting in all API models by albertvillanova · Pull Request #1516 · huggingface/smolagents
Conversation
Implement rate limiting in all API models.
This PR implements rate limiting across all API model classes by adding calls to `self._apply_rate_limit()` before API requests are made.
Changes:
- Implemented `RateLimiter` class (see the sketch after this list)
- Added `RateLimiter` initialization in `ApiModel.__init__`
- Implemented `_apply_rate_limit()` method in `ApiModel`
- Added rate limiting calls in all API model implementations:
  - `LiteLLMModel`
  - `InferenceClientModel`
  - `OpenAIServerModel`
  - `AzureOpenAIServerModel`
  - `AmazonBedrockServerModel`
  - `LiteLLMRouterModel`
- Added calls in both `generate()` and `generate_stream()` methods
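A minimal sketch of how these pieces could fit together, for readers following along. This is not the PR's actual code: the `requests_per_second` parameter name, the `throttle()` method, and the constructor signature are assumptions for illustration.

```python
import threading
import time


class RateLimiter:
    """Minimal blocking rate limiter (illustrative sketch, not the PR's exact code)."""

    def __init__(self, requests_per_second: float | None):
        # None disables throttling, so existing code paths are unaffected.
        self.enabled = requests_per_second is not None
        self.min_interval = 1.0 / requests_per_second if self.enabled else 0.0
        self._lock = threading.Lock()
        self._last_call = 0.0

    def throttle(self) -> None:
        """Block until at least `min_interval` seconds have passed since the last call."""
        if not self.enabled:
            return
        with self._lock:
            elapsed = time.monotonic() - self._last_call
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self._last_call = time.monotonic()


class ApiModel:
    """Base-class stub showing where the limiter hooks in."""

    def __init__(self, requests_per_second: float | None = None, **kwargs):
        self.rate_limiter = RateLimiter(requests_per_second)

    def _apply_rate_limit(self) -> None:
        # Subclasses call this right before each API request,
        # in both generate() and generate_stream().
        self.rate_limiter.throttle()
```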
Follow-up to:
It seems to me that common providers list rate limits more in Requests Per Minute (RPM), cf. Nebius, Anthropic, ...: maybe implement this per minute instead of per second?
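For what it's worth, an RPM-style setting would map directly onto a per-second limiter. Assuming the hypothetical sketch above:

```python
# Hypothetical: a provider's published RPM quota converted to a
# per-second budget for the same limiter.
requests_per_minute = 60
limiter = RateLimiter(requests_per_second=requests_per_minute / 60)
```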
Thank you! It would be nice to mention this rate-limiting option in the docs and to add the argument to some of the examples!
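Such an example might look like the following. `InferenceClientModel` and `model_id` are real smolagents names per the list above, but the `requests_per_second` keyword is an assumption carried over from the earlier sketch, not the PR's confirmed API.

```python
from smolagents import InferenceClientModel

# Hypothetical usage: pass the rate-limit argument when constructing a model.
model = InferenceClientModel(
    model_id="Qwen/Qwen2.5-72B-Instruct",
    requests_per_second=0.5,  # at most one request every two seconds
)
```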
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.