BentoML Documentation
BentoML is a Unified Inference Platform for deploying and scaling AI systems with any model, on any cloud.
Featured examples¶
- Serve large language models with OpenAI-compatible APIs and the vLLM inference backend (see the client sketch after this list).
- Protect your LLM API endpoint from harmful input using Google's safety content moderation model.
- Explore what developers are building with BentoML.
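To make the OpenAI-compatible example above concrete, here is a minimal client sketch. It assumes a BentoML-served vLLM endpoint is already running at http://localhost:3000/v1 and serves a model registered under an ID such as meta-llama/Llama-3.1-8B-Instruct; the base URL, API key, and model ID are placeholders for whatever your deployment exposes, not values fixed by BentoML.

```python
# Minimal sketch: querying a BentoML-served, OpenAI-compatible LLM endpoint.
# The base URL, API key, and model ID below are assumptions about a local
# deployment, not values fixed by BentoML.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # assumed address of the local Bento service
    api_key="na",  # many OpenAI-compatible servers accept any placeholder key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model ID served by vLLM
    messages=[{"role": "user", "content": "What does BentoML do?"}],
)
print(response.choices[0].message.content)
```

Because the endpoint speaks the OpenAI protocol, existing OpenAI client code can typically be pointed at it by changing only the base URL.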
What is BentoML¶
BentoML is a Unified Inference Platform for deploying and scaling AI models with production-grade reliability, all without the complexity of managing infrastructure. It enables your developers to build AI systems 10x faster with custom models, scale efficiently in your cloud, and maintain complete control over security and compliance.
To get started with BentoML:
- Use pip to install the BentoML open-source model serving framework, which is distributed as a Python package on PyPI; Python 3.9+ is recommended (a minimal service sketch follows this list):

  pip install bentoml
- Sign up for BentoCloud to get a free trial.
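Once installed, a service is defined as a plain Python class. The sketch below uses the @bentoml.service and @bentoml.api decorators; the class name and the echo endpoint are illustrative placeholders rather than an official example.

```python
# service.py -- minimal BentoML service sketch (names are illustrative).
import bentoml


@bentoml.service  # marks the class as a deployable BentoML service
class EchoService:
    @bentoml.api  # exposes the method as an HTTP endpoint
    def echo(self, text: str) -> str:
        # Placeholder logic; a real service would load a model and run inference.
        return text
```

Assuming the file is saved as service.py, running `bentoml serve service:EchoService` starts a local HTTP server exposing the echo endpoint; the same class can then be deployed to BentoCloud.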
Stay informed¶
The BentoML team uses the following channels to announce important updates, such as major product releases, and to share tutorials, case studies, and community news.
To receive release notifications, star and watch the BentoML project on GitHub. For release notes and detailed changelogs, see the Releases page.