Providers | liteLLM

Learn how to deploy + call models from different providers on LiteLLM
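
Most providers below follow the same calling convention: prefix the model name with the provider's key and call `litellm.completion`. A minimal sketch of the pattern (the API key value and model name here are placeholders, not recommendations):

```python
import os
from litellm import completion

# Provider credentials are read from environment variables.
os.environ["ANTHROPIC_API_KEY"] = "sk-..."  # placeholder

# The provider prefix (here "anthropic/") tells LiteLLM where to route the call.
response = completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # assumed model name
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```

The same call shape works across the chat providers listed below; only the prefix and credentials change (e.g. `groq/`, `fireworks_ai/`, `friendliai/`).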

🗃️ OpenAI (3 items)

📄️ OpenAI (Text Completion): LiteLLM supports OpenAI text completion models.

📄️ OpenAI-Compatible Endpoints: Selecting openai as the provider routes your request to an OpenAI-compatible endpoint using the upstream OpenAI Python library.

📄️ Azure OpenAI

📄️ Azure AI Studio: LiteLLM supports all models on Azure AI Studio.

📄️ AI/ML API: Getting started with the AI/ML API is simple; follow the steps in this guide to set up your integration.

📄️ VertexAI [Anthropic, Gemini, Model Garden]

🗃️ Google AI Studio (2 items)

📄️ Anthropic: LiteLLM supports all Anthropic models.

📄️ AWS Sagemaker: LiteLLM supports all SageMaker Hugging Face JumpStart models.

🗃️ Bedrock (2 items)

📄️ LiteLLM Proxy (LLM Gateway)
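
A LiteLLM Proxy can itself be addressed as a provider via the `litellm_proxy/` prefix. A hedged sketch, assuming a proxy running locally on port 4000 with a virtual key (the address, key, and model name are placeholders):

```python
from litellm import completion

# Route the request through a LiteLLM Proxy instead of hitting the provider directly.
response = completion(
    model="litellm_proxy/gpt-4o",       # model name as configured on the proxy (assumed)
    api_base="http://localhost:4000",   # assumed proxy address
    api_key="sk-1234",                  # placeholder virtual key
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```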

📄️ Meta Llama

📄️ Mistral AI API: https://docs.mistral.ai/api/

📄️ Codestral API [Mistral AI]: Codestral is available in select code-completion plugins but can also be queried directly. See the documentation for more details.

📄️ Cohere

📄️ Anyscale: https://app.endpoints.anyscale.com/

📄️ Hugging Face: LiteLLM supports running inference across multiple services for models hosted on the Hugging Face Hub.

📄️ Databricks: LiteLLM supports all models on Databricks.

📄️ Deepgram: LiteLLM supports Deepgram's /listen endpoint.

📄️ IBM watsonx.ai: LiteLLM supports all IBM watsonx.ai foundational models and embeddings.

📄️ Predibase: LiteLLM supports all models on Predibase.

📄️ Nvidia NIM: https://docs.api.nvidia.com/nim/reference/

📄️ Nscale (EU Sovereign)

📄️ xAI: https://docs.x.ai/docs

📄️ LM Studio: https://lmstudio.ai/docs/basics/server

📄️ Cerebras: https://inference-docs.cerebras.ai/api-reference/chat-completions

📄️ Volcano Engine (Volcengine): https://www.volcengine.com/docs/82379/1263482

📄️ Triton Inference Server: LiteLLM supports embedding models served on Triton Inference Server.

📄️ Ollama: LiteLLM supports all models from Ollama.
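
Self-hosted backends such as Ollama take an explicit `api_base` pointing at the local server. A minimal sketch, assuming Ollama is running on its default port with a `llama3` model already pulled:

```python
from litellm import completion

# No API key is needed for a local Ollama server; just point api_base at it.
response = completion(
    model="ollama/llama3",              # assumed model tag pulled locally
    api_base="http://localhost:11434",  # Ollama's default address
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```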

📄️ Perplexity AI (pplx-api): https://www.perplexity.ai

📄️ FriendliAI: We support all FriendliAI models; just set friendliai/ as a prefix when sending completion requests.

📄️ Galadriel: https://docs.galadriel.com/api-reference/chat-completion-API

📄️ Topaz

📄️ Groq: https://groq.com/

📄️ 🆕 Github: https://github.com/marketplace/models

📄️ Deepseek: https://deepseek.com/

📄️ Fireworks AI: We support all Fireworks AI models; just set fireworks_ai/ as a prefix when sending completion requests.

📄️ Clarifai: Anthropic, OpenAI, Mistral, Llama, and Gemini LLMs are supported on Clarifai.

📄️ VLLM: LiteLLM supports all models on VLLM.

📄️ Llamafile: LiteLLM supports all models on Llamafile.

📄️ Infinity

📄️ Xinference [Xorbits Inference]: https://inference.readthedocs.io/en/latest/index.html

📄️ Cloudflare Workers AI: https://developers.cloudflare.com/workers-ai/models/text-generation/

📄️ DeepInfra: https://deepinfra.com/

📄️ AI21: LiteLLM supports AI21 models.

📄️ NLP Cloud: LiteLLM supports all LLMs on NLP Cloud.

📄️ Replicate: LiteLLM supports all models on Replicate.

📄️ Together AI: LiteLLM supports all models on Together AI.

📄️ Voyage AI: https://docs.voyageai.com/embeddings/
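
Embedding providers such as Voyage AI and Jina AI follow the same prefix convention, but through `litellm.embedding`. A sketch, assuming a Voyage API key; the key value and model name are placeholders:

```python
import os
from litellm import embedding

os.environ["VOYAGE_API_KEY"] = "pa-..."  # placeholder

# Same provider-prefix convention as chat models, via litellm.embedding.
response = embedding(
    model="voyage/voyage-3-lite",  # assumed model name
    input=["LiteLLM routes embedding calls too."],
)
print(len(response.data[0]["embedding"]))  # dimensionality of the returned vector
```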

📄️ Jina AI: https://jina.ai/embeddings/

📄️ Aleph Alpha: LiteLLM supports all models from Aleph Alpha.

📄️ Baseten: LiteLLM supports any Text-Generation-Inference (TGI) models on Baseten.

📄️ OpenRouter: LiteLLM supports all the text / chat / vision models from OpenRouter.

📄️ Sambanova: https://cloud.sambanova.ai/

📄️ Custom API Server (Custom Format): Call your custom TorchServe / internal LLM APIs via LiteLLM.
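
For internal APIs that don't speak an existing provider's format, LiteLLM exposes a `CustomLLM` handler class that can be registered under your own prefix. A minimal sketch, assuming a current LiteLLM version; all names below are placeholders:

```python
import litellm
from litellm import CustomLLM, completion


class MyCustomLLM(CustomLLM):
    """Placeholder handler that would wrap an internal LLM API."""

    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # In a real handler you would call your internal API here;
        # mock_response short-circuits with a canned reply for this sketch.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hello"}],
            mock_response="Hi from the internal API!",
        )


# Register the handler under a custom provider prefix (the name is arbitrary).
litellm.custom_provider_map = [
    {"provider": "my-internal-llm", "custom_handler": MyCustomLLM()}
]

response = completion(
    model="my-internal-llm/any-model-name",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```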

📄️ Petals: https://github.com/bigscience-workshop/petals

📄️ Snowflake