Model Compatibility - Featherless.ai

What models run on Featherless?

Compatibility

Featherless aims to provide serverless inference for all AI models. We currently support 3,200+ text generation models, all of which are fine-tunes of the base architectures listed in the context-length table below.

Full support for Qwen 2 7B, 14B and 32B, as well as Mistral 3 Small, is coming soon.

We also support fine-tunes of depth up-scaled architectures, such as the Llama 2 11B and Llama 3 15B variants in that table.

HuggingFace Repo Requirements

For models to be loaded on Featherless, we require the Hugging Face repository to contain the model weights in safetensors format at FP16 precision (see Quantization below).
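
As a rough illustration, the sketch below uses the huggingface_hub library to check whether a public repository appears to meet these requirements. The repo id is a placeholder and the check is our own approximation; Featherless's actual ingestion checks are not published.

```python
import json

from huggingface_hub import HfApi, hf_hub_download

# Placeholder repo id; substitute the model you want to check.
REPO_ID = "some-org/some-model"

api = HfApi()
files = api.list_repo_files(REPO_ID)

# Weights must be provided as .safetensors files.
has_safetensors = any(f.endswith(".safetensors") for f in files)

# Weights should be FP16; config.json usually records this as torch_dtype.
config_path = hf_hub_download(REPO_ID, "config.json")
with open(config_path) as fh:
    config = json.load(fh)
is_fp16 = config.get("torch_dtype") == "float16"

print(f"safetensors weights: {has_safetensors}, FP16: {is_fp16}")
```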

Model Availability

Any public model from Hugging Face with 100+ downloads is automatically available for inference on Featherless. Users may request public models with fewer downloads either by email or through the #model-suggestions channel on Discord.

Private models meeting the compatibility requirements outlined here can be run on Featherless by Scale customers who have connected their Hugging Face account. To set this up, visit the private models page in the profile section of the web app.

Context Lengths

All models are served at one of three context lengths: 4k, 8k or 16k; i.e. the total token count of the prompt plus the completion cannot exceed the context length of the model.
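
In other words, the completion budget is whatever the prompt leaves over. Below is a minimal sketch of that arithmetic; the assumption that "16k" means 16,384 tokens is ours, not stated above.

```python
def max_completion_tokens(prompt_tokens: int, context_length: int = 16_384) -> int:
    """Largest completion that still fits: prompt + completion <= context length."""
    remaining = context_length - prompt_tokens
    if remaining <= 0:
        raise ValueError("The prompt alone already exceeds the model's context length.")
    return remaining

# e.g. a 15,000-token prompt on a 16k model leaves room for at most 1,384 completion tokens
print(max_completion_tokens(15_000))  # 1384
```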

The context length a model can be used at depends on its architecture, as shown in the following table.

Context Length   Model Architectures Serving this Length
4k               Llama 2 (7B, 11B, 13B)
8k               Llama 3 (8B, 15B, 70B), Mistral v2 (7B)
16k              Llama 3.1 (8B, 70B), Mistral Nemo (12B), Qwen (1.5-32B, 2-72B)

e.g. since Anthracite’s Magnum is a Qwen 2 72B fine-tune, its context length is 16k.

e.g. since Sao10K’s Fimbulvetr is a fine-tune of Llama 2 11B, its context length is 4k.
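
The table and the two examples above can be encoded as a simple lookup from base architecture to served context length. The architecture labels and the exact token counts (4,096 / 8,192 / 16,384 for 4k / 8k / 16k) below are our own assumptions, not identifiers used by the Featherless API.

```python
# Served context length (in tokens) by base architecture, per the table above.
CONTEXT_BY_ARCHITECTURE = {
    "llama-2": 4_096,        # Llama 2 7B / 11B / 13B
    "llama-3": 8_192,        # Llama 3 8B / 15B / 70B
    "mistral-v2": 8_192,     # Mistral v2 7B
    "llama-3.1": 16_384,     # Llama 3.1 8B / 70B
    "mistral-nemo": 16_384,  # Mistral Nemo 12B
    "qwen": 16_384,          # Qwen 1.5 32B, Qwen 2 72B
}

def context_for_finetune(base_architecture: str) -> int:
    """A fine-tune inherits the context length of its base architecture."""
    return CONTEXT_BY_ARCHITECTURE[base_architecture]

print(context_for_finetune("qwen"))     # Magnum (Qwen 2 72B fine-tune) -> 16384
print(context_for_finetune("llama-2"))  # Fimbulvetr (Llama 2 11B fine-tune) -> 4096
```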

We aim to operate the models at the maximum usable context; however, we continue to make trade-offs to ensure sufficiently low time to first token (TTFT) and a consistent throughput of more than 10 tokens/s for all models.
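
As a sketch of how a client might measure those figures, the snippet below streams a completion through an OpenAI-compatible client and reports time to first token and rough throughput. The base URL, the model id, and the "one streamed chunk ≈ one token" approximation are all assumptions for illustration.

```python
import time

from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible endpoint and model id; substitute your own values.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

start = time.monotonic()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Explain context length in one paragraph."}],
    max_tokens=256,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.monotonic()
        chunks += 1

elapsed = time.monotonic() - start
ttft = (first_token_at - start) if first_token_at else float("nan")
# Treating each streamed chunk as roughly one token is an approximation.
print(f"TTFT: {ttft:.2f}s, throughput: {chunks / (elapsed - ttft):.1f} tok/s")
```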

Quantization

Though our model ingestion pipeline requires weights in safetensors format with FP16 precision, all models are served at FP8 precision (they are quantized before loading). This is a trade-off that balances output quality with inference speed.
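
The quality cost of this step can be illustrated with a toy sketch (not Featherless's actual pipeline): cast FP16 weights to FP8 and back, then look at the rounding error. This uses PyTorch's float8_e4m3fn dtype purely for illustration.

```python
import torch

# A toy "weight" tensor in FP16, the precision required at ingestion.
weights_fp16 = torch.randn(1024, dtype=torch.float16)

# Naive cast to FP8 and back; real serving stacks typically also apply scaling.
weights_fp8 = weights_fp16.to(torch.float8_e4m3fn)
roundtrip = weights_fp8.to(torch.float16)

max_error = (weights_fp16 - roundtrip).abs().max().item()
print(f"max absolute rounding error after FP8 round-trip: {max_error:.4f}")
```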

Last edited: Jan 31, 2025