Hugging Face – Pricing
Leveling up AI collaboration and compute.
Users and organizations already use the Hub as a collaboration platform,
and we're making it easy to launch ML compute directly from the Hub, seamlessly and at scale.
Need support to accelerate AI in your organization? View our Expert Support.
The HF Hub is the central place to explore, experiment, collaborate and build technology with Machine Learning.
Join the open source Machine Learning movement!
Spaces Hardware
Starting at $0
Spaces are one of the most popular ways to share ML applications and demos with the world.
Upgrade your Spaces with our selection of custom on-demand hardware:
Name | CPU | Memory | Accelerator | VRAM | Hourly price |
---|---|---|---|---|---|
CPU Basic | 2 vCPU | 16 GB | - | - | FREE |
CPU Upgrade | 8 vCPU | 32 GB | - | - | $0.03 |
Nvidia T4 - small | 4 vCPU | 15 GB | Nvidia T4 | 16 GB | $0.40 |
Nvidia T4 - medium | 8 vCPU | 30 GB | Nvidia T4 | 16 GB | $0.60 |
1x Nvidia L4 | 8 vCPU | 30 GB | Nvidia L4 | 24 GB | $0.80 |
4x Nvidia L4 | 48 vCPU | 186 GB | Nvidia L4 | 96 GB | $3.80 |
1x Nvidia L40S | 8 vCPU | 62 GB | Nvidia L40S | 48 GB | $1.80 |
4x Nvidia L40S | 48 vCPU | 382 GB | Nvidia L40S | 192 GB | $8.30 |
8x Nvidia L40S | 192 vCPU | 1534 GB | Nvidia L40S | 384 GB | $23.50 |
Nvidia A10G - small | 4 vCPU | 15 GB | Nvidia A10G | 24 GB | $1.00 |
Nvidia A10G - large | 12 vCPU | 46 GB | Nvidia A10G | 24 GB | $1.50 |
2x Nvidia A10G - large | 24 vCPU | 92 GB | Nvidia A10G | 48 GB | $3.00 |
4x Nvidia A10G - large | 48 vCPU | 184 GB | Nvidia A10G | 96 GB | $5.00 |
Nvidia A100 - large | 12 vCPU | 142 GB | Nvidia A100 | 40 GB | $4.00 |
TPU v5e 1x1 | 22 vCPU | 44 GB | Google TPU v5e | 16 GB | $1.38 |
TPU v5e 2x2 | 110 vCPU | 186 GB | Google TPU v5e | 64 GB | $5.50 |
TPU v5e 2x4 | 220 vCPU | 380 GB | Google TPU v5e | 128 GB | $11.00 |
Custom | on demand | on demand | on demand | on demand | on demand |
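To compare options, the hourly rates above translate into a rough monthly bill. A minimal sketch in Python (the rates and tier names are hard-coded from the table above for illustration; always check the live pricing page for current figures):

```python
# Rough monthly cost estimate for Spaces hardware.
# Hourly rates copied from the table above (illustrative, may be outdated).
HOURLY_RATES = {
    "cpu-upgrade": 0.03,
    "t4-small": 0.40,
    "t4-medium": 0.60,
    "l4-1x": 0.80,
    "a10g-small": 1.00,
    "a100-large": 4.00,
}

def monthly_cost(hardware: str, hours_per_day: float = 24, days: int = 30) -> float:
    """Estimate the monthly cost of running a Space on the given hardware."""
    return round(HOURLY_RATES[hardware] * hours_per_day * days, 2)

print(monthly_cost("t4-small"))                      # always-on: 0.40 * 24 * 30 -> 288.0
print(monthly_cost("a10g-small", hours_per_day=8))   # 8 h/day:   1.00 * 8 * 30  -> 240.0
```

Note that Spaces can sleep when idle, so an always-on estimate is an upper bound for demos with intermittent traffic.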
Spaces Persistent Storage
All Spaces get ephemeral storage for free, but you can upgrade and add persistent storage at any time.
Name | Storage | Monthly price |
---|---|---|
Small | 20 GB | $5 |
Medium | 150 GB | $25 |
Large | 1 TB | $100 |
Building something cool as a side project? We also offer community GPU grants.
Inference Endpoints
Starting at $0.033/hour
Inference Endpoints (dedicated) offers a secure, production-ready way to deploy any ML model on dedicated, autoscaling infrastructure, right from the HF Hub.
CPU instances
Provider | Architecture | vCPUs | Memory | Hourly rate |
---|---|---|---|---|
aws | Intel Sapphire Rapids | 1 | 2GB | $0.03 |
aws | Intel Sapphire Rapids | 2 | 4GB | $0.07 |
aws | Intel Sapphire Rapids | 4 | 8GB | $0.13 |
aws | Intel Sapphire Rapids | 8 | 16GB | $0.27 |
azure | Intel Xeon | 1 | 2GB | $0.06 |
azure | Intel Xeon | 2 | 4GB | $0.12 |
azure | Intel Xeon | 4 | 8GB | $0.24 |
azure | Intel Xeon | 8 | 16GB | $0.48 |
gcp | Intel Sapphire Rapids | 1 | 2GB | $0.07 |
gcp | Intel Sapphire Rapids | 2 | 4GB | $0.14 |
gcp | Intel Sapphire Rapids | 4 | 8GB | $0.28 |
gcp | Intel Sapphire Rapids | 8 | 16GB | $0.56 |
Accelerator instances
Provider | Architecture | Topology | Accelerator Memory | Hourly rate |
---|---|---|---|---|
aws | Inf2 Neuron | x1 | 14.5GB | $0.75 |
aws | Inf2 Neuron | x12 | 760GB | $12.00 |
gcp | TPU v5e | 1x1 | 16GB | $1.38 |
gcp | TPU v5e | 2x2 | 64GB | $5.50 |
gcp | TPU v5e | 2x4 | 128GB | $11.00 |
GPU instances
Provider | Architecture | GPUs | GPU Memory | Hourly rate |
---|---|---|---|---|
aws | NVIDIA T4 | 1 | 14GB | $0.50 |
aws | NVIDIA T4 | 4 | 56GB | $3.00 |
aws | NVIDIA L4 | 1 | 24GB | $0.80 |
aws | NVIDIA L4 | 4 | 96GB | $3.80 |
aws | NVIDIA L40S | 1 | 48GB | $1.80 |
aws | NVIDIA L40S | 4 | 192GB | $8.30 |
aws | NVIDIA L40S | 8 | 384GB | $23.50 |
aws | NVIDIA A10G | 1 | 24GB | $1.00 |
aws | NVIDIA A10G | 4 | 96GB | $5.00 |
aws | NVIDIA A100 | 1 | 80GB | $4.00 |
aws | NVIDIA A100 | 2 | 160GB | $8.00 |
aws | NVIDIA A100 | 4 | 320GB | $16.00 |
aws | NVIDIA A100 | 8 | 640GB | $32.00 |
gcp | NVIDIA T4 | 1 | 16GB | $0.50 |
gcp | NVIDIA L4 | 1 | 24GB | $1.00 |
gcp | NVIDIA L4 | 4 | 96GB | $5.00 |
gcp | NVIDIA A100 | 1 | 80GB | $6.00 |
gcp | NVIDIA A100 | 2 | 160GB | $12.00 |
gcp | NVIDIA A100 | 4 | 320GB | $24.00 |
gcp | NVIDIA A100 | 8 | 640GB | $48.00 |
gcp | NVIDIA H100 | 1 | 80GB | $12.50 |
gcp | NVIDIA H100 | 2 | 160GB | $25.00 |
gcp | NVIDIA H100 | 4 | 320GB | $50.00 |
gcp | NVIDIA H100 | 8 | 640GB | $100.00 |
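A dedicated endpoint's bill scales with replica count and hours active: with autoscaling (including scale-to-zero) you only accrue charges while replicas are running. A minimal sketch of that arithmetic, using illustrative rates from the GPU table above (verify against the live pricing page):

```python
# Rough cost estimate for a dedicated Inference Endpoint.
# Billing is per replica-hour; with scale-to-zero you pay only for
# hours a replica is actually up. Rates are illustrative, from the table above.

def endpoint_cost(hourly_rate: float, replicas: int, active_hours: float) -> float:
    """Estimated cost = hourly rate x replicas x hours the replicas are running."""
    return round(hourly_rate * replicas * active_hours, 2)

# One aws NVIDIA A10G ($1.00/h) replica, always on for a 30-day month:
print(endpoint_cost(1.00, replicas=1, active_hours=24 * 30))  # -> 720.0
# Two gcp NVIDIA L4 ($1.00/h) replicas, active 8 h/day for 30 days:
print(endpoint_cost(1.00, replicas=2, active_hours=8 * 30))   # -> 480.0
```

The same formula explains the floor price quoted above: the cheapest CPU instance at $0.033/hour, one replica, always on, comes to roughly $24 per month.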