NVIDIA Triton Inference Server Organization
NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.
This top-level GitHub organization hosts repositories for officially supported backends, including TensorRT, TensorFlow, PyTorch, Python, ONNX Runtime, and OpenVINO; a minimal Python backend sketch follows the list below. The organization also hosts several popular Triton tools, including:
- Model Analyzer: A tool to analyze the runtime performance of a model and provide an optimized model configuration for Triton Inference Server.
- Model Navigator: A tool that automates moving a model from its source format to an optimal format and configuration for deployment on Triton Inference Server.
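To make the Python backend concrete, here is a minimal sketch of a `model.py` that a Python-backend model would provide. The tensor names `INPUT0` and `OUTPUT0` and the element-wise doubling are illustrative assumptions, not part of any shipped model; in practice they must match the model's `config.pbtxt`.

```python
# model.py -- minimal Triton Python backend model (illustrative sketch).
# Triton loads this file from <model_repository>/<model_name>/1/model.py.
# The tensor names INPUT0/OUTPUT0 are assumptions for this example.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        """Called by Triton with a batch of inference requests."""
        responses = []
        for request in requests:
            # Read the named input tensor as a NumPy array.
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            data = in_tensor.as_numpy()

            # Toy computation: double every element.
            out_tensor = pb_utils.Tensor("OUTPUT0", data * 2)
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses
```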
Getting Started
To learn about NVIDIA Triton Inference Server, refer to the Triton developer page and read our Quickstart Guide. Official Triton Docker containers are available from NVIDIA NGC.
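Once a server is running (for example, from the NGC container), requests can be sent with the `tritonclient` Python package. The sketch below is illustrative only: it assumes a server listening on localhost:8000 and a hypothetical model named `simple` with an FP32 input `INPUT0` and output `OUTPUT0`; substitute your own model and tensor names.

```python
# Minimal HTTP inference request using the tritonclient package
# (pip install tritonclient[http]). The model name "simple" and the
# tensor names/shapes below are assumptions for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the input tensor; shape and datatype must match config.pbtxt.
data = np.ones((1, 16), dtype=np.float32)
infer_input = httpclient.InferInput("INPUT0", [1, 16], "FP32")
infer_input.set_data_from_numpy(data)

# Run inference and read back the named output as a NumPy array.
result = client.infer(model_name="simple", inputs=[infer_input])
print(result.as_numpy("OUTPUT0"))
```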
Product Documentation
User documentation on Triton features, APIs, and architecture is located in the server documents on GitHub; a table of contents for the user documentation is provided in the server README file.
Release notes, the support matrix, and license information are available in the NVIDIA Triton Inference Server Documentation.
Examples
End-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located on the NVIDIA Deep Learning Examples page on GitHub. Additional generic examples can be found in the server documents.
FAQ
For technical questions about Triton Inference Server, please consult the Triton FAQ Guide. Information about future support and updates for Triton can be found in the Dynamo FAQ Guide.
Feedback
Share feedback or ask questions about NVIDIA Triton Inference Server by filing a GitHub issue.