GitHub - NVIDIA/ChatRTX: A developer reference project for creating Retrieval Augmented Generation (RAG) chatbots on Windows using TensorRT-LLM

πŸš€ RAG on Windows using TensorRT-LLM, NVIDIA NIM and LlamaIndex πŸ¦™

ChatRTX is a demo app that lets you personalize a GPT large language model (LLM) connected to your own content, such as docs, notes, and photos. Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, NVIDIA NIM microservices, and RTX acceleration, you can query a custom chatbot to quickly get contextually relevant answers. The app also accepts queries by voice. Because everything runs locally on your Windows RTX PC, you get fast and secure results. ChatRTX supports a variety of file formats, including text, PDF, DOC/DOCX, XML, PNG, JPG, and BMP. Simply point the application at the folder containing your files and it will load them into the library in a matter of seconds.
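The folder-ingestion step described above can be sketched in a few lines. This is an illustrative snippet, not ChatRTX code; the extensions are an assumption inferred from the formats listed (text, PDF, DOC/DOCX, XML, PNG, JPG, BMP), and the function name is made up for this example.

```python
from pathlib import Path

# Assumed file extensions, based on the formats named in the README above.
SUPPORTED_EXTENSIONS = {".txt", ".pdf", ".doc", ".docx", ".xml", ".png", ".jpg", ".bmp"}

def collect_supported_files(folder: str) -> list[Path]:
    """Recursively gather files whose extension the app can ingest."""
    return [
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTENSIONS
    ]
```

In the real app this scan feeds the indexing pipeline; here it simply returns the list of ingestible files.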

ChatRTX supports the following AI models:

| Model | Supported GPUs |
| --- | --- |
| LLaMa 3.1 8B NIM | RTX 6000 Ada; RTX 4080, 4090, 5080, 5090 |
| RIVA Parakeet 0.6B NIM (for voice input) | RTX 6000 Ada; RTX 4080, 4090, 5080, 5090 |
| CLIP (for images) | RTX 6000 Ada; RTX 3xxx, RTX 4xxx, RTX 5080, RTX 5090 |
| Whisper Medium (for voice input) | RTX 6000 Ada; RTX 3xxx and RTX 4xxx series GPUs with at least 8 GB of GPU memory |
| Mistral 7B | RTX 6000 Ada; RTX 3xxx and RTX 4xxx series GPUs with at least 8 GB of GPU memory |
| ChatGLM3 6B | RTX 6000 Ada; RTX 3xxx and RTX 4xxx series GPUs with at least 8 GB of GPU memory |
| LLaMa 2 13B | RTX 6000 Ada; RTX 3xxx and RTX 4xxx series GPUs with at least 16 GB of GPU memory |
| Gemma 7B | RTX 6000 Ada; RTX 3xxx and RTX 4xxx series GPUs with at least 16 GB of GPU memory |

The pipeline incorporates the above AI models, TensorRT-LLM, LlamaIndex, and the FAISS vector search library. In the sample application here, the dataset consists of recent articles sourced from NVIDIA GeForce News.

What is RAG? πŸ”

Retrieval-augmented generation (RAG) is a technique for large language models (LLMs) that seeks to enhance prediction accuracy by connecting the LLM to your data during inference. This approach constructs a comprehensive prompt enriched with context, historical data, and recent or relevant knowledge.
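The retrieve-then-prompt flow can be illustrated with a toy sketch. A real pipeline such as ChatRTX uses embeddings, a FAISS index, and a TensorRT-LLM-accelerated model; this example substitutes a simple bag-of-words similarity so it stays self-contained, and all function names are invented for illustration.

```python
import math
from collections import Counter

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query (toy retriever)."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: _cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Enrich the prompt with retrieved context before it reaches the LLM."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```

The key design point is that retrieval happens at inference time: the LLM itself is unchanged, and only the prompt is augmented with the most relevant snippet from your data.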

Repository details

Getting Started

Hardware requirements

This project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.