intel/intel-extension-for-transformers: ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms ⚡

🏃Installation

Quick Install from PyPI

pip install intel-extension-for-transformers

For system requirements and other installation tips, please refer to the Installation Guide.

🌟Introduction

Intel® Extension for Transformers is an innovative toolkit designed to accelerate GenAI/LLM everywhere with the optimal performance of Transformer-based models on various Intel platforms, including Intel Gaudi2, Intel CPU, and Intel GPU. The toolkit provides the following key features and examples:

🔓Validated Hardware

| Hardware | Fine-Tuning (Full) | Fine-Tuning (PEFT) | Inference (8-bit) | Inference (4-bit) |
|---|---|---|---|---|
| Intel Gaudi2 | ✔ | ✔ | WIP (FP8) | - |
| Intel Xeon Scalable Processors | ✔ | ✔ | ✔ (INT8, FP8) | ✔ (INT4, FP4, NF4) |
| Intel Xeon CPU Max Series | ✔ | ✔ | ✔ (INT8, FP8) | ✔ (INT4, FP4, NF4) |
| Intel Data Center GPU Max Series | WIP | WIP | WIP (INT8) | ✔ (INT4) |
| Intel Arc A-Series | - | - | WIP (INT8) | ✔ (INT4) |
| Intel Core Processors | - | ✔ | ✔ (INT8, FP8) | ✔ (INT4, FP4, NF4) |

In the table above, "-" means not applicable or not started yet.

🔓Validated Software

| Software | Fine-Tuning (Full) | Fine-Tuning (PEFT) | Inference (8-bit) | Inference (4-bit) |
|---|---|---|---|---|
| PyTorch | 2.0.1+cpu, 2.0.1a0 (gpu) | 2.0.1+cpu, 2.0.1a0 (gpu) | 2.1.0+cpu, 2.0.1a0 (gpu) | 2.1.0+cpu, 2.0.1a0 (gpu) |
| Intel® Extension for PyTorch | 2.1.0+cpu, 2.0.110+xpu | 2.1.0+cpu, 2.0.110+xpu | 2.1.0+cpu, 2.0.110+xpu | 2.1.0+cpu, 2.0.110+xpu |
| Transformers | 4.35.2 (CPU), 4.31.0 (Intel GPU) | 4.35.2 (CPU), 4.31.0 (Intel GPU) | 4.35.2 (CPU), 4.31.0 (Intel GPU) | 4.35.2 (CPU), 4.31.0 (Intel GPU) |
| Synapse AI | 1.13.0 | 1.13.0 | 1.13.0 | 1.13.0 |
| Gaudi2 driver | 1.13.0-ee32e42 | 1.13.0-ee32e42 | 1.13.0-ee32e42 | 1.13.0-ee32e42 |
| intel-level-zero-gpu | 1.3.26918.50-736~22.04 | 1.3.26918.50-736~22.04 | 1.3.26918.50-736~22.04 | 1.3.26918.50-736~22.04 |

Please refer to the detailed requirements in CPU, Gaudi2, Intel GPU.

🔓Validated OS

Ubuntu 20.04/22.04, CentOS 8.

🌱Getting Started

Chatbot

Below is the sample code to create your chatbot. See more examples.

Serving (OpenAI-compatible RESTful APIs)

NeuralChat provides OpenAI-compatible RESTful APIs for chat, so you can use NeuralChat as a drop-in replacement for the OpenAI APIs. You can start the NeuralChat server either with a shell command or from Python code.

Shell Command

neuralchat_server start --config_file ./server/config/neuralchat.yaml

Python Code

```python
from intel_extension_for_transformers.neural_chat import NeuralChatServerExecutor

server_executor = NeuralChatServerExecutor()
server_executor(config_file="./server/config/neuralchat.yaml", log_file="./neuralchat.log")
```

The NeuralChat service can be accessed through the OpenAI client library, curl commands, and the requests library. See more in NeuralChat.
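For reference, below is a minimal sketch of querying a running NeuralChat server with the OpenAI client library. The base URL, port, and model name are assumptions based on a typical `neuralchat.yaml`; adjust them to match your own server configuration.

```python
# Minimal sketch: query a locally running NeuralChat server via the OpenAI client.
# The base_url, port, and model name are assumptions; match them to your config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # local server, no real key required
response = client.chat.completions.create(
    model="Intel/neural-chat-7b-v3-1",  # assumed model served by your config
    messages=[{"role": "user", "content": "Tell me about Intel Xeon Scalable Processors."}],
)
print(response.choices[0].message.content)
```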

Offline

```python
from intel_extension_for_transformers.neural_chat import build_chatbot

chatbot = build_chatbot()
response = chatbot.predict("Tell me about Intel Xeon Scalable Processors.")
```

Transformers-based extension APIs

Below is the sample code to use the extended Transformers APIs. See more examples.

INT4 Inference (CPU)

We encourage you to install NeuralSpeed to get the latest features (e.g., GGUF support) for LLM low-bit inference on CPUs. You may also want to use v1.3 without NeuralSpeed by following this document.

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs)
```
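`generate` returns token IDs; a short follow-up to turn them back into text with the tokenizer created above:

```python
# Decode the generated token IDs back into text (standard Hugging Face tokenizer API).
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```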

You can also load a GGUF format model from Hugging Face; only the Q4_0/Q5_0/Q8_0 GGUF formats are supported for now.

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

# Specify the GGUF repo on Hugging Face
model_name = "TheBloke/Llama-2-7B-Chat-GGUF"
# Download the specific GGUF model file from the above repo
gguf_file = "llama-2-7b-chat.Q4_0.gguf"
# Make sure you have been granted access to this model on Hugging Face
tokenizer_name = "meta-llama/Llama-2-7b-chat-hf"

prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

model = AutoModelForCausalLM.from_pretrained(model_name, gguf_file=gguf_file)
outputs = model.generate(inputs)
```

You can also load a PyTorch model from ModelScope.

Note: this requires the modelscope package.

```python
from transformers import TextStreamer
from modelscope import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "qwen/Qwen-7B"  # ModelScope model_id or local model
prompt = "Once upon a time, there existed a little girl,"

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, model_hub="modelscope")
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```

You can also load low-bit models quantized by the GPTQ/AWQ/RTN/AutoRound algorithms.

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, GPTQConfig

# Hugging Face GPTQ/AWQ model, or a local quantized model
model_name = "MODEL_NAME_OR_PATH"
prompt = "Once upon a time, a little girl"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
outputs = model.generate(inputs)
```
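The snippet above loads a checkpoint that has already been quantized. The imported GPTQConfig can also be used to quantize a full-precision model at load time; the sketch below assumes GPTQConfig accepts `bits` and a calibration `dataset` argument in the same spirit as the Hugging Face GPTQConfig, so check the library documentation for the exact arguments.

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, GPTQConfig

# Hypothetical full-precision model path; the bits/dataset arguments are assumptions.
model_name = "MODEL_NAME_OR_PATH"
quantization_config = GPTQConfig(bits=4, dataset="NeelNanda/pile-10k")  # assumed arguments

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quantization_config,
    trust_remote_code=True,
)
```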

INT4 Inference (GPU)

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers.modeling import AutoModelForCausalLM

device_map = "xpu"
model_name = "Qwen/Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
prompt = "Once upon a time, there existed a little girl,"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(device_map)

model = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, device_map=device_map, load_in_4bit=True
)

model = ipex.optimize_transformers(
    model, inplace=True, dtype=torch.float16, quantization_config=True, device=device_map
)

output = model.generate(inputs)
```

Note: Please refer to the example and script for more details.

Langchain-based extension APIs

Below is the sample code to use the extended Langchain APIs. See more examples.

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.chains import RetrievalQA
from langchain_core.vectorstores import VectorStoreRetriever
from intel_extension_for_transformers.langchain.vectorstores import Chroma

retriever = VectorStoreRetriever(vectorstore=Chroma(...))
retrievalQA = RetrievalQA.from_llm(llm=HuggingFacePipeline(...), retriever=retriever)
```
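The `Chroma(...)` and `HuggingFacePipeline(...)` placeholders above are left for you to fill in. Below is a minimal end-to-end sketch, assuming a hypothetical in-memory document set and the upstream LangChain constructors (`Chroma.from_documents`, `HuggingFaceEmbeddings`, `HuggingFacePipeline.from_model_id`).

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.chains import RetrievalQA
from langchain_core.documents import Document
from langchain_core.vectorstores import VectorStoreRetriever
from intel_extension_for_transformers.langchain.vectorstores import Chroma

# Hypothetical corpus; in practice, load and split your own documents.
docs = [Document(page_content="Intel Xeon Scalable Processors support advanced matrix extensions (AMX).")]

vectorstore = Chroma.from_documents(documents=docs, embedding=HuggingFaceEmbeddings())
retriever = VectorStoreRetriever(vectorstore=vectorstore)
llm = HuggingFacePipeline.from_model_id(model_id="Intel/neural-chat-7b-v3-1", task="text-generation")

retrievalQA = RetrievalQA.from_llm(llm=llm, retriever=retriever)
print(retrievalQA.run("What do Intel Xeon Scalable Processors support?"))
```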

🎯Validated Models

You can access the validated models, accuracy and performance from Release data or Medium blog.

📖Documentation

OVERVIEW: NeuralChat, Neural Speed
NEURALCHAT: Chatbot on Intel CPU, Chatbot on Intel GPU, Chatbot on Gaudi, Chatbot on Client, More Notebooks
NEURAL SPEED: Neural Speed, Streaming LLM, Low Precision Kernels, Tensor Parallelism
LLM COMPRESSION: SmoothQuant (INT8), Weight-only Quantization (INT4/FP4/NF4/INT8), QLoRA on CPU
GENERAL COMPRESSION: Quantization, Pruning, Distillation, Orchestration, Data Augmentation, Export, Metrics, Objectives, Pipeline, Length Adaptive, Early Exit
TUTORIALS & RESULTS: Tutorials, LLM List, General Model List, Model Performance

🙌Demo

📃Selected Publications/Events

View Full Publication List

Additional Content

Acknowledgements

💁Collaborations

We welcome any interesting ideas on model compression techniques and LLM-based chatbot development! Feel free to reach out to us; we look forward to collaborating with you on Intel Extension for Transformers!