Add AWQ quantization inference support (#1019) by Narsil · Pull Request #1054 · huggingface/text-generation-inference
Add AWQ quantization inference support
Fixes #781
This PR (partially) adds support for AWQ quantization for inference. More information on AWQ is available here. In general, AWQ is faster and more accurate than GPTQ, which is currently supported by TGI.
This PR installs the 4-bit GEMM custom CUDA kernels released by the AWQ authors (in requirements.txt, just a one-line change).
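For intuition, the sketch below shows the arithmetic those kernels implement: group-wise 4-bit dequantization with one scale and zero point per group of 128 input rows (the w4-g128 scheme). The function name and the flat uint8 layout are illustrative assumptions only; the real kernels pack eight 4-bit codes per int32 in an interleaved order and fuse dequantization into the GEMM on the GPU.

```python
# Illustrative group-wise 4-bit dequantization in NumPy.
# HYPOTHETICAL layout: real AWQ kernels pack eight 4-bit codes per int32
# in an interleaved order and run fused on the GPU; this only shows the math.
import numpy as np

def dequantize_w4(qweight, scales, zeros, group_size=128):
    """qweight: (in_features, out_features) uint8 holding 4-bit codes 0..15.
    scales/zeros: (in_features // group_size, out_features) per-group params."""
    in_features, out_features = qweight.shape
    groups = qweight.reshape(in_features // group_size, group_size, out_features)
    # w = (code - zero_point) * scale, applied per group of `group_size` rows
    w = (groups.astype(np.float32) - zeros[:, None, :]) * scales[:, None, :]
    return w.reshape(in_features, out_features)

# Tiny usage example with random data (2 groups of 128 rows, 8 output columns)
qw = np.random.randint(0, 16, size=(256, 8), dtype=np.uint8)
sc = np.random.rand(2, 8).astype(np.float32)
zp = np.random.randint(0, 16, size=(2, 8)).astype(np.float32)
w = dequantize_w4(qw, sc, zp)
```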
A quick way to test this PR is to bring up TGI as follows:

```shell
text-generation-server download-weights abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq

text-generation-launcher \
    --huggingface-hub-cache ~/.cache/huggingface/hub/ \
    --model-id abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq \
    --trust-remote-code --port 8080 \
    --max-input-length 2048 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 \
    --quantize awq
```
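Once the server is up, a request against TGI's `/generate` endpoint exercises the AWQ path end to end (the prompt and parameters below are arbitrary examples):

```shell
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "def fibonacci(n):", "parameters": {"max_new_tokens": 64}}'
```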
Please note:
- This PR was tested with FlashAttention v2 and vLLM.
- This PR adds support for AWQ inference, not for quantizing models. Quantization needs to be done outside of TGI; instructions are here (a hedged sketch follows this list).
- This PR only adds support for FlashLlama models for now.
- Multi-GPU setups have not been tested.
- No integration tests have been added so far; I will add them later if the maintainers are interested in this change.
- This PR can be tested on any of the models released here.
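For reference, quantizing a Llama model with the llm-awq repo looked roughly like the following at the time of writing. The entry point and flags are taken from the llm-awq README and may have changed, so treat this as a sketch and verify against the instructions linked above:

```shell
# Sketch only: first search for AWQ scales, then export real-quantized weights.
# Flags per the llm-awq README; verify against the linked instructions before use.
python -m awq.entry --model_path meta-llama/Llama-2-7b-chat-hf \
    --w_bit 4 --q_group_size 128 \
    --run_awq --dump_awq awq_cache/llama-2-7b-w4-g128.pt

python -m awq.entry --model_path meta-llama/Llama-2-7b-chat-hf \
    --w_bit 4 --q_group_size 128 \
    --load_awq awq_cache/llama-2-7b-w4-g128.pt \
    --q_backend real --dump_quant quant_cache/llama-2-7b-w4-g128-awq.pt
```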
Please refer to the linked issue for benchmarks comparing abhinavkulkarni/meta-llama-Llama-2-7b-chat-hf-w4-g128-awq with TheBloke/Llama-2-7b-Chat-GPTQ.
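To reproduce numbers locally, TGI ships a benchmarking tool; an invocation might look like the sketch below. The flags shown are assumptions, so check `text-generation-benchmark --help` for your version, and note the tool expects an already-running text-generation-server shard:

```shell
# Hypothetical benchmark run against a running server shard; flag names are
# assumptions -- consult `text-generation-benchmark --help` for the exact set.
text-generation-benchmark \
    --tokenizer-name abhinavkulkarni/meta-llama-Llama-2-7b-chat-hf-w4-g128-awq \
    --sequence-length 512 \
    --decode-length 64
```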
Please note that AWQ has released faster (and, in the case of Llama, fused) kernels for 4-bit GEMM, currently at the top of the main branch at https://github.com/mit-han-lab/llm-awq, but this PR uses an older commit that has been tested to work. We can switch to the latest commit later on.
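For illustration, pinning the kernels to a known-good commit in requirements.txt can be done with a pip direct git reference like the line below. The package name, commit placeholder, and subdirectory are hypothetical, not the exact line in this PR:

```
# requirements.txt (illustrative): pin the AWQ CUDA kernels to a tested commit
awq-inference-engine @ git+https://github.com/mit-han-lab/llm-awq.git@<tested-commit-sha>#subdirectory=awq/kernels
```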
Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.