Quantization Fundamentals with Hugging Face


About this course

Generative AI models, like large language models, often exceed the capabilities of consumer-grade hardware and are expensive to run. Compressing models through methods such as quantization makes them smaller, faster, and more accessible, allowing them to run on a wide variety of devices, including smartphones, personal computers, and edge devices, while keeping performance degradation to a minimum.
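As a brief taste of the kind of technique the course covers, here is a minimal sketch of 8-bit symmetric, per-tensor linear quantization in plain PyTorch. The function names and the random weight tensor are illustrative assumptions, not the course's own code.

```python
import torch

def quantize_int8(weights: torch.Tensor):
    # Map float values to int8 using a single per-tensor scale.
    scale = weights.abs().max() / 127.0
    q = torch.clamp(torch.round(weights / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor):
    # Recover approximate float values; the difference is the quantization error.
    return q.to(torch.float32) * scale

w = torch.randn(4, 4)              # stand-in for a model weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print("max abs error:", (w - w_hat).abs().max().item())
```

Libraries in the Hugging Face ecosystem wrap this kind of logic so that full models can be quantized with a few lines of code instead of managing scales by hand, which is what the course walks through in practice.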


By the end of this course, you will have a foundation in quantization techniques and be able to apply them to compress and optimize your own generative AI models, making them more accessible and efficient.

Who should join?

This course is an introduction to the fundamental concepts of quantization, designed for learners who have a basic understanding of machine learning concepts, some experience with PyTorch, and an interest in model quantization for generative AI.

Course Outline

7 Lessons · 3 Code Examples

Instructors

Younes Belkada

Machine Learning Engineer at Hugging Face

Marc Sun

Machine Learning Engineer at Hugging Face

