Advanced Quantization Techniques for Large Language Models
With Nayan Saxena
Duration: 1h 10m
Skill level: Advanced
Released: 1/15/2026
Course details
Discover cutting-edge quantization techniques for large language models, focusing on the algorithms and optimization strategies that deliver the best performance. Instructor Nayan Saxena begins with the mathematical foundations before progressing through advanced methods, including GPTQ, AWQ, and SmoothQuant, with hands-on examples in Google Colab. Along the way, pick up quick tips for mastering critical concepts such as precision formats, calibration strategies, and evaluation methodologies. Combining theoretical principles with practical application, this course equips you with in-demand skills to significantly reduce model size and accelerate inference while maintaining output quality.
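The idea underlying all of these methods is mapping floating-point weights to low-precision integers. As a minimal sketch (not taken from the course; the function names are illustrative, and methods like GPTQ and AWQ are far more sophisticated), per-tensor symmetric int8 quantization can be written as:

```python
def quantize_int8(weights):
    """Map float weights to int8 using a per-tensor symmetric scale."""
    # Choose the scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Each recovered value differs from the original by at most one
# quantization step (the scale), which is the rounding error incurred.
```

Advanced methods refine exactly this step: calibration data chooses better scales, and algorithms such as GPTQ adjust the remaining weights to compensate for the rounding error introduced here.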
Earn a shareable certificate
Share what you've learned and stand out in your industry with a certificate showcasing the knowledge you gained from this course.
LinkedIn Learning
Certificate of Completion
- Showcase on your LinkedIn profile under the "Licenses and Certifications" section
- Download or print as a PDF to share with others
- Share as an image online to demonstrate your skill
What’s included
- Learn on the go: access on tablet and phone