quantize_qat — PyTorch 2.7 documentation
class torch.ao.quantization.quantize_qat(model, run_fn, run_args, inplace=False)[source][source]¶
Performs quantization-aware training and returns a quantized model.
Parameters
- model – input float model to be trained with fake quantization and then converted
- run_fn – a function used to train the prepared model; it can be a function that simply runs the prepared model, or a full training loop
- run_args – positional arguments passed to run_fn (after the model)
- inplace – carry out the model transformations in place; the original module is mutated
Returns
Quantized model.
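A minimal eager-mode sketch of how this API might be used. `TinyNet` and `train_loop` are hypothetical names introduced for illustration; the example assumes the `fbgemm` backend is available and uses `QuantStub`/`DeQuantStub` to mark the region to quantize, as eager-mode quantization requires:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TinyNet(nn.Module):
    """Hypothetical toy model; stubs mark where tensors enter/leave int8."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.fc = nn.Linear(4, 2)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

def train_loop(model, data):
    # run_fn: a short fake-quantized training loop (assumed optimizer/loss)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for x, y in data:
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

model = TinyNet()
model.train()
# Attach a QAT qconfig before calling quantize_qat (assumes fbgemm backend)
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

# Fake training data for the sketch
data = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(3)]

# Prepares the model for QAT, runs run_fn(model, *run_args), then converts
qmodel = tq.quantize_qat(model, train_loop, [data])
```

After the call, `qmodel` contains quantized modules (e.g. a quantized `Linear`) and runs integer inference on float inputs passed through the stubs.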