Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types such as 8-bit integers (int8). This makes it possible to load larger models that would normally not fit into memory and to speed up inference.

Learn how to quantize models in the Quantization guide.

PipelineQuantizationConfig

class diffusers.PipelineQuantizationConfig

( quant_backend: str = None, quant_kwargs: typing.Dict[str, typing.Union[str, float, int, dict]] = None, components_to_quantize: typing.Union[typing.List[str], str, NoneType] = None, quant_mapping: typing.Dict[str, typing.Union[diffusers.quantizers.quantization_config.QuantizationConfigMixin, ForwardRef('TransformersQuantConfigMixin')]] = None )

Configuration class used to apply quantization on the fly when a pipeline is loaded with from_pretrained().
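For illustration, here is a minimal sketch of on-the-fly pipeline quantization. The bitsandbytes_4bit backend string, the quant_kwargs, and the Flux checkpoint are example choices (bitsandbytes must be installed for this backend); adapt them to your setup.

```python
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig

# Quantize only the listed pipeline components; the others are loaded in torch_dtype.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```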

BitsAndBytesConfig

class diffusers.BitsAndBytesConfig

( load_in_8bit = False, load_in_4bit = False, llm_int8_threshold = 6.0, llm_int8_skip_modules = None, llm_int8_enable_fp32_cpu_offload = False, llm_int8_has_fp16_weight = False, bnb_4bit_compute_dtype = None, bnb_4bit_quant_type = 'fp4', bnb_4bit_use_double_quant = False, bnb_4bit_quant_storage = None, **kwargs )

This is a wrapper class for all the attributes and features that can be adjusted for a model loaded with bitsandbytes.

This class replaces the load_in_8bit and load_in_4bit arguments; note that the two options are mutually exclusive.

Currently it only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, more arguments will be added to this class.
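As a minimal sketch of how this config is passed to from_pretrained() (assuming bitsandbytes is installed; the checkpoint is only an example):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit NF4 quantization with bfloat16 compute, one common configuration
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```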

is_quantizable

Returns True if the model is quantizable, False otherwise.

post_init

Safety checker that arguments are correct; also replaces some NoneType arguments with their default values.

quantization_method

This method returns the quantization method used for the model. If the model is not quantizable, it returns None.

to_diff_dict

( ) → Dict[str, Any]

Removes all attributes from the config that are equal to the default config attributes (for better readability) and serializes the result to a Python dictionary.

Returns a dictionary of all the attributes that make up this configuration instance, excluding the default values.
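A small illustrative example; the exact contents of the returned dictionary depend on the config:

```python
from diffusers import BitsAndBytesConfig

config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# Only attributes that differ from the defaults are included,
# unlike to_dict(), which serializes everything.
print(config.to_diff_dict())
```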

GGUFQuantizationConfig

class diffusers.GGUFQuantizationConfig

( compute_dtype: typing.Optional[ForwardRef('torch.dtype')] = None )

This is a config class for GGUF quantization techniques.
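A minimal sketch of loading a single-file GGUF checkpoint with this config (assumes the gguf package is installed; the checkpoint URL is only an example):

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Example GGUF file; any single-file GGUF checkpoint for the model works.
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```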

QuantoConfig

class diffusers.QuantoConfig

( weights_dtype: str = 'int8', modules_to_not_convert: typing.Optional[typing.List[str]] = None, **kwargs )

This is a wrapper class for all the attributes and features that can be adjusted for a model loaded with quanto.

Parameters

modules_to_not_convert (list, optional, defaults to None): The list of modules not to quantize, useful for models that explicitly require some modules to be kept in their original precision (e.g. the Whisper encoder, the Llava encoder, or the Mixtral gate layers).

post_init

Safety checker that arguments are correct.
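A minimal sketch of applying this config (assumes optimum-quanto is installed; the checkpoint is only an example):

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# int8 weight-only quantization via quanto
quant_config = QuantoConfig(weights_dtype="int8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```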

TorchAoConfig

class diffusers.TorchAoConfig

( quant_type: typing.Union[str, ForwardRef('AOBaseConfig')], modules_to_not_convert: typing.Optional[typing.List[str]] = None, **kwargs )

This is a config class for torchao quantization/sparsity techniques.

Example:

import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig

# quant_type can be a torchao AOBaseConfig instance ...
quantization_config = TorchAoConfig(Int8WeightOnlyConfig())
# ... or an equivalent shorthand string.
quantization_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/Flux.1-Dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)

from_dict

( config_dict, return_unused_kwargs = False, **kwargs )

Create a configuration from a dictionary.

get_apply_tensor_subclass

Create the appropriate quantization method based on the configuration.

to_dict

Convert the configuration to a dictionary.
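An illustrative sketch of from_dict and to_dict (assumes torchao is installed):

```python
from diffusers import TorchAoConfig

# Build a config from a plain dictionary of constructor arguments,
# e.g. one read back from a saved quantization config.
config = TorchAoConfig.from_dict({"quant_type": "int8wo"})

# Serialize the configuration back to a plain dictionary.
print(config.to_dict())
```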

DiffusersQuantizer

class diffusers.DiffusersQuantizer

( quantization_config: QuantizationConfigMixin, **kwargs )

Abstract base class for HuggingFace quantizers. It currently supports quantizing Diffusers models for inference. This class is only used by diffusers.models.modeling_utils.ModelMixin.from_pretrained and cannot easily be used outside the scope of that method yet.

Attributes

quantization_config (diffusers.quantizers.quantization_config.QuantizationConfigMixin): The quantization config that defines the quantization parameters of the model you want to quantize.

modules_to_not_convert (List[str], optional): The list of module names to not convert when quantizing the model.

required_packages (List[str], optional): The list of required pip packages to install prior to using the quantizer.

requires_calibration (bool): Whether the quantization method requires calibrating the model before using it.

adjust_max_memory

( max_memory: typing.Dict[str, typing.Union[int, str]] )

Adjust the max_memory argument for infer_auto_device_map() if extra memory is needed for quantization.

adjust_target_dtype

( torch_dtype: torch.dtype )

Override this method if you want to adjust the target_dtype variable used in from_pretrained to compute the device_map, in case the device_map is a str. E.g. for bitsandbytes we force-set target_dtype to torch.int8, and for 4-bit we pass a custom enum, accelerate.CustomDtype.int4.

check_if_quantized_param

( model: ModelMixin, param_value: torch.Tensor, param_name: str, state_dict: typing.Dict[str, typing.Any], **kwargs )

Checks if a loaded state_dict component is part of a quantized parameter and performs some validation. Only defined for quantization methods that require creating new parameters for quantization.

check_quantized_param_shape

( *args **kwargs )

Checks if the quantized parameter has the expected shape.

create_quantized_param

( *args **kwargs )

Takes the needed components from the state_dict and creates a quantized parameter.

dequantize

Potentially dequantizes the model to retrieve the original model, with some loss in accuracy / performance. Note that not all quantization schemes support this.

get_cuda_warm_up_factor

The factor to be used in caching_allocator_warmup to get the number of bytes to pre-allocate to warm up CUDA. A factor of 2 means we allocate all the bytes in the empty model (since we allocate in fp16), a factor of 4 means we allocate half the memory of the weights residing in the empty model, and so on.

get_special_dtypes_update

( model, torch_dtype: torch.dtype )

Returns the dtypes for modules that are not quantized, used to compute the device_map in case one passes a str as a device_map. The method uses the modules_to_not_convert that is modified in _process_model_before_weight_loading. Diffusers models don't have any modules_to_not_convert attributes yet, but this can change soon in the future.

postprocess_model

( model: ModelMixin, **kwargs )

Post-processes the model after the weights have been loaded. Make sure to override the abstract method _process_model_after_weight_loading.

preprocess_model

( model: ModelMixin, **kwargs )

Sets model attributes and/or converts the model before the weights are loaded. At this point the model should be initialized on the meta device, so you can freely manipulate the skeleton of the model in order to replace modules in-place. Make sure to override the abstract method _process_model_before_weight_loading.

update_device_map

( device_map: typing.Optional[typing.Dict[str, typing.Any]] )

Override this method if you want to override the existing device_map with a new one. E.g. for bitsandbytes, since accelerate is a hard requirement, if no device_map is passed, the device_map is set to "auto".

update_missing_keys

( model, missing_keys: typing.List[str], prefix: str )

Override this method if you want to adjust the missing_keys.

update_torch_dtype

( torch_dtype: torch.dtype )

Some quantization methods require explicitly setting the dtype of the model to a target dtype. You need to override this method if you want to make sure that behavior is preserved.

validate_environment

This method is used to check for potential conflicts with arguments that are passed in from_pretrained. You need to define it for all future quantizers that are integrated with diffusers. If no explicit checks are needed, simply return nothing.
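Putting these hooks together, a skeletal custom quantizer might look like the following sketch. It is illustrative only: MyQuantizer and its behavior are hypothetical, and depending on the installed diffusers version the serializability and trainability members of the base class may be plain methods rather than properties.

```python
import torch

from diffusers.quantizers.base import DiffusersQuantizer


class MyQuantizer(DiffusersQuantizer):
    """Hypothetical quantizer sketch showing which hooks a subclass typically overrides."""

    requires_calibration = False
    required_packages = []  # pip packages that validate_environment should check for

    def validate_environment(self, *args, **kwargs):
        # Check for conflicts with arguments passed to from_pretrained;
        # return nothing if no explicit checks are needed.
        if kwargs.get("device_map") == "cpu":
            raise ValueError("MyQuantizer does not support CPU placement.")

    def update_torch_dtype(self, torch_dtype):
        # Force a compute dtype when the user did not set one.
        return torch_dtype if torch_dtype is not None else torch.float16

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model is still on the meta device here, so module skeletons
        # can be swapped in-place before the real weights are loaded.
        self.modules_to_not_convert = getattr(self.quantization_config, "modules_to_not_convert", None)
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Any post-loading fix-ups go here.
        return model

    @property
    def is_serializable(self):
        return False

    @property
    def is_trainable(self):
        return False
```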
