llmcompressor.modifiers.quantization
Modules:

- calibration
- gptq
- group_size_validation – Early validation for divisibility requirements by quantization strategy.
- quantization
Classes:

- GPTQModifier – Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a Hessian matrix used to quantize model weights.
- QuantizationMixin – Mixin which enables a Modifier to act as a quantization config, attaching observers, calibration hooks, and compression wrappers.
- QuantizationModifier – Enables post training quantization (PTQ) and quantization aware training (QAT) for a given module or its submodules.
GPTQModifier
Bases: Modifier, QuantizationMixin
Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a Hessian matrix, which is then used to determine optimal quantization values and orderings for the model weights.
Sample yaml:

    test_stage:
      obcq_modifiers:
        GPTQModifier:
          block_size: 128
          dampening_frac: 0.001
          offload_hessians: False
          actorder: static
          config_groups:
            group_0:
              targets:
                - "Linear"
              input_activations: null
              output_activations: null
              weights:
                num_bits: 8
                type: "int"
                symmetric: true
                strategy: group
                group_size: 128
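For intuition, the weights scheme above (8-bit, symmetric, group strategy with group_size 128) corresponds to computing one scale per group of 128 columns and rounding each weight onto a signed 8-bit grid. A self-contained sketch of that arithmetic (illustrative only; the actual observers and kernels live in the library):

```python
import numpy as np

def fake_quantize_int8_groupwise(w: np.ndarray, group_size: int = 128) -> np.ndarray:
    """Symmetric int8 group-wise fake quantization (illustrative only)."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0, "columns must divide evenly into groups"
    wq = np.empty_like(w)
    for start in range(0, in_features, group_size):
        group = w[:, start:start + group_size]
        # one scale per output row per group; symmetric => zero-point is 0
        scale = np.abs(group).max(axis=1, keepdims=True) / 127.0
        scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
        q = np.clip(np.round(group / scale), -128, 127)
        wq[:, start:start + group_size] = q * scale
    return wq

w = np.random.randn(4, 256).astype(np.float32)
wq = fake_quantize_int8_groupwise(w)  # same shape, small round-off error
```

Each group's error is bounded by half its scale, which is why finer group sizes recover more accuracy at the cost of more stored scales.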
Lifecycle:

- on_initialize
  - apply config to model
- on_start
  - add activation calibration hooks
  - add gptq weight calibration hooks
- on_sequential_epoch_end
  - quantize_weight
- on_finalize
  - remove_hooks()
  - model.apply(freeze_module_quantization)
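The quantize_weight step follows the GPTQ/OBQ update: columns are quantized one at a time, and each column's rounding error is redistributed onto the not-yet-quantized columns through the inverse Hessian. A simplified sketch (no activation ordering, no block_size batching, a single per-tensor scale; the helper name is illustrative, not the library's API):

```python
import numpy as np

def gptq_quantize(W: np.ndarray, H: np.ndarray, scale: float,
                  dampening_frac: float = 0.01) -> np.ndarray:
    """Simplified one-pass GPTQ: quantize columns left-to-right and push
    each column's rounding error onto later columns via the inverse Hessian."""
    d = H.shape[0]
    # dampen H for numerical stability, as a fraction of the mean diagonal
    H = H + dampening_frac * np.mean(np.diag(H)) * np.eye(d)
    Hinv = np.linalg.inv(H)
    Q = W.copy()
    for i in range(d):
        col = Q[:, i]
        q = np.clip(np.round(col / scale), -128, 127) * scale  # int8 fake-quant
        err = (col - q) / Hinv[i, i]
        Q[:, i] = q
        if i + 1 < d:
            # compensate the remaining (unquantized) columns
            Q[:, i + 1:] -= np.outer(err, Hinv[i, i + 1:])
    return Q
```

The real implementation works in blocks of block_size columns and uses a Cholesky factorization rather than a dense inverse, but the error-propagation idea is the same.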
Parameters:

- sequential_targets – list of layer names to compress during GPTQ, or 'ALL' to compress every layer in the model
- block_size – used to determine the number of columns to compress in one pass
- dampening_frac – amount of dampening to apply to H, as a fraction of the diagonal norm
- actorder – order in which weight columns are quantized. Defaults to "static" activation ordering, which achieves the best accuracy recovery with no runtime cost. For more information, see https://github.com/vllm-project/vllm/pull/8135
- offload_hessians – set to True for decreased memory usage but increased runtime
- config_groups – dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.
- targets – list of layer names to quantize if a scheme is provided. Defaults to Linear layers
- ignore – optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to an empty list.
- scheme – a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets, for example W8A8: ['Linear'] for 8-bit weights and activations.
- kv_cache_scheme – optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that might be cached) and quantizes those outputs, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.
Methods:

- calibrate_module – Calibration hook used to accumulate the Hessian of the input to the module
- compress_modules – Quantize modules which have been calibrated
- on_end – Finish calibrating by removing observers and calibration hooks
- on_finalize – Disable the quantization observers used by the OBCQ algorithm
- on_initialize – Initialize and run the GPTQ algorithm on the current state
calibrate_module

Calibration hook used to accumulate the Hessian of the input to the module

Parameters:

- module (Module) – module being calibrated
- args (Tuple[Tensor, ...]) – inputs to the module, the first element of which is the canonical input
- _output (Tensor) – uncompressed module output, unused

Source code in llmcompressor/modifiers/quantization/gptq/base.py
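Conceptually, the hook maintains a running estimate of H = (2/n) Σ xᵢxᵢᵀ over all calibration inputs. A minimal sketch of that accumulation (the class name and incremental-rescaling scheme are illustrative; the real hook also manages devices, dtypes, and optional Hessian offloading):

```python
import numpy as np

class HessianAccumulator:
    """Running estimate of H = 2/n * sum_i x_i x_i^T over calibration inputs."""
    def __init__(self, dim: int):
        self.H = np.zeros((dim, dim))
        self.n = 0

    def update(self, X: np.ndarray) -> np.ndarray:
        """Fold a batch of module inputs X with shape (batch, dim) into H."""
        b = X.shape[0]
        # rescale the previous estimate to the new sample count...
        self.H *= self.n / (self.n + b)
        self.n += b
        # ...then add the new batch's contribution
        self.H += (2.0 / self.n) * (X.T @ X)
        return self.H
```

Accumulating incrementally like this means only one dim × dim matrix is kept per module, regardless of how many calibration samples are seen.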
compress_modules
Quantize modules which have been calibrated
Source code in llmcompressor/modifiers/quantization/gptq/base.py
on_end
Finish calibrating by removing observers and calibration hooks
Source code in llmcompressor/modifiers/quantization/gptq/base.py
on_finalize

Disable the quantization observers used by the OBCQ algorithm

Parameters:

- state (State) – session state storing input model and calibration data

Source code in llmcompressor/modifiers/quantization/gptq/base.py
on_initialize
Initialize and run the GPTQ algorithm on the current state
Parameters:

- state (State) – session state storing input model and calibration data
Source code in llmcompressor/modifiers/quantization/gptq/base.py
QuantizationMixin
Bases: HooksMixin
Mixin which enables a Modifier to act as a quantization config, attaching observers, calibration hooks, and compression wrappers to modifiers
Lifecycle:

- on_initialize: QuantizationMixin.initialize_quantization
  - Attach schemes to modules
  - Attach observers to modules
  - Disable quantization until calibration starts/finishes
- on_start: QuantizationMixin.start_calibration
  - Attach calibration hooks
  - Apply calibration status
  - Enable quantization during calibration
- on_end: QuantizationMixin.end_calibration
  - Remove calibration hooks
  - Apply freeze status
  - Keep quantization enabled for future steps
NOTE: QuantizationMixin does not update scales and zero-points on its own, as this is not desired for all Modifiers inheriting from it. The Modifier must explicitly call update_weight_zp_scale. See the QuantizationModifier.on_start method for an example.
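The attach-observers → calibrate → freeze flow can be illustrated with a toy observer in plain Python (this is a conceptual stand-in, not the library's observer API):

```python
import numpy as np

class MinMaxObserver:
    """Toy stand-in for a calibration observer: tracks the running min/max of
    the values it sees, then yields scale/zero-point (symmetric int8 here)."""
    def __init__(self):
        self.lo, self.hi = np.inf, -np.inf

    def observe(self, x: np.ndarray) -> None:
        self.lo = min(self.lo, float(x.min()))
        self.hi = max(self.hi, float(x.max()))

    def qparams(self):
        scale = max(abs(self.lo), abs(self.hi)) / 127.0
        return scale, 0  # symmetric => zero-point is 0

# lifecycle sketch: attach -> calibrate -> freeze
obs = MinMaxObserver()
for batch in (np.array([-1.0, 2.0]), np.array([4.0, 0.5])):
    obs.observe(batch)        # calibration hooks feed the observer
scale, zp = obs.qparams()     # the modifier explicitly computes qparams
# "freeze": drop the observer, keep only (scale, zp) for quantized forwards
```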
Parameters:

- config_groups – dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.
- targets – list of layer names to quantize if a scheme is provided. If unset, will contain all targets listed in config_groups. If config_groups is also unset, will default to ["Linear"] (i.e. all Linear layers will be targeted). This field is not the source of truth for finding all matching target layers in a model, as additional information can be stored in config_groups. Use self.resolved_targets instead.
- ignore – optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to an empty list.
- scheme – a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets, for example W8A8: ['Linear'] for 8-bit weights and activations.
- kv_cache_scheme – optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that might be cached) and quantizes those outputs, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.
- weight_observer – optional observer name for weight quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".
- input_observer – optional observer name for input activation quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".
- output_observer – optional observer name for output activation quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".
- observer – optional dictionary to specify observers for multiple quantization types at once. Keys can be "weights", "input", or "output"; values are observer names, for example {"weights": "MSE", "input": "MSE"}. If both the individual observer parameters (weight_observer, input_observer, output_observer) and the observer dict are provided, the observer dict takes precedence.
- bypass_divisibility_checks – if True, skip the check that weight columns are divisible by group_size for the GROUP/TENSOR_GROUP strategies. Use when your runtime (e.g. vLLM) supports non-divisible dimensions. Defaults to False.
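The divisibility check referenced by bypass_divisibility_checks (and performed early by the group_size_validation module) reduces to a simple precondition. A hedged sketch, with an illustrative function name:

```python
def check_group_divisibility(in_features: int, group_size: int,
                             bypass: bool = False) -> None:
    """Illustrative early divisibility check: GROUP/TENSOR_GROUP strategies
    require weight columns to split evenly into groups, unless explicitly
    bypassed for runtimes that tolerate a ragged final group."""
    if bypass:
        return
    if in_features % group_size != 0:
        raise ValueError(
            f"in_features={in_features} is not divisible by "
            f"group_size={group_size}; set bypass_divisibility_checks=True "
            "only if your runtime supports non-divisible dimensions"
        )

check_group_divisibility(4096, 128)               # ok: 32 full groups
check_group_divisibility(4000, 128, bypass=True)  # check skipped on request
```

Failing early here is preferable to discovering, after a long calibration run, that the quantized checkpoint cannot be packed.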
Methods:

- end_calibration – Remove calibration hooks and observers, and set the model status to frozen.
- has_config – Determine if the user has specified a quantization config on this modifier
- initialize_quantization – Attach quantization schemes to modules in the model according to the quantization config
- resolve_quantization_config – Returns the quantization config specified by this modifier
- start_calibration – Attach observers and register activation calibration hooks (including kv_cache quantization)
- validate_observer – Validate observer dictionary format. Accepts keys: 'weights', 'input', 'output'
Attributes:

- resolved_config (QuantizationConfig) – Quantization config, resolved just once from the scheme and config_groups inputs
- resolved_targets (Set[str]) – Set of all resolved targets, i.e. all unique targets listed in the resolved quantization config
resolved_config property
The quantization config is resolved just once, based on the scheme and config_groups inputs.
resolved_targets property
Set of all resolved targets, i.e. all unique targets listed in the resolved quantization config. Use this property instead of the targets field, as targets can also come from config_groups depending on how the recipe is configured.
end_calibration
Remove calibration hooks and observers, and set the model status to frozen. Keep quantization enabled for future operations
Parameters:

- model (Module) – model to end calibration for
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
has_config
Determine if the user has specified a quantization config on this modifier
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
initialize_quantization
Attach quantization schemes to modules in the model according to the quantization config specified on this modifier
Parameters:

- model (Module) – model to attach schemes and observers to
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
resolve_quantization_config
Returns the quantization config specified by this modifier
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
start_calibration
Attach observers, register activation calibration hooks (including kv_cache quantization) and enable quantization as we calibrate
Parameters:

- model (Module) – model to prepare for calibration
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
validate_observer
Validate observer dictionary format. Accepts keys: 'weights', 'input', 'output'
Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
QuantizationModifier
Bases: Modifier, QuantizationMixin
Enables post training quantization (PTQ) and quantization aware training (QAT) for a given module or its submodules. After calibration (PTQ) or the start epoch (QAT), the specified module(s) forward pass will emulate quantized execution and the modifier will be enabled until training is completed.
Parameters:

- config_groups – dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.
- targets – list of layer names to quantize if a scheme is provided. Defaults to Linear layers
- ignore – optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to an empty list.
- scheme – a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets, for example W8A8: ['Linear'] for 8-bit weights and activations.
- kv_cache_scheme – optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that might be cached) and quantizes those outputs, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.
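The kv_cache_scheme precondition described above can be checked up front; a minimal sketch (the function name is hypothetical, not part of the library):

```python
def can_quantize_kv_cache(module_names) -> bool:
    """Illustrative precondition for kv_cache_scheme: the model must contain
    modules whose names include 'k_proj' and 'v_proj', since those modules
    produce the keys and values that get cached."""
    has_k = any("k_proj" in name for name in module_names)
    has_v = any("v_proj" in name for name in module_names)
    return has_k and has_v

names = ["model.layers.0.self_attn.q_proj",
         "model.layers.0.self_attn.k_proj",
         "model.layers.0.self_attn.v_proj"]
print(can_quantize_kv_cache(names))  # True for Llama-style attention naming
```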
Methods:

- on_end – Finish calibrating by removing observers and calibration hooks
- on_initialize – Prepare to calibrate activations and weights
- on_start – Begin calibrating activations and weights. Calibrate weights only once on start
on_end
Finish calibrating by removing observers and calibration hooks
Source code in llmcompressor/modifiers/quantization/quantization/base.py
on_initialize
Prepare to calibrate activations and weights
According to the quantization config, a quantization scheme is attached to each targeted module. The module's forward call is also overwritten to perform quantization to inputs, weights, and outputs.
Then, according to the module's quantization scheme, observers and calibration hooks are added. These hooks are disabled until the modifier starts.
Source code in llmcompressor/modifiers/quantization/quantization/base.py
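The overwritten forward call described above can be pictured as fake-quantizing inputs and weights before the usual matmul, so quantization error is emulated in floating point. A hedged, self-contained sketch (not the library's implementation):

```python
import numpy as np

def fake_quant(x: np.ndarray, scale: float) -> np.ndarray:
    # emulate signed int8 round-trip while staying in floating point
    return np.clip(np.round(x / scale), -128, 127) * scale

def quantized_linear_forward(x, w, x_scale, w_scale):
    """Sketch of an overwritten Linear forward: inputs and weights are
    fake-quantized before the matmul so PTQ error shows up in the output."""
    return fake_quant(x, x_scale) @ fake_quant(w, w_scale).T
```

During calibration the scales come from the attached observers; after freezing, they are fixed parameters of the module.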
on_start
Begin calibrating activations and weights. Calibrate weights only once on start