
llmcompressor.modifiers.quantization

Classes:

GPTQModifier

Bases: Modifier, QuantizationMixin

Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a Hessian matrix, which is then used to determine the optimal quantization values and ordering for the model weights.

Sample yaml:

test_stage:
  obcq_modifiers:
    GPTQModifier:
      block_size: 128
      dampening_frac: 0.001
      offload_hessians: False
      actorder: static
      config_groups:
        group_0:
          targets:
            - "Linear"
          input_activations: null
          output_activations: null
          weights:
            num_bits: 8
            type: "int"
            symmetric: true
            strategy: group
            group_size: 128

Lifecycle:

  • on_initialize
    • apply config to model
  • on_start
    • add activation calibration hooks
    • add gptq weight calibration hooks
  • on_sequential_epoch_end
    • quantize_weight
  • on_finalize
    • remove_hooks()
    • model.apply(freeze_module_quantization)

Parameters:

  • sequential_targets

    list of layer names to compress during GPTQ, or 'ALL' to compress every layer in the model

  • block_size

    The number of weight columns to compress in one pass

  • dampening_frac

    Amount of dampening to apply to H, as a fraction of the diagonal norm

  • actorder

    order in which weight columns are quantized. Defaults to "static" activation ordering, which achieves best accuracy recovery with no runtime cost. For more information, see https://github.com/vllm-project/vllm/pull/8135

  • offload_hessians

    Set to True for decreased memory usage but increased runtime.

  • config_groups

    dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.

  • targets

    list of layer names to quantize if a scheme is provided. Defaults to Linear layers

  • ignore

    optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to empty list.

  • scheme

    a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets for example: W8A8: ['Linear'] for weight and activation 8-bit.

  • kv_cache_scheme

    optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that may be cached) and quantizes the outputs of those modules, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.
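
As a sketch of what dampening_frac does before the Hessian is inverted (plain Python for illustration; the library operates on torch tensors):

```python
def dampen(hessian, dampening_frac=0.01):
    """Add dampening_frac * mean(diag(H)) to every diagonal entry of H.

    A toy sketch of Hessian dampening for numerical stability; the real
    implementation in llmcompressor works on torch tensors.
    """
    n = len(hessian)
    damp = dampening_frac * sum(hessian[i][i] for i in range(n)) / n
    return [
        [hessian[i][j] + (damp if i == j else 0.0) for j in range(n)]
        for i in range(n)
    ]

H = [[4.0, 1.0], [1.0, 2.0]]
H_damped = dampen(H, dampening_frac=0.5)
# mean(diag(H)) = 3.0, so each diagonal entry grows by 0.5 * 3.0 = 1.5
```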

Methods:

  • calibrate_module

    Calibration hook used to accumulate the hessian of the input to the module

  • compress_modules

    Quantize modules which have been calibrated

  • on_end

    Finish calibrating by removing observers and calibration hooks

  • on_finalize

    Disable the quantization observers used by the OBCQ algorithm

  • on_initialize

    Initialize and run the GPTQ algorithm on the current state

calibrate_module

calibrate_module(
    module: Module,
    args: Tuple[Tensor, ...],
    _output: Tensor,
)

Calibration hook used to accumulate the hessian of the input to the module

Parameters:

  • module

    (Module) –

    module being calibrated

  • args

    (Tuple[Tensor, ...]) –

    inputs to the module, the first element of which is the canonical input

  • _output

    (Tensor) –

    uncompressed module output, unused

Source code in llmcompressor/modifiers/quantization/gptq/base.py
def calibrate_module(
    self,
    module: torch.nn.Module,
    args: Tuple[torch.Tensor, ...],
    _output: torch.Tensor,
):
    """
    Calibration hook used to accumulate the hessian of the input to the module

    :param module: module being calibrated
    :param args: inputs to the module, the first element of which is the
        canonical input
    :param _output: uncompressed module output, unused
    """
    # Assume that first argument is the input
    inp = args[0]

    # Initialize hessian if not present
    if module not in self._num_samples:
        init_device = (
            "cpu" if self.offload_hessians else get_execution_device(module)
        )
        self._hessians[module] = make_empty_hessian(module, device=init_device)
        self._num_samples[module] = torch.zeros(
            tuple(), device=get_execution_device(module)
        )

    # Accumulate hessian with input with optional offloading
    with self._maybe_onload_hessian(module):
        self._hessians[module], self._num_samples[module] = accumulate_hessian(
            inp,
            module,
            self._hessians[module],
            self._num_samples[module],
        )
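
The update above amounts to a running average: if H holds the estimate over n samples and a batch of b inputs arrives, the new estimate is (n·H + 2·XᵀX)/(n + b). A plain-Python sketch of that idea (make_empty_hessian and accumulate_hessian themselves are not shown on this page, so the exact scaling is an assumption):

```python
def accumulate_hessian_sketch(H, n_seen, batch):
    """Fold a batch of row-vector inputs into a running Hessian estimate.

    Maintains H ~= (2 / n) * sum_i x_i x_i^T as batches stream in.
    A toy sketch with nested lists; the library version operates on
    torch tensors with optional CPU offloading.
    """
    d = len(H)
    total = n_seen + len(batch)
    # decay the existing estimate toward the new sample count
    H = [[H[i][j] * n_seen / total for j in range(d)] for i in range(d)]
    # add (2 / total) * X^T X from the new batch
    for x in batch:
        for i in range(d):
            for j in range(d):
                H[i][j] += 2.0 * x[i] * x[j] / total
    return H, total

H = [[0.0, 0.0], [0.0, 0.0]]
H, n = accumulate_hessian_sketch(H, 0, [[1.0, 0.0], [0.0, 1.0]])
H, n = accumulate_hessian_sketch(H, n, [[1.0, 1.0]])
```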

compress_modules

compress_modules()

Quantize modules which have been calibrated

Source code in llmcompressor/modifiers/quantization/gptq/base.py
def compress_modules(self):
    """
    Quantize modules which have been calibrated
    """
    ### Not Distributed
    if not is_distributed():
        self.compress_module_list(list(self._num_samples.keys()))
        return

    ### Distributed
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Assign modules to ranks
    module_list, rank_to_modules, module_to_rank = greedy_bin_packing(
        list(self._hessians.keys()),
        world_size,
        item_weight_fn=lambda mod: self._hessians[mod].shape[0],
    )

    # send hessians to assigned ranks
    self._reduce_hessian_to_target_rank(module_list, module_to_rank)

    self.compress_module_list(rank_to_modules[rank])

    # broadcast compressed modules to each rank
    self._broadcast_quantized_params(module_list, module_to_rank)
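
greedy_bin_packing itself is not shown on this page; the general technique it names can be sketched as largest-first assignment to the least-loaded bin (the name, signature, and return shape here are illustrative assumptions, not the library's exact implementation):

```python
def greedy_bin_packing_sketch(items, n_bins, weight_fn):
    """Assign items to n_bins, heaviest first, each to the least-loaded bin.

    Sketch of the load-balancing idea behind distributing Hessians across
    ranks; the library's greedy_bin_packing also returns lookups in both
    directions (rank -> modules and module -> rank).
    """
    bins = [[] for _ in range(n_bins)]
    loads = [0] * n_bins
    for item in sorted(items, key=weight_fn, reverse=True):
        target = loads.index(min(loads))  # least-loaded bin so far
        bins[target].append(item)
        loads[target] += weight_fn(item)
    return bins, loads

bins, loads = greedy_bin_packing_sketch([7, 5, 4, 3, 1], 2, lambda w: w)
# both bins end up with a total weight of 10
```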

on_end

on_end(state: State, event: Event, **kwargs)

Finish calibrating by removing observers and calibration hooks

Source code in llmcompressor/modifiers/quantization/gptq/base.py
def on_end(self, state: State, event: Event, **kwargs):
    """
    Finish calibrating by removing observers and calibration hooks
    """
    self.ended_ = True
    QuantizationMixin.end_calibration(self, state.model)
    self.remove_hooks()  # remove gptq hooks

on_finalize

on_finalize(state: State, **kwargs) -> bool

Disable the quantization observers used by the OBCQ algorithm

Parameters:

  • state

    (State) –

    session state storing input model and calibration data

Source code in llmcompressor/modifiers/quantization/gptq/base.py
def on_finalize(self, state: State, **kwargs) -> bool:
    """
    disable the quantization observers used by the OBCQ algorithm

    :param state: session state storing input model and calibration data
    """
    if not self.ended_:
        self.on_end(state, None)

    if len(self._num_samples) > 0:
        raise ValueError(f"Failed to compress {len(self._num_samples)} modules")

    self._hessians = dict()
    self._num_samples = dict()

    return True

on_initialize

on_initialize(state: State, **kwargs) -> bool

Initialize and run the GPTQ algorithm on the current state

Parameters:

  • state

    (State) –

    session state storing input model and calibration data

Source code in llmcompressor/modifiers/quantization/gptq/base.py
def on_initialize(self, state: State, **kwargs) -> bool:
    """
    Initialize and run the GPTQ algorithm on the current state

    :param state: session state storing input model and calibration data
    """
    # apply config to model and prepare calibration hooks
    if QuantizationMixin.has_config(self):
        QuantizationMixin.initialize_quantization(self, state.model)

    # prepare module names
    self._module_names = {
        m: name
        for name, m in match_named_modules(
            state.model, self.resolved_targets, self.ignore
        )
    }

    return True

QuantizationMixin

Bases: HooksMixin

Mixin which enables a Modifier to act as a quantization config, attaching observers, calibration hooks, and compression wrappers to model modules

Lifecycle:

  • on_initialize: QuantizationMixin.initialize_quantization
    • Attach schemes to modules
    • Attach observers to modules
    • Disable quantization until calibration starts/finishes
  • on_start: QuantizationMixin.start_calibration
    • Attach calibration hooks
    • Apply calibration status
    • Enable quantization during calibration
  • on_end: QuantizationMixin.end_calibration
    • Remove calibration hooks
    • Apply freeze status
    • Keep quantization enabled for future steps

NOTE: QuantizationMixin does not update scales and zero-points on its own, as this is not desired for all Modifiers inheriting from it. The Modifier must explicitly call update_weight_zp_scale; see the QuantizationModifier.on_start method for an example.

Parameters:

  • config_groups

    dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.

  • targets

    list of layer names to quantize if a scheme is provided. If unset, will contain all targets listed in config_groups; if config_groups is also unset, defaults to ["Linear"] (i.e. all Linear layers are targeted). This field is not the source of truth for finding all matching target layers in a model, since additional targets can be stored in config_groups; use self.resolved_targets instead.

  • ignore

    optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to empty list.

  • scheme

    a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets for example: W8A8: ['Linear'] for weight and activation 8-bit.

  • kv_cache_scheme

    optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that may be cached) and quantizes the outputs of those modules, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.

  • weight_observer

    optional observer name for weight quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".

  • input_observer

    optional observer name for input activation quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".

  • output_observer

    optional observer name for output activation quantization. Overrides the default observer specified in the scheme. Valid values include "minmax", "mse", "static_minmax", "memoryless_minmax", "memoryless_mse".

  • observer

    optional dictionary to specify observers for multiple quantization types at once. Keys can be "weights", "input", or "output". Values are observer names. Example: {"weights": "MSE", "input": "MSE"}. If both individual observer parameters (weight_observer, input_observer, output_observer) and observer dict are provided, the observer dict takes precedence.

  • bypass_divisibility_checks

    if True, skip the check that weight columns are divisible by group_size for GROUP/TENSOR_GROUP. Use when your runtime (e.g. vLLM) supports non-divisible dimensions. Defaults to False.
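
The divisibility check that bypass_divisibility_checks skips boils down to a modulus test per weight matrix. A minimal sketch (the real check walks the model's matched modules):

```python
def group_size_divisible(n_columns, group_size):
    """True when a weight's columns split evenly into quantization groups.

    Sketch of the GROUP/TENSOR_GROUP divisibility requirement that
    bypass_divisibility_checks=True suppresses.
    """
    return n_columns % group_size == 0

assert group_size_divisible(4096, 128)       # 32 whole groups
assert not group_size_divisible(4000, 128)   # 31.25 groups: would fail
```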

Methods:

  • end_calibration

    Remove calibration hooks and observers, and set the model status to frozen.

  • has_config

    Determine if the user has specified a quantization config on this modifier

  • initialize_quantization

    Attach quantization schemes to modules in the model according to the quantization config specified on this modifier

  • resolve_quantization_config

    Returns the quantization config specified by this modifier

  • start_calibration

    Attach observers, register activation calibration hooks (including kv_cache quantization) and enable quantization during calibration

  • validate_observer

    Validate observer dictionary format. Accepts keys: 'weights', 'input', 'output'

Attributes:

  • resolved_config (QuantizationConfig) –

    Quantization config needs to be resolved just once based on scheme and config_groups inputs

  • resolved_targets (Set[str]) –

    Set of all resolved targets, i.e. all unique targets listed in the resolved quantization config

resolved_config property

resolved_config: QuantizationConfig

Quantization config needs to be resolved just once based on scheme and config_groups inputs.

resolved_targets property

resolved_targets: Set[str]

Set of all resolved targets, i.e. all unique targets listed in resolved quantization config. Use this property instead of the targets field, as targets can also come from config_groups depending on how recipe is configured.

end_calibration

end_calibration(model: Module)

Remove calibration hooks and observers, and set the model status to frozen. Keep quantization enabled for future operations

Parameters:

  • model

    (Module) –

    model to end calibration for

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
def end_calibration(self, model: torch.nn.Module):
    """
    Remove calibration hooks and observers, and set the model status to frozen.
    Keep quantization enabled for future operations

    :param model: model to end calibration for
    """
    self.remove_hooks(self._calibration_hooks)
    for _, module in match_named_modules(model, self.resolved_targets, self.ignore):
        freeze_module_quantization(module)  # remove observers

    model.apply(enable_quantization)  # keep quantization enabled

has_config

has_config() -> bool

Determine if the user has specified a quantization config on this modifier

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
def has_config(self) -> bool:
    """
    Determine if the user has specified a quantization config on this modifier
    """
    return not (
        self.config_groups is None
        and self.targets == ["Linear"]
        and self.ignore == []
        and self.scheme is None
        and self.kv_cache_scheme is None
    )
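
In other words, has_config returns False only when every quantization field still holds its default. The predicate can be mirrored in isolation (a stand-alone sketch, not the library's method):

```python
def has_config_sketch(config_groups=None, targets=("Linear",),
                      ignore=(), scheme=None, kv_cache_scheme=None):
    """Mirror of has_config: True unless every field holds its default."""
    return not (
        config_groups is None
        and list(targets) == ["Linear"]
        and list(ignore) == []
        and scheme is None
        and kv_cache_scheme is None
    )

assert has_config_sketch() is False               # all defaults: no config
assert has_config_sketch(scheme="W8A8") is True
assert has_config_sketch(ignore=["lm_head"]) is True
```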

initialize_quantization

initialize_quantization(model: Module)

Attach quantization schemes to modules in the model according to the quantization config specified on this modifier

Parameters:

  • model

    (Module) –

    model to attach schemes and observers to

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
def initialize_quantization(self, model: torch.nn.Module):
    """
    Attach quantization schemes to modules in the model according to
    the quantization config specified on this modifier

    :param model: model to attach schemes and observers to
    """

    for _, module in match_named_modules(model, self.resolved_targets, self.ignore):
        reset_quantization_status(module)  # reset any previously applied qconfigs

    apply_quantization_config(model, self.resolved_config)

    if not self.bypass_divisibility_checks:
        validate_group_size_divisibility(model, self.resolved_targets, self.ignore)

    # disable quantization until calibration
    model.apply(disable_quantization)

resolve_quantization_config

resolve_quantization_config() -> QuantizationConfig

Returns the quantization config specified by this modifier

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
def resolve_quantization_config(self) -> QuantizationConfig:
    """
    Returns the quantization config specified by this modifier
    """
    scheme = self.scheme
    targets = self.targets
    config_groups = self.config_groups
    kv_cache_scheme = self.kv_cache_scheme
    ignore = self.ignore

    if scheme is not None and config_groups is not None:
        raise ValueError("Please specify either `scheme` or `config_groups`")

    if scheme is not None:
        # takes precedence over config_groups

        if isinstance(scheme, str) and is_preset_scheme(scheme):
            # attach targets to scheme
            scheme = {scheme: targets}

        config_groups = {}
        for idx, key in enumerate(scheme.keys()):
            if is_preset_scheme(key):
                scheme_obj = preset_name_to_scheme(key, scheme[key])
            else:
                scheme_obj = QuantizationScheme.model_validate(
                    {"targets": scheme[key], **scheme}
                )

            # Apply observer overrides if specified
            scheme_obj = self._apply_observer_overrides(scheme_obj)

            group_name = f"group_{idx}"
            config_groups[group_name] = scheme_obj

    if config_groups is None or len(config_groups) == 0:
        default_quant_scheme = QuantizationScheme(targets=targets)
        # Apply observer overrides to default scheme as well
        default_quant_scheme = self._apply_observer_overrides(default_quant_scheme)
        config_groups = {"group_0": default_quant_scheme}
    elif scheme is None:
        # Apply observer overrides to all config groups when config_groups
        # was provided directly (not derived from scheme)
        for scheme_obj in config_groups.values():
            self._apply_observer_overrides(scheme_obj)

    return QuantizationConfig(
        config_groups=config_groups,
        kv_cache_scheme=kv_cache_scheme,
        quantization_status=QuantizationStatus.INITIALIZED,
        ignore=ignore,
    )
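
The key normalization step above is that a preset-name string is first coerced into a {name: targets} mapping, and each entry then becomes a numbered config group. A simplified stand-alone sketch (the PRESETS set is a hypothetical stand-in for is_preset_scheme, and groups hold (name, targets) pairs rather than QuantizationScheme objects):

```python
def normalize_scheme_sketch(scheme, targets):
    """Coerce a preset-name string to {name: targets}, then number the groups.

    Simplified from resolve_quantization_config; PRESETS stands in for
    is_preset_scheme, and each group is a (name, targets) pair instead of
    a QuantizationScheme object.
    """
    PRESETS = {"W8A8", "W4A16"}  # hypothetical stand-in for preset detection
    if isinstance(scheme, str) and scheme in PRESETS:
        scheme = {scheme: targets}  # attach targets to the bare preset name
    return {
        f"group_{idx}": (name, scheme[name])
        for idx, name in enumerate(scheme)
    }

groups = normalize_scheme_sketch("W8A8", ["Linear"])
# → {"group_0": ("W8A8", ["Linear"])}
```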

start_calibration

start_calibration(model: Module)

Attach observers, register activation calibration hooks (including kv_cache quantization) and enable quantization as we calibrate

Parameters:

  • model

    (Module) –

    model to prepare for calibration

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
def start_calibration(self, model: torch.nn.Module):
    """
    Attach observers, register activation calibration hooks (including
    kv_cache quantization) and enable quantization as we calibrate

    :param model: model to prepare for calibration
    """
    targets = match_named_modules(model, self.resolved_targets, self.ignore)
    if targets_embeddings(model, targets):
        untie_word_embeddings(model)

    for _, module in match_named_modules(model, self.resolved_targets, self.ignore):
        self._initialize_observers(module)
        self._calibration_hooks |= self._initialize_hooks(module)
        apply_calibration_status(module)

    model.apply(enable_quantization)  # quantize at the same time as calibrate

validate_observer

validate_observer(value: Any) -> Optional[Dict[str, str]]

Validate observer dictionary format. Accepts keys: 'weights', 'input', 'output'

Source code in llmcompressor/modifiers/quantization/quantization/mixin.py
@field_validator("observer", mode="before")
def validate_observer(cls, value: Any) -> Optional[Dict[str, str]]:
    """
    Validate observer dictionary format. Accepts keys: 'weights', 'input', 'output'
    """
    if value is None:
        return value

    if not isinstance(value, dict):
        raise ValueError("`observer` must be a dictionary")

    valid_keys = {"weights", "input", "output"}
    for key in value.keys():
        if key not in valid_keys:
            raise ValueError(
                f"Invalid observer key '{key}'. Valid keys are: {valid_keys}"
            )
        if not isinstance(value[key], str):
            raise ValueError(f"Observer value for '{key}' must be a string")

    return value

QuantizationModifier

Bases: Modifier, QuantizationMixin

Enables post-training quantization (PTQ) and quantization-aware training (QAT) for a given module or its submodules. After calibration (PTQ) or the start epoch (QAT), the forward pass of the specified module(s) will emulate quantized execution, and the modifier will remain enabled until training is completed.

Parameters:

  • config_groups

    dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.

  • targets

    list of layer names to quantize if a scheme is provided. Defaults to Linear layers

  • ignore

    optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to empty list.

  • scheme

    a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets for example: W8A8: ['Linear'] for weight and activation 8-bit.

  • kv_cache_scheme

    optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that targets the k_proj and v_proj modules of the model (the outputs of those modules are the keys and values that may be cached) and quantizes the outputs of those modules, so that keys and values are compressed before being stored in the cache. There is an explicit assumption that the model contains modules with k_proj and v_proj in their names; if this is not the case and kv_cache_scheme != None, kv cache quantization will fail.

Methods:

  • on_end

    Finish calibrating by removing observers and calibration hooks

  • on_initialize

    Prepare to calibrate activations and weights

  • on_start

    Begin calibrating activations and weights. Calibrate weights only once on start

on_end

on_end(state: State, event: Event, **kwargs)

Finish calibrating by removing observers and calibration hooks

Source code in llmcompressor/modifiers/quantization/quantization/base.py
def on_end(self, state: State, event: Event, **kwargs):
    """
    Finish calibrating by removing observers and calibration hooks
    """
    self.ended_ = True
    QuantizationMixin.end_calibration(
        self, state.model
    )  # keep quantization enabled

on_initialize

on_initialize(state: State, **kwargs) -> bool

Prepare to calibrate activations and weights

According to the quantization config, a quantization scheme is attached to each targeted module. The module's forward call is also overwritten to perform quantization to inputs, weights, and outputs.

Then, according to the module's quantization scheme, observers and calibration hooks are added. These hooks are disabled until the modifier starts.

Source code in llmcompressor/modifiers/quantization/quantization/base.py
def on_initialize(self, state: State, **kwargs) -> bool:
    """
    Prepare to calibrate activations and weights

    According to the quantization config, a quantization scheme is attached to each
    targeted module. The module's forward call is also overwritten to perform
    quantization to inputs, weights, and outputs.

    Then, according to the module's quantization scheme, observers and calibration
    hooks are added. These hooks are disabled until the modifier starts.
    """
    if not QuantizationMixin.has_config(self):
        raise ValueError(
            "QuantizationModifier requires that quantization fields be specified"
        )
    QuantizationMixin.initialize_quantization(self, state.model)

    return True

on_start

on_start(state: State, event: Event, **kwargs)

Begin calibrating activations and weights. Calibrate weights only once on start

Source code in llmcompressor/modifiers/quantization/quantization/base.py
def on_start(self, state: State, event: Event, **kwargs):
    """
    Begin calibrating activations and weights. Calibrate weights only once on start
    """
    self.started_ = True
    QuantizationMixin.start_calibration(self, state.model)

    named_modules = list(
        match_named_modules(state.model, self.resolved_targets, self.ignore)
    )
    # TODO: this step can be combined with update_weight_zp_scale
    # once update_fused_layer_weight_global_scales is removed
    # and not required by vLLM
    for _, module in named_modules:
        update_weight_global_scale(module)

    # NOTE: update_fused_layer_weight_global_scales operates on Attention
    # and MLP layers, not quantizable Linear layers. Rather than running
    # on targeted modules, we need to run on all modules.
    # Because this call is idempotent, setting all global_scales to the
    # min value, it is ok to run potentially multiple times for all modules
    for module in state.model.modules():
        update_fused_layer_weight_global_scales(module)

    for _, module in tqdm.tqdm(named_modules, desc="Calibrating weights"):
        update_weight_zp_scale(module)