llmcompressor.modifiers.awq.base
Classes:

- AWQModifier – Implements the AWQ (Activation-aware Weight Quantization) algorithm
AWQModifier
Bases: Modifier, QuantizationMixin
Implements the AWQ (Activation-aware Weight Quantization) algorithm, as described in https://arxiv.org/pdf/2306.00978. The algorithm significantly reduces quantization error by protecting only the ~1% most salient weight channels.
Instead of relying on raw weight values, AWQ identifies important channels by analyzing activation patterns, focusing on the channels in the weight tensor that are most responsive to the input. To reduce quantization error, it scales these channels in a way that preserves the model's original behavior, using scaling factors computed offline from activation statistics.
Because this modifier manipulates the weights of the model, it can only be used in one-shot and not during training. Activation ranges are determined by running a small set of calibration data through the model.
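The channel-scaling trick behind this can be shown with a toy dot product (a minimal sketch, not llmcompressor code): dividing a salient input channel by s while multiplying the matching weight channel by s leaves the full-precision output unchanged, while making that weight channel easier to quantize accurately.

```python
# Equivalence AWQ relies on: y = x . w == (x / s) . (s * w), per channel.
# Exact in full precision; after quantization, scaling the salient channel
# shrinks its relative quantization error.
def dot(x, w):
    return sum(a * b for a, b in zip(x, w))

x = [0.5, 2.0, -1.0]   # input activations (channel 1 is the "salient" one)
w = [0.1, 0.02, 0.3]   # one output channel's weights
s = 4.0                # scaling factor for the salient channel

x_scaled = [x[0], x[1] / s, x[2]]
w_scaled = [w[0], w[1] * s, w[2]]

print(dot(x, w), dot(x_scaled, w_scaled))  # identical up to float rounding
```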
Example recipe:

```yaml
AWQModifier:
  mappings:
    - smooth_layer: "re:.*self_attn_layer_norm"
      balance_layers: ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"]
    - smooth_layer: "re:.*final_layer_norm"
      balance_layers: ["re:.*fc1"]
      # activation_hook_target specifies which submodule of the parent to hook
      # for activation caching. This is only useful for MoE models with
      # parallel transformer blocks; use the default value (None) in most cases.
  ignore: ["lm_head"]
  config_groups:
    group_0:
      targets:
        - "Linear"
      input_activations: null
      output_activations: null
      weights:
        num_bits: 4
        type: int
        symmetric: false
        strategy: group
        group_size: 128
```
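The weights entry above (4-bit, asymmetric, group strategy) amounts to a quantize/dequantize round trip per contiguous group of weight elements. A minimal pure-Python sketch of that scheme, with group_size shortened to 4 for readability; the function is illustrative only, not the compressed-tensors implementation:

```python
# Asymmetric per-group fake quantization: each group gets its own
# scale and zero point, mapping values onto the 0..15 int4 grid.
def fake_quantize_group(weights, num_bits=4, group_size=4):
    qmin, qmax = 0, 2 ** num_bits - 1  # 0..15 for int4
    out = []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / (qmax - qmin) or 1.0  # guard all-equal groups
        zero_point = round(-lo / scale)
        for w in group:
            q = max(qmin, min(qmax, round(w / scale) + zero_point))
            out.append((q - zero_point) * scale)  # dequantize
    return out

w = [0.02, -0.31, 0.45, 0.11, 1.8, -0.9, 0.05, 0.3]
w_q = fake_quantize_group(w)
print([round(v, 3) for v in w_q])
```

Note how the large outlier 1.8 stretches its group's scale, increasing error for its neighbors; this is exactly the effect AWQ's channel scaling mitigates for salient channels.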
Lifecycle:

- on_initialize
    - resolve mappings
    - capture kwargs needed for forward passes into modules
- on_start
    - set up activation cache hooks to capture input activations to balance layers
- on sequential epoch end
    - apply smoothing to each smoothing layer
        - consume cached activations across all batches
            - clear cached activations as they are used
        - find best smoothing scale for each smoothing layer via grid search
        - apply best scales to model weights
        - raise error if any unused activations remain
- on_end
    - re-run logic of sequential epoch end (in case of basic pipeline)
    - set scales and zero points
    - remove activation hooks
- on_finalize
    - clear resolved mappings and captured activations
Parameters:

- sequential_targets – list of module names to compress in the same calibration pass
- mappings – list of mappings describing which activations to smooth, and which layers' outputs to scale so that the activations are smoothed. Each mapping pairs a list of balance layers that share the same input activation (the activation to be smoothed) with the smooth layer whose output is scaled to achieve the smoothing. If regex is used, it matches the layers with the largest overlap in module name. Each mapping may also include an activation_hook_target: a dotted attribute path relative to the parent module (lowest common ancestor) specifying which submodule to hook for activation caching. This is useful for parallel transformer blocks, where the default (hooking balance_layers[0]) would capture the wrong activations.
- ignore – list of layers to ignore during quantization (not smoothed). It should match the names of layers whose outputs are scaled to achieve smoothing (the smooth layers of the mappings list).
- offload_device – device to offload cached args to, which reduces memory requirements but requires more time to move data between the CPU and the execution device. Defaults to None, so cached args are not offloaded. Consider setting it to torch.device("cpu") if you are encountering OOM errors.
- duo_scaling – whether to use duo scaling, which uses both input activations and weights to determine the scaling factor. Defaults to True. If True, both activations and weights are used. If False, only activations are used. If "both", half the grid search is performed with duo_scaling=False and the other half with duo_scaling=True.
- n_grid – number of grid points to use in the best-scales grid search for each mapping. To decrease the runtime, at the possible cost of slightly worse scales, this can be decreased. Defaults to 20.
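How n_grid and duo_scaling interact can be sketched with a toy single-output "layer": each grid point picks an exponent alpha, derives per-channel scales from activation means (and, under duo scaling, weight means too), and keeps the scales with the lowest post-quantization output error. The quantizer and error metric below are stand-ins for llmcompressor's internals, illustrative only:

```python
# Symmetric fake-quantization round trip over a weight vector.
def quantize(w, num_bits=4):
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(abs(v) for v in w) / qmax or 1.0
    return [round(v / scale) * scale for v in w]

def best_scales(x_means, w_means, weights, acts, n_grid=20, duo_scaling=True):
    """Grid-search per-channel scales minimizing output error after quantization."""
    ref = sum(a * w for a, w in zip(acts, weights))  # full-precision output
    best, best_err = None, float("inf")
    for i in range(n_grid):
        alpha = i / n_grid
        if duo_scaling:  # both activation and weight statistics
            s = [(xm ** alpha) / (wm ** (1 - alpha) + 1e-4)
                 for xm, wm in zip(x_means, w_means)]
        else:            # activations only
            s = [xm ** alpha for xm in x_means]
        s = [max(v, 1e-4) for v in s]
        w_q = quantize([w * sj for w, sj in zip(weights, s)])
        out = sum((a / sj) * wj for a, sj, wj in zip(acts, s, w_q))
        err = (out - ref) ** 2
        if err < best_err:
            best, best_err = s, err
    return best

x_means = [2.0, 0.1, 0.5]      # mean |activation| per input channel
w_means = [0.05, 0.3, 0.2]     # mean |weight| per input channel
weights = [0.04, -0.25, 0.18]  # one output channel's weights
acts = [1.9, -0.12, 0.4]       # a calibration sample
scales = best_scales(x_means, w_means, weights, acts)
```

Lowering n_grid coarsens the alpha sweep, which is why the docstring notes the runtime/quality trade-off.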
Methods:

- on_end – Finish calibrating by setting scales and zero-points, removing observers and calibration hooks
- on_finalize – Clean up by clearing the activations and mapping data
- on_initialize – Initialize AWQ on the given state
- validate_duo_scaling – Validate that duo_scaling is either True, False, or "both" (lowercase)
on_end
Finish calibrating by setting scales and zero-points, removing observers and calibration hooks
Source code in llmcompressor/modifiers/awq/base.py
on_finalize
Clean up by clearing the activations and mapping data
Parameters:

- state (State) – unused

Returns:

- bool – True
Source code in llmcompressor/modifiers/awq/base.py
on_initialize
Initialize AWQ on the given state Initialize quantization, resolve mappings, cache module kwargs
Parameters:

- state (State) – state to run AWQ on

Returns:

- bool – True on a successful run, False otherwise
Source code in llmcompressor/modifiers/awq/base.py
validate_duo_scaling (classmethod)
Validate that duo_scaling is either True, False, or 'both' (lowercase)
Source code in llmcompressor/modifiers/awq/base.py
get_lowest_common_ancestor_with_avoid

```python
get_lowest_common_ancestor_with_avoid(
    balance_names: Iterator[str],
    model: Module,
    avoid=torch.nn.ModuleList,
)
```
Get the lowest common ancestor that is not of the avoided class/type. See compressed_tensors.utils.get_lowest_common_ancestor_name for details on case handling.
NOTE: primarily used to exclude parents of type ModuleList, which don't play nicely with hooks because their forward method is never directly called in MoE models. See Qwen3MoeSparseMoeBlock for an example: experts are selected based on the router output and their forward methods are called directly. https://github.com/huggingface/transformers/blob/v4.52.4/src/transformers/models/qwen3_moe/modeling_qwen3_moe.py#L233
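The core idea can be sketched without torch: take the longest common dotted prefix of the balance-layer names, then back off past any ancestor of an avoided type. The name-to-type mapping below stands in for model.named_modules(); this is an illustration under those assumptions, not the compressed-tensors implementation:

```python
# Longest common dotted prefix of module names, backing off while the
# candidate ancestor's type is in `avoid` (e.g. a ModuleList, whose
# forward is never called directly and so cannot be hooked).
def lowest_common_ancestor_with_avoid(names, module_types, avoid=("ModuleList",)):
    split = [n.split(".") for n in names]
    prefix = []
    for parts in zip(*split):
        if len(set(parts)) != 1:
            break
        prefix.append(parts[0])
    ancestor = ".".join(prefix)
    while ancestor and module_types.get(ancestor) in avoid:
        ancestor = ancestor.rsplit(".", 1)[0] if "." in ancestor else ""
    return ancestor

types = {
    "model.layers": "ModuleList",
    "model.layers.0": "DecoderLayer",
    "model.layers.0.mlp": "MLP",
}
print(lowest_common_ancestor_with_avoid(
    ["model.layers.0.mlp.gate_proj", "model.layers.0.mlp.up_proj"], types))
# -> model.layers.0.mlp
```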