llmcompressor.utils.transformers
Functions:
- get_embeddings – Returns the input and output embeddings of a model. If get_input_embeddings/get_output_embeddings is not implemented on the model, None is returned instead.
- targets_embeddings – Returns True if the given targets target the word embeddings of the model.
- untie_word_embeddings – Unties word embeddings, if possible. Raises a warning if embeddings cannot be found in the model definition.
get_embeddings
Returns the input and output embeddings of a model. If get_input_embeddings/get_output_embeddings is not implemented on the model, None will be returned instead.
Parameters:
- model (PreTrainedModel) – model to get embeddings from
Returns:
- tuple[Module | None, Module | None] – tuple containing the embedding modules, or None for any that could not be found
Source code in llmcompressor/utils/transformers.py
targets_embeddings
targets_embeddings(
model: PreTrainedModel,
targets: NamedModules,
check_input: bool = True,
check_output: bool = True,
) -> bool
Returns True if the given targets target the word embeddings of the model.
Parameters:
- model (PreTrainedModel) – model containing word embeddings
- targets (NamedModules) – named modules to check
- check_input (bool, default: True) – whether to check if input embeddings are targeted
- check_output (bool, default: True) – whether to check if output embeddings are targeted
Returns:
- bool – True if embeddings are targeted, False otherwise
Source code in llmcompressor/utils/transformers.py
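The check can be sketched as below, assuming (hypothetically) that `NamedModules` behaves like a mapping of module names to modules; the real implementation lives in llmcompressor/utils/transformers.py:

```python
import torch.nn as nn

def targets_embeddings_sketch(model, targets, check_input=True, check_output=True) -> bool:
    # Sketch of the documented check: does any targeted module coincide
    # (by identity) with the model's input or output embedding module?
    embedding_ids = set()
    if check_input:
        embedding_ids.add(id(model.get_input_embeddings()))
    if check_output:
        embedding_ids.add(id(model.get_output_embeddings()))
    embedding_ids.discard(id(None))  # ignore missing embeddings
    return any(id(module) in embedding_ids for module in targets.values())

class ToyLM(nn.Module):
    # Hypothetical stand-in for a PreTrainedModel
    def __init__(self, vocab=16, dim=4):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab, dim)
        self.lm_head = nn.Linear(dim, vocab, bias=False)

    def get_input_embeddings(self):
        return self.embed_tokens

    def get_output_embeddings(self):
        return self.lm_head

model = ToyLM()
print(targets_embeddings_sketch(model, {"lm_head": model.lm_head}))  # True
print(targets_embeddings_sketch(model, {"lm_head": model.lm_head},
                                check_output=False))                 # False
```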
untie_word_embeddings
Untie word embeddings, if possible. This function raises a warning if embeddings cannot be found in the model definition.
The model config will be updated to reflect that the embeddings are now untied.
Parameters:
- model (PreTrainedModel) – transformers model containing word embeddings
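The untying step can be sketched as follows. This is a hedged illustration of the behavior described above, not the library source; `ToyLM` and its `SimpleNamespace` config are hypothetical stand-ins for a `PreTrainedModel` with `tie_word_embeddings=True`:

```python
import warnings
from types import SimpleNamespace

import torch.nn as nn

def untie_word_embeddings_sketch(model):
    # Sketch: give the output embedding its own copy of the shared
    # weight, then record the untied state in the model config.
    input_emb = model.get_input_embeddings()
    output_emb = model.get_output_embeddings()
    if input_emb is None or output_emb is None:
        warnings.warn("could not find embeddings to untie")
        return
    # Independent copy breaks the weight sharing
    output_emb.weight = nn.Parameter(input_emb.weight.detach().clone())
    model.config.tie_word_embeddings = False

class ToyLM(nn.Module):
    # Hypothetical stand-in for a PreTrainedModel with tied weights
    def __init__(self, vocab=16, dim=4):
        super().__init__()
        self.config = SimpleNamespace(tie_word_embeddings=True)
        self.embed_tokens = nn.Embedding(vocab, dim)
        self.lm_head = nn.Linear(dim, vocab, bias=False)
        self.lm_head.weight = self.embed_tokens.weight  # tie weights

    def get_input_embeddings(self):
        return self.embed_tokens

    def get_output_embeddings(self):
        return self.lm_head

model = ToyLM()
untie_word_embeddings_sketch(model)
print(model.config.tie_word_embeddings)                   # False
print(model.lm_head.weight is model.embed_tokens.weight)  # False
```

After untying, the two weights hold equal values but are separate parameters, so they can be quantized or modified independently.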