Configuration

The base class PreTrainedConfig implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace’s AWS S3 repository).

Each derived config class implements model-specific attributes. Common attributes present in all config classes are: hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement: vocab_size.

class transformers.PreTrainedConfig

( output_hidden_states: bool = False output_attentions: bool = False return_dict: bool = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None tie_word_embeddings: bool = True chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False is_decoder: bool = False cross_attention_hidden_size: typing.Optional[int] = None add_cross_attention: bool = False tie_encoder_decoder: bool = False architectures: typing.Optional[list[str]] = None finetuning_task: typing.Optional[str] = None id2label: typing.Optional[dict[int, str]] = None label2id: typing.Optional[dict[str, int]] = None num_labels: typing.Optional[int] = None task_specific_params: typing.Optional[dict[str, typing.Any]] = None problem_type: typing.Optional[str] = None tokenizer_class: typing.Optional[str] = None prefix: typing.Optional[str] = None bos_token_id: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Optional[int] = None sep_token_id: typing.Optional[int] = None decoder_start_token_id: typing.Optional[int] = None **kwargs )

Base class for all configuration classes. Handles a few parameters common to all models’ configurations as well as methods for loading/downloading/saving configurations.

A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does not load the model weights. It only affects the model’s configuration.
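
To illustrate, instantiating a model from a configuration alone gives randomly initialized weights; nothing is downloaded. A minimal sketch using an arbitrarily small BertConfig (the values are illustrative, not realistic, and torch must be installed):

```python
from transformers import BertConfig, BertModel

# An arbitrarily small config; every value here is illustrative.
config = BertConfig(
    vocab_size=50,
    hidden_size=16,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=32,
    max_position_embeddings=32,
)

# Building the model from the config gives randomly initialized weights;
# no pretrained weights are downloaded or loaded.
model = BertModel(config)
```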

Setting parameters for sequence generation in the model config is deprecated. For backward compatibility, loading some of them will still be possible, but attempting to overwrite them will throw an exception — you should set them in a [~transformers.GenerationConfig]. Check the documentation of [~transformers.GenerationConfig] for more information about the individual parameters.
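
For instance, sampling and generation settings now belong on a GenerationConfig rather than on the model config (the values below are arbitrary):

```python
from transformers import GenerationConfig

# Generation parameters live on a GenerationConfig,
# not on the model configuration.
generation_config = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
```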

push_to_hub

( repo_id: str commit_message: str | None = None commit_description: str | None = None private: bool | None = None token: bool | str | None = None revision: str | None = None create_pr: bool = False max_shard_size: int | str | None = '50GB' tags: list[str] | None = None )

Upload the configuration file to the 🤗 Model Hub.

Examples:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("google-bert/bert-base-cased")

# Push the config to your namespace with the repo name "my-finetuned-bert".
config.push_to_hub("my-finetuned-bert")

# Push the config to an organization with the repo name "my-finetuned-bert".
config.push_to_hub("huggingface/my-finetuned-bert")

Checks whether the passed dictionary and its nested dicts have a dtype key and, if it's not None, converts torch.dtype to a string of just the type. For example, torch.float32 gets converted into the string "float32", which can then be stored in JSON format.

from_dict

( config_dict: dict **kwargs ) → PreTrainedConfig

The configuration object instantiated from those parameters.

Instantiates a PreTrainedConfig from a Python dictionary of parameters.
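
For example, any key in the dictionary that matches a config attribute overrides the default (values here are arbitrary):

```python
from transformers import BertConfig

# Keys matching config attributes override the defaults;
# everything else keeps its default value.
config = BertConfig.from_dict({"hidden_size": 128, "num_attention_heads": 4})
```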

from_json_file

( json_file: str | os.PathLike ) → PreTrainedConfig

The configuration object instantiated from that JSON file.

Instantiates a PreTrainedConfig from the path to a JSON file of parameters.
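
A minimal sketch, writing an arbitrary JSON file to a temporary directory and loading it back:

```python
import json
import os
import tempfile

from transformers import BertConfig

with tempfile.TemporaryDirectory() as tmp:
    json_file = os.path.join(tmp, "config.json")
    with open(json_file, "w") as f:
        json.dump({"hidden_size": 128}, f)
    # Build a config directly from the JSON file on disk;
    # unspecified attributes keep their defaults.
    config = BertConfig.from_json_file(json_file)
```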

from_pretrained

( pretrained_model_name_or_path: str | os.PathLike cache_dir: str | os.PathLike | None = None force_download: bool = False local_files_only: bool = False token: str | bool | None = None revision: str = 'main' **kwargs ) → PreTrainedConfig

The configuration object instantiated from this pretrained model.

Instantiate a PreTrainedConfig (or a derived class) from a pretrained model configuration.

Examples:

# Download configuration from huggingface.co and cache.
config = BertConfig.from_pretrained("google-bert/bert-base-uncased")

# Load from a directory where the config was saved with `save_pretrained`.
config = BertConfig.from_pretrained("./test/saved_model/")

# Load from a specific configuration file.
config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")

# Override attributes at load time.
config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
assert config.output_attentions == True

# Retrieve kwargs that do not correspond to config attributes.
config, unused_kwargs = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
)
assert config.output_attentions == True
assert unused_kwargs == {"foo": False}

get_config_dict

( pretrained_model_name_or_path: str | os.PathLike **kwargs ) → tuple[Dict, Dict]

The dictionary(ies) that will be used to instantiate the configuration object.

From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating a PreTrainedConfig using from_dict.
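
A minimal sketch, resolving the raw dict from a locally saved config (the path and values are arbitrary):

```python
import tempfile

from transformers import BertConfig

with tempfile.TemporaryDirectory() as tmp:
    BertConfig(hidden_size=128).save_pretrained(tmp)
    # Resolve to the raw parameter dict without instantiating a config object.
    config_dict, unused_kwargs = BertConfig.get_config_dict(tmp)
```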

get_text_config

( decoder = None encoder = None )

Returns the text config related to the text input (encoder) or text output (decoder) of the model. The decoder and encoder input arguments can be used to specify which end of the model we are interested in, which is useful for models that have both text input and output modalities.

There are three possible outcomes of using this method:

  1. On most models, it returns the original config instance itself.
  2. On newer (2024+) composite models, it returns the text section of the config, which is nested under a set of valid names.
  3. On older (2023-) composite models, it discards decoder-only parameters when encoder=True and vice-versa.

register_for_auto_class

( auto_class = 'AutoConfig' )

Register this class with a given auto class. This should only be used for custom configurations as the ones in the library are already mapped with AutoConfig.
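
A minimal sketch of registering a custom configuration; MyCustomConfig, its model_type, and custom_dim are hypothetical names invented for this example (note that most released versions export the base class under the spelling PretrainedConfig):

```python
from transformers import PretrainedConfig


class MyCustomConfig(PretrainedConfig):
    # "my-custom-model" and custom_dim are hypothetical names for this sketch.
    model_type = "my-custom-model"

    def __init__(self, custom_dim=64, **kwargs):
        self.custom_dim = custom_dim
        super().__init__(**kwargs)


# Map the custom config class to AutoConfig so that checkpoints
# with custom code can resolve it automatically.
MyCustomConfig.register_for_auto_class("AutoConfig")
```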

save_pretrained

( save_directory: str | os.PathLike push_to_hub: bool = False **kwargs )

Save a configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
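
A minimal save/reload round trip using a temporary directory (values are arbitrary):

```python
import tempfile

from transformers import BertConfig

config = BertConfig(hidden_size=128)
with tempfile.TemporaryDirectory() as tmp:
    config.save_pretrained(tmp)  # writes config.json into tmp
    reloaded = BertConfig.from_pretrained(tmp)
```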

to_dict

( ) → dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary.

to_diff_dict

( ) → dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from the configuration that correspond to the default config attributes for better readability, while always retaining the config attribute from the class. Serializes to a Python dictionary.

to_json_file

( json_file_path: str | os.PathLike use_diff: bool = True )

Save this instance to a JSON file.

to_json_string

( use_diff: bool = True ) → str

String containing all the attributes that make up this configuration instance in JSON format.

Serializes this instance to a JSON string.
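
A quick sketch showing that the result is ordinary JSON (values are arbitrary):

```python
import json

from transformers import BertConfig

config = BertConfig(hidden_size=128)
# With use_diff=True (the default), only non-default values are serialized.
data = json.loads(config.to_json_string())
```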

update

( config_dict: dict )

Updates attributes of this class with attributes from config_dict.
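
For example (values are arbitrary):

```python
from transformers import BertConfig

config = BertConfig()
# Each key/value pair in the dict overwrites the matching attribute in place.
config.update({"hidden_size": 256, "num_attention_heads": 8})
```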

update_from_string

( update_str: str )

Updates attributes of this class with attributes from update_str.

The expected format is ints, floats, and strings as-is, with booleans given as true or false. For example: "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"

The keys to change have to already exist in the config object.
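
The example string above can be applied to a GPT-2 config, whose defaults include all four keys:

```python
from transformers import GPT2Config

config = GPT2Config()
# Value types are inferred from the existing attribute values,
# so "10" becomes an int, "0.2" a float, and "false" a bool.
config.update_from_string("n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index")
```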