Creating Custom Models — Sentence Transformers documentation

Structure of Sentence Transformer Models

A Sentence Transformer model consists of a collection of modules that are executed sequentially. The most common architecture is a combination of a Transformer module, a Pooling module, and optionally, a Dense module and/or a Normalize module.

For example, the popular all-MiniLM-L6-v2 model can also be loaded by initializing the 3 specific modules that make up that model:

from sentence_transformers import models, SentenceTransformer

transformer = models.Transformer("sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])

Saving Sentence Transformer Models

Whenever a Sentence Transformer model is saved, three types of files are generated:

- modules.json: lists the modules that make up the model, with their index, name, path, and type.
- config_sentence_transformers.json: stores model-level configuration, such as the library versions used for saving, prompts, and the similarity function.
- Module-specific files: each module saves its own configuration (and, where applicable, weights) in its own subdirectory, except for the first module, which saves into the model's root directory.

As a result, if we call SentenceTransformer.save_pretrained("local-all-MiniLM-L6-v2") on the model from the previous snippet, the following files are generated:

local-all-MiniLM-L6-v2/
├── 1_Pooling
│   └── config.json
├── 2_Normalize
├── README.md
├── config.json
├── config_sentence_transformers.json
├── model.safetensors
├── modules.json
├── sentence_bert_config.json
├── special_tokens_map.json
├── tokenizer.json
├── tokenizer_config.json
└── vocab.txt

This contains a modules.json with these contents:

[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]

And a config_sentence_transformers.json with these contents:

{
  "version": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.43.4",
    "pytorch": "2.5.0"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}

Additionally, the 1_Pooling directory contains the configuration file for the Pooling module, while the 2_Normalize directory is empty because the Normalize module does not require any configuration. The sentence_bert_config.json file contains the configuration of the Transformer module, and this module also saves the tokenizer files and the model weights in the root directory.

Loading Sentence Transformer Models

To load a Sentence Transformer model from a saved model directory, the modules.json is read to determine the modules that make up the model. Each module is initialized with the configuration stored in the corresponding module directory, after which the SentenceTransformer class is instantiated with the loaded modules.
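As a rough illustration of that mechanism (this is not the library's actual loader; the `resolve_module_class` helper and the inline modules.json are hypothetical), a `type` string from modules.json can be resolved to a class with importlib:

```python
import importlib
import json


def resolve_module_class(type_string: str):
    """Resolve a dotted path such as 'sentence_transformers.models.Pooling' to a class."""
    module_path, class_name = type_string.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), class_name)


# A minimal modules.json; a real file would reference e.g.
# "sentence_transformers.models.Transformer" -- here we use a stdlib class
# as a stand-in so the snippet runs anywhere.
modules_json = """
[
    {"idx": 0, "name": "0", "path": "", "type": "json.JSONDecoder"}
]
"""

for module_config in json.loads(modules_json):
    module_class = resolve_module_class(module_config["type"])
    print(module_config["idx"], module_class.__name__)  # 0 JSONDecoder
```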

Sentence Transformer Model from a Transformers Model

When you initialize a Sentence Transformer model with a pure Transformers model (e.g., BERT, RoBERTa, DistilBERT, T5), Sentence Transformers creates a Transformer module and a Mean Pooling module by default. This provides a simple way to leverage pre-trained language models for sentence embeddings.

To be specific, these two snippets are identical:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-uncased")

from sentence_transformers import models, SentenceTransformer

transformer = models.Transformer("bert-base-uncased")
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[transformer, pooling])

Advanced: Custom Modules

Input Modules

The first module in a pipeline is called the input module. It is responsible for tokenizing the input text and generating the input features for the subsequent modules. The input module can be any module that subclasses the InputModule class, which is itself a subclass of the Module class.

It has three abstract methods that you need to implement:

- tokenize: converts a list of input texts into the input features (e.g. token ids and attention masks) consumed by forward.
- forward: computes the output features (e.g. token embeddings) from the input features.
- save: saves the module's configuration and, where applicable, weights to a directory.

Optionally, you can also implement the following methods:

- get_sentence_embedding_dimension: returns the dimensionality of the sentence embeddings produced by the module.
- get_max_seq_length: returns the maximum sequence length the module can process.
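As a library-free sketch of that interface shape (the WhitespaceInputModule class and its toy vocabulary are entirely hypothetical; a real implementation would subclass InputModule and return tensors):

```python
class WhitespaceInputModule:
    """Toy stand-in for an InputModule subclass: tokenize + forward + save."""

    def __init__(self, vocab: dict) -> None:
        self.vocab = vocab

    def tokenize(self, texts: list, **kwargs) -> dict:
        # Map each whitespace-separated token to an id; 0 stands in for unknown tokens.
        input_ids = [[self.vocab.get(token, 0) for token in text.split()] for text in texts]
        return {"input_ids": input_ids}

    def forward(self, features: dict, **kwargs) -> dict:
        # A real module would compute token embeddings here; this one just passes through.
        features["token_embeddings"] = features["input_ids"]
        return features

    def save(self, output_path: str, **kwargs) -> None:
        # A real module would write its configuration (and weights) to output_path.
        pass


module = WhitespaceInputModule({"hello": 1, "world": 2})
features = module.tokenize(["hello world"])
print(features["input_ids"])  # [[1, 2]]
```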

Subsequent Modules

Subsequent modules in the pipeline are called non-input modules. They are responsible for processing the features generated by the preceding modules and, in the last module, producing the final sentence embeddings. Non-input modules can be any module that subclasses the Module class.

It has two abstract methods that you need to implement:

- forward: transforms the incoming features dictionary (e.g. computing a sentence_embedding from token_embeddings).
- save: saves the module's configuration and, where applicable, weights to a directory.

Optionally, you can also implement the following methods:

- get_sentence_embedding_dimension: returns the dimensionality of the sentence embeddings produced by the module.
- load: a class method that initializes the module from a saved directory; by default it reads config.json and passes the stored config_keys to __init__.

Example Module

For example, we can create a custom pooling method by implementing a custom Module.

decay_pooling.py

import torch

from sentence_transformers.models import Module


class DecayMeanPooling(Module):
    config_keys: list[str] = ["dimension", "decay"]

    def __init__(self, dimension: int, decay: float = 0.95, **kwargs) -> None:
        super().__init__()
        self.dimension = dimension
        self.decay = decay

    def forward(self, features: dict[str, torch.Tensor], **kwargs) -> dict[str, torch.Tensor]:
        # This module is expected to be used after modules that provide "token_embeddings"
        # and "attention_mask" in the features dictionary.
        token_embeddings = features["token_embeddings"]
        attention_mask = features["attention_mask"].unsqueeze(-1)

        # Apply the attention mask to filter away padding tokens
        token_embeddings = token_embeddings * attention_mask
        # Calculate the mean of the token embeddings
        sentence_embeddings = token_embeddings.sum(1) / attention_mask.sum(1)
        # Apply exponential decay across the embedding dimensions
        importance_per_dim = self.decay ** torch.arange(
            sentence_embeddings.size(1), device=sentence_embeddings.device
        )
        features["sentence_embedding"] = sentence_embeddings * importance_per_dim
        return features

    def get_sentence_embedding_dimension(self) -> int:
        return self.dimension

    def save(self, output_path, *args, safe_serialization=True, **kwargs) -> None:
        self.save_config(output_path)

    # The `load` method by default loads the config.json file from the model directory
    # and initializes the class with the loaded parameters, i.e. the `config_keys`.
    # This works for us, so there is no need to override it.
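To see what the decay does numerically, here is a standalone sketch of just the forward computation (plain torch, no Sentence Transformers required); the all-ones input is chosen so that the mean pooling yields 1.0 in every dimension:

```python
import torch

# Two sentences, three tokens each, embedding dimension 4
token_embeddings = torch.ones(2, 3, 4)
attention_mask = torch.tensor([[1, 1, 1], [1, 1, 0]])  # second sentence has one padding token

decay = 0.5
mask = attention_mask.unsqueeze(-1)
masked = token_embeddings * mask
mean_pooled = masked.sum(1) / mask.sum(1)  # all-ones input -> mean is 1.0 everywhere
importance_per_dim = decay ** torch.arange(mean_pooled.size(1))
sentence_embeddings = mean_pooled * importance_per_dim

print(sentence_embeddings)
# Each row is [1.0, 0.5, 0.25, 0.125]: the mean survives, scaled by the per-dimension decay.
```

Note that the padding token in the second sentence does not change its result: masking removes it from both the sum and the count.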

Note

Adding **kwargs to the __init__, forward, save, load, and tokenize methods is recommended to ensure that the methods remain compatible with future updates to the Sentence Transformers library.

This can now be used as a module in a Sentence Transformer model:

from sentence_transformers import models, SentenceTransformer

from decay_pooling import DecayMeanPooling

transformer = models.Transformer("bert-base-uncased", max_seq_length=256)
decay_mean_pooling = DecayMeanPooling(transformer.get_word_embedding_dimension(), decay=0.99)
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, decay_mean_pooling, normalize])
print(model)
"""
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): DecayMeanPooling()
  (2): Normalize()
)
"""

texts = [
    "Hello, World!",
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence that is used for testing purposes.",
    "This is a test sentence.",
    "This is another test sentence.",
]
embeddings = model.encode(texts)
print(embeddings.shape)
# (5, 768)

You can save this model with SentenceTransformer.save_pretrained, resulting in a modules.json of:

[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_DecayMeanPooling",
    "type": "decay_pooling.DecayMeanPooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]

To ensure that decay_pooling.DecayMeanPooling can be imported, you should copy over the decay_pooling.py file to the directory where you saved the model. If you push the model to the Hugging Face Hub, then you should also upload the decay_pooling.py file to the model’s repository. Then, everyone can use your custom module by calling SentenceTransformer("your-username/your-model-id", trust_remote_code=True).

Note

Using a custom module with remote code stored on the Hugging Face Hub requires that your users specify trust_remote_code as True when loading the model. This is a security measure to prevent remote code execution attacks.

If you have your models and custom modelling code on the Hugging Face Hub, then it might make sense to separate your custom modules into a separate repository. This way, you only have to maintain one implementation of your custom module, and you can reuse it across multiple models. You can do this by updating the type in the modules.json file to include the path to the repository where the custom module is stored, formatted as {repository_id}--{dot_path_to_module}. For example, if the decay_pooling.py file is stored in a repository called my-user/my-model-implementation and the module is called DecayMeanPooling, then the modules.json file may look like this:

[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_DecayMeanPooling",
    "type": "my-user/my-model-implementation--decay_pooling.DecayMeanPooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
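As a rough illustration of this naming scheme (the `split_module_type` helper below is hypothetical, not the library's API, and it assumes `--` does not appear inside the repository id):

```python
from typing import Optional, Tuple


def split_module_type(type_string: str) -> Tuple[Optional[str], str]:
    """Split a modules.json 'type' string into (repository_id, dot_path).

    repository_id is None when the type is a plain dotted path.
    """
    if "--" in type_string:
        repository_id, dot_path = type_string.split("--", 1)
        return repository_id, dot_path
    return None, type_string


print(split_module_type("my-user/my-model-implementation--decay_pooling.DecayMeanPooling"))
# ('my-user/my-model-implementation', 'decay_pooling.DecayMeanPooling')
print(split_module_type("sentence_transformers.models.Normalize"))
# (None, 'sentence_transformers.models.Normalize')
```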

Advanced: Keyword argument passthrough in Custom Modules

If you want your users to be able to specify custom keyword arguments via the SentenceTransformer.encode method, then you can add their names to the modules.json file. For example, if your module should behave differently when users specify a task keyword argument, then your modules.json might look like:

[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "custom_transformer.CustomTransformer",
    "kwargs": ["task"]
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]

Then, you can access the task keyword argument in the forward method of your custom module:

from typing import Optional

import torch

from sentence_transformers.models import Transformer


class CustomTransformer(Transformer):
    def forward(
        self, features: dict[str, torch.Tensor], task: Optional[str] = None, **kwargs
    ) -> dict[str, torch.Tensor]:
        if task == "default":
            ...  # Do something
        else:
            ...  # Do something else
        return features

This way, users can specify the task keyword argument when calling SentenceTransformer.encode:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-username/your-model-id", trust_remote_code=True)
texts = [...]
model.encode(texts, task="default")