Model2Vec

Fast State-of-the-Art Static Embeddings

Model2Vec is a technique to turn any sentence transformer into a really small static model, reducing model size by a factor of up to 50 and making the models up to 500 times faster, with a small drop in performance. Our best model is the most performant static embedding model in the world. See our results here, or dive in to see how it works.

Quickstart

Install the lightweight base package with:
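```bash
pip install model2vec
```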

You can start using Model2Vec by loading one of our flagship models from the HuggingFace hub. These models are pre-trained and ready to use. The following code snippet shows how to load a model and make embeddings, which you can use for any task, such as text classification, retrieval, clustering, or building a RAG system:

```python
from model2vec import StaticModel

# Load a model from the HuggingFace hub (in this case the potion-base-8M model)
model = StaticModel.from_pretrained("minishlab/potion-base-8M")

# Make embeddings
embeddings = model.encode(["It's dangerous to go alone!", "It's a secret to everybody."])

# Make sequences of token embeddings
token_embeddings = model.encode_as_sequence(["It's dangerous to go alone!", "It's a secret to everybody."])
```

Instead of using one of our models, you can also distill your own Model2Vec model from a Sentence Transformer model. First, install the distillation extras with:

```bash
pip install model2vec[distill]
```

Then, you can distill a model in ~30 seconds on a CPU with the following code snippet:

```python
from model2vec.distill import distill

# Distill a Sentence Transformer model, in this case the BAAI/bge-base-en-v1.5 model
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)

# Save the model
m2v_model.save_pretrained("m2v_model")
```
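The saved directory can then be loaded back like any other Model2Vec model; a minimal sketch, assuming from_pretrained also accepts a local path in addition to a hub name:

```python
from model2vec import StaticModel

# Load the distilled model back from the local directory it was saved to
# (assumption: from_pretrained accepts a local path as well as a hub name)
loaded = StaticModel.from_pretrained("m2v_model")
embeddings = loaded.encode(["Distillation needs no training data."])
```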

After distillation, you can also fine-tune your own classification models on top of the distilled model, or on a pre-trained model. First, make sure you install the training extras with:

```bash
pip install model2vec[training]
```

Then, you can fine-tune a model as follows:

```python
import numpy as np
from datasets import load_dataset
from model2vec.train import StaticModelForClassification

# Initialize a classifier from a pre-trained model
classifier = StaticModelForClassification.from_pretrained(model_name="minishlab/potion-base-32M")

# Load a dataset. Note: both single and multi-label classification datasets are supported
ds = load_dataset("setfit/subj")

# Train the classifier on text (X) and labels (y)
classifier.fit(ds["train"]["text"], ds["train"]["label"])

# Evaluate the classifier
classification_report = classifier.evaluate(ds["test"]["text"], ds["test"]["label"])
```
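Once fitted, the classifier can be used for inference; a minimal sketch, assuming StaticModelForClassification exposes a scikit-learn style predict method:

```python
# Predict labels for new texts (assumption: a scikit-learn style predict method)
predictions = classifier.predict(["The plot is described in neutral terms.", "I absolutely loved this film!"])
print(predictions)
```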

For advanced usage, please refer to our usage documentation.

Updates & Announcements

Main Features

What is Model2Vec?

Model2Vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Like BPEmb, it can create subword embeddings, but with much better performance. Distillation doesn't need any data, just a vocabulary and a model.

The core idea is to forward pass a vocabulary through a sentence transformer model, creating static embeddings for the individual tokens. After this, we apply a number of post-processing steps that result in our best models. A rough sketch of this idea follows; for a more extensive deep dive, please refer to the documentation and resources below.
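Below is a minimal illustration of that forward pass, not the exact pipeline used for the released models: it embeds every token of a Sentence Transformer's vocabulary and compresses the embeddings with PCA. The model name, batch size, and PCA dimensionality here are example choices, and the real post-processing goes beyond plain PCA.

```python
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer

# Embed every token in the tokenizer vocabulary with the full transformer
encoder = SentenceTransformer("BAAI/bge-base-en-v1.5")
vocabulary = list(encoder.tokenizer.get_vocab().keys())
token_embeddings = encoder.encode(vocabulary, batch_size=256, show_progress_bar=True)

# Compress the token embeddings (cf. the pca_dims argument of distill)
pca = PCA(n_components=256)
static_embeddings = pca.fit_transform(token_embeddings)

# The result is a static lookup table: one vector per token, no transformer needed at inference time
lookup = dict(zip(vocabulary, static_embeddings))
```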

Documentation

Our official documentation can be found here. This includes:

Model List

We provide a number of models that can be used out of the box. These models are available on the HuggingFace hub and can be loaded using the from_pretrained method. The models are listed below.

| Model | Language | Sentence Transformer | Params | Task |
|---|---|---|---|---|
| potion-base-32M | English | bge-base-en-v1.5 | 32.3M | General |
| potion-base-8M | English | bge-base-en-v1.5 | 7.5M | General |
| potion-base-4M | English | bge-base-en-v1.5 | 3.7M | General |
| potion-base-2M | English | bge-base-en-v1.5 | 1.8M | General |
| potion-retrieval-32M | English | bge-base-en-v1.5 | 32.3M | Retrieval |
| M2V_multilingual_output | Multilingual | LaBSE | 471M | General |

Results

We have performed extensive experiments to evaluate the performance of Model2Vec models. The results are documented in the results folder and are presented in the following sections:

License

MIT

Citing

If you use Model2Vec in your research, please cite the following:

```bibtex
@article{minishlab2024model2vec,
  author = {Tulkens, Stephan and {van Dongen}, Thomas},
  title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec}
}
```