Semantic Search using Text Embedding | OpenVINO GenAI

Convert and Optimize Model

Download and convert a text embedding model (e.g. BAAI/bge-small-en-v1.5) to OpenVINO format from Hugging Face:

optimum-cli export openvino --model BAAI/bge-small-en-v1.5 --task feature-extraction bge-small-en-v1_5_ov

See all supported Text Embedding Models.

info

Refer to the Model Preparation guide for detailed instructions on how to download, convert and optimize models for OpenVINO GenAI.

Run Model Using OpenVINO GenAI

TextEmbeddingPipeline generates vector representations for text using embedding models.

import openvino_genai as ov_genai

# Path to the model directory produced by the conversion step above
models_path = "bge-small-en-v1_5_ov"

pipeline = ov_genai.TextEmbeddingPipeline(
    models_path,
    "CPU",
    pooling_type=ov_genai.TextEmbeddingPipeline.PoolingType.MEAN,
    normalize=True,
)

# Example corpus to embed
documents = [
    "France is a country in Western Europe.",
    "Paris is the capital and largest city of France.",
]

documents_embeddings = pipeline.embed_documents(documents)
query_embeddings = pipeline.embed_query("What is the capital of France?")

tip

Use CPU or GPU as the device without any other code changes.
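For example, assuming a GPU plugin is installed, the same pipeline can target the GPU by changing only the device string:

import openvino_genai as ov_genai

# Identical API, different device: only the device string changes
pipeline = ov_genai.TextEmbeddingPipeline("bge-small-en-v1_5_ov", "GPU")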

Additional Usage Options

Pooling Strategies

Text embedding models support different pooling strategies to aggregate token embeddings into a single vector, for example CLS pooling (use the first token's embedding) or MEAN pooling (average the embeddings of all tokens).

You can set the pooling strategy via the pooling_type parameter, as shown below.
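A minimal sketch that selects CLS pooling instead of MEAN (model path taken from the conversion step above):

import openvino_genai as ov_genai

# CLS pooling uses the first token's embedding as the sentence vector;
# MEAN pooling averages the embeddings of all tokens instead
pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "CPU",
    pooling_type=ov_genai.TextEmbeddingPipeline.PoolingType.CLS,
)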

L2 Normalization

L2 normalization scales each output embedding to unit length and can improve retrieval performance. Enable it with the normalize parameter.
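A sketch of why this helps: with normalize=True every embedding has unit L2 norm, so cosine similarity between a query and a document reduces to a plain dot product (model path assumed from the conversion step):

import numpy as np
import openvino_genai as ov_genai

pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "CPU",
    normalize=True,  # output vectors are scaled to unit L2 norm
)

doc = pipeline.embed_documents(["Paris is the capital of France."])[0]
query = pipeline.embed_query("What is the capital of France?")

# For unit-norm vectors, dot product equals cosine similarity
print(f"cosine similarity: {float(np.dot(query, doc)):.3f}")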

Input Size and Padding

You can control how input texts are tokenized and padded via the max_length, pad_to_max_length, and padding_side parameters, as shown in the sketch below.
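A minimal sketch (model path assumed from the conversion step) that truncates long inputs and pads short ones to a fixed 256 tokens; padding_side="right" here is an assumption mirroring the "left" value used in the full example below:

import openvino_genai as ov_genai

pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "CPU",
    max_length=256,          # truncate longer inputs to 256 tokens
    pad_to_max_length=True,  # pad shorter inputs up to max_length
    padding_side="right",    # add padding tokens after the text
)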

Batch Size Configuration

The batch_size parameter controls how many documents are embedded per inference call and is useful for optimizing throughput when populating a vector database, as in the sketch below.
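A sketch under the assumption that embed_documents processes the corpus in chunks of the configured batch size; the corpus itself is hypothetical, for illustration only:

import openvino_genai as ov_genai

pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "CPU",
    batch_size=8,  # documents per inference call
)

# Hypothetical corpus used only for illustration
corpus = [f"Document {i} about topic {i % 10}" for i in range(100)]
embeddings = pipeline.embed_documents(corpus)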

Fixed Shape Optimization

Setting batch_size, max_length, and pad_to_max_length=True together fixes the model's input shape for optimal inference performance (see the sketch after the note below).

info

Fixed shapes are required for NPU device inference.
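A minimal sketch of a fully static configuration; targeting "NPU" here is an assumption based on the note above and requires an available NPU plugin:

import openvino_genai as ov_genai

# batch_size + max_length + pad_to_max_length=True give the model a
# fully static [batch_size, max_length] input shape
pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "NPU",
    batch_size=4,
    max_length=512,
    pad_to_max_length=True,
)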

Query and Embed Instructions

Some models support special instructions for queries and documents. Use the query_instruction and embed_instruction parameters to provide these if needed, as in the sketch below.
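For instance, BGE-family models recommend an instruction prefix for retrieval queries; a sketch using the prefixes from the custom configuration example below:

import openvino_genai as ov_genai

pipeline = ov_genai.TextEmbeddingPipeline(
    "bge-small-en-v1_5_ov",
    "CPU",
    # Prepended to the text passed to embed_query()
    query_instruction="Represent this sentence for searching relevant passages: ",
    # Prepended to each document passed to embed_documents()
    embed_instruction="Represent this passage for retrieval: ",
)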

Example: Custom Configuration

import openvino_genai as ov_genai

# Path to the model directory produced by the conversion step above
models_path = "bge-small-en-v1_5_ov"

pipeline = ov_genai.TextEmbeddingPipeline(
    models_path,
    "CPU",
    pooling_type=ov_genai.TextEmbeddingPipeline.PoolingType.MEAN,
    normalize=True,
    max_length=512,
    pad_to_max_length=True,
    padding_side="left",
    batch_size=4,
    query_instruction="Represent this sentence for searching relevant passages: ",
    embed_instruction="Represent this passage for retrieval: ",
)