RAG
TFRagModel
class transformers.TFRagModel
( config: Optional[PretrainedConfig] = None question_encoder: Optional[TFPreTrainedModel] = None generator: Optional[TFPreTrainedModel] = None retriever: Optional[RagRetriever] = None load_weight_prefix: Optional[str] = None **kwargs )
Parameters
- config (RagConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
- question_encoder (TFPreTrainedModel) — An encoder model compatible with the faiss index encapsulated by the retriever.
- generator (TFPreTrainedModel) — A seq2seq model used as the generator in the RAG architecture.
- retriever (RagRetriever) — A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input, and these contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the outputs of a retriever in multiple steps; see the examples for more details. The model is compatible with any autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator. It has been tested with TFDPRQuestionEncoder as the question_encoder and TFBartForConditionalGeneration as the generator.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a TensorFlow keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
The model is in a developing state: it is currently fully supported in eager mode only, and may not be exportable in SavedModel format.
call
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Tuple[Tuple[Union[np.ndarray, tf.Tensor]]] | None = None doc_scores: np.ndarray | tf.Tensor | None = None context_input_ids: np.ndarray | tf.Tensor | None = None context_attention_mask: np.ndarray | tf.Tensor | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None output_retrieved: bool | None = None n_docs: int | None = None return_dict: bool | None = None training: bool = False **kwargs ) → transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput
or tuple(tf.Tensor)
Parameters
- input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to obtain the indices.
- attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
  What are attention masks?
- encoder_outputs (tuple(tuple(tf.Tensor)), optional) — Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states, optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden states at the output of the last layer of the generator's encoder. Used by the TFRagModel during decoding.
- decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Provide for generation tasks. None by default; construct as per the instructions for the generator model you're using with your RAG instance.
- decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. A causal mask will also be used by default.
- past_key_values (tuple(tuple(tf.Tensor))) — Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used in the RagTokenForGeneration model during decoding.
- doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores has to be provided to the forward pass. doc_scores can be computed via question_encoder_last_hidden_state and retrieved_doc_embeds; see the examples for more information.
- context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoder input_ids by the retriever. If the model is not initialized with a retriever, context_input_ids has to be provided to the forward pass. context_input_ids are returned by __call__().
- context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the retriever. If the model is not initialized with a retriever, context_attention_mask has to be provided to the forward pass. context_attention_mask is returned by __call__().
- use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_retrieved (bool, optional) — Whether or not to return retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and context_attention_mask. See returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a TFRetrievAugLMOutput instead of a plain tuple.
- n_docs (int, optional, defaults to config.n_docs) — Number of documents to retrieve and/or number of documents for which to generate an answer.
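The relationship between doc_scores, question_encoder_last_hidden_state, and retrieved_doc_embeds described above can be sketched with plain NumPy; this is a shape-level illustration with random data, not the library's implementation:

```python
import numpy as np

batch_size, n_docs, hidden_size = 2, 5, 8
rng = np.random.default_rng(0)

# Pooled question representations from the question encoder: (batch_size, hidden_size)
question_hidden_states = rng.normal(size=(batch_size, hidden_size))
# Document embeddings returned by the retriever: (batch_size, n_docs, hidden_size)
retrieved_doc_embeds = rng.normal(size=(batch_size, n_docs, hidden_size))

# doc_scores is the inner product between each question and its retrieved documents:
# (batch, 1, hidden) @ (batch, hidden, n_docs) -> (batch, 1, n_docs) -> (batch, n_docs)
doc_scores = np.matmul(
    question_hidden_states[:, None, :], retrieved_doc_embeds.transpose(0, 2, 1)
).squeeze(1)

print(doc_scores.shape)  # (2, 5)
```

This mirrors the tf.matmul / tf.squeeze computation shown in the TFRagSequenceForGeneration example below.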
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput
or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RagConfig) and inputs.
- logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for each vocabulary token.
- past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains precomputed hidden states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
- doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and question_encoder_last_hidden_state.
- retrieved_doc_embeds (tf.Tensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Used with question_encoder_last_hidden_state to compute the doc_scores.
- retrieved_doc_ids (tf.Tensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
- context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoder input_ids by the retriever.
- context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the retriever.
- question_encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the question encoder (the pooled output of the model).
- question_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
- question_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the question encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- generator_enc_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the generator encoder of the model.
- generator_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
- generator_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the generator encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- generator_dec_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
- generator_dec_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
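Several of the returned tensors above fold the document dimension into the batch dimension, giving shapes like (batch_size * config.n_docs, config.max_combined_length). A small NumPy sketch (hypothetical sizes) shows the flattening and how to recover the per-example, per-document view:

```python
import numpy as np

batch_size, n_docs, max_combined_length = 2, 3, 6

# context_input_ids as returned: the document axis is folded into the batch axis.
context_input_ids = np.arange(batch_size * n_docs * max_combined_length).reshape(
    batch_size * n_docs, max_combined_length
)

# Recover the per-example, per-document view with a reshape.
per_doc = context_input_ids.reshape(batch_size, n_docs, max_combined_length)
print(per_doc.shape)  # (2, 3, 6)
```

Row i * n_docs + j of the flattened tensor corresponds to document j of example i.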
The TFRagModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, TFRagModel

tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)
model = TFRagModel.from_pretrained("facebook/rag-token-base", retriever=retriever, from_pt=True)

input_dict = tokenizer.prepare_seq2seq_batch(
    "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
)
input_ids = input_dict["input_ids"]
outputs = model(input_ids)
TFRagSequenceForGeneration
class transformers.TFRagSequenceForGeneration
( config: Optional[PretrainedConfig] = None question_encoder: Optional[TFPreTrainedModel] = None generator: Optional[TFPreTrainedModel] = None retriever: Optional[RagRetriever] = None **kwargs )
Parameters
- config (RagConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
- question_encoder (TFPreTrainedModel) — An encoder model compatible with the faiss index encapsulated by the retriever.
- generator (TFPreTrainedModel) — A seq2seq model used as the generator in the RAG architecture.
- retriever (RagRetriever) — A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
A TF RAG-sequence model implementation. It performs RAG-sequence specific marginalization in the forward pass.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input, and these contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the outputs of a retriever in multiple steps; see the examples for more details. The model is compatible with any autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator. It has been tested with TFDPRQuestionEncoder as the question_encoder and TFBartForConditionalGeneration as the generator.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a TensorFlow keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
The model is in a developing state: it is currently fully supported in eager mode only, and may not be exportable in SavedModel format.
call
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None doc_scores: np.ndarray | tf.Tensor | None = None context_input_ids: np.ndarray | tf.Tensor | None = None context_attention_mask: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None output_retrieved: Optional[bool] = None n_docs: Optional[int] = None exclude_bos_score: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None reduce_loss: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False **kwargs ) → transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput
or tuple(tf.Tensor)
Parameters
- input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to obtain the indices.
- attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
  What are attention masks?
- encoder_outputs (tuple(tuple(tf.Tensor)), optional) — Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states, optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden states at the output of the last layer of the generator's encoder. Used by the TFRagModel during decoding.
- decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Provide for generation tasks. None by default; construct as per the instructions for the generator model you're using with your RAG instance.
- decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. A causal mask will also be used by default.
- past_key_values (tuple(tuple(tf.Tensor))) — Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used in the RagTokenForGeneration model during decoding.
- doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores has to be provided to the forward pass. doc_scores can be computed via question_encoder_last_hidden_state and retrieved_doc_embeds; see the examples for more information.
- context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoder input_ids by the retriever. If the model is not initialized with a retriever, context_input_ids has to be provided to the forward pass. context_input_ids are returned by __call__().
- context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the retriever. If the model is not initialized with a retriever, context_attention_mask has to be provided to the forward pass. context_attention_mask is returned by __call__().
- use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_retrieved (bool, optional) — Whether or not to return retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and context_attention_mask. See returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a TFRetrievAugLMMarginOutput instead of a plain tuple.
- n_docs (int, optional, defaults to config.n_docs) — Number of documents to retrieve and/or number of documents for which to generate an answer.
- exclude_bos_score (bool, optional) — Only relevant if labels is passed. If True, the score of the BOS token is disregarded when computing the loss.
- labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss according to the RAG-Sequence model formulation. See https://arxiv.org/pdf/2005.11401.pdf Section 2.1 for details about the RAG-Sequence formulation. Indices should be in [0, ..., config.vocab_size - 1].
- reduce_loss (bool, optional) — Only relevant if labels is passed. If True, the NLL loss is reduced using the tf.Tensor.sum operation.
- kwargs (Dict[str, any], optional, defaults to {}) — Legacy dictionary, required so that the model can use the generate() function.
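The RAG-Sequence marginalization this class performs on labels (Section 2.1 of the paper) can be sketched in NumPy: sum the target's token log-probabilities under each document, add the document log-posteriors from doc_scores, and log-sum-exp over documents. This is a shape-level illustration with random data and hypothetical sizes, not the library's implementation:

```python
import numpy as np

batch_size, n_docs, tgt_len, vocab = 2, 3, 4, 10
rng = np.random.default_rng(1)

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

# Per-document token logits from the generator: (batch, n_docs, tgt_len, vocab)
logits = rng.normal(size=(batch_size, n_docs, tgt_len, vocab))
doc_scores = rng.normal(size=(batch_size, n_docs))
labels = rng.integers(0, vocab, size=(batch_size, tgt_len))

# Log-prob of each target token under each document, summed over the sequence.
token_logprobs = np.take_along_axis(
    log_softmax(logits), labels[:, None, :, None], axis=-1
).squeeze(-1)                          # (batch, n_docs, tgt_len)
seq_logprobs = token_logprobs.sum(-1)  # (batch, n_docs)

# Marginalize: log p(y|x) = logsumexp_d [ log p(d|x) + log p(y|x,d) ]
doc_logprobs = log_softmax(doc_scores, axis=-1)
marginal = np.log(np.exp(seq_logprobs + doc_logprobs).sum(-1))  # (batch,)
nll = -marginal.sum()  # reduce_loss=True style sum over the batch
```

With exclude_bos_score=True, the first (BOS) position would simply be dropped from the token_logprobs sum.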
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput
or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RagConfig) and inputs.
- loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
- logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for each vocabulary token.
- past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains precomputed hidden states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
- doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and question_encoder_last_hidden_state.
- retrieved_doc_embeds (tf.Tensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Used with question_encoder_last_hidden_state to compute the doc_scores.
- retrieved_doc_ids (tf.Tensor (int32) of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
- context_input_ids (tf.Tensor (int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoder input_ids by the retriever.
- context_attention_mask (tf.Tensor (int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the retriever.
- question_encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the question encoder (the pooled output of the model).
- question_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
- question_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the question encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- generator_enc_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the generator encoder of the model.
- generator_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
- generator_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the generator encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- generator_dec_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
- generator_dec_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFRagSequenceForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, TFRagSequenceForGeneration
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = TFRagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever, from_pt=True
)

input_dict = tokenizer.prepare_seq2seq_batch(
    "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
)
outputs = model(input_dict, output_retrieved=True)

# or use the retriever separately
input_ids = input_dict["input_ids"]
question_hidden_states = model.question_encoder(input_ids)[0]
docs_dict = retriever(input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf")
doc_scores = tf.squeeze(
    tf.matmul(
        tf.expand_dims(question_hidden_states, axis=1), docs_dict["retrieved_doc_embeds"], transpose_b=True
    ),
    axis=1,
)
outputs = model(
    inputs=None,
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
    decoder_input_ids=input_dict["labels"],
)

generated = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
)
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)
generate
( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None context_input_ids = None context_attention_mask = None doc_scores = None do_deduplication = None num_return_sequences = None num_beams = None n_docs = None **model_kwargs ) → tf.Tensor
of shape (batch_size * num_return_sequences, sequence_length)
Parameters
- input_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — The sequence used as a prompt for the generation. If input_ids is not passed, then context_input_ids has to be provided.
- attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
  What are attention masks?
- context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoder input_ids by the retriever.
- context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the retriever. If the model is not initialized with a retriever or input_ids is not given, context_input_ids and context_attention_mask have to be provided to the forward pass. They are returned by __call__().
- doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and question_encoder_last_hidden_state. If the model is not initialized with a retriever or input_ids is not given, doc_scores has to be provided to the forward pass. doc_scores are returned by __call__().
- do_deduplication (bool, optional) — Whether or not to deduplicate the generations from different context documents for a given input. Has to be set to False if used while training with a distributed backend.
- num_return_sequences (int, optional, defaults to 1) — The number of independently computed returned sequences for each element in the batch. Note that this is not the value we pass to the generator's generate() function, where we set num_return_sequences to num_beams.
- num_beams (int, optional, defaults to 1) — Number of beams for beam search. 1 means no beam search.
- n_docs (int, optional, defaults to config.n_docs) — Number of documents to retrieve and/or number of documents for which to generate an answer.
- kwargs (Dict[str, Any], optional) — Additional kwargs will be passed to generate().
Returns
tf.Tensor of shape (batch_size * num_return_sequences, sequence_length)
The generated sequences. The second dimension (sequence length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
Implements RAG-Sequence "thorough" decoding. Read the generate() documentation for more information on how to set other generate input parameters.
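The "thorough" decoding strategy above can be sketched as: generate candidate sequences for each retrieved document, optionally deduplicate them across documents (do_deduplication), then rescore every unique candidate against all documents and pick the best marginal score. The following pure-Python sketch uses hypothetical candidates and a stand-in scoring function, not the model's actual scores:

```python
import math

# Hypothetical candidate generations (tuples of token ids) produced per document.
candidates_per_doc = {
    "doc0": [(5, 7, 2), (5, 9, 2)],
    "doc1": [(5, 7, 2), (8, 1, 2)],  # (5, 7, 2) was also produced for doc0
}

# Step 1 (do_deduplication=True): pool candidates across documents, dropping duplicates.
pool = []
for cands in candidates_per_doc.values():
    for c in cands:
        if c not in pool:
            pool.append(c)

# Step 2: rescore every unique candidate under every document, then marginalize.
def seq_logprob(candidate, doc):  # stand-in for the model's log p(y | x, doc)
    return -0.1 * len(candidate) - 0.01 * sum(candidate)

marginal = {
    c: math.log(sum(math.exp(seq_logprob(c, d)) for d in candidates_per_doc))
    for c in pool
}
best = max(marginal, key=marginal.get)
print(len(pool), best)  # 3 (8, 1, 2)
```

Deduplication must be disabled under a distributed training backend because different workers may otherwise end up with differently sized candidate pools.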
TFRagTokenForGeneration
class transformers.TFRagTokenForGeneration
( config: Optional[PretrainedConfig] = None question_encoder: Optional[TFPreTrainedModel] = None generator: Optional[TFPreTrainedModel] = None retriever: Optional[RagRetriever] = None **kwargs )
Parameters
- config (RagConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
- question_encoder (TFPreTrainedModel) — An encoder model compatible with the faiss index encapsulated by the
retriever
. - generator (TFPreTrainedModel) — A seq2seq model used as the generator in the RAG architecture.
- retriever (RagRetriever) — A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The TFRagTokenForGeneration forward method overrides the __call__
special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
A TF RAG-token model implementation. It performs RAG-token specific marginalization in the forward pass.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any autoencoding model as the question_encoder
and any seq2seq model with language model head as the generator
. It has been tested with TFDPRQuestionEncoder as the question_encoder
and TFBartForConditionalGeneration as the generator
.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a TensorFlow keras.Model subclass. Use it as a regular TF 2.0 Keras model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
The model is in a developing state: it is currently fully supported in eager mode only, and may not be exportable in SavedModel format.
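The RAG-token specific marginalization mentioned above can be sketched in a few lines. Random tensors stand in for real model outputs here; this mirrors the idea (mixing per-document token distributions with the document posterior in log space), not the library's exact code.

```python
import tensorflow as tf

# Illustrative shapes, not from a real checkpoint.
batch_size, n_docs, seq_len, vocab_size = 2, 3, 4, 10
seq_logits = tf.random.normal((batch_size * n_docs, seq_len, vocab_size))
doc_scores = tf.random.normal((batch_size, n_docs))

# log p(y_t | doc): softmax over the vocabulary, one distribution per document.
seq_logprobs = tf.nn.log_softmax(seq_logits, axis=-1)
seq_logprobs = tf.reshape(seq_logprobs, (batch_size, n_docs, seq_len, vocab_size))

# log p(doc): softmax over the retrieved documents, broadcast over steps/vocab.
doc_logprobs = tf.nn.log_softmax(doc_scores, axis=1)[:, :, None, None]

# Marginalize: log sum_d p(doc_d) * p(y_t | doc_d), per step and vocab entry.
marginalized = tf.reduce_logsumexp(seq_logprobs + doc_logprobs, axis=1)
# marginalized has shape (batch_size, seq_len, vocab_size)
```

Exponentiating the result gives a proper distribution over the vocabulary at each step, since the document probabilities sum to one.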
call
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: np.ndarray | tf.Tensor | None = None past_key_values: Tuple[Tuple[Union[np.ndarray, tf.Tensor]]] | None = None doc_scores: np.ndarray | tf.Tensor | None = None context_input_ids: np.ndarray | tf.Tensor | None = None context_attention_mask: np.ndarray | tf.Tensor | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None output_retrieved: bool | None = None n_docs: int | None = None do_marginalize: bool | None = None labels: np.ndarray | tf.Tensor | None = None reduce_loss: bool | None = None return_dict: bool | None = None training: bool = False **kwargs ) → transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput
or tuple(tf.Tensor)
Parameters
- input_ids (
tf.Tensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to obtain the indices. - attention_mask (
tf.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
What are attention masks?
- encoder_outputs (
tuple(tuple(tf.Tensor))
, optional) — Tuple consists of (generator_enc_last_hidden_state
, optional:generator_enc_hidden_states
,optional:generator_enc_attentions
).generator_enc_last_hidden_state
of shape(batch_size, n_docs * sequence_length, hidden_size)
is a sequence of hidden-states at the output of the last layer of the generator’s encoder.
Used by the (TFRagModel) model during decoding. - decoder_input_ids (
tf.Tensor
of shape(batch_size, target_sequence_length)
, optional) — Provide for generation tasks.None
by default; construct as per the instructions for the generator model you’re using with your RAG instance. - decoder_attention_mask (
tf.Tensor
of shape(batch_size, target_sequence_length)
, optional) — Default behavior: generate a tensor that ignores pad tokens indecoder_input_ids
. Causal mask will also be used by default. - past_key_values (
tuple(tuple(tf.Tensor))
) — Tuple consists of two elements:encoder_outputs
of the RAG model (seeencoder_outputs
) andpast_key_values
of the underlying generator. Can be used to speed up decoding.past_key_values
are used in the (TFRagTokenForGeneration) model during decoding. - doc_scores (
tf.Tensor
of shape(batch_size, config.n_docs)
) — Score between each retrieved document embedding (seeretrieved_doc_embeds
) andquestion_encoder_last_hidden_state
. If the model is not initialized with aretriever
doc_scores
has to be provided to the forward pass.doc_scores
can be computed viaquestion_encoder_last_hidden_state
andretrieved_doc_embeds
, see examples for more information. - context_input_ids (
tf.Tensor
of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoderinput_ids
by the retriever.
If the model is not initialized with aretriever
,context_input_ids
has to be provided to the forward pass.context_input_ids
are returned by__call__()
. - context_attention_mask (tf.Tensor
of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoderinput_ids
by the retriever.
If the model is not initialized with aretriever
context_attention_mask
has to be provided to the forward pass.context_attention_mask
are returned by__call__()
. - use_cache (
bool
, optional, defaults toTrue
) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - output_retrieved (
bool
, optional) — Whether or not to return theretrieved_doc_embeds
,retrieved_doc_ids
,context_input_ids
andcontext_attention_mask
. See returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return aTFRetrievAugLMMarginOutput
instead of a plain tuple. - n_docs (
int
, optional, defaults toconfig.n_docs) — Number of documents to retrieve and/or number of documents for which to generate an answer. - do_marginalize (
bool
, optional) — IfTrue
, the logits are marginalized over all documents by making use oftf.nn.log_softmax
. - labels (
tf.Tensor
ornp.ndarray
of shape(batch_size, sequence_length)
, optional) — Labels for computing the cross entropy classification loss according to the Rag-Token model formulation. See https://arxiv.org/pdf/2005.11401.pdf Section 2.1 for details about the Rag-Token formulation. Indices should be in[0, ..., config.vocab_size - 1]
. - reduce_loss (
bool
, optional) — Only relevant iflabels
is passed. IfTrue
, the NLL loss is reduced using thetf.Tensor.sum
operation. - kwargs (
Dict[str, Any]
, optional, defaults to{}
) — Legacy dictionary, which is required so that the model can use the generate() function.
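As noted for doc_scores above, the scores can be computed from question_encoder_last_hidden_state and retrieved_doc_embeds. A minimal self-contained sketch with toy tensors (shapes are illustrative, not from a real checkpoint): each score is the inner product between the question embedding and one retrieved document embedding.

```python
import tensorflow as tf

# Toy tensors standing in for real model outputs.
batch_size, n_docs, hidden_size = 2, 5, 8
question_hidden = tf.random.normal((batch_size, hidden_size))        # question_encoder_last_hidden_state
retrieved_doc_embeds = tf.random.normal((batch_size, n_docs, hidden_size))

# doc_scores[b, d] = <question_hidden[b], retrieved_doc_embeds[b, d]>
doc_scores = tf.squeeze(
    tf.matmul(tf.expand_dims(question_hidden, axis=1), retrieved_doc_embeds, transpose_b=True),
    axis=1,
)
# doc_scores has shape (batch_size, n_docs)
```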
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput
or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput
or a tuple of tf.Tensor
(ifreturn_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (RagConfig) and inputs.
- loss (
tf.Tensor
of shape(1,)
, optional, returned whenlabels
is provided) — Language modeling loss. - logits (
tf.Tensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for each vocabulary token. - past_key_values (
List[tf.Tensor]
, optional, returned whenuse_cache=True
is passed or whenconfig.use_cache=True
) — List oftf.Tensor
of lengthconfig.n_layers
, with each tensor of shape(2, batch_size, num_heads, sequence_length, embed_size_per_head)
.
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used (seepast_key_values
input) to speed up sequential decoding. - doc_scores (
tf.Tensor
of shape(batch_size, config.n_docs)
) — Score between each retrieved document embedding (seeretrieved_doc_embeds
) andquestion_encoder_last_hidden_state
. - retrieved_doc_embeds (
tf.Tensor
of shape(batch_size, config.n_docs, hidden_size)
, optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used withquestion_encoder_last_hidden_state
to compute thedoc_scores
. - retrieved_doc_ids (
tf.Tensor
(int32) of shape(batch_size, config.n_docs)
, optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever. - context_input_ids (
tf.Tensor
(int32) of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever. - context_attention_mask (
tf.Tensor
(int32) of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoderinput_ids
by the retriever. - question_encoder_last_hidden_state (
tf.Tensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden states at the output of the last layer of the question encoder pooled output of the model. - question_enc_hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftf.Tensor
(one for the output of the embeddings and one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs. - question_enc_attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. - generator_enc_last_hidden_state (
tf.Tensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model. - generator_enc_hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftf.Tensor
(one for the output of the embeddings and one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs. - generator_enc_attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the generator encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. - generator_dec_hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftf.Tensor
(one for the output of the embeddings and one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs. - generator_dec_attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the generator decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFRagTokenForGeneration forward method overrides the __call__
special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoTokenizer, RagRetriever, TFRagTokenForGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever, from_pt=True)

input_dict = tokenizer.prepare_seq2seq_batch(
    "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
)
outputs = model(input_dict, output_retrieved=True)

input_ids = input_dict["input_ids"]
question_hidden_states = model.question_encoder(input_ids)[0]

docs_dict = retriever(input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf")
doc_scores = tf.squeeze(
    tf.matmul(
        tf.expand_dims(question_hidden_states, axis=1), docs_dict["retrieved_doc_embeds"], transpose_b=True
    ),
    axis=1,
)

outputs = model(
    inputs=None,
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
    decoder_input_ids=input_dict["labels"],
)

generated = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
)
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)
generate
( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None context_input_ids = None context_attention_mask = None doc_scores = None n_docs = None generation_config = None logits_processor = [] **kwargs ) → tf.Tensor
of shape (batch_size * num_return_sequences, sequence_length)
Parameters
- input_ids (
tf.Tensor
of shape(batch_size, sequence_length)
, optional) — The sequence used as a prompt for the generation. Ifinput_ids
is not passed, thencontext_input_ids
has to be provided. - attention_mask (
tf.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
What are attention masks?
- context_input_ids (
tf.Tensor
of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Input IDs post-processed from the retrieved documents and the question encoderinput_ids
by the retriever.
If the model is not initialized with aretriever
,context_input_ids
has to be provided to the forward pass.context_input_ids
are returned by__call__()
. - context_attention_mask (
tf.Tensor
of shape(batch_size * config.n_docs, config.max_combined_length)
, optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoderinput_ids
by the retriever.
If the model is not initialized with aretriever
,context_attention_mask
has to be provided to the forward pass.context_attention_mask
are returned by__call__()
. - doc_scores (
tf.Tensor
of shape(batch_size, config.n_docs)
) — Score between each retrieved document embedding (seeretrieved_doc_embeds
) andquestion_encoder_last_hidden_state
.
If the model is not initialized with aretriever
,doc_scores
has to be provided to the forward pass.doc_scores
are returned by__call__()
. - n_docs (
int
, optional, defaults toconfig.n_docs
) — Number of documents to retrieve and/or number of documents for which to generate an answer. - generation_config (
~generation.GenerationConfig
, optional) — The generation configuration to be used as base parametrization for the generation call.**kwargs
passed to generate matching the attributes ofgeneration_config
will override them. Ifgeneration_config
is not provided, the default will be used, which has the following loading priority: 1) from thegeneration_config.json
model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation. - logits_processor (
TFLogitsProcessorList
, optional) — Custom logits processors that complement the default logits processors built from arguments and a model’s config. If a logits processor is passed that is already created with the arguments or a model’s config, an error is thrown. - kwargs (
Dict[str, Any]
, optional) — Ad hoc parametrization ofgeneration_config
and/or additional model-specific kwargs that will be forwarded to theforward
function of the model.
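The generation_config precedence described above can be exercised directly. A minimal sketch; max_length and num_beams are standard GenerationConfig attributes, and the model/generate call is only indicated in a comment:

```python
from transformers import GenerationConfig

# Build a base configuration; attributes not set here fall back to
# GenerationConfig's defaults.
gen_config = GenerationConfig(max_length=32, num_beams=4)

# At call time, kwargs passed to generate() that match GenerationConfig
# attributes override the base config, e.g.:
# model.generate(input_ids, generation_config=gen_config, num_beams=2)
```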
Returns
tf.Tensor
of shape (batch_size * num_return_sequences, sequence_length)
The generated sequences. The second dimension (sequence_length) is either equal to max_length
or shorter if all batches finished early due to the eos_token_id
.
Implements TFRAG token decoding.