API Documentation — Intel® Extension for PyTorch* 2.7.0+cpu documentation

General

ipex.optimize is generally used for generic PyTorch models.

ipex.optimize(model, dtype=None, optimizer=None, level='O1', inplace=False, conv_bn_folding=None, linear_bn_folding=None, weights_prepack=None, replace_dropout_with_identity=None, optimize_lstm=None, split_master_weight_for_bf16=None, fuse_update_step=None, auto_kernel_selection=None, sample_input=None, graph_mode=None, concat_linear=None)

Apply optimizations at Python frontend to the given model (nn.Module), as well as the given optimizer (optional). If the optimizer is given, optimizations will be applied for training. Otherwise, optimization will be applied for inference. Optimizations include conv+bn folding (for inference only), weight prepacking and so on.

Weight prepacking is a technique to accelerate performance of oneDNN operators. In order to achieve better vectorization and cache reuse, oneDNN uses a specific memory layout called blocked layout. Although the calculation itself with the blocked layout is fast enough, it has drawbacks from a memory usage perspective. When running with the blocked layout, oneDNN splits one or several dimensions of data into blocks of fixed size each time the operator is executed. More detailed information about oneDNN memory formats is available in the oneDNN manual. To reduce this overhead, data is converted to the predefined block shapes prior to oneDNN operator execution. At runtime, if the data shape matches the oneDNN operator execution requirements, oneDNN will not perform a memory layout conversion but go directly to calculation. Through this methodology, called weight prepacking, it is possible to avoid the runtime weight data format conversion and thus increase performance.
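For instance, passing a representative sample_input to ipex.optimize lets weights be prepacked ahead of time using the documented weights_prepack and sample_input parameters (a minimal sketch; the input shape is hypothetical):

import torch
import intel_extension_for_pytorch as ipex

model = ...
model.eval()
# A representative input lets the extension infer shapes and prepack weights up front.
sample = torch.randn(1, 3, 224, 224)  # hypothetical input shape
optimized_model = ipex.optimize(model, dtype=torch.bfloat16,
                                weights_prepack=True, sample_input=sample)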

Parameters:

Returns:

Model and optimizer (if given) modified according to the level knob or other user settings. conv+bn folding may take place and dropout may be replaced by identity. In inference scenarios, convolution, linear and lstm will be replaced with the optimized counterparts in Intel® Extension for PyTorch* (weight prepack for convolution and linear) for good performance. In bfloat16 or float16 scenarios, parameters of convolution and linear will be cast to bfloat16 or float16 dtype.

Warning

Please invoke the optimize function BEFORE invoking DDP in distributed training scenarios.

The optimize function deep-copies the original model. If DDP is invoked before the optimize function, DDP is applied on the original model rather than the one returned from the optimize function. In this case, some operators in DDP, like allreduce, will not be invoked and thus may cause unpredictable accuracy loss.
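A minimal sketch of the recommended ordering (process-group initialization is assumed to happen elsewhere):

import torch
import intel_extension_for_pytorch as ipex
from torch.nn.parallel import DistributedDataParallel as DDP

model = ...
optimizer = ...
model.train()
# Optimize first, then wrap the returned model with DDP.
model, optimizer = ipex.optimize(model, dtype=torch.bfloat16, optimizer=optimizer)
model = DDP(model)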

Examples

bfloat16 inference case.

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16)

running evaluation step.

bfloat16 training case.

optimizer = ...
model.train()
optimized_model, optimized_optimizer = ipex.optimize(model, dtype=torch.bfloat16, optimizer=optimizer)

running training step.

torch.xpu.optimize() is an alternative to the optimize API in Intel® Extension for PyTorch*, providing identical usage for the XPU device only. The motivation for adding this alias is to unify the coding style in user scripts based on the torch.xpu module.

Examples

bfloat16 inference case.

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = torch.xpu.optimize(model, dtype=torch.bfloat16)

running evaluation step.

bfloat16 training case.

optimizer = ...
model.train()
optimized_model, optimized_optimizer = torch.xpu.optimize(model, dtype=torch.bfloat16, optimizer=optimizer)

running training step.

ipex.llm.optimize is used for Large Language Models (LLM).

ipex.llm.optimize(model, optimizer=None, dtype=torch.float32, inplace=False, device='cpu', quantization_config=None, qconfig_summary_file=None, low_precision_checkpoint=None, sample_inputs=None, deployment_mode=True, cache_weight_for_large_batch=False)

Apply optimizations at Python frontend to the given transformers model (nn.Module). This API focuses on transformers models, especially inference for generation tasks.

Well supported model family with full functionalities: Llama, MLlama, GPT-J, GPT-Neox, OPT, Falcon, Bloom, CodeGen, Baichuan, ChatGLM, GPTBigCode, T5, Mistral, MPT, Mixtral, StableLM, QWen, Git, Llava, Yuan, Phi, Whisper, Maira2, Jamba, DeepSeekV2.

For a model that is not in the scope of the supported model families above, ipex.llm.optimize will try to apply the default ipex.optimize transparently to get benefits (quantization is not included; this only works for the dtypes torch.bfloat16, torch.half and torch.float).

Parameters:

Returns:

Optimized model object for model.generate(), which also works with model.forward.

Warning

Please invoke the ipex.llm.optimize function AFTER invoking DeepSpeed in Tensor Parallel inference scenarios.

Examples

bfloat16 generation inference case.

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.llm.optimize(model, dtype=torch.bfloat16)
optimized_model.generate()
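A more complete generation sketch with a Hugging Face transformers model (the checkpoint name, prompt and generation arguments are placeholders):

import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "..."  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

model = ipex.llm.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("...", return_tensors="pt")  # placeholder prompt
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))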

class ipex.verbose(level)

On-demand oneDNN verbosing functionality

To make it easier to debug performance issues, oneDNN can dump verbose messages containing information like kernel size, input data size and execution duration while executing the kernel. The verbosing functionality can be invoked via an environment variable named DNNL_VERBOSE. However, this methodology dumps messages for all steps, which produces a large amount of verbose messages. Moreover, for investigating performance issues, taking verbose messages for one single iteration is generally enough.

This on-demand verbosing functionality makes it possible to control scope for verbose message dumping. In the following example, verbose messages will be dumped out for the second inference only.

import intel_extension_for_pytorch as ipex
model(data)
with ipex.verbose(ipex.VERBOSE_ON):
    model(data)

Parameters:

level – Verbose level

LLM Module Level Optimizations (Prototype)

Module level optimization APIs are provided for optimizing customized LLMs.
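For example, a fused module such as ipex.llm.modules.LinearSilu (documented below) can replace a hand-written linear + SiLU pattern inside a customized decoder block (a minimal sketch; MyMLP and its attribute names are hypothetical):

import torch
import intel_extension_for_pytorch as ipex

class MyMLP(torch.nn.Module):  # hypothetical custom MLP block
    def __init__(self, hidden_size):
        super().__init__()
        self.gate_proj = torch.nn.Linear(hidden_size, 4 * hidden_size)
        self.down_proj = torch.nn.Linear(4 * hidden_size, hidden_size)

    def forward(self, x):
        return self.down_proj(torch.nn.functional.silu(self.gate_proj(x)))

mlp = MyMLP(4096)
# Replace the linear + silu pattern with the fused IPEX module.
fused_gate = ipex.llm.modules.LinearSilu(mlp.gate_proj)
hidden = fused_gate(torch.randn(1, 32, 4096))
out = mlp.down_proj(hidden)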

class ipex.llm.modules.LinearSilu(linear)

Applies a linear transformation to the input data, and then applies PyTorch SiLU (see https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html) on the result:

result = torch.nn.functional.silu(linear(input))

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with silu.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearSilu(linear_module)

module forward:

input = torch.randn(4096, 4096)
result = ipex_fusion(input)

class ipex.llm.modules.LinearSiluMul(linear)

Applies a linear transformation to the input data, then applies PyTorch SiLU (see https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html) on the result, and multiplies the result by other:

result = torch.nn.functional.silu(linear(input)) * other

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with silu and mul.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearSiluMul(linear_module)

module forward:

input = torch.randn(4096, 4096)
other = torch.randn(4096, 4096)
result = ipex_fusion(input, other)

class ipex.llm.modules.Linear2SiluMul(linear_s, linear_m)

Applies two linear transformations to the input data (linear_s and linear_m), then applies PyTorch SiLU (see https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html) on the result from linear_s, and multiplies it by the result from linear_m:

result = torch.nn.functional.silu(linear_s(input)) * linear_m(input)

Parameters:

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_s_module = torch.nn.Linear(4096, 4096)
linear_m_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.Linear2SiluMul(linear_s_module, linear_m_module)

module forward:

input = torch.randn(4096, 4096)
result = ipex_fusion(input)

class ipex.llm.modules.LinearRelu(linear)

Applies a linear transformation to the input data, and then applies PyTorch ReLU (see https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html) on the result:

result = torch.nn.functional.relu(linear(input))

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with relu.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearRelu(linear_module)

module forward:

input = torch.randn(4096, 4096)
result = ipex_fusion(input)

class ipex.llm.modules.LinearNewGelu(linear)

Applies a linear transformation to the input data, and then applies NewGELUActivation (see https://github.com/huggingface/transformers/blob/main/src/transformers/activations.py#L50) on the result:

result = NewGELUActivation(linear(input))

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with new_gelu.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearNewGelu(linear_module)

module forward:

input = torch.randn(4096, 4096)
result = ipex_fusion(input)

class ipex.llm.modules.LinearGelu(linear)

Applies a linear transformation to the input data, and then applies PyTorch GELU (see https://pytorch.org/docs/stable/generated/torch.nn.functional.gelu.html) on the result:

result = torch.nn.functional.gelu(linear(input))

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with gelu.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearGelu(linear_module)

module forward:

input = torch.randn(4096, 4096)
result = ipex_fusion(input)

class ipex.llm.modules.LinearMul(linear)

Applies a linear transformation to the input data, and then multiplies the result by other:

result = linear(input) * other

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with mul.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearMul(linear_module)

module forward:

input = torch.randn(4096, 4096)
other = torch.randn(4096, 4096)
result = ipex_fusion(input, other)

class ipex.llm.modules.LinearAdd(linear)

Applies a linear transformation to the input data, and then adds other to the result:

result = linear(input) + other

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with add.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearAdd(linear_module)

module forward:

input = torch.randn(4096, 4096)
other = torch.randn(4096, 4096)
result = ipex_fusion(input, other)

class ipex.llm.modules.LinearAddAdd(linear)

Applies a linear transformation to the input data, and then adds other_1 and other_2 to the result:

result = linear(input) + other_1 + other_2

Parameters:

linear (torch.nn.Linear module) – the original torch.nn.Linear module to be fused with add and add.

Shape:

Input and output shapes are the same as torch.nn.Linear.

Examples

module init:

linear_module = torch.nn.Linear(4096, 4096)
ipex_fusion = ipex.llm.modules.LinearAddAdd(linear_module)

module forward:

input = torch.randn(4096, 4096)
other_1 = torch.randn(4096, 4096)
other_2 = torch.randn(4096, 4096)
result = ipex_fusion(input, other_1, other_2)

class ipex.llm.modules.RotaryEmbedding(max_position_embeddings: int, pos_embd_dim: int, base=10000, backbone: str | None = None, extra_rope_config: dict | None = None)

[module init and forward] Applies RotaryEmbedding (see https://huggingface.co/papers/2104.09864) on the query or key before their multi-head attention computation.

module init

Parameters:

forward()

Parameters:

Examples

module init:

rope_module = ipex.llm.modules.RotaryEmbedding(2048, 64, base=10000, backbone="GPTJForCausalLM")

forward:

query = torch.randn(1, 32, 16, 256)
position_ids = torch.arange(32).unsqueeze(0)
query_rotary = rope_module(query, position_ids, 16, 256, 1, 64)

[Direct function call] This module also provides a .apply_function function call to be used on query and key at the same time without initializing the module (assuming rotary embedding sin/cos values are provided).

apply_function()

Parameters:

Returns:

[batch size, sequence length, num_head/num_kv_head, head_dim] or [num_tokens, num_head/num_kv_head, head_dim].

Return type:

query, key (torch.Tensor)

class ipex.llm.modules.RMSNorm(hidden_size: int, eps: float = 1e-06, weight: Tensor | None = None)

[module init and forward] Applies RMSnorm on the input (hidden states). (see https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L76)

module init

Parameters:

forward()

Parameters:

hidden_states (torch.Tensor) – the input to apply RMSNorm on, usually with a shape of [batch size, sequence length, hidden_size] (which is also the output shape).

Examples

module init:

rmsnorm_module = ipex.llm.modules.RMSNorm(4096)

forward:

input = torch.randn(1, 32, 4096)
result = rmsnorm_module(input)

[Direct function call] This module also provides a .apply_function function call to apply RMSNorm without initializing the module.

apply_function()

Parameters:

class ipex.llm.modules.FastLayerNorm(normalized_shape: Tuple[int, ...], eps: float, weight: Tensor, bias: Tensor | None = None)

[module init and forward] Applies PyTorch Layernorm (see https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html) on the input (hidden states).

module init

Parameters:

forward()

Parameters:

hidden_states (torch.Tensor) – the input to apply LayerNorm on, usually with a shape of [batch size, sequence length, hidden_size] (which is also the output shape).

Examples

module init:

layernorm = torch.nn.LayerNorm(4096)
layernorm_module = ipex.llm.modules.FastLayerNorm(4096, eps=1e-05, weight=layernorm.weight, bias=layernorm.bias)

forward:

input = torch.randn(1, 32, 4096)
result = layernorm_module(input)

[Direct function call] This module also provides a .apply_function function call to apply fast layernorm without initializing the module.

apply_function()

Parameters:

class ipex.llm.modules.IndirectAccessKVCacheAttention(text_max_length=2048)

kv_cache is used to reduce computation for the Decoder layer, but it also brings memory overheads. For example, when using beam search, the kv_cache should be reordered according to the latest beam idx, and the current key/value should also be concatenated with kv_cache in the attention layer to get the entire context to do the scaled dot product. When the sequence is very long, this memory overhead becomes the performance bottleneck. This module provides an Indirect Access KV_cache (IAKV). First, IAKV pre-allocates buffers (key and value use different buffers) to store all key/value hidden states and beam index information. It can use the beam index history to decide which beam should be used at a timestamp, and this information generates an offset to access the kv_cache buffer.

Data Format:

The shape of the pre-allocated key (value) buffer is [max_seq, beam*batch, head_num, head_size]; the hidden state of key/value, with shape [beam*batch, head_num, head_size], is stored token by token. The beam idx information for every timestamp is also stored in a Tensor with the shape [max_seq, beam*batch].

module init

Parameters:

text_max_length (int) – the max length of kv cache to be used for generation (allocate the pre-cache buffer).

forward()

Parameters:

Returns:

Weighted value, which is the output of the scaled dot product, with shape (beam*batch, seq_len, head_num, head_size).

attn_weights: the output tensor of the first matmul in the scaled dot product, which is not supported by the kernel for now.

new_layer_past: updated layer_past (seq_info, key_cache, value_cache, beam-idx).

Return type:

attn_output

Notes

How to reorder the KV cache when using the format of IndirectAccessKVCacheAttention (e.g., for the llama model, see https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1318):

def _reorder_cache(
    self, past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
) -> Tuple[Tuple[torch.Tensor]]:
    if len(past_key_values[0]) == 4 and past_key_values[0][0].shape[-1] == 1:
        for layer_past in past_key_values:
            layer_past[3][layer_past[0].size(-2) - 1] = beam_idx
    return past_key_values

[Direct function call] This module also provides a .apply_function function call to apply IndirectAccessKVCacheAttention without initializing the module.

The parameters of apply_function() are the same as the forward() call.

class ipex.llm.modules.PagedAttention

This module follows the API of two class methods as in vLLM (https://blog.vllm.ai/2023/06/20/vllm.html) to enable the paged attention kernel, and uses the layout of (num_blocks, num_heads, block_size, head_size) for the key/value cache. The basic logic is as follows. First, a DRAM buffer containing num_blocks blocks is pre-allocated to store the key or value cache. Each block can store block_size tokens. In the forward pass, the cache manager first allocates some slots from this buffer and uses the reshape_and_cache API to store the key/value, and then uses the single_query_cached_kv_attention API to do the scaled dot product of MHA. The block is the basic allocation unit of paged attention, and the tokens within a block are stored one by one. The block tables are used to map the logical blocks of a sequence to physical blocks.
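A sketch of the pre-allocated cache layout described above (all sizes are hypothetical; the slot_mapping interpretation follows the vLLM convention and is an assumption here):

import torch

num_blocks, num_heads, block_size, head_size = 128, 32, 16, 128
# Pre-allocated DRAM buffers for the key/value cache, in the documented layout.
key_cache = torch.empty(num_blocks, num_heads, block_size, head_size)
value_cache = torch.empty(num_blocks, num_heads, block_size, head_size)
# Block tables map each logical block of a sequence to a physical block id;
# slot_mapping would give a flat slot (block_id * block_size + offset) per token.
max_blocks_per_seq = 8
block_tables = torch.zeros(1, max_blocks_per_seq, dtype=torch.int32)
slot_mapping = torch.arange(16, dtype=torch.int64)  # hypothetical: 16 tokens in block 0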

[class method]: reshape_and_cache

ipex.llm.modules.PagedAttention.reshape_and_cache(key, value, key_cache, value_cache, slot_mapping, kv_cache_dtype, k_scale, v_scale)

This operator is used to store the key/value token states into the pre-allocated kv_cache buffers of paged attention.

Parameters:

[class method]: reshape_and_cache_flash

ipex.llm.modules.PagedAttention.reshape_and_cache_flash(key, value, key_cache, value_cache, slot_mapping, k_scale, v_scale)

This operator is used to store the key/value token states into the pre-allocated kv_cache buffers of paged attention. This method's implementation is the same as reshape_and_cache, but it is needed to align with XPU.

Parameters:


[class method]: single_query_cached_kv_attention

ipex.llm.modules.PagedAttention.single_query_cached_kv_attention(out, query, key_cache, value_cache, head_mapping, scale, block_tables, context_lens, block_size, max_context_len, alibi_slopes, window_size, k_scale, v_scale)

This operator is used to calculate the scaled dot product based on paged attention.

Parameters:

[class method]: flash_atten_varlen

ipex.llm.modules.PagedAttention.flash_atten_varlen(out, query, key_cache, value_cache, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv, scale, is_cusal, block_tables, alibi_slopes, window_size_left, window_size_right, k_scale, v_scale)

Parameters:

class ipex.llm.modules.VarlenAttention

[module init and forward] Applies PyTorch scaled_dot_product_attention on the inputs of query, key and value (see https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html), and accepts variant (different) sequence lengths among the query, key and value.

This module does not have args for module init.

forward()

Parameters:

Examples

module init:

varlenAttention_module = ipex.llm.modules.VarlenAttention()

forward:

query = torch.randn(32, 16, 256)
key = torch.randn(32, 16, 256)
value = torch.randn(32, 16, 256)
out = torch.empty_like(query)
seqlen_q = torch.tensor(1)
seqlen_k = torch.tensor(1)
max_seqlen_q = 1
max_seqlen_k = 1
pdropout = 0.0
softmax_scale = 0.5
varlenAttention_module(query, key, value, out, seqlen_q, seqlen_k, max_seqlen_q, max_seqlen_k, pdropout, softmax_scale)

[Direct function call] This module also provides a .apply_function function call to apply VarlenAttention without initializing the module.

The parameters of apply_function() are the same as the forward() call.

ipex.llm.functional.rotary_embedding(query: Tensor, key: Tensor, sin: Tensor, cos: Tensor, rotary_dim: int, rotary_half: bool, position_ids: Tensor | None = None)

Applies RotaryEmbedding (see https://huggingface.co/papers/2104.09864) on the query or key before their multi-head attention computation.

Parameters:

Return

query, key (torch.Tensor): [batch size, sequence length, num_head/num_kv_head, head_dim] or [num_tokens, num_head/num_kv_head, head_dim].
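A minimal sketch of the direct function call (the sin/cos tables are assumed to be precomputed by the caller; shapes are hypothetical):

import torch
import intel_extension_for_pytorch as ipex

query = torch.randn(1, 32, 16, 256)
key = torch.randn(1, 32, 16, 256)
# sin/cos tables for the rotary dimension, assumed to be precomputed by the caller.
sin = ...
cos = ...
rotary_dim = 64
query, key = ipex.llm.functional.rotary_embedding(query, key, sin, cos, rotary_dim, rotary_half=True)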

ipex.llm.functional.rms_norm(hidden_states: Tensor, weight: Tensor, eps: float)

Applies RMSnorm on the input (hidden states). (see https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L76)

Parameters:
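A minimal usage sketch (shapes are hypothetical):

import torch
import intel_extension_for_pytorch as ipex

hidden_states = torch.randn(1, 32, 4096)
weight = torch.ones(4096)  # RMSNorm scale parameter
result = ipex.llm.functional.rms_norm(hidden_states, weight, eps=1e-6)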

ipex.llm.functional.fast_layer_norm(hidden_states: Tensor, normalized_shape: Tuple[int, ...], weight: Tensor, bias: Tensor, eps: float)

Applies PyTorch Layernorm (see https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html) on the input (hidden states).

Parameters:
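A minimal usage sketch (shapes are hypothetical):

import torch
import intel_extension_for_pytorch as ipex

hidden_states = torch.randn(1, 32, 4096)
layernorm = torch.nn.LayerNorm(4096)
result = ipex.llm.functional.fast_layer_norm(hidden_states, (4096,), layernorm.weight, layernorm.bias, eps=1e-5)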

ipex.llm.functional.indirect_access_kv_cache_attention(query: Tensor, key: Tensor, value: Tensor, scale_attn: float, layer_past: Tuple[Tensor] | None = None, head_mask: Tuple[Tensor] | None = None, attention_mask: Tuple[Tensor] | None = None, alibi: Tensor | None = None, add_casual_mask: bool | None = True, seq_info: Tensor | None = None, text_max_length: int | None = 0)

kv_cache is used to reduce computation for the Decoder layer, but it also brings memory overheads. For example, when using beam search, the kv_cache should be reordered according to the latest beam idx, and the current key/value should also be concatenated with kv_cache in the attention layer to get the entire context to do the scaled dot product. When the sequence is very long, this memory overhead becomes the performance bottleneck. This module provides an Indirect Access KV_cache (IAKV). First, IAKV pre-allocates buffers (key and value use different buffers) to store all key/value hidden states and beam index information. It can use the beam index history to decide which beam should be used at a timestamp, and this information generates an offset to access the kv_cache buffer.

Data Format:

The shape of the pre-allocated key (value) buffer is [max_seq, beam*batch, head_num, head_size]; the hidden state of key/value, with shape [beam*batch, head_num, head_size], is stored token by token. The beam idx information for every timestamp is also stored in a Tensor with the shape [max_seq, beam*batch].

Parameters:

Returns:

Weighted value, which is the output of the scaled dot product, with shape (beam*batch, seq_len, head_num, head_size).

attn_weights: the output tensor of the first matmul in the scaled dot product, which is not supported by the kernel for now.

new_layer_past: updated layer_past (seq_info, key_cache, value_cache, beam-idx).

Return type:

attn_output

Notes

How to reorder the KV cache when using the format of IndirectAccessKVCacheAttention (e.g., for the llama model, see https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1318):

def _reorder_cache(
    self, past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
) -> Tuple[Tuple[torch.Tensor]]:
    if len(past_key_values[0]) == 4 and past_key_values[0][0].shape[-1] == 1:
        for layer_past in past_key_values:
            layer_past[3][layer_past[0].size(-2) - 1] = beam_idx
    return past_key_values

ipex.llm.functional.varlen_attention(query: Tensor, key: Tensor, value: Tensor, out: Tensor, seqlen_q: Tensor, seqlen_k: Tensor, max_seqlen_q: int, max_seqlen_k: int, pdropout: float, softmax_scale: float, zero_tensors: bool, is_causal: bool, return_softmax: bool, gen_: Generator)

Applies PyTorch scaled_dot_product_attention on the inputs of query, key and value (see https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html), and accepts variant (different) sequence lengths among the query, key and value.

This module does not have args for module init.

forward()

Parameters:
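A minimal usage sketch mirroring the VarlenAttention module example above (the sequence-length values are hypothetical, and passing a fresh torch.Generator for gen_ is an assumption):

import torch
import intel_extension_for_pytorch as ipex

query = torch.randn(32, 16, 256)
key = torch.randn(32, 16, 256)
value = torch.randn(32, 16, 256)
out = torch.empty_like(query)
seqlen_q = torch.tensor(1)
seqlen_k = torch.tensor(1)
gen = torch.Generator()  # assumed acceptable for the gen_ argument
ipex.llm.functional.varlen_attention(query, key, value, out,
                                     seqlen_q, seqlen_k,
                                     max_seqlen_q=1, max_seqlen_k=1,
                                     pdropout=0.0, softmax_scale=0.5,
                                     zero_tensors=False, is_causal=True,
                                     return_softmax=False, gen_=gen)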

Fast Bert (Prototype)

ipex.fast_bert(model, dtype=torch.float32, optimizer=None, unpad=False)

Use TPP to speed up training/inference. The fast_bert API is still a prototype feature and is currently only optimized for the BERT model.

Parameters:

Note

Currently the ipex.fast_bert API is well optimized for training tasks. It works for inference tasks too, though for peak performance please use the ipex.optimize API with TorchScript.

Warning

Please invoke the fast_bert function AFTER loading weights to the model via model.load_state_dict(torch.load(PATH)).

Warning

This API can't be used when you have already applied ipex.optimize.

Warning

Please invoke the optimize function BEFORE invoking DDP in distributed training scenarios.

Examples

bfloat16 inference case.

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.fast_bert(model, dtype=torch.bfloat16)

running evaluation step.

bfloat16 training case.

optimizer = ...
model.train()
optimized_model, optimized_optimizer = ipex.fast_bert(model, dtype=torch.bfloat16, optimizer=optimizer, unpad=True, seed=args.seed)

running training step.

Graph Optimization

ipex.enable_onednn_fusion(enabled)

Enables or disables oneDNN fusion functionality. If enabled, oneDNN operators will be fused at runtime when intel_extension_for_pytorch is imported.

Parameters:

enabled (bool) – Whether to enable oneDNN fusion functionality or not. Default value is True.

Examples

import intel_extension_for_pytorch as ipex
# to enable the oneDNN fusion
ipex.enable_onednn_fusion(True)
# to disable the oneDNN fusion
ipex.enable_onednn_fusion(False)

Quantization

ipex.quantization.get_weight_only_quant_qconfig_mapping(*, weight_dtype: int = WoqWeightDtype.INT8, lowp_mode: int = WoqLowpMode.NONE, act_quant_mode: int = WoqActQuantMode.PER_BATCH_IC_BLOCK_SYM, group_size: int = -1, weight_qscheme: int = WoqWeightQScheme.UNDEFINED)

Configuration for weight-only quantization (WOQ) for LLM.

Parameters:

If group_size > 0: act_quant_mode can be any. If act_quant_mode is PER_IC_BLOCK(_SYM) or PER_BATCH_IC_BLOCK(_SYM), weight is grouped along IC by group_size. The IC_BLOCK for activation is determined by group_size automatically. Each group has its own quantization parameters.
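A minimal sketch combining this qconfig mapping with ipex.llm.optimize (the dtype and mode choices are just examples; the Woq* enum classes are assumed to be exposed under ipex.quantization as referenced in the signature above):

import torch
import intel_extension_for_pytorch as ipex

qconfig = ipex.quantization.get_weight_only_quant_qconfig_mapping(
    weight_dtype=ipex.quantization.WoqWeightDtype.INT8,
    lowp_mode=ipex.quantization.WoqLowpMode.BF16,
    group_size=128,
)
model = ...
model.eval()
optimized_model = ipex.llm.optimize(model, dtype=torch.bfloat16,
                                    quantization_config=qconfig)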

ipex.quantization.prepare(model, configure, example_inputs=None, inplace=False, bn_folding=True, example_kwarg_inputs=None)

Prepare an FP32 torch.nn.Module model to do calibration or to convert to a quantized model.

Parameters:

Returns:

torch.nn.Module

ipex.quantization.convert(model, inplace=False)

Convert an FP32 prepared model to a model which will automatically insert fake quant before a quantizable module or operator.

Parameters:

Returns:

torch.nn.Module
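A static quantization sketch tying prepare and convert together (the default_static_qconfig_mapping name, the calibration loader, and the input shape are assumptions):

import torch
import intel_extension_for_pytorch as ipex

model = ...
model.eval()
example_inputs = torch.randn(1, 3, 224, 224)  # hypothetical input

# Assumed default static qconfig mapping exposed by ipex.quantization.
qconfig_mapping = ipex.quantization.default_static_qconfig_mapping
prepared_model = ipex.quantization.prepare(model, qconfig_mapping,
                                           example_inputs=example_inputs)

# Calibration over representative data (calib_dataloader is hypothetical).
with torch.no_grad():
    for data in calib_dataloader:
        prepared_model(data)

converted_model = ipex.quantization.convert(prepared_model)
traced_model = torch.jit.trace(converted_model, example_inputs)
traced_model = torch.jit.freeze(traced_model)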

Prototype API, introduction is available at the feature page.

ipex.quantization.autotune(model, calib_dataloader, calib_func=None, eval_func=None, op_type_dict=None, sampling_sizes=None, accuracy_criterion=None, tuning_time=0)

Automatic accuracy-driven tuning helps users quickly find out the advanced recipe for INT8 inference.

Parameters:

Returns:

The prepared model loaded with qconfig after tuning.

Return type:

prepared_model (torch.nn.Module)

CPU Runtime

ipex.cpu.runtime.is_runtime_ext_enabled()

Helper function to check whether runtime extension is enabled or not.

Parameters:

None (None) – None

Returns:

Whether the runtime extension is enabled or not. If the Intel OpenMP Library is preloaded, this API will return True. Otherwise, it will return False.

Return type:

bool

class ipex.cpu.runtime.CPUPool(core_ids: list | None = None, node_id: int | None = None)

An abstraction of a pool of CPU cores used for intra-op parallelism.

Parameters:

Returns:

Generated ipex.cpu.runtime.CPUPool object.

Return type:

ipex.cpu.runtime.CPUPool

class ipex.cpu.runtime.pin(cpu_pool: CPUPool)

Apply the given CPU pool to the master thread that runs the scoped code region or the decorated function/method.

Parameters:

cpu_pool (ipex.cpu.runtime.CPUPool) – ipex.cpu.runtime.CPUPool object, contains all CPU cores used by the designated operations.

Returns:

Generated ipex.cpu.runtime.pin object which can be used as a with context or a function decorator.

Return type:

ipex.cpu.runtime.pin
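A minimal sketch of pinning a scoped inference region to one NUMA node's cores (the node id and input shape are hypothetical):

import torch
import intel_extension_for_pytorch as ipex

model = ...
model.eval()
x = torch.randn(1, 3, 224, 224)  # hypothetical input

cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
with ipex.cpu.runtime.pin(cpu_pool):
    y = model(x)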

class ipex.cpu.runtime.MultiStreamModuleHint(*args, **kwargs)

MultiStreamModuleHint is a hint to MultiStreamModule about how to split the inputs or concat the output. Each argument should be None, of type int, or a container which contains int or None, such as: (0, None, ...) or [0, None, ...]. If the argument is None, this argument will not be split or concatenated. If the argument is of type int, its value means along which dim this argument will be split or concatenated.

Parameters:

Returns:

Generated ipex.cpu.runtime.MultiStreamModuleHint object.

Return type:

ipex.cpu.runtime.MultiStreamModuleHint

class ipex.cpu.runtime.MultiStreamModule(model, num_streams: int | str = 'AUTO', cpu_pool: ~ipex.cpu.runtime.cpupool.CPUPool = <ipex.cpu.runtime.cpupool.CPUPool object>, concat_output: bool = True, input_split_hint: ~ipex.cpu.runtime.multi_stream.MultiStreamModuleHint = <ipex.cpu.runtime.multi_stream.MultiStreamModuleHint object>, output_concat_hint: ~ipex.cpu.runtime.multi_stream.MultiStreamModuleHint = <ipex.cpu.runtime.multi_stream.MultiStreamModuleHint object>)

MultiStreamModule supports inference with multi-stream throughput mode.

If the number of cores inside cpu_pool is divisible by num_streams, the cores will be allocated equally to each stream. If the number of cores inside cpu_pool is not divisible by num_streams with remainder N, one extra core will be allocated to the first N streams. We suggest setting num_streams to a divisor of the number of cores inside cpu_pool.

If the inputs' batch size is larger than and divisible by num_streams, the batch size will be allocated equally to each stream. If the batch size is not divisible by num_streams with remainder N, one extra piece will be allocated to the first N streams. If the inputs' batch size is less than num_streams, only the first batch-size streams are used, each with a mini batch of one. We suggest setting the inputs' batch size larger than and divisible by num_streams. If you don't want to tune the number of streams and leave it as "AUTO", we suggest setting the inputs' batch size larger than and divisible by the number of cores.

Parameters:

Returns:

Generated ipex.cpu.runtime.MultiStreamModule object.

Return type:

ipex.cpu.runtime.MultiStreamModule
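A minimal sketch (the stream count, node id and batch size are hypothetical; the model is assumed to already be optimized or traced):

import torch
import intel_extension_for_pytorch as ipex

model = ...
model.eval()
x = torch.randn(16, 3, 224, 224)  # batch is split across the streams

cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
multi_stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2,
                                                        cpu_pool=cpu_pool)
y = multi_stream_model(x)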

class ipex.cpu.runtime.Task(module, cpu_pool: CPUPool)

An abstraction of a computation based on a PyTorch module, which is scheduled asynchronously.

Parameters:

Returns:

Generated ipex.cpu.runtime.Task object.

Return type:

ipex.cpu.runtime.Task
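A minimal sketch of asynchronous execution (the node id and input shape are hypothetical; collecting the result via .get() on the returned future is an assumption based on the runtime extension's future-style API):

import torch
import intel_extension_for_pytorch as ipex

model = ...
model.eval()
x = torch.randn(1, 3, 224, 224)  # hypothetical input

cpu_pool = ipex.cpu.runtime.CPUPool(node_id=1)
task = ipex.cpu.runtime.Task(model, cpu_pool)
future = task(x)   # submit asynchronously to the task's CPU pool
y = future.get()   # wait for and fetch the result (assumed API)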

ipex.cpu.runtime.get_core_list_of_node_id(node_id)

Helper function to get the CPU cores’ ids of the input numa node.

Parameters:

node_id (int) – Input numa node id.

Returns:

List of CPU cores’ ids on this numa node.

Return type:

list