Linear — PyTorch 2.7 documentation
class torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)
Applies an affine linear transformation to the incoming data: y = xA^T + b.
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will use different precision for backward.
Parameters
- in_features (int) – size of each input sample
- out_features (int) – size of each output sample
- bias (bool) – If set to False, the layer will not learn an additive bias. Default: True
Shape:
- Input: (*, H_in), where * means any number of dimensions, including none, and H_in = in_features.
- Output: (*, H_out), where all but the last dimension have the same shape as the input and H_out = out_features.
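To make the shape rule above concrete, here is a small sketch (the layer sizes are arbitrary, chosen for illustration): Linear acts only on the last dimension, so any leading dimensions pass through unchanged.

```python
import torch
import torch.nn as nn

# Linear transforms the last dimension; leading dimensions
# (batch, sequence, ...) are preserved as-is.
layer = nn.Linear(8, 4)

x2d = torch.randn(32, 8)       # (N, H_in)
x3d = torch.randn(16, 10, 8)   # (N, L, H_in)

print(layer(x2d).shape)        # torch.Size([32, 4])
print(layer(x3d).shape)        # torch.Size([16, 10, 4])
```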
Variables
- weight (torch.Tensor) – the learnable weights of the module, of shape (out_features, in_features). The values are initialized from U(-sqrt(k), sqrt(k)), where k = 1 / in_features.
- bias – the learnable bias of the module, of shape (out_features). If bias is True, the values are initialized from U(-sqrt(k), sqrt(k)), where k = 1 / in_features.
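The initialization bounds above can be checked directly on a freshly constructed layer (a quick sketch; the sizes 100 and 50 are arbitrary):

```python
import math
import torch.nn as nn

m = nn.Linear(in_features=100, out_features=50)

# weight has shape (out_features, in_features); weight and bias
# are both drawn from U(-sqrt(k), sqrt(k)) with k = 1 / in_features.
bound = math.sqrt(1 / 100)
print(m.weight.shape)  # torch.Size([50, 100])
print(m.bias.shape)    # torch.Size([50])
print(m.weight.min().item() >= -bound and m.weight.max().item() <= bound)  # True
```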
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
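The example above can also be written out against the formula y = xA^T + b, where A is the weight matrix of shape (out_features, in_features) and b is the bias vector. This sketch verifies that the module's forward pass matches the manual computation:

```python
import torch
import torch.nn as nn

m = nn.Linear(20, 30)
x = torch.randn(128, 20)

# m.weight has shape (30, 20), so x @ m.weight.T has shape (128, 30);
# adding m.bias (shape (30,)) broadcasts over the batch dimension.
manual = x @ m.weight.T + m.bias

print(torch.allclose(m(x), manual, atol=1e-6))  # True
```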