LFWPeople — Torchvision 0.22 documentation
class torchvision.datasets.LFWPeople(root: str, split: str = '10fold', image_set: str = 'funneled', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False, loader: Callable[[str], Any] = <function default_loader>)[source]
LFW (Labeled Faces in the Wild) Dataset.
Parameters:
- root (str or pathlib.Path) – Root directory of dataset where directory `lfw-py` exists or will be saved to if download is set to True.
- split (string, optional) – The image split to use. Can be one of `train`, `test` or `10fold` (default).
- image_set (str, optional) – Type of image funneling to use: `original`, `funneled` or `deepfunneled`. Defaults to `funneled`.
- transform (callable, optional) – A function/transform that takes in a PIL image or torch.Tensor, depending on the given loader, and returns a transformed version. E.g., `transforms.RandomCrop`.
- target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
- download (bool, optional) – If true, downloads the dataset from the internet and puts it in the root directory. If the dataset is already downloaded, it is not downloaded again.
- loader (callable, optional) – A function to load an image given its path. By default, it uses PIL as its image loader, but users could also pass in `torchvision.io.decode_image` for decoding image data into tensors directly, as sketched in the example below.
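A minimal usage sketch (the `./data` root, `ToTensor` transform, and batch size are illustrative choices, not part of the API):

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import LFWPeople

# Download the funneled 10-fold split and convert each PIL image to a tensor.
dataset = LFWPeople(
    root="./data",
    split="10fold",
    image_set="funneled",
    transform=transforms.ToTensor(),
    download=True,
)

# Alternatively, decode images straight to tensors and skip PIL:
# from torchvision.io import decode_image
# dataset = LFWPeople(root="./data", loader=decode_image, download=True)

# Wrap in a DataLoader for batched iteration over (image, identity) pairs.
loader = DataLoader(dataset, batch_size=32, shuffle=True)
images, targets = next(iter(loader))
print(images.shape, targets.shape)  # LFW images are 250x250, so e.g. [32, 3, 250, 250] and [32]
```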
Special-members:
__getitem__(index: int) → Tuple[Any, Any][source]
Parameters:
index (int) – Index
Returns:
Tuple (image, target) where target is the identity of the person.
Return type:
tuple
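For illustration, indexing the dataset calls `__getitem__` and returns the (image, target) pair described above (reusing the `dataset` object from the earlier sketch):

```python
# Fetch one sample; with the default PIL loader the image is a PIL.Image.Image
# and the target is the integer label for the person's identity.
image, target = dataset[0]
print(type(image), target)

# __len__ gives the number of images in the chosen split.
print(len(dataset))
```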