EfficientNet (original)

PyTorch

Overview

The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.

The abstract from the paper is the following:

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.
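
The compound scaling rule is the paper's central idea: instead of growing depth, width, or input resolution independently, all three are scaled together by a single coefficient phi. Below is a minimal illustrative sketch of that rule, separate from the transformers API; the constants alpha=1.2, beta=1.1, gamma=1.15 are the grid-search values reported in the paper.

# Illustrative sketch of compound scaling (not part of the transformers API).
# Depth, width, and resolution multipliers grow together with one coefficient phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-search constants from the paper

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for scaling coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")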

This model was contributed by adirik. The original code can be found here.

EfficientNetConfig

class transformers.EfficientNetConfig

( num_channels: int = 3
  image_size: int = 600
  width_coefficient: float = 2.0
  depth_coefficient: float = 3.1
  depth_divisor: int = 8
  kernel_sizes: typing.List[int] = [3, 3, 5, 3, 5, 5, 3]
  in_channels: typing.List[int] = [32, 16, 24, 40, 80, 112, 192]
  out_channels: typing.List[int] = [16, 24, 40, 80, 112, 192, 320]
  depthwise_padding: typing.List[int] = []
  strides: typing.List[int] = [1, 2, 2, 2, 1, 2, 1]
  num_block_repeats: typing.List[int] = [1, 2, 2, 3, 3, 4, 1]
  expand_ratios: typing.List[int] = [1, 6, 6, 6, 6, 6, 6]
  squeeze_expansion_ratio: float = 0.25
  hidden_act: str = 'swish'
  hidden_dim: int = 2560
  pooling_type: str = 'mean'
  initializer_range: float = 0.02
  batch_norm_eps: float = 0.001
  batch_norm_momentum: float = 0.99
  dropout_rate: float = 0.5
  drop_connect_rate: float = 0.2
  **kwargs )

This is the configuration class to store the configuration of an EfficientNetModel. It is used to instantiate an EfficientNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the EfficientNet google/efficientnet-b7 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

from transformers import EfficientNetConfig, EfficientNetModel

# Initializing an EfficientNet efficientnet-b7 style configuration
configuration = EfficientNetConfig()

# Initializing a model (with random weights) from the efficientnet-b7 style configuration
model = EfficientNetModel(configuration)

# Accessing the model configuration
configuration = model.config
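
Because every architectural dimension is exposed on the config, smaller variants can be built by overriding the defaults. A minimal sketch in the spirit of the smaller b0 variant; the coefficient values below are illustrative assumptions, not settings read from an official checkpoint:

from transformers import EfficientNetConfig, EfficientNetModel

# Hypothetical smaller variant: the values below are illustrative assumptions.
small_config = EfficientNetConfig(
    image_size=224,
    width_coefficient=1.0,
    depth_coefficient=1.0,
    hidden_dim=1280,
    dropout_rate=0.2,
)
small_model = EfficientNetModel(small_config)  # randomly initialized weights
print(sum(p.numel() for p in small_model.parameters()))  # rough parameter count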

EfficientNetImageProcessor

class transformers.EfficientNetImageProcessor

( do_resize: bool = True
  size: typing.Optional[typing.Dict[str, int]] = None
  resample: Resampling = 0
  do_center_crop: bool = False
  crop_size: typing.Optional[typing.Dict[str, int]] = None
  rescale_factor: typing.Union[int, float] = 0.00392156862745098
  rescale_offset: bool = False
  do_rescale: bool = True
  do_normalize: bool = True
  image_mean: typing.Union[float, typing.List[float], NoneType] = None
  image_std: typing.Union[float, typing.List[float], NoneType] = None
  include_top: bool = True
  **kwargs )

Constructs an EfficientNet image processor.

preprocess

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]
  do_resize: typing.Optional[bool] = None
  size: typing.Optional[typing.Dict[str, int]] = None
  resample = None
  do_center_crop: typing.Optional[bool] = None
  crop_size: typing.Optional[typing.Dict[str, int]] = None
  do_rescale: typing.Optional[bool] = None
  rescale_factor: typing.Optional[float] = None
  rescale_offset: typing.Optional[bool] = None
  do_normalize: typing.Optional[bool] = None
  image_mean: typing.Union[float, typing.List[float], NoneType] = None
  image_std: typing.Union[float, typing.List[float], NoneType] = None
  include_top: typing.Optional[bool] = None
  return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
  data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
  input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None )

Preprocess an image or batch of images.
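
A minimal usage sketch, assuming the processor config published with the google/efficientnet-b7 checkpoint; the random image below is a stand-in for a real photo:

import numpy as np
from PIL import Image
from transformers import EfficientNetImageProcessor

# Load the processor settings shipped with a checkpoint, then preprocess one image.
image_processor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7")
image = Image.fromarray(np.random.randint(0, 256, (600, 600, 3), dtype=np.uint8))  # stand-in image
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (batch, channels, height, width)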

EfficientNetImageProcessorFast

class transformers.EfficientNetImageProcessorFast

( **kwargs: typing_extensions.Unpack[transformers.models.efficientnet.image_processing_efficientnet_fast.EfficientNetFastImageProcessorKwargs] )

Constructs a fast EfficientNet image processor.

preprocess

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]
  **kwargs: typing_extensions.Unpack[transformers.models.efficientnet.image_processing_efficientnet_fast.EfficientNetFastImageProcessorKwargs] ) → <class 'transformers.image_processing_base.BatchFeature'>

Returns

<class 'transformers.image_processing_base.BatchFeature'>
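
The fast processor mirrors the slow processor's call pattern but runs its transforms on torch tensors (it assumes torchvision is installed) and returns a BatchFeature. A minimal sketch, reusing the google/efficientnet-b7 processor config:

import numpy as np
from PIL import Image
from transformers import EfficientNetImageProcessorFast

# Same call pattern as the slow processor; preprocessing runs on torch tensors.
fast_processor = EfficientNetImageProcessorFast.from_pretrained("google/efficientnet-b7")
image = Image.fromarray(np.random.randint(0, 256, (600, 600, 3), dtype=np.uint8))  # stand-in image
batch = fast_processor(images=image, return_tensors="pt")
print(type(batch).__name__)         # BatchFeature
print(batch["pixel_values"].shape)  # (batch, channels, height, width)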

EfficientNetModel

class transformers.EfficientNetModel

( config: EfficientNetConfig )

The bare EfficientNet model outputting raw hidden states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: typing.Optional[torch.FloatTensor] = None
  output_hidden_states: typing.Optional[bool] = None
  return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)

Returns

transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientNetConfig) and inputs.

The EfficientNetModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
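
A minimal forward-pass sketch, assuming the google/efficientnet-b7 checkpoint; the random tensor stands in for pixel values normally produced by the image processor:

import torch
from transformers import EfficientNetModel

model = EfficientNetModel.from_pretrained("google/efficientnet-b7")

# Stand-in batch; in practice this comes from an EfficientNet image processor.
pixel_values = torch.rand(1, 3, 600, 600)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

print(outputs.last_hidden_state.shape)  # (batch, hidden_dim, height, width)
print(outputs.pooler_output.shape)      # (batch, hidden_dim)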

EfficientNetForImageClassification

class transformers.EfficientNetForImageClassification

( config )

EfficientNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: typing.Optional[torch.FloatTensor] = None
  labels: typing.Optional[torch.LongTensor] = None
  output_hidden_states: typing.Optional[bool] = None
  return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)

Returns

transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)

A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientNetConfig) and inputs.

The EfficientNetForImageClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

from transformers import AutoImageProcessor, EfficientNetForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image", trust_remote_code=True)
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
