open-mmlab/mmengine: OpenMMLab Foundational Library for Training Deep Learning Models

What's New

v0.10.6 was released on 2025-01-13.

Read Changelog for more details.

Introduction

MMEngine is a foundational library for training deep learning models based on PyTorch. It serves as the training engine of all OpenMMLab codebases, which support hundreds of algorithms across various research areas. Moreover, MMEngine is generic enough to be applied to non-OpenMMLab projects. Its highlights are as follows:

Integrate mainstream large-scale model training frameworks

Supports a variety of training strategies

Provides a user-friendly configuration system

Covers mainstream training monitoring platforms

Installation

Supported PyTorch Versions

    MMEngine             PyTorch        Python
    main                 >=1.6, <=2.1   >=3.8, <=3.11
    >=0.9.0, <=0.10.4    >=1.6, <=2.1   >=3.8, <=3.11

Before installing MMEngine, please ensure that PyTorch has been successfully installed following the official guide.

Install MMEngine

    pip install -U openmim
    mim install mmengine

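MMEngine is also published on PyPI, so if you prefer not to use MIM (which mainly handles OpenMMLab package and dependency management), installing it directly with pip should work as well:

    pip install mmengine
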
Verify the installation

    python -c 'from mmengine.utils.dl_utils import collect_env;print(collect_env())'

Get Started

Taking the training of a ResNet-50 model on the CIFAR-10 dataset as an example, we will use MMEngine to build a complete, configurable training and validation process in less than 80 lines of code.

Build Models

First, we need to define a model which 1) inherits from BaseModel and 2) accepts an additional argument mode in the forward method, in addition to those arguments related to the dataset.

    import torch.nn.functional as F
    import torchvision
    from mmengine.model import BaseModel


    class MMResNet50(BaseModel):
        def __init__(self):
            super().__init__()
            self.resnet = torchvision.models.resnet50()

        def forward(self, imgs, labels, mode):
            x = self.resnet(imgs)
            if mode == 'loss':
                return {'loss': F.cross_entropy(x, labels)}
            elif mode == 'predict':
                return x, labels

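As an optional sanity check (not part of the original walkthrough), the mode argument can be exercised directly on random tensors: 'loss' mode returns a dict containing the loss, while 'predict' mode returns the raw class scores together with the labels, which is the pair the metric below consumes.

    import torch

    model = MMResNet50()
    imgs = torch.randn(2, 3, 32, 32)   # two fake CIFAR-10-sized images
    labels = torch.tensor([0, 1])

    # 'loss' mode: returns a dict of losses
    print(model(imgs, labels, mode='loss'))      # {'loss': tensor(...)}

    # 'predict' mode: returns (scores, labels) for the evaluator
    scores, gts = model(imgs, labels, mode='predict')
    print(scores.shape, gts.shape)               # torch.Size([2, 1000]) torch.Size([2])
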
Build Datasets

Next, we need to create Datasets and DataLoaders for training and validation. In this case, we simply use the CIFAR-10 dataset built into TorchVision.

    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader

    norm_cfg = dict(mean=[0.491, 0.482, 0.447], std=[0.202, 0.199, 0.201])
    train_dataloader = DataLoader(
        batch_size=32,
        shuffle=True,
        dataset=torchvision.datasets.CIFAR10(
            'data/cifar10',
            train=True,
            download=True,
            transform=transforms.Compose([
                transforms.RandomCrop(32, padding=4),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                transforms.Normalize(**norm_cfg)
            ])))
    val_dataloader = DataLoader(
        batch_size=32,
        shuffle=False,
        dataset=torchvision.datasets.CIFAR10(
            'data/cifar10',
            train=False,
            download=True,
            transform=transforms.Compose([
                transforms.ToTensor(),
                transforms.Normalize(**norm_cfg)
            ])))

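As another quick check (also optional), pulling a single batch from the validation loader shows the expected shapes: normalized image tensors of shape [32, 3, 32, 32] and a label vector of length 32.

    # Fetch one batch from the validation loader (downloads CIFAR-10 on first run)
    imgs, labels = next(iter(val_dataloader))
    print(imgs.shape, labels.shape)  # torch.Size([32, 3, 32, 32]) torch.Size([32])
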
Build Metrics

To validate and test the model, we need to define an accuracy Metric. A metric must inherit from BaseMetric and implement the process and compute_metrics methods.

    from mmengine.evaluator import BaseMetric


    class Accuracy(BaseMetric):
        def process(self, data_batch, data_samples):
            score, gt = data_samples
            # Save the results of a batch to `self.results`
            self.results.append({
                'batch_size': len(gt),
                'correct': (score.argmax(dim=1) == gt).sum().cpu(),
            })

        def compute_metrics(self, results):
            total_correct = sum(item['correct'] for item in results)
            total_size = sum(item['batch_size'] for item in results)
            # Returns a dictionary with the results of the evaluated metrics,
            # where the key is the name of the metric
            return dict(accuracy=100 * total_correct / total_size)

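In normal use, process and compute_metrics are called by the Runner's evaluation loop, but the flow can be illustrated by invoking them by hand on a fabricated batch. The None passed as data_batch is just a placeholder here, since this Accuracy implementation only looks at data_samples.

    import torch

    metric = Accuracy()
    scores = torch.tensor([[0.1, 0.9],
                           [0.8, 0.2]])  # predicted scores for 2 samples
    gts = torch.tensor([1, 1])           # ground-truth labels

    metric.process(data_batch=None, data_samples=(scores, gts))
    # Only the second prediction is wrong, so accuracy is 50%
    print(metric.compute_metrics(metric.results))  # {'accuracy': tensor(50.)}
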
Build a Runner

Finally, we can construct a Runner from the previously defined model, dataloaders, and metric, together with a few other configs, as shown below.

    from torch.optim import SGD
    from mmengine.runner import Runner

    runner = Runner(
        model=MMResNet50(),
        work_dir='./work_dir',
        train_dataloader=train_dataloader,
        # a wrapper to execute back propagation and gradient update, etc.
        optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
        # set some training configs like epochs
        train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
        val_dataloader=val_dataloader,
        val_cfg=dict(),
        val_evaluator=dict(type=Accuracy),
    )

Launch Training

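With the runner assembled, training is launched by calling its train method, which runs the training loop and the periodic validation configured above:

    runner.train()
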
Learn More

Tutorials

Contributing

We appreciate all contributions to improve MMEngine. Please refer to CONTRIBUTING.md for the contributing guideline.

Citation

If you find this project useful in your research, please consider citing it:

    @article{mmengine2022,
      title        = {{MMEngine}: OpenMMLab Foundational Library for Training Deep Learning Models},
      author       = {MMEngine Contributors},
      howpublished = {\url{https://github.com/open-mmlab/mmengine}},
      year         = {2022}
    }

License

This project is released under the Apache 2.0 license.

Ecosystem

Projects in OpenMMLab