MIM: MIM Installs OpenMMLab Packages

MIM provides a unified interface for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.

Major Features

Changelog

v0.1.1 was released on 13/06/2021.

Customization

You can use a .mimrc file for customization. Currently, customizing the default values of each sub-command is supported, as sketched below. Please refer to customization.md for details.
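
As a purely hypothetical illustration (the section name and keys below are assumptions made for this sketch, not the documented syntax; customization.md is authoritative), a .mimrc that overrides a sub-command's defaults might look like:

# HYPOTHETICAL .mimrc sketch, not the documented syntax; see customization.md.
# Assumed layout: one section per sub-command, one key per default to override.
[train]
gpus = 8
launcher = slurm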

Build custom projects with MIM

We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in MIM-Example. Without needing to copy code and scripts from existing codebases, users can focus on developing new components while MIM helps integrate and run the new project.

Installation

Please refer to installation.md for installation.
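
For reference, MIM is distributed on PyPI under the package name openmim, so a typical installation is a single pip command:

# install (or upgrade) MIM from PyPI
pip install -U openmim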

Command

1. install

install latest version of mmcv-full

mim install mmcv-full # wheel

install 1.5.0

mim install mmcv-full==1.5.0

install latest version of mmcls

mim install mmcls

install master branch

mim install git+https://github.com/open-mmlab/mmclassification.git

install local repo

git clone https://github.com/open-mmlab/mmclassification.git
cd mmclassification
mim install .

install extension based on OpenMMLab

mim install git+https://github.com/xxx/mmcls-project.git

install mmcv via MIM's Python interface

from mim import install

install('mmcv-full')

installing mmcls will automatically install mmcv if it is not installed

install('mmcls')

install extension based on OpenMMLab

install('git+https://github.com/xxx/mmcls-project.git')

2. uninstall

uninstall mmcv

mim uninstall mmcv-full

uninstall mmcls

mim uninstall mmcls

uninstall mmcv via the Python interface

from mim import uninstall

uninstall('mmcv-full')

uninstall mmcls

uninstall('mmcls')

3. list
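
A minimal usage sketch, assuming the standard behavior of listing the currently installed OpenMMLab packages with their versions:

# show installed OpenMMLab packages
mim list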

6. train

Train models on a single server with CPU by setting gpus to 0 and 'launcher' to 'none' (if applicable). The training script of the corresponding codebase will fail if it doesn't support CPU training.

mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0

Train models on a single server with one GPU

mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1

Train models on a single server with 4 GPUs and pytorch distributed

mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
--launcher pytorch

Train models on a slurm HPC with one 8-GPU node

mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
--gpus-per-node 8 --partition partition_name --work-dir tmp

Print help messages of sub-command train

mim train -h

Print help messages of sub-command train and the training script of mmcls

mim train mmcls -h

Train via MIM's Python interface

from mim import train

train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
      other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=4,
      launcher='pytorch', other_args=('--work-dir', 'tmp'))
train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=8,
      launcher='slurm', gpus_per_node=8, partition='partition_name',
      other_args=('--work-dir', 'tmp'))

7. test

Test models on a single server with 1 GPU, report accuracy

mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 1 --metrics accuracy

Test models on a single server with 1 GPU, save predictions

mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 1 --out tmp.pkl

Test models on a single server with 4 GPUs and pytorch distributed, report accuracy

mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy

Test models on a slurm HPC with one 8-GPU node, report accuracy

mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
partition_name --gpus-per-node 8 --launcher slurm

Print help messages of sub-command test

mim test -h

Print help messages of sub-command test and the testing script of mmcls

mim test mmcls -h
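
For reference, a sketch of the equivalent Python call; the test function and its keyword arguments are assumed to mirror the train API shown above rather than taken from this section:

from mim import test

# test with 1 GPU and report accuracy (signature assumed to mirror train())
test(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
     checkpoint='tmp/epoch_3.pth', other_args=('--metrics', 'accuracy'))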

8. run

Get the Flops of a model

mim run mmcls get_flops resnet101_b16x8_cifar10.py

Publish a model

mim run mmcls publish_model input.pth output.pth

Train models on a slurm HPC with one GPU

srun -p partition --gres=gpu:1 mim run mmcls train \
resnet101_b16x8_cifar10.py --work-dir tmp

Test models on a slurm HPC with one GPU, report accuracy

srun -p partition --gres=gpu:1 mim run mmcls test \
resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy

Print help messages of sub-command run

mim run -h

Print help messages of sub-command run and list all available scripts in codebase mmcls

mim run mmcls -h

Print help messages of sub-command run and the help message of the training script in mmcls

mim run mmcls train -h
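
A sketch of the equivalent Python call; the run function and its arguments are assumed to mirror the CLI shown above rather than taken from this section:

from mim import run

# run the get_flops script of mmcls (arguments assumed to mirror the CLI)
run(repo='mmcls', command='get_flops',
    other_args=('resnet101_b16x8_cifar10.py',))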

9. gridsearch

Parameter search on a single server with CPU by setting gpus to 0 and 'launcher' to 'none' (if applicable). The training script of the corresponding codebase will fail if it doesn't support CPU training.

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0 \
--search-args '--optimizer.lr 1e-2 1e-3'

Parameter search on a single server with one GPU, search over the learning rate

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.lr 1e-2 1e-3'

Parameter search on a single server with one GPU, search over weight_decay

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.weight_decay 1e-3 1e-4'

Parameter search on a single server with one GPU, search over the learning rate and weight_decay

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
--search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

Parameter search on a slurm HPC with one 8-GPU node, search over the learning rate and weight_decay

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
--partition partition_name --gpus-per-node 8 --launcher slurm \
--search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

Parameter search on a slurm HPC with one 8-GPU node, search over the learning rate and weight_decay, with at most 2 parallel jobs

mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
--partition partition_name --gpus-per-node 8 --launcher slurm \
--max-jobs 2 \
--search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 1e-4'

Print the help message of sub-command gridsearch

mim gridsearch -h

Print the help message of sub-command gridsearch and the help message of the training script of codebase mmcls

mim gridsearch mmcls -h

Parameter search via MIM's Python interface

from mim import gridsearch

gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.lr 1e-2 1e-3',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.weight_decay 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
           search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                       ' 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
           partition='partition_name', gpus_per_node=8, launcher='slurm',
           search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                       ' 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))
gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
           partition='partition_name', gpus_per_node=8, launcher='slurm',
           max_workers=2,
           search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                       ' 1e-3 1e-4',
           other_args=('--work-dir', 'tmp'))

Contributing

We appreciate all contributions to improve MIM. Please refer to CONTRIBUTING.md for the contributing guidelines.

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab