# ViViDex under the MuJoCo Simulator (ICRA 2025)
This is the implementation of ViViDex under the MuJoCo simulator, a novel system for learning dexterous manipulation skills from human videos.
## Installation 👷
```bash
git clone --recursive https://github.com/zerchen/vividex_mujoco.git
conda create -n rl python=3.10
conda activate rl
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt
conda install pytorch-scatter -c pyg
```
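To confirm the environment installed correctly, a minimal sanity check (not part of the original instructions) is to verify that PyTorch, its CUDA build, and pytorch-scatter all import:

```bash
# Optional sanity check: PyTorch version, CUDA build, GPU visibility, torch-scatter import
python -c "import torch, torch_scatter; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```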
## Usage 🚀
```bash
cd tools
```

Train the state-based relocation policy:

```bash
python train.py env.name=seq_name
```

Train the state-based pouring policy:

```bash
python train.py env=dexycb_pour
```

Train the state-based placing-inside policy:

```bash
python train.py env=dexycb_place
```

The available `seq_name` values can be found under `hand_imitation/env/models/assets/objects/trajectories/ycb`; one way to list them is shown below.
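For example, running the following from the repository root lists the valid sequence names (a convenience, using the directory path given above):

```bash
# List available seq_name values for the relocation task
ls hand_imitation/env/models/assets/objects/trajectories/ycb
```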
You can also download trained checkpoints here and check their config files for reference. Please note that you need to change 36 to 42 on lines 197-198 of `hand_imitation/env/models/control.py` when loading my trained checkpoints for the pouring and placing-inside tasks; one way to apply this edit is sketched below. Once the state-based policies are trained, roll out these policies and train the visual policy using `tools/dist_bc_train.py`.
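A convenience sketch for applying that two-line edit, assuming GNU sed (on macOS/BSD, use `sed -i ''` instead of `sed -i`); you can equally make the change by hand in an editor:

```bash
# Replace 36 with 42 on lines 197-198 only, editing the file in place
sed -i '197,198s/36/42/g' hand_imitation/env/models/control.py
```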
## Acknowledgements
Parts of the code are based on DexMV and TCDM. We thank the authors for sharing their excellent work!
## Citation 📝
If you find ViViDex useful for your research, please consider citing our paper:
```bibtex
@inproceedings{chen2025vividex,
  title={{ViViDex}: Learning Vision-based Dexterous Manipulation from Human Videos},
  author={Chen, Zerui and Chen, Shizhe and Arlaud, Etienne and Laptev, Ivan and Schmid, Cordelia},
  booktitle={ICRA},
  year={2025}
}
```