Renrui Zhang's Homepage
Education
- [2017-2021] 🎉 I received my B.E. degree from Peking University, awarded Outstanding Graduate (Top 5%).
- [2020-2021] I worked as a visiting student at the University of Pennsylvania, supervised by Prof. Jianbo Shi.
- [2021-Now] 🎓 I'm pursuing my Ph.D. at MMLab, CUHK, supervised by Prof. Hongsheng Li and Prof. Xiaogang Wang.
- [2021-2024] I worked as a research intern at Shanghai AI Lab, supervised by Dr. Peng Gao.
- [2024-2025] I worked as a research intern on the LLaVA team at ByteDance, Seattle, supervised by Dr. Chunyuan Li.
- [2025-Now] 💪 I joined SEED (Multimodal Interaction & World Model), ByteDance, San Jose.
Biography
📌 My research interests include Large Multimodal Models, Vision-language Learning, Embodied AI, and 3D Vision.
✉️ I'm looking for undergraduate and graduate students for academic collaboration. Discussions are welcome!
News
- [2025-05] 🔥 We release "T2I-R1", introducing R1-style CoT reasoning into the image generation domain.
- [2025-05] One paper accepted by ICML 2025
- [2025-03] 🔥 We release "HybridVLA", the first work unifying Autoregression and Diffusion in VLA models.
- [2025-02] Three papers accepted by CVPR 2025, one Highlight 🎉
- [2025-01] Five papers accepted by ICLR 2025, two Spotlight 🎉
- [2025-01] 🔥 We release "Image Generation with CoT", the first work investigating CoT strategies (e.g., Test-time Scaling, RL, and Reflection) in autoregressive text-to-image generation.
- [2025-01] 🎉 We are thrilled that "Video-MME" was selected as one of the 14 Groundbreaking Studies in 2024.
- [2024-08] 🔥 We release "LLaVA-OneVision", the latest LLaVA model for image, video, and image-text interleaved scenarios with superior performance.
- [2024-07] Four papers accepted by ECCV 2024
- [2024-07] 🔥 We release "LLaVA-NeXT-Interleave" for multi-image instruction tuning and "MAVIS" for multimodal mathematical reasoning.
- [2024-05] Three papers accepted by ICML 2024
- [2024-03] Seven papers accepted by CVPR 2024, two Highlight 🎉
- [2024-03] 🔥 We release "MathVerse", a novel mathematical benchmark with the first CoT evaluation strategy.
- [2024-01] Four papers accepted by ICLR 2024
Selected Projects
(*: Equal Contribution, #: Project Lead)
♠ o1/R1-like Chain-of-Thought (CoT) Reasoning
- 🔥 Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step
Ziyu Guo*, Renrui Zhang#*, Chengzhuo Tong*, Zhizheng Zhao*, Haoquan Zhang, Manyuan Zhang, Peng Gao, Hongsheng Li, Pheng-Ann Heng
CVPR 2025
- 🔥 MME-CoT: Benchmarking CoT in LMMs for Reasoning Quality, Robustness, and Efficiency
Dongzhi Jiang*, Renrui Zhang#*, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, Hongsheng Li
arXiv 2025
- MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine
Renrui Zhang#*, Xinyu Wei*, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, Shanghang Zhang, Peng Gao, Chunyuan Li, Hongsheng Li
ICLR 2025
- MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
Renrui Zhang#*, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li
ECCV 2024
♠ Large Language & Multimodal Models (LLMs & LMMs)
- 🔥 LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-initialized Attention
Renrui Zhang*, Jiaming Han*, Dongyang Liu*, Aojun Zhou, Pan Lu, Yu Qiao, Hongsheng Li, Peng Gao
ICLR 2024
- 🔥 LLaVA-OneVision: Easy Visual Task Transfer
Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, Chunyuan Li
TMLR 2025
- 🔥 LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models
Feng Li*, Renrui Zhang*, Hao Zhang*, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, Chunyuan Li
ICLR 2024 Spotlight 🎉
- Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following
Ziyu Guo*, Renrui Zhang#*, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, Pheng-Ann Heng
arXiv 2024
- Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
Chaoyou Fu#, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, Peixian Chen, Yanwei Li, Shaohui Lin, Sirui Zhao, Ke Li, Tong Xu, Xiawu Zheng, Enhong Chen, Rongrong Ji, Xing Sun
CVPR 2025
- MMSearch: Unveiling the Potential of Large Models as Multi-modal Search Engines
Dongzhi Jiang*, Renrui Zhang#*, Ziyu Guo, Yanmin Wu, Jiayi Lei, Pengshuo Qiu, Pan Lu, Zehui Chen, Guanglu Song, Peng Gao, Yu Liu, Chunyuan Li, Hongsheng Li
ICLR 2025
- SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Dongyang Liu*, Renrui Zhang*, Longtian Qiu*, Siyuan Huang*, Weifeng Lin*, Shitian Zhao, Shijie Geng, Ziyi Lin, Peng Jin, Kaipeng Zhang, Wenqi Shao, Chao Xu, Conghui He, Junjun He, Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng Gao
ICML 2024
- ImageBind-LLM: Multi-modality Instruction Tuning
Jiaming Han*, Renrui Zhang*, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, Xudong Lu, Shuai Ren, Yafei Wen, Xiaoxin Chen, Xiangyu Yue, Hongsheng Li, Yu Qiao
arXiv 2023
♠ Large Vision Models
- 🔥 Personalize Segment Anything Model with One Shot
Renrui Zhang, Zhengkai Jiang, Ziyu Guo, Shilin Yan, Junting Pan, Hao Dong, Peng Gao, Hongsheng Li
ICLR 2024
- 🔥 SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners
Ziyu Guo*, Renrui Zhang#*, Xiangyang Zhu, Chengzhuo Tong, Peng Gao, Chunyuan Li, Pheng-Ann Heng
arXiv 2024
♠ Embodied AI & Robotics
- 🔥 HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model
Jiaming Liu*, Hao Chen*, Pengju An, Zhuoyang Liu, Renrui Zhang#, Chenyang Gu, Xiaoqi Li, Ziyu Guo, Sixiang Chen, Mengzhen Liu, Chengkai Hou, Mengdi Zhao, KC Zhou, Pheng-Ann Heng, Shanghang Zhang
arXiv 2025
- Lift3D Foundation Policy: Lifting 2D Large-scale Pretrained Models for Robust 3D Robotic Manipulation
Yueru Jia*, Jiaming Liu*#, Sixiang Chen*, Chenyang Gu, Zhilue Wang, Longzan Luo, Lily Lee, Pengwei Wang, Zhongyuan Wang, Renrui Zhang#, Shanghang Zhang
CVPR 2025
- RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation
Jiaming Liu*#, Mengzhen Liu*, Zhenyu Wang, Lily Lee, Kaichen Zhou, Pengju An, Senqiao Yang, Renrui Zhang#, Yandong Guo, Shanghang Zhang
NeurIPS 2024
♠ Vision-language Learning
- PointCLIP: Point Cloud Understanding by CLIP
Renrui Zhang*, Ziyu Guo*, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, Hongsheng Li
CVPR 2022
- Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification
Renrui Zhang*, Wei Zhang*, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li
ECCV 2022
- Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
Renrui Zhang*, Xiangfei Hu*, Bohao Li, Siyuan Huang, Hanqiu Deng, Hongsheng Li, Yu Qiao, Peng Gao
CVPR 2023
- PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
Xiangyang Zhu*, Renrui Zhang#*, Bowei He, Ziyu Guo, Ziyao Zeng, Zipeng Qin, Shanghang Zhang, Peng Gao
ICCV 2023
- Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
Xiangyang Zhu*, Renrui Zhang#*, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, Peng Gao
ICCV 2023
- CLIP-Adapter: Better Vision-language Models with Feature Adapters
Peng Gao*, Shijie Geng*, Renrui Zhang*, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, Yu Qiao
IJCV 2024
- Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
Yulu Gan, Xianzheng Ma, Yihang Lou, Yan Bai, Renrui Zhang, Nian Shi, Lin Luo
AAAI 2023 Best Student Paper 🎉
♠ 3D Vision & Autonomous Driving
- MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection
Renrui Zhang, Han Qiu, Tai Wang, Xuanzhuo Xu, Ziyu Guo, Yu Qiao, Peng Gao, Hongsheng Li
ICCV 2023
- Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders
- Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis
Renrui Zhang, Liuhui Wang, Yali Wang, Peng Gao, Hongsheng Li, Jianbo Shi
CVPR 2023
- No Time to Train: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation
Selected Awards
- [2021-06] Outstanding Graduate, Peking University (Top 5%)
- [2020-09] Academic Excellent Scholarship (Ranked 1st/73)
- [2020-09] Merit Student PaceSetter, Peking University (Ranked 1st/73)
- [2019-09] Academic Excellent Scholarship (Ranked 4th/73)
- [2019-09] Merit Student, Peking University (Ranked 4th/73)
- [2016-07] China Youth Technology Innovation Award (the only recipient in the province)
- [2016-10] 1st Prize in Provincial Chinese Physics Olympiad (Ranked 18th in Province)
- [2015-10] 2nd Prize in the 15th Chinese Awarding Program for Future Scientists (Ranked 1st in Province)
- [2013-03] 1st Prize in Provincial China Adolescent Robotics Competition (Ranked 1st in Province)
Hobbies
Soccer ⚽️, Movies 🎬, Singing 🎤, Piano 🎹, Violin 🎻, Snorkeling 🤿, HotToys 🦸♂️, FC Online 🎮, PUBG 🐓