Rui Zhu
About
I am a PhD candidate in the Department of Computer Science and Engineering at UC San Diego, advised by Manmohan Chandraker and collaborating with Ravi Ramamoorthi on recent projects. I am a Powell Fellow and a recipient of the Jacobs School of Engineering Fellowship at UCSD (2018-2021), and a recipient of the Qualcomm Innovation Fellowship (2022-2023).
My current research interest lies in 3D computer vision and computer graphics, with a focus on indoor scene inverse rendering and understanding using geometric and physics-based cues (materials, lighting, etc.). The methods are applicable to AR/VR applications, including photorealistic and physically accurate scene editing and generation. My current work is on accurate indoor lighting estimation with diffusion-based methods. [VIEW MY CV] (updated in Jun. 2024).
I received an M.S. in Robotics from the Robotics Institute, School of Computer Science, Carnegie Mellon University, advised by Simon Lucey, and previously a B.E. in Information Engineering from Southeast University, China. I have worked at/with Argo AI, Baidu Research USA, Adobe Research, Microsoft, NVIDIA, Qualcomm AI Research, and Rembrand through research internships and collaborations.
I photograph occasionally. FYI, my Chinese name is 朱 (Zhu) 锐 (Rui), IPA: /ʈʂu1 ʐweɪ4/.
News
I am on the job market in 2024 for Research Scientist/Engineer positions. Please contact me if you are interested.
Selected Publications
FIPT: Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation
Factorized Inverse Path Tracing (FIPT) reduces ambiguity and Monte Carlo variance in inverse rendering of indoor scenes, yielding efficient, high-quality BRDF and emission estimates along with appealing relighting and object insertion results.
Liwen Wu*, Rui Zhu* (* equal contribution), Mustafa B. Yaldiz, Yinhao Zhu, Hong Cai, Janarbek Matai, Fatih Porikli, Tzu-Mao Li, Manmohan Chandraker, Ravi Ramamoorthi
ICCV 2023 [Oral]
PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes
A framework that takes input image(s) of a scene along with approximately aligned CAD geometry, and builds a photorealistic digital twin with high-quality materials and similar lighting.
Yu-Ying Yeh, Zhengqin Li, Yannick Hold-Geoffroy, Rui Zhu, Zexiang Xu, Miloš Hašan, Kalyan Sunkavalli, Manmohan Chandraker
CVPR 2022
OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets
A novel framework for creating large-scale photorealistic datasets of indoor scenes with ground-truth geometry, material, lighting, and semantics, along with an open dataset and the dataset creation tools.
Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, Yuhan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Milos Hasan, Ravi Ramamoorthi, Manmohan Chandraker
CVPR 2021 [Oral]
Single View Metrology in the Wild
We present a novel approach to single view metrology that recovers the absolute 3D heights of objects and camera parameters, namely orientation, field of view, and the scale of the scene, from just a monocular image acquired in unconstrained conditions.
Rui Zhu, Xingyi Yang, Yannick Hold-Geoffroy, Federico Perazzi, Jonathan Eisenmann, Kalyan Sunkavalli, Manmohan Chandraker
ECCV 2020
ApolloCar3D: A Large 3D Car Instance Understanding Benchmark for Autonomous Driving
ApolloCar3D, collected by Baidu, is the first large-scale database suitable for 3D car instance understanding. The dataset contains 5,277 driving images and over 60K car instances, where each car is fitted with an industry-grade 3D CAD model with absolute model size and semantically labelled keypoints.
Xibin Song, Peng Wang, Dingfu Zhou, Rui Zhu, Yuchao Dai, Hao Su, Hongdong Li, Ruigang Yang
CVPR 2019
Learning Depth from Monocular Videos using Direct Methods
For learning a single-image depth predictor from monocular sequences, we show that the depth CNN can be trained without a pose CNN by incorporating a differentiable implementation of Direct Visual Odometry (DVO), along with a novel depth normalization strategy.
Chaoyang Wang, Jose Miguel Buenaposada, Rui Zhu, Simon Lucey
CVPR 2018
Object-Centric Photometric Bundle Adjustment with Deep Shape Prior
We introduce a learned shape prior, in the form of deep shape generators, into Photometric Bundle Adjustment (PBA) and propose to accommodate the full 3D shape generated by the shape prior within the optimization-based inference framework, demonstrating strong results.
Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey
WACV 2018
Semantic Photometric Bundle Adjustment on Natural Sequences
To our knowledge, we provide the first approach for semantic object-centric PBA on natural sequences, which recovers the global 6-DoF camera pose of each frame and the dense 3D shape, with PBA-like accuracy but denser depth maps.
Rui Zhu, Chaoyang Wang, Ziyan Wang, Chen-Hsuan Lin, Simon Lucey
arXiv preprint, 2018
Contact
- rzhu@eng.ucsd.edu
- (412) 932-7733
- jerrypiglet
- @jerrypigletRui
- Department of Computer Science and Engineering, UC San Diego, La Jolla, USA