Generalizable Policy Learning in the Physical World
Introduction
Generalization is particularly important when learning policies to interact with the physical world. The spectrum of such policies is broad: they can be high-level, such as action plans that concern temporal dependencies and causal relations among environment states, or low-level, such as manipulation skills that transform objects that are rigid, articulated, soft, or even fluid. In the physical world, an embodied agent faces many changing factors, such as physical parameters, action spaces, tasks, visual appearances of scenes, and the geometry and topology of objects. Many important real-world tasks, e.g., visual navigation, object manipulation, and autonomous driving, require generalizable policy learning. Therefore, learning generalizable policies is crucial to developing intelligent embodied agents in the real world.
Learning generalizable policies in the physical world requires deep synergistic efforts across the fields of vision, learning, and robotics, and poses many interesting research problems. This workshop is designed to foster progress in generalizable policy learning, with a particular focus on tasks in the physical world, such as visual navigation, object manipulation, and autonomous driving, because these real-world tasks require complex reasoning about visual appearance, geometry, and physics. Technically, we aim to stimulate progress in new directions such as:
- Emergence of physical concepts driven by interaction
- Physical concept-based policy making
- 3D learning for policy learning
- Differentiable physics coupled with policy learning
- ...
Our main target participants are researchers interested in applying learning methods to develop intelligent embodied agents in the physical world. More specifically, target communities include, but are not limited to: robotics, reinforcement learning, learning from demonstrations, offline reinforcement learning, meta-learning, multi-task learning, 3D vision, computer vision, computer graphics, and physical simulation.
In affiliation with this workshop, we are also organizing the ManiSkill Challenge, which focuses on learning to manipulate unseen objects in simulation from 3D visual inputs. We will announce the winners and host winner presentations at this workshop.
Call for Papers
We invite submissions to the Generalizable Policy Learning in the Physical World workshop, hosted at ICLR 2022.
Paper topics
A non-exhaustive list of relevant topics:
- Learning methods for embodied AI tasks (e.g., manipulation, navigation)
- Real-world or simulated benchmarks for generalizable policy learning
- Learning representations for generalization in physical world tasks
- Large-scale reinforcement/imitation learning
- Multi-task learning
- Data augmentation techniques for generalizable policy learning
- Few-shot imitation learning
- Affordance prediction
- Other topics about generalizable policy learning in the physical world
Submission Guidelines
- Submission Portal: OpenReview
- Paper Length: Submissions may be either 4-page short papers or 8-page long papers, excluding references, acknowledgements, and appendices.
- Format:
- You must format your submission using the updated ICLR 2022 LaTeX style file.
- Please include the references and supplementary materials in the same PDF as the main paper.
- The maximum file size for submissions is 100MB. Submissions that violate the ICLR style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.
- Dual Submission:
- Papers that are under submission to, or in preparation for submission to, other major venues in the field (including ICML 2022) are allowed.
- We also welcome previously published works, but their prior publication must be explicitly stated at the time of submission.
- Non-archival: The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.
- Visibility: Submissions and reviews will not be public. Only accepted papers will be made public.
- We encourage participants of the affiliated challenge to submit technical reports summarizing their solutions.
- Contact: iclr2022gpl@gmail.com
Review and Selection
- The review process will be double-blind. As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission and you should avoid providing any other identifying information (even in the supplementary material).
- Each submission will be reviewed for originality, significance, clarity, soundness, relevance, and technical content.
- Accepted submissions will be presented in the form of posters or contributed talks.
- At least one co-author of each accepted paper is expected to register for ICLR 2022 and attend the poster session.
- All accepted submissions will be made available on our workshop website, though authors may explicitly opt out.
Timeline (all deadlines at 11:59 PM Pacific Standard Time)
Date | Event |
---|---|
Jan 17, 2022 | Announcement and call for submissions |
 | Paper submission deadline |
Mar 25, 2022 | Review decisions announced |
 | Camera ready and poster uploading deadline |
Workshop Schedule
Please attend our workshop via the ICLR virtual workshop website.
Start Time (PDT) | End Time (PDT) | Event |
---|---|---|
8:00:00 AM | 8:10:00 AM | Intro and Opening Remarks |
8:10:00 AM | 8:40:00 AM | Invited Talk (Danica Kragic): Learning for contact rich tasks |
8:40:00 AM | 9:10:00 AM | Invited Talk (Peter Stone): Grounded Simulation Learning for Sim2Real |
9:10:00 AM | 9:20:00 AM | Break |
9:20:00 AM | 10:15:00 AM | Poster Session 1 |
10:15:00 AM | 11:15:00 AM | Live Panel Discussion (password: bluefew) |
11:15:00 AM | 11:23:00 AM | Challenge Winner Presentation (Zhutian & Aidan) |
11:23:00 AM | 11:31:00 AM | Challenge Winner Presentation (Fattonny) |
11:31:00 AM | 1:00:00 PM | Lunch Break |
1:00:00 PM | 1:10:00 PM | Contributed Talk (Sim-to-Lab-to-Real: Safe RL with Shielding and Generalization Guarantees) |
1:10:00 PM | 1:40:00 PM | Invited Talk (Shuran Song): Iterative Residual Policy for Generalizable Dynamic Manipulation of Deformable Objects |
1:40:00 PM | 2:10:00 PM | Invited Talk (Nadia Figueroa): Towards Safe and Efficient Learning and Control for Physical Human Robot Interaction |
2:10:00 PM | 2:18:00 PM | Challenge Winner Presentation (EPIC lab) |
2:18:00 PM | 2:30:00 PM | Break |
2:30:00 PM | 2:40:00 PM | Contributed Talk (Know Thyself: Transferable Visual Control Policies Through Robot-Awareness) |
2:40:00 PM | 3:10:00 PM | Invited Talk (Mrinal Kalakrishnan): Robot Learning & Generalization in the Real World |
3:10:00 PM | 3:40:00 PM | Invited Talk (Xiaolong Wang): Generalizing Dexterous Manipulation by Learning from Humans |
3:40:00 PM | 3:48:00 PM | Challenge Winner Presentation (Silver-Bullet-3D) |
3:48:00 PM | 3:50:00 PM | Break |
3:50:00 PM | 4:45:00 PM | Poster Session 2 |
4:45:00 PM | 5:30:00 PM | Challenge Award Ceremony |
5:30:00 PM | 5:35:00 PM | Closing Remarks |
Panelists
listed alphabetically
Organizers
listed alphabetically
Program Committee
We would like to thank the following people for their efforts in providing feedback on submissions!
- Abhishek Gupta
- Ankur Handa
- Annie Xie
- Anurag Ajay
- Avi Singh
- Brijen Thananjeyan
- Cheol-Hui Min
- Coline Devin
- Dhruv Shah
- Dongsu Zhang
- Fanbo Xiang
- Fangchen Liu
- Fei Liu
- Homer Walke
- Jiayuan Gu
- Jonathan Yang
- Junbang Liang
- Krishna Murthy
- Laura Smith
- Liyiming Ke
- Miles Macklin
- Minghua Liu
- Nicklas Hansen
- Quan Vuong
- Rui Chen
- Sasha Khazatsky
- Shikhar Bahl
- Shuang Liu
- Tao Chen
- Tianhe Yu
- Vikash Kumar
- Weizi Li
- Xuanlin Li
- Yoonseon Oh
- Yu Shen
- Yufei Ye
- Zhan Ling
- Zhiao Huang
- Zhiwei Jia
- Zih-Yun Chiu
Poster Sessions
Poster session assignments are posted below. The sessions will be held at https://app.gather.town/app/Wfl5hBvVzs7ELFNS/gplpw-poster-room.