Many of our research projects fall into one of the following general themes. Note that this page is still being updated to include all publications.

Deformable Object Manipulation

Deformable objects are challenging from both a perceptual and a dynamics perspective: a crumpled cloth has many self-occlusions, and its configuration is hard to infer from observations; further, the dynamics of cloth are complex to model and incorporate into planning algorithms. We develop algorithms for manipulating deformable objects such as cloth, liquids, dough, and articulated objects.

Relevant Publications
ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds
Daniel Seita, Yufei Wang†, Sarthak J Shetty†, Edward Yao Li†, Zackory Erickson, David Held
Conference on Robot Learning (CoRL), 2022
Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation
Xingyu Lin*, Carl Qi*, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held
Conference on Robot Learning (CoRL), 2022
Learning to Singulate Layers of Cloth based on Tactile Feedback
Sashank Tirumala*, Thomas Weng*, Daniel Seita*, Oliver Kroemer, Zeynep Temel, David Held
International Conference on Intelligent Robots and Systems (IROS), 2022 - Best Paper at ROMADO-SI
FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy
Thomas Weng, Sujay Bajracharya, Yufei Wang, David Held
Conference on Robot Learning (CoRL), 2021
SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation
Xingyu Lin, Yufei Wang, Jake Olkin, David Held
Conference on Robot Learning (CoRL), 2020
Cloth Region Segmentation for Robust Grasp Selection
Jianing Qian*, Thomas Weng*, Luxin Zhang, Brian Okorn, David Held
International Conference on Intelligent Robots and Systems (IROS), 2020

3D Affordance Reasoning for Object Manipulation

In order for a robot to interact with an object, the robot must infer the object's “affordances”: how the object moves as the robot interacts with it, and how the object can interact with other objects in the environment. We develop robot perception algorithms that learn to estimate these affordances and then use the resulting inferences to learn to manipulate objects to achieve a task.

Relevant Publications
Neural Grasp Distance Fields for Robot Manipulation
Thomas Weng, David Held, Franziska Meier, Mustafa Mukadam
International Conference on Robotics and Automation (ICRA), 2023
TAX-Pose: Task-Specific Cross-Pose Estimation for Robot Manipulation
Chuer Pan*, Brian Okorn*, Harry Zhang*, Ben Eisner*, David Held
Conference on Robot Learning (CoRL), 2022
ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds
Daniel Seita, Yufei Wang†, Sarthak J Shetty†, Edward Yao Li†, Zackory Erickson, David Held
Conference on Robot Learning (CoRL), 2022
Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation
Xingyu Lin*, Carl Qi*, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held
Conference on Robot Learning (CoRL), 2022

Multimodal Learning

Robots should use all of the sensing modalities available to them, such as depth, RGB, and tactile sensing. We have developed methods to intelligently integrate these sensor modalities.

Relevant Publications
Learning to Singulate Layers of Cloth based on Tactile Feedback
Sashank Tirumala*, Thomas Weng*, Daniel Seita*, Oliver Kroemer, Zeynep Temel, David Held
International Conference on Intelligent Robots and Systems (IROS), 2022 - Best Paper at ROMADO-SI
Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects
Thomas Weng, Amith Pallankize, Yimin Tang, Oliver Kroemer, David Held
Robotics and Automation Letters (RA-L) with presentation at the International Conference on Robotics and Automation (ICRA), 2020

Reinforcement Learning Algorithms

Robots can use data, either from the real world or from a simulator, to learn how to perform a task. This is especially important for tasks, such as deformable object manipulation, that are difficult for robots to achieve via traditional techniques like motion planning. We have developed novel reinforcement learning algorithms to learn more effectively from data.

Relevant Publications
Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity
Wenxuan Zhou, David Held
Conference on Robot Learning (CoRL), 2022 - Oral Presentation (Selection rate 6.5%)

Autonomous Driving

In the domain of autonomous driving, we have developed novel methods for every part of the perception pipeline: segmentation, object detection, tracking, and velocity estimation.

Relevant Publications
Differentiable Raycasting for Self-supervised Occupancy Forecasting
Tarasha Khurana*, Peiyun Hu*, Achal Dave, Jason Ziglar, David Held, Deva Ramanan
European Conference on Computer Vision (ECCV), 2022
Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
Siddharth Ancha, Gaurav Pathak, Srinivasa Narasimhan, David Held
Robotics: Science and Systems (RSS), 2021
3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
Xinshuo Weng, Jianren Wang, David Held, Kris Kitani
International Conference on Intelligent Robots and Systems (IROS), 2020

Active Perception

Rather than statically observing a scene, robots can take actions that enable them to better perceive it, an approach known as “active perception.”

Relevant Publications
Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
Siddharth Ancha, Gaurav Pathak, Srinivasa Narasimhan, David Held
Robotics: Science and Systems (RSS), 2021
Combining Deep Learning and Verification for Precise Object Instance Detection
Siddharth Ancha*, Junyu Nan*, David Held
Conference on Robot Learning (CoRL), 2019

Self-Supervised Learning for Robotics

Rather than relying on hand-annotated data, self-supervised learning can enable robots to learn from large unlabeled datasets.

Relevant Publications
Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity
Wenxuan Zhou, David Held
Conference on Robot Learning (CoRL), 2022 - Oral Presentation (Selection rate 6.5%)

Previous Directions

Object Tracking

Tracking involves consistently locating an object as it moves across a scene, or consistently locating a point on an object as the object moves. In order to understand how to interact with objects, a robot must be able to track them through changes in position, viewpoint, lighting, occlusion, and other factors. Improvements in this area should enable autonomous vehicles to interact more safely with dynamic objects (e.g., pedestrians, bicyclists, and other vehicles).

Relevant Publications
3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
Xinshuo Weng, Jianren Wang, David Held, Kris Kitani
International Conference on Intelligent Robots and Systems (IROS), 2020