Many of our research projects fall into one of the following general themes. Note that this page is still being updated to include all publications.
Deformable Object Manipulation
Deformable objects are challenging from both a perceptual and a dynamics perspective: a crumpled cloth has many self-occlusions, making its configuration hard to infer from observations, and its dynamics are complex to model and to incorporate into planning algorithms. We develop algorithms for deformable object manipulation tasks involving cloth, liquids, dough, and articulated objects.
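To make the modeling challenge concrete, the sketch below is a toy mass-spring cloth simulator with an explicit Euler integrator. It is purely illustrative and not the model used in any of the publications below; all names and parameters are invented for this example. Even this toy version shows why cloth dynamics are hard to use in planning: the state is high-dimensional (every particle's position and velocity), and small time steps are needed for stability.

```python
import numpy as np

def step_cloth(positions, velocities, springs, rest_lengths,
               k=50.0, damping=0.98, dt=0.01, gravity=-9.8):
    """One explicit-Euler step of a toy mass-spring cloth model.

    positions, velocities: (N, 3) arrays of unit-mass particle states.
    springs: (M, 2) array of particle index pairs.
    rest_lengths: (M,) array of spring rest lengths.
    """
    forces = np.zeros_like(positions)
    forces[:, 2] += gravity  # gravity acts on every particle
    for (i, j), rest in zip(springs, rest_lengths):
        delta = positions[j] - positions[i]
        length = np.linalg.norm(delta)
        if length > 1e-9:
            # Hooke's law along the spring direction
            f = k * (length - rest) * (delta / length)
            forces[i] += f
            forces[j] -= f
    velocities = damping * (velocities + dt * forces)
    return positions + dt * velocities, velocities
```

A real cloth model adds bending and shearing springs, collision handling, and a stabler integrator, but the state-space explosion is already visible here.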
Relevant Publications
- **ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds**. Conference on Robot Learning (CoRL), 2022.
- **Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation**. Conference on Robot Learning (CoRL), 2022.
- **Learning to Singulate Layers of Cloth based on Tactile Feedback**. International Conference on Intelligent Robots and Systems (IROS), 2022.
- **FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy**. Conference on Robot Learning (CoRL), 2021.
- **SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation**. Conference on Robot Learning (CoRL), 2020.
- **Cloth Region Segmentation for Robust Grasp Selection**. International Conference on Intelligent Robots and Systems (IROS), 2020.
3D Affordance Reasoning for Object Manipulation
For a robot to interact with an object, it must infer the object's “affordances”: how the object moves as the robot interacts with it, and how it can interact with other objects in the environment. We develop robot perception algorithms that learn to estimate these affordances and then use those inferences to learn to manipulate objects to achieve a task.
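As a minimal illustration of what an affordance estimate looks like as data, the sketch below assigns each point in a point cloud a heuristic score based on local planarity (the smallest PCA eigenvalue of its neighborhood): flat patches score near 1, edges and corners lower. This heuristic is invented for this example; the methods in the publications below learn such estimates from data rather than hand-coding them.

```python
import numpy as np

def graspability_scores(points, k=8):
    """Toy per-point 'affordance' score for an (N, 3) point cloud.

    Scores each point by how planar its k-nearest neighborhood is,
    using the ratio of the smallest PCA eigenvalue to the total
    variance. Flat patches score near 1; edges and corners lower.
    """
    scores = np.zeros(len(points))
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]      # k nearest, incl. p
        eig = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        total = eig.sum()
        if total > 1e-12:
            scores[i] = 1.0 - eig[0] / total      # planarity ratio
    return scores
```

The key point is the output representation: a dense, per-point map over the object's surface, which a downstream policy or planner can consume.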
Relevant Publications
- **Neural Grasp Distance Fields for Robot Manipulation**. International Conference on Robotics and Automation (ICRA), 2023.
- **TAX-Pose: Task-Specific Cross-Pose Estimation for Robot Manipulation**. Conference on Robot Learning (CoRL), 2022.
- **ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds**. Conference on Robot Learning (CoRL), 2022.
- **Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation**. Conference on Robot Learning (CoRL), 2022.
Multimodal Learning
Robots should use all of the sensing available to them, including depth, RGB, and tactile data. We have developed methods to intelligently integrate these sensing modalities.
Relevant Publications
- **Learning to Singulate Layers of Cloth based on Tactile Feedback**. International Conference on Intelligent Robots and Systems (IROS), 2022.
- **Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects**. Robotics and Automation Letters (RAL), with presentation at the International Conference on Robotics and Automation (ICRA), 2020.
Reinforcement Learning Algorithms
Robots can use data, either from the real world or from a simulator, to learn how to perform a task. This is especially important for tasks, such as deformable object manipulation, that are difficult to solve with traditional techniques like motion planning. We have developed novel reinforcement learning algorithms to learn from data more effectively.
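The core idea of learning a task from interaction data can be sketched with tabular Q-learning, the textbook ancestor of the methods above. This is a generic illustration, not the algorithm from the publication below, and the environment interface (`env_step`) is invented for this example; modern robot-learning methods replace the table with a neural network, but the temporal-difference update is the same.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    env_step(state, action) -> (next_state, reward, done).
    Returns the learned Q-table as a list of per-state action values.
    """
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore occasionally, else act greedily
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])  # TD update
            s = s2
    return q
```

The same loop structure (collect experience, compute a bootstrapped target, take a gradient step toward it) underlies deep RL methods used for robot manipulation.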
Relevant Publications
- **Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity**. Conference on Robot Learning (CoRL), 2022.
Autonomous Driving
In the domain of autonomous driving, we have developed novel methods for every part of the perception pipeline: segmentation, object detection, tracking, and velocity estimation.
Relevant Publications
- **Differentiable Raycasting for Self-supervised Occupancy Forecasting**. European Conference on Computer Vision (ECCV), 2022.
- **Active Safety Envelopes using Light Curtains with Probabilistic Guarantees**. Robotics: Science and Systems (RSS), 2021.
- **3D Multi-Object Tracking: A Baseline and New Evaluation Metrics**. International Conference on Intelligent Robots and Systems (IROS), 2020.
Active Perception
Rather than statically observing a scene, robots can take actions that allow them to perceive the scene better, an approach known as “active perception.”
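A classic instance of this idea is greedy next-best-view selection: choose the sensing action expected to reduce the most uncertainty. The sketch below is a minimal, invented illustration (the `belief` and `views` structures are assumptions for this example, not an interface from the publications below): it scores each candidate view by the total entropy of the occupancy cells it would observe.

```python
import math

def next_best_view(belief, views):
    """Greedy next-best-view selection over an occupancy belief.

    belief: dict cell -> probability the cell is occupied (0.5 = unknown).
    views: dict view_name -> set of cells that view would observe.
    Returns the view whose observed cells carry the most total entropy,
    i.e. the view expected to reduce uncertainty the most.
    """
    def entropy(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def info_gain(cells):
        return sum(entropy(belief[c]) for c in cells)

    return max(views, key=lambda v: info_gain(views[v]))
```

Real active-perception systems add sensing cost, occlusion reasoning, and probabilistic guarantees, but the "act to reduce uncertainty" objective is the common core.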
Relevant Publications
- **Active Safety Envelopes using Light Curtains with Probabilistic Guarantees**. Robotics: Science and Systems (RSS), 2021.
- **Combining Deep Learning and Verification for Precise Object Instance Detection**. Conference on Robot Learning (CoRL), 2019.
Self-Supervised Learning for Robotics
Rather than relying on hand-annotated data, self-supervised learning can enable robots to learn from large unlabeled datasets.
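A simple way to see how labels can come for free is the classic rotation-prediction pretext task: rotate each unlabeled image by a random multiple of 90 degrees and use the rotation index as the training label. The sketch below generates such a dataset; it is a generic textbook example, not the self-supervision scheme used in the publication below.

```python
import numpy as np

def make_rotation_dataset(images, rng=None):
    """Build a self-supervised pretext task from unlabeled images.

    Each (square) image is rotated by 0, 90, 180, or 270 degrees; the
    rotation index becomes a free label. Training a network to predict
    it forces the network to learn useful visual features without any
    human annotation.
    """
    rng = rng or np.random.default_rng(0)
    inputs, labels = [], []
    for img in images:
        k = int(rng.integers(4))        # free label: number of 90-degree turns
        inputs.append(np.rot90(img, k))
        labels.append(k)
    return np.stack(inputs), np.array(labels)
```

In robotics, the "free" supervisory signal more often comes from physical interaction or future sensor readings, but the pattern is the same: derive the label from the data itself.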
Relevant Publications
- **Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity**. Conference on Robot Learning (CoRL), 2022.
Previous Directions
Object tracking
Tracking involves consistently locating an object, or a point on an object, as it moves through a scene. To understand how to interact with objects, a robot must track them through changes in position, viewpoint, lighting, occlusion, and other factors. Improvements in this area should enable autonomous vehicles to operate more safely around dynamic objects (e.g. pedestrians, bicyclists, and other vehicles).
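The core computational step in multi-object tracking is data association: matching this frame's detections to existing track identities. The sketch below uses greedy nearest-neighbor matching with a distance gate; it is a simplified illustration (full trackers typically add a Kalman filter motion model and optimal assignment such as Hungarian matching), and the interface is invented for this example.

```python
import numpy as np

def associate(tracks, detections, max_dist=2.0):
    """Greedy nearest-neighbor data association for one frame.

    tracks: dict track_id -> last known position (np.ndarray).
    detections: list of position arrays for the current frame.
    Returns (matches, unmatched): matches maps track_id -> detection
    index; unmatched lists detection indices that start new tracks.
    """
    matches = {}
    unmatched = list(range(len(detections)))
    for tid, pos in tracks.items():
        if not unmatched:
            break
        dists = [np.linalg.norm(detections[i] - pos) for i in unmatched]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:          # gate out distant matches
            matches[tid] = unmatched.pop(best)
    return matches, unmatched
```

Detections left unmatched spawn new tracks, and tracks that go unmatched for several frames are terminated; identity consistency over time falls out of repeating this step every frame.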
Relevant Publications
- **3D Multi-Object Tracking: A Baseline and New Evaluation Metrics**. International Conference on Intelligent Robots and Systems (IROS), 2020.