Autonomous and Human-Guided Manipulation
The ability of robots to autonomously handle dense clutter or heaps of unknown objects remains limited due to challenges in scene understanding, grasping, and decision making. In addition to developing autonomous grasping and complex prehensile/non-prehensile manipulation techniques, we research semi-autonomous approaches in which a human operator interacts with the system, both through tele-operation and by issuing high-level commands that complement autonomous skill execution.
Our research investigates paradigms that adapt a robotic system's level of autonomy to the complexity of the situation and to the skills and states of interacting or nearby humans. Building on the semi-autonomous control framework developed in our lab, we are constructing a manipulation skill learning system that learns from demonstrations and corrections given by a human operator, and can therefore acquire complex manipulation skills in a data-efficient manner.
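As a minimal illustration of the learning-from-corrections idea, a demonstrated trajectory can be refined incrementally by blending in operator corrections. This is a generic sketch, not our actual learning system; the function name and parameters are hypothetical.

```python
import numpy as np

def apply_correction(trajectory, correction, learning_rate=0.5):
    """Blend a human correction into a demonstrated trajectory.

    trajectory: (T, D) array of waypoints from a demonstration.
    correction: (T, D) array of operator offsets (zero where untouched).
    learning_rate: how strongly one correction reshapes the skill.
    """
    return np.asarray(trajectory) + learning_rate * np.asarray(correction)

# Example: a 1-D skill with 5 waypoints, nudged upward at its midpoint.
demo = np.linspace(0.0, 1.0, 5).reshape(5, 1)
corr = np.zeros((5, 1))
corr[2] = 0.2
refined = apply_correction(demo, corr)  # midpoint moves from 0.5 to 0.6
```

Repeating this update over successive corrections lets a small number of operator interventions reshape a skill, which is what makes such approaches data-efficient compared with collecting demonstrations from scratch.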
Advancing the state of the art in complex manipulation scenarios that go beyond reaching and grasping
Designing new interaction concepts in which robots exploit collisions to enable richer human interaction and improved manipulation performance
Learning manipulation skills from demonstration and enabling corrections through human-in-the-loop approaches
Shared and traded autonomy methods in teleoperation
Evaluating human performance and workload in teleoperation when handling complex scenes
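The shared-autonomy theme above can be sketched as a simple arbitration between human and autonomous commands, where a blending weight shifts control authority. This is a generic illustration of shared control, not the specific controllers from our publications; the function name and parameters are hypothetical.

```python
import numpy as np

def shared_control(u_human, u_auto, alpha):
    """Blend human and autonomous velocity commands.

    alpha = 1.0 gives the human full authority (pure teleoperation);
    alpha = 0.0 yields fully autonomous execution. Intermediate values
    trade authority, e.g. driven by an estimate of operator intent.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * np.asarray(u_human) + (1.0 - alpha) * np.asarray(u_auto)

# Example: the operator pushes left while the planner steers right;
# with alpha = 0.3 the robot mostly follows the planner.
u = shared_control([-1.0, 0.0], [1.0, 0.5], alpha=0.3)  # -> [0.4, 0.35]
```

Traded autonomy corresponds to switching alpha between 0 and 1 at discrete handover points, whereas shared autonomy keeps it continuous throughout execution.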
Related Research Projects
HEAP is a research project funded by CHIST-ERA that investigates robot manipulation algorithms for robotic heap sorting. The project delivers scientific advances in benchmarking, object recognition, manipulation, and human-robot interaction. We focus on sorting a complex, unstructured heap of unknown objects – resembling nuclear waste consisting of broken, deformed bodies – as an instance of an extremely complex manipulation task. The consortium aims to build an end-to-end benchmarking framework that includes rigorous scientific methodology and experimental tools for application in realistic scenarios.
Project lead: Ayse Kucukyilmaz
Related Research Publications
Serhan, B., Pandya, H., Kucukyilmaz, A. and Neumann, G. (2022, May). Push-to-See: Learning Non-Prehensile Manipulation to Enhance Instance Segmentation via Deep Q-Learning. In 2022 International Conference on Robotics and Automation (ICRA) (pp. 1513-1519). IEEE. doi: 10.1109/ICRA46639.2022.9811645. https://ieeexplore.ieee.org/document/9811645
Ly, K. T., Poozhiyil, M., Pandya, H., Neumann, G., and Kucukyilmaz, A. (2021, August). Intent-Aware Predictive Haptic Guidance and its Application to Shared Control Teleoperation. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 565-572). IEEE. https://ieeexplore.ieee.org/abstract/document/9515326
Singh, J., Srinivasan, A. R., Neumann, G. and Kucukyilmaz, A. (2020). Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm With an Identical Twin Master. IEEE Transactions on Haptics, 13(1), 246-252. https://ieeexplore.ieee.org/abstract/document/8979376