Robots are increasingly equipped with multiple, redundant sensing modalities (inertial, force, tactile, or visual perception), yet it remains a significant challenge to use this information efficiently to create robust behaviors in unknown environments. Further, robots are seldom capable of learning new behaviors, or improving known ones, as they collect more real-world experience. To address these challenges, we design learning algorithms capable of using multi-modal sensory data to:
Importantly, we test all of our learning algorithms on real robots performing manipulation and locomotion tasks to ensure that they are robust to noisy sensors and imperfect actuators.