
Meeting Abstract

A cockroach-inspired legged robot to traverse multiple types of large obstacles. Mi, J; Wang, Y*; Li, C. Johns Hopkins University, University of California San Diego; Johns Hopkins University; Johns Hopkins University. ywang460@jhu.edu https://li.me.jhu.edu/

Insects like cockroaches are excellent at traversing complex 3-D terrain with multiple types of obstacles as large as themselves, an ability that still far exceeds that of even the best robots. Recent research in our lab using a terradynamics approach has advanced understanding of how to use physical interaction to do so (Othayoth, Xuan, Wang, Li, 2020, Proc. Roy. Soc. B). By integrating animal experiments, robotic physical modeling, and physics modeling to study model systems of abstracted locomotor challenges, we have discovered design and control principles for traversing large bump, gap, pillar, and beam obstacles and for self-righting. Here, we developed a multi-functional robot that integrates these insights to traverse multiple types of large obstacles. Our robot (20 cm long, 18.5 cm wide, 10 cm tall, 0.75 kg) has six compliant S-shaped legs driven in an alternating tripod gait. A streamlined ellipsoidal frontal body shape enabled passive obstacle repulsion to traverse pillars and facilitated body rolling to traverse beams. An active tail with both pitch and yaw degrees of freedom facilitated bump traversal by using inertial effects to pitch the body up, and facilitated beam traversal by tapping against the ground to roll the body. Dynamic self-righting after flipping over was achieved by opening a pair of wings to reduce static stability and raise the center of gravity, and by yawing the tail to generate a lateral perturbation. Our robot traversed a bump 2.5× hip height, pillars spaced 1.1× robot body width, and beams spaced 0.7× robot body width, and can dynamically self-right within 10 s. Notably, with active tail oscillation, traversal probability increased from 0% to > 70% (P < 0.001, ANOVA) and traversal time decreased by ~30% (P < 0.01, ANOVA). Currently, our robot transitions between strategies using human-in-the-loop control. We will add vision and force sensing feedback to enable autonomous transitions between strategies.
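For readers curious how the gait and tail behaviors described above might be commanded, the following is a minimal, hypothetical Python sketch. The abstract does not give the controller, so all names, leg numbering, periods, amplitudes, and frequencies here are illustrative assumptions, not the robot's actual parameters.

```python
import math

# Hypothetical leg numbering (assumption, not from the abstract):
# the two alternating stance tripods of a six-legged tripod gait.
TRIPOD_A = (0, 2, 4)  # e.g., left-front, left-hind, right-middle
TRIPOD_B = (1, 3, 5)  # e.g., right-front, right-hind, left-middle

def stance_legs(t, period=0.5):
    """Return which tripod is in stance at time t (s) for an
    alternating tripod gait with the given stride period (s)."""
    phase = (t % period) / period
    return TRIPOD_A if phase < 0.5 else TRIPOD_B

def tail_pitch_command(t, amplitude_deg=30.0, freq_hz=2.0):
    """Sinusoidal tail-pitch oscillation command (deg). An inertial
    tail swung like this could pitch the body up, as the abstract
    describes for bump traversal; amplitude and frequency are
    placeholder values."""
    return amplitude_deg * math.sin(2 * math.pi * freq_hz * t)
```

In this sketch the two tripods simply alternate over a fixed stride period while the tail oscillates independently; on the real robot, strategy selection (and eventually the planned vision and force feedback) would decide when each behavior runs.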