Robotics

Bio-Inspired Robotics

Legged Vehicles

UGV

Unmanned Ground Vehicles

AUV

Autonomous Underwater Vehicles

UAV

Unmanned Aerial Vehicles

Terrain Adaptive Intelligence

Humans naturally stiffen or relax their legs while walking on different terrains to maintain a stable walk or run. Further, we (as humans) know after just a few steps how a surface will affect the way we move: we slow down when it is slippery, we turn less sharply, and we favor some surfaces (concrete) over others (deep snow). This research builds intelligence frameworks that quickly identify terrains, adapt robot motion, and build new navigation plans based on actively learned information.
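As a rough illustration of the idea (not our actual pipeline), the sketch below classifies terrain from a pair of hypothetical proprioceptive features gathered over the last few steps and maps the result to gait parameters. The terrain labels, feature values, and parameter numbers are placeholders.

```python
# Minimal sketch: classify terrain from proprioceptive features, then adapt the gait.
# Feature names, terrain labels, and parameter values are illustrative placeholders.
import numpy as np

# (mean foot slip, vibration energy) measured over the last few steps, per terrain.
TRAINING_SAMPLES = {
    "concrete":  np.array([[0.02, 0.10], [0.03, 0.15]]),
    "deep_snow": np.array([[0.30, 0.05], [0.35, 0.08]]),
    "ice":       np.array([[0.45, 0.02], [0.50, 0.03]]),
}

# Gait parameters the robot would switch to after recognizing the terrain.
GAIT_PARAMS = {
    "concrete":  {"leg_stiffness": 1.0, "max_speed": 1.5, "turn_rate": 1.0},
    "deep_snow": {"leg_stiffness": 0.6, "max_speed": 0.5, "turn_rate": 0.6},
    "ice":       {"leg_stiffness": 0.4, "max_speed": 0.3, "turn_rate": 0.3},
}

def classify_terrain(features: np.ndarray) -> str:
    """Nearest-centroid classification over the stored terrain samples."""
    best_label, best_dist = None, np.inf
    for label, samples in TRAINING_SAMPLES.items():
        dist = np.linalg.norm(samples.mean(axis=0) - features)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def adapt_gait(features: np.ndarray) -> dict:
    """Identify the terrain after a few steps and return the gait to use."""
    terrain = classify_terrain(features)
    return {"terrain": terrain, **GAIT_PARAMS[terrain]}

if __name__ == "__main__":
    # High slip and low vibration suggest ice: slow down and soften the legs.
    print(adapt_gait(np.array([0.47, 0.025])))
```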

Vegetation Traversal

Robots have a hard time differentiating obstacles that are real from obstacles that are not. A patch of tall grass can look impassable even though the robot can simply push through it. This research builds frameworks that help robots learn to distinguish vegetation they can drive through from real obstacles. Further, our robots learn the expected cost of such a traversal, typically in terms of energy and stability.
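One simple way to picture the "expected cost" part is a running estimate of past outcomes per obstacle class. The toy sketch below assumes the robot logs the energy used and an instability measure for each attempt; the class names, cost terms, and weights are illustrative, not our deployed model.

```python
# Toy sketch: learn the expected cost of pushing through vegetation from experience.
from collections import defaultdict

class TraversalCostModel:
    """Running average of (energy, instability) outcomes per obstacle class."""
    def __init__(self):
        # energy sum (J), instability sum, attempt count
        self.totals = defaultdict(lambda: [0.0, 0.0, 0])

    def record(self, obstacle_class: str, energy_j: float, instability: float) -> None:
        t = self.totals[obstacle_class]
        t[0] += energy_j
        t[1] += instability
        t[2] += 1

    def expected_cost(self, obstacle_class: str,
                      w_energy: float = 1.0, w_stability: float = 50.0) -> float:
        energy_sum, instab_sum, n = self.totals[obstacle_class]
        if n == 0:
            return float("inf")  # never attempted: treat as a real obstacle and plan around it
        return w_energy * energy_sum / n + w_stability * instab_sum / n

model = TraversalCostModel()
model.record("tall_grass", energy_j=12.0, instability=0.05)   # easy push-through
model.record("tall_grass", energy_j=15.0, instability=0.08)
model.record("dense_brush", energy_j=90.0, instability=0.40)  # costly, nearly got stuck

# A planner could compare the push-through cost against the cost of detouring.
print(model.expected_cost("tall_grass"))   # low: treat as traversable
print(model.expected_cost("dense_brush"))  # high: probably worth going around
print(model.expected_cost("boulder"))      # inf: unknown, avoid
```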

Stealth-Aware Planning

Many animals and insects avoid moving through open, flat regions; instead, they naturally find ways to navigate their environments while minimizing line of sight with potential predators. This research builds a bio-inspired, stealthy planner that guides robots to navigate much like their biological counterparts.
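A minimal way to sketch this idea is an ordinary shortest-path search in which cells visible from an observer pay an extra cost, so staying behind cover is worth a detour. The grid layout, observer position, and penalty weight below are illustrative, not the planner we use.

```python
# Minimal sketch of stealth-aware planning on a grid: Dijkstra search where cells
# visible from a hypothetical observer incur an extra cost.
import heapq

GRID = [  # 0 = open ground, 1 = tall cover (blocks both movement and line of sight)
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
OBSERVER = (0, 4)
VIS_PENALTY = 5.0

def visible(a, b):
    """Sample points along segment a-b; cover anywhere in between blocks sight."""
    (r0, c0), (r1, c1) = a, b
    steps = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(1, steps):
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        if GRID[r][c] == 1:
            return False
    return True

def plan(start, goal):
    """Dijkstra with an added penalty on cells the observer can see."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])) or GRID[nr][nc] == 1:
                continue
            step = 1.0 + (VIS_PENALTY if visible((nr, nc), OBSERVER) else 0.0)
            if cost + step < best.get((nr, nc), float("inf")):
                best[(nr, nc)] = cost + step
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

print(plan((4, 0), (0, 0)))  # exposed cells cost extra, so cover is worth a detour
```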

Human Trajectory

Human-rich environments are difficult for robots to work in, largely because robots cannot understand, predict, and plan around human objectives. Robots, frankly, have no idea what humans are going to do. This research predicts human actions and builds belief spaces over their expected futures.
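As a simple illustration of a belief over a person's future motion (not our prediction model), the sketch below propagates a constant-velocity estimate forward in time with Gaussian uncertainty that grows the further out we look. The noise level and horizon are assumed values.

```python
# Minimal sketch: build a belief over a person's future positions using a
# constant-velocity model with growing Gaussian uncertainty.
import numpy as np

def predict_belief(position, velocity, horizon_s=3.0, dt=0.5, sigma_accel=0.5):
    """Return a list of (time, mean_xy, covariance) describing likely future positions."""
    beliefs = []
    pos = np.asarray(position, dtype=float)
    vel = np.asarray(velocity, dtype=float)
    t = dt
    while t <= horizon_s + 1e-9:
        mean = pos + vel * t
        # Uncertainty grows with time: unmodeled acceleration integrates into position.
        var = (0.5 * sigma_accel * t**2) ** 2
        cov = np.eye(2) * var
        beliefs.append((t, mean, cov))
        t += dt
    return beliefs

# Person at (0, 0) walking 1.2 m/s along x; a planner would avoid the whole belief
# region, not just the single most likely point.
for t, mean, cov in predict_belief((0.0, 0.0), (1.2, 0.0)):
    print(f"t={t:.1f}s  mean={mean}  position std={np.sqrt(cov[0, 0]):.2f} m")
```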

Prosthetics

Simple 3D-printed robotic hands have the potential to help many people experiment with and try robotic prosthetics. We are testing several designs and types.

Multi-drone Search

Many small-scale systems can cover a large area quickly, but they must self-organize or be directed in order to solve collaborative tasks effectively. Our objective is to use human-like strategies that incentivize useful actions without requiring direct control.
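One way to picture indirect coordination is a simple market-style assignment in which each drone bids on unclaimed search cells based on its own situation, so coverage emerges without a central controller issuing commands. The sketch below is illustrative only; the grid size, drone positions, and bid terms are assumptions, not our deployed strategy.

```python
# Illustrative sketch: decentralized search-cell assignment via a greedy auction,
# a stand-in for incentive-driven coordination without direct control.
import math

def assign_cells(drones, cells):
    """Repeatedly award the cheapest (drone, cell) bid until every cell is claimed."""
    assignments = {d: [] for d in drones}
    unclaimed = set(cells)
    while unclaimed:
        best = None
        for d, pos in drones.items():
            for cell in unclaimed:
                # Bid = travel distance plus a small term that discourages hoarding.
                bid = math.dist(pos, cell) + 0.5 * len(assignments[d])
                if best is None or bid < best[0]:
                    best = (bid, d, cell)
        _, d, cell = best
        assignments[d].append(cell)
        unclaimed.remove(cell)
    return assignments

drones = {"uav_1": (0.0, 0.0), "uav_2": (9.0, 9.0)}
cells = [(x + 0.5, y + 0.5) for x in range(4) for y in range(4)]  # 4x4 search area
for d, claimed in assign_cells(drones, cells).items():
    print(d, "covers", len(claimed), "cells")
```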