
Modeling and Improving Human-Robot Interactions

Prof. Dawn Tilbury and graduate student Justin Storms examine an automatic weight-shifting mechanism that can increase stability of mobile robots at high speeds.

Teleoperated ground robots have the potential to save human lives both in military operations and in post-disaster search and rescue missions. They can spare soldiers and first responders from crawling around in cramped, dark and dangerous environments, and they can help locate survivors. But teleoperated robots, and their human operators, also have a number of limitations that hinder widespread use.

"When you have a human operator in the loop who has no visual contact with the robot, the operator has to rely on video feedback to make decisions about how to direct it next," said ME Professor Dawn Tilbury, who also chairs the steering committee for a new College of Engineering initiative in robotics. "Figuring out exactly where the robot is, what it's doing and how to control it is very hard to do when you can't see it."

Part of the problem is communications latency, the time delay introduced by the wireless link. Other hindrances include limits on the robot's speed, inefficient human-robot interactions and human shortcomings in decision-making, reaction time and control of a manipulator arm using a basic joystick.
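To make the latency problem concrete, here is a minimal simulation sketch, not drawn from the article, in which the operator is modeled as a simple proportional controller acting on delayed position feedback. The robot model, gain and delays are all hypothetical; the point is only that operator behavior that settles smoothly with fresh feedback begins to oscillate and diverge as the link delay grows.

```python
# Minimal sketch (illustrative; not from the article) of how feedback
# latency destabilizes a teleoperation loop. The "operator" acts as a
# proportional controller, but only sees position samples delayed by
# the wireless link. All parameters are hypothetical.

def simulate(delay_steps, gain=0.5, steps=80):
    """Robot: x[k+1] = x[k] + u[k]; the operator's command is based on
    a position observation that is delay_steps samples old."""
    target, x = 1.0, 0.0
    buffer = [0.0] * (delay_steps + 1)   # delayed feedback samples
    for _ in range(steps):
        observed = buffer.pop(0)          # oldest (most delayed) sample
        u = gain * (target - observed)    # operator's corrective command
        x += u                            # robot moves
        buffer.append(x)
    return x

for d in (0, 1, 2, 3):
    print(f"delay = {d} samples -> final position {simulate(d):+.2f}")
# With no delay the robot settles at the target; as the delay grows,
# the same operator gain over-corrects and the loop oscillates and diverges.
```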

To help realize ground robots' full potential, Tilbury works to increase the speed and efficiency of teleoperated robots. Toward this objective, she has developed a framework for characterizing, modeling and understanding the major factors that limit performance, which she defines in terms of speed, accuracy and safety.

In one project, Tilbury developed and assessed novel interfaces, or input devices, for the human operator. Her research group conducted user studies -- students teleoperated a mobile manipulator using a simple joystick or a master-slave manipulator arm -- and analyzed the data. The findings showed that the master-slave manipulator arm helped novice users significantly more than it helped experienced users; in fact, experienced users were faster using the joystick. "This surprised us," she said. "We would have thought the master-slave manipulator arm would have helped all users."


Steve Vozar (right) explains to John Broderick the teleoperation task, in which users operate a mobile manipulator to find and pick up boxes in a controlled environment.

From there, Tilbury's team began developing a user model to describe how people behave while teleoperating a mobile robot. "In the automotive industry, there are well-established driver models that describe how the driver responds to turns and obstacles in the road; similarly, we wanted to develop a model of the user teleoperating the robot," she said.

Using a different set of user tests, Tilbury and her group created a simple transfer function model -- likely the first-ever driver model for robotic teleoperation -- that incorporates measures of performance time and accuracy. The model can be used to predict how well a human-operated robot will perform certain tasks without conducting actual tests with human subjects, which cost significantly more and take much longer.
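The article doesn't give the form of the model, but human operator models of this kind are often written as a gain and first-order lag combined with a pure time delay (in the spirit of McRuer's crossover model). The sketch below, using the python-control library, shows what such a transfer function operator model might look like in a closed loop with a simple robot plant; every parameter is a hypothetical placeholder, not Tilbury's result.

```python
# Sketch of a McRuer-style operator transfer function (illustrative;
# the article does not give the actual model form or parameters):
#     H(s) = K * e^(-tau*s) / (T*s + 1)
import control

# Hypothetical operator parameters: gain K, lag time constant T,
# reaction/transport delay tau (seconds).
K, T, tau = 1.2, 0.35, 0.5

lag = control.tf([K], [T, 1])          # K / (T s + 1)
num, den = control.pade(tau, 3)        # 3rd-order Pade approx of e^(-tau s)
operator = lag * control.tf(num, den)  # operator model H(s)

robot = control.tf([1], [1, 0])        # robot as a simple integrator plant
loop = control.feedback(operator * robot, 1)

t, y = control.step_response(loop)     # response to a step change in target
print(f"position after {t[-1]:.1f} s: {y[-1]:.3f} (target 1.0)")
```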

"A purely automated robot could move much faster, but we're looking at applications where there's a human in the loop for a reason," said Tilbury, such as to make decisions about how to search an area for survivors or explosive devices. "Maybe the operator notices a rock to look under or an area of terrain to explore further. Those highly variable and very specific situations are challenging to predict."

In addition, how do you combine models of human intention with autonomous capabilities such as obstacle avoidance and stability control? In a related project, Tilbury's group has developed a prototype manipulator arm (attached to the robot) to enhance performance. The arm is autonomously controlled while the robot is moving, but when it stops to perform a task, such as picking up a box, control transfers to the human operator.
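A rough sketch of that handoff logic might look like the following; the function names and the speed threshold are hypothetical, not taken from the group's implementation.

```python
# Rough sketch of the control handoff described above: the arm follows
# an autonomous controller while the base is moving, and yields to the
# human operator once the robot stops for a manipulation task.
# All names and the speed threshold are hypothetical.

STOPPED_SPEED = 0.05  # m/s; below this the base counts as stopped

def arm_command(base_speed, autonomous_cmd, operator_cmd):
    """Select which command stream drives the manipulator arm."""
    if base_speed > STOPPED_SPEED:
        return autonomous_cmd   # e.g., hold a stable stowed pose while driving
    return operator_cmd         # operator takes over for pick-up tasks
```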

"We need to learn how to blend what the obstacle avoidance and other onboard controllers want to do with what the human operator wants to do, and communicate that seamlessly to the robot. It's partly a controls question, partly a human interface question," Tilbury said. "These are interesting controls challenges."