
The Model-based Embedded and Robotic Systems Group at MIT
is collaborating with the Jet Propulsion Laboratory
at Caltech on a project that will enable astronauts
to safely work with robots in an easy and intuitive way.
One example of such a robot is ATHLETE,
a vehicle developed at JPL that will
support robotic and human missions
on the surface of the Moon.
ATHLETE is capable of rolling or walking
over extremely rough or steep terrain,
and can load, transport, manipulate, and deposit payloads at any desired sites of interest.
Previously, robots have been simple enough
to be controlled through teleoperation
or direct commanding.
However, more complex robots, like ATHLETE or the Vecna BEAR
battlefield robot, have too many degrees
of freedom for direct teleoperation or commanding
to be practical.
We've developed a different control paradigm
that incorporates verbal commands, shared
written instructions, and demonstration
by example to enable more efficient interaction.
These modes of communication occur naturally to humans
during collaborative tasks.
Here is an example of a common-sense written plan in which the human and ATHLETE work together to collect rock samples for analysis.
The instructions include descriptions
of both the human's tasks and ATHLETE's tasks.
A key challenge in executing these tasks is grounding terms like "pick up" into ATHLETE motions.
We reduce the time and effort required to do this by teaching actions through demonstration.
We use an interface device called
TRACK developed by the Distributed Robotics Lab at MIT
to teleoperate the robot through each desired activity
several times.
Here we're teaching ATHLETE the term "pick up" by showing ATHLETE how to pick up an object without hitting the box.
Based on the demonstrated motions, our learning algorithm generalizes from the examples to a set of motions that will likely achieve the "pick up" action.
This is compactly encoded in the representation we
call a probabilistic flow tube.
Once ATHLETE learns the activity,
it can autonomously execute the corresponding nominal
trajectory.
The learned flow tube represents flexibility
in the motions the robot can choose to perform,
enabling it, for example, to recover from disturbances.
The width of the flow tube indicates areas
with more or less flexibility.
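As a rough illustration of this idea (and not the group's actual algorithm), the sketch below estimates a simple Gaussian-envelope flow tube from a few demonstrated trajectories; the function names, the resampling-based time alignment, and the two-sigma membership test are assumptions made for the example.

    import numpy as np

    def learn_flow_tube(demos, num_steps=100):
        """Estimate a simple flow tube from demonstrated trajectories.

        demos: list of (T_i, D) arrays of end-effector positions.
        Returns a nominal trajectory (per-step mean) and a per-step width
        (standard deviation), a Gaussian-envelope simplification of a
        probabilistic flow tube.
        """
        # Resample every demonstration to a common number of steps so
        # corresponding points can be averaged (a stand-in for time alignment).
        resampled = []
        for demo in demos:
            t_old = np.linspace(0.0, 1.0, len(demo))
            t_new = np.linspace(0.0, 1.0, num_steps)
            resampled.append(
                np.stack([np.interp(t_new, t_old, demo[:, d])
                          for d in range(demo.shape[1])], axis=1))
        stacked = np.stack(resampled)   # (n_demos, num_steps, D)
        nominal = stacked.mean(axis=0)  # nominal trajectory to execute
        width = stacked.std(axis=0)     # wider = more flexibility
        return nominal, width

    def inside_tube(point, nominal_t, width_t, k=2.0):
        """Check whether an executed point stays within k standard deviations."""
        return np.all(np.abs(point - nominal_t) <= k * width_t + 1e-9)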
After ATHLETE knows how to perform each of its tasks
individually, it can collaborate with the human
to execute the written plan together.
We have developed an executive that
allows ATHLETE to make decisions at runtime
to improve the likelihood of successfully completing
the plan.
Specifically, ATHLETE schedules the start times and durations
of each activity with flexibility,
while ensuring that limbs synchronize successfully.
This allows it to react appropriately to the human
and still perform the actions correctly.
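One way to picture the executive's flexible scheduling is as a simple temporal network over activity start and end events; the minimal consistency check below is only an illustration under that assumption, with made-up event names and timing bounds rather than the actual executive.

    import itertools

    INF = float("inf")

    def stn_consistent(num_events, constraints):
        """Check a simple temporal network for consistency.

        constraints: list of (i, j, lower, upper) meaning
        lower <= t_j - t_i <= upper. Returns (True, distances) if some
        assignment of event times satisfies every bound.
        """
        # d[i][j] holds the tightest upper bound on t_j - t_i found so far.
        d = [[0 if i == j else INF for j in range(num_events)]
             for i in range(num_events)]
        for i, j, lo, hi in constraints:
            d[i][j] = min(d[i][j], hi)   # t_j - t_i <= hi
            d[j][i] = min(d[j][i], -lo)  # t_i - t_j <= -lo
        # Floyd-Warshall tightening; a negative self-distance means no schedule exists.
        for k, i, j in itertools.product(range(num_events), repeat=3):
            if d[i][k] + d[k][j] < d[i][j]:
                d[i][j] = d[i][k] + d[k][j]
        return all(d[i][i] >= 0 for i in range(num_events)), d

    # Events: 0 = plan start, 1 and 2 = the two limbs' activity starts, 3 = joint finish.
    constraints = [
        (0, 1, 0, 10),  # limb one may start up to 10 s after plan start
        (0, 2, 0, 10),  # limb two likewise
        (1, 2, 0, 0),   # synchronization: both limbs start together
        (1, 3, 5, 8),   # limb one's activity takes 5 to 8 s
        (2, 3, 4, 9),   # limb two's activity takes 4 to 9 s
    ]
    ok, _ = stn_consistent(4, constraints)
    print("schedulable:", ok)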
We demonstrated our capabilities by integrating them
with the ATHLETE prototype at JPL, and by showing a human
and ATHLETE cooperating to complete this task.
First, the human attaches a gripper to ATHLETE's left limb.
The executive timestamps the activity's duration when the person is done, so it can accommodate changes in the human's activities.
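A toy sketch of that runtime bookkeeping, with assumed names and a two-activity plan invented for the example: each successor is released relative to the observed, timestamped end of its predecessor rather than a fixed planned time.

    import time

    class FlexibleExecutive:
        def __init__(self, plan):
            # plan: {activity: (predecessor or None, minimum gap in seconds)}
            self.plan = plan
            self.end_times = {}

        def mark_done(self, activity):
            # Timestamp the moment the human (or robot) actually finishes.
            self.end_times[activity] = time.monotonic()

        def release_time(self, activity):
            predecessor, min_gap = self.plan[activity]
            if predecessor is None:
                return time.monotonic()
            # Wait relative to the predecessor's observed end, however long it took.
            return self.end_times[predecessor] + min_gap

    plan = {"attach_gripper": (None, 0.0),
            "move_gripper_to_rock": ("attach_gripper", 0.5)}
    executive = FlexibleExecutive(plan)
    executive.mark_done("attach_gripper")
    print(executive.release_time("move_gripper_to_rock"))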
Now ATHLETE is moving the gripper near the rock
in preparation to pick it up.
This complex motion was learned using the techniques described
earlier.
Since ATHLETE can't sense the precise location of the rock,
the person can assist ATHLETE by fine-tuning the gripper
location through voice commanding.
"Move limb six forward 5 centimeters."
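For a sense of how such an utterance could be turned into a relative motion command, here is a small regex-based parser; the grammar, number words, and unit handling are assumptions for illustration, not the project's actual speech interface.

    import re

    NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4,
                    "five": 5, "six": 6, "seven": 7, "eight": 8}
    PATTERN = re.compile(
        r"move limb (?P<limb>\w+) (?P<direction>forward|back|left|right|up|down) "
        r"(?P<amount>\d+(?:\.\d+)?) centimeters?", re.IGNORECASE)

    def parse_command(utterance):
        """Parse e.g. "Move limb six forward 5 centimeters" into a small dict."""
        match = PATTERN.match(utterance.strip())
        if match is None:
            return None
        limb = match.group("limb").lower()
        limb_id = int(limb) if limb.isdigit() else NUMBER_WORDS.get(limb)
        return {"limb": limb_id,
                "direction": match.group("direction").lower(),
                "meters": float(match.group("amount")) / 100.0}

    print(parse_command("Move limb six forward 5 centimeters."))
    # {'limb': 6, 'direction': 'forward', 'meters': 0.05}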
Next, the human attaches a box to ATHLETE's right limb.
The gripper closes on the rock to pick it up.
At this point, both limbs move in position
to drop the rock into the box.
Finally, ATHLETE stores the box for transport, which
is the end goal of the plan.
In the future, we plan to demonstrate these capabilities with more complex plans that involve humans safely coordinating with multiple robots.
Engineers at JPL are currently developing the next generation of ATHLETE robots, called Tri-ATHLETE, in which each robot has three limbs, and multiple robots can coordinate to transport habitats and perform more complex manipulation tasks.
Ultimately, we aim to show humans working safely in the same environment as the Tri-ATHLETEs in order to support future operations on the Moon.
Funding for this task was provided
through the JPL Strategic University Research Partnership
Program and the National Defense Science and Engineering
Graduate Fellowship.