
Our lab is interested in rapid prototyping and testing of autonomous vehicles, which are vehicles that can complete a certain set of tasks without human intervention. As researchers, we're very interested in seeing how our algorithms work in the real world, where sensory measurements are often imperfect or even distorted.
In these scenarios, vehicles often form an estimate, or belief, of what the environmental model looks like and make decisions based on that. This problem is magnified when you have a very complex system of agents interacting together, and it becomes very hard to understand why an algorithm behaves a certain way.
What we'd really like is to be able to read the minds of our autonomous agents and get some idea of how their decision-making processes work. Additionally, we'd like to be able to test our algorithms in a variety of environments so we can robustify them.
This new system, which we refer to as "measurable virtual reality," combines a projection system with a motion capture system. We use the projection system to project a simulated environment, and we call it measurable virtual reality because we measure this projected scene using actual sensors mounted on autonomous robots, say ground robots or aerial robots. At the same time, the motion capture system tells us where the physical system, the robot we are working with, is located in the 3-D environment. By combining the information we get from the motion capture system and the projection system, we enable fast prototyping of cyber-physical systems, or in other words, faster design of the learning, perception, and planning algorithms for autonomous systems.
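To make that combination concrete, here is a minimal sketch of how the two data streams could be fused. It assumes a hypothetical planar motion-capture pose and made-up virtual landmark positions in the projected scene; it is not the lab's actual pipeline, just an illustration of using the mocap pose as ground truth for evaluating onboard perception of the projection.

```python
import numpy as np

# Hypothetical virtual landmarks in the projected scene, in the world frame (meters).
VIRTUAL_LANDMARKS = np.array([
    [2.0, 1.0, 0.0],
    [3.5, -0.5, 0.0],
    [1.0, 2.5, 0.0],
])

def world_to_body(points_w, position_w, yaw):
    """Transform world-frame points into the robot body frame (planar pose)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rotation_wb = np.array([[c, -s, 0.0],
                            [s,  c, 0.0],
                            [0.0, 0.0, 1.0]])  # body-to-world rotation
    # Right-multiplying row vectors by R applies R^T, i.e. world-to-body.
    return (points_w - position_w) @ rotation_wb

def ground_truth_relative_positions(mocap_pose):
    """Landmark positions relative to the robot, computed from the mocap pose."""
    return world_to_body(VIRTUAL_LANDMARKS, mocap_pose["position"], mocap_pose["yaw"])

def perception_error(mocap_pose, perceived_landmarks_body):
    """Per-landmark error between onboard perception and mocap ground truth."""
    truth = ground_truth_relative_positions(mocap_pose)
    return np.linalg.norm(truth - perceived_landmarks_body, axis=1)

# Example: the robot sits at (0.5, 0.5) facing 30 degrees; its camera, looking at
# the projected scene, reports slightly noisy landmark positions.
pose = {"position": np.array([0.5, 0.5, 0.0]), "yaw": np.deg2rad(30.0)}
perceived = ground_truth_relative_positions(pose) + np.random.normal(scale=0.05, size=(3, 3))
print("per-landmark perception error (m):", perception_error(pose, perceived))
```

Because the mocap pose serves as ground truth, the gap between what the onboard sensor perceives in the projected scene and what the pose predicts is exactly the perception error you want to iterate on quickly.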
This work can be applied in multi-agent scenarios where a single agent has control of other agents in its team. For example, you can think of a scenario where an agent can communicate with nearby agents, but only within a certain communication radius. As this leader agent moves around its environment, it can link to other agents and give them tasks in real time. This work was previously very hard to convey to spectators from outside our lab, because it was difficult to ascertain when a communication link occurred. Now, using our system, we can see in real time when that link-up occurs.
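As a rough illustration of that link-up logic, here is a small sketch that checks which agents fall inside the leader's communication radius at each step. The radius value and positions are made up for the example and are not taken from the lab's system.

```python
import numpy as np

COMM_RADIUS = 1.5  # assumed communication radius in meters (illustrative value)

def linked_agents(leader_position, agent_positions, radius=COMM_RADIUS):
    """Return indices of agents currently within communication range of the leader."""
    distances = np.linalg.norm(agent_positions - leader_position, axis=1)
    return np.flatnonzero(distances <= radius)

# Example: as the leader moves, the set of reachable agents changes; each new
# link-up is the moment a task could be handed off in real time.
agents = np.array([[0.0, 2.0], [1.0, 0.5], [4.0, 4.0]])
for leader in [np.array([0.0, 0.0]), np.array([0.5, 1.0]), np.array([3.5, 3.5])]:
    print(leader, "->", linked_agents(leader, agents))
```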
One of the limitations in designing autonomous systems these days is the regulations out there in society. We cannot easily run autonomous cars or flying robots outdoors due to those regulations, so this system allows us to bring the outdoors in: we simulate the world and then use the sensors to actually measure this projected scene, as if the robot were flying or driving outside in the real-world environment. Our system allows us to transform any indoor lab environment into a complete virtual reality simulation that is perceivable by any type of autonomous agent. We're hoping that this system can become a future indoor environment in which private institutions can test and research their vehicles before deploying them into the real world.