In Introducing Augmented Reality Apparatus – From Victorian Stage Effects to Head-Up Displays, we saw how the Pepper’s Ghost effect can be used to display information as a driver aid, with a head-up display projected onto a car windscreen. In this post, we’ll explore the extent to which the digital models of the world that support augmented reality effects may also be used to support other forms of behaviour…
Constructing a 3D model of an object in the world can be achieved by measuring the object directly or, as we have seen, by measuring the distance from a scanning device to different points on the object and then using those points to construct a model of a surface matching the object’s size and shape. According to IEEE Spectrum’s report describing A Ride In Ford’s Self-Driving Car, “Ford’s little fleet of robocars … stuck to streets mapped to within two centimeters, a bit less than an inch. The car compared that map against real-time data collected from the lidar, the color camera behind the windshield, other cameras pointing to either side, and several radar sets—short range and long—stashed beneath the plastic skin. There are even ultrasound sensors, to help in parking and other up-close work.”
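The surface-reconstruction step is a topic in its own right, but the first stage, turning range measurements into a point cloud, is straightforward. Here’s a minimal sketch in Python, assuming a simple single-plane scan where each lidar beam reports an angle and a range; the function name `scan_to_points` and the example values are my own illustration, not Ford’s pipeline:

```python
import numpy as np

def scan_to_points(angles_rad, ranges_m, sensor_xy=(0.0, 0.0)):
    """Convert a planar lidar scan -- one (beam angle, measured range)
    pair per beam -- into Cartesian points around the sensor."""
    x = sensor_xy[0] + ranges_m * np.cos(angles_rad)
    y = sensor_xy[1] + ranges_m * np.sin(angles_rad)
    return np.column_stack((x, y))

# Hypothetical scan: one beam per degree, every beam hitting
# a surface 5 m away (i.e. the sensor sits inside a cylinder)
angles = np.deg2rad(np.arange(360))
ranges = np.full(360, 5.0)
points = scan_to_points(angles, ranges)
print(points.shape)  # (360, 2): one (x, y) point per beam
```

A real scanner sweeps in three dimensions and fuses many such scans as the vehicle moves, but the core idea, polar measurements converted to Cartesian points, is the same.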
Whilst the domain of autonomous vehicles may seem somewhat distinct from the world of facial capture on the one hand and augmented reality on the other, autonomous vehicles rely on having a model of the world around them. One of the techniques currently used to detect the distances to objects surrounding an autonomous vehicle is LIDAR, in which a laser pulse is used to measure the distance to a nearby object accurately. But recognising visual imagery also has an important part to play in the control of autonomous and “AI-enhanced” vehicles.
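The distance measurement itself rests on timing: the sensor emits a laser pulse and times its reflection, and since the pulse travels to the object and back, the range is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is my own):

```python
C = 299_792_458  # speed of light, in metres per second

def lidar_range(time_of_flight_s):
    """Range to a reflecting surface from the round-trip time of a
    laser pulse; the pulse travels out and back, hence the halving."""
    return C * time_of_flight_s / 2

# A pulse returning after ~133 nanoseconds has travelled ~40 m in
# total, putting the reflecting surface roughly 20 m away
print(f"{lidar_range(133e-9):.1f} m")  # -> 19.9 m
```

The nanosecond timescales involved give a sense of why lidar hardware is demanding to build: centimetre-level accuracy requires timing resolved to fractions of a nanosecond.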
For example, consider the case of automatic lane detection:
Here, an optical view of the world is used as the basis for detecting lanes on a motorway. The video also shows how other vehicles in the scene can be detected and tracked, along with the range to them.
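The video doesn’t document its pipeline, but a common textbook approach to lane detection applies edge detection followed by a Hough transform to fit straight line segments to the lane markings. Here’s a minimal OpenCV sketch of that idea; the thresholds and the lower-half region of interest are illustrative guesses rather than production values:

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Very rough lane-line detector: edge detection followed by a
    probabilistic Hough transform over the lower half of the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where the road surface
    # usually appears from a forward-facing camera
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Fit straight line segments to the remaining edge pixels
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=100)

def draw_lines(frame_bgr, lines):
    """Overlay the detected segments on the frame -- a crude augmented view."""
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame_bgr, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return frame_bgr
```

Notice that drawing the detected lines back over the camera frame is itself a simple augmented reality effect: the machine’s model of the road is rendered on top of the optical view.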
A more recent video from Ford shows the model of the world perceived through the range of sensors on one of their autonomous vehicles.
Part of the challenge of proving autonomous vehicle technologies to regulators, as well as to development engineers, is being able to demonstrate what the vehicle thinks it can see and what it might do next. To this end, augmented reality displays may be useful for presenting, in real time, a view of a vehicle’s situational awareness of the environment it currently finds itself in.
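As a toy illustration of what such a display might involve, the sketch below overlays a vehicle’s object detections, each with a kind and a range, onto a camera frame using OpenCV. The detection data structure is invented for the example; a real system would feed this from its sensor-fusion stage:

```python
import cv2
import numpy as np

def overlay_awareness(frame_bgr, detections):
    """Draw what the vehicle 'thinks it can see' over the camera view:
    a labelled bounding box per tracked object, annotated with range."""
    for det in detections:
        x, y, w, h = det["box"]  # pixel bounding box
        label = f"{det['kind']} {det['range_m']:.0f} m"
        cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame_bgr, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame_bgr

# Invented detections standing in for the output of a fusion stage
detections = [
    {"kind": "car", "box": (320, 200, 120, 80), "range_m": 23.5},
    {"kind": "pedestrian", "box": (90, 210, 40, 110), "range_m": 12.1},
]
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
overlay_awareness(frame, detections)
```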
DO: See if you can find some further examples of the technologies used to demonstrate the operation of self-driving and autonomous vehicles. To what extent do these look like augmented reality views of the world? What sorts of digital models do the autonomous vehicles create? To what extent could such models be used to support augmented reality effects, and what might those effects be?
If, indeed, there is crossover between the technology stacks that underpin autonomous vehicles and augmented reality, computational devices developed to support autonomous vehicle operation may also be useful to augmented and mixed reality developers.
DO: Read through the description of the NVIDIA DRIVE PX 2 system and software development kit. To what extent do the tools and capabilities described sound as if they might be useful as part of an augmented or mixed reality technology stack? See if you can find examples of augmented or mixed reality developers using toolkits originally developed or marketed for autonomous vehicle use, and share them in the comments below.