Automakers increasingly rely on LCDs to show maps, turn-by-turn directions, phone, and stereo information, but that approach invariably pulls a driver’s eyes off the road. San Francisco start-up Navdy offers a different solution: an aftermarket head-up display (HUD) that projects a wide range of information in the driver’s line of sight and supports gesture control.

Navdy, the name of both the device and the company, sits on a car’s dashboard in front of the driver and projects imagery onto a transparent screen that rises a few inches from the unit. Glancing down slightly, the driver sees full-color graphics that appear to float over the road. Chief technology officer Karl Guttag, who has extensive experience working with micro-displays, developed Navdy’s projection technology. Rather than painting a flat image on the transparent plane, Navdy makes its imagery appear focused out in front of the car, reducing the visual disconnect for the driver.

During a demonstration Navdy gave CNET of a prototype unit, the graphics looked very sharp, with better color and resolution than current factory-installed head-up displays.

Although the projected display is a key part of Navdy, the company takes it further by building an Android-based processor into the unit. This computing platform lets Navdy run a wide array of apps, including navigation, though the company will control app approval, allowing only apps it deems safe for use in the car.

More useful to the average driver, Navdy will offer an app that pairs an iPhone or Android phone with the HUD. Navdy projects turn-by-turn directions, computed by the phone’s navigation app, in front of the driver. When a call comes in, Navdy shows the contact’s information, and drivers can see what music is currently playing.

Navdy offers a number of control features as well. A camera mounted in the unit reads the driver’s hand gestures: swiping a hand to the left answers an incoming phone call, for example. To keep Navdy from reacting to stray hand movements, gesture recognition is context-sensitive, activating only during a phone call or other gesture-controllable activity.
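That kind of context gating can be sketched in a few lines. This is a hypothetical illustration of the idea, not Navdy's actual software; the contexts, gestures, and actions are made-up names:

```python
# Hypothetical sketch of context-sensitive gesture handling: gestures are
# only acted on when a gesture-controllable activity is currently active.

GESTURE_ACTIONS = {
    # active context -> {gesture: action}  (all names illustrative)
    "incoming_call": {"swipe_left": "answer_call", "swipe_right": "dismiss_call"},
    "music_playing": {"swipe_left": "next_track"},
}

class GestureController:
    def __init__(self):
        self.context = None  # no gesture-controllable activity yet

    def set_context(self, context):
        """Activate recognition only for a known context (e.g. a phone call)."""
        self.context = context if context in GESTURE_ACTIONS else None

    def on_gesture(self, gesture):
        """Return an action to perform, or None to ignore the gesture."""
        if self.context is None:
            return None  # no active context: stray hand movements are ignored
        return GESTURE_ACTIONS[self.context].get(gesture)

ctrl = GestureController()
print(ctrl.on_gesture("swipe_left"))   # no context yet, so this prints None
ctrl.set_context("incoming_call")
print(ctrl.on_gesture("swipe_left"))   # prints answer_call
```

The point of the pattern is that the recognizer's output is filtered through the current activity, so the same physical gesture is meaningful during a call but harmlessly ignored the rest of the time.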

Author: admin