Introducing Voyage Commander

A state-of-the-art self-driving A.I. to power a robust robotaxi service

Oliver Cameron
Voyage

--

Meet Commander, the Brain of our Robotaxi

The human brain has evolved over hundreds of thousands of years to the point where a student driver can learn to drive a car with only a few hundred miles of experience. From their first mile of driving, student drivers harness the powerful perception, prediction, and planning system built into every human brain, a system that makes us remarkably capable of driving a car. However, humans are fallible, and when it comes to driving, states like being tired, distracted, angry, or under the influence can cause fatal accidents.

At Voyage, we are not building general intelligence—flaws and all—instead, we are building a brain with a single responsibility: navigating a car from point A to point B more safely than a human. A brain that cannot get angry or tired, that cannot answer questions or read books, but that drives a car with superhuman safety and accuracy. Like any brain, it gets better and better at driving every time it drives. This brain is a self-driving A.I. that we call Commander, which, when combined with Shield, Telessist, and the G3, forms a powerful self-driving technology.

The Brain of our Robotaxi

Commander is a self-driving A.I. that’s responsible for autonomous point-to-point driving within our communities and towns. Commander is powered by our state-of-the-art perception, prediction, planning, and behavior modules — all running atop a safety-certified middleware and monitored by our self-diagnostic systems. This technology can safely and autonomously navigate road features such as unprotected turns, intersections, double-parked cars, and lane merges.

A Superhuman Driver

To make safe and intelligent driving decisions, Commander needs to know where pedestrians and vehicles are and what they’re going to do next.

Accurately perceiving and interacting with vehicles, scooters, bicyclists, and pedestrians

The first input to each driving decision is our cutting-edge perception module, responsible for identifying dynamic objects and static obstacles to pay close attention to. Commander’s perception module combines a deep learning-powered computer vision system—trained on millions of data points—with multiple reliable classical computer vision algorithms as fallbacks. This approach brings to bear the benefits of deep learning’s intelligence while retaining the robustness of tried-and-tested robotics, ensuring we detect every dynamic and static object.
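To make the fallback idea concrete, here is an illustrative sketch in Python. Every name below (the detector functions, the source labels) is invented for the example and is not Voyage's actual API; the point is only the pattern of preferring the deep learning detector while keeping classical detectors as a safety net.

```python
def detect_objects(frame, dl_detector, classical_detectors):
    """Try the deep learning detector first; if it errors out or finds
    nothing, fall back to classical computer vision detectors in order."""
    try:
        detections = dl_detector(frame)
        if detections:
            return detections, "deep_learning"
    except Exception:
        pass  # a crashed model must never take perception down with it
    for name, detector in classical_detectors:
        detections = detector(frame)
        if detections:
            return detections, name
    return [], "none"

# Stub detectors standing in for real models:
def failing_dl_model(frame):
    raise RuntimeError("model inference failed")

def lidar_clustering(frame):
    return ["pedestrian", "vehicle"]

detections, source = detect_objects(
    frame=None,
    dl_detector=failing_dl_model,
    classical_detectors=[("lidar_clustering", lidar_clustering)],
)
```

Even with the primary model failing outright, the classical fallback still reports both objects, which is the robustness property the paragraph above describes.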

Our computer vision approach is predicated on both measuring and estimating depth. Our 3D depth sensors feed physics-accurate depth measurements into our deep learning algorithms, while our 2D sensors—cameras—feed pixels from which we estimate the depth of objects. Both the measurements and estimations are then fused into a single representation, resulting in a view of the world that is both accurate and information-rich.
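One minimal way to fuse a measured and an estimated depth is inverse-variance weighting, where the more trusted reading dominates the result. This is only a sketch under that assumption—the values and the `DepthReading` type are invented for the example, and the real fusion pipeline is certainly more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class DepthReading:
    """A depth value (meters) with its estimated variance."""
    depth: float
    variance: float

def fuse_depth(measured: DepthReading, estimated: DepthReading) -> DepthReading:
    """Inverse-variance weighted fusion: each reading is weighted by
    1/variance, so the lower-variance reading dominates the result."""
    w_m = 1.0 / measured.variance
    w_e = 1.0 / estimated.variance
    fused = (w_m * measured.depth + w_e * estimated.depth) / (w_m + w_e)
    return DepthReading(depth=fused, variance=1.0 / (w_m + w_e))

# A precise lidar measurement and a noisier camera estimate:
lidar = DepthReading(depth=12.0, variance=0.01)
camera = DepthReading(depth=12.6, variance=0.25)
result = fuse_depth(lidar, camera)
```

Note that the fused variance is smaller than either input's, which is the sense in which the combined view is "both accurate and information-rich": the estimate gains from both sensors rather than merely picking one.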

Commander thoughtfully navigates a complex scenario

With precise perception input, Commander then predicts what the identified objects are going to do next. Commander’s prediction engine uses a combination of advanced probabilistic models, behavioral models, and high-definition maps to predict what’s going to happen around our robotaxis. For example, if a pedestrian is walking in the direction of the roadway ahead of our robotaxi, our prediction engine outputs a range of possible futures for where that pedestrian could be located a few moments from now. Commander’s prediction engine then weighs thousands of potential combinations of scenarios, selecting the future it has the highest confidence will come true.
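The "thousands of potential combinations of scenarios" idea can be illustrated with a toy example: enumerate one future per agent, score each joint scenario by the product of its per-agent probabilities, and keep the most likely. The agents, actions, and probabilities below are invented for the sketch:

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Future:
    agent: str
    action: str
    probability: float

def best_joint_scenario(per_agent_futures):
    """Enumerate joint scenarios (one future per agent), score each by
    the product of its per-agent probabilities, and keep the best."""
    best, best_p = None, -1.0
    for combo in itertools.product(*per_agent_futures):
        p = 1.0
        for future in combo:
            p *= future.probability
        if p > best_p:
            best, best_p = combo, p
    return best, best_p

pedestrian = [Future("pedestrian", "stays on sidewalk", 0.7),
              Future("pedestrian", "enters roadway", 0.3)]
cyclist = [Future("cyclist", "holds lane", 0.8),
           Future("cyclist", "merges left", 0.2)]
scenario, confidence = best_joint_scenario([pedestrian, cyclist])
```

With many agents and many futures each, the combination count grows multiplicatively, which is why a real prediction engine must prune or approximate rather than enumerate exhaustively as this toy does.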

The art of prediction is one of the great problems in self-driving, and we are really proud of just how superhuman our prediction engine has become over time.

Commander makes split-second decisions to enable safe driving

Now that Commander has a prediction of the future, Commander’s behavior engine is asked to make a safe driving decision: continue driving, slow down, yield, or pursue a more advanced driving maneuver. Within Commander’s behavior engine, we’ve pioneered a new form of fleet learning that improves our decision-making capabilities with each mile driven.

Instead of relying upon gargantuan datasets of weak driving events (e.g., driver take-overs) to train our robotaxis to make human-like driving decisions, our data team curates a detailed dataset of explicitly tagged driving events, each labeled as an affirmation or a correction. For example, when our robotaxi overtakes a pedestrian cautiously, with the right distance and speed, we create a tagged driving event affirming that the robotaxi did the right thing by slowing to 15 MPH and keeping a 2-meter gap. This is akin to telling a child “nice work!” and explaining explicitly what they did well.

We also correct erroneous behaviors in this dataset. For example, when our robotaxi stays stuck behind a parked truck instead of overtaking, we explicitly tag that stopping and waiting was the wrong behavior, and that the robotaxi should have overtaken at a safe distance. We then train the machine learning algorithms powering Commander’s behavior engine on these affirmations and corrections, so our robotaxis learn from their past successes and failures.
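The dataset structure implied above—events explicitly tagged as affirmations or corrections, with corrections supplying the behavior the planner should have chosen—might be sketched like this. The field names and event text are hypothetical, not Voyage's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrivingEvent:
    """One explicitly tagged event in a fleet-learning dataset."""
    scenario: str
    action_taken: str
    label: str                                # "affirmation" or "correction"
    corrected_action: Optional[str] = None    # only set for corrections

events = [
    DrivingEvent(
        scenario="pedestrian on shoulder",
        action_taken="slowed to 15 mph, kept 2 m gap",
        label="affirmation",
    ),
    DrivingEvent(
        scenario="parked truck blocking lane",
        action_taken="stopped and waited",
        label="correction",
        corrected_action="overtake with safe lateral distance",
    ),
]

# Corrections supply the behavior the planner *should* have chosen, so a
# training pair is always (scenario, desired action):
training_pairs = [
    (e.scenario,
     e.corrected_action if e.label == "correction" else e.action_taken)
    for e in events
]
```

The key design point is that both event types yield the same supervised target: an affirmation's target is the action actually taken, a correction's target is the replacement action, so one training loop consumes both.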

Commander is taught to be cautious around pedestrians

Knowing The Limitations

On its mission to exceed the safety of human driving, Commander is encoded with a series of driving constraints. These constraints—such as speed, proximity to dynamic objects, or predictability of the environment—ultimately result in consistent, rule-abiding, and safe driving behaviors. When Commander detects something in the road ahead that it cannot handle on its own, it gracefully transitions control to Telessist, our novel remote assistance technology. Telessist enables our robotaxi service to scale more quickly, safely driving riders through any scenario the world can throw at us.
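A toy sketch of that handoff logic, using the three example constraints named above. The thresholds and function names are made up for illustration and stand in for Commander's real, far richer constraint set:

```python
def within_constraints(speed_mph, gap_m, scene_predictability,
                       max_speed=25.0, min_gap=2.0, min_predictability=0.6):
    """True when every encoded driving constraint is satisfied:
    speed, proximity to dynamic objects, and scene predictability."""
    return (speed_mph <= max_speed
            and gap_m >= min_gap
            and scene_predictability >= min_predictability)

def control_mode(speed_mph, gap_m, scene_predictability):
    """Commander drives while constraints hold; otherwise control
    gracefully passes to remote assistance."""
    if within_constraints(speed_mph, gap_m, scene_predictability):
        return "commander"
    return "telessist"
```

For example, a 20 MPH pass with a 3-meter gap in a predictable scene stays with Commander, while shrinking the gap to 1.5 meters triggers the handoff.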

Commander detects the gap is too tight to navigate by itself, passing control to Telessist

Proactive Safety Systems

All systems—human or robotic—are capable of failure. A robotaxi with no Safety Driver must be prepared at all times to handle such failure gracefully and safely. While driving, Commander is continuously monitored by a comprehensive diagnostics system that proactively catches hardware, software, or vehicle issues before they can cause dangerous events. When our diagnostics system detects something is awry—big or small—the vehicle is brought to a safe stop.

A controlled stop due to a perception issue

Key to delivering a reliable diagnostics system is validated hardware and software that does not fail silently. Our self-driving technology operates atop automotive-grade compute, an ISO 26262 ASIL-D rated real-time operating system, and a safety-certifiable middleware. This array of validated components enables our diagnostics system to proactively catch issues and bring the vehicle to a safe stop before a dangerous event can occur.
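One common way such a diagnostics system avoids silent failure is a heartbeat watchdog: every monitored module must report in within a deadline, or the vehicle commands a controlled stop. The sketch below is a generic illustration of that pattern, not Voyage's implementation; all names and the 100 ms deadline are assumptions:

```python
class DiagnosticsMonitor:
    """Watchdog-style monitor: each module must report a heartbeat
    within its deadline, or the vehicle commands a controlled stop."""

    def __init__(self, deadline_s=0.1):
        self.deadline_s = deadline_s
        self.last_heartbeat = {}  # module name -> timestamp (seconds)

    def heartbeat(self, module, now):
        self.last_heartbeat[module] = now

    def stale_modules(self, now):
        """Modules whose last heartbeat is older than the deadline."""
        return [m for m, t in self.last_heartbeat.items()
                if now - t > self.deadline_s]

    def decide(self, now):
        return "safe_stop" if self.stale_modules(now) else "continue"

mon = DiagnosticsMonitor(deadline_s=0.1)
mon.heartbeat("perception", now=0.00)
mon.heartbeat("planning", now=0.02)
```

A module that crashes or hangs simply stops heartbeating, so its failure surfaces as a missed deadline rather than staying silent—which is exactly the property the paragraph above calls for.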

Commander in Action

In Commander, Shield, Telessist, and the G3, we’ve built all the necessary technologies to power a robust robotaxi service. This service is designed to transport senior citizens—the power users of driverless technology—around their communities and towns, from their doorstep to their destination. Since Voyage’s founding nearly four years ago, these technologies have matured rapidly, to the point where scaling fully driverless robotaxi services is now within reach.

Watch our robotaxi service in action below, featuring hours of fully autonomous driving with no need for in-car or remote assistance. If you’re interested in working on this problem with us, please apply.

Hours of fully autonomous driving with no need for in-car or remote assistance
