Waymo, the Google spin-off company working to bring fully autonomous vehicles to the roads, has released its first safety report, which highlights the company's safety measures and the philosophy behind its self-driving vehicle technology.

The safety report details how Waymo approaches safety for its autonomous vehicles from the ground up, how its vehicles work, the measures in place to ensure safe travels, how Waymo tests its vehicles and more. The report also highlights the risks of human error while driving.

In the past, Waymo has detailed how the company uses simulations, real-world driving, closed course tests and a range of hardware and base vehicle safety testing measures to ensure a safe ride.

Waymo's vehicles run continuous self-checks to confirm their systems are working while the vehicle drives. They can automatically handle faults and failures without human intervention, behavior shaped by thousands of hours of development and testing. The vehicles also carry backup systems for redundancy, including secondary steering, braking, power and computing.
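Waymo doesn't publish how that failover logic is actually implemented, but as a rough, hypothetical sketch of the redundancy idea, the Python below pairs each primary subsystem with a backup and switches over when a fault is reported. Every name and check here is invented for illustration; none of it comes from Waymo's report.

```python
from dataclasses import dataclass
from enum import Enum, auto


class SubsystemStatus(Enum):
    OK = auto()
    FAULT = auto()


@dataclass
class Subsystem:
    name: str
    status: SubsystemStatus = SubsystemStatus.OK


class RedundantSubsystem:
    """Pairs a primary subsystem with a backup and switches over on a fault."""

    def __init__(self, primary: Subsystem, backup: Subsystem):
        self.primary = primary
        self.backup = backup

    def active(self) -> Subsystem:
        # Fall back to the backup whenever the primary reports a fault.
        if self.primary.status is SubsystemStatus.FAULT:
            return self.backup
        return self.primary


def self_check(pairs: list[RedundantSubsystem]) -> bool:
    """Return True only if every redundant pair still has a healthy unit."""
    return all(pair.active().status is SubsystemStatus.OK for pair in pairs)


# Example: a fault in the primary braking unit is absorbed by its backup.
braking = RedundantSubsystem(Subsystem("brake_primary"), Subsystem("brake_backup"))
steering = RedundantSubsystem(Subsystem("steer_primary"), Subsystem("steer_backup"))
braking.primary.status = SubsystemStatus.FAULT
assert self_check([braking, steering])  # the backup keeps the braking pair healthy
```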

Waymo's vehicles also learn from one another through an interconnected network. When one vehicle encounters a specific driving scenario and learns how to navigate it successfully, the rest of Waymo's fleet learns the same lesson.
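As a loose illustration of what "the rest of the fleet learns the same lesson" could look like in software, here is a toy Python sketch of a shared scenario store that every vehicle reads from and writes to. It is purely hypothetical and heavily simplified; the report does not describe Waymo's actual mechanism.

```python
from typing import Optional


class FleetKnowledge:
    """Toy shared store: scenarios one vehicle handles become available to all."""

    def __init__(self) -> None:
        self._responses: dict[str, str] = {}

    def record(self, scenario: str, response: str) -> None:
        self._responses[scenario] = response

    def lookup(self, scenario: str) -> Optional[str]:
        return self._responses.get(scenario)


class Vehicle:
    def __init__(self, vehicle_id: str, fleet: FleetKnowledge):
        self.vehicle_id = vehicle_id
        self.fleet = fleet

    def learn(self, scenario: str, response: str) -> None:
        # What one car learns is written back to the shared store.
        self.fleet.record(scenario, response)

    def handle(self, scenario: str) -> str:
        known = self.fleet.lookup(scenario)
        return known if known else "fall back to cautious default behavior"


fleet = FleetKnowledge()
car_a = Vehicle("car_a", fleet)
car_b = Vehicle("car_b", fleet)

car_a.learn("pedestrian_in_bike_lane", "slow down and give extra clearance")
print(car_b.handle("pedestrian_in_bike_lane"))  # car_b benefits without re-learning
```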

In its blog post and safety report, Waymo notes that there wasn't a "playbook" on safety for self-driving vehicles. As a result, the company has developed its own set of safety principles, which are laid out in the report.

Waymo has been working on its technology for more than eight years and has been testing its self-driving vehicles in Arizona since April 2016. Recently, Waymo launched an awareness campaign for its self-driving vehicles in Arizona, partnering with local groups such as East Valley Partnership and Mothers Against Drunk Driving.

Since April of this year, the Mountain View, California-based company has been driving residents around the Valley as part of a pilot program aimed at getting its cars to handle people's everyday transportation needs.

Waymo's System Safety Program addresses how the company approaches several areas of safety. The safety report explains how Waymo's technology handles traffic rules and driving scenarios, the functionality of that technology, the vehicle's crashworthiness and its safety for those who interact with the vehicle, such as passengers and first responders.

The report describes at length how Waymo's self-driving vehicles observe and react to the world around them, a four-step process that answers the same four questions humans must constantly answer as they drive.

The four questions Waymo asks and answers for its self-driving cars in the report are:

  • Where am I?
  • What’s around me?
  • What will happen next?
  • What should I do?

Waymo goes on to describe how its mapping technology, sensors and software address these questions in driving environments for the best possible outcomes.
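The report stays high level, but the four questions map naturally onto a localization, perception, prediction and planning loop. The Python sketch below is a hypothetical, heavily simplified illustration of that loop; the stubbed functions, object types and thresholds are invented for this example and are not Waymo's.

```python
from dataclasses import dataclass


@dataclass
class Pose:           # answer to "Where am I?"
    x: float
    y: float
    heading: float


@dataclass
class TrackedObject:  # one answer to "What's around me?"
    kind: str         # e.g. "pedestrian", "cyclist", "vehicle"
    x: float
    y: float
    vx: float
    vy: float


def localize(map_data, sensor_data) -> Pose:
    """Where am I? Match sensor readings against a prebuilt map (stubbed)."""
    return Pose(x=0.0, y=0.0, heading=0.0)


def perceive(sensor_data) -> list[TrackedObject]:
    """What's around me? Detect and classify nearby road users (stubbed)."""
    return [TrackedObject("pedestrian", x=5.0, y=1.5, vx=0.0, vy=0.5)]


def predict(objects: list[TrackedObject], horizon_s: float = 2.0):
    """What will happen next? A constant-velocity guess at future positions."""
    return [(o.kind, o.x + o.vx * horizon_s, o.y + o.vy * horizon_s) for o in objects]


def plan(pose: Pose, predictions) -> str:
    """What should I do? Pick a cautious action if anything nears the path."""
    for kind, x, y in predictions:
        if kind == "pedestrian" and abs(y) < 3.0:
            return "slow down and yield"
    return "proceed at current speed"


# One pass of the driving loop; a real system repeats this many times per second.
pose = localize(map_data=None, sensor_data=None)
objects = perceive(sensor_data=None)
print(plan(pose, predict(objects)))
```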

During its testing processes, Waymo has used a variety of techniques to prepare the vehicles for the sometimes unpredictable conditions that can occur on the road.

Some of these tests involved having people jump out of porta potties and throwing stacks of paper at the vehicle's sensors, among other unusual scenarios. I had a chance to ride in Waymo's self-driving car earlier this year and witnessed the car react successfully to a random occurrence on the road, when a pedestrian was walking in the bike lane instead of on the sidewalk.

The company states that safety is at the core of its mission, as it hopes self-driving technology will create safer and easier transportation options for people. In its report, Waymo notes that 94 percent of road crashes involve some sort of human error, whether drunk driving or distracted driving.