
The Implications of the First Fatal Autonomous Vehicle Accident


On Sunday, March 18, 2018, an Uber vehicle operating in autonomous mode, with a human safety operator behind the wheel, struck and killed a pedestrian crossing the road, the first fatal autonomous vehicle accident on a public road. We've now crossed from the theoretical question of what could happen in this scenario to the reality of it actually happening. In our AI Today podcast this week, we explored what happened in this particular situation (based on the limited information available at the time of recording), who and what could be liable or at fault in this and future cases, where the autonomous vehicle industry is heading now that we're in this new reality, and what new laws and regulations could emerge around autonomous vehicles. As a follow-up to that podcast, we'll go over some of what we discussed and our thoughts on how the industry can proceed from this incident.

Untangling the Web of Fault

What makes this accident different from others is that there are more parties and factors at play than usual. While the specifics of this particular accident might point to the fault of one person or company over another, in future incidents the web of fault could be far more complicated. Using this fatal accident as a guide, any of the parties discussed below could be to blame for this or future accidents of this kind.

The Pedestrian

Various sources are already reporting that in this case, the pedestrian was walking a bike and suddenly, unexpectedly stepped in front of the moving vehicle, not giving the system, or the backup human, enough time to respond and prevent the fatal collision. In many ways, then, this accident is like most others: a human behaves unpredictably, leaving no margin to avoid or correct, and a fatal incident results. The one way a situation like this could be different for a self-driving vehicle is that these cars are bristling with sensors and data. In theory, the system can detect and identify pedestrians well in advance of any potential collision and factor in the likelihood of unexpected behavior. In practice, however, this is harder than it sounds.

In crowded cities like New York City, where street pedestrians and bikes compete with traffic, sidewalks and crosswalks are heavily used, and jaywalkers are not so occasional, it would be infeasible for an autonomous system to second-guess itself and stop every time the estimated likelihood of an unexpected entry into its path of travel exceeds some small percentage. So when pedestrians act unexpectedly, even with all the sensors and data available, autonomous vehicles will fare like their human counterparts if there isn't sufficient time to avoid the collision. The only way the system could be at fault in this instance is if the sensors were blocked or the AI system had some sort of fault condition. We explore this further below.
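To make that trade-off concrete, here's a minimal, purely illustrative sketch: a planner that brakes whenever a tracked pedestrian's estimated probability of entering the path of travel exceeds a fixed threshold. The class, field names, and numbers are our own assumptions for illustration, not anything from Uber's actual system.

```python
from dataclasses import dataclass

@dataclass
class TrackedPedestrian:
    distance_m: float          # distance from the vehicle's path of travel
    entry_probability: float   # model's estimated chance of stepping into the lane

def should_brake(pedestrians: list[TrackedPedestrian], threshold: float) -> bool:
    """Brake if any tracked pedestrian exceeds the entry-probability threshold."""
    return any(p.entry_probability > threshold for p in pedestrians)

# A dense city block: dozens of pedestrians, each with a small but nonzero
# chance of stepping into the lane.
crowd = [TrackedPedestrian(distance_m=3.0, entry_probability=0.05) for _ in range(40)]

print(should_brake(crowd, threshold=0.02))  # True: the vehicle stops constantly
print(should_brake(crowd, threshold=0.50))  # False: but sudden entries are now missed
```

A threshold low enough to catch sudden entries stops the vehicle at nearly every crowded corner; a threshold high enough to keep traffic flowing misses exactly the unexpected behavior that matters.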

The AI Technology

In other instances, where it's not clear that a pedestrian is at fault, it's possible for the AI technology itself to be at fault. Autonomous vehicles are very complicated machines. They carry dozens of sensors and make real-time decisions with trained Machine Learning (ML) models built from the collective training and experience of the network of autonomous vehicles. If any of these sensors is blocked (say, with mud, salt, debris, or a too-close-following bike), or if the sensors fail for any reason, the system will not behave as anticipated or expected. Similarly, if the AI system itself fails for any reason (a computer malfunction, invalid or corrupt training data, or perhaps just incomplete data), then likewise the system will fail.
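As a rough illustration of the kind of sensor health check this implies, consider the sketch below. The sensor names and status categories are hypothetical; real AV stacks are far more sophisticated, but the principle of not trusting perception when any sensor is degraded holds.

```python
from enum import Enum

class SensorStatus(Enum):
    OK = "ok"
    OBSCURED = "obscured"   # mud, salt, debris, or a too-close-following bike
    FAILED = "failed"       # hardware or software fault

def perception_trustworthy(sensor_health: dict[str, SensorStatus]) -> bool:
    """Trust the perception stack only if every sensor reports healthy."""
    return all(status is SensorStatus.OK for status in sensor_health.values())

health = {
    "front_lidar": SensorStatus.OK,
    "front_radar": SensorStatus.OBSCURED,
    "camera_array": SensorStatus.OK,
}
print(perception_trustworthy(health))  # False: the obscured radar taints perception
```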

Most likely, what will differentiate accidents involving autonomous vehicles from human-caused ones will be an evaluation of the state of the AI system at the time of the incident. These vehicles will be required, if they aren't already, to maintain black boxes that record key data and decision-making as well as the operational state of the sensors. That way, even if the actions of the pedestrian are unclear, the state of the system can still be determined. Of course, even if all the sensors were working well and the AI system can provide a description of its decisions, without proper Explainable AI (XAI), investigators will not be able to determine why the autonomous vehicle made the decision it did.
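Here's a hedged sketch of what one record in such a black box might look like: each decision cycle logs a timestamp, sensor health, detected objects, the planned action, and a rationale field an XAI layer could populate. The schema is our own guess for illustration, not any vendor's actual format.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float
    sensor_health: dict      # e.g. {"front_lidar": "ok", "front_radar": "obscured"}
    detected_objects: list   # classifier output: type, range, confidence
    planned_action: str      # e.g. "maintain_speed", "brake", "swerve_left"
    rationale: str           # a hook for an XAI layer to explain the choice

def append_to_black_box(record: DecisionRecord, path: str = "blackbox.log") -> None:
    """Append one JSON line per decision cycle for later reconstruction."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_to_black_box(DecisionRecord(
    timestamp=time.time(),
    sensor_health={"front_lidar": "ok"},
    detected_objects=[{"type": "pedestrian", "range_m": 22.0, "confidence": 0.41}],
    planned_action="maintain_speed",
    rationale="object classified below pedestrian-confidence threshold",
))
```

Note the rationale field: without something like it, investigators get a record of what the vehicle did but not why, which is exactly the XAI gap described above.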

Uber / Ride Sharing Company

If the autonomous vehicle systems all check out and the pedestrian's behavior is unclear or established to be not at fault, then the next place to look for blame is the company operating the vehicle. In this case it was Uber's vehicle, but with dozens of companies now entering the space, future cases will be much more involved affairs. Already, some are saying that Uber has been very aggressive about pushing its autonomous vehicle program forward, even in the face of numerous incidents, including running red lights, unexpected lane changes, and some less-than-fatal accidents.

Given those incidents, should Uber (and other companies) have pulled back on the reins of their autonomous vehicle (AV) programs and worked to address those issues more rigorously? Is Uber putting its vehicles into real-world operation too soon? Are other companies also moving too fast? Does Uber share the blame for the way it is operating its autonomous vehicle program? Uber has been aware that its self-driving vehicles have a "problem" with the way they cross bike lanes. The AV companies will tell you they are responding carefully and with due diligence to each incident as it happens, and that they are working within legal and regulatory guidance. So the question is: are we comfortable with that pace, given that accidents like this will most likely continue to happen?

The Uber Human "Supervisor"

Currently, all autonomous vehicles operating in real-world situations are required to have human safety operators to handle any unexpected situations. Video of the fatal accident has already been released, showing both an inside camera view and the forward camera view of the car just prior to the moment of impact. It's not clear whether the human operator was distracted or otherwise occupied at the moment of impact and couldn't respond in time. Police investigators are saying that, in this situation, a human would not have been able to avoid the accident in any case. But perhaps in future incidents, where the pedestrian's behavior is unclear and everything else checks out, the AV company might try to pin the blame on the human safety operator. After all, they will claim, it's that person's responsibility to keep the vehicle, its passengers, and the outside environment safe from harm. So if there was harm, then surely the human safety operator should share some blame.

However, you can also argue that if the system is meant to be autonomous, then it should require zero attention from a human safety operator; if it does, then it's not really autonomous. We have said before that automation is not intelligence, and that automating something halfway and leaving out the hard parts is fairly useless, even in an industry that uses the term "automation" but pretends it means intelligence.

The Passengers

To add one more element to this discussion, even though it is not relevant to this particular fatal accident, it's possible that in future accidents the passengers of the autonomous vehicle could share blame. What? How could passengers who aren't even behind the wheel be a factor in liability? While we still have human safety operators in the car, whose job it is to keep everyone safe, passengers who act recklessly in a way that impairs the operator, or perhaps even the vehicle itself, could be liable. In the not-too-distant future, when fully autonomous vehicles are driving rowdy passengers home after a drunken night out, it would not be unreasonable to hold those passengers liable if their behavior interferes with the operation of the vehicle and causes an accident.

The Road Design

In this fatal accident, the road design at the location of impact has an unusual configuration. The bike lane continues forward, but the turning lane crosses the continuation of that lane, indicated by a dashed line. Regardless, the struck pedestrian was crossing against all those lines and not in a marked crosswalk. Is it possible the pedestrian was crossing at that location because the crosswalk was inconvenient, on the other side of the curve, or otherwise obstructed? Is it possible the AI systems were confused by the lane layout and had trouble determining the intent of the pedestrian? Confusing road markings and design raise all of these questions and can be another source of liability in this and future incidents.

The State of Arizona

AV companies have flocked to Arizona to run their autonomous programs because the state has appealed to them directly with limited regulation, lowered reporting requirements, and other considerations more favorable than those of competing states like California and New York. Some have criticized the laxness of Arizona's approach to AVs, especially since the state has relaxed reporting requirements for less-than-fatal traffic incidents. In this light, it's possible Arizona might share some of the blame for this or future incidents by not having enough of a reporting regimen or oversight of the AV industry. Yet you can also argue that over-regulation might kill or otherwise impair an industry that could be a huge economic boon and technological advantage in the future. Finding the right balance between regulation and oversight is something states will have to grapple with, even in the face of continued accidents.

Is This Accident Truly Different from the Thousands of Purely Human Accidents?

Many inside and outside the AV industry argue that accidents happen and that we shouldn't punish the AV industry more harshly than human drivers. After all, the only reason we're reporting on this incident is how rare it is. Indeed, the fact that there are autonomous vehicles operating at all is newsworthy and rare in itself, so accidents are doubly rare. Meanwhile, even fatal human accidents are barely reported because of how frequent they are. The AV industry will most likely achieve a much better safety record than human drivers, which would be a positive public health outcome. Perhaps states and cities will even begin to favor AV technology over human drivers based on the comparative safety of the two. While this might be true in the future, we operate in the reality of now. In this reality, AVs operating in real-world scenarios are rare, and accidents rarer, and that's why we're so focused on this one. Can the industry learn from this incident and find ways to make autonomous operation safer, clearer, and less prone to future incidents? This is the learning moment for the industry.

What Potential Laws and Regulations Could Come Out of This and Future Incidents?

There's no doubt that the conversation after this fatal accident will have implications for the AV industry. Will it stop or slow the industry down? We don't think so, even though Uber and even Toyota are putting their programs on temporary hold. The AV industry is doing everything it can to learn from this latest incident and prevent a repeat. It will no doubt seriously consider and address the issues this accident raises, and self-regulate in many ways, since it needs to demonstrate the usefulness and safety of AVs to make them a compelling choice for society. Otherwise, the companies would be shooting themselves in the foot by continuing on without making much-needed fixes.

However, we also believe new laws and regulations could be put into place to make the AV industry safer, provide greater value to all stakeholders, and help keep the industry growing with investors and consumers feeling safe. The easiest of these would require better record-keeping, reporting, and transparency from the AV companies. For example, while Arizona has been lax about incident reporting, rules requiring public disclosure of incidents would actually help the industry mature faster by allowing this information to be shared among competing AV companies as well as other constituencies that can help develop the whole industry.

Another possible area for laws and regulations is certification and compliance of individual vehicles. Just as individual states are responsible for their driver's license processes and tests, they might consider laws governing the certification of cars for autonomous operation. This could involve a driving test of sorts for each car on the road, as well as a regular inspection and compliance regimen. While it would be a technical challenge for states to come up with an autonomous version of a driving test, it would be worthwhile to consider a way to certify that vehicles continue to operate without degradation over time.

Likewise, we could see the US federal government, and perhaps foreign governments, put in place new rules around the operation of sensors on these vehicles. Perhaps there will need to be standardized testing for sensors as well as required actions when sensors start to fail. The rules might require AVs to go into a mandatory shutdown mode, or to demand human intervention, if sensors begin to fail or are obscured in a way that prevents nominal performance.
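A simple sketch of what such a mandated fallback policy might look like, assuming escalating responses keyed to the fraction of sensors still performing nominally (the states and thresholds are illustrative assumptions, not any existing regulation):

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    REQUEST_HUMAN = "request_human_takeover"
    SAFE_STOP = "mandatory_safe_stop"

def fallback_mode(healthy_sensors: int, total_sensors: int) -> Mode:
    """Escalate based on the fraction of sensors still performing nominally."""
    fraction = healthy_sensors / total_sensors
    if fraction >= 0.9:       # near-full coverage: continue autonomously
        return Mode.AUTONOMOUS
    if fraction >= 0.7:       # degraded: alert the human safety operator
        return Mode.REQUEST_HUMAN
    return Mode.SAFE_STOP     # too degraded: pull over and shut down

print(fallback_mode(10, 10))  # Mode.AUTONOMOUS
print(fallback_mode(7, 10))   # Mode.REQUEST_HUMAN
print(fallback_mode(5, 10))   # Mode.SAFE_STOP
```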

Finally, we could see the use of passive avoidance technology, such as LIDAR "reflectors" that signal to autonomous vehicles that there is a cyclist or pedestrian to be avoided at all costs, even if the system otherwise fails to recognize the obstacle. These indicators would be the autonomous vehicle equivalent of the flashing lights and reflective clothing that could otherwise have been useful, especially in this particular fatal incident.
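As a rough sketch of how such a passive beacon might work, the planner could treat any detected beacon as an obstacle to avoid regardless of what the classifier reports. The beacon interface below is entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BeaconHit:
    x_m: float   # beacon position relative to the vehicle
    y_m: float

def avoidance_targets(classifier_objects: list[tuple[float, float]],
                      beacon_hits: list[BeaconHit]) -> list[tuple[float, float]]:
    """Union of classified obstacles and beacon positions; a beacon always
    counts, even when the classifier recognized nothing at that location."""
    targets = list(classifier_objects)
    targets.extend((b.x_m, b.y_m) for b in beacon_hits)
    return targets

# The classifier missed the cyclist, but their LIDAR reflector still registers.
print(avoidance_targets([], [BeaconHit(x_m=12.0, y_m=-1.5)]))  # [(12.0, -1.5)]
```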

Regardless of where the industry heads and the outcomes of this fatal incident, Cognilytica will be keeping an eye on the industry, regulatory changes, and any moves the AV companies make as they re-enter the market. If you're looking for insight into what is happening here, whether you're a vendor entering the market with new capabilities or a regulatory authority, we want to hear from you!

 
