

When AI Self-Driving Cars Sue Due To Dangerous Roadway Conditions

By News Creatives Authors, in Business, at July 19, 2021

You are driving along on a highway and enjoying the open road.

Up ahead, a curve is coming. You are currently zipping along at the topmost allowed highway speed (well, plus a tad bit faster, though you would never admit that). The curve doesn’t look overly onerous, at first glance.

So, you proceed apace.

Turns out that as you begin to take the curve, you suddenly and shockingly discover that you are moving way too fast for this curve. The wheels of the car begin to lose traction. You can feel the vehicle pulling fervently and you are fighting dearly with the steering wheel to stay on the roadway. It is pretty much too late to try and slow down since you are already deep into the curve.

Yikes!

Sweat pours down your forehead as you, the car, and the highway are doing battle. You glance over at the edge of the highway and realize that you might end up going into a ditch. Not what you want to do. Not what you had in mind. A tinge of fear and pure panic begins to overtake your mind as you scramble to save yourself and the vehicle from potential destruction.

Luckily, you somehow pull through the curve and pop out onto a straightaway. All is good. You faced the grim reaper and lived to tell the tale. The whole matter took only a handful of seconds to play out, and yet it seemed like a lifetime. This is a curve that you will never forget.

You gradually regain your composure. One thought that starts to go through your mind is whether there was any warning about the curve and the dangers of taking it too fast. Shouldn’t there be a posted sign? Usually, roadway signs warn you about curves and provide a recommended reduced speed.

Sure, undoubtedly, you oftentimes ignore those signs, but at least you do glance at them. Your feeling is that you know how to drive your car and there is no need for some stupid sign to offer you advice. In this case, you mull over that perhaps those posted signs can be useful. It is conceivable that you might have instinctively slowed down by having seen a posted sign, even if you only noticed it fleetingly and didn’t give it your complete concentration.

Later that day, you return home via that same path.

When you reach the juncture where the posted sign should have been, you notice that it is lying on the ground. Nobody would see it unless they perchance were looking specifically for it, and even then it was nearly impossible to see (lying partially obscured in a bunch of weeds and thick grass). The sign seemed to have gotten damaged and possibly knocked over, perhaps by some wayward prior driver. That’s unfortunate.

Actually, that’s a potentially life-threatening aspect, namely that this essential and downed sign ought to be put back into its proper place so that those approaching the curve will be sufficiently alerted and prudently on their toes.

You decide to call the government agency responsible for that stretch of road and inform them about the downed sign. Why so? Because you feel strongly that it is your duty as a responsible person, and you also want to aid in ensuring that no one else gets into a terrible accident. It is your good deed for the day, perhaps for the week.

Two months later, you read in the news about a car that took that very same curve and flew uncontrollably into the ditch. The driver and two passengers were seriously injured. No one was killed, thankfully.

You are wondering whether that downed warning sign was ever put back into position. On your next occasional drive in that direction, you opt to go over to the curve and see if the sign is up.

Darn, the sign is still lying flat on the ground.

Without that posted sign, those innocent people in that wiped-out car did not seemingly get any heads-up about the curve and the kind of speeds to be safely going. They probably were cruising along at the prevailing highway speeds and simply took the curve without any special speed corrections. Perhaps, if the government agency had put that sign back up, those people would not have been injured and they would have taken the curve without any issues.

Somebody ought to sue.

What do you think?

Should the people in that car accident opt to go after the government agency that had responsibility for the posting of the warning sign? The act of suing that agency could potentially aid in recovering some of the costs of their totaled car and their hefty medical bills. In addition, the agency might get its act together and put that sign up, along with being more diligent overall about such signage.

One question that might come up is whether the agency knew or should have known about the downed sign. In this instance, the phone call to them two months ago was seemingly sufficient notification. Two months of elapsed time would ostensibly be ample for the agency to have put the sign back up or posted a new sign.

All in all, it seems like the government agency dropped the ball and an unsuspecting driver and some passengers paid a harsh price accordingly.

The government might argue that the driver of the car is at fault.

Regardless of whether a posted sign was there or not, the driver was responsible for driving safely. The buck stops with the driver, as it were.

Furthermore, the government might question the capacity of the driver. Was the driver possibly drunk? Was the driver drowsy? Maybe the driver was watching cat videos and allowed themselves to be distracted from the roadway.

In contrast, the driver argues that their driving was completely lawful and fully abided by the speed limit on that highway. Without the posted sign being visible and apparent, nobody could have known that a reduction in prevailing speed was required. In short, the government cannot shirk its duty to ensure that proper signage exists for a public roadway that is known by the government to be inherently dangerous. Indeed, there was a sign already there, thus proving that the government knew this danger existed, and the agency failed to fulfill its duties properly by making sure that the sign was adequately posted.

Meanwhile, the government agency decides to claim that sovereign immunity applies. This is a legal way of saying that the agency is immune to civil lawsuits and cannot be sued accordingly. No litigation of this nature can be legally sought against the agency.

In reply, the attorney for the driver contends that there are various exceptions to the immunity provision and that this instance is indeed an exception. Per the attorney, the agency has flagrantly failed to properly maintain the roadways. And pursuant to the laws applicable in this case, the lack of suitable and timely roadway maintenance provides an opening to pierce through the usual immunity and bring the agency into court.

The plot somewhat thickens when it is discovered that a highway construction firm had been hired by the agency to do work in that area. They saw the downed sign. They did nothing about it. A decision is made by the driver to have the attorney go ahead and sue the construction company too.

It is all a bit of a sordid mess.

I certainly hope that none of you are ever embroiled in such a complicated circumstance.

Let’s slightly shift gears.

The future of cars consists of AI-based true self-driving cars.

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

Here’s an intriguing question that is worth pondering: Is it conceivable that AI-based true self-driving cars might get enmeshed into lawsuits about the status of roadway conditions, and if so, what would that entail?

I’d like to first further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones that the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
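For readers who think in code, the level taxonomy described above can be sketched as a simple lookup. This is a purely illustrative snippet, not drawn from any automaker's software; the class and helper names are my own invention:

```python
# A minimal, illustrative sketch of the driving-automation levels.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    human_driver_required: bool

SAE_LEVELS = [
    AutomationLevel(0, "No Automation", True),
    AutomationLevel(1, "Driver Assistance", True),
    AutomationLevel(2, "Partial Automation", True),       # semi-autonomous (ADAS)
    AutomationLevel(3, "Conditional Automation", True),   # semi-autonomous (ADAS)
    AutomationLevel(4, "High Automation", False),         # true self-driving
    AutomationLevel(5, "Full Automation", False),         # true self-driving
]

def is_true_self_driving(level: int) -> bool:
    """Levels 4 and 5 need no human driver; Levels 2 and 3 co-share the task."""
    return not SAE_LEVELS[level].human_driver_required
```

The key split, as the article notes, is simply whether a human driver is required at all.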

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite the videos that human drivers keep posting of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Suing About Roadway Conditions

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic.

There are several ways AI self-driving cars can enter into the picture on this topic. We can consider each of the ways and delve into some of their particulars.

First, one aspect that is likely to arise in a case like this is whether there is any kind of evidence or documented indication about the prevailing roadway status at the time of the incident.

The driver might have to provide tangible indications that can demonstrably showcase the matter at hand. For example, are there any witnesses that the sign was not posted on the day and time of the incident? Can the driver provide any substantive proof that they were driving at the legal speed limit? Was the roadway dry, or was it perchance wet from the rain?

All of this could be crucial to making the case against the government agency.

Here’s where self-driving cars come in handy.

Self-driving cars are equipped with a slew of sensors. This includes video cameras, radar, LIDAR, ultrasonic units, and the like (not all self-driving cars have the same set of sensors). These sensors are used to detect the driving scene. Throughout a driving journey, the sensors are actively collecting data and feeding the data into the AI driving system. The AI driving system then interprets computationally the data to figure out where to drive and how to undertake the driving task.
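The sense-then-interpret loop just described can be sketched in miniature. This is a hypothetical stub of my own devising, not any real vendor's pipeline; a real system would run detection and tracking models where this version merely merges readings:

```python
# Illustrative-only sketch of a sensing cycle: raw readings from each
# sensor are gathered into one frame, fused into a single view of the
# driving scene, and handed to the driving logic.
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    timestamp: float
    camera: dict = field(default_factory=dict)
    radar: dict = field(default_factory=dict)
    lidar: dict = field(default_factory=dict)

def fuse(frame: SensorFrame) -> dict:
    # A real AI driving system would do heavy computation here; this
    # stub just merges whatever each sensor reported into one scene.
    scene = {}
    for reading in (frame.camera, frame.radar, frame.lidar):
        scene.update(reading)
    return scene

frame = SensorFrame(timestamp=12.5,
                    camera={"sign_detected": False},
                    radar={"lead_vehicle_m": 45.0})
scene = fuse(frame)  # the driving logic would plan maneuvers from this view
```

Note that each fused frame, timestamped and stored, is exactly the kind of record that could later become courtroom evidence.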

Some of the time, the data is being stored in onboard computer memory. This can be handy for later on being uploaded into the cloud of the fleet operator via OTA (Over-the-Air) electronic communications. I’ve extensively emphasized that this will be a vast amount of data that can potentially be monetized by the fleet operator. In addition, on the downside of things, this also portends serious concerns about personal privacy.

I’ve labeled this capability as the roving eye, see my discussion at the link here.

So what, you might be wondering.

Once we have a prevalence of self-driving cars on our highways and byways, they will potentially be recording all sorts of aspects of our daily existence. If any self-driving cars were near the incident in which the human driver crashed at the curve, there may be a lot of recorded data about the incident.

If the data was uploaded to the cloud of the fleet operator, both sides of the case might seek to get access to it. The recorded video might well reveal that there wasn’t a posted sign at the time of the incident. It might indicate the speed of the car that failed to make the curve. The roadway conditions might be apparent, such as whether the roadway was dry or wet. Likewise, the radar, LIDAR, and other sensory data could be instrumental to the case.

There are several complexities to this.

You can expect that there will be numerous different automakers and self-driving tech firms that will be producing and fielding self-driving cars. Each of them will likely have their own clouds that store the data procured by their autonomous vehicles. They will undoubtedly also have their own proprietary formats for the data. They will probably have different retention policies about how long they keep their data intact.

This means that it might be difficult or costly to get this data. In addition, the data might need to be pieced together, taking some data from one brand of a self-driving car and trying to match it with the data from some other brand of a self-driving car. Kind of a technological nightmare.
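To make the piecing-together problem concrete, here is a hypothetical sketch with two invented fleet formats. The field names, units, and helper functions are all assumptions for illustration; the point is only that each brand's records must be normalized before an analyst can merge them into one timeline:

```python
# Two fleet operators log the same moments in different proprietary
# formats; an analyst normalizes both into one common record shape.

def normalize_fleet_a(rec: dict) -> dict:
    # Hypothetical Fleet A: speed in m/s under "v", epoch-second timestamps.
    return {"time_s": rec["ts"],
            "speed_kph": rec["v"] * 3.6,
            "sign_seen": rec["sign"]}

def normalize_fleet_b(rec: dict) -> dict:
    # Hypothetical Fleet B: speed in mph, millisecond timestamps.
    return {"time_s": rec["t_ms"] / 1000.0,
            "speed_kph": rec["speed_mph"] * 1.609344,
            "sign_seen": rec["warning_sign_visible"]}

# Merge both brands' records into a single chronological timeline.
merged = sorted(
    [normalize_fleet_a({"ts": 100.0, "v": 27.0, "sign": False}),
     normalize_fleet_b({"t_ms": 99500, "speed_mph": 61.0,
                        "warning_sign_visible": False})],
    key=lambda r: r["time_s"],
)
```

Even this toy version shows why the exercise gets costly: every additional brand means another converter, and real formats disagree on far more than units and timestamps.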

Worse still, how would you even know which self-driving cars perchance came upon that specific driving scene within the realm of time pertinent to the incident?

You would seemingly not know which self-driving cars were there, if any. As such, you might have to go to all of the fleet operators and ask all of them to produce whatever relevant data they might have. This again could be a huge undertaking for all parties involved.

The other question is whether the cloud owners are willing to give up that data. Maybe yes, maybe no. They might contend that the data reveals insider aspects about how their tech works. Numerous objections could be raised. This would seemingly play out in court with all kinds of legal wrangling taking place.

The odds are that this type of situation might become a court case in its own right. You can envision that a series of court cases about what the self-driving car firms have to provide or not provide in these matters could slowly wind their way through the judicial system.

Now that we’ve covered that angle on the topic, get ready for a different one.

Suppose a self-driving car is driving along and comes to that curve. The AI driving system is scanning for any nearby posted signs. None are detected, which in this case is because the posted sign is lying flat on the ground.

The AI driving system proceeds to take the curve.

Turns out that the algorithms used for the AI driving system were not expecting that the curve would be so dangerous. The AI driving system begins to lose control of the autonomous vehicle.

Oh no, the self-driving car goes into the ditch.

I realize that some smarmy insiders would say that this could never happen. They would argue that the AI driving system would have been established with extensive maps beforehand, clearly indicating that this dangerous curve existed. It would not matter whether the posted sign was there. The posted sign was essentially superfluous since the computer system already had lots of highly detailed maps.

Though it is the case that most of the AI driving systems are being devised to depend upon pre-mapping, this is not necessarily the case for all self-driving cars. Ergo, you cannot guarantee that all self-driving cars will have those kinds of pre-mapped indications at hand.

I’ll also switch up the scenario to showcase that merely having maps does not solve all such roadway problems.

Assume that the curve was recently altered by the roadway construction crew that went out there to do maintenance (notice that I cleverly planted that seed in your mind, earlier). This change to the curve is not shown on any maps as yet. The mapping of the roadway was done a week ago, or whenever, and the maps are outdated about the status of the existing roadway.

This can and will happen, let no one contest that.

The AI driving system should always be scanning for any roadway signs. This is important in case a roadway sign has been put up to forewarn about some recent change to the roads. For example, pretend that a temporary warning sign was supposed to be put up in front of this particular curve, but it was not placed there.

Okay, so we have a self-driving car that based on the preloaded maps does not know that the curve is now dangerous, whereas before we’ll say it was safe and easy to traverse. Furthermore, we are going to say that there is no signage to warn about the changes to the roadway.

Under these conditions, I would suggest it is possible that the AI driving system could computationally fail to ascertain how to take the curve, and might end up in a predicament that gets the self-driving car into a status that becomes uncontrollable.
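The failure mode just described can be sketched as a fallback chain. This is my own hypothetical decision logic, not any real AI driving system's: the planner prefers a detected warning sign, falls back to the preloaded map, and otherwise assumes the prevailing limit. With the sign down and the map stale, the chain confidently gives a dangerous answer:

```python
# Hypothetical sketch of curve-speed selection with a fallback chain.
from typing import Optional

def curve_speed_kph(sign_advisory: Optional[float],
                    map_advisory: Optional[float],
                    prevailing_limit: float) -> float:
    if sign_advisory is not None:   # a detected posted sign wins
        return sign_advisory
    if map_advisory is not None:    # else trust the pre-mapped advisory
        return map_advisory
    return prevailing_limit         # else assume the highway limit applies

# Sign knocked down (None) and map predating the construction change (None):
chosen = curve_speed_kph(sign_advisory=None, map_advisory=None,
                         prevailing_limit=105.0)
# The planner selects full highway speed for a curve now safe only slower.
```

Nothing in this logic is wrong per se; every input the system could check said the curve was ordinary. That is precisely the predicament described above.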

Some of you might be pulling your hair out and yelling that this is rarely going to happen. Though I don’t necessarily agree with that claim, let’s assume I concede it. Nonetheless, there is now an existence proof, of sorts, that we both agree this could happen. Sure, it is perhaps a rarity, an edge or corner case, but it can in fact happen.

I’m going to then proceed with this discussion by standing tall on the aspect that it can happen.

If it did happen, the next consideration is a mind-bender.

Should the self-driving car company sue the government agency that is responsible for that stretch of roadway and/or the construction firm too?

Notice that I’m suggesting that the fleet operator or the maker of the self-driving car would do the suing. I mention this because it is not as though the AI is sentient and opts to sue the government. In the instance of a human driver, it would presumably be the human driver that did the suing. For a self-driving car, it is not the AI driving system that would do the suing.

That being said, you might be puzzled as to why any lawsuit would be warranted. If there wasn’t a human driver, there would not seem to be any basis to bring a personal injury lawsuit. The AI isn’t a person (which, as a side note, some believe that AI systems might eventually have some form of legal personhood, see my discussion at this link here).

Aha, there are at least two considerations to keep in mind.

There are the damages that the self-driving car incurred as a result of going into the ditch. Those damages to the autonomous vehicle can potentially be sought for recoupment.

More importantly, suppose there were passengers inside the self-driving car. Imagine that three people were inside the self-driving car. All three get injured when the self-driving car goes into the ditch.

Now you’ve got the potential for lawsuits galore.

The people are bound to sue the self-driving car company and the fleet operator. Those passengers might also sue the government and the construction contractor. The self-driving car company and fleet operator might sue the government and the construction contractor. Etc.

Courts are going to have their hands full with the advent of AI self-driving cars, I assure you.

We are going to have a lot of legal fireworks when it comes to the thorny socio-legal issues underlying the development and fielding of self-driving cars, see my further elaboration at this link here. In addition, you might take a look at this recent article about the legal issues entailing the future of autonomous vehicles, written by attorney Roger Royse of the Royse Law Firm and available at this link here.

Conclusion

As eloquently proclaimed in the cherished movie The Princess Bride, I am only getting started on eliciting key facets on this engaging topic.

Here’s one for you.

The government might try to argue that the AI driving system was insufficiently devised to properly take the curve.

Why would the government claim this?

For the same reason that the government might seek to suggest that a human driver was unsafe in their driving efforts. Pin the tail on the donkey, it’s a worthwhile gambit. You can also expect that if the passengers opt to sue, they also will attack the veracity of the AI driving system and how it was devised, wanting to show that the AI driving system was not up to par.

Few of the self-driving car companies and automakers realize they are eventually going to get hauled into court and have to defend their AI driving systems. What decisions were made about how the AI driving system works? What provisions exist to handle situations like this?

On and on, the legal inquiry will go, trying to indicate that the AI driving system was poorly designed, poorly tested, poorly fielded and that the automaker or self-driving tech firms bear responsibility for what took place.

It could be that self-driving cars end up being a keystone for suing about dangerous roadway conditions. Similarly, it could be that self-driving cars are targeted as the source of why a roadway circumstance was not properly handled.

In your mind’s eye, envision a courtroom filled with self-driving developers and engineers, trying to explain and justify the decisions they made when devising their AI driving systems.

I suppose some potential relief for them would be the day that the AI becomes sentient, in which case the AI could reside in the witness stand and be pummeled by probing questions from the attorneys, ostensibly saving those human technoids the trauma of having to do so.
