Are you familiar with the expression that someone is a fink or a no-good dirty rat?
Perhaps you might be more acquainted with other common labels for this, such as a weasel, a snitch, or a stoolie.
Let’s add a vexing ethical question to the matter, namely whether someone can be considered a stool pigeon or a squealer even if they are reporting on an illegal or unlawful act.
You would normally be tempted to assert that reporting a prohibited act is entirely appropriate and the tipster or whistleblower ought to be rewarded rather than ostracized as a tattler or snitch.
Okay, consider a real-world example and see how you do.
You are driving along on your daily journey to the office. There is a stop sign at an upcoming intersection. As you approach the intersection, you notice that there is no other traffic of any kind and there aren’t any pedestrians in the vicinity. Being in a bit of a hurry that morning, you approach the stop sign and make a rolling stop.
To be clear, you nearly came to a full stop, but decided to just kind of ease your way forward and slowly, carefully, judiciously continue in motion. This was definitely not a blatant blasting past a stop sign. You were thoughtful and calculated that there would be no harm in gently observing the stop sign versus fully abiding by the stop sign.
There was no danger, so you took the shot, as it were (thanks goes to Top Gun).
How are you feeling so far about this?
Your action seems perfectly justifiable. You looked both ways. There weren’t any other cars. No pedestrians were nearby. And, notably, you did bring the car nearly to a halt. Admittedly, you inched forward and did not technically make a full stop, but that seems like quite a technicality and not worthy of notable discussion or acrimonious debate.
Time to add a twist to the tale.
Turns out that a neighbor adjacent to the stop sign has set up an automatic camera that takes snapshots of any vehicles that roll through the stop sign. You might say they are doing this for the sake of safety, or maybe it is merely a hobby. In any case, the camera captures your kind-of stop that was really a rolling stop.
The neighbor, upon reviewing the camera footage, realizes they got you, in a veritable sense of hook, line, and sinker. So, they opt to notify the police and turn you in.
Now how do you feel about this?
Maybe your first thought is that this person is a fink, a tattletale, a snitch.
They are making a mountain out of a molehill. Your driving action was relatively innocent, and this sanctimonious do-gooder has turned a teensy-tiny transgression into a full-blown crime. You know in your heart that no one was hurt by your driving action.
If there are any ongoing efforts to get illegal drivers, by gosh, you see plenty of scary and outrageously unlawful driving actions by zany drivers all the time. Go after those nutty drivers that truly put others into jeopardy by running red lights, speeding way beyond the speed limit, and that otherwise are a danger to society while being behind the wheel.
After calming down, perhaps you shift into a realization that the stoolie was trying to do the right thing.
You can entirely understand the motivation to catch those that brazenly brush past a stop sign. Of course, that’s not what you did. You mull over whether you can convince a judge that you are a good driver, and this was a minuscule infraction that should be tossed out of court.
But as you think further on the topic, you start to get enraged again. What a waste of time and energy. It was one stinking stop sign in one stinking little neighborhood on your stinking way to work on one particular day. The neighbor that reported you is a stinking informer and should go after those that zip past the stop sign and leave alone those that were obviously being much more mindful about (somewhat) stopping.
I realize that some of you might be thinking that the neighbor was absolutely right and you ought to have gotten nabbed.
Perhaps your actions were generally well-intended, but maybe you are gradually sliding down a slippery slope toward abject lawlessness. One day, you roll through a stop sign, and the next thing you know, you’ll be justifying driving on sidewalks or ramming into other cars that you dislike. Thankfully, this conscientious busybody with their handy camera has stopped you before you lead a life of outright debauchery and criminal behavior.
I’d wager a guess that few of you are having those kinds of thoughts.
Consider a matter analogous to this stop sign saga.
We already know that as a society there is a great deal of controversy over simple things such as those intersection red-light cameras. The cameras are intended to catch those that opt to run the intersection when the light has gone red. This is decidedly a perilous driving action and often leads to car crashes. Those car crashes lead to people getting injured or killed.
Some believe that a red-light camera serves more as a deterrent than as a means of capturing those that run the red light. Yes, of course, part of the idea is to snap a photo of those that do indeed run the red and to notify and punish them, which might awaken them to never do so again. The other notion is that merely knowing the camera exists will get people to second-guess running the red light, and they will consciously avert doing so. Thus, the camera is more a deterrent than a focus on capture and punishment per se.
Having an all-seeing eyeball that you know will detect your driving transgression is presumably sufficient to cause people to drive more safely.
As an example of the quirkiness of human behavior, some locations have removed their red-light intersection cameras for the very reason that the cameras were supposedly spurring drivers to rush the intersection. Here’s why. Not wanting to get caught by the red-light camera, drivers would intentionally attempt to zip through the intersection. This apparently led to more dangerous driving situations and actually increased the chances of car crashes.
That’s quite a dose of irony.
The adoption of something intended to reduce bad driving seemed to have an unanticipated adverse consequence of prodding people toward bad driving, including those that seemingly would not have been doing bad driving to start with.
Now, be aware, not everyone agrees with those kinds of utilization assessments of the red-light cameras, and there is a lot of research that falls on both sides of the good or bad merits thereof.
Anyway, why all this chattering about the act of tattle-telling?
Time to reveal the true plot at hand.
Allow me a moment to show my cards.
There will eventually be a widespread emergence of self-driving cars.
AI-based true self-driving cars are chock-full of sensory devices that the AI driving system uses to navigate and drive the vehicle. The electronic devices typically used include video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and the like. The primary purpose of that state-of-the-art equipment is to serve as the eyes and ears of the AI driving system. Data is collected via those sensors, and the AI tries to figure out the nature of the driving environment and then ascertain what driving actions can safely be undertaken.
There are though other potential uses of the suite of sensory devices on a self-driving car.
I’ve previously pointed out that those devices are a kind of “roving eye” that can potentially record whatever they see or detect during a driving journey (see my analysis at this link here). This can be handy for a multitude of purposes, such as being able to keep track of homes and real estate in a community and readily allow knowing what properties look like (if the data was posted online and made available for use).
Unfortunately, the same roving eye can have some quite scary outcomes.
Imagine that there are thousands upon thousands of self-driving cars that have been fielded and are weaving their way through your community as they provide ridesharing services. That’s great and a huge convenience for wanting to get a lift. At the same time, those sensory devices are collecting in real-time the activities of whatever is happening in that community.
In theory, if one could aggregate the data, it would be feasible to generally track your activities. Early in the morning, you walk out of your house to take your dog for a walk. Around noon, you and a friend decide to eat lunch on your front patio. Later in the day, you get a lift to a downtown restaurant. Self-driving cars passing by the restaurant spot you going in, and likewise catch images of you as you leave the eatery.
The gist is that it would be technically possible to piece together the jigsaw of your everyday life outdoors via grabbing up the data collected from the numerous self-driving cars. Those self-driving cars are likely to be outfitted with OTA (Over-The-Air) electronic communications, allowing the vehicles to get downloaded patches to the AI driving system, and likewise being able to upload the roving eye data they collected throughout the day.
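To make concrete how uploaded roving-eye data could be pieced together, here is a minimal sketch. The sighting records, subject identifiers, and field layout are all hypothetical assumptions for illustration; the point is simply that once fleet data sits in one place, grouping and sorting it into a per-person timeline is trivial.

```python
from collections import defaultdict

# Hypothetical sighting records uploaded via OTA from individual
# self-driving cars: (subject_id, timestamp, location, observed activity).
sightings = [
    ("person_17", "18:30", "Main St Grill", "entering restaurant"),
    ("person_17", "07:05", "123 Elm St", "walking dog"),
    ("person_17", "12:10", "123 Elm St", "lunch on patio"),
]

def build_timeline(records):
    """Group sightings by subject and sort each subject's records by time."""
    timeline = defaultdict(list)
    for subject, time, place, activity in records:
        timeline[subject].append((time, place, activity))
    for subject in timeline:
        timeline[subject].sort()  # HH:MM strings sort chronologically
    return dict(timeline)

timeline = build_timeline(sightings)
# timeline["person_17"] now lists the day's activities in order,
# starting with the morning dog walk.
```

Even this toy version reconstructs a day’s outdoor movements from scattered observations, which is exactly the aggregation concern described above.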
Once that data gets into the cloud, there is an open question as to what happens to it next.
Presumably, whoever owns the fleet of self-driving cars will want to monetize that data. How they do so and what the data is used for, well, that’s something yet to be decided. Some believe that heavy regulations are going to be needed to protect us from an otherwise inevitable Big Brother situation.
We can add more fuel to that fire.
Returning to the initial tale of the neighbor that perchance set up a camera to monitor the stop sign near their home, we can ratchet up this notion.
Ratchet up a thousand-fold, exponentially so.
There will be human drivers driving their conventional cars and doing so amidst the advent of self-driving cars. Those self-driving cars will be watching the roadway, like a hawk, including spotting whatever any nearby human-driven cars are doing or undertake to do.
Here is the intriguing question to ponder: Will AI-based true self-driving cars potentially snitch or fink upon human drivers nearby that bend or outright bust the driving laws?
Let’s unpack the matter and see.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered a Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars As Snitches
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
As mentioned, the AI driving system includes a host of souped-up sensors. Those sensors are trying to identify anything within the sensory range of the self-driving car. Are there pedestrians nearby? Is there a car up ahead? Are there cars behind the self-driving car? And so on.
Here’s what can occur.
You are driving adjacent to a self-driving car. There is nothing odd or out of the ordinary about this since there are numerous self-driving cars on the roadways (this is somewhat true today in specific locales, such that during a regular driving journey, you can encounter self-driving cars every few minutes, which has happened to me many times).
You and the adjacent self-driving car are approaching a stop-sign intersection that has two lanes. You are in the right lane and intend to stop at the stop sign and then turn right. The self-driving car is in the left lane and, presumably, will come to a stop and then proceed straight ahead.
Sure enough, the self-driving car comes to a full stop at the stop sign.
Meanwhile, perhaps in your haste, you slightly roll through the stop sign, failing to have come to a complete stop. You were only moving inches at a time. Furthermore, there weren’t any pedestrians nearby. The only other car nearby was the self-driving car. You did nothing that endangered the self-driving car. All in all, you simply did a modest and seemingly immaterial roll through the stop sign and then made your right turn.
Nobody would ever know that you did a small oopsie on obeying the law.
Not so fast with that assumption.
The self-driving car was surveying the surrounding driving environment, as it customarily does, and therefore detected the presence of your car being adjacent. The sensors also captured the fact that you did not come to a full stop.
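As a rough sketch of how the sensors could flag a rolling stop, the AI might track the adjacent car’s speed as it crosses the stop line and check whether it ever dropped to zero. The speed values and the stop threshold below are illustrative assumptions, not any real regulation or production algorithm.

```python
# Assumed threshold: below this speed (mph) counts as a complete stop.
FULL_STOP_THRESHOLD_MPH = 0.5

def came_to_full_stop(speed_trace_mph):
    """Return True if the minimum observed speed qualifies as a stop."""
    return min(speed_trace_mph) <= FULL_STOP_THRESHOLD_MPH

# Hypothetical speed samples of the adjacent car near the stop line.
rolling_stop = [12.0, 6.0, 3.5, 2.0, 4.0, 10.0]  # slows but never stops
full_stop    = [12.0, 6.0, 2.0, 0.0, 3.0, 10.0]  # actually reaches zero

# came_to_full_stop(rolling_stop) -> False
# came_to_full_stop(full_stop)    -> True
```

A real perception stack would be far more involved (estimating the other car’s speed from sensor data, localizing the stop line, handling noise), but the core determination reduces to something like this check.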
One possibility is that the self-driving car does nothing about your transgression. You did not harm or threaten the self-driving car. You did not hit any pedestrians. Overall, your act was inconsequential.
But wait a second, you broke the law. If a police officer were standing at the stop sign, they surely would flag you down and issue you a ticket. As a society, we seemingly want the police to take this action. Even though you ostensibly did not pose a danger, you are possibly the kind of driver that does this act all the time. Best to nip things in the bud before you get out of hand.
The AI driving system in this everyday self-driving car is not a cop. It is not enforcing the driving laws (well, not as yet).
Unquestionably, it did detect someone breaking the law.
What do we want the AI to do?
Some would argue vehemently that it is none of the business of the AI. The AI driving system should keep its head down and focus on driving the darned car (that seems somewhat anthropomorphic, so let’s just say that the AI wasn’t programmed to do anything about the detection). Leave others alone.
And, especially leave human drivers alone.
Others would argue just as fervently that this is a godsend of sorts. Having detected a bad driver, the AI should report the matter right away to the police. Furthermore, there is abundant proof to support the contention that an illegal driving act occurred.
If a human driver perchance saw another human driver rolling through a stop sign, there might be a temptation to report that other driver. This, though, is rather problematic. Once you get to court, it will likely come down to your word versus the other driver’s word. Assuming that the other driver won’t admit to the infraction, the whole thing ends up as a potential frustration and waste of time. How is the judge to decide which of you is telling the truth and which is lying?
In the case of the self-driving car, there is a plethora of sensory data that has nabbed you. This seems like pretty hard evidence to overturn. You can bet that clever lawyers will try to find a means to do so. They can argue that the sensory data was faked, akin to the deepfakes that commonly appear today. They could argue that the data was real but has been edited by someone, perhaps a systems administrator with access to the cloud data of the self-driving car fleet. Etc.
Yes, those are all potential legal angles and loopholes, but it will be quite an uphill battle.
Once we have our streets, highways, and byways filled with roaming self-driving cars, there will be a nearly endless supply of recorded detections involving human drivers that have bent or entirely busted the proper driving laws.
You can bet your bottom dollar on this.
Some people have worried that we are going to have more cameras set up like the one that a neighbor placed to spot the stop sign in their neighborhood. We have already seen that the doorbell cameras are capturing quite a lot of what happens in a neighborhood. Undoubtedly, the trend of using stationary cameras will continue.
I respectfully submit that those kinds of static, unmoving cameras will be a drop in the bucket.
The advent of self-driving cars is going to inevitably take off like wildfire.
People will crave the use of self-driving cars, partially due to the hoped-for lower cost of use versus a human-driven ridesharing capability, but also (more so) because it is anticipated that self-driving cars will have many fewer car crashes. There are about 40,000 human fatalities each year due to car crashes in the United States, and about 2.3 million injuries. The belief is that self-driving cars will not drive drunk and will not be distracted drivers, and therefore there will be a sizable reduction in the number of car crashes and, thus, a sizable reduction in the number of injuries and fatalities.
Meanwhile, the craving for self-driving cars will also produce the possibility of routinely and rather extensively catching nearby human drivers that break the driving laws. We need to wrestle with the idea that self-driving cars can be a snitch or, if you will, an added pair of helpful eyes that can detect humans driving unlawfully and promptly report those transgressors to the police.
Some are handwringing that we will become a police state.
At first, it will be the running of a stop sign, and the next thing you know it will be a pedestrian that tosses a candy wrapper on the ground. Others emphasize that maybe we could establish thresholds, such as if the act detected is considered inconsequential or a petty act, the AI won’t be prompted to tattle.
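That threshold idea can be sketched as a simple policy check. The infraction names, severity scores, and cutoff below are entirely hypothetical assumptions invented for illustration; nothing like this is standardized today.

```python
# Assumed severity scores per detected act (higher = more serious).
SEVERITY = {
    "rolling_stop": 1,
    "littering": 1,
    "excessive_speeding": 7,
    "red_light_run": 8,
}

# Assumed policy knob: only acts at or above this severity get reported.
REPORT_THRESHOLD = 5

def should_report(infraction):
    """Tattle only on non-petty infractions, per the assumed threshold."""
    return SEVERITY.get(infraction, 0) >= REPORT_THRESHOLD

# should_report("rolling_stop")  -> False (petty act, the AI stays quiet)
# should_report("red_light_run") -> True  (serious act, reported)
```

Of course, the real debate is over who would set such scores and thresholds, and whether any cutoff short of zero avoids the slippery slope described above.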
One final twist to these twists.
We already know and acknowledge that perhaps human drivers reacted somewhat paradoxically to the red-light cameras, leading to a presumed unanticipated adverse consequence.
Would human drivers react in “unforeseen” ways if they knew that self-driving cars were going to be the proverbial fink or weasel?
Maybe human drivers would do whatever they could to avoid getting within sensory range of self-driving cars. This could produce adverse driving beyond what we already have. Envision that a human driver coming down a quiet street might suddenly make an illegal U-turn to avoid coming upon a self-driving car (kind of what some drivers do when they perchance come upon a police car). For each scofflaw detected by self-driving cars, there might be many more new untoward driving acts generated by human drivers determined to avoid coming under the gaze of a self-driving car.
Or, perhaps humans will try other means to confuse the sensors or somehow mask their car from the sensory suite. Some savvy entrepreneurs will provide electronic “invisibility cloaks” to hide your human-driven vehicle from the detectable scope of the self-driving car sensors. And so on.
Where there is a will, there is a way.
If AI ever becomes sentient, it would certainly be interesting to see what the AI says about all of this, specifically whether the AI would believe it is doing the right thing as an alleged stoolie, or perhaps be upset that humans are fearing it because of the heavy burden of always being the tattletale.
It’s a tough job being AI.