# Who should AI kill in a driverless car crash?



## phrelin (Jan 18, 2007)

Yep, that's the headline - Who should AI kill in a driverless car crash? It depends who you ask. Here are some of the questions:

> Should a car with three occupants, an adult man and woman and a child, swerve into a wall, killing them all, in order to avoid hitting three elderly people, two men and a woman? Should an unoccupied car swerve and kill an unemployed adult man, a child and a cat in order to save an adult man and woman and a child? Does the answer change if the pedestrian light is red? What if one of the people is unfit, or pregnant?
All of which makes one realize that artificial intelligence, as its abilities expand, is going to be making a lot of important decisions, including some we don't consciously make ourselves. In this case, we treat the "swerve" as an instinctive response in most accidents - they are, after all, "accidents".

And then there is the question: Who gets to decide? Elon Musk? Congress?


----------



## James Long (Apr 17, 2003)

Swerve which way? Human drivers are not presented with all of the facts (ages, incomes, employment) and I suspect most humans would choose self preservation. I believe it would take a considerable amount of forethought (such as a fighter pilot choosing to stay with their plane and crash in a less populated area) to make the sort of rationalizations suggested by this article.


----------



## TheRatPatrol (Oct 1, 2003)

SkyNet


----------



## billsharpe (Jan 25, 2007)

Call me a pessimist, but I can only see autonomous cars working well when the great majority of cars (and trucks and buses) are autonomous.


----------



## scooper (Apr 22, 2002)

So how are you going to accommodate MOTORCYCLES and other "non-cars"? For that matter, how about non-AI vehicles? Are you going to require all non-AI vehicles to carry transponders (probably with GPS trackers)?

I'm pretty scared of the idea of sharing the road with AI vehicles when I'm on my MC. Other Motorcyclists I've brought this subject up with have expressed the same concerns.


----------



## NYDutch (Dec 28, 2013)

There might be a case to be made that autonomous vehicles would be more predictable with the human element removed. That should work in the motorcyclists' favor...


----------



## scooper (Apr 22, 2002)

I'm more concerned about the AI dealing with us "unpredictable" human drivers / riders. And in the case of motorcycles - will the AI operated vehicle even "see" / detect us ?


----------



## NYDutch (Dec 28, 2013)

If they can "see" pedestrians and animals on the side of, or in, the roadway, why wouldn't they "see" a motorcyclist? At least some of the current crop of AI driven vehicles use a "collision cost" table in making evasive action decisions. In those tables, a detected human or large animal like a deer or moose ranks very high, while smaller animals like squirrels rank low, calling for less drastic evasive measures.

Most of the current AI vehicles are even using sensors that can detect whether an object is a soft (skin, fur, etc.) or hard (metal, glass, etc.) material as part of the decision making process. The same capability is used for identification in situations like differentiating between a dead skunk and a piece of steel in the road.

I don't think motorcyclists present a major problem either for AI vehicles or themselves. In the future, there may well be some sort of transponder included on motorcycles to make them even safer though. Then again, BMW has demonstrated an autonomous motorcycle...

BMW Unveils Autonomous Motorcycle that Aims to Make Riding Safer
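The "collision cost" table idea NYDutch describes can be sketched in a few lines. This is a hypothetical illustration only: the class names, cost values, and the `evasive_action` helper are all invented for the sketch, and real systems weigh far more factors than a single lookup.

```python
# Hypothetical sketch of a "collision cost" table used in evasive
# decisions. All class names and values are invented for illustration.
COLLISION_COST = {
    "human": 1000,
    "large_animal": 800,   # deer, moose
    "small_animal": 50,    # squirrel
    "soft_debris": 10,     # e.g. a dead skunk
    "hard_debris": 200,    # e.g. a piece of steel
}

def evasive_action(detected_class: str, swerve_risk: int) -> str:
    """Swerve only when hitting the object would cost more than the
    estimated risk of the swerve itself; otherwise brake in lane."""
    cost = COLLISION_COST.get(detected_class, 100)  # default for unknowns
    return "swerve" if cost > swerve_risk else "brake_in_lane"

print(evasive_action("human", swerve_risk=300))         # swerve
print(evasive_action("small_animal", swerve_risk=300))  # brake_in_lane
```

The point of the table is exactly what the post says: a detected human ranks high enough to justify drastic evasion, while a squirrel does not.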


----------



## SamC (Jan 20, 2003)

Self driving cars are science fiction. 

If you assume that an AI can make the millions of decisions that a person makes per second, with 99.9% accuracy (something no computer has ever achieved) that still means everybody is dead within months.


----------



## James Long (Apr 17, 2003)

You don't believe in five nines?


----------



## scooper (Apr 22, 2002)

5 Nines is still 1 in 10,000


----------



## James Long (Apr 17, 2003)

scooper said:


> 5 Nines is still 1 in 10,000


Five nines is one in 100,000. Much better than 99.9% (three nines).
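The "nines" arithmetic is easy to check: n nines of reliability means a failure rate of 1 in 10^n.

```python
# "n nines" of reliability implies a failure rate of 1 in 10**n:
# three nines (99.9%) is 1 in 1,000; five nines (99.999%) is 1 in 100,000.
def failure_denominator(nines: int) -> int:
    """Return N such that the failure rate is 1 in N."""
    return 10 ** nines

print(failure_denominator(3))  # 1000
print(failure_denominator(5))  # 100000
```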


----------



## NYDutch (Dec 28, 2013)

According to the NHTSA, 37,113 people died in vehicle accidents in the US last year. The leading causes of those accidents according to insurance company statistics were distracted driving, drunk driving, reckless driving, and speeding. AI operated vehicles don't get distracted or drunk, and aren't likely to drive recklessly or speed.
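That annual figure works out to roughly 100 deaths per day, a quick sanity check on the "100+ people a day" comparison that comes up later in the thread:

```python
# NHTSA figure quoted above, converted to a per-day rate:
deaths_per_year = 37_113
print(round(deaths_per_year / 365, 1))  # 101.7
```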


----------



## James Long (Apr 17, 2003)

As their use increases we will get better statistics on the accident and death rates of AI vehicles.


----------



## MysteryMan (May 17, 2010)

NYDutch said:


> AI operated vehicles don't get distracted or drunk, and aren't likely to drive recklessly or speed.


They can if they get hacked.


----------



## yosoyellobo (Nov 1, 2006)

*Eeny, meeny, miny, moe. When it reaches 100% then let it decide what God would do.*


----------



## NYDutch (Dec 28, 2013)

MysteryMan said:


> They can if they get hacked.


Do you really want to start down the "Hypothetical" path of the hundreds of possibilities that could occur with AI operated vehicles? Or human operated vehicles? Vehicle failures of all sorts can cause accidents, regardless of the operator format.


----------



## James Long (Apr 17, 2003)

AI leads to less alert drivers. The complacency leads to the person ultimately responsible for safety losing attention. Until the responsibility is 100% taken over by the AI, and that AI is 100% accurate, AI driving is a major risk.


----------



## NYDutch (Dec 28, 2013)

Just as humans have never been 100% accurate as drivers and likely never will be, neither will AI vehicles be, although the decision making speed and faster reaction times of AIs will likely come closer to that ideal than humans are capable of. There is no possible way to program in all possible failure modes, so adaptability - "learning", the true measure of AI performance - is critical to its accuracy. What will be a major leap forward in making AI vehicles safer than human drivers, I believe, will be when AI vehicles are communicating with each other, even if there are still a significant number of human operated vehicles on the road.


----------



## James Long (Apr 17, 2003)

That is where the 100% responsibility comes in. If the AI car is involved in an accident the passengers (including the one in the traditional driver's seat) need not be held responsible.

I like the concept of AI vehicles ... it would help solve parking problems in my life (drop me at the door of work or shopping then park as far away as needed and come back to pick me up when I call). Then the car can take me home while I catch up on my email or spend my commute more productively than watching the road. I assume city folk will use AI cars instead of cabs, human Ubers and Lyfts etc or other shared vehicles. (I live in a rural area where "ride share" coverage is practically non-existent. I would not expect an AI cab company to invest in this area so I'll be in my own car - whether I'm driving it or an AI is driving.)


----------



## NYDutch (Dec 28, 2013)

There's no question that autonomous cars, trucks, etc., raise some unique legal issues that will need to be addressed at some point.

As for cabs and ride sharing, Google's Waymo division, Uber, and Lyft all have AI operated vehicles in public road testing, although Uber has temporarily suspended public testing after one of their Volvos killed a woman in Tempe, AZ last March. Lyft recently reported passing the 5,000 paying passengers mark for their 30 car Aptiv built AI fleet in Las Vegas. Waymo is currently experimenting with trip pricing for its cab fleet in Phoenix, AZ, although rides are still free for volunteer passengers. Offshore, there are a number of other companies developing AI cars for both personal and public use.

I think it's fair to say that AI driven vehicles may be a fact of life sooner than we think, although I do agree there's still a lot to be learned and done to improve safety. Right now, any serious accident involving an AI vehicle is big news, regardless of how rare they are, yet the 100+ people a day who are killed in or by human operated vehicles barely make a blip in the news unless it's someone notable or the crash is somehow spectacular, like the 20 people killed in a limo accident in upstate NY recently.


----------



## TheRatPatrol (Oct 1, 2003)

James Long said:


> and come back to pick me up when I call


"KITT, come pick me up." 
"I'm sorry Dave, I can't do that, SkyNet has taken......over......my........meeeemoooorrryyyy............."


----------



## James Long (Apr 17, 2003)

NYDutch said:


> I think it's fair to say that AI driven vehicles may be a fact of life sooner than we think ...


As noted, they are here now. Testing, with some manufacturers doing a better job of actual testing than others. "Driver Assist" is the gateway to autonomous operation. Some level of "Driver Assist" is becoming common. Tesla's haphazard rollout of their speed and lane control feature led to the company taking a step back when they realized how stupid people could be while using their technology. (Yes, blame the people.) Other manufacturers have taken the step from alerts to automatic response (such as braking).

Being "a fact of life" depends on one's definition and expectations. Right now most vehicles are in controlled environments - either geographically limited or with a human driver in the car (allegedly) prepared to take control. We are not close to "any road, any state, any weather, any date" but I assume that "testing" will continue. Set the threshold. Will we reach "a fact of life" when manufacturers claim that testing is finished? When the first car is licensed to drive? When a certain number of cars are on the road? When there are no limits to where such a car could be used (if the road is in a standard GPS the car can drive it)?

My threshold is high - I will not consider AI cars to have arrived until they no longer need a licensed driver and can drive on all roads mapped by GPS in any weather condition. Unfortunately I have found GPS errors (roads mapped that do not exist and new roads or road changes that have not been mapped). Google's street view (or similar mapping) proves that the road could be traveled, but there are roads street view has never traveled. One could say that AI cars have arrived before they can go "anywhere" ... but the AI car loses its value when it can't handle dirt roads, unmarked pavement, heavy rain and snow, etc.

Set the threshold low and AI cars are already a fact of life. They just are not ready to meet my needs.


----------



## scooper (Apr 22, 2002)

My major concern with AI cars is the detection and avoidance of legacy vehicles - no matter how old or regardless of size - preferably without forcing said legacy vehicles to carry something so the AI vehicles know where they are.

The roads consideration brought up by James is also valid - in an area where there is a lot of construction, even Google maps doesn't always have the latest information about where to drive (try being a rideshare driver for a while and you'll see what I mean). And then there's the unexpected fallout of accidents, with their lane closings. I don't think you will ever get to the totally hands-off self driving vehicle with these considerations in mind. You might be able to accomplish 95% on established roads, such as on highways between cities.


----------



## NYDutch (Dec 28, 2013)

Google/Waymo's 200 cars have self-driven 8 million miles on public roads as of last July, and are now cumulatively driving about 25,000 miles per day. That real-world experience, plus over 5 billion miles in simulation, is how they're building what is likely the world's "most experienced driver". I think it's safe to say that somewhere in all that mileage the cars have faced probably millions of legacy cars, trucks, and motorcycles, as well as many construction zones along with at least a few accident scenes. All of which I'm sure taught them something new to add to their adaptive programming database. That's the key to successful AI, the ability to learn and adapt its behavior, just as humans do...
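Those fleet figures imply a steady per-car pace, a quick back-of-the-envelope check:

```python
# Per-car daily pace implied by the fleet figures quoted above:
fleet_size = 200
fleet_miles_per_day = 25_000
print(fleet_miles_per_day / fleet_size)  # 125.0 miles per car per day
```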


----------



## 4HiMarks (Jan 21, 2004)

IMO, it's going to happen incrementally. In addition to the cars talking to each other, they are also going to communicate with the road itself, although that may take a little longer to become fully implemented. First step, insurance companies are going to start giving discounts for cars with the above mentioned "driver assist" features (if they aren't already) the way they do for airbags or anti-theft devices. Next, the HOT lanes will also give discounts. This will work out so well, the toll gantries will begin 2-way communication with cars to regulate traffic density and guarantee predictable trip times. 

Soon after that, non-autonomous cars will be banned from HOT lanes. When you enter the lane, you will have to turn control over to the road. When this proves a success, the ban will begin expanding - first to all controlled-access roadways (Interstates, etc.), then as secondary roads are repaired. more and more will get controls embedded. 

Pretty soon, it will be illegal to drive a car manually, except on a race track or in special gearhead preserves for us old-timers. I give it about 20-25 years. By 2050 at the absolute latest. There are children alive today who will never get driver's licenses, and knowing how to drive at all will be as uncommon among them as knowing how to drive a stick shift is among Millennials.


----------



## scooper (Apr 22, 2002)

No way - you would be talking about an obscene amount of money to do that to all roads. Even getting just toll roads done will be cost prohibitive: the costs of the devices for the road, communications, the computer center(s). We don't even have a sufficient level of computing power to do this on an individual car yet (assuming a distributed model).

And by the way, the term is HOV (High Occupancy Vehicle), not HOT

Oh - since you live in the DC area - can't you see the fallacy of your idea ? It simply will not work. There ARE Metro areas that do not even HAVE HOV lanes yet - Kansas City for example.


----------



## James Long (Apr 17, 2003)

Sometimes it is easy to see where someone is posting from without looking at any city given. "Laurel, MD" ... between Washington and Baltimore. An area that has HOV lanes, demand pricing on toll lanes, etc. Where I am (northern Indiana but away from Chicago) we have a toll road, but it is the same price 24x7 and expensive enough that most locals take parallel state highways. I can't remember where the nearest HOV lane would be (I believe there is one on an interstate near Chicago). Illinois and Ohio offer discounts for using iPass/ez-Pass, Indiana does not. I believe Chicago has day/night pricing for trucks on the toll roads - same price 24/7 for cars. With that in mind, the government would need to find another way to encourage AI usage (if they choose to do so).

The insurance discount angle is interesting and should come if the actuaries actually see a reduction in payouts due to the AI features. Discounts for features that reduce liability make sense. Does going full AI reduce liability? I expect the industry will be cautious offering discounts until they are sure that AI is a blessing, not a curse. There needs to be some case law to assure insurance companies that AI isn't a greater liability. AI on high end cars that are more expensive to repair or replace may not be much of a discount.


----------



## yosoyellobo (Nov 1, 2006)

4HiMarks said:


> IMO, it's going to happen incrementally. In addition to the cars talking to each other, they are also going to communicate with the road itself, although that may take a little longer to become fully implemented. First step, insurance companies are going to start giving discounts for cars with the above mentioned "driver assist" features (if they aren't already) the way the do for airbags or anti-theft devices. Next, the HOT lanes will also give discounts. This will work out so well, the toll gantries will begin 2-way communication with cars to regulate traffic density and guarantee predictable trip times.
> 
> Soon after that, non-autonomous cars will be banned from HOT lanes. When you enter the lane, you will have to turn control over to the road. When this proves a success, the ban will begin expanding - first to all controlled-access roadways (Interstates, etc.), then as secondary roads are repaired. more and more will get controls embedded.
> 
> Pretty soon, it will be illegal to drive a car manually, except on a race track or in special gearhead preserves for us old-timers. I give it about 20-25 years. By 2050 at the absolute latest. There are children alive today who will never get driver's licenses, and knowing how to drive at all will be as uncommon among them as knowing how to drive a stick shift is among Millennials.


Looking to the year 2050. I be a young 107.


----------



## scooper (Apr 22, 2002)

I'll be a spry 89/90 (if I live that long).

If the dream of full AI is going to work - I think more like 100 years from now - maybe.


----------



## Laxguy (Dec 2, 2010)

Your premises are far off base.



SamC said:


> Self driving cars are science fiction.
> 
> 
> > Surely you jest. They are here.
> ...


No human makes millions of decisions per second (and many don't make that number in a lifetime).

And many computers have performed at 100% accuracy.


----------



## NYDutch (Dec 28, 2013)

I think we need to keep in mind that autonomous vehicles do not need a "full AI" implementation for a dedicated task like driving a vehicle. The vehicle AI does not need to be thinking about what to buy while driving to the grocery store and changing the radio settings, just what it needs to do to get there. I know "just" is overly simplistic for such an extremely complex task, but considering the immense complexity of all the non-driving tasks that humans attend to while simultaneously driving, it's probably appropriate in the overall scheme of things AI. Even driving related tasks like watching for a street sign for a turn is something a dedicated vehicle AI does not have to do because it figured all that out within a few seconds at most of you telling it where you wanted it to go. There's absolutely nothing "simple" about vehicle AI, but compared to what a "full" human AI implementation would require, it's pretty far down the complexity list. I think it's noteworthy given the infancy of vehicle AI that at the 5 million mile mark, the Waymo fleet had been involved in about 30 accidents, or about the same as NHTSA's estimated accident rate per million miles for all vehicles, all but one caused by the other vehicle's operator. The one minor accident attributed to the Waymo vehicle in autonomous mode occurred when the car changed lanes in front of a public transit bus while negotiating a partially blocked lane.

Waymo's Self-Driving Car Crash in Arizona Revives Tough Questions
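The accident rate NYDutch cites can be checked with quick arithmetic: roughly 30 accidents over 5 million self-driven miles works out to about 6 per million miles. (Whether that matches the NHTSA estimate for all vehicles is the post's claim, not something computed here.)

```python
# Back-of-the-envelope check of the rate quoted above:
accidents = 30
miles_driven = 5_000_000
rate_per_million_miles = accidents / (miles_driven / 1_000_000)
print(rate_per_million_miles)  # 6.0
```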


----------



## 4HiMarks (Jan 21, 2004)

scooper said:


> And by the way, the term is HOV (High Occupancy Vehicle), not HOT.


HOT stands for High Occupancy Toll lanes. They are not the same as HOV lanes.


----------



## SamC (Jan 20, 2003)

Laxguy said:


> Your premises are far off base.
> 
> No human makes millions of decisions per second (and many don't make that number in a lifetime).
> 
> And many computers have performed at 100% accuracy.


Actually driving a car involves millions of decisions per second, it is just that we call those decisions "instinct". But, to a machine A and non-A are the same.

As to computers, all are far short of 100% accuracy. Ever had an app crash? Ever had to just CTL-ALT-DEL? Ever lost connection? Ever just plain old had to turn the d**n thing off and start over?

In a self-driving car that means you are dead.

Try this. Use Google, or any other map service, and run 20 routes that you already know. It is 100% certain that at least 1 will be wildly wrong. Probably more. Taking you 100s of miles out of your way. They cannot even make a computer that can read a map as well as a human.

I have a "lane departure" thing on my car. They are now using a "contra flow" lane, which means there are the real pavement markings and the old ones, painted over. They look the same, to the computer, when wet. I have the sense to turn the thing off, but then again, I know how to drive. Computers do not, and never will.


----------



## scooper (Apr 22, 2002)

4HiMarks said:


> HOT stands for High Occupancy Toll lanes. They are not the same as HOV lanes.


You mean like over on the Outer Loop 495 (and maybe the other direction as well) from Tyson's Corner to the Mixing Bowl ?

semantics - it's still an HOV lane, plus you pay a toll to use it. Without an EZ Pass, you REALLY get socked for it.... (I did).

I used to live in Reston VA, spent a lot of time in Tyson's.


----------



## 4HiMarks (Jan 21, 2004)

scooper said:


> You mean like over on the Outer Loop 495 (and maybe the other direction as well) from Tyson's Corner to the Mixing Bowl ?
> 
> semantics - it's still an HOV lane, plus you pay a toll to use it. Without an EZ Pass, you REALLY get socked for it.... (I did).


Yes. Plus on I-95 south of the mixing bowl and north of Baltimore. But it is not just semantics, as HOV lanes don't have toll gantries that already communicate with transponders in the vehicle, and there is no physical barrier to prevent cars from weaving in and out of the lanes. Just some diamonds painted on the road.

All that would need to happen is a software upgrade on the gantries and new transponders that can talk to the onboard computer of the car. Not expensive at all, and could be paid for by a slight increase in the toll.


----------



## James Long (Apr 17, 2003)

In the DC area people can get the "ezPass Flex" which has a switch one is expected to change to indicate if they have three or more passengers. It is the "honor system" ... backed up by enforcement.

I would not mind seeing an external indicator on autonomous vehicles that would light when the vehicle is in autonomous mode.


----------



## James Long (Apr 17, 2003)

Waymo Can Finally Bring Truly Driverless Cars to California

To begin, the truly driverless cars will test only at up to 65 mph in the southern Bay Area, in Mountain View, Sunnyvale, Los Altos, Los Altos Hills, and Palo Alto. (Waymo and its parent company Alphabet are headquartered in Mountain View.) The company said it will inform local governments before expanding its tests any further. And though Waymo has clearly stated its intention to run its own driverless taxi service, the company's first driver-free passengers will only be employees. Waymo did not say when it will open its cars to the wider California public.

Tesla accused of misrepresenting Autopilot in Florida crash lawsuit

Here's what happened. Morgan was driving his Tesla Model S on Florida's Turnpike (State Road 91) between his home in Winter Garden and his job at a Nissan dealership in Fort Pierce. He relies on Autopilot to reduce the tedium of his 125-mile commute, but as the Model S approached a disabled vehicle in the left lane, it kept going and collided with the disabled vehicle, destroying the Tesla's front end and leaving Hudson with "severe permanent injuries," according to the complaint.


----------



## Laxguy (Dec 2, 2010)

SamC said:


> Actually driving a car involves millions of decisions per second, it is just we call those decisions "instinct". But, to a machine A and non-A are the same.
> 
> As to computers, all are far short of 100% accuracy. Ever had an app crash? Ever had to just CTL-ALT-DEL? Ever lost conection? Ever just plain old had to turn the d**n thing off and start over?
> 
> << Snipped bits out >>....but then again, I know how to drive. Computers do not, and never will.


Millions of "decisions" huh? How bout naming a 100,000? No? 10,000?
1,000? 100? OK, *name 10 (ten) decisions we make per second while driving. *

I use Macintoshes, and only poorly written apps have crashed. Losing connection? Yes, but that's Comcrap's weak signal. Reboot? I do so once a week to clear out cruft.

One day, it will be shown that computers with radar, GPS and highly powered dedicated computers will have a much better record than we humans. Not better than you-or me-(!) but better than the general population.


----------



## phrelin (Jan 18, 2007)

It's interesting. In terms of the future what's really being described in many discussions is an automated vehicle using AI highly dependent upon "the state" (which can be anything from some federal agency down to some municipality).

I'm not sure how comfortable I would be mixing a CTL-ALT-DEL "brain" with a CalTrans low-bid purchased inexpensive transponder that can talk to the onboard computer of the car.

It's the State of California. It took three decades to replace the State's accounting system and the replacement of the payroll system was a travesty of errors. All the while Intuit, founded in 1983 and located in Silicon Valley, was providing off-the-shelf and customized systems used by millions of people.

The most serious risk I see is urban folks who get used to being driven around deciding to take a weekend jaunt up into rural Northern California where apparently it is hard enough for CalTrans and the Highway Patrol to keep the lighted information warning signs up to date.










And yeah, we do make decisions while driving, like getting distracted by a phone call which results in a reduction in driving decisions which results in not adjusting the steering while and drifting into the oncoming lane.


----------



## dreadlk (Sep 18, 2007)

Two things on the subject.

1) The movie _I, Robot_ was heavily based on the same question the OP asked.

2) The biggest impediment to driverless cars is that the responsibility for all accidents will be shifted to the car makers. I just don’t see the auto companies taking on that kind of legal accountability.
Even Tesla warns you to keep in control of the vehicle at all times. Who is going to be the first company to say it’s fine to jump in the back seat, take a nap and our car will get you to work safely?


----------



## NYDutch (Dec 28, 2013)

If anyone really thinks the days of ubiquitous completely driverless cars and trucks are not nearly here, check out this Verge article regarding Waymo's driverless taxi rollout currently underway in Phoenix, AZ.

A day in the life of a Waymo self-driving taxi

And this recent MSNBC article:

Waymo wins industry's first approval to test driverless cars on public roads in California


----------



## James Long (Apr 17, 2003)

Again, define "ubiquitous". Expanded testing isn't universal service. "Found everywhere" is nowhere near a true statement for the current state of the art.


----------



## phrelin (Jan 18, 2007)

dreadlk said:


> Two things on the subject.
> 
> 1) The movie _I, Robot_ was heavily based on the same question the OP asked.
> 
> ...


As the one who started this thread, I have to acknowledge I have significant reservations regarding "self-driving" vehicles, which I believe I expressed before. I'm particularly concerned because I still see references to "autonomous" vehicles, a term whose implications, as I explained here in 2015, I find disturbing. Or to quote Wikipedia:

> Put in the words of one Nissan engineer, "A truly autonomous car would be one where you request it to take you to work and it decides to go to the beach instead."
My problems aren't with robotics or AI. I just generally distrust people and generally consider them to be irresponsible. I think the 21st Century advent of idiots literally falling off cliffs while using "devices" sustains my opinion. And now we're talking about some corporation execs with a Silicon Valley mentality deciding who gets killed.

The _I, Robot_ novel was a sort of compilation of science fiction short stories by Isaac Asimov, who started the whole "Three Laws of Robotics" discussion in his 1942 short story "Runaround", included in the book. Since that time the subject has been discussed, parodied, scripted into films, and generally pursued by the robotics/AI community.

But in the second decade of the 21st Century, it has become a "more real" subject. At a European Society for Cognitive Systems meeting in October 2013, Alan Winfield, Professor of Robot Ethics at the University of the West of England, Bristol, presented a revised set of 5 laws:

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.
IMHO when you choose to buy something you buy all the obligations that come with it. With regard to self-driving vehicles, #'s 2 and 5 above...

Humans, not Robots, are responsible agents and

The person with legal responsibility for a robot should be attributed
...must be established in law today, if not yesterday.

IMHO "the person with legal responsibility" should be the owner of the vehicle whose potential obligations should be declared to be unlimited since it will be the behavior of the owner which will ultimately determine the level of risk.

The owner should be legally obligated to buy an insurance policy with an unlimited obligation which would attach to the vehicle and continue so long as the vehicle is on the road. The auto company's only obligation would be to not sell the vehicle to someone until it has received evidence of the insurance and if they fail to do so, they assume unlimited liability.

The typical American, without feeling any need to at least be able to pass a lower division college course on robotic ethics because he/she has learned everything from the movies, will believe he/she has a right to own a cool "autonomous" vehicle which will drive him/her to work each morning while he/she indeed jumps in the back seat to take a nap.

Of course, this is the United States where we'll argue over how to make the "autonomous" robot liable. That will be hard because at that point we'll have to cajole and persuade our "autonomous" vehicle to actually take us to work rather than go for a nice drive over a cliff after it watches _Thelma & Louise_.


----------



## NYDutch (Dec 28, 2013)

I said "...nearly here..." as in not in the far distant future that some seem to think. Waymo alone has contracts in the works for many thousands of vehicles over the next few years. I really believe driverless cars will likely be pretty commonly seen nationwide in my lifetime. And I'm 75...


----------



## scooper (Apr 22, 2002)

As phrelin stated, we need some legal guidelines on who covers for insurance purposes, etc. first, before they are out there in significant numbers. I can't see the automakers taking this liability on any more than they do now (try driving a car you just bought home without insurance).

nydutch - you are maybe 75 and looking forward to autonomous vehicles. Some of us who are younger, and whose careers have been spent in the IT industry, are the biggest opponents as well as the biggest proponents (depending which side of the issue you are on) of this. Something for you to think about...


----------



## NYDutch (Dec 28, 2013)

scooper said:


> nydutch - you are maybe 75 and looking forward to autonomous vehicles. Some of us who are younger and whose careers have been spent in the IT industry are the biggest opponents as well as the Proponents (depending which side of the issue you are on) of this. Something for you to think about...


I guess I should have mentioned, along with my age, that I retired from an IT career as lead systems analyst/administrator for a multi-national manufacturing company. I don't think I've stated anywhere whether I'm in favor of dedicated AI operated vehicles or not, nor will I. Mostly I've been trying to show the current state of the industry, and that it's much more advanced than many people seem to think. The sheer volume of AI vehicles being developed both in the US and abroad by a significant number of well funded companies virtually ensures that the vehicles will be on our roads sooner rather than later. The liability issues will get straightened out, most likely under about the same rules as human operated cars, where the owner/lessee has to insure them at least at some minimum level before they can be registered and operated on the road. The opponents may slow down the advances, but I believe there's already too much momentum and too much public acceptance of the current "semi-AI" vehicle systems to stop it from happening.


----------



## scooper (Apr 22, 2002)

Point taken, sir. I had wrongly assumed that you were not up on the tech. I still have my reservations about AI vehicles sharing the roads with legacy vehicles.


----------



## NYDutch (Dec 28, 2013)

I have a number of concerns as well, but the more I learn about the programming and sensor capabilities, as well as the depth of real and simulated experience that's being assimilated, the more those concerns are being put to rest. I do believe AI vehicles won't begin to maximize their safety until they reach saturation on the highways, pretty much removing the uncertain human element. The very preliminary numbers so far, though, do seem to indicate that AI vehicles will likely be safer than human controlled vehicles even in the interim. Time will tell...


----------



## James Long (Apr 17, 2003)

NYDutch said:


> I said "...nearly here..." as in not in the far distant future that some seem to think. Waymo alone has contracts in the works for many thousands of vehicles over the next few years. I really believe driverless cars will likely be pretty commonly seen nationwide in my lifetime. And I'm 75...


I have seen one "in real life," while on vacation in Pittsburgh. I have also seen one Google Street View car (and have a screenshot of the picture it took of me).

Driverless = no safety driver. The car is 100% responsible for its own operation. There are few of those on public roads today - most have a safety driver. The safety driver is the driver of record.
Propagation milestones:
1) Every state has at least one driverless car.
2) Every county has at least one driverless car.
3) Every city has at least one driverless car.

Operational milestones:
1) Able to operate on any paved road with visible markings (unable to operate if there are no markings, or if markings are obscured by dirt, snow, etc.).
2) Able to operate on any road where the road surface is discernible from the non-road surface (including dirt roads and unmarked pavement).
3) Able to operate where the road surface is not discernible from the non-road surface (see: snow).

Penetration milestones:
1) 1% of registered vehicles driverless.
2) 10% of registered vehicles driverless.
3) 51% of registered vehicles driverless.

The industry has a long way to go.


----------



## James Long (Apr 17, 2003)

phrelin said:


> And now we're talking about some corporation execs with a Silicon Valley mentality deciding who gets killed.


I believe that is part of the problem. "Deciding who gets killed" is not part of the normal driving experience. Fortunately very few people get behind the wheel thinking "who am I going to kill today". People have a more positive outlook, even when the odds are against them.

I'd argue that the AI should never make that decision. Its prime objective should be to save human life without doing the math and deciding that one death is better than fifteen, or that the death of an 80 year old is better than the death of an 8 year old. And woe to the company that ever designs an algorithm that saves lives based on gender, social status, income or value to society.

If all goes well, "who dies" will never be a valid question for the AI. Everyone lives. The proponents of driverless cars seem to believe that AI vehicles are a quantum leap better than humans. Prove it: reduce deaths to the point where the AI never has to decide.


----------



## NYDutch (Dec 28, 2013)

As to the "who dies" decisions, we all make those every time we get behind the wheel when unexpected events unfold in front of us. I know I've hit my share of squirrels and other small wildlife over the years when avoiding them would have put myself or others at greater risk. It's drivers failing to make those decisions correctly that sometimes results in otherwise avoidable accidents.

The AI operated cars use a "cost table" to determine the best course of action in those situations, just as humans should be doing, but often don't when instinct says to avoid what's immediately in front of us while tunnel vision blinds us to the other objects around us. None of us like to hit any living creature of course, but if the choice is a dog in the road or a group of children on the sidewalk, I'd hope we would all make the correct decision. A basic AI tenet is to do no harm, especially to humans, but the reality is that an AI, just like humans, needs to make decisions that do the least harm when no harm is impossible. And that's what "cost tables" help them do in deciding to risk hitting that small soft object in the road rather than the humans on the side of the road.

We all know of more than one serious vehicle accident, I'm sure, where the cause was said to be avoiding a dog, cat, etc., when hitting them would have had much fewer consequences. I know that's harsh, but those are decisions that need to be made in the name of maximum safety. And all that doesn't take into consideration that AI's don't get distracted and have faster reaction times, so are much more likely to sense a potential situation in time to take other action, like simply braking sooner than humans.
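For the technically inclined, here's a toy sketch (in Python) of what a "cost table" decision could look like. To be clear: every category, weight, and function name below is invented for illustration; no real vendor's planner works at anywhere near this level of simplicity.

```python
# Hypothetical "cost table" sketch: score each possible maneuver by the
# harm it would cause and pick the minimum. All categories and weights
# are made up for illustration only.

# Invented harm weights per type of object struck
COST = {
    "none": 0,
    "property": 1,
    "small_animal": 2,
    "large_animal": 5,
    "human": 1000,  # any human harm dominates every non-human outcome
}

def maneuver_cost(objects_struck):
    """Total harm score of a maneuver, given the objects it would strike."""
    return sum(COST[obj] for obj in objects_struck)

def choose_maneuver(options):
    """Pick the maneuver with the lowest total harm score.

    `options` maps maneuver names to the list of objects each would strike.
    """
    return min(options, key=lambda m: maneuver_cost(options[m]))

# The "dog in the road vs. children on the sidewalk" example:
options = {
    "brake_straight": ["small_animal"],           # hit the dog
    "swerve_right": ["human", "human", "human"],  # hit the children
    "swerve_left": ["property"],                  # clip a parked car
}
print(choose_maneuver(options))  # "swerve_left" - lowest harm score
```

The hard part, of course, isn't the `min()` call; it's deciding what numbers go in the table, which is exactly what this thread is arguing about.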


----------



## yosoyellobo (Nov 1, 2006)

Who will die? At the last second the AI will pass the decision off to God.


----------



## James Long (Apr 17, 2003)

NYDutch said:


> I know that's harsh, but those are decisions that need to be made in the name of maximum safety.


The trouble is that when you or I make a split second decision we (if we survive) can tell the investigators, courts and jury that it was a split second decision or impulsive response - and the humans we are talking to will probably accept an error in judgement.

Cost tables make the decision premeditated. The corporation decides who to kill? If an accident is calculated to be 100% unavoidable would the AI run a vehicle with one occupant off the road and into a tree (hoping that the airbag and other safety systems would protect the passenger) instead of hitting a group of pedestrians?

The AI should not know the identities of the potential victims and calculate the value of each life as suggested in the original article. But if it did, how would people feel about a company that set a value on each human life and then decided who lived based on the cost? An AI that calculates that hitting person A would cause more monetary loss than hitting person B, because based solely on statistics, person A's future earnings are higher than person B's. Should the cost table include political and PR costs? The AI swerved to avoid a group of children and killed a person working on a cure for cancer (their research died with them). I'd rather the AI didn't know.

If you or I or any human had to explain to a court or jury that we decided to kill an old person instead of a young person, or a person of one race or gender instead of another (even if strictly following statistical outcomes), we would be considered monsters. The AI shouldn't be deciding which human dies.

Human vs animal is an easier decision. But when calculating the outcome of hitting a dog vs a group of children the AI should find another outcome. What are the odds one of the children would step out in traffic and try to save the dog? Stick that in your cost table. Thirty years from now *IF* every car is connected new solutions may be possible such as stopping opposing traffic and giving the vehicle that would have hit the dog another outcome. That sort of integration is a long way off.



NYDutch said:


> And all that doesn't take into consideration that AI's don't get distracted and have faster reaction times, so are much more likely to sense a potential situation in time to take other action, like simply braking sooner than humans.


And that is the real answer to the question of which person an AI should kill: none of the above.


----------



## NYDutch (Dec 28, 2013)

Obviously "none of the above" is always the preferred answer, just not always a possible answer. Of course an AI cannot realistically place an economic value on any life, human or otherwise, and that's not what "cost tables" are for. What a cost table does is help the AI determine which decision will cause the least damage to life and property. If that means driving into a bridge abutment with one person on board instead of hitting a group of six people, and yes, it's never quite that simplistic of course, is that really any different than a pilot electing to crash his failing airplane into a parking lot instead of a school building? Humans often do make the best decision in those situations, but since humans are subject to panic and a strong survival instinct, neither of which affect an AI, the AI should be likely to make the least harmful decision more often.

There are many driving situations where I have no idea how an AI operator would react, but then I don't necessarily know how human drivers would react either. For instance, I once deliberately put my car into a side slide on black ice to avoid t-boning a car sliding sideways towards me, so we would hit with the largest contact area to absorb the energy. No one was injured and both cars were driven away with just relatively minor sheet metal damage, and the loss of a side mirror on one car. What would an AI do in that situation? Beats me...


----------



## James Long (Apr 17, 2003)

At this point, when the AI doesn't know what to do it gives up and expects the "safety driver" to take over. Whether that driver is alert and ready to react or watching The Voice via streaming on their cell phone, "NOT ME" becomes the driver. I have not seen the plan for vehicles without safety drivers.

"Beats me" is a good answer for a human. For an AI it's "Error 404: solution not found." Collect data to be recovered by headquarters to teach the next version of the AI what it should or should not do. If the vehicle cannot be programmed to avoid death or injury, then make a moral judgment that the company can stand behind. (And watch out for the actuaries, who will note that the insurance payout for a death could be less than the payout for a serious injury that leads to a long lifetime of pain and long-term care.)


----------



## NYDutch (Dec 28, 2013)

One of the articles I linked to about the Waymo totally driverless taxi rollout mentions a human team that the car contacts when it doesn't have a solution to a new situation. That should add significantly to the learning database over time, especially when combined with the million miles a month of experience Waymo's cars are currently piling up.


----------



## James Long (Apr 17, 2003)

Yep ... Contacting live tech support will help with split second decisions. 

That falls under "collect data for a future upgrade".


----------



## NYDutch (Dec 28, 2013)

Yep, there's no question that AI operated cars are still in the learning stages, and likely always will be. Just like humans...


----------



## dreadlk (Sep 18, 2007)

phrelin said:


> As the one who started this thread, I have to acknowledge I have significant reservations regarding "self-driving" vehicles which I believe I expressed before. I'm particularly concerned because I still see references to "autonomous" vehicles which concerns I explained here in 2015 is a disturbing thought. Or to quote Wikipedia:
> 
> Put in the words of one Nissan engineer, "A truly autonomous car would be one where you request it to take you to work and it decides to go to the beach instead."​
> My problems aren't with robotics or AI. I just generally distrust people and generally consider them to be irresponsible. I think the 21st Century advent of idiots literally falling off cliffs while using "devices" sustains my opinion. And now we're talking about some corporation execs with a Silicon Valley mentality deciding who gets killed.
> ...


Nice post. Just to expand on one detail.
If something goes wrong with a new car or even a plane today, it is deemed the manufacturer's fault if it is proven to be a defect in the design. The amount of litigation that will occur over the first ten years of self driving cars will be enormous. Which company is going to make the first leap and help set the new legal standards? They can tell you that the car was following an algorithm and was not faulty, but if someone is killed, the algorithm used has to be cleared of any bad code or incorrect assumptions, or the company pays.


----------



## NYDutch (Dec 28, 2013)

Similar litigation issues are already being worked out over the limited AI features already offered in some retail vehicles, and any precedents should be pretty well established by the time dedicated AI operated vehicles see widespread distribution. I'm sure companies like Waymo, etc., are currently paying hefty insurance premiums or setting aside large sums for self-insurance to put their test vehicles on the public roads, both for their own protection and due to various state mandates. It would be interesting to know what insurance coverage Waymo had to have before they received permits in Arizona and California to test dedicated AI vehicles with no safety drivers.

And I agree that "truly autonomous" is an incorrect label for the driverless cars currently being developed, since they're controlled by dedicated, defined-scope AI programs. The AI programs being used are more like taxi or ride share drivers in that they have some latitude in deciding how to get from point A to point B, but they don't get to determine where or what point B is.


----------



## dreadlk (Sep 18, 2007)

NYDutch said:


> Similar litigation issues are already being worked out over the limited AI features already offered in some retail vehicles, and any precedents should be pretty well established by the time dedicated AI operated vehicles see widespread distribution. I'm sure companies like Waymo, etc., are currently paying hefty insurance premiums or setting aside large sums for self-insurance to put their test vehicles on the public roads, both for their own protection and due to various state mandates. It would be interesting to know what insurance coverage Waymo had to have before they received permits in Arizona and California to test dedicated AI vehicles with no safety drivers.
> 
> And I agree that "truly autonomous" is an incorrect label for the driverless cars currently being developed since they're controlled by dedicated defined scope AI programs. The AI programs being used are more like taxi or ride share drivers in that they have some latitude in deciding how to get from point A to point B, but they don't get to determine where or what point B is.


It will be a very interesting day when a company finally sells a car that is self driving and actually states that you do not need to be holding the steering wheel, etc., but can sit back and take a nap. My feeling on this is that it is never going to happen. The reliable tech may be there in a decade or so, but the legal issues are going to keep it dead in the water.


----------



## NYDutch (Dec 28, 2013)

dreadlk said:


> It will be a very interesting day when a company finally sells a car that is self driving and actually states you do not need to be holding the steering etc but can sit back and take a nap. My feeling on this is that it is never going to happen. The reliable tech may be there in a decade or so but the legal issues are going to keep it dead in the water.


"...never going to happen."??? 

GM just introduced a self-driving car without a steering wheel


----------



## James Long (Apr 17, 2003)

NYDutch said:


> "...never going to happen."???
> 
> GM just introduced a self-driving car without a steering wheel


"Cruise, which is based in San Francisco, expects to test the modified Chevy Bolt next year."
"The company has filed a petition with the National Highway Traffic Safety Administration, requesting exemptions from 16 safety standards. It says these aren't relevant because the vehicle doesn't have manual controls."
"GM is requesting that 2,500 vehicles receive exemptions. That's the maximum number the government will currently allow for each manufacturer."
"Cruise wouldn't say where it will eventually deploy the new vehicles or how soon the public will be able to ride in them."

Are any of those vehicles on the road? Will any be on the road in 2019? A lot has happened in the industry since January (and it has not all been positive).
Automakers have been displaying "concept cars" longer than you have been alive.

While "never" is one of those absolute statements that should always be avoided (always avoid absolutes), a vehicle without any manual controls is a stretch. I expect there will be touch panel controls, or at least an app, that would allow a driver to manually operate the vehicle.


----------



## phrelin (Jan 18, 2007)



James Long said:


> "Cruise, which is based in San Francisco, expects to test the modified Chevy Bolt next year."
> "The company has filed a petition with the National Highway Traffic Safety Administration, requesting exemptions from 16 safety standards. It says these aren't relevant because the vehicle doesn't have manual controls."
> "GM is requesting that 2,500 vehicles receive exemptions. That's the maximum number the government will currently allow for each manufacturer."
> "Cruise wouldn't say where it will eventually deploy the new vehicles or how soon the public will be able to ride in them."
> ...


Yeah, I'm not looking for one of these on the road just yet. IMHO, as explained elsewhere, there is a bit of financial (over?)enthusiasm around Cruise Automation that makes me skeptical, and my take is that GM management was justifying their huge investment to shareholders, resulting in a lot of PR-based articles this week:

During the General Motors Q3 2018 earnings call on Wednesday morning, CEO Mary Barra and CFO Dhivya Suryadevara went over the company's ... costs earmarked for the year. ...It's understood that ... $1 billion will be shelled out this year for its ongoing AV program happening in San Francisco. Barra said it would also "be great" if the team came in under budget, though it's not expected of them.

Costs associated with the $1 billion Cruise Automation budget allocated from General Motors for 2018 includes hiring more engineers, though it's not immediately clear if it's to hire additional staff or to replace unforeseen departures. Otherwise, it's aggregated towards continued development of the Cruise AV program, accumulating miles, and further data collection.

As General Motors will front $1 billion of its own money for Cruise Automation, SoftBank announced an additional $2.3 billion commitment into the self-driving vehicle program, and acquired 19.6 percent of the Silicon Valley company in the process. ... The deal with GM Cruise is uniquely special, as SoftBank now gains access to a vehicle manufacturer. Should everything go according to plan, the SoftBank investment opens up the potential for the Japanese multinational conglomerate to build self-driving cars for various global services, and eventually to even weave in other companies SoftBank has invested in around the world.

...Honda also joined General Motors as a strategic partner in autonomous vehicle development. Honda will invest $2 billion over 12 years into Cruise Automation with $750 million in equity up front, announced back on October 3, 2018. The investment brings the value of Cruise Automation to an estimated $14.6 billion.​
That's a lot of "billions" being thrown around, particularly when the writer's best speculation is that the GM billion "includes hiring more engineers, though it's not immediately clear if it's to hire additional staff or to replace unforeseen departures" and is otherwise "aggregated towards continued development of the Cruise AV program, accumulating miles, and further data collection."

That sounds like corporate-speak for "we're working on something."


----------



## NYDutch (Dec 28, 2013)

I can't find the link right now, but Waymo is also petitioning the NHTSA for exemptions from the manual control requirements. And from the last paragraph of the GM article: "Waymo announced in November that it was removing test drivers from the front seat. It plans to launch a commercial service in the Phoenix area this year."

I clicked too fast. On edit:

Regarding the sales of AI operated cars, I noticed this bit in a Bloomberg article on the state of AI cars in Europe and Asia: "South Korea, home to Hyundai Motor Co., is 'the silent leader,' with plans to have AVs for sale by 2020..."



----------

