The autonomous (self driving) car buyers and owners thread

DaveF

Moderator
Senior HTF Member
Joined
Mar 4, 2001
Messages
28,751
Location
Catfisch Cinema
Real Name
Dave
Ok. I live in Ashburn, work in Chantilly, and sometimes commute to near National Harbor. And I love going to Dupont Circle for a burger at Shake Shack :)
 

Sam Posten

Moderator
Premium
HW Reviewer
Senior HTF Member
Joined
Oct 30, 1997
Messages
33,712
Location
Aberdeen, MD & Navesink, NJ
Real Name
Sam Posten
Asimov only had 3 rules, but the ethics go much much deeper!
http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/
Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?
Great great article!!!!
 

Richard Travale

Senior HTF Member
Joined
Feb 27, 2001
Messages
3,424
Location
The Island, Canada
Real Name
Rich Travale
I listened to an interview on the radio about the ethical choices an automated vehicle would have to make in an emergency situation. Like, is it better to veer into an elderly person to avoid hitting a child? Or does the vehicle crash into another car to avoid being hit from behind?
It sure made me think about the level of AI required. Not black & white at all.
 

KevinGress

Supporting Actor
Joined
Aug 24, 2005
Messages
836
Thanks for the link, Sam. It does provoke thought. I think the author missed the mark on several points, though.

"While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. They can make split-second choices to optimize crashes–that is, to minimize harm."

The first part is simply incorrect. Human drivers do scan their environment and are making many calculations even before they are aware of danger. Good, attentive drivers are always doing this: analyzing the scene before them and calculating what could happen. The problem is that too often drivers feel they can divert mental resources to other endeavors - like eating, applying makeup, or calling/texting - and this is what causes accidents (the article notes this).

The thing is, I think this will also become a problem for 'robot' cars. People tend to vastly overestimate the abilities of a computer chip in comparison to the human brain. The issue is that the computer performs far fewer calculations in a given moment than a human brain does. So, as car manufacturers, politicians, and 'riders' add more tasks for these cars to perform (monitor environment, monitor speed, monitor internals, gather data for reports, 'call in' said reports, select music, do internet searches, connect to work, etc.), the more chances there are that a car will become 'distracted' (ex: the car has to create and submit a report to a federal bureaucracy on a normally non-busy street or highway just as a deer or child runs out).

The other problem with the article, even though it acknowledges this several times, is that what it offers up are simply thought exercises, not realistic scenarios. While fun to think about, you can't base policy or ethical behavior on them, because in essence they deny reality. Take the motorcyclist example - 'the car is put in a scenario where it will hit one of two motorcyclists; should it hit the one with, or without, the helmet?' The ethical answer is to avoid both.

Still, even with its problems, the article is worth reading because it shows us just how much thinking and deciding we, as a society, still need to do with regard to technology of this kind. What place does it have in society? How can it enhance, and not hinder, life? Etc.
 

DaveF

Moderator
Senior HTF Member
Joined
Mar 4, 2001
Messages
28,751
Location
Catfisch Cinema
Real Name
Dave
I think the Wired article highlights a challenge we face in the "information" age: being penalized for knowledge. The author asserts that people are happier crashing into cars based on ignorant reflexes than having crash harm minimized through informed decisions. I think this is a reasonable conclusion, and very frustrating.

In this context, and stipulating he's accurate for the sake of argument, the author misses an interesting possibility: self-driving cars, when faced with an unavoidable accident, will target each other. Presuming a car-net, they can communicate the impending disaster, optimize their collision for minimum damage, and pre-deploy passenger-safety features. And when prospective buyers learn that smart cars are built to crash into each other, the smart-car industry will die of withering sales.


More practically: the deterministic choices the author presents are too simplistic, and I would hope no one would take such a strict route, because that information alone is too little. While the Volvo is better in a crash than the Fiesta, it's unknown whether the Volvo is carrying a toddler with a young mom driving while the Fiesta has a 90-year-old grandma who has lived a rich and full life. (I hope I don't go too far with this example. I don't mean to imply or actually start the "lifeboat" ethical debate here :) )

Actuarial science is needed for such cars; I'll guess that spam-filtering and life-insurance experts can find new careers in smart-car programming. But, as the author considered, I think some randomness is needed in the decision making.
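To make that concrete, here's a toy sketch of what I mean by actuarial scoring plus a dash of randomness. Every number and option name below is invented purely for illustration; it's nothing like real collision-response software.

[CODE]
import random

# Toy "actuarial" table: expected harm score for striking each object.
# All numbers are invented for illustration only.
EXPECTED_HARM = {
    "large_suv":     0.3,   # heavy, well crash-rated vehicle
    "small_compact": 0.7,   # lighter car, occupants at greater risk
    "barrier":       0.5,   # harms only our own occupants
}

def choose_crash_target(options, noise=0.15):
    """Pick the option with the lowest expected harm, plus a little
    randomness so the choice isn't perfectly predictable (and so no
    one class of road user is *always* the designated target)."""
    scored = [(EXPECTED_HARM[o] + random.uniform(0, noise), o) for o in options]
    scored.sort()
    return scored[0][1]

# Unavoidable-crash scenario from the article: swerve left or right?
print(choose_crash_target(["large_suv", "small_compact"]))
[/CODE]

The randomness is the point: if the scoring were fully deterministic, one class of vehicle would always be the designated target, which is exactly the perverse incentive the Wired article worries about.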
 

BrianW

Senior HTF Member
Joined
Jan 30, 1999
Messages
2,563
Real Name
Brian
I think if an autonomous car spots a clown by the side of the road, it should be programmed to take him out. It'd be better for all of us.
 

Sam Posten

Moderator
Premium
HW Reviewer
Senior HTF Member
Joined
Oct 30, 1997
Messages
33,712
Location
Aberdeen, MD & Navesink, NJ
Real Name
Sam Posten
Big brains already thinking hard about this:
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.

This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University.
http://www.popsci.com/blog-network/zero-moment/mathematics-murder-should-robot-sacrifice-your-life-save-two?src=SOC&dom=fb&utm_source=digg&utm_medium=email
 

Sam Posten

Moderator
Premium
HW Reviewer
Senior HTF Member
Joined
Oct 30, 1997
Messages
33,712
Location
Aberdeen, MD & Navesink, NJ
Real Name
Sam Posten
Sounds like their confidence is ramping up and that more demos to journos are going almost too well...
Boring.
That’s the best way to sum up my second ride in a Google self-driving car, on Tuesday.
My first ride, now almost four years ago, included merging onto a freeway and navigating a sweeping flyover curve with all the dexterity of a human driver. This time our route was even more mundane, basically an uneventful tour through the city streets of a Silicon Valley community.
http://bits.blogs.nytimes.com/2014/05/13/a-trip-in-a-self-driving-car-now-seems-routine/?_php=true&_type=blogs&_r=0

More impressions:
http://recode.net/2014/05/13/googles-self-driving-car-a-smooth-test-ride-but-a-long-road-ahead/

http://www.nytimes.com/2014/05/14/upshot/when-driverless-cars-break-the-law.html
 

Sam Posten

Moderator
Premium
HW Reviewer
Senior HTF Member
Joined
Oct 30, 1997
Messages
33,712
Location
Aberdeen, MD & Navesink, NJ
Real Name
Sam Posten
Pre-vis is key. Google has mapped 2k of the millions of miles of road so far.

In fact, it might be better to stop calling what Google is doing mapping, and come up with a different verb to suggest the radical break they've made with previous ideas of maps. I'd say they're crawling the world, meaning they're making it legible and useful to computers.

Self-driving cars sit perfectly in-between Project Tango—a new effort to "give mobile devices a human-scale understanding of space and motion"—and Google's recent acquisition spree of robotics companies. Tango is about making the "human-scale" world understandable to robots and the robotics companies are about creating the means for taking action in that world.

The more you think about it, the more the goddamn Googleyness of the thing stands out: the ambition, the scale, and the type of solution they've come up with to this very hard problem. What was a nearly intractable "machine vision" problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.

http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-googles-self-driving-cars-work/370871/
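A toy sketch of why that pre-mapping matters: if the static world is already in the map, real-time perception only has to explain whatever differs from it. The map format and detections below are made up purely for illustration; this is not Google's actual pipeline.

[CODE]
# Toy illustration of "pre-vis": with a detailed prior map, perception
# only has to account for things that are NOT already in the map.
# The map format and detections below are invented for illustration.

prior_map = {          # static objects the mapping cars already recorded
    (10, 4): "traffic_light",
    (12, 0): "curb",
    (15, 2): "stop_sign",
}

live_detections = {    # what the sensors report on this pass
    (10, 4): "traffic_light",
    (12, 0): "curb",
    (15, 2): "stop_sign",
    (11, 1): "pedestrian",   # not in the map -> must be dynamic
}

# Only the residual needs hard, real-time scene understanding.
dynamic = {pos: label for pos, label in live_detections.items()
           if prior_map.get(pos) != label}
print(dynamic)   # {(11, 1): 'pedestrian'}
[/CODE]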
 

Sam Posten

Moderator
Premium
HW Reviewer
Senior HTF Member
Joined
Oct 30, 1997
Messages
33,712
Location
Aberdeen, MD & Navesink, NJ
Real Name
Sam Posten
My pal Dave Troy is a huge fan of city life; he posted on autonomous cars with some of that flavor:
Transit Is More Important
We already have a really incredible technology for moving large numbers of people at scale that can also create large-scale economic growth: it’s called mass transit, and it’s the single best investment that we can make in our urban centers. It works at both long-haul and short-run scales.
A mag-lev train line from Washington, DC to Boston would create massive economic growth in all the cities it touches, effectively merging DC, Baltimore, Wilmington, Philadelphia, Trenton, Newark, New York, and Boston into one giant supercity with incredible density and potential for wealth generation.
Improving transit in cities like Baltimore (whose aborted subway system sits as a monument to shortsighted planning) would dovetail with such an effort, quickly transforming its economics and creating more opportunities for everyone.
https://medium.com/future-of-cars-collaborative/a2d7e3ead598
 

Josh Steinberg

Premium
Reviewer
Senior HTF Member
Joined
Jun 10, 2003
Messages
26,358
Real Name
Josh Steinberg
Sam Posten said:
Asimov only had 3 rules, but the ethics go much much deeper!
http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

Great great article!!!!
I'm just noticing this thread, but I found this article to be absolutely fascinating. (As well as the subsequent discussions here.)

It actually reminds me a little of an argument I had with my driver's ed teacher back in the day. I don't remember the exact situation described, but basically the teacher was explaining how the rules of the road limit the circumstances when you can and can't change lanes, etc., and about properly obeying those rules under all circumstances. The scenario was something like this: let's say you're on a highway that goes in both directions, and a car starts coming at you the wrong way -- driving in the wrong direction in your lane towards you. If you turn to your right, you'll miss being hit, but you'll probably crash into the wall on the side of the highway, which will almost certainly result in catastrophic damage to your vehicle. If you turn to your left, you'll be swerving into oncoming traffic, as that's the lane going in the other direction. But in the example as the teacher presented it, for whatever reason, there was no one on the road going in the other direction. The teacher was adamant that there was only one proper thing to do - to crash into the right side, because you're never allowed to drive the wrong direction on a highway. I get where she's coming from in theory. But in practice, who would do that? In real life, in a scenario where you only have a second to decide, I think most people would go with the illegal move (driving in the wrong lane for five seconds) over the legal move (crashing into a wall, possibly destroying your car, possibly getting severely injured or killed in the process).

What would the autonomous car do in that situation? Would it be programmed to follow the rules of the road at all costs, or would it allow for an illegal move that was unquestionably the safest solution to a momentary danger?
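My guess is it comes down to whether the rules of the road are hard constraints in the planner or just costs. Here's a toy sketch of the "costs, not constraints" approach, where a brief illegal maneuver can win if the legal option is catastrophic. All the weights and numbers are invented for illustration.

[CODE]
# Toy sketch: treat a traffic-law violation as a cost, not a hard rule,
# so a brief illegal maneuver can win when the legal option is far worse.
# All weights and numbers are invented for illustration.

ILLEGAL_MOVE_PENALTY = 10.0      # cost of briefly driving in the wrong lane

options = {
    "swerve_right_into_wall": {"expected_harm": 80.0, "illegal": False},
    "swerve_left_wrong_lane": {"expected_harm": 5.0,  "illegal": True},
}

def total_cost(option):
    return option["expected_harm"] + (ILLEGAL_MOVE_PENALTY if option["illegal"] else 0.0)

best = min(options, key=lambda name: total_cost(options[name]))
print(best)   # 'swerve_left_wrong_lane' -- the illegal but far safer move
[/CODE]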
 
