Suppose that an autonomous car is faced with a terrible decision: to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others (a sensible goal), which way would you instruct it to go in this scenario?
I'm only in my 30s, but that's why I assume I'll be long dead by the time this becomes a reality, or the norm.

Richard Travale said:
It sure made me think about the level of AI required.
http://www.popsci.com/blog-network/zero-moment/mathematics-murder-should-robot-sacrifice-your-life-save-two?src=SOC&dom=fb&utm_source=digg&utm_medium=email

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
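The "math" the article alludes to can be made concrete. Below is a minimal, purely illustrative sketch of a harm-minimizing chooser in the spirit of the thought experiment; the option names, occupant counts, and severity weights are hypothetical, not any real vehicle's collision-response logic.

```python
# Illustrative only: a naive "minimize expected harm" decision rule.
# All numbers here are made-up assumptions for the thought experiment.

def expected_harm(occupants, severity):
    """Expected harm = people at risk * crash severity (0..1)."""
    return occupants * severity

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=lambda o: expected_harm(o["occupants"], o["severity"]))

options = [
    # Head-on with the smaller compact: two occupants, severe crash.
    {"name": "head-on with compact", "occupants": 2, "severity": 0.9},
    # Over the cliff: one occupant (you), near-certain fatality.
    {"name": "swerve off the cliff", "occupants": 1, "severity": 1.0},
]

print(choose_maneuver(options)["name"])
```

Under these made-up weights the head-on option scores 1.8 expected harm against 1.0 for the cliff, so the naive utilitarian rule sacrifices its own passenger, which is exactly the discomfort Lin's scenario is designed to surface.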
This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University.
That’s the best way to sum up my second ride in a Google self-driving car, on Tuesday.
My first ride, now almost four years ago, included merging onto a freeway and navigating a sweeping flyover curve with all the dexterity of a human driver. This time our route was even more mundane, basically an uneventful tour through the city streets of a Silicon Valley community.
http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-googles-self-driving-cars-work/370871/

In fact, it might be better to stop calling what Google is doing mapping, and come up with a different verb to suggest the radical break they've made with previous ideas of maps. I'd say they're crawling the world, meaning they're making it legible and useful to computers.
Self-driving cars sit perfectly in-between Project Tango—a new effort to "give mobile devices a human-scale understanding of space and motion"—and Google's recent acquisition spree of robotics companies. Tango is about making the "human-scale" world understandable to robots and the robotics companies are about creating the means for taking action in that world.
The more you think about it, the more the goddamn Googleyness of the thing stands out: the ambition, the scale, and the type of solution they've come up with to this very hard problem. What was a nearly intractable "machine vision" problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.
https://medium.com/future-of-cars-collaborative/a2d7e3ead598

Transit Is More Important
We already have a really incredible technology for moving large numbers of people at scale that can also create large-scale economic growth: it’s called mass transit, and it’s the single best investment that we can make in our urban centers. It works at both long-haul and short-run scales.
A mag-lev train line from Washington, DC to Boston would create massive economic growth in all the cities it touches, effectively merging DC, Baltimore, Wilmington, Philadelphia, Trenton, Newark, New York, and Boston into one giant supercity with incredible density and potential for wealth generation.
Improving transit in cities like Baltimore (whose aborted subway system sits as a monument to shortsighted planning) would dovetail with such an effort, quickly transforming its economics and creating more opportunities for everyone.
I'm just noticing this thread, but I found this article to be absolutely fascinating. (As well as the subsequent discussions here.)

Sam Posten said:
Asimov only had 3 rules, but the ethics go much much deeper!
Great great article!!!!