As a young teen I remember watching Colossus: The Forbin Project, not long after I’d seen Kubrick’s 2001. Both films featured computers that had ideas of their own about how things should be done. HAL 9000 got its comeuppance as astronaut Dave Bowman yanked its memory modules, payback for killing all four of his crew-mates. More recently, the Terminator series has added its own spin: SkyNet will bring Judgement Day down on our heads as soon as it gains sentience.
Let’s face it, AIs have got a bad reputation, and it just got worse a few days ago when a driverless Uber car injured a woman so badly that she died. This isn’t a real-life version of Stephen King’s Christine, just a very unpleasant accident. Apparently, if you walk in front of a car doing 38 mph from the median (translation: central reservation) when it’s dark, the laws of physics aren’t magically suspended.
It was always likely that even the best AI, equipped with state-of-the-art sensors, would sooner or later find an edge case, as this one did, and there will be others. The engineers who develop these systems learn from each one to improve overall safety, but it’s foolish to expect 100% safety.
Should that accident mean that we abandon autonomous driverless vehicles? Definitely not. The technology has the potential to eliminate just about every driving death from human error. That’s hundreds of thousands of lives saved each year across the globe.
AI is not in itself a danger. Unlike Colossus or HAL 9000, current AIs do not have the kind of general intelligence and creativity needed to conquer the world. Their intelligence is just puzzle solving, particularly finding patterns. They are very good at that and should immediately apply to join Mensa.
AIs can certainly be used to deliberately cause injury or death. A relatively modest $200,000 gets you a gun-toting sentry bot. Match that, Apple! Let’s hope they’re less buggy than the ED-209 in RoboCop with its “You have 20 seconds to comply”…
There’s a philosophy question about autonomous cars having to make a moral decision whether to crash and kill the occupants or run over pedestrians/crash into other vehicles. Well, you could argue Uber have settled that one, but in reality it’s a moot question; the sensors in autonomous cars track every object, moving or stationary, that might be a threat, and take action to prevent a collision from ever taking place.
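To make that concrete, here is a minimal sketch (in Python, with made-up names and thresholds; no vendor’s actual code) of the kind of check such a car runs many times a second: estimate the time-to-collision for every tracked object and brake long before any moral dilemma can arise.

    # Illustrative only: estimate time-to-collision (TTC) for each tracked
    # object and brake while there is still ample stopping distance.
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        distance_m: float        # range to the object along our path, metres
        closing_speed_ms: float  # how fast the gap shrinks, m/s (<= 0 means opening)

    def time_to_collision(obj: TrackedObject) -> float:
        """Seconds until impact if nothing changes; infinity if the gap is opening."""
        if obj.closing_speed_ms <= 0:
            return float("inf")
        return obj.distance_m / obj.closing_speed_ms

    def plan_action(objects: list[TrackedObject], brake_threshold_s: float = 3.0) -> str:
        """Brake for the most urgent threat; otherwise carry on."""
        most_urgent = min((time_to_collision(o) for o in objects), default=float("inf"))
        return "BRAKE" if most_urgent < brake_threshold_s else "CONTINUE"

    # A pedestrian 17 m ahead while closing at 17 m/s (roughly 38 mph) gives a
    # TTC of one second, well inside the assumed braking threshold:
    print(plan_action([TrackedObject(distance_m=17.0, closing_speed_ms=17.0)]))  # BRAKE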
AI does not make moral decisions or judgements. No doubt human ingenuity will find a way to bypass security, but I’d be more worried about autonomous cars being used to carry bombs or suicide bombers, or being hacked to kidnap somebody or divert the vehicle over the nearest cliff…
Who actually wants a robot car? I don’t even let my wife drive my car, and dislike SatNav because I don’t appreciate an electronic woman telling me what to do. No bloody way am I putting my life in the grasping robotic claws of some computerised monstrosity. What could go wrong? “UNEXPECTED ITEM IN DRIVING AREA” *will* go wrong. Followed by death. Computer programmers can’t even release a game these days without a massive Day 1 patch, and the IoT is a vast, roiling ocean of malware and security holes. And we’re to trust these manboobed autists that their computer cars *probably* won’t go mad and start running down pensioners and cyclists like in MAXIMUM OVERDRIVE? The tech isn’t up to it and won’t be for a good long time. And they can fuck off anyway. It sounds like some boon but would in fact be an unmitigated evil. They have already killed one person while testing their crap and this pedestrian…
Twatty flatus-blast today. WI-FI in the nut house was never a good idea.
Is Murphy off thorazine yet Twats?
It was far worse in the old days before t’internet. You had to hope all the bugs were found before you created the master. Modern AAA games are constructions of massive complexity and, with the best will in the world, there will always be bugs. AI, though, relies far less on programmers and far more on the algorithms and the data. Testing the algorithms is straightforward. The woman who was killed by the Uber car walked in front of it from the central reservation. In 2016 there were 37,461 fatalities from motor vehicles in the US. Source: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year Self-driving cars will drop…
AI as it stands (and is likely to for a considerable time, hype and bullshit to the contrary) doesn’t work. The driverless nonsense is being pushed by big corpers (or corpsers might be the better word) who see the cash, grants etc. from the scum of the state who want to turn private transport into public. That is to say, a system under their scummy control.
“Scared of AI?”
Petrified.
Coz pattern recognition. Coz you just know the thug State is going to use it. And we all leave our paw prints as we go about our lives. And pattern recognition is going to see patterns. Which can be interpreted as innocent or suspicious. But you can guess which side they’ll come down on…
As should always be the case on this website, the key question is, “Compared to what?” AI driving has not achieved perfection and probably never will. But it is already probably better than the average punk who aims for a life of impunity on the wrong side of the border that should have restrained him. The question is not the state of the art but the state of the law. The anti-inflammatory Vioxx and trans-fats have vanished, not because they were not vastly better than what preceded them, but because they were less than perfect (as is the…
So they won’t kill us deliberately out of malice, but they may kill us just because they had not been programmed to think about it?
Not reassuring.
the sensors in autonomous cars track every object moving or stationery that might be a threat and take action to prevent it from taking place.
Because the pen is more dangerous than the sword?
Alas, thinking is a long way off…
Can’t work out if I’ve missed the point or the author of the piece has missed the point. A driverless car doesn’t need AI. It’s just a very good collision-avoidance system tied into a GPS route-planner. Add a booking system if you want to run it as a taxi. No AI in sight. Could all be done by writing code. The sort of code a cruise missile runs on.
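For what it’s worth, the route-planning half of that claim really is plain old code. A toy Python sketch, with a hand-invented road graph and distances, using nothing fancier than textbook Dijkstra:

    # Toy "GPS route-planner": classic Dijkstra over a made-up road graph.
    import heapq

    ROADS = {  # node -> [(neighbour, distance_km), ...]; all figures invented
        "home":   [("A", 2.0), ("B", 5.0)],
        "A":      [("B", 1.0), ("office", 6.0)],
        "B":      [("office", 2.5)],
        "office": [],
    }

    def shortest_route(start: str, goal: str) -> tuple[float, list[str]]:
        """Return (total_km, route) for the cheapest path, or (inf, [])."""
        queue = [(0.0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbour, dist in ROADS.get(node, []):
                if neighbour not in seen:
                    heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
        return float("inf"), []

    print(shortest_route("home", "office"))  # (5.5, ['home', 'A', 'B', 'office'])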
BiS, it ain’t “AI”. It’s low-level artefacts-in-the-image fed into a system trained to recognise a set of patterns and, for each region in the image, decide whether that area contains an X, a Y, a Z, etc. The training can be insufficient. The image can be sub-par. The rules can be inadequate. It’s just fashionable to call this “AI”. Again, it isn’t. It’s much closer to retinal image processing and a classifier. Probably you can’t usefully do it by writing code, or not as efficiently anyway. The ‘recognizer’ is trained, and thus the training experience tells it what to look…
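A bare-bones Python illustration of that regions-into-a-classifier pipeline. The patch classifier here is a crude stand-in for a trained model (everything in it, image included, is invented for the example), which is exactly where the “insufficient training” failure mode lives:

    # Slide a window over the image; hand each patch to a classifier.
    def classify_region(region: list[list[int]]) -> str:
        """Stub classifier: label a patch by mean brightness.
        A trained model would return 'pedestrian', 'cyclist', etc."""
        pixels = [p for row in region for p in row]
        return "object" if sum(pixels) / len(pixels) > 128 else "background"

    def sliding_windows(image, size=2, step=2):
        """Yield (row, col, patch) for each size x size region of the image."""
        for r in range(0, len(image) - size + 1, step):
            for c in range(0, len(image[0]) - size + 1, step):
                yield r, c, [row[c:c + size] for row in image[r:r + size]]

    image = [  # a tiny fake 4x4 greyscale image
        [0,  0,  200, 210],
        [0,  0,  220, 200],
        [10, 5,  0,   0],
        [0,  12, 0,   0],
    ]
    for r, c, patch in sliding_windows(image):
        print((r, c), classify_region(patch))
    # (0, 0) background / (0, 2) object / (2, 0) background / (2, 2) background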
Quite; it doesn’t “learn” via AI not to hit pedestrians; it is built not to hit pedestrians. Lady darted out in front of a (driverless) car; if braking was applied, the heartless machine worked as well as I would have. And it’s inconceivable the machine could not have seen the obstacle, computed a collision-in-the-making, and started to take corrective action.
The first ever passenger train killed one of its promoters. Obviously, passenger trains were immediately banned.
Tim Urban of Wait But Why fame wrote a series of articles on AI. He quoted a factoid that I found particularly interesting. Moore’s Law says that computing power doubles every eighteen months or thereabouts. Therefore and thusly, the autonomous robot creature of fifteen years hence will have computing power that has been subjected to ten doublings, or roughly a thousand times greater than it is now. FRED will no longer be programmed by humans. FRED will program itself. Smart technology will be everywhere. Couch potatoes will no longer have to get up to go to the fridge. The fridge…
Moore’s ‘law’ (it’s more of an observation about transistor density, in reality) has been running out of steam for the last 10 years (clock speeds haven’t changed much, but we now get 8 processors on each chip). Continuing for another 15 years will lead to transistors significantly smaller than an atom, which looks unlikely.
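A back-of-the-envelope check of both comments, under loudly stated assumptions: density doubling every two years, a 10 nm starting feature size, a silicon atom roughly 0.2 nm across, and the rule of thumb that a density doubling shrinks linear feature size by √2. All round illustrative figures, not process-node data:

    # How far does naive extrapolation of Moore's Law get in 15 years?
    YEARS = 15
    DOUBLING_PERIOD = 2.0      # years per density doubling (assumed)
    START_FEATURE_NM = 10.0    # assumed starting feature size
    SILICON_ATOM_NM = 0.2      # rough atomic diameter

    doublings = YEARS / DOUBLING_PERIOD                      # 7.5 doublings
    density_multiplier = 2 ** doublings                      # ~181x the transistors
    feature_nm = START_FEATURE_NM / (2 ** (doublings / 2))   # ~0.74 nm

    # At the comment above's 18-month cadence, 15 years is 10 doublings: 2**10 = 1024x.
    print(f"{doublings:.1f} doublings -> {density_multiplier:.0f}x density")
    print(f"feature size: {feature_nm:.2f} nm (~{feature_nm / SILICON_ATOM_NM:.0f} atoms wide)")

So even the gentler two-year cadence lands at a feature a few atoms wide by year 15; a couple more doublings and the extrapolation drops below atomic scale, which is the point being made.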
Moore’s Law was indeed about transistor density. Your mention of processors-per-chip is not about transistor density but is still relevant; so are longer word lengths, parallel/pipelined processing, and circuits with more than two discrete logic levels. There is a limit, in size and in generated heat, to increasing the number of switching circuits per volume, but engineers are augmenting computing in many other dimensions to cope with Moore’s Law running out. Market competition may compel them to keep doing so, so a law like Moore’s may still be in effect.
Plus, it turns out that replicating terribly simple processors and providing them with efficient means to communicate lets you get 1-3 orders of magnitude more easily-programmed performance per square mm than current giant x86s can manage. They waste most of their transistors and most gate transitions; simpler is better. And they can be much more secure (if you’re not on the verge of panic every time you truly think of the IoT, you need to, ummmmm, I dunno, wake up and smell the drugged coffee dregs). It’s a seriously Bad Situation.
(Send money! I can help!)
And sometimes there’s a bit of serendipity: it turns out that graphics processing chips developed for gaming are also good for AI.
…There’s a philosophy question about autonomous cars having to make a moral decision whether to crash and kill the occupants or run-over pedestrians/crash into other vehicles…
Easily solved: before the journey, the occupant turns a big red knob to “Save Me” or “Save Others”.
This is the choice a driver makes.
Nah. In real situations it’s never so clear-cut that you can make a well-thought-through decision as to what goal to aim for: incomplete data, not enough time, actual momentum, etc., etc., etc.
You always try to do ‘the best thing’ as it appears at the time. Provided that up to the moment of catastrophe you were within the law, no blame accrues.
Brad Templeton has written about this (as have others, but he seems to make sense).
I remember (vividly) the snowy night that I elected to scrape against a guardrail rather than T-bone a car stalled sideways in the roadway ahead. The strategy was unsuccessful. Bloke, the driver has more time to consider which goal to aim for if the driver is a machine.
I remember when they first introduced the Docklands Light Railway. Amazing how many people were scared to get on the thing; predictions of catastrophe were commonplace. Of course driverless vehicles are of a different order, but the advantages… staggering out of the pub and sleeping it off on the back seat while you’re driven home.
I remember when the Airbus was introduced…
There was an ‘expert’ interviewed (on the BBC, prolly) who stated that, as there were so many millions of lines of code in the system, it couldn’t possibly be thoroughly tested and they would all crash…
My view is the Terminator version of future AI is nonsense. Intelligence implies motivation. What’s the motivation of a self-conscious AI? An intelligent AI thinks. Humans think. The world we live in is delivered to us by our senses. So, effectively, the world we live in is inside our heads. It’s what we think it is. So an intelligent AI would want to do more of what it does. Think. Give it a free hand, it’ll redesign itself to think more. Two ways to do that. Either become physically bigger & accept a continually reducing clock rate due…