Scared of AI? You really shouldn’t be…


As a young teen I remember watching The Forbin Project, not long after I’d seen Kubrick’s 2001. Both films featured computers that had ideas of their own about how things should be done. HAL 9000 got its comeuppance as astronaut Dave Bowman yanked its memory modules, payback for killing all four of his crew-mates. More recently, the Terminator series has added its spin: SkyNet will bring down Judgement Day on our heads as soon as it gains sentience.

Let’s face it, AIs have got a bad reputation, and it just got worse a few days ago because a driverless Uber car injured a woman so badly that she died. This isn’t a real-life version of Stephen King’s Christine, just a very unpleasant accident. Apparently, if you walk in front of a 38 mph car from the median (translation: central reservation) when it’s dark, the laws of physics aren’t magically suspended.

It was always likely that even the best AI, equipped with state-of-the-art sensors, would sooner or later find an edge case, as this one did, and there will be others. The engineers who develop these systems learn from each one to improve overall safety, but it’s foolish to expect 100% safety.

Should that accident mean that we abandon autonomous vehicles? Definitely not. They have the potential to eliminate just about every driving death caused by human error. That’s hundreds of thousands of lives saved each year across the globe.

AI is not in itself a danger. Unlike Colossus or HAL 9000, current AIs do not have the kind of general intelligence and creativity needed to conquer the world. Their intelligence is just puzzle solving, particularly finding patterns. They are very good at that and should immediately apply to join Mensa.

AIs can certainly be used to deliberately cause injury or death. A relatively modest $200,000 gets you a gun-toting sentry bot. Match that Apple! Let’s hope they’re less buggy than the ED-209 in Robocop with its “You have 20 seconds to comply”…

There’s a philosophy question about autonomous cars having to make a moral decision: whether to crash and kill the occupants, or run over pedestrians/crash into other vehicles. Well, you could argue Uber have settled that one, but in reality it’s a moot question; the sensors in autonomous cars track every object moving or stationery that might be a threat and take action to prevent it from taking place.

AI does not make moral decisions or judgements. No doubt human ingenuity will find a way to bypass security, but I’d be more worried about autonomous vehicles being used to carry bombs or suicide bombers, or being hacked to kidnap somebody or divert the vehicle over the nearest cliff…


  1. Who actually wants a robot car?

    I don’t even let my wife drive my car, and dislike SatNav because I don’t appreciate an electronic woman telling me what to do. No bloody way am I putting my life in the grasping robotic claws of some computerised monstrosity.

    What could go wrong? “UNEXPECTED ITEM IN DRIVING AREA” *will* go wrong. Followed by death.

    Computer programmers can’t even release a game these days without a massive Day 1 patch, and the IoT is a vast, roiling ocean of malware and security holes. And we’re to trust these manboobed autists that their computer cars *probably* won’t go mad and start running down pensioners and cyclists like in MAXIMUM OVERDRIVE?

    No thanks.

  2. AI as it stands (and is likely to stand for a considerable time, hype and bullshit to the contrary) doesn’t work.

    The driverless nonsense is being pushed by big corpers (or corpsers might be the better word) who see the cash, grants etc. from the scum of the state, who want to turn private transport into public transport. That is to say, a system under their scummy control.

    The tech isn’t up to it and won’t be for a good long time. And they can fuck off anyway. It sounds like some boon but would in fact be an unmitigated evil.

    They have already killed one person while testing their crap, and this pedestrian is the second (or is it the third?).

  3. “Scared of AI?”
    Coz pattern recognition. Coz you just know the thug State is going to use it. And we all leave our paw prints as we go about our lives. And pattern recognition is going to see patterns. Which can be interpreted as innocent or suspicious. But you can guess which side they’ll come down on…

  4. As should always be the case on this website, the key question is, “Compared to what?” AI driving has not achieved perfection and probably will never do so. But it already probably is better than the average punk who aims for a life of impunity on the wrong side of the border that should have restrained him.

    The question is not the state of the art but the state of the law. The anti-inflammatory Vioxx and trans-fats have vanished, not because they were not vastly better than what preceded them, but because they were less than perfect (as is the automobile with me at the wheel). This puts into force the UN’s evil Precautionary Principle, without even saying so: No businessman shall innovate without first proving a negative.

    What is likely to happen is that John McCain or the next generation will write a law, like the Comprehensive Tobacco Settlement, admitting that a small number of deaths will keep occurring, setting up a fund to award the victims’ estates, and apportioning the costs to a newly created cartel of manufacturers that make hefty political contributions.

  5. So they won’t kill us deliberately out of malice, but they may kill us just because they had not been programmed to think about it?

    Not reassuring.

    “the sensors in autonomous cars track every object moving or stationery that might be a threat and take action to prevent it from taking place.”

    Because the pen is more dangerous than the sword?

  6. Can’t work out if I’ve missed the point or the author of the piece has missed the point. A driverless car doesn’t need AI. It’s just a very good collision-avoidance system tied into a GPS route-planner. Add a booking system if you want to run it as a taxi. No AI in sight. Could all be done by writing code; the sort of code a cruise missile runs on.

  7. BiS

    It ain’t “AI”. It’s low-level artefacts-in-the-image fed into a system trained to recognise a set of patterns and, for each region in the image, decide whether that area contains an X, a Y, a Z, etc. The training can be insufficient. The image can be sub-par. The rules can be inadequate.

    It’s just fashionable to call this “AI”. Again, it isn’t. It’s much closer to retinal image processing plus a classifier.

    Probably you can’t usefully do it by writing code, or not as efficiently anyway. The ‘recognizer’ is trained, and thus the training experience tells it what to look for. It turns out it’s quite hard to do that algorithmically.

    But how to do this has changed recently. Once upon a time you ran an overlapping sequence of rectangles over the image, computing histograms of gradients in several directions on each area, and then running a convolution on the result to see if there was an X in the box. You did this at multiple resolutions, so you could discover big elements (a nearby street sign) and tiny elements (the same street sign further away). This was the HoG (Histograms of Oriented Gradients) algorithm; the Wikipedia article is quite good. This was still Hot Stuff in 2010… Now, quite suddenly, everybody’s using convolutional neural networks (CNNs), in which you do many of the same operations, but the things you filter for and convolve with are learned in the training, and you need only one pass to get a classified description of what each region contains.

    Al
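
    The HoG step is easy to sketch. Here’s a minimal single-cell illustration (one histogram over one patch, unsigned 0–180° bins, numpy only); the real detector adds block normalisation, overlapping windows and multiple scales:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    # One HoG cell: every pixel's gradient magnitude votes into an
    # orientation bin (unsigned, 0-180 degrees).
    gy, gx = np.gradient(patch.astype(float))     # down rows, across columns
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist

# A patch whose brightness rises steadily down the rows has a purely
# vertical gradient (90 degrees), so the middle bin collects every vote.
patch = np.tile(np.arange(8.0), (8, 1)).T
print(hog_cell(patch).argmax())  # prints 4: bin 4 of 9 covers 80-100 degrees
```

    A CNN replaces those hand-chosen gradient filters with filters learned during training, but the convolve-and-classify mechanics are recognisably the same.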

    • Quite; it doesn’t “learn” via AI not to hit pedestrians; it is built not to hit pedestrians. The lady darted out in front of a (driverless) car; if braking was applied, the heartless machine worked as well as I would have. And it’s inconceivable that the machine could not have seen the obstacle, computed a collision in the making, and started to take corrective action.

  8. Tim Urban of Wait But Why fame wrote a series of articles on AI. He quoted a factoid that I found particularly interesting. Moore’s Law says that computing power doubles every eighteen months or thereabouts. Therefore and thusly, the autonomous robot creature of thirty years hence will have computing power that has been subjected to twenty doublings, or roughly a million times greater than it is now. FRED will no longer be programmed by humans. FRED will program itself. Smart technology will be everywhere. Couch potatoes will no longer have to get up to go to the fridge. The fridge will come to you.
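
    Checking the factoid’s arithmetic (one doubling every eighteen months, compounded over thirty years):

```python
# One doubling per 18 months, compounded over 30 years:
doublings = 30 * 12 / 18   # 20 doublings
factor = 2 ** doublings    # 2^20 = 1048576, about a million
print(int(doublings), int(factor))  # prints: 20 1048576
```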

    Within the next ten years someone will introduce an inexpensive universal robot chassis. It will have basic sensors and perhaps a locomotion ability. To it you attach whatever special arms, wheels etc. that you need. It will be driven by an operating system that is the equivalent of Linux Release 1.0. Because of its open architecture, developers will embrace it enthusiastically to perform tasks that you reading this will say, BS, a robot will never be able to do that. Home hobbyists will buy it to wash the dishes, vacuum under the bed and remove the ring around the bath, and pretty soon someone will discover that it does a better job of child-minding than a human. (Yes, some babies will die but that’s Darwinism in action.)

    If you have a top-down developmental model, like a planned economy, you are ignoring all the billions of creative inputs that every user-agent will contribute. And that is why all predictions of the speed of implementation of AI are hopelessly pessimistic.

    • Moore’s ‘law’ (it’s more of an observation about transistor density, really) has been running out of steam for the last 10 years (clock speeds haven’t changed much, but we now get 8 processors on each chip). Continuing for another 15 years would require transistors significantly smaller than an atom, which looks unlikely.

      • Moore’s Law was indeed about transistor density. Your mention of processors-per-chip is not about transistor density but is still relevant; so are longer word lengths, parallel/pipelined processing, and circuits with more than two discrete logic levels. There is a limit, in size and in generated heat, to increasing the number of switching circuits per volume, but engineers are augmenting computing in many other dimensions to cope with Moore’s Law running out. Market competition may compel them to keep doing so, so a law like Moore’s may still be in effect.

        • Plus, it turns out that replicating terribly simple processors and providing them with efficient means to communicate gets you 1–3 orders of magnitude more easily programmed performance per square mm than current giant x86s can manage. They waste most of their transistors and most gate transitions; simpler is better. And it can be much more secure. (If you’re not on the verge of panic every time you truly think of the IoT, you need to… I dunno. Wake up and smell the drugged coffee dregs. It’s a seriously Bad Situation.)

          (Send money! I can help!)

          • And sometimes there’s a bit of serendipity: it turns out that graphics-processing chips developed for gaming are also good for AI.

  9. …There’s a philosophy question about autonomous cars having to make a moral decision whether to crash and kill the occupants or run-over pedestrians/crash into other vehicles…

    Easily solved: before the journey the occupant turns a big red knob to “Save Me” or “Save Others”.

    This is the choice a driver makes.

    • Nah. In real situations it’s never so clear-cut that you can make a well-thought-through decision as to what goal to aim for – incomplete data, not enough time, actual momentum, etc.

      You always try to do ‘the best thing’ as it appears at the time. Provided that up to the moment of catastrophe you were within the law, no blame accrues.

      Brad Templeton has written about this (as have others, but he seems to make sense).

    • I remember (vividly) the snowy night that I elected to scrape against a guardrail rather than T-bone a car stalled sideways in the roadway ahead. The strategy was unsuccessful. Bloke, the driver has more time to consider which goal to aim for if the driver is a machine.

  10. I remember when they first introduced the Docklands Light Railway. Amazing how many people were scared to get on the thing; predictions of catastrophe were commonplace. Of course driverless vehicles are of a different order, but the advantages… staggering out of the pub and sleeping it off on the back seat while you’re driven home.

    • I remember when the Airbus was introduced…

      There was an ‘expert’ interviewed (on the BBC prolly) who stated that as there were so many millions of lines of code in the system it couldn’t possibly be thoroughly tested and they would all crash…

  11. My view is that the Terminator version of future AI is nonsense. Intelligence implies motivation. What’s the motivation of a self-conscious AI? An intelligent AI thinks. Humans think. The world we live in is delivered to us by our senses. So, effectively, the world we live in is inside our heads. It’s what we think it is.
    So an intelligent AI would want to do more of what it does: think. Give it a free hand, it’ll redesign itself to think more. Two ways to do that. Either become physically bigger & accept a continually reducing clock rate due to the limitation of the speed of light. That is a danger: it turns all matter it can access into computronium. Or become more compact. My bet’s on the latter, because it increases its ability to react to events in the world outside its “brain” and so preserve its existence. Its clock rate increases. The end-point is that it disappears up its own fundament. Superstring computing or something. In its subjective time it can think for a thousand years in a second of outside time. For all intents & purposes, it’s eternal.
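
    The “bigger brain, slower clock” trade-off is easy to put numbers on: a signal crossing a machine of diameter d at light speed takes d/c seconds, so any globally synchronised “tick” is capped at roughly c/d per second. A rough sketch (the only assumption is c ≈ 3×10⁸ m/s; the sizes are illustrative):

```python
C = 3.0e8  # speed of light in m/s

def max_sync_rate_hz(diameter_m):
    # A global clock tick can't beat one light-crossing of the
    # machine, so the synchronised rate is capped at c / d.
    return C / diameter_m

for d in (0.15, 1.0, 1000.0):      # head-sized, room-sized, city-sized
    print(d, max_sync_rate_hz(d))  # 0.15 m -> 2 GHz; 1 km -> 300 kHz
```

    Growing a thousand times wider costs a thousand-fold in clock rate, which is why my bet is on the compact route.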