AI Can’t Recognise Black Skin Tones – Excellent, That’s How Science And Engineering Work

There’s a certain shock horror to the finding that AI-driven image recognition systems cannot recognise black skin tones. Or, perhaps, can’t recognise them as well as the more pinkish ones common in the upper reaches of the Northern Hemisphere. Undoubtedly this is proof positive of the racism of the capitalist plutocratic world cont. pg 94.

Except, of course, this is merely how science and engineering work. We try something out and the early versions are pretty skimpy, to be sure. We observe that skimpiness, think through why it’s there, and go back to working on Version 0.02. And V 0.03, and so on, until we either give up in frustration – those personal jet packs – or get it right. Say, working out what causes sickle cell anaemia and why.

Thus this isn’t some shock horror:

The racism of technology – and why driverless cars could be the most dangerous example yet
‘Machine vision’ is struggling to recognise darker-skinned pedestrians, and cost pressures could make things worse

The claim being that gammons like me are well recognised by those driverless cars while they’ll plough through blacks like a bowling ball through the pins. This might even be true at the current level of the technology. But that’s not the point at all. No one is going to release for general use a technology which slaughters by melanin content. Not while the plaintiffs’ bar exists, at least.

Self-driving cars are by no means the first technology to fail when confronted by other ethnicities: Google’s image-recognition system notoriously failed to discern black people from gorillas. Almost every product design has failed to grapple with the reality of humanity, from Kodak colour film that reduced dark skin to a pitch-black smudge; to motion-activated taps and driers that refuse to acknowledge the presence of a brown hand but will trigger for a white one.

A 10-tonne driverless truck poses a higher penalty for error, however. The good news is that most actually existing self-driving cars use more than one type of sensor, including several that do not rely on visible light at all: Tesla cars, for instance, have a radar built in to the front of the vehicle, while Google’s Waymo uses a bulky, but extraordinarily accurate Lidar system instead; think radar but with lasers. The bad news is that there is strong market pressure to move towards camera-only systems because of the huge cost savings. Such systems would only hit the streets in large numbers if they proved significantly safer than human drivers, but even that raises the important question: safer for whom?
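A rough back-of-the-envelope illustration of why that sensor redundancy matters – the miss rates below are entirely hypothetical, invented for the arithmetic rather than taken from any real system – is that a pedestrian only goes undetected if every independent sensor fails at once, so the failure rates multiply:

```python
# Illustrative sketch only: the miss rates are made-up, hypothetical numbers,
# not measurements from any actual vehicle or sensor.

def combined_miss_rate(miss_rates):
    """Probability that every sensor misses, assuming independent failures."""
    p = 1.0
    for rate in miss_rates:
        p *= rate
    return p

camera_only = combined_miss_rate([0.10])             # camera alone misses 10% (hypothetical)
with_fusion = combined_miss_rate([0.10, 0.02, 0.01]) # camera + radar + lidar (hypothetical)

print(f"camera only:   {camera_only:.4f}")   # 0.1000
print(f"sensor fusion: {with_fusion:.6f}")   # 0.000020
```

Which is rather the point of the cost-pressure worry: strip out the radar and the Lidar to save money and the whole job of spotting the pedestrian falls on the one sensor with the known skin-tone problem.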

Sure, the new technology isn’t right yet. That’s why it’s not in every car already. And it won’t be until it is right. So what, then, is our problem? We’ve an interesting engineering challenge, sure, but that we’re not all going to be using the technology until it stops preferentially killing blacks is rather more a sign of the non-racism of our society than of anything else.