There’s a logical failing in the current preoccupation with AI, ‘bots and the possibility that they contain bias. Because, of course, the very thing we’re trying to encode is bias – that’s the point and purpose.
Take a general view of the world. We’re going to automate some set of human decisions. You can borrow money, you can’t, say. OK. Does the real world out there offer us some evidence of who may usefully borrow and who may not? Are we being biased or sensible when we deny Dominic Chappell a few hundred million? Or when we change the price of car insurance by postcode?
The entire point of a ‘bot is to try to encode the things we already know. The point of AI is to uncover things in the data that are true but that we don’t as yet know. In both cases the aim is that we can apply the correct biases to our decision making.
At which point we’ve some dingbat at the UN:
Robotic lie detector tests at European airports, eye scans for refugees and voice-imprinting software for use in asylum applications are among new technologies flagged as “troubling” in a UN report.
The UN’s special rapporteur on racism, racial discrimination, xenophobia and related intolerance, Prof Tendayi Achiume, said digital technologies can be unfair and regularly breach human rights. In her new report, she has called for a moratorium on the use of certain surveillance technologies.
Achiume, who is from Zambia, told the Guardian she was concerned about the rights of displaced people being compromised. She said there was a misconception that such technologies, often considered “a humane” option in border enforcement, are without bias.
“One of the key messages of the report is that we have to be paying very close attention to the disparate impact of this technology and not just assuming that because it’s technology, it’s going to be fair or be neutral or objective in some way.”
Border control technologies are biased because bias is the very point of a border. These peeps can come in, those can’t. That’s bias. So why’s she complaining?