Amazon Shut Down Its Recruitment AI For Gender Discrimination

This is a most amusing little story from Amazon: they wanted to use AI to aid in hiring decisions and found that they weren’t getting much aid from it. Thus, of course, they stopped. The amusement comes from the fact that it proves what Gary Becker said all those years ago – discrimination is costly. Thus we’d expect a market-based economic system not to contain all that much discrimination. And that actually is the reason Amazon shut it down: the manner in which it discriminated was costing it money.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

Which is amusing. That the AI has to be trained means it has to use past data to find the patterns which can become the basis of decision making. But if that past data itself contains prejudice or bias, then the AI is going to be prejudiced or biased too.
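To see the mechanism, here is a minimal sketch – the resumes, words and scoring scheme are all invented for illustration, and this is emphatically not Amazon’s actual model – of how a scorer trained on historically skewed hiring data ends up penalising a word like “women’s”:

```python
# A minimal, hypothetical sketch: train a word scorer on past hiring
# decisions and watch it reproduce the skew in those decisions.
# Resumes, words and figures are invented; this is not Amazon's model.
from collections import Counter
import math

# Toy training data: past resumes and whether the applicant was hired.
# The historical pool is mostly male, and that is what gets learned.
history = [
    ("java c++ chess team captain", True),
    ("python java distributed systems", True),
    ("c++ linux kernel contributor", True),
    ("python women's chess team captain", False),
    ("java women's coding society", False),
]

hired_words, rejected_words = Counter(), Counter()
for resume, hired in history:
    (hired_words if hired else rejected_words).update(resume.split())

def score(resume: str) -> float:
    """Sum of smoothed log-odds of each word under hired vs rejected."""
    total = 0.0
    for word in resume.split():
        p_hired = (hired_words[word] + 1) / (sum(hired_words.values()) + 1)
        p_rejected = (rejected_words[word] + 1) / (sum(rejected_words.values()) + 1)
        total += math.log(p_hired / p_rejected)
    return total

# The identical CV scores lower once the word "women's" appears on it,
# purely because that word clustered in the historically rejected pile.
print(score("chess team captain"))
print(score("women's chess team captain"))
```

Note that the scorer has no concept of sex at all; the negative weight on “women’s” falls out purely from the skew in the training labels.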

Now, normally, this would be a good thing. For we’re trying to train the AIs to be useful in this world, as people are, not as they should be:

This brings us to another common argument about AI — that it should not incorporate the things we know about actual human beings.

For example, we know that some to many humans are racist, misogynist, greedy and short-termist. AI too can pick up those foibles, and can definitely show what we would call prejudice.

Insisting they do not is to miss the point entirely. The only possible use of AIs is to provide us with knowledge about the world we live in, knowledge we cannot derive purely from logic but which can only be gained through data processing.

After all, the world is full of deeply prejudiced human beings. An AI which didn’t account for that would have little value in describing our world. That’s why we should not just want, but must absolutely insist that AIs do incorporate our errors.

This is one of those things which is generally true but which isn’t specifically so.

There is quite a movement out there insisting that all algorithms, all AIs, must be audited. That there can be no black boxes – we must know the internal logic and information structures of everything. This is so we can audit them to ensure that none of the conscious or unconscious failings of thought and prejudice that humans are prey to are baked into them.

But, as above, this fails on one ground – that we humans are prey to such things. Thus a description of, or calculation about, a world inhabited by humans must at least acknowledge, if not incorporate, such prejudices. Otherwise the results coming out of the system aren’t going to be about this world, are they?

Again, generally true. But not in this specific case. For one of the things that Amazon would like to do is work out when it is being discriminatory to its own cost. Like, say, when it doesn’t hire qualified and nice and cheap – cheap because they’re being discriminated against elsewhere – women. Making the same mistake as everyone else means losing money, or perhaps not making as much as you could be. In a capitalist world this is a bad thing. And it’s that market bit, that others can be doing things a different way, which militates against your doing this.
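As a back-of-the-envelope illustration of Becker’s point – every figure here is invented – the cost of taste discrimination to the discriminating firm is easy to put a number on:

```python
# Back-of-the-envelope Becker: what taste discrimination costs the firm
# doing the discriminating. Every figure below is invented.
market_wage = 100_000           # wage an equally productive man commands
discrimination_discount = 0.10  # women underpaid 10% because rivals discriminate
hires_per_year = 50

woman_wage = market_wage * (1 - discrimination_discount)

# A non-discriminating firm hires the underpriced, equally productive
# candidates; a discriminating firm forgoes that saving on every hire.
forgone_savings = (market_wage - woman_wage) * hires_per_year
print(f"Cost of discriminating: £{forgone_savings:,.0f} per year")
# -> Cost of discriminating: £500,000 per year
```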

So, here, Amazon would definitely like not to be taste discriminating against women candidates. It would still like to be rationally discriminating, of course – that being the very point of selecting people to hire in the first place: to discriminate.

So, why did Amazon can the gender-discriminatory AI? Because it was losing Amazon money by not hiring qualified women. Given that markets achieve this for us, why do we need regulation of AI?

11 Comments
Spike
6 years ago

You see, unequal numbers of women on the job can only mean you hate women. Amazon did not really want to redesign hiring; it wanted to automate the successful process used in the past and yet be able to point to the automaton and deny all hatred. However, unequal numbers of women on the job can only mean you hate women.

Songstress Taylor Swift pans Tennessee candidate Marsha Blackburn for voting not to renew a law letting Washington overturn workplace decisions: can only mean you hate women. Sillier here, but the identical fallacy.

Spike
6 years ago

PS — This means that human HR people, benefitting from feedback on their past decisions, rejected theoretically “qualified women” because comparable candidates had not worked out. A sizable minority probably quit to have kids. This suggests that female job candidates are not cheap as a result of discrimination. Probably, on average, they are correctly priced. (Again imagining a Star Trek Ferengi: “It is never too late to ask, ‘How much for the woman?’”)

Rhoda Klapp
6 years ago

The AI ought to be told about the whole career. Women may or may not be cheaper, over time, all things considered. OTOH a tech guy might cost millions with one bad line of code.

Spike
6 years ago
Reply to  Rhoda Klapp

I was with you until your OTOH. It’s like playing your bench quarterback because today’s all-but-won game is not worth risking Tom Brady. I always say, if merely doing your job is too risky, you need another line of work.

Bloke on M4
6 years ago

You’ve missed the point of what they’re doing. The AI isn’t seeking value candidates. It’s seeking people like other people they’ve already had CVs from. The point of the exercise, I suspect, is to cut out recruitment agencies. Instead of going to the market and asking an agency to find a C++ programmer, they want their own bank of CVs and then approach people. So it sounds like this AI read existing CVs and scored up words that appeared regularly in them and scored down all other words. Bit like a spam filter. It has no idea what the…

Pat
6 years ago

Could it be that AI came to a similar conclusion to James Damore?

Spike
6 years ago
Reply to  Pat

One of Damore’s conclusions was that women’s aptitude for jobs at Google might differ from men’s, and independently of Male Hatred. The other was that the corporation was irrationally resistant to expressions of such ideas. The current exercise proves the same assertions at Amazon.

Hallowed Be
6 years ago

From what I gather, “Women’s chess team” on a CV wasn’t given as high a score by the AI as “Chess team”. So it had basically learnt to discriminate by sex. (Notwithstanding that it was the chess team that was doing the discriminating.)

Now of course it’s arguable whether it’s out-and-out sex discrimination or qualitative discrimination. But the former would without doubt have been argued, and since that’s against the law, and since even refuting the claim would bring reputational damage, it’s not too surprising Amazon pulled the plug.

Esteban DeGolf
6 years ago

Given what we’ve learned about the PC culture at many of these companies, I’d be curious to know exactly what they found objectionable about the AI in this instance. Perhaps it worked very well but the number of women rated highly wasn’t acceptable? We’ve seen something similar in academia: certain groups don’t score well enough on standardized tests, so we come up with “soft & fuzzy” data points to boost their scores.

I wouldn’t be surprised at all if the problem wasn’t with the AI in this case, but rather that reality didn’t cooperate.
