
No, Don’t Regulate AI And Certainly Don’t Inclusively Manage It

We must regulate artificial intelligence because – well, according to the Fabians we must regulate artificial intelligence because. Their argument being that it produces results they don’t like so they must get to regulate it to produce results that they do like.

This is all dressed up in the usual feminist arguments. It’s largely men who do the AI coding, largely white men to boot. Therefore that coding will encapsulate their innate prejudices, and this would be bad. They do mention the idea that perhaps women could be doing more of the coding, then reject it on, I assume, the grounds that math is hard. Instead the insistence becomes that women – more specifically, women who share the Fabian outlook – should be able to tell those white men what to code into AIs.

This is all backed up by this example of how current AI is wrong:

What if the workforce designing those algorithms is male-dominated? This is the first major problem: the lack of female scientists and, even worse, the lack of true intersectional thinking behind the creation of algorithms.

Examples of bias were reported by the Guardian a few years back, showing that searching Google for the phrase “unprofessional hairstyles for work” led to images of mainly black women with natural hair, while searching for “professional hairstyles” offered pictures of coiffed white women.

The thing is, there’s nothing wrong with that coding. Sure, it might not reflect the society we’d like to have, but it’s a pretty good representation of the one we do have. Vast afros are thought to be unprofessional for work; a tightly coiled bun is thought to be rather more professional.

No, stop, do not start thinking about how that’s not the way it should be. I agree entirely that it shouldn’t. But is that a useful outline of how current society works? Yep, it is. So, those white male AI coders haven’t made some dreadful mistake there; we’ve got a reasonable approximation of how our society does work today.

It’s entirely true that if one or other line of thinking, one political or social philosophy, manages to gain control of the AI coding, then the AIs will work in different ways. They might even reflect the society wished for by that philosophy. They’ll also not be very useful, because they’ll neither reflect nor manage to work in the reality we inhabit. And since we’re never going to give absolute power over all AIs to just the one group, those that don’t work will simply be outcompeted and die off.

The Fabian demand is that AIs should be built to reflect their desired world. Which has two problems, the first being that not all share their desires. The second and much more important one being that AIs designed to their desired world won’t work in this extant world – an AI that doesn’t work not being all that useful.

12 Comments
Rhoda Klapp
6 years ago

The Fabians were very taken with this bit of Omar Khayyam:

Ah Love! could thou and I with Fate conspire
To grasp this sorry Scheme of Things entire,
Would not we shatter it to bits — and then
Re-mould it nearer to the Heart’s Desire!

Except they allow no disagreement, and all attempts to build the better scheme of things elsewhere and get folks to switch voluntarily have failed, miserably.

Spike
6 years ago

Regulating AI to avoid promulgation of a “male viewpoint” is being done at the same time that many universities are disparaging as male the concepts of scientific method, study of Western Civ, individual accountability, algebra, and so on. The new call for censorship won’t just break AI but will further empower the worthless.

So Much For Subtlety
6 years ago

It is enough to make me pray for Skynet.

Bloke on M4
6 years ago

I’m pretty sure that Google’s Image search isn’t even using much “AI”. Follow any of the “professional hair for work” results and you find pages where the title of the page includes “professional” and “hair”. Then someone probably linked to it from another site, boosting the ranking. The problem for Guardian types is that what’s actually going on there, with both algorithms like PageRank and with AI, is that they’re more democratic than the Guardian is. The whole attraction of analysing data is that it avoids human bias. It’s why Tesco and Wal-Mart crushed their opponents for a while –… Read more »
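A rough illustration of that sort of non-AI ranking (invented pages and scoring, nothing to do with Google’s actual system): a page floats to the top simply because its title matches the query words and other sites link to it.

```python
# Toy ranking sketch: score pages by query words appearing in the title,
# plus a boost for inbound links. Purely illustrative data and weights;
# real search ranking is vastly more involved.

pages = [
    {"title": "Professional hairstyles for work", "inbound_links": 40},
    {"title": "Unprofessional hairstyles to avoid at work", "inbound_links": 5},
    {"title": "Cooking for beginners", "inbound_links": 30},
]

def score(page, query):
    title_words = set(page["title"].lower().split())
    keyword_hits = sum(word in title_words for word in query.lower().split())
    return keyword_hits * 10 + page["inbound_links"] ** 0.5

query = "professional hairstyles"
for page in sorted(pages, key=lambda p: score(p, query), reverse=True):
    print(round(score(page, query), 1), page["title"])
```

There is no intent anywhere in that code, just keyword matching and link counting.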

So Much For Subtlety
6 years ago
Reply to  Bloke on M4

The problem is that the data is racist – at least to the extent that the data does not present the world in the way that the Social Justice Warriors would like. IQ is the prime example. But another one would be the App that told you if you were going to a dangerous neighbourhood or not – very useful for tourists one would think. But of course what it did in effect was tell you if the area of interest had a lot of Black people. The truth just is sexist and racist. There is no way around it.… Read more »

BraveFart
6 years ago

The discrepancy in the perceived professionalism of female workers with different hair styles is nothing to do with race. It is probably to do with whether females with neat hairstyles or 1 metre wide afros are more likely to drop a hair in your sammich or coffee when they’re making it for you to eat or drink or to get their hair tangled in filing cabinets or keyboards.

Arthur the Cat
6 years ago

In the example given the results are fuck all to do with the coding of the algorithm, they come from the training set fed to the algorithms. Algorithms are no more male or female than gravity is.

Dongguan John
6 years ago
Reply to  Arthur the Cat

That’s how I understand it. I could be incorrect, but don’t the coders build some neural network and then feed it info to learn for itself?

Any prejudices will come from the data fed in, not the programmer.
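That is the gist of it. A minimal sketch of the same point, with made-up data and no real system: the learning code is identical in both runs below, only the training data changes, and so does what gets “learned”.

```python
# Same learner, two different (invented) training sets: the association it
# "learns" is entirely a product of the data, not of the code.

from collections import Counter

def learn_association(training_pairs, query):
    """Return the label most often paired with `query` in the training data."""
    counts = Counter(label for term, label in training_pairs if term == query)
    return counts.most_common(1)[0][0] if counts else None

data_a = [("professional", "style_x")] * 8 + [("professional", "style_y")] * 2
data_b = [("professional", "style_x")] * 2 + [("professional", "style_y")] * 8

print(learn_association(data_a, "professional"))  # style_x
print(learn_association(data_b, "professional"))  # style_y
```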

Mr Ecks
6 years ago

The Fabians – those friends of eugenics and advocates of mass murder – are scum.

They likely fear that AI might create something evil.

AI would have to go some to create anything as bad as the fucking Fabians themselves.

David Bolton
6 years ago

AI is largely about very sophisticated pattern matching and in many cases uses statistical methods. You devise a model, then train it using some input data. For instance, you can provide it with a bunch of photos of faces labelled by gender. After a while, it will be able to look at a new photo and identify the gender with high accuracy. But that’s barely scratching the surface of what it can do. The spy cameras dotted around the UK’s roads are very good at reading number plates thanks to AI. Likewise digital speed cameras. Motion sensors… Read more »
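For concreteness, here is a toy version of that train-then-predict pattern, using synthetic numbers rather than real photos (scikit-learn is used purely as an example library): label some examples, fit a model, then ask it about something it has never seen.

```python
# Minimal supervised-learning sketch: synthetic feature vectors stand in for
# the pixel features a real face or number-plate system would extract.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 100 labelled examples of each of two classes.
class_0 = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
class_1 = rng.normal(loc=2.0, scale=1.0, size=(100, 5))

X = np.vstack([class_0, class_1])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# A new, unseen example gets matched to whichever learned pattern it resembles.
new_example = rng.normal(loc=2.0, scale=1.0, size=(1, 5))
print(model.predict(new_example))  # almost certainly [1]
```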

NiV
6 years ago
Reply to  David Bolton

The problem is that a general pattern matcher can often find the wrong pattern. Say a university researcher develops an AI to look at people’s academic records and figure out which are going to be the high-flyers. They train it by feeding in the academic records of a lot of young researchers, and how they’re doing 20 years later, and it identifies hidden patterns. The thing achieves an 80% success rate. No more fallible humans doing interviews! Get an AI to do it objectively! Then somebody does a few experiments, and finds that almost all of the record has no effect… Read more »
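A hedged sketch of that failure mode, with entirely invented numbers: the model latches onto a feature that merely correlated with the real driver in the historical records, scores well there, and collapses to coin-flipping once the correlation goes away.

```python
# Spurious-pattern sketch: the outcome is driven by an unobserved "talent"
# variable; the model only ever sees a proxy that happened to track talent
# in the historical data. All numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

talent = rng.normal(size=n)                                   # real driver (unseen)
outcome = (talent + 0.3 * rng.normal(size=n) > 0).astype(int)

proxy_hist = (talent + 0.5 * rng.normal(size=n)).reshape(-1, 1)
model = LogisticRegression().fit(proxy_hist, outcome)
print("historical accuracy:", model.score(proxy_hist, outcome))  # well above chance

# New cohort where the proxy no longer tracks talent: back to about a coin flip.
talent_new = rng.normal(size=n)
outcome_new = (talent_new + 0.3 * rng.normal(size=n) > 0).astype(int)
proxy_new = rng.normal(size=n).reshape(-1, 1)
print("new cohort accuracy:", model.score(proxy_new, outcome_new))
```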

Bloke in North Dorset
6 years ago

As others have said, there’s no intelligence here, it’s just classic garbage in (the training data), garbage out.

Here’s a very good video on how neural networks work and how they are trained. It takes the simple case of number recognition and is aimed at the non-techie: https://www.youtube.com/watch?v=aircAruvnKk

Check out his Bitcoin explanation as well, it’s the best one I’ve come across: https://www.youtube.com/watch?v=bBC-nXj3Ng4
