More EU Drivel – Lie Detector Tests At Airports

The European Union is to deploy lie detectors at airports to question incoming travellers. Sure, they’re fancy and modern and new because they employ artificial intelligence. The problem with all of this is that lie detectors themselves don’t work. So, to the extent that these are lie detectors, this obviously won’t work either.

Further, the problem with lie detectors is not one that a little refinement, or just the next iteration of the technology, will solve. It’s that the basic underlying assumption is itself incorrect: that there are unfakeable or unmissable physiological signs when someone is lying. It’s that base assumption which is wrong. And the more professional and more trained the person being questioned, the more wrong it is. The outcome of this being that the really bad guys, the ones you really want to catch, are precisely the people who pass lie detector tests and thus don’t get caught by their use:

Passengers at some European airports will soon be questioned by artificial intelligence-powered lie detectors at border checkpoints, as a European Union trial of the technology is set to begin.
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying.

Those signs being looked for just aren’t there – at least among those who have been trained. Heck, even a good poker player can beat the things.

‘We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks,’ said project coordinator George Boultadakis of European Dynamics in Luxembourg.

‘iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.’

Yes, we humans are indeed pretty good at telling when someone is lying. But there simply is no single facial or other physical sign, nor any combination of them, which is present when someone is lying. Nor any set whose absence shows they’re not lying. Thus any system premised on the certainty of catching such signs is bound to be wrong.

According to early testing, the system is around 76pc accurate, but the iBorderCtrl team say they are confident they can increase this to 85pc.

Neither number being useful enough.
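To see why not, run the back-of-envelope arithmetic. A sketch, with entirely hypothetical figures for traveller volume and how rare genuine bad’uns are (the article gives neither), treating 85pc as the accuracy in both directions:

```python
# Back-of-envelope: what "85pc accurate" means at airport scale.
# All figures below are assumptions for illustration, not from the article.

travellers = 100_000          # hypothetical: a busy airport's traffic
bad_rate = 1 / 10_000         # hypothetical: 1 in 10,000 travellers is a bad'un
accuracy = 0.85               # treat 85pc as both sensitivity and specificity

bad = int(travellers * bad_rate)          # 10 actual bad'uns
good = travellers - bad                   # 99,990 honest travellers

true_positives = bad * accuracy           # bad'uns flagged: 8.5
false_negatives = bad * (1 - accuracy)    # bad'uns waved through: 1.5
false_positives = good * (1 - accuracy)   # honest travellers flagged: 14,998.5

print(f"flagged in total: {true_positives + false_positives:,.0f}")
print(f"bad'uns among the flagged: {true_positives:.1f}")
print(f"bad'uns missed entirely: {false_negatives:.1f}")
```

On those assumed numbers the machine flags some fifteen thousand people to catch eight or nine, and still waves a bad’un or two straight through. The exact figures move with the assumptions, the shape of the problem doesn’t.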

They do say that anyone pulled out by this first level of the system then goes off to another, more intense, level conducted by actual people. But that’s not the problem. False positives will indeed be – largely enough – caught by such a system: the AI says “bad’un”, the human level is able to clear them. False negatives, though, slip through the net entirely. A bad’un who isn’t picked out at that first level, because they’re good at lying, is then not checked at all. And given the ease with which those biomarkers can be simulated or avoided, anyone truly a bad’un will have had the training. Thus this AI system is entirely useless at the actual job at hand, catching the bad’uns.
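The two-stage flow can be sketched the same way. The numbers here are again pure assumptions – in particular the guess that trained liars mostly pass the AI stage – the point being structural: stage two only ever sees people stage one flagged, so stage one’s misses are never looked at again, however good the human interrogators are.

```python
# Sketch of the two-stage screen: AI flags, humans review only the flagged.
# All rates are hypothetical assumptions, not measured figures.

bad_untrained = 10              # assumed: naive liars the AI might catch
bad_trained = 10                # assumed: bad'uns trained to fake the biomarkers

ai_flag_rate_untrained = 0.85   # assumed: AI catches most naive liars
ai_flag_rate_trained = 0.05     # assumed: trained liars mostly pass stage one

flagged_bad = (bad_untrained * ai_flag_rate_untrained
               + bad_trained * ai_flag_rate_trained)
missed_bad = (bad_untrained + bad_trained) - flagged_bad

# Stage two sees only the flagged set. Even granting the humans perfect
# accuracy, the bad'uns the AI cleared are never examined at all.
caught = flagged_bad * 1.0

print(f"bad'uns caught: {caught:.1f} of {bad_untrained + bad_trained}")
print(f"bad'uns cleared without a second look: {missed_bad:.1f}")
```

Make the humans at stage two as good as you like and it changes nothing for the eleven (on these assumptions) who were never flagged in the first place.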

But then, you know, the EU, white hot heat of technology and all that.

Chester Draws

People are actually really bad at telling if someone is lying. They *think* they are good at it, but that’s not the same thing at all.

Hence why trials and investigations require evidence. And why decisions based on testimony, like the Kavanaugh hearings, are useless.

Bloke in North Dorset

They’ll probably get better results using racial profiling.

BniC

Will it adjust for the fact that different cultures have different responses and visual cues? Have to consider diversity, after all.

Esteban DeGolf

I think Tim has jumped the gun a bit here, this may be very useful depending on several things. For instance if it’s 85% accurate and humans are only 60%, that’s an improvement. It may also be much cheaper than a human (probably will be in time). The rate of false positives versus negatives could also change the analysis of how useful this is – it isn’t necessarily the case that it is 15% incorrect in both directions. If this system catches 90% of bad’uns that could be enough to drive almost all the bad’uns to try someplace else. Finally,…

john77

There are claims that psychopaths look the same when lying, and they are the ones we really need to stop.