
Crime: proof through brain imaging?

Neurotechnologies have made it possible to considerably refine lie detection techniques. While these devices are now much more reliable, they nonetheless raise numerous legal and ethical questions. Evidence derived from observing the brain is, moreover, deemed inadmissible by most courts around the world.

Alla Katsnelson
Freelance science writer and editor based in Northampton, Massachusetts, United States. 

In the early 1990s, physicians at the Strasbourg University Hospital in France reported the strange case of a 51-year-old man who experienced unusual epileptic seizures. More than a third of the seizures, it seemed, occurred when he lied for business reasons. 

The physicians soon determined the cause of the seizures – a tumour pushing against his amygdala, the brain region that regulates emotions such as fear. Researchers think that the fear he experienced when lying, rather than the lie itself, set off the seizures. Presumably, then, similar emotions felt for other reasons would also set off the same electrical cascade in his brain, says Rebecca Wilcoxson, a forensic psychologist at Central Queensland University in Australia. 

There is no single characteristic, in the body or the brain, that appears when and only when someone is lying, Wilcoxson explains. But over the past two decades, neuroscientists have investigated whether tracking different forms of brain activity could provide some level of lie detection to help guide law enforcement agencies. 

Contested techniques

They have focused broadly on two technologies. One, called functional magnetic resonance imaging (fMRI), tracks blood flow in the brain to gauge patterns of brain activity. The premise is that telling a lie takes more cognitive load than telling the truth – and that this difference would be detectable by this form of brain imaging. By putting someone in an fMRI scanner, asking them specific questions while imaging their brain, then processing those images, researchers say they can determine the veracity of a person’s statements.

The other modality, electroencephalography (EEG), looks for a blip of electrical activity called the P300, which occurs about 300 milliseconds after a person experiences a stimulus – say, a word or a picture on a screen. The P300 signal isn’t lie detection per se, but corresponds to familiarity with a stimulus, explains Robin Palmer, a legal expert on forensics at the University of Canterbury in New Zealand. So, for example, investigators could ask whether a person is familiar with elements of a crime scene or with a murder weapon.

Some studies suggest that when used correctly, these techniques can be highly accurate – significantly more so than a polygraph test. But their use raises many issues. In the United States, brain-based lie detection was presented as evidence in a couple of criminal cases about a decade ago. However, the use of the technology was challenged on appeal and found not to meet the Daubert standard, which determines the admissibility of scientific evidence in court.

Today, these techniques are still considered inadmissible in most countries worldwide. Early on, law enforcement agencies in India and Japan used EEG-based lie detection technology, but they no longer do, says James Giordano, a neuroscientist and ethicist at Georgetown University Medical Center, Washington DC. 

Guilty verdict

In 2008, India became the first country to convict someone of a crime relying on evidence from an EEG brain scan. Aditi Sharma, a 24-year-old business student from Pune, was convicted of killing her ex-fiancé by poisoning him. The case generated worldwide attention, but the verdict was overturned a year later. In June 2021, Sharma and her new partner were eventually found guilty of the crime, and the veracity of the brain scan was never called into question.

Studies investigating these lie detection techniques have been small, and participants have mainly been university student volunteers. “We have to show that it works in real life,” says Jane Moriarty, a professor of law specializing in neuroscience and scientific evidence at Duquesne University in Pittsburgh, US. “We haven’t shown that yet.”

The electroencephalogram test is much simpler and cheaper to use, involving just a light portable headset. But its use has been mired in controversy. “Because of insufficient independent corroboration of its accuracy and reliability, it hasn’t had much traction,” says Palmer. He recently set out to validate the P300 signal, testing it in both university students and in people who were imprisoned for a violent crime. It worked almost perfectly in students, he reports, and slightly less well in imprisoned subjects, who were less co-operative and more impulsive. “We are satisfied that the method of detecting knowledge in the brain using P300 is generally accurate and reliable.”

Searching the brain

But even so, Palmer cautions, ethical and legal issues surrounding its use abound. For example, if police believe someone has insider knowledge about a crime, can they force the person to take the test? “Is it possible to get a search warrant for somebody’s brain?” he asks. He plans to work with police in New Zealand to trial the technology with paid informants, who would volunteer to take the test.

Another issue is how these tools interact with memory, explains Moriarty. Suppose you’re shown a photo of a suspect, but the person looks very much like a close friend. Would your brain show a P300 signal? Similarly, an object central to a criminal case might coincidentally look like something a person is familiar with in a different context. “Those are some of the concerns I have,” she says. “First, does mistaken recognition look like recognition, and second, how do you know the person doesn’t recognize something in a non-inculpatory fashion?” What’s more, she adds, “people taking the tests may be able to intentionally confound their own results.”

There’s also the danger of misuse by authorities, notes Palmer. Imagine that police arrest someone they suspect stole an object. If an officer shows it to the suspect, the person will appear guilty when tested. “That’s why it can never be entrusted to police units to do this,” he says. “It’s got to be independent units who do the testing.”

It’s hard to gauge the extent to which government agencies are using such technology, experts say. The Pentagon, which houses the US Department of Defense, has supported research into high-tech lie detection, including the use of fMRI. But such technologies are also commercially available. For example, Brainwave Science, a company based in Massachusetts, US, says on its website that it has developed a P300 testing system which measures brainwaves to help law enforcement agencies in areas including national security, counter-terrorism, criminal justice, and immigration control.

The complexity and sophistication of brain-based technologies are evolving, Giordano states. Today, no brain technology on its own is “to the point where it is sufficient as a stand-alone metric to be able to make legal conclusions as regards guilt,” he adds.

But that time may not be far away. Scientists are increasingly using machine learning and artificial intelligence to pull out cleaner signals from brain data. “The 800-pound gorilla in the room is that we simply do not know how ‘mind’ actually occurs in ‘brain’,” he concludes.  “What the technology is allowing us to do is to gain insights.” 

Should we be afraid of neuroscience?
UNESCO
January-March 2022