AI could detect rogue police officers, says expert

Artificial intelligence could be used to identify “rogue” police officers were it not for a lack of “political will”, a group of peers has heard.

Professor Karen Yeung, of the University of Birmingham, told the Lords Justice and Home Affairs Committee to think carefully about who is being targeted by crime-fighting algorithms.

Prof Yeung, who researches AI at Birmingham Law School, said: “We’re not building criminal risk assessment tools to identify insider trading or who’s going to commit the next corporate fraud, because we’re not looking for those kinds of crimes and we do not have high volume data.

“This is really pernicious. What is going on is that we are looking at high volume data, which is mostly about poor people, and we are turning them into prediction tools about poor people and we are leaving whole swathes of society untouched by these tools. So, this is a serious systemic problem and we need to be asking those questions.”

Alluding to concerns that the police should have identified the risk posed by serving police officer Wayne Couzens before he murdered Sarah Everard in March 2021, Prof Yeung added: “Why are we not collecting data, which is perfectly possible now, about individual police behaviour?

“We might have tracked down rogue individuals who were prone to committing violence against women. We have the technology.

“We just don’t have the political will to apply them to scrutinise the exercise of public authority in more systematic ways than the way in which we are towards poor people.”

Prof Yeung made her comments at a session of the Justice and Home Affairs Committee on Tuesday examining the use of new technology in law enforcement, during which she called for greater transparency over how algorithms are designed and used in the criminal justice system.

The committee also heard concerns regarding police use of live facial recognition software, which Silkie Carlo, director of Big Brother Watch, described as “disproportionate”.

Ms Carlo said the Metropolitan Police had achieved just 11 true positive matches over “four or five years” of testing on the streets of London, along with “an awful lot of false positive matches”, after capturing tens if not hundreds of thousands of people’s faces.

Even some of the positive matches, she added, were of people who were not wanted in connection with any crime but appeared on databases of people with mental health problems or protesters.

She said: “Their current rate over the entirety of their employment is 93% false positive matches, so I struggle to see a world in which that could be considered proportionate.”

Prof Yeung added that the police did not know how many false negatives the technology had returned, because it had only been used in live trials rather than under controlled, scientific conditions.

The Metropolitan Police claim they use facial recognition in a lawful and proportionate way.