Policing using artificial intelligence could lead to ‘predictive’ crime prevention, experts argue

A pilot program in the UK to improve police capabilities using artificial intelligence has proven successful, but could pave the way for a future of “predictive surveillance”, experts told Fox News Digital.

“Artificial intelligence is a tool, like a firearm is a tool, and it can be useful, it can be deadly,” Christopher Alexander, CCO of Liberty Blockchain, told Fox News Digital. “In terms of the holy grail here, I really think it’s the predictive analytics capability that if you get better at that, you have some pretty scary capabilities.”

British police in different communities have been experimenting with a system powered by artificial intelligence (AI) to help catch drivers committing offenses such as using their phone while driving or driving without a seat belt. Offenders could face a £200 ($250) fine for using a phone while driving.

A week-long trial at locations in East Yorkshire and Lincolnshire caught around 239 drivers breaking the rules, the BBC reported. The program also saw a trial in late 2022 in Devon and Cornwall, which caught 590 drivers not wearing a seat belt over a 15-day period.

Safer Roads Humber, which helped set up the trial in cooperation with Humberside Police, told Fox News Digital that the program is not entirely AI-run, but involves human review to catch errors. The AI uses computer vision to determine whether a person is not wearing a seat belt or is using a phone, and positive results are passed to a human for verification.

The initial review process takes up to five seconds, with false positives automatically deleted, a spokesman for Safer Roads Humber explained. The system connects over mobile networks, so reviewers can check the results remotely.
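The workflow described above, where the AI flags possible offenses, low-confidence hits are automatically discarded, and only a human reviewer can confirm a violation, can be sketched in outline. This is a minimal illustration only: all names, fields, and thresholds here are hypothetical, as the actual Safer Roads Humber system is not public.

```python
# Hypothetical sketch of the human-in-the-loop triage flow described in the
# article. The real system's data model and thresholds are not public.
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str          # vehicle identifier (hypothetical field)
    offense: str        # e.g. "phone_use" or "no_seat_belt"
    confidence: float   # model confidence score, 0.0 to 1.0

def triage(detections, threshold=0.5):
    """Auto-discard likely false positives; queue the rest for human review."""
    review_queue = [d for d in detections if d.confidence >= threshold]
    discarded = [d for d in detections if d.confidence < threshold]
    return review_queue, discarded

def human_verify(detection, reviewer_confirms):
    """Only an explicit human confirmation turns a flag into an offense."""
    return detection if reviewer_confirms else None
```

The key design point the spokesman describes is that the model never issues a fine on its own; it only narrows the set of images a human must look at.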

Permanent implementation of the system would require more cameras, but the cameras and equipment can be mounted on a vehicle or trailer that can be left at the roadside for weeks or even months, the spokesman said.

“Personally, I think a mobile solution would work better as it would ensure road users change their behavior at all times and not just at a static point,” said Ian Robertson, partnership manager for Safer Roads Humber.

Brian Cavanaugh, a visiting fellow at The Heritage Foundation’s Center for Immigration and Border Security, expressed concern that high-surveillance countries like the UK could invest further in pairing AI with their massive camera networks, which could lead to more authoritarian state control as an unintended consequence.

“I absolutely see this as a slippery slope,” Cavanaugh told Fox News Digital. “You’re moving from an open and free society to one that you can control through facial recognition [technology] and AI algorithms – you’re basically looking at China.

“The UK will use safety and security metrics to say, ‘Well, that’s why we did it for phones and cars.’ And then they’ll say, ‘If you have, say, guns … what’s next in their list of crimes that you take for safety and security reasons?’” he added. “All of a sudden, you’re creating an authoritarian, technocratic government where you can control society through carrots and sticks.

“I think there is the ability to go from observations to predictive measures, but with that you have the possibility of false positives and the risk of a margin of error.”

Cavanaugh argued that the best use of artificial intelligence in policing would focus on understanding crime rates, using data to make more informed decisions about the allocation and deployment of resources. He stressed the need to keep human discretion at the center of any police policy and said society should never let AI “take the place of the officer”.

Alexander described the most extreme version of this practice as “predictive surveillance,” similar to the type of enforcement seen in the movie “Minority Report.”

The Israel Defense Forces (IDF) recently discussed how they used AI to help determine targets during conflict, and even to identify possible locations of enemy fighters or terrorists from available data, a practice that resulted in successful operations against at least two Hamas commanders in 2021.

Colonel Yoav, commander of Data Science and AI, said AI helped the IDF do in days what might have taken “almost a year” to complete otherwise.

“We take original subgroups, calculate their closed circle [of personal connections], calculate relevant features, classify results and determine thresholds, using feedback from intelligence officers to improve the algorithm,” he explained.

Alexander warned that these advances will often begin in the military and intelligence community, then “trickle down” to the private sector.

“Presumably, you’re going to have more and more data,” Alexander argued. “People will think more about picking it up, and we’ll improve predictive capabilities, and … could the police show up in riot gear two hours before a riot starts?”

He also used the example of the IRA, asking whether British police might even end up using AI to get warrants and execute a search “just as people are settling in”.

“I think predictive capabilities are where the focus is … and it makes perfect sense to be in the future,” he concluded.