With the recent uptick in protests around the country over high-profile, controversial police cases involving race, some experts have begun looking for technological solutions.
Enter artificial intelligence, also known as AI.
Professor Ryan Abbott posits that artificial intelligence could help ease the current tensions by taking humans out of the equation.
“AI can automate police work, which you already see with traffic cameras and automated tickets, and it can augment human police through technologies like facial recognition,” said Abbott in an email to The College Fix.
Abbott is professor of law and health sciences at the University of Surrey School of Law and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA.
He is the author of a new book, “The Reasonable Robot: Artificial Intelligence and the Law.”
“Biases are an inevitable part of both AI and human decision-making, but some are morally and legally unacceptable,” Abbott said in a news release announcing the book’s publication over the summer.
“…A way to manage human bias is to cede some agency to AI, which can be explicitly programmed to never consider race or even proxy variables; doing so might be the best chance for society to avoid discrimination in a racially stratified world.”
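In practice, programming an AI to “never consider race or even proxy variables” amounts to excluding those columns from a model’s inputs before training. The sketch below is illustrative only; the column names, the proxy list and the library choices are assumptions, not anything Abbott specifies. The hard part, as his mention of proxy variables hints, is that any feature correlated with race can smuggle it back in, so deciding what counts as a proxy is ultimately a policy question.

```python
# Illustrative only: excluding a protected attribute and known proxy
# variables from a feature set before any model training happens.
# All column names here are hypothetical.
import pandas as pd

PROTECTED = {"race"}
# Proxies are features that correlate strongly with a protected attribute
# (a zip code can stand in for race, for example). This list is a
# placeholder; in reality it can never be guaranteed complete.
KNOWN_PROXIES = {"zip_code", "surname"}

def strip_protected_features(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the data with protected attributes and proxies removed."""
    banned = (PROTECTED | KNOWN_PROXIES) & set(df.columns)
    return df.drop(columns=list(banned))
```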
Abbott is not alone.
Blackboard Insurance cloud data architect Gary Cheung argued in a July Medium piece headlined “Using AI & analytics to stop police brutality against minorities” that “artificial intelligence can provide a more automated and objective way to ensure justice for racial minorities and prevent police brutality.”
Police forces have set rules of engagement for encounters; however, the data collected from post-encounter police reports suffers from a high degree of variability and inefficiency, according to Cheung.
To standardize this process, Cheung recommends a machine-learning approach: audio and video from dash cams and body cams would be used to train a model that evaluates officer compliance and effectiveness.
Cheung suggests various metrics for evaluating an interaction, such as language sentiment, protocol adherence and racial bias, and stresses that such a model would be merely a tool in the utility belt of internal-affairs auditors, not a replacement for them.
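Cheung’s piece stays at the conceptual level, but the scoring step he describes might be sketched roughly as follows. The metric heuristics and thresholds here are placeholder assumptions; a real system would compute them with models trained on the dash-cam and body-cam data Cheung describes.

```python
# Hypothetical sketch of scoring a transcribed encounter on the metrics
# Cheung names. These placeholder heuristics stand in for trained models.
from dataclasses import dataclass

@dataclass
class EncounterScore:
    language_sentiment: float   # -1.0 (hostile) to 1.0 (calm, professional)
    protocol_adherence: float   # fraction of required steps followed, 0.0 to 1.0
    flagged_for_review: bool    # routed to internal affairs, not auto-punished

def score_encounter(transcript: str, required_steps: list[str]) -> EncounterScore:
    # Placeholder sentiment: share of sentences with de-escalating language.
    sentences = [s for s in transcript.split(".") if s.strip()]
    calm = sum("please" in s.lower() or "thank" in s.lower() for s in sentences)
    sentiment = (2 * calm / len(sentences)) - 1 if sentences else 0.0

    # Placeholder adherence: fraction of required phrases (identifying
    # oneself, stating the reason for the stop, etc.) that appear.
    followed = sum(step.lower() in transcript.lower() for step in required_steps)
    adherence = followed / len(required_steps) if required_steps else 1.0

    # Low scores flag the encounter for a human auditor; these cutoffs
    # are arbitrary illustrations, not recommended values.
    flagged = adherence < 0.5 or sentiment < -0.3
    return EncounterScore(sentiment, adherence, flagged)
```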
In a separate interview with The College Fix, Cheung said that there “will definitely be pushback from police officers.”
“However,” he added, “this type of evaluation provides major benefits.”
He said the benefits include increased transparency, safety and consistency.
Regardless, the point that Cheung makes is simple: “the goal here isn’t to score/punish police officers for every infraction. The goal is to identify at-risk officers with … tendencies that severely deviate from the norm.”
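In machine-learning terms, finding officers whose tendencies “severely deviate from the norm” is outlier detection. A minimal sketch, assuming each officer has already received an aggregate score from a model like the one above; the score scale and the three-sigma cutoff are hypothetical:

```python
# Hypothetical sketch of the "at-risk officer" step: surface officers whose
# aggregate scores deviate sharply from the department norm.
from statistics import mean, stdev

def flag_outlier_officers(avg_scores: dict[str, float],
                          n_sigma: float = 3.0) -> list[str]:
    """Return IDs of officers scoring n_sigma or more below the department mean."""
    values = list(avg_scores.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    return [officer for officer, score in avg_scores.items()
            if sigma > 0 and (score - mu) / sigma <= -n_sigma]
```

A high cutoff reflects exactly the point Cheung makes: routine imperfection is ignored, and only severe deviations reach the auditors.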
Whether this type of policing would ever be acceptable remains to be seen, especially on campus. In February, UCLA scrapped plans to use facial recognition software for security surveillance.
MORE: ‘Surveillance Society’ professor warns: Students far too lax with online info
IMAGE: Carlos Castilla / Shutterstock