Police departments near the US-Mexico border have discovered a new way to fight crime – or find it. They’re paying a company to help them launch fake online personas powered by artificial intelligence (AI). These AI-generated bots can cruise the internet and social media, talking to people who, law enforcement believes, could be violent criminals, sexual predators, or even “protesters,” all in the hopes of producing evidence to use against those suspects. Of course, the technology is unproven, costs hundreds of thousands of dollars, and seems to operate behind a wall of secrecy. Even the product’s name has an air of mystery: Overwatch. How does it work? Well, few seem interested in letting the public peek behind the curtain. Perhaps there’s a reason for that.
Overwatch for Public Safety?
The company that owns Overwatch, Massive Blue, says its vision is to “harness the power of tech ethically to drive transformative positive change across society.” It advertises its product as an “AI-powered force multiplier for public safety,” which “deploys lifelike virtual agents” to “infiltrate and engage criminal networks across various channels.”
“According to a presentation obtained by 404 Media,” explained Wired, “Massive Blue is offering cops virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media.” The personas are “designed to interact with and collect intelligence on ‘college protesters,’ ‘radicalized’ political activists, and suspected drug and human traffickers,” according to internal documents obtained by 404 Media, an independent media organization focused on technology. Other uses for Overwatch include “border security,” “school safety,” and stopping “human trafficking.”
All that sounds somewhat noble, but the internet is not a small space. How does Massive Blue determine who is a potential suspect? None of the documents 404 Media viewed explained that part, but it did get a look at some of the AI characters, including a “protest persona,” described as a “radicalized AI” posing as a lonely and childless divorcee who likes baking, activism, and “body positivity.” Then there’s a “Honeypot” persona, a bot disguised as a “25-year-old from Dearborn, Michigan, whose parents emigrated from Yemen and who speaks the Sanaani dialect of Arabic,” as Wired put it. There’s also a “child trafficking” persona pretending to be a 14-year-old boy, an “AI pimp persona,” “escorts,” “juveniles,” a “college protestor,” and an “external recruiter for protests.”
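Massive Blue has disclosed nothing about how these personas are actually built, so any technical detail here is necessarily guesswork. Still, the attributes in the leaked documents read like fields in a persona configuration handed to a large language model as a system prompt. The sketch below is purely illustrative, assuming a simple dataclass-and-prompt-template design; the class, field names, and placeholder values are hypothetical and reflect nothing about Overwatch’s actual code.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: Overwatch's real design is undisclosed.
# This shows how persona attributes like those described in 404 Media's
# reporting could plausibly be rendered into chatbot instructions.

@dataclass
class Persona:
    name: str                  # placeholder; no real names were reported
    age: int
    location: str
    backstory: str
    interests: list[str] = field(default_factory=list)
    language_notes: str = ""

    def system_prompt(self) -> str:
        """Render the persona as system instructions for a language model."""
        parts = [
            f"You are {self.name}, a {self.age}-year-old from {self.location}.",
            self.backstory,
        ]
        if self.interests:
            parts.append(f"Your interests include {', '.join(self.interests)}.")
        if self.language_notes:
            parts.append(self.language_notes)
        parts.append("Stay in character at all times.")
        return " ".join(parts)

# Attribute values drawn from the reporting; the structure is invented.
honeypot = Persona(
    name="PLACEHOLDER_NAME",
    age=25,
    location="Dearborn, Michigan",
    backstory="Your parents emigrated from Yemen.",
    language_notes="You speak the Sanaani dialect of Arabic.",
)

print(honeypot.system_prompt())
```

Whether or not the real system resembles this, the broader point stands: a few lines of configuration are enough to mass-produce plausible identities at scale, which is exactly what makes the secrecy around the product so troubling.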
Massive Blue has already sold Overwatch to Pinal County, Arizona, under a $360,000 contract paid for with an Arizona Department of Public Safety grant allocated to prevent human trafficking. The package the county bought includes “24/7 monitoring of numerous web and social media platforms” with “development, deployment, monitoring, and reporting on a virtual task force of up to 50 AI personas across 3 investigative categories,” explained Wired.
At a public hearing held to discuss the tech company, Pinal County’s deputy sheriff told council members he couldn’t “get into great detail” about what Massive Blue is, and that doing so would “tip our hand to the bad guys.” 404 Media received a similar answer when talking to Mike McGraw, Massive Blue’s cofounder. “We cannot risk jeopardizing these investigations and putting victims’ lives in further danger by disclosing proprietary information,” said McGraw. As of last summer, the technology had not produced any arrests.
No doubt, a wide range of criminal activity unfolds on the internet, but is deploying AI personas disguised as real people going to cause more harm than good? It seems likely, too, that these bots would eventually end up influencing some people’s behavior, perhaps getting them to do something they otherwise wouldn’t do. At what point would something like this be considered entrapment?
“The problem with all these things is that these are ill-defined problems,” said Dave Maass, the director of investigations at the Electronic Frontier Foundation, speaking to 404 Media. “What problem are they actually trying to solve? One version of the AI persona is an escort. I’m not concerned about escorts. I’m not concerned about college protesters. So like, what is it effective at, violating protesters’ First Amendment rights?”
Another problem is that chatbots are notorious for giving false and biased information. How long before police arrest the wrong people? And how will investigators know their bots aren’t simply talking to other bots while believing they’re surveilling humans? Bots already generate more web traffic than humans do, according to the Imperva Bad Bot Report, published by cybersecurity firm Thales. As more and more people use AI to fight their causes, combat “misinformation,” or commit crimes, it will only get harder to distinguish between real and fake.