The U.S. government has begun using artificial intelligence to detect ‘thought criminals’ on social media and report them to law enforcement for further action. Biden’s Customs and Border Protection (CBP), under the umbrella of the Department of Homeland Security (DHS),
has been partnering with an AI firm called Fivecast to deploy social media surveillance software that detects “problematic” thoughts and emotions in users’ posts and reports them to police for possible punishment.
Through a series of FOIA requests, independent outlet 404 Media uncovered various Fivecast marketing documents elaborating on its software’s utility for law enforcement:
Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars worth of software from a company that uses artificial intelligence to detect “sentiment and emotion” in online posts, according to a cache of documents…
CBP told 404 Media it is using technology to analyze open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel. In this case, the specific company called Fivecast also offers “AI-enabled” object recognition in images and video, and detection of “risk terms and phrases” across multiple languages, according to one of the documents.
Pjmedia.com reports: Fivecast, according to its mission statement, is “used and trusted by leading defense, national security, law enforcement, corporate security and financial intelligence organizations around the world” and “deploys unique data collection and AI-enabled analytics to solve the most complex intelligence challenges.” It claims to work with the intelligence agencies of all five nations that make up the so-called “Five Eyes” — the United Kingdom, United States, New Zealand, Australia, and Canada.
Among the many red flags Fivecast claims its software can detect are the emotions of the social media user. Charts contained in the marketing materials uncovered by 404 Media show metrics for various emotions such as “sadness,” “fear,” “anger,” and “disgust” on social media over time. “One chart shows peaks of anger and disgust throughout an early 2020 timeframe of a target, for example,” 404 Media reports.
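For readers wondering how an “emotion over time” chart like that could even be produced, here is a minimal, purely illustrative Python sketch. Everything in it is invented for the example: Fivecast’s actual models are proprietary and undisclosed, and a real system would use trained classifiers rather than the toy keyword lexicon used here. The sketch only demonstrates the general shape of the technique, scoring each post against per-emotion word lists and bucketing the scores by month.

```python
from collections import defaultdict
from datetime import datetime

# Toy keyword lexicon standing in for a real emotion classifier.
# Entirely hypothetical; real systems use trained models, not word lists.
LEXICON = {
    "anger":   {"furious", "outraged", "hate"},
    "disgust": {"disgusting", "vile", "gross"},
    "fear":    {"terrified", "scared", "afraid"},
    "sadness": {"heartbroken", "miserable", "grieving"},
}

def score_post(text: str) -> dict[str, int]:
    """Count lexicon hits per emotion in a single post."""
    words = set(text.lower().split())
    return {emotion: len(words & keywords) for emotion, keywords in LEXICON.items()}

def emotion_timeline(posts: list[tuple[str, str]]) -> dict[str, dict[str, int]]:
    """Aggregate per-emotion hit counts into monthly buckets (YYYY-MM)."""
    timeline: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for timestamp, text in posts:
        month = datetime.fromisoformat(timestamp).strftime("%Y-%m")
        for emotion, hits in score_post(text).items():
            timeline[month][emotion] += hits
    return timeline

# Hypothetical (timestamp, post text) pairs scraped from a target's account.
posts = [
    ("2020-01-15T09:30:00", "This policy is disgusting and I am furious about it"),
    ("2020-02-02T18:05:00", "Honestly terrified of where this is heading"),
]
for month, scores in sorted(emotion_timeline(posts).items()):
    print(month, dict(scores))
```

Plotted over months, the resulting per-emotion counts would produce exactly the kind of “peaks of anger and disgust” curve the marketing materials describe, which is also why such charts should be read skeptically: the peaks reflect whatever the underlying model happens to count, not the speaker’s actual state of mind.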
The technical difficulty of AI assessing human emotion aside, this would theoretically open the door for the government to surveil and censor not just the substance of speech, but also the alleged emotion behind that speech (which could potentially at some point be admissible in court to impugn the intent or motive of defendants). It’s almost impossible to overstate the dystopian applications of this technology, which for obvious reasons governments around the world are eager to adopt.