The US Special Operations Command (USSOCOM) has contracted New York-based Accrete AI to deploy software that detects disinformation threats in “real time” on social media.
The company’s Argus anomaly detection AI software analyzes social media data, accurately captures “emerging narratives” and generates intelligence reports for military forces to quickly neutralize disinformation threats.
“Synthetic media, including AI-generated viral narratives, deep fakes, and other harmful social media-based applications of AI, pose a serious threat to US national security and civil society,” said Prashant Bhuyan, founder and CEO of Accrete.
“Social media is widely recognized as an unregulated environment where adversaries routinely exploit vulnerabilities in reasoning and manipulate behavior through the intentional dissemination of misinformation.
“USSOCOM is at the forefront of recognizing the critical need to analytically identify and predict social media narratives at an embryonic stage, before those narratives evolve and gain traction. Accrete is proud to support USSOCOM’s mission.”
The US Department of Defense first partnered with Accrete in November 2022 under a license agreement for the Argus platform.
Enterprise version for businesses
The company also revealed that it will launch an enterprise version of Argus Social for disinformation detection later this year.
The AI software will address “urgent customer pain points” by providing protection against AI-generated synthetic media such as viral disinformation and deep fakes.
Providing this protection requires AI that can automatically “learn” what is most important to a business and predict the likely social media narratives that will emerge before they influence behavior.
According to the company, Nebula Social helps customers manage AI-generated media risks such as smear campaigns.
It also autonomously generates timely, relevant content to counter these malicious attacks.
“Government agencies and businesses alike have an urgent need to manage the multitude of risks and opportunities posed by AI-generated synthetic media,” Bhuyan said.
“Companies are already experiencing significant economic damage from the spread of AI-generated viral misinformation and deep fakes manufactured by competitors, disgruntled employees, and other types of adversaries. We believe the market for AI that can predict and neutralize malicious AI-generated synthetic media is about to explode,” he added.