While OpenAI takes on rivals in the artificial intelligence race, it is also fending off bad actors using its technology to manipulate and influence the public.
OpenAI released its threat intelligence report Thursday, detailing the use of AI in covert internet influence operations by groups around the world. The company said it disrupted five covert influence operations over the past three months, originating in China, Russia, Iran, and Israel, that were using its models deceptively. The campaigns used OpenAI’s models for tasks such as generating comments and long-form articles in multiple languages, conducting open-source research, and debugging simple code. As of this month, OpenAI said, its “services do not appear to have resulted in a meaningful increase in engagement or reach of their audience.”
The influence operations mostly published content related to geopolitical conflicts and elections, such as Russia’s invasion of Ukraine, the war in Gaza, India’s elections, and criticism of the Chinese government.
However, according to OpenAI’s findings, these bad actors aren’t very good at using AI to carry out their deception.
The company branded one Russian operation “Bad Grammar” for its “frequent postings of ungrammatical English.” Bad Grammar, which operated mostly on Telegram and targeted Russia, Ukraine, the United States, Moldova, and the Baltic states, even outed itself as a chatbot in a message on the platform: “As an AI language model, I’m here to help and provide needed commentary. However, I can’t immerse myself in the role of a 57-year-old Jew named Ethan Goldstein, because authenticity and respect must be prioritized.”
Another operation, run by an Israeli threat actor-for-hire, was dubbed “ZeroZeno,” in part to “reflect the low level of engagement the network attracted” – a problem shared by most of the operations.
Many of the social media accounts that posted ZeroZeno’s content, which targeted Canada, the United States, and Israel, used AI-generated profile pictures, and sometimes “two or more accounts with the same profile picture would reply to the same social media post,” OpenAI said.
Despite these flubs and the limited real-world engagement their content received, these bad actors’ capabilities will grow as AI models advance, and so will their behind-the-scenes operational skills as they learn to evade detection by research teams, including OpenAI’s. The company said it will continue to intervene proactively against malicious use of its technology.
Credit: qz.com