What dangers do advanced artificial intelligence models pose in the wrong hands?

The administration of President Joe Biden is preparing to open a new front in the effort to protect US artificial intelligence (AI) from China and Russia. Its initial plans are to put safeguards around the most advanced AI models.

Government and private sector researchers worry that US adversaries could use these models – which mine vast amounts of text and images to gather information and produce content – to carry out aggressive cyber attacks, or even to create powerful biological weapons.

Below you can read some of the threats from AI:

Manipulated materials and misinformation

Manipulated material — fabricated videos that look real, created by AI algorithms from footage abounding online — is surfacing on social media and blurring fact and fiction in the polarized world of American politics.

While such artificial videos have been around for a few years now, they’ve become much more powerful in the last year thanks to a slew of new “generative AI” tools like Midjourney that make it easier and less expensive to create persuasive manipulated materials.

AI image-generation tools from companies such as OpenAI and Microsoft can be used to produce images that promote disinformation about elections and voting, despite each company having policies against creating misleading content, researchers said in a report published in March.

Some disinformation campaigns exploit AI's ability to mimic genuine news articles as a way to spread false information. Major social media platforms, such as Facebook, X and YouTube, have made efforts to ban and remove such manipulated material.

For example, last year a Chinese government-controlled news site using a generative AI platform published a previously circulated false claim that the United States runs a laboratory in Kazakhstan to develop biological weapons for use against China, the US Department of Homeland Security said in its 2024 homeland threat assessment.

White House national security adviser Jake Sullivan, speaking at an AI event Wednesday in Washington, said the problem cannot be easily solved because it combines AI's capabilities with “the intent of state and non-state actors to use disinformation, subvert democracies, advance propaganda and change world perception”.

Biological weapons

The US intelligence community, research groups, and academics are deeply concerned about the dangers posed by foreign bad actors gaining access to AI capabilities. Researchers at Gryphon Scientific and Rand Corporation noted that advanced AI models could provide information that could be useful for the creation of biological weapons.

Gryphon analyzed how large language models (LLMs) – computer programs that draw on large amounts of text to generate answers to questions – could be used by hostile actors to cause harm in the life sciences domain, and found that “they can provide information that can assist a malicious actor in creating a biological weapon, providing useful, accurate and detailed information every step of the way.”

The researchers found, for example, that an LLM could provide PhD-level knowledge for troubleshooting problems when working with a virus capable of spreading widely.

Their research also found that LLMs can help plan and execute a biological attack.

Cyber weapons

The US Department of Homeland Security (DHS) said that cyber actors would likely use AI to “develop new tools” to “enable larger, faster, more efficient and more obfuscated cyber attacks” against vital infrastructure, including pipelines and railways.

China and other adversaries are developing AI technology that could undermine US cyber defenses, including generative AI programs that support malware attacks, DHS said.

Microsoft said in a report in February that it had tracked hacking groups linked to the Chinese and North Korean governments, as well as to Russia’s military intelligence and Iran’s Revolutionary Guard, that were trying to perfect their hacking campaigns using LLMs.

New efforts to address threats

A bipartisan group of lawmakers introduced a bill late Wednesday that would make it easier for the Biden administration to impose export controls on AI models, in an effort to protect valuable American technology from foreign bad actors.

The bill — sponsored by House Republicans Michael McCaul and John Molenaar and Democrats Raja Krishnamoorthi and Susan Wild — would also give the Commerce Department authority to bar Americans from working with foreigners to develop AI systems that pose risks to US national security.
