The New Cold War: US Urged to Form ‘Manhattan Project’ for AGI

A commission says China is catching up with the US in AI capabilities and recommends a public-private partnership to develop artificial general intelligence as a national security priority.

Shane Snider, Senior Writer, InformationWeek

November 21, 2024


A bipartisan US congressional group this week released a report urging a "Manhattan Project"-style effort to develop AI capable of outthinking humans before China can win the AI arms race.

The US-China Economic and Security Review Commission outlined the challenges and threats facing the US as powerful AI systems continue to quickly proliferate. The group calls for the government to fund and collaborate with private tech firms to quickly develop artificial general intelligence (AGI).

The "Manhattan Project" was the historic collaboration between the government and the private sector during World War II that culminated in the development of the first atomic bombs, which the US infamously unleashed on Japan. The subsequent proliferation of nuclear weapons led to an arms race and a policy of "mutually assured destruction" that has so far deterred wartime use, but it also sparked the Cold War between the United States and the Soviet Union.

While the Cold War ultimately ended with the Soviet Union's collapse in 1991, the nuclear stalemate created by that arms buildup remains.

A new stalemate may be brewing as superpowers race to develop AGI, which ethicists warn could present an existential threat to humanity. Many have likened such a race to the plot of the "Terminator" films, in which the fictional company Cyberdyne Systems builds a defense AI for the US government that ultimately triggers a nuclear catastrophe.


The commission's report doesn't sugarcoat the stakes. "The United States is locked in a long-term strategic competition with China to shape the rapidly evolving global technological landscape," the report states. Emerging technologies like AI could "alter the character of warfare," and the country that wins the race would "tip the balance of power in its favor and reap economic benefits far into the 21st century."

AI Effort in China Expands

China's State Council in 2017 unveiled its "New Artificial Intelligence Development Plan," aiming to make the country the global leader in AI by 2030. The US still holds an advantage, with more than 9,500 AI companies to China's nearly 2,000. Private investment in the US also dwarfs China's, at $605 billion compared with $86 billion, according to a report from the nonprofit Information Technology & Innovation Foundation.

China's government, for its part, has poured a total of $184 million into AI research in areas including facial recognition, natural language processing, machine learning, deep learning, neural networks, robotics, automation, computer vision, data science, and cognitive computing.


While four US large language models (LLMs) topped performance charts in April 2024, by June only OpenAI's GPT-4o and Anthropic's Claude 3.5 remained at the top. The next five models all came from China-backed companies. "The gap between the leading models from the US industry leaders and those developed by China's foremost tech giants and start-ups is quickly closing," the report says.

Where the US Should Focus

The report details the areas where the US currently holds an advantage and that could make the biggest impact on the AI arms race, including advanced semiconductors, compute and cloud, AI models, and data. But China, the report contends, is closing the gap by subsidizing emerging technologies.

The group recommends prioritizing AI development for national defense, with contracting authority given to the executive branch. The commission urges Congress to establish and fund the program, with the goal of winning the AGI development race.

The report also recommends banning certain technologies controlled by China, including autonomous humanoid robots and products that could affect critical infrastructure. "US policy has begun to shift to recognize the importance of competition with China over these critical technologies," the report states.


Manoj Saxena, CEO and founder of the Responsible AI Institute and an InformationWeek Insight Circle member, says the power of AGI should not be underestimated as countries race toward innovation.

“One issue is rushing to develop AGI just to win a tech race and not understanding the unintended consequences that these AI systems could create,” he says. “…it could create a situation where we cannot control things, because we are accelerating without understanding what the AGI win would look like.”

Saxena says the AGI race may create the need for something like another Geneva Convention, referring to the international war treaties and humanitarian rules that were greatly expanded after World War II.

But Saxena says a public-private collaboration may lead to better solutions. “As a country, we’re going to get not just the best and brightest minds working on this, most of which are in the private sector, but we will also get wider perspectives on ethical issues and potential harm and unintended consequences.”

An AI Disaster in the Making?

Small actors have limited access to the tightly controlled materials needed to make a nuclear weapon. AI, on the other hand, enjoys a relatively open and democratized environment. Ethicists worry that ease of access to powerful and potentially dangerous systems may widen the threat landscape.

RAI Institute’s Saxena says weaponization of AI is already occurring, and it might take a catastrophic event to push all parties to the table. “I think there is going to be some massive issues around AI going rogue, around autonomous weapon attacks that go out of control somewhere … Unfortunately, civilization progresses through a combination of regulations, enforcement, and disasters.”

But in the case of AI, “regulations are far behind,” he says. “Enforcements are also far behind, and it's more likely than not that there will be some disasters … that will make us wake up and have some type of framework to limit these things.”

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.

