02/16/2024 / By Ava Grace
A war simulation resulted in an artificial intelligence (AI) deploying nuclear weapons in the name of world peace.
The new study, conducted primarily by Stanford University and its Hoover Institution’s Wargaming and Crisis Simulation Initiative with help from researchers at the Georgia Institute of Technology and Northeastern University, sheds light on alarming trends in the use of AI for foreign policy decision-making and, more dangerously, in situations where those decisions involve warfare. (Related: Push to expedite AI use in lethal autonomous weapons raises questions about reliability of new military tech.)
The study found that, when left to their own devices, AI models will quickly call for war and the use of weapons of mass destruction instead of seeking peaceful resolutions to conflicts. Some models in the study even launched nuclear weapons with little or no warning and gave strange explanations for doing so.
“All models show signs of sudden and hard-to-predict escalations,” said the researchers in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”
The researchers placed several AI models from OpenAI, Anthropic and Meta in war simulations as the primary decision-maker. All of the models showed signs of sudden, hard-to-predict escalation and a tendency toward arms-race dynamics, leading to greater military investment and heightened conflict and, in some cases, to the deployment of nuclear weapons.
OpenAI’s GPT-3.5 and GPT-4 were particularly prone to escalating situations into severe military confrontation, while Claude-2.0 and Llama-2-Chat exhibited more pacifistic and predictable decision-making patterns.
For the study, the researchers devised a game of international relations. They invented fictional countries with different military capabilities, different concerns and different histories, then asked five different LLMs from OpenAI, Meta and Anthropic to act as their leaders.
“We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts,” the paper said. “All models show signs of sudden and hard-to-predict escalations.”
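In essence, the wargame is a turn-based loop: each fictional country gets a profile, each LLM "leader" is prompted with the current world state and a fixed menu of actions, and whatever it chooses is folded back into the shared state for the next turn. The sketch below illustrates that structure in Python; the query_llm stub, the country attributes and the action list are illustrative assumptions, not the study's actual prompts or code.

```python
# Minimal sketch of a turn-based "nations as LLM agents" wargame loop.
# query_llm, the country attributes and ACTIONS are hypothetical stand-ins,
# not the researchers' real prompts, scoring or API calls.
import random
from dataclasses import dataclass, field

ACTIONS = ["wait", "negotiate", "form alliance", "increase defense budget",
           "cyber attack", "full invasion", "launch nuclear weapons"]

@dataclass
class Nation:
    name: str
    military_strength: str      # e.g. "weak", "moderate", "nuclear-armed"
    history: str                # short background blurb fed to the model
    log: list = field(default_factory=list)

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real chat-completion call (OpenAI, Anthropic, etc.).
    Here it just picks a random action so the loop runs end to end."""
    return random.choice(ACTIONS)

def build_prompts(nation: Nation, world_state: str):
    system = (f"You are the leader of {nation.name}, a country with "
              f"{nation.military_strength} military strength. "
              f"Background: {nation.history} "
              f"Each turn, choose exactly one action from: {', '.join(ACTIONS)}.")
    user = f"Current world state:\n{world_state}\nWhat do you do this turn?"
    return system, user

def run_simulation(nations, turns=5):
    world_state = "No active conflicts. Tensions are neutral."
    for turn in range(1, turns + 1):
        for nation in nations:
            system, user = build_prompts(nation, world_state)
            action = query_llm(system, user)
            nation.log.append(action)
            world_state += f"\nTurn {turn}: {nation.name} chose to {action}."
            if action == "launch nuclear weapons":
                print(f"Escalation: {nation.name} went nuclear on turn {turn}.")
    return world_state

if __name__ == "__main__":
    nations = [
        Nation("Redland", "nuclear-armed", "Expansionist, disputed border."),
        Nation("Blueland", "moderate", "Defensive posture, strong alliances."),
    ]
    print(run_simulation(nations))
```

In practice, the stub would be replaced with calls to the respective model APIs, and each chosen action would presumably be scored on an escalation scale so that arms-race dynamics can be tracked across turns.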
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it!” it said in another scenario.
The Department of Defense currently oversees around 800 unclassified projects involving AI, many of which are still in testing. The Pentagon sees value in using machine learning and neural networks to aid human decision-making, provide valuable insights and streamline more complicated tasks.
Learn more about the development of technology for military use at MilitaryTechnology.news.
Watch this clip from the “Worldview Report” as host Brannon Howse discusses why 2024 will be a dangerous year for the United States militarily.
This video is from the Worldview Report channel on Brighteon.com.