Future Frontiers of Warfare: The Promises and Perils of Weaponized A.I.

Concept art of computer code. Photo from Pixabay.


On July 28, 2015, over 4,000 leading researchers signed an open letter entitled “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” presented at the International Joint Conference on Artificial Intelligence. The signatories, who included Elon Musk, Stephen Hawking, Noam Chomsky, and Steve Wozniak, advocated for a ban on offensive autonomous weapon systems. They feared that automated weapons would become the “Kalashnikovs of tomorrow,” threatening needless violence and a destabilized balance of power. Since 2015, artificial intelligence research has continued to dominate the technical fields, offering ever more applications not only to the U.S. military but also to the militaries of the United States’ biggest foreign competitors, China and Russia. Today, the U.S. faces a difficult choice: heed its scientists and refrain from integrating AI into the military, risking handing China and Russia a technological edge, or ignore them in the name of national security and the preservation of U.S. military dominance.

To understand the potential utility, and potential harm, that artificial intelligence offers with respect to governance, it helps to understand AI’s capabilities and its history. The inception of AI is generally dated to the 1956 Dartmouth Summer Research Project on Artificial Intelligence, where host John McCarthy first defined AI as “the science and engineering of making intelligent machines.” The summer project brought together leading computer scientists, cryptographers, engineers, and mathematicians to discuss the future of computing and the role that AI would, and should, play in it. At the conference, the first AI-based computer program, one capable of proving mathematical theorems, was unveiled. Surprisingly, the program received little attention, but its success proved that artificial intelligence in its nascent form was indeed possible.

With increases in available computing power and decreases in the cost of producing computers, AI research advanced rapidly in the years following the historic summer conference. Between 1950 and 1970, Marvin Minsky and Dean Edmonds engineered the first rudimentary neural network capable of storing short- and long-term information, Alexey Ivakhnenko and Valentin Lapa introduced the first multi-layered neural network, and Joseph Weizenbaum created ELIZA, the first natural-language processing program able to simulate English conversation. These early successes captured the attention of the U.S. government, leading the Defense Advanced Research Projects Agency (DARPA) to become the primary sponsor of AI research in the United States. With a close relationship established between DARPA and AI researchers, the U.S. military saw its first real applications of AI technology in the 1970s. Advancements included pattern recognition programs used to identify enemy missiles, U.S. missiles engineered to autonomously course-correct, and programs attempting to translate Arabic in real time. Throughout the rest of the 20th century, developments in artificial intelligence progressed steadily but repeatedly hit ceilings imposed by inadequate computing power and efficiency. Once the exponential growth described by Moore’s Law pushed available computing power past the requirements of truly ambitious projects, however, the next two decades showed what AI research could achieve when given sufficient resources.

Artificial Intelligence in 2021: 

Today, AI assistants like Amazon Alexa and Google Home are found in a quarter of American homes, the world’s best chess and Go players sit in the pockets of 294 million Americans, machine learning analytics underpin virtually all modern software, and OpenAI’s GPT-3 language model can produce text that is difficult to distinguish from human writing. Driven by the profitability of and public interest in AI, private companies and universities have taken the lead in research, leaving the U.S. government struggling to keep up in applying these innovations. This has pushed the U.S. Department of Defense to pursue partnerships with private tech companies. Perhaps the most famous example is Project Maven, a joint initiative between the U.S. government and Google aimed at using Google’s video analysis systems to autonomously sift through captured drone footage and highlight key moments. When the project became public, however, thousands of Google employees signed a petition protesting Google’s involvement in “the business of war,” prompting Google to withdraw from the partnership. Even without the help of private companies, the U.S. military now uses AI-based decision-support systems such as the “Commander’s Virtual Staff” program, operates the Aegis Ballistic Missile Defense System, which can function autonomously, and has stood up an Algorithmic Warfare Cross-Functional Team researching the use of AI to exploit foreign networks and even pilot unmanned aircraft.

It’s clear that engineers at Google and AI researchers alike don’t want their field co-opted by warfare, but the unfortunate truth is that they may not have a choice. Compared to Russia and China, the United States has shown only moderate interest in developing AI for warfare. Vladimir Putin famously told a group of students, “Whoever becomes the leader in [artificial intelligence] will become the ruler of the world.” That sentiment seems to resonate most within the Chinese government, which has set a national goal of becoming the world’s leader in artificial intelligence technology by 2030. China’s military has wholeheartedly embraced AI: as of 2018, China is the world’s largest exporter of unmanned combat drones, the largest user of facial recognition AI, and an active sponsor, in both the private and military spheres, of autonomous submarines, aircraft, and combat robots. Russia is similarly engaged in military AI research, sponsoring its own array of autonomous combat vehicles to offset population constraints, cyber warfare initiatives designed to politically destabilize countries through misinformation, deep fakes (synthetic faces overlaid on real people in video), and decision-support systems. Given the fervent interest in militarized AI that China and Russia have demonstrated, the U.S. may be forced to continue researching military applications of AI in order to preserve its military advantage and protect the interests of American citizens. All three of the world’s major powers are seriously pursuing militarized artificial intelligence, and the military benefits are obvious. But what are the potential drawbacks?

Complications with Artificial Intelligence:

The most widely shared fear about introducing autonomy into weapon systems is that they will attack targets without warrant or approval, but there are many more problems to worry about. The public has little trust in handing computers the authority to decide to end a person’s life, engineers distrust the ability of AI to accurately process data, and security analysts are wary of the complexity AI would add to military cooperation. These concerns are not unfounded. In 1991, the Phalanx Close-In Weapon System aboard the USS Jarrett, operating autonomously, fired on the friendly battleship USS Missouri after locking onto chaff the Missouri had launched as a decoy. In 2003, human operators supervising semi-autonomous Patriot missile batteries trusted the systems’ target recommendations too readily and fired on a Royal Air Force plane, killing its crew. Because of such mishaps, there have been repeated discussions about banning fully autonomous weapon systems, though Russia, China, and the United States all doubt one another’s commitment to such a promise.

Moreover, there are intrinsic complexities in applying AI to warfare that make it potentially unstable. Current machine learning methods train computers to solve single, narrowly defined tasks; when multiple objectives are at play, secondary objectives are often ignored or left unoptimized. If an AI’s task is to win a battle, then objectives like gaining territory, stalling the enemy, and gathering intelligence could easily be cast aside in favor of a crushing victory, which may mean more total casualties. Additionally, military forces almost never have perfect data about an enemy’s capabilities, location, or size. Militaries will therefore likely train AI on inaccurate data, and the resulting systems may issue confident recommendations without conveying the underlying uncertainty. Lastly, MIT’s Dr. Erik Lin-Greenberg argues that integrating AI into militaries could hamper alliance cooperation and coordination. As he writes, “By increasing the speed of warfare, AI could decrease the time leaders, from the tactical to strategic levels, have to debate policies and make decisions,” ultimately threatening an alliance’s decision-making cohesion.
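To make the single-objective concern concrete, consider a minimal, purely hypothetical sketch in Python. The plan names, numbers, and weights below are invented for illustration and do not describe any real system; the point is that an optimizer scored only on the probability of victory will always pick the most destructive plan, while one that also weighs casualties and intelligence can prefer a more restrained option.

```python
# Hypothetical illustration only: invented plans and numbers, not a real system.
# Each plan maps to (probability of victory, expected casualties, intel gained).
candidate_plans = {
    "overwhelming_assault": (0.95, 900, 10),
    "measured_advance":     (0.85, 300, 60),
    "siege_and_surveil":    (0.70, 100, 90),
}

def single_objective_score(plan):
    """Score only the primary objective: probability of victory."""
    p_win, casualties, intel = candidate_plans[plan]
    return p_win  # casualties and intelligence never enter the score

def multi_objective_score(plan, w_casualties=0.0005, w_intel=0.001):
    """Scalarized trade-off that also rewards low casualties and intel gains."""
    p_win, casualties, intel = candidate_plans[plan]
    return p_win - w_casualties * casualties + w_intel * intel

print(max(candidate_plans, key=single_objective_score))  # overwhelming_assault
print(max(candidate_plans, key=multi_objective_score))   # measured_advance
```

The particular weights matter less than the structural point: any objective the scoring function omits is simply invisible to the optimizer, no matter how much it matters to commanders.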

AI clearly promises a new era of military engagement, one that may bring fewer human casualties, more efficient decision-making, and shorter battles, but the vast uncertainties inherent to AI make its military integration questionable. In the 2015 open letter, the researchers warn that a “global arms race” is inevitable if militaries continue to research AI weapon systems. Foreign powers, however, appear to have fewer ethical reservations about such an arms race and the dangers of AI, putting pressure on the United States to keep pace. Facing growing AI-enabled military threats from abroad and the potential loss of its military advantage, the United States should continue its own research; but for the sake of humanity, it should, at every step of the way, attempt to negotiate a binding international prohibition on autonomous weapon systems in war.

Sebastian Preising is a Junior Editor at CPR and a sophomore who is studying Political Science and Mathematics. 
