
Artificial Intelligence and the Future of Warfare

Consider an alternate history for the war in Ukraine. Intrepid units of the Ukrainian army strive to destroy Russian supply convoys. But rather than relying on sporadic air cover, the convoys travel under cheap drone cover. Armed drones carrying relatively simple artificial intelligence (AI) identify human forms and target them with missiles. The tactic kills many innocent civilians, because the drones strike almost anyone close enough to the convoys to threaten them with anti-tank weapons. When the Ukrainians try to respond to the setback with their own drones, they are overwhelmed by the more numerous Russian drones.


It is increasingly plausible that this scenario will play out in the next great war. In fact, the future of AI in wartime is already here, even if it has not yet arrived in Ukraine. The United States, China, Russia, Britain, Israel, and Turkey are all aggressively designing AI-enabled weapons that can shoot to kill without a human in the decision-making loop. These include fleets of ghost ships, tanks and ground vehicles, AI-enabled guided missiles, and, most importantly, aircraft. Russia is even developing autonomous nuclear weapons; the 2018 U.S. Nuclear Posture Review said Russia was developing a “new intercontinental, nuclear-armed, nuclear-powered autonomous submarine torpedo.” Lethal autonomous weapons (LAWs) have already been used in offensive operations against human combatants. According to a March 2021 UN Security Council report, a Turkish-made Kargu-2 drone was used in Libya to mount autonomous attacks against human targets, tracking retreating military and logistics convoys and attacking them “without requiring data connectivity between the operator and the munition.”

In reality, autonomous weapons that kill without an active human decision are hundreds of years old. Land and naval mines have been in use since at least the 1700s. Missile-defense systems such as the Patriot and Phalanx can operate autonomously to attack enemy aircraft or surface ships. And machine guns that automatically fire at targets in patrolled combat zones have been mounted on armored vehicles.

That said, these systems have largely been defensive in nature. The Rubicon the world is now crossing would allow offensive weapons, with enhanced intelligence for more complex decisions, to play a major role in conflict. The result would be a battlefield where robots and autonomous systems outnumber human soldiers.

Russian drone (Image credit: The National Interest)

Why Governments Love Killer Robots

The appeal of killer robots and autonomous systems is obvious. Using them to do the dirty work means valuable soldiers don’t have to die and expensively trained pilots don’t have to risk costly equipment. Robots don’t need toilet breaks or water, and they don’t miss a shot because they sneezed or flinched. Robots make mistakes, but so do humans. Proponents of offensive AI assume that robot errors will be more predictable, ignoring the increasingly unpredictable behavior that arises from the emergent properties of complex systems. Finally, robots can be trained instantly, and replacing them is far faster and cheaper than replacing human fighters.

More importantly, the political cost of using robots and LAWs is much lower. There would be no images of captured soldiers or burned corpses, no pilots kneeling in a snowy field begging for mercy. This is why war will probably continue to become more distant and faceless. Weaponized AI is simply the next logical step on that path: it allows robotic weapons to operate at a larger scale and to react without the need for human intervention. The military rationale is crystal clear: lacking AI capabilities will put an army at a serious disadvantage. Just as software is eating the business world, it is eating the military world. AI is the spearhead of that software, leveling the playing field and allowing battlefield systems to evolve at the same speed as popular consumer products. Choosing not to use AI on the battlefield will feel like a bad business decision, even if it carries tangible moral repercussions.

The Benefits and Risks of Fewer Humans in the Loop

As we explained in our book, The Driver in the Driverless Car, proponents of autonomous lethal force argue that AI-controlled robots and drones could prove far more moral than their human counterparts. They claim that a robot programmed not to shoot women or children would not make mistakes under the pressure of battle. Furthermore, they argue that programmatic logic has an admirable ability to reduce central moral problems to binary decisions. For example, an AI system with enhanced vision could instantly decide not to fire at a vehicle painted with a red cross as it rushes toward a checkpoint.

These lines of thought are essentially counterfactuals. Are humans more moral if they can program robots to avoid the weaknesses of the human psyche that cause experienced soldiers to lose their sense of sanity and morality in the heat of battle? When it is difficult to discern whether an adversary follows any moral compass at all, as in the case of ISIS, is it better to rely on the cold logic of a robot warrior than on an emotional human being? And what if a non-state terrorist organization develops deadly robots that give it an edge on the battlefield? Is that a risk the world should be willing to take by developing them in the first place?

There are clear and unacceptable risks to this type of combat, particularly where robots operate largely autonomously in an environment containing both soldiers and civilians. Take the example of Russian drones flying air cover and destroying anything that moves on the ground: the collateral damage and the deaths of innocent non-combatants would be horrific. On several occasions, including a famous 1979 incident in which a human inadvertently triggered alarms warning of a Soviet nuclear strike, automated systems gave incorrect information that human operators debunked just in time to avoid a nuclear exchange. With AI, decisions are made far too quickly for humans to correct them. As a result, catastrophic errors are inevitable.

Nor should we expect LAWs to remain exclusive to nation-states. Because their manufacturing costs fall along a Moore’s Law-like curve, they will quickly enter the arsenals of sophisticated non-state actors. Affordable drones can be armed with off-the-shelf weapons, and their sensors can be linked to local or remote artificial intelligence systems that identify and target human-like forms.

We are currently at a crossroads. The horrific brutality of Russia’s invasion of Ukraine has demonstrated once again that even great powers can set aside morality for national narratives that suit autocrats and compromised political classes. The next great war will likely be won or lost in part through the clever use of AI systems. How should the world deal with this imminent threat?

While a complete ban on AI-based weapons technologies would have been ideal, it is now impossible and would be counterproductive. A ban would, for example, handcuff NATO, the United States, and Japan in future battles and leave their soldiers vulnerable. A ban on applying AI systems to weapons of mass destruction is more realistic. Some may call that a distinction without a difference, but the world has succeeded before in limiting weapons that can have global impact. Either way, we have crossed the Rubicon and have little choice in a world where crazies like Putin attack innocent civilians with thermobaric rockets and threaten nuclear escalation.


Vivek Wadhwa and Alex Salkever are the authors of “The Driver in the Driverless Car” and “From Incremental to Exponential: How Big Companies Can See the Future and Rethink Innovation”. Their work explains how advanced technologies can be used for both good and ill, to solve humanity’s grand challenges or to destroy it.

This article first appeared in The National Interest.