How AI Will Change Our Wars

The world’s major powers are in the middle of an artificial intelligence (AI) arms race. Over the next several years, China expects to deploy a fleet of unmanned submarines in contested waters like the South China Sea. Russia has tested its robotic tank on the battlefield in Syria and is reportedly working on developing autonomous nuclear submarines. For its part, the United States is in the process of testing autonomous swarming drones.

This development has major implications—not just for how wars are fought, but also for the future of American foreign policy. As AI grows into an essential part of modern warfare, it will become more difficult for the United States to exit wars and avoid new ones. In short, the use of AI could very well keep our military trapped in forever wars.

It’s not difficult to see the appeal of AI in conflict. Robots are far more capable of processing large amounts of data than humans are. At a time when the speed of warfare is increasing, AI systems offer quicker reaction time, allowing militaries to lessen their reliance on human personnel—ultimately limiting bloodshed and lowering costs.

Unfortunately, there’s another side to this coin. AI might also inspire unwarranted confidence among top military brass. When humans are no longer doing the fighting, it’s easy to conclude that the costs of war are small. Of course, that isn’t true.

The geopolitical ramifications of wars are more difficult to measure than their casualties, but they’re no less significant. After all, the past 20 years of foreign interventions have proven just that. They’ve weakened America’s security and global standing while turning entire countries into sanctuaries for jihadist groups like ISIS, contributing to a mass influx of refugees into Europe, and convincing North Korea that its survival depends on nuclear weapons. Cleaning up this mess could take generations.

Yes, these costs are very real, but they’re also too far removed from the lives of everyday Americans to be truly felt. Once machines replace humans on the battlefield, it will become even easier for hawkish politicians to sell the public on unwinnable and counterproductive wars.

A burdensome conflict, so the thinking goes, is a conflict the public actually thinks about. But that’s not the kind of war we’ll fight if the machinery of conflict becomes smooth as silk. Earlier this month, the Pentagon announced that it was introducing JEDI, an AI-based cloud system, onto the Afghanistan battlefield. For now, the U.S. military says it intends to use AI in Afghanistan primarily to store massive amounts of data, but that could change as the technology advances. And once the burden of Washington’s conflicts is shifted onto machines, America’s longest war will become even easier to maintain—and less likely to reach a conclusion anytime soon.

And if AI turns out to be error-prone, that’s only further cause for concern. AI systems are inflexible. They excel at performing certain routine tasks and recognizing specific patterns, but they tend to fall short when they encounter circumstances they weren’t programmed for. As Paul Scharre points out in Foreign Policy, AI fares very poorly when it meets scenarios that differ from its training models. After all, one of the main reasons self-driving cars have failed to take off is that they struggle to deal with all the unexpected developments the road inevitably throws their way.
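To see why this brittleness matters, consider a toy sketch. The scenario, labels, and numbers below are entirely hypothetical; the point is only that a simple pattern-matching system, trained on one set of conditions, will still produce a confident-looking answer when the world drifts far outside anything it has seen.

```python
# Illustrative sketch (all labels and numbers are made up): a nearest-centroid
# "classifier" trained on one distribution of sensor readings quietly fails
# when conditions shift, yet still reports an answer.

def train_centroids(samples):
    """samples: dict mapping label -> list of 1-D feature values."""
    return {label: sum(vals) / len(vals) for label, vals in samples.items()}

def classify(centroids, x):
    # Always returns *some* label, however far x is from anything seen
    # in training -- there is no "I don't recognize this" option.
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Training data: clear-weather radar signatures (hypothetical units).
training = {"bird": [1.0, 1.2, 0.9], "aircraft": [9.8, 10.1, 10.4]}
centroids = train_centroids(training)

print(classify(centroids, 1.1))    # in-distribution reading: "bird"
print(classify(centroids, 500.0))  # far outside the training data, but the
                                   # system still confidently says "aircraft"
```

The system never signals that the second reading resembles nothing it was trained on; it simply picks the nearest label. That is the shape of the failure the self-driving-car example points to.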

If AI systems have a hard time driving on busy city streets, how will they ever navigate the fog of war? Battlefields and geopolitical hotspots are, by nature, characterized by great uncertainty and constant change. Preparing a machine for such an environment is especially difficult, because information about the enemy’s weaponry and tactics is always limited. Under such circumstances, it’s easy to imagine an AI system misreading the situation—with disastrous consequences.

On September 26, 1983, the world came dangerously close to nuclear armageddon. Early that morning, the Soviet Union’s nuclear early warning system reported an incoming attack of five intercontinental ballistic missiles launched from the United States. Thankfully, the Soviet officer monitoring the system, Lieutenant Colonel Stanislav Petrov, knew that an actual American nuclear attack would involve far more than just five missiles and deduced that the signal was a false alarm. Consequently, he chose not to report the signal to his superiors, thereby averting a Soviet counterstrike that could have led to an all-out nuclear war.

Could a machine have shown such sound judgment in a similar scenario? Based on what we have seen thus far, the answer is no. And what a costly mistake that would have been.
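The contrast can be caricatured in a few lines of code. Everything here is hypothetical—the rule names, the threshold, the decision labels—but it captures the difference between a rigid programmed response and the contextual reasoning the officer actually applied.

```python
# Hypothetical sketch of the 1983 scenario: a rigid automated rule versus
# contextual human judgment. All names and numbers are illustrative.

def rigid_autonomous_response(detected_missiles):
    # A naive programmed rule: any detected launch triggers escalation.
    return "ESCALATE" if detected_missiles > 0 else "STAND_DOWN"

def human_judgment(detected_missiles, expected_first_strike=100):
    # The officer's reasoning: a real first strike would involve far more
    # than a handful of missiles, so a tiny count looks like a sensor fault.
    if 0 < detected_missiles < expected_first_strike:
        return "REPORT_FALSE_ALARM"
    return "ESCALATE" if detected_missiles else "STAND_DOWN"

print(rigid_autonomous_response(5))  # ESCALATE
print(human_judgment(5))             # REPORT_FALSE_ALARM
```

The rigid rule does exactly what it was programmed to do—and that is precisely the problem. Context that was never written into the rule cannot rescue it.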

As AI takes on a bigger role in the military, it will be asked to deal with sensitive and high-stakes situations like the 1983 nuclear crisis. Given its track record so far, that prospect should worry us. If we rush to embrace AI, we will hand over the power to make decisions about war and peace to rigid algorithms. And that’s exactly the way our “forever wars” will become truly endless.


