Artificial intelligence makes decisions far too quickly for humans to correct them.

Consider an alternate history for the Ukraine conflict. Ukrainian Army forces make a valiant attempt to intercept Russian supply convoys. Instead of relying on occasional air cover, the Russian convoys move beneath a blanket of low-cost drones. The armed drones carry artificial intelligence (AI) that can recognize human shapes and attack them with missiles. As the drones kill practically everyone close enough to threaten the convoys with anti-tank missiles, the tactic claims many innocent lives. The Ukrainians try to respond with drones of their own, but they are outnumbered by the far larger Russian fleet.

This scenario is increasingly likely in the next great conflict. In truth, the future of artificial intelligence in warfare is already here, even if it is not yet being used in Ukraine. The United States, China, Russia, the United Kingdom, Israel, and Turkey are all working hard to develop AI-enabled weapons that can fire to kill with no human involved in the decision. These include fleets of ghost ships, land-based tanks and vehicles, AI-enabled guided missiles, and, most notably, aircraft. Russia is even researching autonomous nuclear weapons; according to the 2018 U.S. Nuclear Posture Review, Russia is working on a “new intercontinental, nuclear-armed, nuclear-powered, submerged autonomous torpedo.” Lethal autonomous weapons (LAWs) have already been employed to strike human soldiers in offensive operations. According to a UN Security Council report published in March 2021, a Turkish Kargu-2 drone launched autonomous attacks on human targets in Libya, tracking retreating logistical and military convoys and “attacking targets without needing data link between the operator and the munition.”

In fact, autonomous weapons that kill without a deliberate human decision have existed for hundreds of years. Mines have been used on land and at sea since at least the 1700s. Patriot and Phalanx missile defense systems can act independently to strike enemy aircraft or surface vessels. Sentry guns that automatically fire at targets in combat patrol zones have also been mounted on armored vehicles.


Nonetheless, these systems have been mostly defensive in nature. The world is now crossing a Rubicon: offensive weapons, armed with enough intelligence to make more complicated judgments, will play a significant part in wars. The result would be conflicts in which robots and autonomous systems outnumber human troops.

Why Do Governments Adopt Killer Robots?

The allure of killer robots and autonomous vehicles is undeniable. By sending them to do the dirty work, armies spare precious troops the risk of death and spare expensive pilots the risk of flying costly equipment. Robots don’t need bathroom breaks or water, and they don’t miss a shot because they sneezed or flinched. Robots, like humans, make errors, but proponents of offensive AI expect those errors to be more predictable, with little regard for the growing unpredictability that emerges from complex systems. Finally, robots can be retrained in real time, and replacing them is far quicker and cheaper than replacing human soldiers.

Most crucially, the political cost of using robots and LAWs is far lower. There would be no images of captured troops or burned bodies, or of pilots on their knees in a frozen field pleading for mercy. As a result, conflict is likely to grow more distant and faceless. Putting AI on weaponry is simply the next natural step along this road: it allows robot weapons to operate at larger scale and respond without human intervention. This clarifies the military rationale, because a lack of AI capabilities puts an army at a significant disadvantage. Software is eating the corporate world, and it is eating the military world too. AI is the tip of the software spear, leveling the playing field and allowing battlefield systems to scale at the same rate as successful consumer products. Whatever the moral ramifications, the decision not to use AI on the battlefield will come to look like a poor economic decision.


The Advantages and Drawbacks of Having Fewer Humans in the Loop

As we discuss in our book, The Driver in the Driverless Car, proponents of autonomous lethal force contend that AI-controlled robots and drones could be significantly more moral than their human counterparts. They argue that a robot instructed not to kill women or children would not make such errors under duress. They further contend that programmed logic is well suited to reducing essential moral dilemmas to binary choices. For example, an AI system with sharper vision could decide on the spot whether to fire on a vehicle emblazoned with a red cross as it speeds toward a checkpoint.

These are essentially counterfactual lines of reasoning. Are humans more moral if they can design robots to avoid the psychological flaws that lead veteran soldiers to lose their sense of reason and morality in the heat of battle? Is it preferable to rely on the cold logic of a robot warrior rather than an emotional human being when it is hard to tell whether a foe follows any moral compass at all, as with ISIS? What if a non-state terrorist group builds deadly robots that give it an edge on the battlefield? Is that a risk the world should be prepared to accept in order to develop these weapons?

There are obvious, unacceptable hazards in this form of fighting, especially when robots act largely independently in areas populated by both combatants and civilians. Consider Russian drones flying air cover and destroying everything that moves on the ground: the collateral damage and noncombatant deaths would be horrifying. In numerous cases, including a well-known 1979 incident in which human error triggered sirens warning of a Soviet nuclear attack, automated systems have produced false information that human operators debunked just in time to avert a nuclear exchange. AI makes decisions far too quickly for humans to correct them. Catastrophic errors are therefore inevitable.


Nor should we expect LAWs to remain limited to nation-states. Because their production costs fall along a Moore’s Law curve, they will swiftly join the arsenals of capable non-state actors. Drones can be outfitted with off-the-shelf weaponry, and their sensors can be linked to home-grown AI systems that detect and target human-like shapes.

We are now at a fork in the road. The heinous barbarism of Russia’s invasion of Ukraine has proved once again that even major countries can abandon morality in favor of national narratives that suit autocrats and corrupt political elites. The next great conflict will almost certainly be won or lost in part through the intelligent use of AI technologies. What can be done about this imminent danger?

A complete prohibition on AI-based weapons would have been ideal, but today it is both unfeasible and harmful. A ban would, for example, tie the hands of NATO, the United States, and Japan in a future war, leaving their forces exposed. It is more realistic to prohibit the use of AI systems in weapons of mass destruction. Some may argue that this is a distinction without a difference, yet the world has effectively restricted other weapons with global consequences. Either way, we have crossed the Rubicon and have few options in a world where leaders like Putin use thermobaric missiles against innocent victims and threaten nuclear escalation.

Vivek Wadhwa and Alex Salkever are the authors of The Driver in the Driverless Car and From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. Their work illustrates how new technologies can be used for both good and evil, to solve humanity’s greatest problems or to destroy it.
