Tomorrow Wars Volume 1 Issue 8: It’s the Autonomy, Stupid

33°11’33.4″N 106°34’58.2″W

“In the loop” is a phrase with a wicked irony. It is a categorization of human control over the autonomous processes of a machine and, inevitably, a reference to a cult classic comedy about stumbling into war through miscommunication.

As military planners and designers inch forward with autonomous features in military machines, where the human sits in the loop matters more than ever. The “where” of “in the loop” sets the range of human control, and it determines when an action falls outside that control. It also vexes definitions of lethal autonomy, especially when applied to thinking machines in flight. If policy makers are to understand what, exactly, weapons are capable of doing, they might need a new way of understanding how these weapons work.

I’m Kelsey D. Atherton, reporting from Socorro, New Mexico, and I’m here to talk about classification. (No, not that kind of classification).

Let me step back a moment.

In our last issue, Tomorrow Wars dove into the muddy distinctions between armed drones, loitering munitions, and cruise missiles. It was an attempt at classification by physical form: the propulsion system behind the weapon and the ability (or lack thereof) to disarm and land.

What if a classification-by-propulsion metric is wrong?

An alternative possibility is to instead classify a weapon based on when it selects a target. For cruise missiles, this is generally at the time of launch. For remotely piloted drones, the choice is made when a human selects a target and fires one of the drone’s onboard weapons. A loitering munition like the Harop, meanwhile, might fly a whole mission and see no targets, or might pick up a radar signature, arm its warhead, and crash down into the anti-air system it’s designed to destroy.

The “when” of the selection matters, and while it roughly matches the physical characteristics of the weapon, it does so imperfectly. The Long Range Anti-Ship Missile, or LRASM, is a perpetual bugaboo in the lethal autonomous weapons debate. (This is ably documented in a whole section of Paul Scharre’s “Army of None,” an ur-text for the thorny classifications of lethal autonomy). Fired as a cruise missile, the LRASM’s onboard sensors allow it to find a different target if the first target is no longer valid. The human is in the loop at the moment of firing and initial target selection, and the machine itself may change what target it hits, within some parameters.

Putting the focus on how human control and machine autonomy factor into target selection might lead to better designed countermeasures. If the threat model is cheap armed drones actively piloted by humans, then disrupting the communication between pilot and platform is a viable countermeasure. If the threat, instead, is autonomous machines that select targets based on pre-programmed options, then masking the signature of a target (say, a warship that gives the impression of being a tanker) becomes a viable defense.

This debate over how, exactly, to parse the difference between armed flying machines without people on board is bound to continue. A focus on targeting, and where, exactly, the human sits in the loop, is an opportunity to look at the deeply human nature of even such inhuman things as armed robots.


Robots are not just the future of war, they are the present of war. One clear example is the Recognized Environmental Picture (Maritime Unmanned Systems) exercise conducted by NATO in Troia, Portugal. Featuring at least 15 varieties of uncrewed craft, the robots in the air, on the surface, and below the surface provided information and intelligence to humans conducting everything from beach landings to ship boardings. By all appearances, the exercise emphasized asymmetry, with the machines contributing to an intelligence picture already stacked in favor of NATO.

Training alongside robots is an important first step of modern warfare. To truly capture how war might unfold in the 21st century, the next step will be training alongside robots against adversaries who are also armed with robots.


Rescue robotics remains one of the most fascinating military-machine-adjacent fields. Developed for domestic emergency and law enforcement needs, these people-finding and sensing tools can prove valuable in the hazard-rich environment of modern combat.

This fortnight, I looked at FLIR’s MUVE C360 chemical-sensing system. Built on the workhorse body of a DJI-made drone, the system has industrial and safety applications. It could also sniff out chemical hazards in rooms where it might be dangerous for humans to go, but in order to end up in Pentagon inventories, it will likely need to pass through the foreign-made drone waiver program.

In Russia, the ZMEELOK-3M snake robot is a modular tool built for caves and collapsed structures. Caves are one of the hardest environments for troops to safely navigate, especially if there’s a fear of an armed foe waiting in the darkness. Sending a robot in first is a good way to protect the lives of the humans that follow it. That the sight of a robotic snake is unsettling in and of itself is an added bonus.


An existing missile, mounted on an existing sensor mast, mounted on an existing tracked platform, scans as a “dog bites man” story. The news is not in the newness, it’s in the aggregate functionality. The missile, a propulsion and sensing system, mounted on a pile of sensors, on an uncrewed vehicle, is either a range extension of a soldier’s reach, or an almost-complete autonomous weapon system. It is perhaps both.

Mounting a Javelin anti-tank missile on a THeMIS robotic tanklet through a Protector turret means creating a platform that can sense, move, and target. Humans are in the loop now, responsible for any firing decision, but the nature of autonomous machines is that, at some point, the shift from human in the loop to human on the loop could be made to meet the demands of a less management-intensive anti-vehicle platform.

And that’s to say nothing of the possibility of those same sensors directing a loitering munition at targets instead.
