April 12, 2023

“Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming”

By Robert J. Sparrow, Adam Henschke

Contesting Paul Scharre’s influential vision of “centaur warfighting” and the idea that autonomous weapon systems will replace human warfighters, Sparrow and Henschke propose that the manned-unmanned teams of the future are more likely to be minotaurs—teams of humans under the control, supervision, or command of artificial intelligence. They examine the likely composition of the future force and prompt a necessary conversation about the ethical issues raised by minotaur warfighting.

Keywords: ethics, manned-unmanned teaming, future force, centaur warfighting, autonomous weapon systems

Read the article: https://press.armywarcollege.edu/parameters/vol53/iss1/14/

Episode Transcript: “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming”

Stephanie Crider (Host)

You’re listening to Decisive Point. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.

Today I’m chatting with Rob Sparrow, professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-making and Society at Monash University, and Adam Henschke, an assistant professor in the philosophy section at the University of Twente, Netherlands.

Sparrow and Henschke are the authors of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” which ran in the Spring 2023 issue of Parameters. Welcome to Decisive Point, gentlemen. Set the stage for us, please, including Paul Scharre’s perspective.

Robert J. Sparrow

We’ve seen drones and teleoperated weapon systems play an increasing role in contemporary conflict. There is lots of enthusiasm for autonomy in weapon systems in the military. I think 20 years ago, there was a paper in Parameters arguing that in the future the tempo of battle would increase to such a point that only computers would be capable of making the decisions that are required to win battles. So for a long time, there’s been a debate about the relationship between human beings and unmanned systems in warfighting. Recently, Paul Scharre has argued that we don’t need to worry about autonomous weapon systems taking over all the combat roles because, actually, the future of warfighting involves manned-unmanned teaming, and Scharre suggests that we should think of this on the model of what he described as a centaur. A centaur is a mythical creature with the head and upper body of a human being and the lower body of a horse. And that’s a really nice image. You’ve got the human being in command and in control and the machines doing the physical work involved in warfighting.

We think that’s perhaps optimistic for a number of reasons. What we’ve seen in real applications is that it’s often easier to get machines to make decisions than it is to get machines to do physical work. And so, for that reason, we think that the future of manned-unmanned teaming might be better imagined as what we call a minotaur. So, rather than the human being in charge of the team, we suspect that in many roles, actually, the AI will be in charge of the team, and the human beings will be effectively under the command of the AI and doing the physical work, while the mental work will be performed by artificial intelligence.

Adam Henschke

One way to think of Paul Scharre’s approach is, as Rob said, he’s advocated this view of centaur warfighting, and there the human is generally seen as the head, the decision-making part of the warfighting operation. And the robots and the machines, they do, kind of, the grunt work. They’re the things that do the stuff on the ground. So, the way in which this manned-unmanned vision is put forward in Scharre’s work is humans do the deciding and the robots, or the machines, do the fighting.

Host

But you disagree. Will you expand on that please?

Sparrow

We do think that in many domains and in many situations it’s more likely that the machine will be in charge—or effectively in charge. We think that it’s quite hard to get machines to do something like move a gun into place or walk up the stairs or talk to people and ask them where the insurgents are. Those roles, we think, will still need to be carried out by human beings. But for tasks like wargaming or identifying and tracking targets, machines can already outperform human beings in lots of circumstances.

If you are considering the team, we think a lot of the executive and cognitive tasks will actually be handed over to artificial intelligence, and the human beings will be left doing what the AI says. And so, in that context, you should think of this as a minotaur team, a kind of cyborg with an artificial intelligence head and a body made up of human beings.

Henschke

On this, Rob came up with the idea of flipping the centaur view to suggest that we might instead think of these robot/human teams like minotaurs, where the thinking is done by the machine and the fighting, or the grunt work, is done by the humans. So, the robot becomes the head, and the human becomes the body, kind of flipping it from the centaur vision. The idea is that we ought to think of these more as unmanned-manned teams, like a minotaur rather than a centaur.

Host

I love that visual. Would you walk us through the key technical dynamics and the particulars of minotaur warfighting?

Sparrow

When people started thinking about artificial intelligence, they thought the hard task was going to be getting machines to do the things that we find difficult—playing chess, calculating, looking for patterns in data. But when people started building robots, what they discovered was that the machines were quite good at that stuff. Machines were able to calculate . . . were able to do scheduling tasks. They were able to, in the end, play chess reasonably easily and very, very well.

Where machines struggled was in doing things that children and animals can do, and so we don’t recognize as requiring sophisticated capacities—things like walking into a room and recognizing where the chairs in the room are or simply being able to walk up steps or pick up a cup. Those tasks actually turn out to be incredibly difficult for machines.

We still don’t have robots that can, for instance, walk into your office and find your coffee cup. Machine vision systems are much better now, but in complex environments, where there’s a need to recognize context and maybe move around a cluttered environment, machines fail very quickly. If you’re looking for tasks to automate, if you’re looking for tasks that machines can perform, often where people end up employing machines is in these executive or cognitive roles. For instance, scheduling which offices get cleaned and making sure that there are people to clean them . . . that task can be outsourced to a machine scheduling system. Actually walking into the office, emptying the rubbish bins, and doing the vacuuming, that still needs to be done by human beings.

Host

Let’s talk about ethics. What are the ethical implications of minotaurs and minotaur warfighting?

Henschke

Well, one of the main things in the notion of minotaur warfighting as we describe it in the paper is that you’ll have the machines, computers, and AI, a combination of technologies, directing and guiding how humans engage in a conflict zone. And as a result of that, you’ve got decisions being made by machines and carried out by humans. And this has quite a few ethical implications. One of the most interesting ones, or at least one of the first ones that we want to point out, is that in these situations there might actually be a strong case for minotaur warfighting. If it would decrease the likelihood of military mistakes, decrease the likelihood of fratricide, and/or avoid putting your own soldiers at unnecessary risk, we might want those decisions to be made by the computers, by the machines. So there is a bit of an argument in favor of minotaur warfighting, and, picking up on what Rob just said, there might also be a case for minotaur warfighting over centaur warfighting in many situations.

So, if you think of what Rob was saying about navigating physical terrain, when we think of terrestrial warfighting—warfighting on land, for instance—those are really, really complex, really complicated physical environments. You might have sand. You might have water. You might have trees. You might have humans or equipment moving through these complicated environments. And in that sense, the centaur is probably gonna face a whole lot of trouble, whereas the minotaur, the unmanned-manned system, might actually have a far better capacity to operate in these sorts of complicated, complex, terrestrial environments. So there we might see that there’s actually a case for minotaur warfighting if it’s going to increase the likelihood of success and decrease the chances of making military mistakes.

Sparrow

We also think that people might be quite horrified by that prospect. I mean, really, the idea that you’re just following a list of tasks or strategic objectives given to you by a computer system, I think people are going to really struggle with that. And there’s an understandable sense that human beings are valuable in a way that machines are not and that placing machines in authority or giving machines effective power over human beings gets that relationship backwards.

We think there’s a relationship here with the debates about autonomous weapons systems, where critics of autonomous weapons systems have often insisted that machines shouldn’t be given the power to take a human life, that there’s something about the value of human life that suggests we shouldn’t let a machine make a decision about taking a human life. That intuition also, I think, counts against minotaur warfighting, because minotaurs, that is, AI systems, will be placing warfighters in harm’s way. They may sometimes have to put people into combat that they’re unlikely to survive, and I think people will balk at that, despite the fact that there are these very powerful arguments suggesting it might actually reduce the average risk to warfighters.

There are some questions here also about what we expect when we give reasons to each other. And you know, it’s a really important ethical principle that when you are relating to someone, you should be able to provide reasons for the way that you’re treating them. And there are some problems with the idea of machines giving reasons. Nowadays, machine systems can spit out reasons. I mean, they can give you what looks like a reason, but they don’t have skin in the game in the same way that human beings do. They don’t stand behind their words in the way a human being does when they explain why they have ordered you to take on this very difficult task. So, there are some questions here, again, about whether it’s ethically acceptable to have machines effectively ordering people into battle.

Host

I read one of the potential scenarios in your article about what happens if AI uses humans, basically, as fodder to achieve a larger goal or greater objective. That rattled me somewhat; I’d never considered it.

Henschke

I was going to say exactly that point. One of the other really fundamental principles in ethical theory goes back to the work of Immanuel Kant. One of the things he said is that we shouldn’t treat people as tools; we should treat them as ends in themselves. And if you’ve got a machine that, as Rob said, lacks the capacity to morally reflect on decisions telling humans what to do, there is a really significant concern that the humans become like a tool. They become fodder. And that goes against a really, really core set of foundational principles in ethics. So that’s something that is problematic, definitely for minotaur warfighting (and) probably also for centaur warfighting in the various forms that might take as well.

Host

What should we consider going forward? This obviously isn’t going away.

Sparrow

So clearly a key question here is whether or not we can intervene to prevent minotaur warfighting from emerging, or perhaps control the sorts of tasks where machines are given effective power or authority over human beings. We worry about an arms race here. We think that minotaur warfighting will evolve because in lots of circumstances, we think these systems will win battles . . . that, essentially, a military force that leaves too many decisions up to human beings may struggle to compete with a military force that is more willing to hand over certain sorts of decisions to AI. So, there’s a potential for something like an arms race.

If you did want to try to prevent or slow down the development of minotaur warfighting, one obvious way of going about it is trying to build better robots . . . is to think about how we can build robots that actually can take on these sorts of physical tasks that are currently very difficult for machines. And we think one of the real problems there is that the sorts of technologies you would need to develop in order to get machines that can handle the complexity and uncertainties of the physical environment might actually also make it easier for machines to take on command roles.

Henschke

One other thing, too, that seems possibly quite obvious—there would have to be significant changes to training, education, and other ways in which the culture and practice of militaries operate. So, if we are going to go down some kind of minotaur route, then we’d need to recognize that the soldiers who are placed under the command of these machines would have to receive particular training relevant to that. And also, quite importantly, the people who have the capacity to make the decisions about minotaur warfighting . . . they would also have to undergo really specific training and education, understanding what the manned-unmanned systems are for, what their weaknesses are, what their limits are, and what the implications are, because there would be shifts in responsibility, in who we assign moral responsibility to, and probably some of the legal responsibilities would have to change as a result of that as well. So, one of the big important things would be to have a cultural shift in militaries that recognizes not just the practices of minotaur warfighting but a lot of the ethical, legal, and perhaps even social and cultural issues that might come along as well.

Sparrow

If people are freaked out by this vision, if people don’t want to see command roles handed over to AI, I think that also means they should think again about autonomous weapons systems. There’s a lot of enthusiasm for autonomy in our armed forces, for understandable reasons . . . indeed, the same reasons we think minotaurs will emerge are the reasons other people have been arguing that more and more tasks will be handed over to autonomous weapon systems. But machine autonomy looks problematic in both cases or in neither. If you think it’s wrong to have a machine sending your troops into battle where they might be killed, it’s quite hard to explain how it can be OK to have a machine making decisions about which of the enemy to kill. So, we think there are some connections here between the debates that need to happen about so-called minotaur warfighting and the debates that are going on at the moment about autonomy in weapon systems.

Host

Definitely lots of food for thought here.

Listeners, you can really dig deep into this and get into a lot of detail. Download the article at press.armywarcollege.edu/parameters and look for volume 53, issue 1.

Rob and Adam, thank you so much for making this happen. I know we’re on three different continents. It took a little bit of effort. I really appreciate you making the time for this.

Henschke

Thanks very much, Stephanie.

Sparrow

Thank you, Stephanie.

Host

If you enjoyed this episode of Decisive Point and would like to hear more, you can find us on any major podcast platform.


About the authors:
Robert J. Sparrow is a professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-making and Society (CE200100005) at Monash University, Australia, where he works on ethical issues raised by new technologies. He has served as a cochair of the Institute of Electrical and Electronics Engineers Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.

Adam Henschke is an assistant professor in the philosophy section at the University of Twente, Netherlands. His research is concerned with ethics, technology, and national security, and he is interested in ethical issues having to do with information technologies and institutions, surveillance, cyber-physical systems, human military enhancement, and relations between social information technologies, political violence, and political extremism. He recently coedited the 2022 Palgrave Handbook of National Security.