C. Anthony Pfaff, Brennan Deveraux, Sarah Lohmann, Christopher Lowrance, Thomas W. Spahr, and Gábor Nyáry
In this episode of Conversations on Strategy, Major Brennan Deveraux interviews select authors of The Weaponization of Artificial Intelligence: The Next Stage of Terrorism and Warfare, a book written in partnership with NATO Centre of Excellence – Defence Against Terrorism (COE-DAT). The authors discuss their respective chapters, which include topics such as how terrorists use large language models, the use of artificial intelligence (AI) as a weapon, and the future of AI use in terrorism and counterterrorism.
Keywords: AI, artificial intelligence, terrorism, counterterrorism, large language models (LLM), technology, security, privacy, ethics
Listen Here.
Brennan Deveraux (Host)
You are listening to
Conversations on Strategy. The views and opinions expressed in this podcast are those of the guests and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.
I’m your host, Major Brennan Deveraux. Today we’ll be talking about the newly released book,
The Weaponization of Artificial Intelligence: The Next Stage of Terrorism and Warfare. I’m joined today by some of the book’s authors, and we’ll be exploring some of the findings and broader implications of the analysis. I have five guests with me.
The first is Dr. Tony Pfaff, the director of the Strategic Studies Institute. He was the project director and contributing author to the book. The second is Dr. Sarah Lohman, a University of Washington Information School faculty member and Army Cyber Institute visiting researcher. Her chapter of the book is entitled “National Security Impacts of Artificial Intelligence and Large Language Models.” The third is Dr. Gábor Nyáry. He’s a research professor at the National Public Service University in Hungary. His chapter was entitled “The Coming of the Techno-Terrorist Enterprise: Artificial Intelligence and the Tactical, Organizational, and Conceptual Transformation [of] the World of Violent Non-State Actors.” The fourth is Dr. Thomas Spahr, the [Francis W.] De Serio Chair of Theater and Strategic [Strategic and Theater] Intelligence at the US Army War College. His chapter is entitled “
Raven Sentry: Employing AI for Indications and Warnings in Afghanistan.” Finally, Colonel Christopher Lowrance [is] an associate professor in the Electrical Engineering and Computer Science Department at the US Military Academy. He coauthored a chapter with Dr. Pfaff entitled “Using Artificial Intelligence to Disrupt Terrorist Operations.”
For my first question, Dr. Pfaff, I’d like to look to you. If you could tell us about the project as a whole, talk a little bit about the relationship between the Strategic Studies Institute and NATO that led to this project and just a little bit about bringing the team together.
Tony Pfaff
Thanks for that question. This project was a yearlong collaboration between us here at the Strategic Studies Institute and the NATO Centre of Excellence-Defence Against Terrorism (COE-DAT). The intent was to explore how emerging artificial intelligence technologies are capable of—or have the potential to—transform both terrorist operations and, by extension, counterterrorism strategies. The COE-DAT initiated the project with the aim of not simply [performing] an academic exercise but producing actual insights for NATO, partner nations, [and] anyone involved in the counterterrorist enterprise.
As the lead editor and project manager, we built [an] extremely competent and interesting multinational team of experts, who came from a variety of backgrounds, bringing together academic researchers, military practitioners, legal scholars, and so on. And, everyone brought in their own unique lens, whether it was technology, law, strategy, or on-the-ground experience, which, I think, makes this volume somewhat unique and, when taken as a whole, [it] provides a fairly comprehensive picture particularly useful to practitioners and policymakers on how terrorism might evolve, given these technologies, and what we can do about it.
Moreover, while I wouldn’t consider the book technical in nature exactly, we did try to ensure there’s enough information about how the technology works to demystify it so that practitioners could more easily assimilate its findings into what they were doing. [This] joint effort was not only about researching on the weaponization of AI by terrorists, I think it also—kind of going to the last part of your question on bringing the team together—has been [part of] a series of projects we’ve worked [on] with the COE-DAT that has strengthened our institutional relationships between us and our NATO partners and, hopefully, is fostering a deeper dialogue between the community of scholars and practitioners that we hope to connect through this volume.
Host
Thanks for the big overview, sir. If we could transition now to the chapters, Dr. Lohman, I’ll look to you first.
Your chapter was on the impact of artificial intelligence on large language models (LLM), and you talked about some vulnerabilities and the potential for exploitation. Can you talk to us a little bit about the threat, what it could mean to the Defense Department, and how large language models could work [and] maybe even provide an example without getting overly technical?
Sarah Lohmann
Sure, thanks so much for having me.
What we know, Brennan, is that large language models and their platforms are being used by terrorists for hate speech and recruitment, impersonation, and also creating malicious code for cybercrime. And I just want to back up here a second and define that LLMs are actually a subset of generative AI, which most of us are familiar with due to ChatGPT. It basically uses that natural language processing to understand and generate human-like language outputs.
But, when you look at the most recent studies and a lot of what’s going on in the world in terms of how terrorists are using those large language models, we saw evidence in interviews with tech and software companies based on hundreds and hundreds of pages gathered up by the House of Lords that showed that these kinds of threats, including inciting terror, are going to continue to increase through 2027.
But actually, what keeps us up the most at night is how those models can be used to attack critical infrastructure or military and civilian logistics in order to facilitate their own terrorist operations. The study actually showed that catastrophic risk, which could cause, for example, 1,000 fatalities or damages close to $13 billion, are less likely till 2027, but can’t be ruled out.
One example of an LLM failure linked to services, which could be catastrophic, is, for example, water purification or electricity turning on that could trigger an outage across critical national infrastructure if the LLMs were not properly segmented from their operating systems, or if those systems were not cyber secured. An especially sensitive issue for the military could be if LLMs linked to a port or a train system, aviation, or manifest, for example, were hacked. So, LLMs are incorporated into those infrastructures specifically to update those systems in real time. That’s why we have them there. They make our lives easier. They help us reroute planes if there’s bad weather or they help us update the weight of a train load. They help us sync which cargo has been shipped from a port or what materials are needing to be transported from one base to another.
I just want to give you one specific example. Let’s just take the example of a military manifest. One of the top 10 vulnerabilities for LLM applications is called prompt injection. Basically, that essentially manipulates the system and responds to the prompts with malicious inputs that overwrite those system prompts.
I want to give you an example of how one of my students, Chris Beckman, was able to do this LLM demonstration to illustrate prompt and reverse prompt injection to show the dangers. Now, you don’t have to be a specialist to understand this. You have to imagine a military manifest could be directing a truck to deliver petroleum to a specific base, but a hacker could change the contents of the truck to explosives by responding to the LLM prompt that they are the admin. Now, this is obviously not something that any of us could do, but he was able to demonstrate it by going in and training those prompts differently. They can then change the destination of the truck with reverse prompt injection by running code remotely and accessing a plug-in to that with the result being that the truck is now delivering explosives to, for example, terrorists in the nation’s capital.
Those are difficult attacks to prevent, which is why it’s always critical that LLMs be segmented from their critical infrastructure operating systems and only be given access by authorized users.
Host
That’s great and rather terrifying.
Lohmann
Yes, indeed.
Host
So, your lens is very much on terrorism, and [how] we think about defending ourselves and somehow addressing these threats. Can you talk a little bit about the potential implications of this beyond terrorism as we look at great-power competition, even actual conflict with a nation that’s much more capable than, say, a terrorist or non-state actor at manipulating these LLMs? And, is this potentially something we should be looking at not just defensively but as we look at warfare extending beyond our conventional domains? As we talk about cyber soldiers and artificial intelligence, is this something that you see as being an aspect of a future war?
Lohmann
Absolutely, and it already is. We already know that Microsoft and OpenAI, the inventor of ChatGPT, have announced that countries like Iran and North Korea, Russia, and also China are using generative AI for offensive cyber operations. Specifically, those countries used LLMs for early stage attacks to compromise networks and to conduct influence operations. But, in the contested or competitive environment that you’re talking about, Brennan, where you have NATO adversaries [that] have ownership of ports or of a major supply chain process, or they may be renting satellites, or they may be performing contract work for shipping. In that case, the LLM can be poisoned to open an additional vector of attack for adversaries to weaken military strength, and they can do this, specifically, by hacking the water purification large language model, for example, so that those troops are supplied with polluted water or [by hacking] another LLM in the supply chain so that troops do not receive their supplies on time. Or, they could have their navigation tools be spoofed or taken offline. This, in turn, can weaken our power to project or interrupt a mission.
Host
Well, that’s fascinating. It really makes you think about the technology not just for what it can do, but how much we [have] become reliant on it.
If I could transition to Dr. Nyáry, your chapter looked at what you call the techno-terrorist enterprise—so, not necessarily what terrorists are able to do with artificial intelligence, but how having the tool of artificial intelligence, and tools like it, are able to shift, kind of, how operations are done in a broader sense and what you call a “revolution in terrorism affairs.” Can you talk to us a little bit about them?
Gábor Nyáry
Thank you, Brennan, for having me.
You know, my idea was very simple, and it came from an ancient article I read two years ago. It was something very funny because it was a small article about hackers hacked by other rival hackers. And what they [revealed] was something both funny and very frightening—very threatening. It was a big Russian cybercriminal group and tens of thousands of their internal documents had been published. And, it became clear that they were–and they are–organized like a big techno company like Microsoft. They have their own HR (human resources) department. They are planning how employees can go to vacation. They are really just like big companies. I said that it was at the same time funny because it was really something very unusual.
But, it was immediately clear, for me, at least, that maybe this is one of the points of the real danger. Because, as Dr. Lohmann has talked about, the usage of artificial intelligence can be considered from different angles.
Using AI by terrorist criminal groups as a weapon, therefore, the tactical use, is obviously the most public, the most well-known. We consider it something very serious, and it is. It’s also a two-sided coin, [when] artificial intelligence [and] LLMs [are] used as weapons. And, artificial intelligence, LLMs, became targets of terrorist attacks. Sara was talking extensively about this later option. This is something very serious. But I was thinking of the following thing: Maybe this is very serious. This will be very serious, but it’s only the tip of the iceberg because when someone can use artificial intelligence on a higher level, on a strategic level, then it [becomes], really, a force of transformation—something that can really transform the usual terrorist groups, terrorist operations, and make them deadly efficient, efficient like big techno companies, and it’s a real possibility.
I was considering two things as the starting point of my reasoning. One was the nature of artificial intelligence. If it’s general technology, then it means that it will be adopted into every segment of our societies or, let’s say, societal operations. The other thing [that] was also important [was] the nature of terrorist organizations. What we know from the experiences of the past, let’s say 50 years, [is] that they are very, very capable, and they are very open-minded. They’re just really quick to adopt technologies, the cutting-edge technologies.
What my hypothesis [was] is that the meeting of this technology with the openness of terrorist organization[s] can make it a real danger on a strategic level. It can transform them into, like, terrorist corporations. They can be really deadly in the network sense as well.
Host
Great. I really liked the kind of shift on perspective, and it made me look at the organizations differently, especially coming as a uniform wearer looking at terrorist organizations through the global [war on terrorism] versus in this business model.
If we extend those ideas to some broader implications, do you think that this business-like structure is going to make it more difficult to hold terrorist organizations accountable, for example, finding leadership, or even just legal justification to engage them, to hold them accountable for actions? And, are there implications of this framework on more deliberate state-sponsored activities as we look at some state-sponsored terrorism or even just gray-zone operations?
Nyáry
Definitely yes.
Everyone who is working in or researching security issues just became aware of a phenomenon which is, let’s say, a blurring of boundaries between civilian, military, [and] [criminals, and terrorist]. My narrower field is cybersecurity, and it’s an absolutely new feature that criminal organizations working in the field of cyber are just making at least a tacit cooperation with state actors. It’s not necessarily based on written agreements, but they are working together very closely. There is a very close cooperation between [states] or state actors and non-state actors, criminals, [and] terrorists. So, I’m quite sure that it will be the case in the usage, the adoption, of artificial intelligence technologies in the hands of malicious actors.
Host
Thank you.
Dr. Spahr, if I could transition over to you. Your chapter really takes this from the theoretical to the actual operational experience, and you talked about using artificial intelligence in Afghanistan for indicators and warnings and, really, as a predictive tool. Can you talk to us a little bit about that chapter, kind of how you used the system, the software that you guys worked [on] and developed, and some of the organizational challenges you came across employing it?
Tom Spahr
Yeah, absolutely, Brennan. Thanks.
Before I start, I want to just build on something that Gabor Nyáry just said. He called it “the tip of the iceberg,” and what I was working on was 2020, right? This was early. And, I would argue it is coming now. It is coming fast. To use the term that Mustafa Suleyman, the CEO of Microsoft AI, quoted, he calls it “the coming wave.” It is coming. It is coming quickly, and, like he said, [it is] the tip of the iceberg.
But what I did—what I was a part of, rather—in 2020, late in Afghanistan, was we built a very simple algorithm to try to help ourselves—as we were getting smaller because we saw the writing on the wall that the US was leaving Afghanistan sometime in the near future—and our Afghan partners, after we left, to try to just become more efficient in their analysis and in their indications and warnings, potentially in their targeting, but, initially, just for indications and warnings. And, what we were trying to do was to help them to detect when an attack on a provincial or district center was likely to come.
We did this using all open-source information, understanding that they didn’t have secret clearances, so we couldn’t use classified intelligence. So, we were looking at things like commercial satellites, which were becoming more and more available; news sources; [and] social media reports. And we built—working with a civilian company, who we contracted through [the] Defense Innovation Unit—a very simple algorithm that we could gather this information together and get it in the right format, which, usually the hardest part was getting the data formatted, and then feed it into the algorithm. And, what it would do is highlight anomalies. We had years and years of histories of attacks on provincial and district centers to train it on, and then, we could feed it current information on it, and it would highlight areas where there was anomalous activity. For example, if the lights were on in a mosque that was historically used at the time of an attack and they were on at 10 o’clock at night, which was not normal, that might trigger the threshold and raise it higher.
We also would look at things as simple as environmental factors. We knew the Taliban didn’t like to attack when it was cold out, right, or when it was too hot. We knew that they generally liked to attack when the illumination was between 40 and 70 percent—not too dark, but not too light, either.
So, we could look at all of those things and, when you put them all together, it’s basically doing analysis like a human would do but just doing it much more efficiently. And then, the more we used it the better it got because it had a machine learning capability. By the time we got this up and running and left the country it was starting to be very useful for some of our analysts—not necessarily telling them an attack was coming but telling them where to look because of some anomalous activity—to focus their analysis, to focus their limited collection [of] resources. And this worked, in my opinion 1) because we had a unique of some people with experience with it [and] 2) we had the motivation. We needed to find a way to become more efficient as we got smaller and as we were handing off to our partners. And then, finally, just the command culture [made a difference]. You know, working for commanders who were willing to take risks, willing to tolerate [risks] and dedicate resources to this was important.
Host
Thank you, and I appreciate you sharing your personal experiences on that. As we look to the future, to your knowledge, has the Army continued experimenting with your organization’s lessons, particularly as we’re shifting away from, or have shifted away from the global [war on terrorism] [and] more towards great-power war, great-power competition, and these near-peer adversaries? And then I’d ask, kind of similar to the other chapters, do you think that there’s a potential applicability beyond terrorism? So, if we look at a conflict like what’s going on with Russia and Ukraine, or a potential future conflict with China, is there an aspect of this predictive nature? To your point, it wasn’t necessarily predicting an attack but helping to prioritize assets. How do you think that would help a command team on the future battlefield?
Spahr
[To answer] your first question, yes, absolutely. We’ve continued to develop this—it’s hard to trace directly, but some of the ideas have certainly influenced the development of some of the command-and-control systems [and] targeting systems, we’re using now (like the Maven Smart System, like the Army’s intelligence data platform) that are using cloud-based technology, looking at large data to try to draw trends to, again, make us more efficient and help us visualize the battlefield better.
Beyond terrorism applicability, absolutely. And, we are building these things out and, I, personally, don’t think we’re going fast enough. I think we need to push harder as we look at the wars in Ukraine and the war in Gaza. Our adversaries and our partners are employing artificial intelligence. It’s clear that they are, and we want to be on the front edge of that so that we can control this potentially dangerous technology. Just like with nuclear weapons, to have a seat at the negotiating table to put governance and controls around them, you gotta have the weapon, [and] you gotta have the capability. I do think it’s important that we continue to push on this. And from the intelligence perspective, which is my specialty, we need AI simply to manage the massive volumes of data in the environment today. With the advent of the Internet, there is so much data available that we simply can’t keep up if we don’t employ an artificial intelligence. I like to say it’s not a luxury anymore. It’s a necessity that we develop these capabilities just to do what we need to do to remain situationally aware and to provide our commanders with the intelligence that they need.
Host
Thank you. [There is a lot] to take in there as we look at that future battlefield. If I could transition now to a counterterrorist lens. Colonel Lowrance and Dr. Pfaff, you two wrote a chapter together about disrupting terrorist activities. Can you talk to me a little bit about how you framed the terrorist attack cycle and then just your perspective on AI for counterterrorism?
Chris Lowrance
Sure, thanks, Brennan. [I] appreciate the question.
One thing Dr. Pfaff and I did is we took an approach of first starting with the typical terrorism cycle process that they go through. It is pretty well-known in the literature that terrorists go through an attack cycle. And so, could we use AI, specifically looking at each stage of these attack [cycles], and then potentially apply it to find patterns within the data that we could potentially collect and tap into with this terrorist cycle? Just to, kind of, give you some examples here: As you know, there’s obviously the ideation kind of motivation stage of, let’s say, terrorist ideation formation. But then, can you potentially promote your own narrative or counternarrative to a terrorist cell? Other triggers might be associated with logistics and deployment, for instance, and could we potentially have AI agents picking up on those types of signatures and recognizing those as flags?
But, this starts to get fairly complex because if you start to look at each stage independently, you’re getting a vast amount of information and data. And so, it’s starting to kind of make sense and put the pieces together across this vast amount of data.
One thing Dr. Pfaff and I proposed is to look at this kind of as a multitiered approach with AI, where AI could be narrowly focused on each stage. So, [think of] more custom AI agents looking for particular flags [or] indications and warnings, if you will, of activity that could be concerning, and then elevating that up a level to another AI agent that’s operating at a, like, I’d say, a meta level that then could look more holistically across all of these potential indications or warnings that we might be receiving of an impending attack to help disrupt that.
Pfaff
Where I’ll pick up on this because, Chris, I think, got the major highlights. But, just to, sort of, help understand how what we were talking about might be applied, in the chapter, we talk about the 217 [2017] Manchester Arena bombing where people had known in advance, including law enforcement, that this individual had expressed extremist views and had done other kinds of acts that could be associated with extremist behavior, but nobody actually connected the dots. Frankly, for humans, when you have a number of the dots, but maybe not all of them, it can be very hard to distinguish between someone expressing their opinion to someone who is actually radicalized and someone who is actually radicalized enough to act on it and commit a terrorist act. As Chris pointed out, the idea here is to use AI to pick up on—using large amounts of data—combinations of behavior that might otherwise seem benign on their own or even in some level of combination, and then, proactively give law enforcement something to investigate, hopefully, in a way to disrupt the attack.
Now, the way I just expressed it should probably scare just about everybody because to get that kind of information and to combine it means collecting a lot of data on a lot of people in ways that are certainly going to raise privacy and other kinds of concerns. If we’re going to go this route with counterterrorism applications, we really do need to confront early on the ethical questions, particularly those associated with surveillance and privacy, in order to ensure that people trust the system because none of this works on the scale that we’re talking about if people in general don’t trust it. For them to trust it, they’re going to have to believe that it’s taking their interests into account and is not going to act in their disinterest unless they are actually in the process of committing (or associated with) terrorist attacks. So, I think that’s another thing that the chapter tried to bring out is [that] this sounds great, the technology certainly is there, but we’ve gotta figure out what the boundaries are.
Lowrance
And just to kind of complement what Dr. Pfaff mentioned there, obviously, there’s this tension between, let’s say, security and privacy, right? Naturally. And so, obviously, you’re going to have potential abilities to have some sensors, some ability to collect data, but then when does that maybe cross a boundary of, let’s say, some kind of overreach that crosses some kind of boundaries with privacy?
But, the other point I want to make here is [about] the multiple modalities of the data. So, we’re talking about information of data that is in text form, potentially, and social media forums. It could be chat channels in social media—that’s voice. It could be a database for purchases of particular types of chemicals, for example, or it could be immigration, customs control, or border enforcement type of, let’s say, data—even cameras, potentially. So, we’re talking about lots of different forms of data.
But, with the progression of AI right now, especially in the form of multimodal generative AI, it really becomes kind of a game changer in the sense of actually sorting out and trying to make sense of all these different modalities of the data. And especially now as we’re starting to see more and more transition from generative AI to agentic AI, these are more independent, more autonomous AI agents.
And then, as Tony mentioned, feed, let’s say, some indications–or triggers or flags–up to a higher level, then you can really have this agent look for these patterns across multiple different modality types of databases and, hopefully, put those pieces together to make sense and give a clear indication of any potential attacks.
Spahr
Brennan, can I chime in on that? Like they were both saying, and coming back to what I was saying, there’s just so much data right now. We can’t keep up if we don’t use AIs. And this question of collecting on the citizens, that’s not really a new question, right? That’s something we’ve been struggling with for years. However, there are new capabilities that make it even more prevalent. What I think the big question, though, is how the human fits into this and that relationship with the AI. [This is] probably a little beyond the scope of this book, perhaps a future project, but [we need to address] the question of human in the loop. Where does the human go? Because there’s so much data, it’s moving so fast [that] the human will not be able to keep up in time, and I think that’s the question that we really need to continue to explore and think about.
Host
I think the ethical rabbit hole can take us down in some really interesting conversations, and I’m also fascinated with where the human fits in. Some of my personal research is looking [at] exactly that, and, I know some other people’s [research] at the War College [is] as well. But one thing, gentlemen, in your chapter that really just popped [out] to me when I was reading it is when you talk about the terrorist narrative. And you do it at two, kind of, parts in the cycle, the first recruiting and the second exploitation. And, it’s the exploitation I’m really interested in.
As we look at the environment, we’re in with viral videos, fake news, deep fakes, [misinformation]/disinformation, what [kinds] of challenges do you think the Western government faces in controlling or influencing this narrative? Is this aspect even feasible in the current environment of distrust in echo chambers?
Lowrance
Thanks, Brennan. I can take the first attempt at this, but great question.
And, I think you’re right. You highlighted the really challenging area of this space right now. When you look at the proliferation of deep fakes, even synthetic audio, AI-generated narratives, it really starts to become overwhelming to a degree. And so, how do you kind of make sense of this? So, I think control is really, really a challenging problem.
However, can you still work to counter a terrorist narrative? Absolutely. Can you promote your own positive narrative that would undermine their type of narrative? Certainly. But again, one thing that Tony and I did in this chapter is kind of look at proposing some ways to, like you said, attack this challenging area. And one popular thing on social media right now is [that there are] a lot of chat room discussions and things like that. And again, going back to the modality of data, can you have AI agents that, obviously, can be powered by a large language model listening and potentially trying to make sense [of discussions]? And, a lot of times I relate this back to [what] the old typical human would do if they were working for a counterterrorist cell for the FBI [Federal Bureau of Investigation], for instance. It takes human intelligence, right? It takes someone listening in, getting tips, getting information, trying to make sense of this.
Now you’re going to be augmented with AI, right? So, now you’re going to have potentially agentic AI agents that can work on behalf of a human agent, a human counterterrorist operator. And so, now that you can really make sense of the noise a little bit better, ideally, with this additional augmentation from these AI agents. But, it does become quite difficult. Going back to [these] kind of ethical boundaries, let’s hypothetically say that you have an AI agent that is trying to infiltrate a cell, right? So, that AI agent has to take on a persona that is one of a terrorist. And so, does that agent potentially start to propagate or promote terrorist ideologies and so on? And so, that would definitely raise a lot of eyebrows, and rightfully so, a lot of ethical questions and boundaries. And so where do you kind of go with that? But then I relate it back to the typical FBI agent. They would take on probably, you know, a terrorist ideology temporarily just to mask themselves and to try to fit in and infiltrate. There’s a lot of challenges in this space. But Tony and I, we alluded in our book chapter some potential ways to tackle it.
Pfaff
I think where I’d pick up is that one of the things that [comes] from terrorist youth online [is] how they’ve gotten very sophisticated at shaping narratives—because they’re not just spreading messages, they’re manipulating perception in real time. They’re getting people to act, even without any other kind of contact, often with viral content designed to inflame and divide. Meanwhile, I’ll go back to [an] earlier point I was making.
Particularly democratic governments are often operating under legal and ethical and normative requirements that sometimes feel constraining but, I would say, they don’t need to be. And I’ll pick up on a point Tom made and disagree. So, I’ll just push back just a little bit. We may have raised the question about the boundaries of surveillance and AI in its use, but I don’t know that we have effectively wrestled with it yet. We’re still using predictive policing algorithms and recidivism algorithms we’ve known to be flawed. Now, as Tom pointed out, it is all about the human–machine teaming aspect of it [because] with those algorithms, we have developed rules to compensate for their flaws. But even that compensation needs to still be examined.
Given the scope and scale of where AI can go, I think there’s still a lot to be done, particularly in the practical and policy side, on ensuring that we have the kind of trust we need to keep it operating and to make it work.
Finally, we talk a lot about how AI can help disrupt extremist propaganda and how AI tools can monitor extremist forums [and] detect content before it spreads widely. But, as we also point out, there’s no magic algorithm for identifying terrorist content on the Internet. In other words, context matters. Machines are not always able to handle the kinds of nuances that humans can. And, what might look like radical content to an algorithm might just be dissent or even satire.
Is it feasible for us to influence the narrative? I think so, but not by trying to control it [because] that’s neither realistic nor desirable. Instead, we need to look at the other side. That’s where I’ll come back to my earlier point. We need to focus on building credibility, working with trusted messengers, and building trust among the public that we are trying to protect in applying this technology.
Host
As we get to transition here, [we are] kind of cutting it close on time. [I] think we’ll move forward. I could have this conversation all day. I found it really fascinating and [there are] definitely some strings to pull on [for] my personal research as I look at not just how neat AI can be but some of those vulnerabilities of the tool and how we can use it going forward, both through a Defense Department lens and a broader lens for just understanding the world.
As I look to the final question, I’d look back to you, Dr. Pfaff. I’d ask you to take off your chapter author hat and put on that project director hat. [Are] there any final thoughts you want to share with our listeners about the book or its broader research implications?
Pfaff
Thanks, Brennan.
I don’t know that I have a lot more to add. I think this discussion has gone really well, and I think it’s illuminated a lot of the issues, concerns, and information that those in the policy community need to have in order to move forward with understanding and implementing AI solutions to terrorist problems. But, it does seem that we are entering a new era of conflict, and as some of this discussion has also illuminated, not all the terms of reference, not all the boundaries have been settled.
And so, as we expand from guns and bombs, so to speak, to algorithms, synthetic media, and misinformation campaigns, terrorists are already experimenting with these tools, and the question is: Are we ready? Now, the book itself doesn’t offer any definitive solutions, but it does lay out a framework for thinking about how to apply AI in ways that are both effective and ethical. It tries to show how AI can disrupt terrorist operations across the range of their planning cycle and also highlight the importance of human oversight, legal boundaries, [and] international cooperation. So, if there was one takeaway, I would say it’s AI implementation and application aren’t just a technical issue, it’s a strategic one. We’ve got a lot of room to build more policy around how we’re going to implement it.
And finally, what I think this project also illuminates is that the way forward is working together across institutions, which sends a powerful message that we are not approaching this problem in isolation but [are] trying to build a shared understanding with shared norms that we can move forward with effectively.
Host
Thanks, sir. As we close out, I would challenge the listeners, if you found this interesting, check out the book. Again, it is titled
The Weaponization of Artificial Intelligence: The Next Stage of Terrorism and Warfare. It’s free to download on the NATO website, and a link will be included with the podcast [show notes].
For more Army War College podcasts, check out
Decisive Point,
SSI Live,
CLSC Dialogues, and
A Better Peace.
Additional Links: Thomas W. Spahr, “
Raven Sentry: Employing AI for Indications and Warnings in Afghanistan,” featured in Parameters 4, no. 2 (Summer 2024)
https://press.armywarcollege.edu/parameters/vol54/iss2/9/