Episode Transcript
[00:00:00] Speaker A: Welcome to the show.
[00:00:00] Speaker B: Thank you so much. It's so great to be here.
[00:00:02] Speaker A: So before we get started, can you tell our listeners a little bit about yourself and your connections to technology and the military?
[00:00:09] Speaker B: Yeah, absolutely.
So I've done a lot of different kinds of things. I'm an engineer by training who did a postdoc in physics and dabbles in philosophy. I started out as an active duty Army officer doing biodefense work, developing different kinds of countermeasures against biological threats. From there I got into the intel side of that, looking at proliferation of these types of technologies and interdiction types of intel activities.
When I left active duty, I became a data scientist working at a few different intelligence agencies, and that's where I really started to get interested in AI and some of the issues around it. I'm still a reservist, and I look at the convergence between biological threats and AI. In my civilian job, I'm a faculty member at the National Intelligence University, where I'm the department chair of Cyber Intelligence and Data Science and also the director of the Biological and Computational Intelligence Center.
My research there mostly focuses on AI ethics and safety as they relate to national security problems.
[00:01:16] Speaker C: That's fantastic. Very impressive background. We're excited to have you here. And bonus points for mentioning the name of the podcast in your response. We're going to talk about artificial intelligence and lethal autonomy today, but let's start with AI and command and control. A lot of commercial solutions are what are known as black boxes, meaning you can't understand how they make the decisions they make; you don't know what's going on inside of them. So what are the implications of that for the military?
[00:01:41] Speaker B: I think they're significant. The definition of AI has been fungible over time. We always think of AI as the next big thing, or as something terrifying like a Terminator scenario, and we never think of the things we already have as being AI. We never think of the facial recognition features of your smartphone as AI; it's always this intangible thing in the future. A lot of what we would call weak AI, simpler AI systems, are basically statistical engines that cluster things together or run different types of regression algorithms to make predictions about data, and they're relatively explainable. If you fit a line to a set of data points, you can extract a slope and an intercept, and you can make some physical sense of what those two parameters mean.
But with a lot of contemporary AI systems, like neural networks that rely on large quantities of parameters, perhaps billions, you can't really make that connection. They work similarly, in that they fit a very large function with a lot of parameters, but you can't really make sense of what any one parameter means. That's the origin of this black box issue: AI makes predictions, but you don't always know how it makes them or what its, and I use that term very loosely, thought processes are. That becomes a challenge in military applications because if we're acquiring some sort of military tech, we expect it to perform the same way in the same types of environments. That's what our entire acquisition system is built on; it's predicated on the notion that technology is going to perform reliably and predictably in different operational environments. But you can't necessarily guarantee that with AI because of that explainability issue. A lot of AI philosophers talk about explainability and the emergence of unexpected behaviors that you can't predict from first principles, from some more basic level of behavior, or from the input parameters of the AI.
And that leads to what we call the alignment problem: how do we ensure that AI does what we expect it to do, that it aligns with human expectations and values? If you can't explain how it works, ensuring that alignment is really difficult. That in turn leads to what we call the control problem: if you can't align it with human expectations, it's going to be a lot more difficult to control and to ensure that you can maintain positive human control over these types of systems. And that's critical in military applications, especially if you were to integrate AI into lethal autonomous weapons or other critical, high-stakes applications where, if it does the wrong thing, it could have really devastating results.
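[Editor's note: the explainability contrast in this answer, an interpretable line fit versus an opaque many-parameter model, can be sketched in a few lines of Python. This is an illustration added to the transcript, not from the conversation; the data and layer sizes are made up.]

```python
import numpy as np

# Noisy data from a known linear process: y = 2x + 1 + noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)

# "Weak AI": a least-squares line fit. Its two fitted parameters map
# directly onto physical meaning (rate of change, baseline value).
slope, intercept = np.polyfit(x, y, 1)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")  # close to 2 and 1

# A small fully connected network fitting the same data would already
# carry thousands of parameters; no single one corresponds to "the slope".
layer_sizes = [1, 64, 64, 1]  # input -> two hidden layers -> output
n_params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"tiny neural net parameter count: {n_params}")
```

The point is not that the network fits worse; it is that nothing in its thousands of weights can be read off as a physically meaningful quantity, which is the gap between curve fitting you can explain and curve fitting you cannot.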
[00:04:19] Speaker A: I think you've brought up some really great points, and that leads us right into our next big topic: the link between the reliability and predictability you mentioned and how we're going to trust lethal autonomous weapons in the future.
[00:04:32] Speaker B: I think there's a moral question one has to ask in general about whether or not it's appropriate for a machine to decide whether a human ought to live or die. That's probably the biggest question. A lot of philosophers might say that that sort of decision making doesn't really respect human dignity, the idea that humans are good in and of themselves. As Kant might say, there's something intrinsic about humanity that's important and valuable and needs to be respected in any moral system one might devise. And I'll caveat this: I'm not a combat veteran, so I've never experienced this firsthand. But knowing combat veterans and just being in the military, I have this sense that there's a psychological cost to war. I think we would all agree that something happens, something innate about our humanity is triggered, if you have to take the life of somebody else, even if you don't think about it all the time in the fog of war, in that tactical space. A lot of psychological trauma comes from war because of these moral issues. And in some ways that humanity, that cost of war, can arrest some of the more terrible aspects of war, like blatant brutality, that a machine probably would not experience. A machine isn't able to appreciate the innate humanity of a person. So is it really moral for a machine to make that kind of life or death decision on the battlefield? I don't think it is. I do think there are a lot of appropriate applications of AI in military operations, things that don't carry that sort of catastrophic risk. It can be used to help analysts parse large quantities of data, as long as you have a human in the loop making sure the AI doesn't make bad decisions. Those applications are low risk, so there's no significant consequence if it makes the wrong decision.
Nor is there the same moral weight on the AI making those kinds of decisions. I would also differentiate autonomous weapons from automated weapons. Something autonomous can make decisions on its own. My coffee maker, by contrast, is automated: I can program it to reliably go off at 6am and brew coffee, and it's not going to decide not to do that, save some sort of externality like a power outage. It's not going to decide to brew at seven because it thinks I'm going to sleep in, or brew decaf because it hates me.
So it's reliable in that sense. An autonomous system has a distribution of potential behaviors, and it's not as predictable. In a military context, if you had an automated weapon, say a gun pointed at a kill box, where a commander says anything in this box gets shot at, that's different, because the system itself isn't really making the decision. It just responds to something that's present. It doesn't carry the moral weight of deciding whether or not something ought to be a target. I think those issues are significant, and I would like to see more people thinking about this problem. I write about it a lot in my book Unknowable Minds, because I think there's a significant moral weight to this issue.
[00:07:55] Speaker C: Yeah, absolutely. I think this is probably one of the more significant decisions of the 21st century when it comes to warfare. Even the smaller-scale AI tools that we use now, things like CamoGPT on the DoD side, but even the commercial ChatGPT, are often wrong. And not only are they often wrong, they're often very confident about their mistakes. We've found plenty of times when doing research that we'll ask one to analyze something and give us its sources, and it does, and you click on those sources and they don't exist. Those are the types of mistakes that you really can't have when it's determining life versus death.
So switching gears a little bit and talking about innovation from within: how should we build our defense innovation structure to best leverage these technologies? We've talked about both sides of it, the lethal side but also the more data-driven side.
Should they focus on agency and decision making, or should we be using AI for something else?
[00:08:50] Speaker B: Yeah, I think that's a good question as well. Number one, I think we ought to update our acquisition process in a way that respects the uncertainty around AI systems. I had a very talented student work on a thesis pertaining to that two years ago, and we're in the process of trying to publish some of his work: basically creating a technology readiness level analog that accounts for things like AI explainability and alignment.
Instead of a linear process that makes assumptions about how AI is going to perform under various levels of uncertainty as you move from the lab bench to something deployed, it can move back to earlier stages if the context changes, to account for some of those issues.
So I think that's the first step: we have to have a more adept acquisition process for any AI system we would ever use.
Two, I think using AI in systems that are low risk, where there's little chance of catastrophic outcomes that could propagate, is a good starting point. Not only is it useful in that regard, it also helps educate people within the military on what AI's risks are and how it works. I think AI can be useful, but only if it's used reflectively, like any epistemic tool.
You wouldn't give a calculator to someone who knows nothing about arithmetic, because it wouldn't make any sense to them; they wouldn't be able to use it appropriately. AI works similarly. You have to understand a little bit about its limitations, the fact that it hallucinates, like you brought up, where it comes up with sources that may be completely false, so you have to be able to check those things. Along the same lines, it also requires some domain knowledge to make those assessments. Someone who knows nothing about mathematics isn't going to use AI to write a mathematical paper, because they don't really know whether what the AI is telling them is true or not.
So that domain knowledge is also critical. Starting with those low-hanging-fruit applications, where you can leverage the benefits of AI that increase the speed of action, which is the whole point of integrating AI into military operations, without leading to scenarios that could have catastrophic outcomes, is the first step. My approach would be very pragmatic: don't just throw AI at everything. Sometimes people think AI is a panacea that's going to fix everything, but the speed at which we develop technology far outpaces our ability to reflect on its appropriate use. And with something like AI, which is so uncertain, we really need to take a step back and think about that before we decide to throw it at everything.
[00:11:27] Speaker A: When we think about this in terms of the threat, how do you think our adversaries are tackling these issues? The PLA is actively pursuing an intelligentized force.

So does this give them an advantage? And how does the possibility of a difference in moral weights play into all of this?
[00:11:50] Speaker B: Yeah, that's an excellent question, something I've thought about a lot. Like I said, I'm pragmatic when it comes to AI, but I also try to be realistic about what perverse incentives exist for us to push ahead. You brought up a good point: different cultures may have different moral views on the appropriate use of these kinds of things, based on weighing different strategic objectives. Other countries may not see some of the risks of AI that we see, or we might not see some of the risks that other countries see. It's a spectrum, which of course complicates everything. So in a lot of ways the cat's out of the bag: we're going to see adversaries developing AI and probably integrating it into some of these potentially catastrophic kinds of systems. Ideally, what would happen is some sort of global consensus on the appropriate use of AI in military applications.
Just understanding the magnitude of the potential risk, similar to how a lot of countries view nuclear weapons, where there are a lot of different treaties that prevent nuclear proliferation. The difference is that nuclear technology is a lot easier to interdict than AI. I can't walk down the street here and buy yellowcake uranium, but I could spin up an AWS instance, and if I have enough money and enough compute, I could theoretically train an AI model from anywhere. It's ubiquitous, it's everywhere, it's permeating. So interdiction becomes a challenge in that kind of scenario. There might be some physical infrastructure that could be interdicted or controlled in some way, maybe controlling certain types of GPUs or data centers of a certain capacity, that kind of thing. But like I said, being a realist, and given that perverse incentive of global competition, especially in light of the fact that some countries may not put the same moral weight on the same things we do, it looks to me like we may end up going down that path, and I don't want to see that happen. I don't really know what the best approach is. Like I said, some sort of treaty might be the appropriate thing, but I don't know if we'll come to that. So I guess I'm a little pessimistic about it, but that's what I would hope would happen.
[00:14:03] Speaker C: Yeah. One of the follow-up questions I had, which I think you addressed, was going to be: if our adversaries do move on to fully lethal autonomous weapons, does that change the calculus for us here? When another country has definitive overmatch, do we make different decisions? But concerning the treaties, and you referenced some of the nuclear treaties, do you know of groups that are looking at this right now? Are there AI groups out there discussing this, trying to push forward and either create the conversation or draft policy? Is this happening?
[00:14:38] Speaker B: Absolutely. There are think tanks and labs associated with different universities, and other types of groups, that are definitely thinking about this problem. CSET at Georgetown University is certainly one. There are other groups made up of engineers, philosophers, all kinds of people who have expertise in this area, in policy, and in ethics and morality, who are thinking about these topics.
I just hope policymakers are willing to listen.
[00:15:05] Speaker C: Yeah, that's the hope. It's so hard to keep up in terms of the speed of technology. Policy always lags behind that. But you also need the decision makers to be willing to listen and take that on board.
So let's move on to the future of this technology. How do you see it unfolding? You can talk about AI and autonomy in a general, technology-wide sense, but also for the military.
[00:15:26] Speaker B: There's this holy grail of AI. A lot of tech CEOs talk about AGI, which means artificial general intelligence, but what that actually means is a nebulous thing. The standard definition is that AI that is cognitively equivalent to a human in all areas would be AGI. And from that you would get to what we call ASI, or artificial superintelligence, where AI far exceeds human capabilities. That gets into more science fiction kinds of scenarios that are interesting to think about long term. But I think there are some nearer-term risks that ought to be examined. Bias in data, for instance: if AI is trained on a biased data set, it's going to make biased decisions that reflect that data set. That's something we need to fix before we really start thinking about a lot of these longer-term existential risks with AI. But I could see a lot of the large language models we have now continuing to improve significantly in terms of their capabilities.
You're sort of blurring the lines between these weak AI systems that do one thing well and this more general type of intelligence, because arguably a large language model is in between those things. It can do a lot of different types of things, but it may not be cognitively equivalent to a human, and it may not be embodied in the way a human is. Some philosophers and psychologists argue that embodiment, the ability to interact with your environment in a physical way, is a necessary component of intelligence. Right now these models are kind of oracles in a box that can't really interact in a lot of ways, although they're becoming more agentic: they're able to search the web and do different sets of tasks on the Internet. So they're moving in that direction, and I can see that continuing. I see AI permeating our lives in a lot of other ways, maybe in ways we don't always suspect, and in some ways that are a little bit terrifying.
You can already see AI used to create deepfakes that are more realistic and difficult to discern from something that's true.
So not only will AI make it easier to create false content that can be used for propaganda or various other nefarious actions, but you may also see AI agents acting in cyberspace in that capacity. You might see AI agents that toe a party line and produce disinformation or other malicious content that promotes, say, the PRC's regime, to keep the populace in check. I see that happening, and I see it being more difficult for us to discern truth from falsehood moving forward.
So I think it's going to lead to more of an epistemic crisis before anything else.
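[Editor's note: the data-bias point earlier in this answer can be made concrete with a small, purely illustrative Python sketch. The skewed data set and trivial majority-class "model" are invented for the example; they are not from the conversation.]

```python
import numpy as np

# Synthetic, deliberately skewed training set: 95% of labels are 0.
rng = np.random.default_rng(1)
labels = rng.choice([0, 1], size=1000, p=[0.95, 0.05])

# A trivially "trained" model: it just predicts the majority class it saw.
majority = int(np.bincount(labels).argmax())
predict = lambda n: np.full(n, majority)

# Evaluated on a balanced population, the inherited bias is exposed.
test = np.array([0] * 500 + [1] * 500)
preds = predict(len(test))
accuracy = (preds == test).mean()
minority_recall = (preds[test == 1] == 1).mean()
print(accuracy, minority_recall)  # 0.5 accuracy, 0.0 recall on class 1
```

On its own skewed training distribution this model scores roughly 95 percent accuracy, which is exactly why biased models can look fine until they are evaluated against the population they will actually face.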
[00:18:22] Speaker C: Yeah, scary things to think about.
But generally, when we talk about these types of technologies, there's a lot of positive you can think about; there are great ways to use them. But we should be more concerned about those who are using them for nefarious purposes and how easy that's becoming. In fact, today, August 8th, as we record this, one of our interns is posting an article on the Mad Scientist Laboratory that talks about how AI is really an accelerant for misinformation, disinformation, deepfakes, things like that. So everybody listening, please check that blog post out.
So, Mark, hey, these were great answers. This was the deep part of our conversation, and definitely a lot to think about. You talk about how AI is ubiquitous and permeating. It's in the DoD; we use it to help us write our analysis and things like that, we use it as a tool here. It's everywhere out in the commercial world. I think I even saw, whether it's a marketing gimmick or actually AI, that washing machines have AI in them now. It's kind of everywhere, so it's not going away, and it's something we have to start dealing with in terms of whether we use it, how we use it correctly, and what we use it for. So let's switch now to our rapid fire questions. We pose these to every guest on the show; they give us a little bit of inside information on who it is we're talking to.
Your last answer kind of alluded to this, but what's a trend or technology that keeps you up at night?
[00:19:40] Speaker B: The callous glibness of a lot of the tech CEOs who talk about AI in almost messianic terms, where AI is going to be the savior of mankind, or even worse, who readily admit, oh, AI may kill us all, but we're still going to do it.
And so, I don't know, I feel like if you're developing a technology that you think has any probability of killing everyone on the planet, why would you do that?
So I kind of wish the people who have the power to do this would take a more nuanced and maybe more reflective approach to the appropriate applications of the technologies they're building at such a rapid pace, because, like we talked about earlier, our ability to reflect on the appropriate use of technology does not keep up with that trend. That's the thing that keeps me up at night: that glibness around AI development.
[00:20:38] Speaker C: Second question. What's something about you that most people might not know that you're willing to share on our podcast?
[00:20:43] Speaker B: So I'm an avid gardener and beekeeper, and I also have 40-some quail. I like homesteading; I like disconnecting from technology as much as I can.
[00:20:52] Speaker C: That's excellent. I mean, 40 quail, that seems like a lot. Is that a lot for one person to have?
[00:21:03] Speaker B: Well, they live in sort of a rabbit hutch, a two-level rabbit hutch that we built, so they don't take up a lot of space.
[00:21:03] Speaker C: Do you garden, like sustainably for vegetables and fruit and things or flowers or what?
[00:21:09] Speaker B: Vegetables, fruit, flowers, everything. Just keeping that connection to nature, which I think is important for my humanity, I guess.
[00:21:15] Speaker C: Very cool.
All right, final question. I'm sure you've been thinking about this a lot. What's your favorite movie?
[00:21:22] Speaker B: I would say Interstellar is one of my favorites. They're dealing with an environmental crisis, which is interesting to think about, but I also like the way they portray time spatially near the very end. Not to spoil it for anyone who hasn't seen it, but it's an interesting way to think about some of those physics issues.
[00:21:41] Speaker C: Yeah. And I can understand that as a gardener: the plot point is that blight has taken over all the crops, so we can't grow food anymore. Also a favorite of mine as well, and I'm a big fan of Christopher Nolan. Rachel, have you seen Interstellar?
[00:21:55] Speaker A: I actually have.
[00:21:56] Speaker C: All right.
[00:21:57] Speaker A: Normally I haven't seen the movies our guests name, so when I have seen one, I feel pretty good about myself.
[00:22:03] Speaker C: Yeah, good choice. You've won, Rachel.
Yeah, Mark, great conversation, especially about an important topic like artificial intelligence as it takes over pretty much everything in day-to-day life, but especially for the military, because we will have some hard decisions coming up. And I don't use the word hard flippantly; these are going to be very tough decisions that are life or death in very many instances. So this is something we need to take seriously and start coming to some decisions on how and whether we use these systems. I want to thank you for talking to us, and give you an opportunity now to talk to the audience and let them know where they can find you and your work if they want to read more about what you've been working on.
[00:22:42] Speaker B: Absolutely. Thank you. This has been a lot of fun.
So I have a website: unknowableminds.com is the website for my book.
And you can also find other things that I've written on doctormmailey.com. So yeah, this was really fun. I appreciate it.
[00:22:58] Speaker C: Awesome. So we'll put links to that in the blog post that goes along with this podcast. And one last time, Dr. Mark Bailey, thank you for coming on the show.
[00:23:05] Speaker B: Thank you.