Episode Transcript
[00:00:02] Speaker A: So how can we take these LLMs and add enough scaffolding on top of that so they can be deployed against mission?
[00:00:09] Speaker B: They may say like we need AI, but what are the use cases behind that? What have you tried? What's worked? What hasn't worked?
[00:00:17] Speaker C: This is the Convergence, the Army's Mad Scientist Podcast. I'm Matt Santaspirt, Deputy Director of Mad Scientist, and I'll be joined in just a moment by Rachel Melling.
Mad Scientist is a U.S. Army initiative that continually explores the evolution of warfare, challenges assumptions, and collaborates with academia, industry, and government.
You can follow us on social media at Mad Sci, or subscribe to the blog, the Mad Scientist Laboratory, at madsciblog.tradoc.army.mil. On today's episode we're talking with Murali Kanan, Vice President, Enterprise Technologies, and Coley Lewis, Vice President, Growth Partnerships, both at In-Q-Tel, a not-for-profit global investment firm that accelerates the transition of groundbreaking technologies from the private to the public sector.
We'll talk with them about IQT's mission and history, how it's helping the government foster innovation and get their views on the boom of large language models permeating the federal government. As always, the views expressed in this podcast do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command, or the Training and Doctrine Command.
Let's get started. Gentlemen, welcome to the show.
[00:01:23] Speaker A: Thank you.
[00:01:24] Speaker B: Thank you Matt.
[00:01:24] Speaker C: Before we get started asking you the big questions, can you introduce yourselves to our audience? Who you are, what you do, how you got to where you are today. We'll start with Murali.
[00:01:33] Speaker A: Thanks, Matt. I'm Murali Kanan. I'm the Senior Vice President for AI at In-Q-Tel and the technology lead for the AI practice.
I've been with In-Q-Tel for about six years now. Before that I built big data systems for the U.S. government for a long time, mainly on the civilian side. And before that I was in the service: I was a 68 Whiskey, an Army combat medic, on active duty.
[00:01:54] Speaker C: Awesome. Welcome. And Coley.
[00:01:56] Speaker B: Matt, I've been at IQT now about 10 and a half years. I started here as a program manager, and after that I led a number of our government partnerships, working with those partners to create impact against their mission. I'm currently the Vice President of Growth Partnerships, so our team is dedicated to looking across the national security space and seeing where In-Q-Tel's lines of effort may be able to create impact against mission for those national security partners.
[00:02:22] Speaker D: Awesome. So to start off, we're wondering if you can explain to our audience a little bit more about In-Q-Tel and how it operates.
[00:02:31] Speaker B: So IQT was formed over 26 years ago by the CIA as a 501(c)(3) not-for-profit strategic investor.
The CIA understood at the time that the private sector was pacing far ahead of the government when it came to commercial innovation, and IQT was created to help bridge that gap and bring the latest and greatest technologies over to the government, in this case originally the IC.
Over the last 26 years, IQT has expanded to work across the IC, DoD, and DHS to bring the latest and greatest technologies over to our government partners. That really begins with understanding what their technical priorities are. Our investment and technology staff sit and meet with our government partners on a routine basis: there's an annual event, but we also do it throughout the year, sitting down in their spaces to understand their major technical priorities.
Our tech and investment teams will then go out and survey the global market to understand what's really out there in the commercial, venture-backed startup community that may be able to address the technical challenges our government partners are having. Once we identify those, we put in place what we call an IQT work program, which is an agreement between IQT and one of these companies, and we set out to develop and enhance those capabilities to support the government use case. At the end of the day, the technology will be commercially available and the government partner will have access to it, driving down cost and leveraging the latest and greatest in commercial technology to support mission.
[00:04:06] Speaker A: In the Venn diagram of market activity, technology needs for the government, and startups with credible technologies, we try to land on companies that are doing interesting work but may need some enhancements or fine-tuning before their technology can be deployed against problems the government is facing right now. So we typically look at technologies that are early stage, either in the process of figuring out product-market fit or just past it, and then try to shape those technologies into something the government can deploy for a mission. For example, we were early investors in some of the technologies that power Google Maps right now, the Keyhole technologies. And we were also early investors in AI technologies as the field evolved from machine learning into the current generative AI framework we're all trying to wrap our heads around today.
[00:05:01] Speaker B: I would add that part of the secret sauce IQT brings here, especially Murali's team, one of the technology practice areas at IQT, is that they can go into a SCIF, sit and listen to our government partners, in this case often a classified use case, and then go back out to the commercial market and translate that problem for a commercial company: what the problem is, how their commercial technology might be able to address it, and then go about developing a work program. That keeps it commercially relevant for the startup company while at the same time addressing the need of one of our government partners.
[00:05:35] Speaker C: Yeah, it really is a unique thing. When you think about the history of public-private partnerships, it bridges that gap between the government and what's at the bleeding edge of technology, which I don't think we've ever really had in the past. So it's a super interesting organization, and I'm happy we have you here to learn a little bit more about it. Murali, you brought up AI, and that's what we're going to focus on today, in a certain sense, because we've been inundated lately with a lot of large language models, both on the private side and in the government. We've got Ask Sage, we've got NIPRGPT, we've got CamoGPT.
So talk to us a little bit, from your perspective, about how IQT has helped integrate LLMs into the intelligence community so far and what you've seen in that space.
[00:06:18] Speaker A: From our perspective, LLMs are just a tool, right? Another tool in the quiver to deploy against mission. We're taking a more holistic view on AI, looking at the entire stack, all the way from infrastructure to applications, anywhere AI can be a force multiplier, and definitely LLMs are a huge part of it, no question there. We're also looking at the fact that commercial LLMs are being developed by companies like the OpenAIs and Anthropics of the world, which are deploying billions of dollars to build those models, and the government cannot deploy the same kind of capital to develop equivalent models on its own. So how can we take these LLMs and add enough scaffolding on top of them that they can be deployed against mission? That's definitely one of the focus areas for us.
The other focus area is that it's not just about the AI models; it's about things like data, governance, policies, safety, and security.
So how we bring in technologies across all those surrounding, adjacent areas, so that when the government deploys LLMs for mission use cases it actually gets the best value out of them, is another focus area for us.
We're also looking at unique models. There are commercial, general-purpose LLMs being developed for use cases like document summarization or image recognition, the regular use cases. But how can we develop or find unique models for government-specific use cases? I'll give you an example with the recent focus on border security.
So there's a lot of demand for: hey, we've got hours and hours of drone footage. Can AI help go through that footage and identify a few specific areas for humans to look at, so analysts can spend more time on high-value video rather than spending hours watching footage that may or may not have mission impact? Those are some of the areas we focus on. The idea is never to replace humans or automate the entire process from collection to analysis; it's about augmenting existing capabilities by deploying AI in the right places. That's where we're focused, and where we've been focused for the last 20 or so years. Interestingly, our very first AI investment was back in 2001. But every step of the way we've been focused on bringing in the right technology to augment the work the IC and the DoD are doing, and on accelerating mission impact by bringing the right tools to the users.
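To make that triage pattern concrete, here is a minimal sketch of the shape such a pipeline could take: sample frames from the footage, run a detector, and hand a human analyst only the flagged timestamps. The detect_objects call and the label set are hypothetical placeholders for any object-detection model; this is not a description of any specific IQT portfolio tool.

```python
# Minimal sketch: sample roughly one frame per second from drone footage and
# flag timestamps containing objects of interest for human review.
import cv2  # pip install opencv-python


def detect_objects(frame) -> set[str]:
    """Hypothetical detector; a real system would run an ML model here."""
    return set()


def triage(video_path: str, wanted: set[str] = {"vehicle", "person"}) -> list[float]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    flagged, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % int(fps) == 0:                 # roughly one frame per second
            if wanted & detect_objects(frame):  # anything of interest?
                flagged.append(idx / fps)       # seconds into the footage
        idx += 1
    cap.release()
    return flagged  # the only moments an analyst needs to watch
```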
[00:09:08] Speaker D: Yeah. Murali, I think you really hit on an interesting point: these commercial companies have billions of dollars to throw at these new technologies and new LLMs, and they also abide by different governing rules. So like you said, it's a different world than the one the federal government is working within.
So what are the large differences you've seen between those commercial LLMs and those designed for the DoD and IC?
[00:09:38] Speaker A: I don't think we're at a place right now where companies are developing LLMs specifically for the DoD or IC.
What we are seeing is a trend of: can we take commercial LLMs and fine-tune or shape them for IC and DoD use cases? Because, going back to my earlier point, it simply takes too much capital to develop a new model for, say, DoD or IC use cases, and I don't think the government is set up to do that right now.
Long term that may change, but right now what we're seeing is an effort to take a commercial model and choose what guardrails get deployed on it so it can be used for mission use cases. What I mean by that is: if you take ChatGPT, for example, and ask it questions about bioweapons, it may not give you the answer, because commercially the companies may not want that liability. They may not want regular users asking questions about bioweapons and getting good, detailed answers. For the DoD and IC, that may not be the case; their entire job is to keep an eye on weapons development and the threats around it, so they may actually want these models to augment those capabilities. So how we take commercial LLMs and shape them for these use cases is where our focus is, and that employs a variety of techniques: fine-tuning, model distillation, small language models. We're keeping an eye on all of these techniques and making investments in the space. But it's not just a single focus on, hey, can we get an LLM deployed for this. It's more, going back to my earlier point about a whole-of-stack approach: what is the entire stack needed for this, and how can we make appropriate investments across that stack so we actually get mission impact?
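As one concrete illustration of the fine-tuning technique named here (a sketch, not IQT's actual pipeline), a minimal parameter-efficient LoRA fine-tune of an open-weight model might look like the following. It assumes the Hugging Face transformers, peft, and datasets libraries; the model name, the mission_examples.jsonl data file, and the hyperparameters are all illustrative placeholders.

```python
# Minimal LoRA fine-tuning sketch: adapt an open-weight model to
# domain-specific text without retraining it from scratch.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # placeholder open-weight base model
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base weights; train only small low-rank adapter matrices.
# This is why fine-tuning costs a tiny fraction of pretraining.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"]))

# mission_examples.jsonl is a hypothetical file of {"text": ...} records.
data = load_dataset("json", data_files="mission_examples.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```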
[00:11:32] Speaker C: What about the security side of it? Obviously you can't just take a commercial LLM and bring it into a SCIF, or even into an unclassified environment that has controlled material.
So what are the roadblocks, or the opportunities, for taking something that's already pretty mature on the commercial side and being able to use it on the government side with government information, even up to classified information?
[00:11:54] Speaker A: There are a few solutions for this. The great thing is that this is not unique to the government. Enterprises have similar concerns, though theirs are more about commercial liability. If you go to a financial institution, they don't really want to put their data into a commercial LLM either and have the LLM provider train on it. If you go to a healthcare institution, they have similar concerns about PII and health data. The government's concerns are more about classified data, sure, but there are similar constraints on the commercial side. So the companies building these models are very aware of these constraints and are trying to develop solutions. One example is zero data retention, which is being pioneered by OpenAI and Anthropic and the other model providers: you can set up a commercial, enterprise arrangement with a provider so you can still use the models, but the provider doesn't retain any of the data and doesn't train on it.
That's a good first step, but it may not be enough for the government, because the government has bespoke, unique data sets that you simply never want to leave the premises. So we're looking at approaches where we can deploy the entire model stack within a classified network, including the model weights. The government then has end-to-end visibility: where the model was sourced from, how it's being deployed, how it's being used. And there's a governance framework around it, so we can identify and track the use cases the model is being deployed against, who's using it, what questions they're asking, what answers they're getting, and whether there's some kind of spillage we should monitor, both in how the model answers questions and in how users ask questions of the model.
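One small piece of that governance layer can be sketched in a few lines: a wrapper around an on-premises model endpoint that writes every prompt and response to an append-only audit log. The endpoint URL and payload shape below assume a self-hosted, OpenAI-compatible inference server (vLLM, for example); both are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch: audited access to a locally hosted model. Every query and
# answer is logged so spillage in either direction can be reviewed later.
import json
import time

import requests

AUDIT_LOG = "llm_audit.jsonl"                                 # append-only log
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # on-prem server


def ask_model(user_id: str, prompt: str) -> str:
    resp = requests.post(LOCAL_ENDPOINT, json={
        "model": "local-model",  # whatever model the local server hosts
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    # Record who asked what, and what came back, for after-the-fact review.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "user": user_id,
                            "prompt": prompt, "answer": answer}) + "\n")
    return answer
```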
[00:13:42] Speaker C: Coley, if somebody from the government had a requirement for something, what's the process for them to work with In-Q-Tel? How would they get in touch with you, or how would they come to you and say, hey, we're looking for something that can accomplish X?
How do we get involved in your process?
[00:13:57] Speaker B: Great question, Matt, and I think there are two answers to that. Assuming you're one of our existing government partners, if we go that route first, it really begins with speaking to us, where we come sit in your spaces to understand what that priority is. And mind you, we have an annual event where our government partners submit what we call their problem sets over to IQT, which our teams then review and brief out. We get a chance to learn a little bit more about the challenges, because a partner may say, we need AI. But what are the use cases behind that? What have you tried? What's worked? What hasn't? An analogy, maybe a bad one, would be going to the doctor and describing your symptoms: the more detailed you can be, the better chance our investment and tech teams can go out there and find the right company that might fit that need.
That begins the process with all of our government partners: we sit down to learn the problem, our teams go out and look for different companies, and then we put forward a number of companies we think might be solutions to that problem.
Now, one thing that's really important in this process is to make sure that on the government side you've included the requisite stakeholders. It can't just be the analyst or someone in the field using the technology, because more than likely someone else has to deploy it and someone has to manage it, and that may not be the same person. We may have the analysts ready to go use this thing, but no one there to deploy it and maintain it over time. So we need to make sure all the requisite stakeholders are involved.
The other part of it is: what's going to be the environment for the evaluation? A big part of working with IQT is being pretty quick in responding back to the startup companies. How is the tech working for you right now? What is your plan for evaluation?
Right now there's pressure to move out pretty quickly. The faster we give responses back to these companies, the faster they can get enhanced technology into the hands of an analyst or a warfighter to use against mission. The other element is to lay out the pathway to transition the technology. Once we've completed the evaluation and we're pretty excited about what it can do, what's the pathway to transition? We all know it's never an easy lift, but we need to have that conversation before we close a transaction with the company: what does success look like here, who do we need to have involved, and who's going to own this piece of hardware or software in a year or two? That can adjust over time, but we need a plan now, because it would not be a good move for us to go all the way through investing in this company and developing this capability, only to have someone say at the end of the day, actually, we don't know who's going to own it. And we're like, oh, okay, all this effort with a lot of wasted time. That really is the high-level process we walk through.
We provide documentation to help spur ideas and make sure everyone is thinking about these types of questions.
[00:16:54] Speaker C: And then conversely, on the flip side, how does it work? Where do you find the startups? What kind of scouting do you do? How does that process work?
[00:17:01] Speaker B: There's not just one source; there's not just one location. In some cases it's the ecosystem of being a very active investor, working with a lot of other VCs, being known in the community. We provide a standard of diligence that a lot of places do not, because we're a very tech-heavy strategic investment firm. Outside of Murali's practice, we have four other technology practice areas, so we're very deep technically, and a lot of VCs like to work with us because we can do the technical diligence on these companies. So we often get introduced to companies through some of our partners in the venture capital world. And when we have successful startups, it's no surprise that many of those entrepreneurs go on to start another startup; if they had a positive experience with IQT, they say, hey, we'd like to keep working with you, or through their own networks they introduce us to other startup companies out there.
At least, that has been my experience. So usually it comes from multiple different sources, not just one.
[00:18:06] Speaker A: Just to add to that: I think at last count we co-invest with about 4,000 different investors around the globe. There's over 26 years of history here; we have done over 800 investments. So there's all this history to look at, how we have placed technology with the government and how we have been a good partner to other investors. All of that definitely comes into play. We also, to Coley's point, spend a lot of time doing diligence. Just to give you some numbers, we speak to over 1,400 companies a year and end up investing in about 60 to 70, so there's a lot of filtering that happens. Other investors look at us as a gold standard for technology diligence: when we invest in a company, it's more or less, okay, we have done the diligence, we have a good sense of what the technology is, and we have also established that there's an active market need for it. When we do diligence, we're not just looking at whether the government is going to use this technology; we're also looking at whether it's commercially viable. One of our main concerns, to Coley's point earlier, is that you go through a long process to transfer a technology to the government, and at the end of it, if the technology is not commercially viable and the company goes out of business, there was no sense in having gone through all of it. So we spend a lot of time making sure that, yes, this company is commercially viable, yes, this technology is commercially viable, and there's an actual commercial need for it beyond just what the government wants to do. Honestly, our focus is not getting the government to be the main backer of a startup company; it's more that the government is going to be another customer, one that may have some unique needs we position the company for. That's the goal. So again, with the companies we work with, the idea is not that we come in, give you a huge check, and make sure you focus only on the government. It's more: here is an idea, this is a potential market for you to tap into, as long as you make these few enhancements to shape your product.
[00:20:18] Speaker B: One of the things we run into is that there are a lot of companies right now engaged with the government. And one thing we highlight to a lot of our government partners is that many of the companies we work with, invest in, and develop a work program with are companies that initially didn't recognize their technology was a viable product for the government.
So we explain to them: hey, there is a government use case here for your technology that maybe you didn't think about; there's a market. The other type is companies that are familiar with the government but hesitant to engage because of the challenges that can exist: spending a lot of time and effort and, in a year or two, being in the same spot they were before. So how can we help them identify users for their technology who will evaluate it and provide fantastic feedback to help them further develop their product? Those are often the two types of companies we interact with: those unfamiliar with the government as a market, and those familiar but a little hesitant because they're not quite sure how to navigate it effectively. We can come in and add value there.
[00:21:21] Speaker C: Yeah, I think that last part is really important, because especially for smaller companies, it's kind of a labyrinth trying to work with the government and figure out what the processes are and what your technology can be used for. So I think that's super important.
Let's jump back to talking about tech again.
So, and you can be as specific or as general as you want with these answers: what have you seen that's emerging in the commercial world right now, but maybe isn't yet mainstream or available to the public, that you think may help the government in the next few years?
[00:21:51] Speaker A: Something really unique about AI is that the pace of progress has been mind-boggling over the last few years; the timeline from fundamental research to productization has been compressed. It's no longer the case that someone does basic research and it takes several years before you find out about it in a model or product.
Right now what we're seeing is that papers come out, research happens, and a couple of months later you're actually seeing it implemented in an OpenAI model or an Anthropic model or something like that. So that's where we are. Taking this a little further, one thing that has emerged in the last year or so is this whole concept of reasoning models: test-time compute, inference compute, however you want to call it. The idea is that reinforcement learning, especially when it's paired with some sort of verifiable reward, is going to help us get past some of the areas where AI was struggling to make significant progress. One area that has been very intentional on the part of some of these model providers is software engineering. We're seeing significant progress in that space, and we fully expect software engineering to be a solved problem in the next couple of years, given the pace at which things have been progressing. That's because software engineering is one area where the reward signals are very easy: either the code compiles or it doesn't, either it works or it doesn't. It's very easy to say, this is a good answer, this is a bad answer. So the models are making rapid progress in that space, and you're seeing a lot of commercial activity happening around it as well. Some examples are GitHub Copilot, which has become more or less the norm for software engineering at most enterprises, and tools like Windsurf and Cursor, which are making a lot of money. Cursor, for example, started out maybe two years ago and is now at about 500 million dollars in revenue, built in a matter of about 12 months, because people are actually seeing value from these tools. So that's where we think progress is going to happen: reinforcement learning, especially if the model providers can design the right reward signals and reward frameworks for these tasks, is where we think progress will continue.
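The verifiable-reward idea Murali describes is simple enough to sketch: for code generation, the reward function just runs the candidate program's tests and returns 1.0 on success and 0.0 on anything else. This is an illustrative toy assuming Python and pytest, not any particular lab's training setup.

```python
# Minimal sketch of a verifiable reward for code generation:
# the signal is deliberately binary, the tests pass or they don't.
import os
import subprocess
import tempfile


def code_reward(candidate_source: str, test_source: str) -> float:
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "solution.py"), "w") as f:
            f.write(candidate_source)
        with open(os.path.join(tmp, "test_solution.py"), "w") as f:
            f.write(test_source)
        try:
            # Any failure, crash, or hang earns zero reward.
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_solution.py"],
                cwd=tmp, capture_output=True, timeout=30)
        except subprocess.TimeoutExpired:
            return 0.0
        return 1.0 if result.returncode == 0 else 0.0
```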
[00:24:22] Speaker D: I want to expand on that a little bit, because you just mentioned this fast-paced environment where things take months to go from research to a capability entering the space. Jumping off what you just said: where do you see this technology, or the integration of this technology into the military, IC, and government, heading in the next 10 years? A little bit farther out.
[00:24:48] Speaker A: Ten years is somewhat of a hard question to answer, simply because the progress has been astounding. When we talk to our portfolio companies, for example, and ask them, hey, where do you think AI is heading in the next two years, they simply don't know how to answer, because they're really measuring progress in months; that's how fast things are moving. But there are some general things to consider. Given what I said earlier about reward signals and verifiable tasks, one general trend is that white-collar work, meaning the regular things you would expect an office worker to do, everything from tax preparation to document processing to, say, financial investing, most of those tasks you can expect to be automated in the next several years. Definitely within the next five years, maybe quicker than that. That has impact for the DoD and IC: if you can take a system or a model that's really good at white-collar work and apply the same capabilities to DoD and IC mission use cases, you can suddenly have a really compressed timeline from data collection to insight. Again, I'm talking about augmenting existing analyst capabilities, not entirely replacing them. But how do we handle those compressed timelines? How do we introduce the right guardrails? How do we make sure humans are in the loop at the right points so they can make the right decisions as we keep progressing along these decision loops? That's going to be really critical. So it's less about AI progress and more about the policies that can translate AI progress into effective mission implementation; that's probably where the focus needs to be.
[00:26:40] Speaker C: Yeah, that's a great point.
Policy always lags behind technology, especially now that it's moving this quickly. So those were the big questions we had for you guys, but you're not off the hook yet. We have a few more for you when we get to our rapid-fire questions. These are the same questions we ask every guest, to help our listeners learn a little bit about who you are.
So I'll ask each question and give you each an opportunity to respond, then we'll go to the next one. The first one: what technology or trend keeps you up at night? What's something that you're kind of fearful of?
[00:27:10] Speaker B: I'd say the one that seems to be coming, especially with the conflicts taking place in Ukraine and Israel, is the use of drone technology: one, how quickly it's being used in conflict, and two, while we've seen a bunch of counter-UAS technologies out there, are they really viable? Current laws, at least in this country, prevent pretty much any type of kinetic action against a drone. So those things keep me up: that, and swarming technology. How are we actually going to counter those effectively? It's really complicated.
For me personally, that's something I think about regularly, along with what's actually out there and what those systems cost. So yeah, there's a whole gamut of concerns I think about now regarding drones.
[00:28:03] Speaker A: My concern is similar to Coley's. If you think about how fast we've been making progress in AI, one amazing positive effect is that it has more or less democratized technology access. Everyone can code now, right? You don't have to be a PhD from MIT to write software; AI can help you write it, and there can be citizen data scientists all over the place. But there are also negative consequences. Because technology is getting democratized, you can have lone wolves or small groups mounting really sophisticated operations against us. You can have a terrorist group mounting a really sophisticated deepfake operation against us, and if you don't have the right tools to identify what is what, it's going to be very hard to distinguish fact from fiction.
And you don't really need, for example, an NSA-sized budget to mount a cyber operation; AI can help you a lot if you ask the right questions and prompt it the right way. So there are definitely negative consequences to making technology this capable available to pretty much everyone without the right kind of guardrails, which is what's happening right now, because the model providers are incentivized to keep making progress and to keep making these capabilities available to their commercial users as fast as possible. So unless we, and when I say we, I'm talking about the DoD and the IC, develop our own systems to counter these threats and identify and mitigate some of these risks, we're going to be in a place where it's very hard to catch up, because there's a compounding nature to how AI makes progress. If we lag initially, it's going to be very hard to catch up once things become more mainstream.
[00:29:51] Speaker C: Yeah, two great lessons in dual use technologies.
So the second question is kind of fun, and keep in mind we do have probably a few thousand listeners: what's something about you that most people might not know that you're willing to share with us?
[00:30:05] Speaker B: You know what? I thought about that quite a bit.
And one thing, and this is again a little bit off the technology side, is that I have three young kids at home.
So I've found myself over the last several years becoming quite the aficionado, I want to say, when it comes to cleaning products.
I spend quite a bit of time looking at things to get stains out of the sofa and to clean the floors, so I've become quite a heavy investor in Bona, which is kind of a type of Swiffer: it cleans hardwood floors, doesn't leave a horrible smell, and gets most of the grime off the floor. My kids manage to make quite a mess literally every day, every second. So I find myself in the evenings taking out this set of cleaning products, and I'll often look at myself in the mirror and think, what is really happening here? But something most people don't know is that I spend a fair amount of time doing a lot of cleaning in the evenings.
[00:31:09] Speaker D: That is a great skill to have.
[00:31:13] Speaker C: Congratulations are in order, too. You just had another one, correct?
[00:31:15] Speaker B: Correct, yes. Third one.
He was born, ironically, exactly two months ago today. So he's two months old, and probably the least of my cleaning challenges. He has two older brothers who make it their daily mission to see how destructive they can be now that they're out of school.
They now have carte blanche throughout the day to find new ways, and I test the limits of some of these products to see how effective they are at cleaning up. Anything that can make me quicker and more efficient at that, I'd love for technology to kick in there, maybe to detect when they're about to strike so I can move things around the house to prevent more damage. But yeah, it's something that's developed over the years. Now that I say it, I'm curious where Murali is going to go with this. So, Murali.
[00:32:12] Speaker C: It's probably not going to be along the same lines, although you never know. I have two kids myself, so I commiserate with you. But let's see. Murali, go ahead.
[00:32:18] Speaker A: Yeah, I'm going to stick along the lines of what Coley said. I have two girls, nine and seven.
The elder one is turning 10 in four weeks. I used to hate shopping, but I've become more or less very proficient at being patient while my girls shop.
I've spent many Sunday afternoons in Claire's; that's more or less become my personal hill, where I stand outside waiting for my kids to pick out stuff. Patience is something I've learned to develop, especially over the last two or so years as my kids have started to develop their own tastes.
[00:32:59] Speaker B: I have one question for Murali, though, and that is: do you make recommendations?
[00:33:04] Speaker A: I tried that. It just puts me in the doghouse so fast. So I've basically learned to say yes. That's the best skill to develop as, basically, a father of two girls: I get more rewards by saying yes to pretty much anything they ask for, and then letting their mom be the disciplinarian.
[00:33:22] Speaker C: Good move. See? Smart.
All right. The final question is usually the hardest question. What's your favorite movie?
[00:33:29] Speaker B: That is a hard question.
So let me give you a little bit of my thought process.
I love westerns, so I was thinking of going there; I love Pale Rider and Open Range, my favorite westerns. But it really came down to one of my favorite books, the Count of Monte Cristo, the version with Jim Caviezel, Richard Harris, and Guy Pearce. I'm a big fan of that movie and how they portrayed the book. And it has a very good message, especially for some of the work that we're doing: you've got to continue to persevere. So I really like the message of the movie and the book, and I really enjoyed that movie. Ultimately, I don't think anybody who knows me would be surprised by the selection, but I thought about it a bunch here as we were talking.
[00:34:19] Speaker C: I love that movie. I think it was 2002, somewhere around that time frame. Yeah, it was a great one. All right, Murali, you're up.
[00:34:26] Speaker A: Well, shockingly, I'm a huge sci-fi nerd, you wouldn't imagine. The Matrix is probably way up there; just the first one, not the second or the third.
And my kids and I are starting to watch Star Wars. I tried explaining to them that between Star Wars and Star Trek, one is about battle and the other is about exploration, but my kids are apparently huge Star Wars fans, which is surprising. So we're on a journey of discovery through the George Lucas universe right now, catching up on all the Star Wars movies, which, with Disney, they seem to be making a new series of every year. So I think we'll have enough material to last us the next couple of decades.
[00:35:07] Speaker C: Yeah, it used to be that if your kids wanted to watch Star Wars, you'd say, okay, let's watch it over a weekend. Now the library is so big I couldn't even name all the titles anymore. So you've got your work cut out for you.
[00:35:18] Speaker A: Yeah, I was very proud the other day when my kid quoted Star Wars at me. I was trying to get her to do something, and she tried to rebuff me by quoting something from Star Wars, and I was like, okay, I'm just going to give you this one, because that was well played and I'm very proud of you right now.
[00:35:33] Speaker C: Fantastic. Well, I'll accept all of your answers for the rapid fire questions. Very good job.
We want to thank you guys for coming on and talking to us about integrating technology into the DoD, and also about In-Q-Tel, how it works, and the great work it's been doing for decades now.
[00:35:49] Speaker A: The one thing I would offer is: if you're in the DoD, you're part of one of our existing partnerships, and you want to talk AI, or you have questions and want to reach out, we're always happy to engage with government partners. We're always happy to come down to your spaces so we can discuss what use cases you have, how we can help, and what's happening in the commercial marketplace. We do this all the time.
So happy to engage. You can always reach out through Coley or any of our partnerships team, and we're happy to come down, have a chat, and see if we can work together on something.
[00:36:25] Speaker C: All right, gentlemen, thank you so much. It's been an awesome conversation. We look forward to hearing more about In-Q-Tel in the future and to seeing some of the great technologies you're looking at and investing in filter into the DoD. Once again, thank you for coming on and talking with us.
[00:36:38] Speaker A: Thank you.
[00:36:38] Speaker B: Yeah, thank you for having us. We look forward to continuing to work with our DoD partners and hopefully creating impact and adding value against mission.
[00:36:49] Speaker C: Thanks for listening to the Convergence. I'd like to thank our guests, Murali Kanan and Coley Lewis. You can follow us on social media at Mad Sci, and don't forget to subscribe to the blog, the Mad Scientist Laboratory, at madsciblog.tradoc.army.mil.
Finally, if you enjoyed this podcast, please consider giving us a rating or review on Apple, Spotify or wherever you accessed it. This feedback helps improve future episodes of the Convergence and allows us to reach a bigger and broader audience.