I'm Alex Dibble from The Times. There's so much happening in the world right now. How do you stay on top of it? On The World in 10 podcast, me or my team sit down and go through it all in just 10 minutes. The Times correspondents join us with eyewitness accounts and interviews. "I was actually on the orthopaedic ward; almost all the injuries are shell fire." "A woman who was inside the police van." They also give us their unique analysis of world events. Give The World in 10 a listen; it is 10 minutes to stay on top of the world.
One of the latest ones was: can a machine, and I think they were F-16s, but I don't know, maybe they were F-22s, pilot a plane better than a human Top Gun? And they ran this. There were, I think, five or six teams that entered. The winning team went up against the human, so it was a human Top Gun in a simulator against a machine. In a dogfight. In a dogfight, one on one. Right, one on one. The machine beat the human five-zero, not even close. One might think about it as autopilot on steroids. And it's got so good now that it's better than the best human. And the next step on that... So Maverick and Goose are screwed. Yeah, the goose is cooked, right?
Hello and welcome to Danny in the Valley, our weekly dispatch from behind the scenes and inside the minds of the top people in tech. This week we're back with more AI stuff after a little detour into human composting last week. Hope you guys enjoyed that one. But anyhow, really, is there anything else going on in tech aside from AI? You know what's crazy: between last week's recording and this week's episode we had a bank run, a bailout. We've had the White House reportedly telling ByteDance, the owner of TikTok, that they have to sell the app or get banned in America. And then of course there's also the release of GPT-4, which is a dramatically more powerful version of ChatGPT. Which all works out great, because this week's guest is on to talk all things AI, and specifically what it means for defense and how the technology could really change war and conflict forever, and really in unimaginable ways.
So this week's guest is Sean Gourley. He is the chief executive and founder of Primer AI, a security company that develops AI tools for the intelligence and military establishment. Gourley is a serial entrepreneur. He's been at the forefront of big data and AI for years. He has a really unique vantage point, especially as these tools like GPT-4, which can pass virtually every standardized test, write complex software code, turn some scribbled handwritten instructions into a website, are really setting the world alight. But for defense, what they show, this rapid development of AI, means something very different, especially vis-a-vis the West's competition for supremacy with China.
So amid all the hullabaloo, Gourley can just give a very different perspective on how we should be thinking about these momentous times in which we live, these incredibly powerful technologies which feel like they're just kind of coming out of nowhere. And just before we get started, a very important programming note: we had this conversation about 10 days ago, so before the news emerged that the White House had reportedly ordered the sale or ban of TikTok in America. So when we talk about that, please bear that in mind. It doesn't make the conversation any less pertinent, but just for timing perspective, keep that in mind.
ChatGPT kind of comes out, what, November? Yeah, I think it was in November, right? Yeah. And that has kind of set the world alight in a lot of different ways. And this past month I've been kind of going around, talking to various different people in some aspect of AI. I just had a really interesting conversation with Stuart Russell at UC Berkeley, who's kind of written a textbook. The most pirated textbook in artificial intelligence, he will tell you. Exactly. He's, as I'm sure you know, really worried about autonomous weapons, and more broadly this idea of how do we build AI that we can still control? In other words, that doesn't kind of supersede us and figure out that it wants to do things that are not beneficial to humans. So that was a really interesting conversation, but along my travels I came across you guys, and I'd love to just get a sense.
So listeners can kind of understand what Primer is, how long you've been around, and kind of how you got going. And then we can talk about all the interesting slash scary things that are happening in the world right now.
Yeah, look, absolutely. So I started Primer in 2015. Prior to that, I'd come out of a computational physics background and spent a lot of time modeling insurgencies in places like Iraq and Afghanistan, building computational models to try and explain how insurgents would organize themselves.
And effectively why they were so hard to defeat. And so that was... you could do that with physics?
Well, so we were sort of the ugly child of physics until we published the work on the cover of Nature. And then we were their favorite. When you say we, who was we? Yeah, so myself, my supervisor, and there were a couple of other people on that. There were, I think, about five authors on that paper.
And I think it was the first time Nature had ever published an analysis of conflict, or a quantitative analysis of how insurgents work. And, you know, the first sort of reaction to that stuff was like, go and put that in the political science space.
Yeah, yeah, yeah. When was this? Were you...? It was published in 2008. Okay. So it was kind of right at the cusp of, you know, the big data revolution. Were you actually...
a new student? Yeah, I was a PhD student. Yeah. Yeah. Yeah. Got you.
So I had a lot of interest in using physics and systems thinking to try and explain the world that we're living in. Right. And as part of that, this sort of branched into a couple of places. One was we were using unclassified information around attacks: where they happened, when they happened, and so forth. So we got into really primitive NLP stuff back in 2008.
NLP being natural language processing. Yeah. Which has just come so far, like, to where we are today. I couldn't have imagined how far it would come.
And the second bit was agent-based modeling and using reinforcement learning to start to model insurgent dynamics, to try and match the information that we were picking up.
Really trying to get a handle on, you know, how insurgents work and why they're successful. That was my PhD work. And so you created that, and that kind of put you on the map, so to speak.
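For readers who want a feel for what that kind of agent-based work looks like, here is a minimal, purely illustrative sketch of a coalescence-and-fragmentation model of armed groups. It is not the model from the Nature paper, just a toy in the same spirit: groups repeatedly merge or shatter, and a heavy-tailed distribution of group sizes emerges.

```python
import random
from collections import Counter

def simulate_groups(n_agents=5_000, steps=50_000, p_frag=0.01, seed=0):
    """Toy coalescence-fragmentation model: armed groups either merge with
    another group or shatter back into individuals at each step."""
    rng = random.Random(seed)
    groups = [1] * n_agents  # everyone starts alone
    for _ in range(steps):
        i = rng.randrange(len(groups))
        if rng.random() < p_frag and groups[i] > 1:
            size = groups.pop(i)
            groups.extend([1] * size)          # the group shatters
        elif len(groups) > 1:
            j = rng.randrange(len(groups))
            if j != i:
                a, b = max(i, j), min(i, j)
                merged = groups.pop(a) + groups.pop(b)
                groups.append(merged)          # two groups coalesce
    return Counter(groups)

if __name__ == "__main__":
    size_counts = simulate_groups()
    for size, count in sorted(size_counts.items())[:10]:
        print(f"group size {size}: {count} groups")
```

Running it produces many small groups and a few very large ones, the kind of heavy-tailed structure that quantitative insurgency models try to explain.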
And that was back in 2008. So how did you end up starting Primer, and kind of what was the path there?
I mean, so after that, it was interesting. You kind of get enmeshed in the world of counterinsurgency, and I did that probably for a couple of years. As an advisor, I spent, you know, a couple of months in northern Iraq with the deputy prime minister of Iraq.
And that was where? Erbil. I've been to Erbil. You've been to Erbil? Yeah. A few times. So I used to cover oil. Yeah. So I did a bunch of stories on kind of Iraqi Kurdistan and their attempts to create an oil industry and all of that stuff. And yeah, I mean, yeah, Erbil.
I think I was there... I remember watching the election results come in, the Obama results, with the deputy prime minister of Iraq, and just being like, this is a strange thing. But yes, it took me to some interesting places.
It took me to presentations at the UN. It took me to briefing the young officers at West Point who were about to deploy. And it was kind of like everyone at the time was struggling to say, well, how do we really understand this opponent? And, you know, if you look at it, you had an insurgency taking on and defeating the strongest military in the world, perhaps the strongest military the world's ever seen. And so we certainly didn't have all the answers, but, you know, we had some. And so that took me around all of that.
But I ultimately came back and I was like, if I'm going to do anything here, it can't just be writing scientific papers. The theories need to be put into action with tools. And, you know, it was 2009 and I was like, I don't know everything, but I know one thing, which is that if you're going to build anything in the world, you build it in Silicon Valley.
And so I came out here. I think, you know, I maybe had my two suitcases and five thousand dollars in my pocket, and slept on friends' couches. And I was just like, I just need to be here, and I've got a sense that I need to build. And so that sort of started that journey off, and I started my first company, a company called Quid. We were visualizing high-dimensional data structures, and it was super interesting, and I learned all sorts of stuff and made all sorts of mistakes.
And then we sold that company, and it was nice, but 2015 came around. And I'd seen my friends building these big computer gaming rigs and training these large image recognition models. And the results were just incredible. And I was like, this changes everything.
What made you think that? Well, the benchmarks that we were seeing on image recognition problems were jumping like 30 points. So image recognition was, you know, if you go back to sort of 2010, just when I was finishing my PhD work, my friends in the computer science department were like, this is impossible. This is something that machines can't do; it's what humans do. And then you see machines doing it.
And I remember just shaking my head and, like, putting images in front of this computer, and it was getting them. And I was like, right, I need to learn about this. So this is deep learning. And I'm like, all right, this changes everything. And so I thought about that, and I was like, two things. One was, it's probably going to have a big impact on language. Not that images aren't important, but most of us work with language. And the second bit was, I want to get back into the defense and intelligence problem space.
I'd been out of that for about five or six years, and it was like, it's time to get back into the defense space. And so what was Quid? So Quid was focused on data visualization. So you take these data sets, which could be anything from sort of comments on products, through to advertising copy, through to anything, like, big data. You'd have all this text-based information.
And you're like, well, what do I do with 5,000 comments about this product? Right. And what we said was, actually, you can visualize that, navigate it, and start to see all the different clusters of things that emerge in a visual way, so you can start to get a handle on the narratives that are unfolding. And so I was really kind of saying, look, natural language processing at that time was not as advanced as it needed to be, but we can bridge the gap between where the technology is and where the value is by visualizing things.
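Quid's actual pipeline isn't public, but as a rough illustration of the idea of turning thousands of free-text comments into navigable clusters, here is a minimal sketch using off-the-shelf TF-IDF features and k-means. The comments and cluster count are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "battery dies after two hours",
    "battery life is terrible",
    "love the camera quality",
    "camera photos look amazing",
    "shipping took three weeks",
    "delivery was very slow",
]

# Turn free text into vectors, then cluster so narrative themes pop out.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)
```

Each cluster is one "narrative" you might then lay out visually; a modern pipeline would likely swap TF-IDF for neural embeddings.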
Right. Right. So it was using some really, really interesting visualization techniques to allow people to interact with what were, at the time, cutting-edge natural language models, but which today would be seen as very primitive. Yeah. Right. So you started Primer in 2015.
Yeah. Yeah. What was the idea? So the idea was basically to come in and say, look, there are two macro trends. One is that defense is going to be a bigger and bigger issue. And the second is that we should expect rapid performance improvement in language understanding, because of what we were seeing from deep learning and images. So those are the two things: you know, we don't know exactly how that's all going to map out, but that's exactly where you want to be.
Right. And so what that meant was, firstly, assembling a very strong technical team to go after those problems. And then secondly, it was looking at it and saying, right, we think the first place this is going to land is in the intelligence community. Right. They deal with text. They're getting more and more text. This is going to be the place we land. So they became the first customers.
And we took financing really early on from In-Q-Tel, which is the investment arm of... they say the intelligence community, but yes, you can infer kind of who might be behind that. So we took money from In-Q-Tel and got working on automating a number of processes that intelligence analysts do on a regular basis. And one of those is building knowledge graphs.
Right. So that's quite a manual process. This person is the same as that person, who goes by this alias. They traveled to this location. They met this person. They talked about these things. So you've got a kind of a graph, right? And you can fill it out manually, but it's very time-consuming. Now, without going into the classified side...
So we know from things like Wikipedia what the recall is: of everyone who should have a Wikipedia page, how many actually do? It is, depending on the metric, about a third, right? And it skews, obviously, along certain traditional biases.
So even with something like Wikipedia, where you have a huge crowdsourced dynamic, we do a terrible job of keeping that information up to date and accurate. And then it's not just do they have a page, but how long from when information changes does it take to get updated? And the latency on that stuff can be about six months. Yeah. Right.
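As a concrete reading of that "about a third" figure: recall here just means the share of entities that should be in the knowledge base that actually are. A short illustration, with made-up counts:

```python
# Hypothetical counts, purely to show the metric.
should_have_page = 3_000_000     # people notable enough to warrant an entry
actually_have_page = 1_000_000   # people who actually have one

recall = actually_have_page / should_have_page
print(f"recall = {recall:.2f}")  # roughly a third, as cited above
```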
So that's one challenge, right: building and maintaining knowledge graphs. And that's one that intelligence has in spades. What does that mean? It means, like, one thing they talk about is you can have a request that says, look, we've just had a pro-US demonstration in Iran. What should we expect the actions of the Iranian Revolutionary Guard to be in the next month?
And you say, well, I don't know, I could give you something. Or you could say, well, what have they done every time there was a pro-US demonstration in the past? And we say, all right.
Now, traditionally, to solve that you'd have, like, a dozen intelligence analysts working for three weeks to put down all that information, write it up, and then do an analysis. With a system like ours, you just go through and say, give me all the events that followed immediately after a pro-US demonstration within 50 miles in Iran, and then categorize them according to the ontology that I care about. And so now you've got that data in the space of minutes, with one person at the wheel.
You know, conversely, if you had to find all the locations that someone has been to or traveled to, again, I can go through and read every document and figure out where they traveled, or I can just say: person A has traveled to these locations.
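To make the knowledge-graph idea concrete, here is a tiny hand-built example (not Primer's software, and all entities are invented): entities are nodes, typed relations such as ALIAS_OF, TRAVELED_TO and MET are edges, and the "where has person A traveled?" question becomes a simple edge query instead of rereading every document.

```python
import networkx as nx

# A toy knowledge graph with typed edges between invented entities.
G = nx.MultiDiGraph()
G.add_edge("Person A", "Person B", relation="ALIAS_OF")
G.add_edge("Person A", "Location X", relation="TRAVELED_TO")
G.add_edge("Person A", "Location Y", relation="TRAVELED_TO")
G.add_edge("Person A", "Person C", relation="MET")

def traveled_to(graph, person):
    """Answer 'where has this person traveled?' with one edge scan."""
    return [dst for _, dst, data in graph.out_edges(person, data=True)
            if data["relation"] == "TRAVELED_TO"]

print(traveled_to(G, "Person A"))  # ['Location X', 'Location Y']
```

In a real system the edges would be extracted from documents by language models and resolved against aliases, which is the hard part; the graph query itself is the easy part.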
So what you've built, kind of like an LLM or like a large language model, but for intelligence.
Yes. So what we've seen since 2015 really has just been a continual but pretty rapid increase in the capabilities of language models. And what that means, concretely, for us and our customers is, yes, we deploy large language models into intelligence environments to allow them to interact with the data in a way that, firstly, reduces the time.
But what we really say is it allows you to be more curious. For an intelligence analyst, if I tell you, you know, hey, I've got a hunch that there might be a pattern in the Iranian Revolutionary Guard's response, give me three weeks to go and think about it, that's very different from: I've got a hunch, it will take me 30 seconds and I'll see if that hunch is correct. And as an intelligence analyst, it's not that you don't have the data available.
It's that the cost of answering the questions is so high that you don't necessarily connect the dots.
Given where we are, because it does kind of go back to where we started with ChatGPT. Stuart Russell referred to it as, like, a wake-up call for everybody: for the public, but also for governments, who are like, oh, this stuff has crossed some threshold of usefulness, and now it's going to change a lot of stuff pretty quickly.
From where you sit, does this moment or AI broadly, I know AI is kind of this squishy term, does it change war?
I think it's good to split AI into a few different buckets, right? The first is AI for autonomy, and that's kind of a combination of different things, but there's AI for autonomy. There's generative AI, which is AI to create things. And then there's what I would characterize as AI for, effectively, search. I think when we think about AI, it cuts across those places.
Search can be text, but it can also be images, right? And so you're effectively saying to the AI: look at the world, text, images, audio, and structure it in a way that says, when I see a car, it's a car; when you refer to this person, it's that person; et cetera.
So now, if we step back and look at how artificial intelligence is going to impact war, I think ChatGPT is probably less relevant than maybe some of the other pieces.
I would say first and foremost, autonomy: having systems that can autonomously navigate, move, make decisions. Like, that's very, very clear. And we know even from the early simulations, I think the writing's very much on the wall, that humans will not fly single planes better than machines.
You referenced this when last we spoke, but most people don't know about this, so could you explain what you're talking about?
So we've all been watching Top Gun, right? Maverick, he's back, and, like, that is pure nostalgia, right? The reality here is, DARPA has been running a series of tests; they did their first head-to-head competition. DARPA is the Pentagon's kind of tech investment and research arm.
Yeah, for making big defense bets. Like, they ran the first self-driving car test, I think back in maybe 2006 or 2007. And so the latest one they did, well, one of the latest ones, was: can a machine, and I think they were F-16s, but I don't know, maybe they were F-22s...
...pilot a plane better than a human Top Gun? And they ran this. There were, I think, five or six teams that entered, and the winning team went up against the human.
So it was a human Top Gun in a simulator against a machine. In a dogfight. In a dogfight, one on one, right? One on one. The machine beat the human 5-0, not even close. And the human came back and said, or some of the commentators maybe said, it wasn't fair because the machine took risks that the human wouldn't take. And I'm like, yeah, exactly, right?
They said, you know, the machine was able to shoot before the human could react, so it wasn't really a fair fight. And I'm like, well, exactly.
Right? So it's kind of like you have tanks and horses and you're like, well, it's not really fair. Obviously the tank is going to be faster because it's not a horse, and you don't have to feed a tank. Right. And so look, you know, you don't need to put a human in these machines. They can take risks. You don't have body counts associated with them. They've got higher Gs that they can pull, because they don't need to worry about a human blacking out. Right.
So there are all these advantages, and it's pretty clear the machine has won. Now they're moving to live, in-air tests, so moving from the simulations to in the air. But simulations are that good these days; I don't think we should expect the results to be very different. How are they going to do that? I mean, I guess it's just autopilot on steroids. It's autopilot on steroids, and artificial intelligence.
I mean, one way to think about it is autopilot on steroids. And it's got so good now that it's better than the best human. And the next step on that... So Maverick and Goose are screwed. Yeah, the goose is cooked, right? Yeah. That's right. You know, so that's gone. But then it's not going to stop at one plane. It's going to be swarms of planes. Yeah. Right. Or swarms of drones. Or swarms of drones.
And all of this is going to move because, of course, you can manufacture these things cheaper; they're more disposable. So you're going to have swarm-on-swarm conflicts. And this also brings back some of the stuff that we studied with our colleagues in physics on swarm dynamics, you know, how do you model biological swarms and all the rest of it. So that, I think, is going to be really interesting on the autonomy side.
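The swarm dynamics he mentions have a well-known minimal form in the "boids" model: each agent steers by cohesion, alignment and separation relative to its neighbors. The sketch below is a generic toy of that idea, not anything drawn from a defense system.

```python
import numpy as np

def step_swarm(pos, vel, dt=0.1, radius=1.0):
    """One boids-style update: cohesion, alignment and separation per agent."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (dist < radius) & (dist > 0)
        if nbr.any():
            cohesion = pos[nbr].mean(axis=0) - pos[i]       # move toward the group
            alignment = vel[nbr].mean(axis=0) - vel[i]      # match neighbors' heading
            separation = (pos[i] - pos[nbr]).sum(axis=0)    # avoid crowding
            new_vel[i] += 0.05 * cohesion + 0.05 * alignment + 0.1 * separation
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(50, 2))
vel = rng.normal(0, 0.1, size=(50, 2))
for _ in range(100):
    pos, vel = step_swarm(pos, vel)
print("flock spread:", pos.std(axis=0))
```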
Do we know when the actual in-air dogfight, man versus machine, is happening? You know, I hope this gets as much love and attention as the AlphaGo match or the Watson Jeopardy match. I imagine it will get less. You know, it should, because it's kind of crazy, right? Like, you know, we're pitting machines in the battle for air supremacy; machines will either give it to us or give it to our opponents. But it's not going to be humans doing it. And do we know if that is happening, like, this year? You know, I think it is, right? Like, I saw the photos of a plane that I think they're going to put the controls in. You know, that came out maybe a couple of weeks ago.
I need to bone up on my, like, you know, air defense weekly reading, because that's quite amazing. That's amazing. So that's going to come down. And I think that should give us another Sputnik moment, or hopefully gives the defense department another Sputnik moment.
And what that really means, though, is, OK, run that forward. You've got a swarm of these things, run by machines, taking on an opponent swarm. And then you say, all right, that's cool. You've got software and AI supporting these, and we think we're pretty good, we've got dominance. But then your opponent upgrades their entire swarm overnight, and all of a sudden they have a 99% kill rate on your system.
So once you're in a place where the controlling factor is the AI capabilities that drive the machines, you can push an update like you would with a Tesla, and all of a sudden your car goes from being OK to being superhuman. Now their 20,000 drones that are, whatever, 1,000 bucks a pop, are just appreciably better than yours. That's exactly right.
So you've got the defenses, you're feeling good about defending Taiwan, you've got all the things. And then overnight your opponent upgrades their capabilities. You didn't even know they'd upgraded the capabilities. They decide to attack, and they beat you. And everything that you've done to that point was rendered useless by an offset in AI capabilities.
Now, when it comes to war, we've never seen this kind of speed at which you can achieve an offset across your entire fleet against your opponent. Because it's not like you can do an over-the-air update on the quality of your tanks or your nukes. But you can do an over-the-air update on the quality of your swarm for air supremacy.
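The "update like a Tesla" point can be stated in a few lines of code: if capability lives in the software, one push changes the whole fleet at once. The class and numbers below are entirely notional.

```python
from dataclasses import dataclass

@dataclass
class Drone:
    policy_version: str
    sim_win_rate: float  # notional effectiveness score, not a real metric

# A notional fleet: the airframes don't change, only the software does.
fleet = [Drone("policy-v1", 0.50) for _ in range(20_000)]

def push_update(fleet, version, win_rate):
    """Fleet-wide over-the-air update: every airframe gets the new policy at once."""
    for drone in fleet:
        drone.policy_version, drone.sim_win_rate = version, win_rate

print(sum(d.sim_win_rate for d in fleet) / len(fleet))  # 0.50 before the push
push_update(fleet, "policy-v2", 0.99)                   # the 'overnight offset'
print(sum(d.sim_win_rate for d in fleet) / len(fleet))  # 0.99 after the push
```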
I'm Alex Dibble from The Times. There's so much happening in the world right now. How do you stay on top of it? On The World in 10 podcast, me or my team sit down and go through it all in just 10 minutes. The Times correspondents join us with eyewitness accounts and interviews. "I was actually on the orthopaedic ward; almost all the injuries are shell fire." "A woman who was inside the police van." They also give us their unique analysis of world events. Give The World in 10 a listen; it is 10 minutes to stay on top of the world.
At uni I studied political science, which hasn't really done much for me. But I remember studying a lot this idea of mutually assured destruction, which was what kept the Cold War cold. Because we had however many thousands of nukes pointed at them, and likewise, and everybody was like, well, once you pressed the button once, then everybody's dead.
And it feels like the competition here, and obviously everybody's been talking about this for a while, especially folks like Eric Schmidt, the former Google CEO, the competition with China, this dynamic feels different than that. And I don't know how you think about that, or if that is... are people thinking about that yet? Because it feels, again, going back to, like, ChatGPT as a Sputnik moment, or when the machine beats Maverick, et cetera.
It feels like this stuff isn't far away, this idea of being able to do the over-the-air update, and all of a sudden our machines are just the best in the world, or leapfrogged, and then back and forth and back and forth.
But these aren't like a bunch of A-bombs that we just have there, pointed. It feels like more back and forth, but I don't know. I'm just trying to create a mental model for what that dynamic is. And it feels somehow very similar, but also very different.
Yeah, so to take it back, I think it was back in 2016. So 2016, the Office of Net Assessment, which goes back to the Cold War, which is an interesting kind of group from the Pentagon, assembles a bunch of AI people and brings us up to West Point. And I'm there, Stuart Russell is there, there are some hedge fund quant people there. It's a really interesting group, 15 of us.
And we spend the entire week in back-to-back presentations from us, and debates and discussions around what impact AI is going to have. This is in 2016? 2016. Oh, interesting. It was. So we go up there, and they just won't let us out of the room.
And it was just going all night, back and forth. And I was there with Stuart Russell, and it was fascinating. And a couple of things emerged. One is that this should be considered as fundamental a change to war as the internal combustion engine. The second is that it will touch everything. Yep.
And the third piece is that this is probably best understood in the concept of offsets. And so we presented the work to the Deputy Secretary of Defense, Robert Work, who came in, and we briefed him at the end of the week. And we sort of started to formulate this hypothesis of artificial intelligence being the third offset.
Yeah, so offsets. Yeah. So an offset is a technological advantage so great that it renders those without that capability defeated before the battle even starts. Right. So to run it back: first offset, nuclear weapons. If you have nuclear weapons and your opponent doesn't, there's no point fighting, because I'll drop a nuclear weapon and your entire country is gone. Gone.
Then the second one comes through. So it's interesting. So you allude to the arms race of nuclear weapons. One of the things that actually broke that was precision-guided munitions. And precision-guided munitions basically said, yeah, we're kind of equal in the number of nuclear weapons that we've got.
But if I can't guarantee my nuclear weapon hits your nuclear weapon, then I need three or four of them. If I can guarantee it, I only need one of them. And if you don't have that precision munition, now I've got a five-to-one, six-to-one advantage. The reality is it actually moves a lot further. There's a story from Vietnam where they tried to bomb a bridge.
They counted 800 craters, and they couldn't count the ones that were in the water because they couldn't see them. But there were 800 craters around the bridge, and the bridge was not hit. Right. And this is in the 70s, right?
And then they finally got the first semiconductor-powered precision munition that got the bridge. And so what you see as the second offset was precision munitions and stealth weaponry. And we sort of forget that a little bit. But if you look at the first Gulf War, you have the strongest army in the world, which is America, against the sixth strongest army in the world, which is Iraq, and the entire war is over in 72 hours. Yep.
And so that's the second offset. The third offset then becomes artificial intelligence. And artificial intelligence says, if your machines have the ability to beat every human in the air, then the war is over before it starts. Conversely, if your AI can beat my AI, the war is over before it starts.
But what's different about this is that you can get an offset in AI capabilities that renders your opponent useless overnight. And the thing that should be the wake-up call with ChatGPT is: we had large language models with hundreds of billions of parameters.
No one really thought to put reinforcement learning from human feedback into it at scale. Then it was done, and you went from a reasonably good autocomplete to "I can pass an MBA test" overnight. Now take that into the defense environment. You have a reasonably good swarm...
...to "I have one that is so much better than anything you've ever seen" overnight. And that's the bit that the US Defense Department hasn't orientated itself around: we've seen offsets before, but we've never seen the ability to get an offset overnight. And what that means is that as soon as you have that capability, you attack, because you know that your opponent is behind.
And so this arms race ratchets up, in that you can never let your opponent get more than an offset ahead of you with AI, lest they find out that they've got it, and then it's game over. So the speed of this has just accelerated massively.
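The RLHF step he credits for the jump is, at its core, two moves: fit a reward model to human pairwise preferences, then shift the policy toward high-reward outputs. The real recipe uses a neural reward model and PPO over a large language model; the sketch below is the same idea at toy scale, with invented responses and preferences.

```python
import numpy as np

responses = ["helpful answer", "rambling autocomplete", "refusal", "made-up facts"]
# Hypothetical human judgments: (preferred_index, rejected_index)
preferences = [(0, 1), (0, 3), (2, 3), (0, 2), (2, 1)]

# Fit a Bradley-Terry reward model: P(a beats b) = sigmoid(r_a - r_b).
rewards = np.zeros(len(responses))
lr = 0.1
for _ in range(200):
    for winner, loser in preferences:
        p_win = 1.0 / (1.0 + np.exp(rewards[loser] - rewards[winner]))
        rewards[winner] += lr * (1.0 - p_win)
        rewards[loser] -= lr * (1.0 - p_win)

# "Policy improvement": favor responses in proportion to learned reward.
policy = np.exp(rewards) / np.exp(rewards).sum()
for response, prob in zip(responses, policy):
    print(f"{prob:.2f}  {response}")
```

The point of the anecdote stands either way: the raw capability was already there, and a comparatively cheap feedback loop on top of it changed what the model could usefully do.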
This is a clumsy analogy, but it's like the app-store-ification of defense, where it's just like, oh, I've got TikTok on my phone. I just did a story this week on TikTok, and, you know, what I keep hearing about is this new filter that turns very normal-looking people into, like, models. Terrible for kids. But I couldn't access it. Then it's like, update the app, and you get this whole new array of filters. And it's just like... but like that with AI, and the consequences are mass death, or supremacy, or whatever.
Right, because you've got the offset. And once you're able to get that offset over your opponent, you jump through that window. And this is where the thing comes in, right? It really matters in AI how fast you get it into the hands of the war fighter, how fast you get it into the hands where it can make a difference.
And the issue here, people talk a lot about an AI arms race with China. They talk about chips. They talk about training data. They talk about research and models. The long pole in the tent is how long it takes to go from something that could be done to something that is being used.
Right. And what we've seen here is that the US procurement cycle has been set up in a Cold War mentality, which says: I'm going to take 15 years to get my F-35. Right. We're number one, so we can operate on our own timeline. And secondly, most of the things we're buying, the large capex expenditure piece, is metal.
Versus: we've got competitors that are ahead of us, we don't control the timeline, and things can be upgraded overnight that render everything you've done previously useless. And so when you're in that mindset, which is hardware powered by software, you need to be able to procure technology on the time frame of, potentially, hours: of an app update. An app update. So you look through those capabilities.
And this is the thing that I think is the single biggest danger the US is facing: our inability to get the technology that we can all produce here in Silicon Valley into the hands of the people that can actually make a difference with it, faster than our opponents in China. And China is a long way ahead of us on that military-civil fusion.
What does that look like on the ground? In other words, because when we chatted the other week, it felt like America is ahead in certain aspects, China is ahead in certain aspects. What does that competitive landscape look like? And what does it mean?
Because I feel like the other thing, which is a theme talking to all these people in different parts of the AI world, is that some people are surprised by AI all of a sudden, and some people are like, well, this has been the direction of travel for the past 10 years.
Since that 2012 deep learning moment, the ImageNet moment, where it was like, oh, these things actually work really well, this is a new paradigm. I'm just trying to, like, game it out. Everybody's been talking about this AI arms race with China for years. But it just feels like, OK, sure. I don't even know what that means. Not quite who cares, but kind of who cares?
Like, OK, yeah, sure. The thing about an AI arms race is it doesn't really have an impact until you go to war. Totally. Right? And when you go to war, it really matters. So I was speaking with the deputy minister of technology from Ukraine.
He came across to DC, and I spent some time with him. And, you know, what was top of mind for him was: how do you label data faster for image recognition, for the drones that they've got? And I said, what do you mean?
Because they were like, look, we've got things that are spotting objects in the field, but then the Russians will upgrade, and then we've got to go back and label thousands more things, and, you know, away it goes. If we can shorten that down, we have an advantage. Right?
And so you look at this, and when you're in a war, the inches matter, because it's a game that is obviously adversarial and competitive, and those inches matter.
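One standard way to "shorten that down", used here purely as an illustration rather than a description of Ukraine's or anyone's actual pipeline, is uncertainty sampling: have humans label the detections the current model is least sure about first, so each labeling hour buys the most retraining value.

```python
import numpy as np

rng = np.random.default_rng(1)
detection_ids = np.arange(10_000)
confidences = rng.uniform(0.0, 1.0, size=10_000)  # stand-in model confidence scores

# Scores nearest 0.5 are the ones the model is least sure about.
uncertainty_rank = np.argsort(np.abs(confidences - 0.5))
label_queue = detection_ids[uncertainty_rank][:500]

print(f"next batch for human labeling: {len(label_queue)} detections")
print("sample confidences:", np.round(confidences[label_queue[:5]], 3))
```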
Now, as we look at this, I think, firstly as Americans, and most of the West, we think conflict with China is a long way off, right? It could never happen. We also thought 12 months ago that a land war in Europe could never happen. And that's changed, obviously, you know, and changed our reality.
If you look at the dynamics of the statements from Admiral Gilday and the other, you know, folks that are orientated around these decisions, they'll say, we have to be ready to fight tonight, right? What that means is that any day that we're sitting here, we may get the call to send naval vessels in to fight in the South China Sea.
If you look at the predominant view in Washington DC, it's that there's a five-year window within which we expect China to invade Taiwan. And that's credible.
Because, I mean, you also know people in, like, the military-industrial complex; they're kind of, in a way, talking their book, you know, because it's like, oh yeah, of course, so buy a bunch of our stuff. Gilday is not "buy a bunch of our stuff". He controls the fleet. Yeah. The people that are sitting there on the National Security Council, they're not "buy our stuff". They control the decisions.
The dynamics of this are being driven as much by the US Defense Department's response to a perceived threat coming in from China. So that's one side of it.
The second side of it is that even if you don't have the intention to go to war, accidents can ratchet things up significantly. And as you look at that, I think, look, Taiwan is preparing for an invasion. Certainly you can see China is making strong steps towards that.
And I think, you know, if I'm putting on my hat and saying, look, if I'm Xi, armchair quarterbacking, I'm like, yeah, this is something I want to take back and make part of China. My reign as emperor is not complete without a reunification of Taiwan back into China.
And I think, looking at the dynamics of all the things here, it's, I think, much more likely than not that within a five-year window there is an attempt by China to take Taiwan back. And the US will be, I believe, forced into action. And that's a very difficult set of decisions to make.
But ideally that doesn't happen. And I think one of the things that stops that from happening is having an offset, which is if you were to do this, you'll surely lose, so don't even bother.
Not quite mutually assured destruction, but, like, look at it, look at it: we've got the bigger stick. Yeah, assured destruction of the CCP, right?
And if we maintain an offset in AI capabilities that can be credibly posed back against the CCP or the People's Liberation Army, then they're not going to attack. If they believe that there's a chance, or that they've got the jump, then they'll take that window.
One thing that occurs to me, which is a fact that I think a lot of people don't appreciate about Silicon Valley, is that from the very beginning it was kind of hand in glove with the Pentagon and the defense industry. And a lot of those early bets were funded by Washington, you know, going back decades and decades.
And then we've had this turn in the past 10, 15 years. I've heard about it, others have, of, just, like, you know, folks at Google writing letters to its leadership, like, we do not want to make war machines, we don't want to help the Pentagon with their, you know, Project Maven drone contract, et cetera.
From where you sit, what does that relationship look like today? Because, like I said, it does feel like it was very hand in glove. Like, there would be no Silicon Valley without the Pentagon.
There seems to have been a breach, and without kind of judging whether that's good or bad, but just taking that as a fact, where are we right now?
I think it's bad. Oh, I'll judge it. It's bad. I mean, it's like, you know, imagine Ukraine is just like, oh, we don't think we should have war, we don't think we should contribute to, like, any defense stuff. And you're like, all right, well, there just was a missile that went through an apartment block. What are you going to do?
It's like, oh, yeah, but we just don't think we should have anything to do with that. And it's just like, what a goddamn luxury to be able to sit in a place where you're like, I don't want to have anything to do with the defense of freedom, right? The defense of the values, the defense of human life against an oppressor coming in.
So first of all, that is such a naive view, that you don't need defense, or that you don't need to have technology as part of defense. So that's number one.
I think the Valley is guilty of being naive about what it means to be a superpower in the world. And whether you like it or not, you are going to need a defense structure, all right?
The second bit is you don't always choose when you fight, but you choose, with the technology that you build, how you will respond, or your capability of responding. So I think those are there.
I think it has improved a lot since the Maven episode. That was 2019; I think it was 2019 when that came up. Yeah, and one of the things that changed was the administration. And I think people in the Valley have become a lot more comfortable with the Biden administration, despite the fact that the Pentagon and the intelligence agencies are not the same as the White House. But they've certainly become more comfortable.
So I think part of that has been knocked down, and I think people have realized, and a land war in Europe did a lot of that, that actually this stuff matters, right? And so that's come through. But if you rewind to 2019, it was like, you had to have really clear thoughts and discussions with your employee base, lest they walk out and say, like, you know, this is not it. So part of that, I think, is understanding what it means to be in a world where war exists.
I think the second bit now, though, is not so much can Silicon Valley build those things. It's can they actually get them into the hands of the soldiers and the people fighting on the front lines. And this is where we go back to procurement dynamics, right?
So we've gotten past the kind of, what I'd call, social license to operate. It is acceptable to be a defense-focused company. It's not ever going to be the mainstream, like building consumer apps. Is it going to be easier for you to recruit than it was three years ago? Yeah, 100%. Right? People come in and understand now that there's a mission that is actually important, right?
And, you know, I think one of the things here with defense is always to say, look, the work that you're doing is going to have an impact in a way that is literally life and death, right? And that, I think, is a mission that people want to get in and get behind. And it's acceptable now to do that. Three or four years ago, it was kind of like, you had to be very careful as you broached the subject.
I also think people have focused on problems that you have the luxury, in the Valley, of focusing on. Yeah, right? And it's just like, war isn't something you think about in the Valley because it's just not present. Well, because we need a better app to walk our dogs. That's right, a better app to walk our dogs, or whatever it is. But it's also a long way away.
Like, the cultures of Washington, D.C. and Silicon Valley are a long way apart. And, you know, there are half a dozen flights a day that go back and forth, but that only means that there are, you know, a few hundred people that make the commute between the two cities every day, which is tiny given the importance of those two places in our trajectory as a country.
But now I think that's improved. I think the bit that has to improve now is the ability to, A, get through the procurement cycle, but I'd say B as well, the Pentagon has to also embrace the fact that they're not buying from large defense primes that can, you know, do cost-plus and take forever and all the rest. They have to be able to move at the speed that the technology is moving at, and move at the speed that the companies behind it are moving at as well.
Not just because it's a good idea; it's because that's what your opponents are doing. You know, they're moving incredibly fast. In a way, the Pentagon has to become more like the App Store, right? Like, if software and AI are eating defense, you have to be like, all right, well, we need an update next week, not three years from now. Well, let's say you're in a war and you've got a computer vision system that's identifying your opponent.
And they figure out a way to disguise themselves so that now your computer vision system doesn't work. Well, you don't wait three years to upgrade that. You want that in three minutes. So what's your system for procuring that next generation of capabilities as quickly as possible?
Is that happening? That overhaul that you're talking about that feels necessary? No, and America's got a problem, right? Because you can have all the technology, you can have technological advantage, you can even be winning the AI arms race. But if you can't get that in the hands of people that need to use it, then it's for naught, it's for nothing.
And I think we are a long way from getting that done. And I think it's recognized as a problem, but I haven't seen anything over the last three or four years that has indicated we've made substantive steps towards fixing it. There was what I would call a lot of innovation theatre, where people were like, here's $100,000 to try out this thing.
And you see all of these well-meaning but ultimately low-impact groups, like, "we're engaging tech". But none of that made it into the hands of anyone who was actually fighting a war. You talked about the last three or four years not seeing that change. What about the last three or four months?
The ChatGPT effect. Yeah, so what's interesting now is, I think people are opening their eyes and saying, wow, this came at us faster than we thought. So I think that's one. I think there's a realization that they need to do something now that these things are moving faster. But I don't think people know quite what that is.
Right, what to do. Right, and for me, I think the bit here, if I'm putting on my hat and saying, well, how do you solve for this? Look at how you'd do it in the hardware space, because they understand hardware.
You're in the hardware space and you produce steel for ships. When a war starts, you don't say, hey, can you make us some steel? Yeah. You're like, well, we can, but we need to build a factory, et cetera. You actually go through and you say, we need you to be able to produce this much steel in case we should ever need it. And you have this kind of baseline of capacity for defense that allows you to manufacture. That needs to transfer across to software.
And what it says here is that, look, by the time you need it, you can't just spin up an entire software division, organization, whatever, to produce the stuff. But until a war happens, which we hope never does, there's no need for it. So the way to get around this, and the way we've done it in the past, is to say: we're going to build a baseline capacity that we can tap into.
And so what we need is the software equivalent of that, which says, look, we're going to fund the production of X, Y, and Z. We hope never to have to use them. But we need to have that capability and people with the ability to do that stuff on tap should we ever need to. And that's, I think, the only way you can do this, right?
Procuring things when a war breaks out is exactly the wrong way to do this. And so we sat down; for us, we had a whole bunch of stuff that could be deployed and have a big impact inside of Ukraine. This is like the kind of real-time intelligence. Real-time intelligence.
For somebody, like, I'm wherever, pick a random spot in Ukraine. Yep. Being able to have a common operating picture, situational awareness of what's unfolding from the multiplicity of data sources, mediated by artificial intelligence, so you can make better command and control decisions. Right. So we have that. We have the money allocated. The money's available, but there's no contract to put that on.
And so you go through this thing, which is like: there's money, there's technology, and there's need. And there's need. And then you go through, and it's like, where's the contract? And so now, well, to get a contract, here's the process and here's how you unfold through it. Then you're in the line at the post office, basically. And so, like, 12 months later you're there, but it's like, well, 12 months, like, that's not how these things work.
So that's kind of the dynamic as this stuff unfolds. I think the other bit here is, if you make that simpler, you're going to attract more people to this problem, because people, as much as they're building dog-walking apps, they don't really want to. No one gets up and says, you know what I want to do with my life is build a dog-walking app. I'm sorry for the dog-walking app engineers out there, but really, like, you can fight for the ideological future of the planet.
Or you can build dog-walking apps, and then you can get back to that. Cool, you can build a dog-walking app when you're through. That's right. Once we've fought and won the ideological battle of the next century between Western liberal democracy and authoritarianism, then we can walk dogs. But it's like, everyone's going to choose that time and again, right?
This is the same reason that people, when there was an invasion in Ukraine, they didn't line up to get out of the country. The young men lined up to get back into the country, because this is something that matters. And that story is as true in America as it is anywhere else in the Western world. But you've got to give that technology and the people building it space to go ahead and do it.
And the reality is it's a tiny fraction of the price of any of the metal that you're building. And it's the future of where conflict is going. And honestly, we don't have a choice but to engage in this and run incredibly fast, and we have to win it. Because if we lose AI dominance to China, the offset kicks in and everything that we've built is destroyed.
So is the next weapon of mass destruction, the next kind of A-bomb equivalent... what is the kind of embodiment of that offset you're talking about? Is it, like, I don't know, a thousand, or a million, like, you know, $100 drones, all loaded with just a little payload, enough to kill one person, and they just send them all, you know what I mean?
Yeah, and this is sort of the murder-bots kind of dynamic, which Stuart Russell has been banging the drum about. I mean, I've been chatting with him and, you know, other folks that kind of share similar beliefs. And, you know, look, I get it, right?
Like, you don't want to create an AI system that goes and destroys your population. But you certainly want an AI system that defends your population. And that will likely be a swarm of killer robots. They just will be aimed to kill the people that are attacking you. And so we're not going to escape autonomy in the weapon systems that we've got.
And indeed, it's even a false kind of question: should we bring in autonomy? We built, in the Second World War, automatic anti-aircraft guns tied to radar systems that improved accuracy tenfold by automating the targeting of anti-aircraft guns, right? Like, we've had autonomy. We've had autonomy in precision weapons. What we're talking about is better autonomy.
You know, I think there's no escaping autonomy. The question is, how is the opponent going to use it, how are we going to defend against that, and how are we going to maintain dominance such that those conflicts never happen? But what does this mean? If you get this, you know, orientated right, you move from building very complex single machines driven by humans into cheap, distributed swarm dynamics, swarm intelligence controlled by AI systems.
And then it says: how good is your AI and how good is your low-cost manufacturing to spin out tens of thousands of these things? So think of the F-35 versus 35,000 drones that emerge and just go into its engine systems, get sucked through and destroy it. Or think of 10,000 sea or underwater drones that come in at a battle group, at an aircraft carrier, each one containing enough payload to destroy or cripple it. And even if you knocked every one of them out, you're knocking each one of them out with a $400,000 missile or a $4 million missile. Right. So this is where war is going to go. So it's going to change that dynamic.
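The economics he is gesturing at can be put on the back of an envelope, using the round numbers from the conversation (a roughly $1,000 expendable drone against a $400,000 interceptor at the low end):

```python
drones = 10_000
drone_cost = 1_000           # dollars per expendable drone, figure used above
interceptor_cost = 400_000   # per defensive missile, low end cited above

attack_cost = drones * drone_cost
defense_cost = drones * interceptor_cost  # if every drone has to be shot down

print(f"attacker spends ${attack_cost:,}, defender spends ${defense_cost:,}")
print(f"cost-exchange ratio: {defense_cost // attack_cost}:1")
```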
But of course, the one that we think a lot about, but we forget, is there's no war unless there's a desire for war. And one of the bits that I think Ukraine has taught us is that Putin would have been right with his actions and his strategy, but for the fact that people, instead of turning around to walk out of the country, turned around and walked back in. If people didn't pick up Kalashnikovs to fight, if people around the world didn't pressure their governments to send Javelins and tanks, there was no war to be had.
And so they won in Ukraine: they won the information war, they won the narrative. They had Snake Island, and the woman telling the soldier to put sunflower seeds in his pockets, and the Ghost of Kyiv. They won that, and so there was a war to be fought. But without that spirit, there'd be no conflict.
And if we look towards Taiwan, the single biggest thing that China wants is a conflict that looks like Hong Kong, which is like, there's nothing to see here, it's an internal Chinese matter, don't engage, versus something that looks like Kyiv, where everyone decides this is something they need to get in and support.
And so what you're going to see, and what you are seeing, is, I think, a concerted effort by China to win an information war. And the reason for that is, if China can take Taiwan without kinetic destruction, it gets TSMC, the semiconductor manufacturer, which makes 90-plus percent of all the advanced AI chips in the world. And then it controls that piece of it.
And so if I'm China sitting here, I'm going to run a very, very concerted information operation. And I'd probably love to have a very popular application that every soldier and person had in their pocket, where I could control the information they consume, and maybe distract them, or divide them, or create information that said, hey, don't worry about Taiwan, it's an internal Chinese matter. And so they'd have no idea what's coming.
I'm just trying to think, if there was an operation where you could design that and put it in place... I mean, wouldn't that be wonderful? And we're still debating about whether or not TikTok is a good thing. Yeah, yeah. We live in bizarre times.
Well, look, if I was the CIA and I had the ability to implant an application that controlled the information stream of every single person in China to my liking, that would be considered the greatest intelligence win ever. And that is exactly what we're sitting in here today. And we're still having this debate of whether or not we should have TikTok in the pockets of soldiers, in the pockets of politicians, in the pockets of literally every person under the age of 40.
And everything from information superiority, through to artificial intelligence superiority, through to the speed and engagement of the technology sector in all of that.
All of this matters.
But this starts now.
It's not something we can think about in the future.
We are going to be in a fight for the dominant global military power of the planet.
And it's going to be America and the Western liberal democracies, or it's going to be an authoritarian China.
And I don't think that's a very hard decision from where I'm sitting.
And if that's the thing that's in front of us, why is this not front and center for America and everyone who can make an impact and a difference?
Because by the time the bombs start dropping, it's too late.
Absolutely, especially when they're dropped by swarms of drones.
With an AI system that we can't counteract or defend against.
That they have the offset.
Right.
Well, that's why I wanted to have you on.
Appreciate it.
It's been fun.
It's all good.
Thank you.
And that was all the time we have.
I want to thank Sean for coming on.
I actually went to his office to meet him, which was really funny because there was literally nobody there.
It was a Friday.
So we had the whole run of the place.
I want to thank you all for listening for the ratings, for the reviews, for telling your friends and neighbors about the pod.
Thank you, thank you, thank you.
And that's it for me this week.
I was actually out in the field, so to speak, on a couple of different stories.
So I'm kind of not sure what I'll be writing about this week.
It's been kind of a crazy few days for me.
But anyhow, please have a gander at thetimes.co.uk, or you can find me on Twitter at @DannyFortson, or you can email me at danny.fortson@sunday-times.co.uk.