I think this has really been an amazing partnership through every phase. We had no idea where it was going to go when we started, as Satya said. But I do think this is one of the great tech partnerships ever, and certainly without Microsoft, and particularly Satya's early conviction, we would not have been able to do that. What a week, what a week. Great to see you both. Sam, how's the baby? Baby is great. It's the best thing ever. Every cliche is true, and it is the best thing ever. And the smile on Sam's face when he talks about his baby, it's just so different. And compute, I guess. When he talks about compute, it's his baby too.
Well, Satya, have you given any dad tips, with all this time you guys have spent together? I said, just enjoy it. I mean, it's so awesome. Our children came when we were so young, and I wish I could redo it. In some sense, it's just the most precious time, and as they grow, it's just so wonderful. I'm so glad to see him enjoying it. I'm happy I'm doing it older, but I do think sometimes, man, I wish I had the energy of when I was 25. That part's harder. No doubt about it. What's the average age at OpenAI, Sam? Any idea? It's young, but not crazy young, not like most Silicon Valley startups. I don't know, maybe low 30s average. Are babies trending positively or negatively? Babies are trending positively. Oh, that's good. That's good.
Yeah. Well, you guys, such a big week. You know, I was thinking: the week started at Nvidia's GTC, and Nvidia just hit five trillion dollars. Google, Meta, Microsoft. Satya, you had yours just today. And we heard consistently: not enough compute, not enough compute, not enough compute. We got rate cuts on Wednesday, GDP is tracking near 4%. And then, as I was just saying to Sam, the President's cut these massive deals in Malaysia, South Korea, Japan, and it sounds like with China. Deals that really, incredibly, provide the financial firepower to reindustrialize America. 80 billion for new nuclear fission, all the things that you guys need to build more compute.
But what wasn't lost in all of this was that you guys had a big announcement on Tuesday that clarified your partnership. Congrats on that. And I thought we'd just start there. I really want to break down the deal in simple, plain language to make sure I and others understand it. So we'll just start with your investment, Satya. Microsoft started investing in 2019 and has invested in the ballpark of $13-14 billion into OpenAI. And for that, you get 27% ownership in the business on a fully diluted basis. I think it was about a third, and you took some dilution over the course of the last year with all the investment. So does that sound about right in terms of ownership?
Yeah, it does. But I would say, before even our stake in it, Brad, I think what's pretty unique about OpenAI is that, as part of OpenAI's process of restructuring, one of the largest non-profits ever gets created. I mean, let's not forget that. In some sense, I'd say at Microsoft we are very proud of the fact that we are associated with two of the largest non-profits, the Gates Foundation and now the OpenAI Foundation. So that's, I think, the big news. We obviously were thrilled. It's not what we thought. As I said to somebody, it's not like when we first invested our billion dollars we knew this was going to be the 100-bagger that I'd be talking to VCs about. But there we are, and we are very thrilled to be an investor and an early backer.
And it's really a testament to what Sam and team have done, quite frankly. I mean, they obviously had the vision early about what this technology could do, and they ran with it and executed in a masterful way. I think this has really been an amazing partnership through everything. We had kind of no idea where it was going to go when we started, as Satya said, but I do think this is one of the great tech partnerships ever, and certainly without Microsoft, and particularly Satya's early conviction, we would not have been able to do this. I don't think there were a lot of other people who would have been willing to take that kind of a bet, given what the world looked like at the time.
We didn't know exactly how the tech was going to go. Well, not exactly. We didn't know at all how the tech was going to go. We just had a lot of conviction in this one idea of pushing on deep learning, and trusting that if we could do that, we'd figure out ways to make wonderful products and create a lot of value. And also, as Satya said, create what we believe will be the largest nonprofit ever, and I think it's going to do amazingly great things. I really like this structure because it lets the nonprofit grow in value while the PBC is able to get the capital that it needs to keep scaling. I don't think the nonprofit would be able to be this valuable if we didn't come up with this structure, if we didn't have partners around the table that were excited for it to work this way. But, you know, it's been more than six years since we first started this partnership.
And a pretty crazy amount of achievement for six years, and I think much, much more to come. I hope that Satya makes a trillion dollars on the investment, not 100 billion. Well, as part of the restructuring, you guys talked about it, you have this nonprofit on top and a public benefit corp below. It's pretty insane: the nonprofit is already capitalized with $130 billion of OpenAI stock. It's one of the largest in the world out of the gate, and it could end up being much, much larger. The California Attorney General said they're not going to object to it. You already have $130 billion dedicated to making sure that AGI benefits all of humanity. You announced that you're going to direct the first $25 billion to health and to AI security and resilience, Sam.
First, let me just say, as somebody who participates in the ecosystem, kudos to you both. It's an incredible contribution to the future of AI. But Sam, talk to us a bit about the importance of the choice around health and resilience. And then help us understand: how do we make sure that it delivers maximal benefit without getting weighed down, as we've seen with so many nonprofits, by its own political biases? Yeah. First of all, the best way to create a bunch of value for the world is hopefully what we've already been doing, which is to make these amazing tools and just let people use them. I think capitalism is great. I think companies are great. I think people are doing amazing work getting advanced AI into the hands of a lot of people and companies that are doing incredible things.
There are some areas where, I think, market forces don't quite work for what's in the best interest of people, and you do need to do things in a different way. There are also some new things with this technology that just haven't existed before, like the potential to use AI to do science at a rapid clip, really, truly automated discovery. When we thought about the areas we wanted to focus on first: clearly, if we can cure a lot of disease and make the data and information for that broadly available, that would be a wonderful thing to do for the world. And then on this point of AI resilience, I do think some things may get a little strange, and they won't all be addressed by companies doing their thing.
So as the world has to navigate through this transition, if we can fund some work to help with that, and that could be cyber defense, that could be AI safety research, that could be economic studies, all of these things can help society through this transition smoothly. We're very confident about how great it can be on the other side, but I'm sure there will be some choppiness along the way. Let's keep going through the deal. So, models and exclusivity. Sam, OpenAI can distribute its leading models on Azure, but you can't distribute them on any of the other big clouds for seven years, until 2032, though that would end earlier if AGI is verified; we can come back to that. But you can distribute your open-source models, Sora, agents, Codex, wearables, everything else on other platforms.
So Sam, I assume this means no ChatGPT or GPT-6 on Amazon or Google. Well, first of all, we want to do lots of things together to help create value for Microsoft, and we want them to do lots of things to create value for us, and there are many, many things that will happen in that category. We are keeping what Satya termed once, and I think it's a great phrase, stateless APIs on Azure exclusively through 2030, and everything else we're going to distribute elsewhere, and that's obviously in Microsoft's interest too. So we'll put lots of products in lots of places, and then this thing we'll do on Azure, and people can get it there and it'll be awesome. I think that's great.
And then the rev share. There's still a rev share that gets paid by OpenAI to Microsoft on all your revenues, which also runs until 2032, or until AGI is verified. So let's just assume for the sake of argument, I know this is pedestrian, but it's important, that the rev share is 15%. That would mean if you had 20 billion in revenue, you're paying 3 billion to Microsoft, and that counts as revenue to Azure. Satya, does that sound about right? Yeah, we have a rev share, and as you characterized it, it runs either until AGI or till the end of the term. And I actually don't know exactly where we count it, quite honestly, whether it goes into Azure or something else. That's a good question. It's a good question for Amy.
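For listeners following along, Brad's hypothetical arithmetic is simple enough to sketch in a few lines of Python. Note that the 15% rate is his assumed figure for the sake of argument, not a confirmed deal term:

```python
# Sketch of the rev-share arithmetic discussed above.
# The 15% rate is an illustrative assumption, not a disclosed term.
def rev_share_payment(revenue: float, rate: float = 0.15) -> float:
    """Payment owed under a flat revenue-share rate."""
    return revenue * rate

# $20B of hypothetical OpenAI revenue at a 15% rev share
payment = rev_share_payment(20e9)
print(f"${payment / 1e9:.0f}B paid to Microsoft")  # -> $3B paid to Microsoft
```

Whether that $3B would be booked as Azure revenue is, as Satya says, a question for Amy.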
Given that both the exclusivity and the rev share end early in the case AGI is verified, it seems to make AGI a pretty big deal. As I understand it, if OpenAI claimed AGI, it goes to an expert panel: you guys basically select a jury that has to make a relatively quick decision on whether or not AGI has been reached. Satya, you said on yesterday's earnings call that nobody's even close to getting to AGI and you don't expect it to happen anytime soon; you talked about this spiky and jagged intelligence. Sam, I've heard you perhaps sound a little bit more bullish on when we might get to AGI.
So I guess the question is to you both: do you worry that over the next two or three years, we're going to end up having to call in the jury to effectively make a call on whether or not we've hit AGI? I realize you've got to try to make some drama between us here. You know, I think putting a process in place for this is a good thing to do. I expect that the technology will take several surprising twists and turns, and we will continue to be good partners to each other and figure it out. Well said. I think that's one of the reasons why this process we put in place is a good one. And at the end of the day, I'm a big believer in the fact that intelligence, capability-wise, is going to continue to improve.
Our real goal, quite frankly, is how you put that in the hands of people and organizations so that they can get the maximum benefit. That was the original mission of OpenAI that attracted me to OpenAI and Sam and team, and that's what we plan to continue. Brad, to say the obvious: if we had superintelligence tomorrow, we would still want Microsoft's help getting this product out into people's hands. Of course, of course. Again, I'm asking the questions I know are on people's minds, and that makes a ton of sense to me. Obviously, Microsoft is one of the largest distribution platforms in the world.
You guys have been great partners for a long time, and I think this dispels some of the myths that are out there. But let's shift gears a little bit. Obviously, OpenAI is one of the fastest-growing companies in history. Satya, you said on this pod a year ago that every new phase shift has created a new Google, and the Google of this phase shift is already known: it's OpenAI. And none of this would have been possible had you guys not made these huge bets. With all that said, OpenAI's revenues are still reportedly $13 billion in 2025, and Sam, on your livestream this week you talked about this massive commitment to compute, right, 1.4 trillion over the next four or five years, with big commitments: 500 billion to Nvidia, 300 billion to AMD and Oracle, 250 billion to Azure.
So I think the single biggest question I've heard all week, hanging over the market, is: how can a company with 13 billion in revenues make 1.4 trillion of spend commitments? And you've heard the criticism, Sam. First of all, we're doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer. Enough. You know, I think there are a lot of people who would love to buy OpenAI shares. I don't think you want to. Including myself, including myself. People who talk with a lot of breathless concern about our compute stuff or whatever would be thrilled to buy shares.
So I think we could sell your shares, or anybody else's, very quickly to some of the people who are making the most noise on Twitter about this. And we do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow, and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, our consumer device business will be a significant, important thing, and AI that can automate science will create huge value. So, you know, there are not many times that I want to be a public company, but one of the rare times it's appealing is when people are writing these ridiculous takes that OpenAI is about to go out of business.
I would love to tell them they could just short the stock, and I would love to see them get burned on that. But we plan carefully. We understand where the technology, where the capability, is going to go, the products we can build around that, and the revenue we can generate. We might screw it up. This is the bet that we're making, and we're taking a risk along with it. And a certain risk is that if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale. Exactly.
And let me just say one thing, Brad, as both a partner and an investor: there has not been a single business plan that I've seen from OpenAI that they've put in and not beaten. So in some sense, in terms of their growth and even just the business, it's been unbelievable execution, quite frankly. Obviously, with OpenAI, everyone talks about all the success in the usage and what have you. But even all up, I would say, the business execution has been just pretty unbelievable.
I heard Greg Brockman say on CNBC a couple of weeks ago that if you could 10x your compute, you might not have 10x more revenue, but you'd certainly have a lot more revenue, simply because of lack of compute. Yeah, it's really wild when I look at how much we are held back. In many ways, we've scaled our compute probably 10x over the past year. But if we had 10x more compute, I don't know if we'd have 10x more revenue, but I don't think it'd be that far off. And we heard this from you as well last night, Satya, that you were compute-constrained and growth would have been higher if you had more compute.
So help us contextualize, Sam: how compute-constrained do you feel today, and when you look at the build-out over the course of the next two to three years, do you think you'll ever get to the point where you're not compute-constrained? We talk about this question of whether there is ever enough compute a lot. I think the best way to think about this is like energy: you can talk about demand for energy at a certain price point, but you can't talk about demand for energy without talking about different demand at different price levels. If the price of compute per unit of intelligence, or however you want to think about it, fell by a factor of 100 tomorrow, you would see usage go up by much more than 100. There are a lot of things that people would love to do with that compute that just make no economic sense at the current cost, but there would be new kinds of demand.
Now, on the other hand, as the models get even smarter, and you can use these models to cure cancer, discover novel physics, or drive a bunch of humanoid robots to construct a space station, or whatever crazy thing you want, then maybe there's huge willingness to pay a much higher cost per unit of intelligence for a much higher level of intelligence. We don't know yet, but I would bet there will be. So when you talk about capacity, it's cost per unit and capability per unit, and without those curves it's sort of made up; it's not a super well-specified problem. Yeah, I think the one thing that, Sam, you talked about, which I think is the right way to think about it, is that if intelligence is a log of compute,
then you try to really make sure you keep getting more efficient. That means the tokens per dollar per watt, and the economic value that society gets out of it, is what we should maximize while reducing the costs. That's where the Jevons paradox point comes in: you keep reducing it, commoditizing intelligence in some sense, so that it becomes the real driver of GDP growth all around. Unfortunately, it's something closer to intelligence being a log of compute, but we may figure out better scaling laws. We heard from both Microsoft and Google yesterday; both said their cloud businesses would have been growing faster if they had more GPUs. And I asked Jensen on this pod if there was any chance over the course of the next five years we would have a compute glut, and he said there's a virtually nonexistent chance in the next two to three years.
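The two ideas in this exchange, capability growing like the log of compute and demand expanding as price falls, can be made concrete with a toy sketch. The log base and the demand elasticity below are illustrative assumptions, not figures from the conversation:

```python
import math

def capability(compute: float) -> float:
    # Satya's framing: intelligence is roughly a log of compute,
    # so each capability "step" costs about 10x more compute.
    return math.log10(compute)

def relative_usage(price_cut: float, elasticity: float = 1.5) -> float:
    # Constant-elasticity demand (an illustrative assumption):
    # usage scales like price^(-elasticity). Elasticity > 1 captures
    # Sam's point that a 100x price drop lifts usage by more than
    # 100x, which is the Jevons paradox Satya refers to.
    return price_cut ** elasticity

# 100x more compute buys only about two extra capability steps...
print(capability(1e6) - capability(1e4))   # ~2.0
# ...while a 100x price cut lifts usage ~1000x under this demand curve.
print(relative_usage(100))                 # ~1000
```

The tension the two of them describe is exactly this asymmetry: capability gains are logarithmic in spend, but cheaper tokens can unlock superlinear demand.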
And I assume you guys would both agree with Jensen that while we can't see out five, six, seven years, certainly over the course of the next two to three years, for the reasons we just discussed, there's almost a nonexistent chance that you have excess compute. I think the cycles of demand and supply in this particular case you can't really predict, right? The point is, what's the secular trend? The secular trend is what Sam said. Because at the end of the day, quite frankly, the biggest issue we are now having is not a compute glut; it's power, and the ability to get the builds done fast enough, close to power. So if you can't do that, you may actually have a bunch of chips sitting in inventory that you can't plug in. In fact, that is my problem today: it's not a supply issue of chips, it's the fact that I don't have warm shells to plug them into. How supply chain constraints emerge is tough to predict, because the demand is tough to predict. I wouldn't want to be sitting here saying, oh my god, we're short on compute.
We're just not that good at being able to project out what the demand would really look like. And by the way, on the worldwide demand side, it's one thing to talk about one segment in one country, but it's about really getting it out to everywhere in the world. So there will be constraints, and how we work through them is going to be the most important thing. It won't be a linear path, for sure.

There will come a glut, for sure. Whether that's in two to three years or five to six, I can't tell you, but it's going to happen at some point, probably several points along the way. There is something deep about human psychology here and bubbles. And also, as Satya said, it's such a complex supply chain; weird stuff gets built, and the technological landscape shifts in big ways. So if a very cheap form of energy comes online soon at mass scale, a lot of people are going to be extremely burned by existing contracts they've signed. Or if we can continue this unbelievable reduction in cost per unit of intelligence, let's say it's been averaging like 40x per year for a given level, that's a very scary exponent from an infrastructure build-out standpoint. Now again, we're taking the bet that there will be a lot more demand as it gets cheaper. But I have some fear that we keep going with these breakthroughs, everybody can run a personal AGI on a laptop, and we will have done an insane thing here. Some people are going to get really burned at some points along the way, like has happened in every other tech infrastructure cycle.
I think that's really well said, and you have to hold those two simultaneous truths. We had that happen in 2000, 2001, and yet the internet became much bigger and produced much greater outcomes for society than anybody estimated in that period of time. Yeah, but the one thing Sam said that's not talked about enough is, for example, the optimizations that OpenAI has done on the inference stack for a given GPU. We talk about the Moore's Law improvement on one end, but the software improvements are much more exponential than that. Someday we will make an incredible consumer device that can run a GPT-5 or GPT-6 capability model completely locally at a low power draw, and this is so hard to wrap my head around. That will be incredible. And that's the type of thing, I think, that scares some of the people who are building these large centralized AI stacks. Satya, you talked a lot about the distribution, both to the edge as well as having inference capability distributed around the world.
Yeah, I mean, the way at least I've thought about it is more about really building a fungible fleet. When I look at the cloud infrastructure business, one of the key things you have to do is have two things: one is a very efficient token factory, and the other is high utilization. That's it; there are two simple things that you need to achieve. And in order to have high utilization, you have to have multiple workloads that can be scheduled, even on the training side. If you look at the AI pipelines, there's pre-training, there's mid-training, there's post-training, there's RL. You want to be able to do all of those things, so thinking about fungibility of the fleet is everything for a cloud provider.
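Satya's point about fungibility and utilization can be illustrated with a toy model. The demand numbers and fleet size below are invented purely to show why a fleet that can schedule several workload types sits at much higher utilization than one locked to a single workload:

```python
HOURS = 24
# Hypothetical hourly demand per workload, in arbitrary GPU-units.
demand = {
    "inference": [80 if 8 <= h < 20 else 20 for h in range(HOURS)],  # daytime peak
    "pre_training": [60] * HOURS,                                    # steady
    "rl_post_training": [30] * HOURS,                                # steady
}
CAPACITY = 170  # fleet size, same GPU-units

def utilization(workloads):
    """Fraction of fleet-hours used when only `workloads` may be scheduled."""
    used = sum(min(CAPACITY, sum(demand[w][h] for w in workloads))
               for h in range(HOURS))
    return used / (CAPACITY * HOURS)

print(f"inference only: {utilization(['inference']):.0%}")  # low: fleet idles at night
print(f"fungible fleet: {utilization(list(demand)):.0%}")   # high: training fills the gaps
```

The design choice the toy model captures is that spiky consumer inference leaves idle capacity, and a scheduler that can backfill with pre-training, post-training, or RL jobs converts that idle capacity into useful work.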
Okay, so Sam, you referenced, and Reuters was reporting yesterday, that OpenAI may be planning to go public in late '26 or '27. No, no, we don't have anything that specific. I'm a realist; I assume it will happen someday, but I don't know why people write these big, wild stories. It's not like we're never going to do this or anything like that; I just assume it's where things will eventually go. But it does seem to me that if you guys are doing in excess of $100 billion of revenue in '28 or '29, you at least would be in... What about '27?
Yeah, '27, even better. You'd be in a position to do an IPO, and the rumored trillion dollars, again, just to contextualize for listeners: if you guys went public at 10 times $100 billion in revenue, which I think would be a lower multiple than Facebook went public at, a lower multiple than a lot of other big consumer companies went public at, that would put you at a trillion dollars. If you floated 10 to 20% of the company, that raises 100 to 200 billion dollars, which seems like a good path to fund a lot of the growth and a lot of the stuff we just talked about. So you're not opposed to it?
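Brad's back-of-envelope math checks out as stated; every input below is his on-air speculation, not a disclosed plan:

```python
# Back-of-envelope version of Brad's hypothetical IPO math.
# All inputs are speculation from the conversation, not disclosed figures.
revenue = 100e9                 # hypothetical revenue in '28 or '29
multiple = 10                   # 10x revenue; Brad notes Facebook IPO'd higher
valuation = revenue * multiple  # $1 trillion

float_fracs = (0.10, 0.20)      # floating 10-20% of the company
raised = tuple(valuation * f for f in float_fracs)  # $100B to $200B raised

print(f"valuation ${valuation / 1e12:.0f}T, "
      f"raise ${raised[0] / 1e9:.0f}-{raised[1] / 1e9:.0f}B")
```

At a 10x revenue multiple the trillion-dollar figure falls straight out, and the size of the float alone determines whether the raise lands nearer $100B or $200B.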
But we've got to keep being a company with revenue growth, which is what I would like us to do. But I've also said I think this is such an important company, and there are so many people, including my kids, who like to trade their little accounts, and they use ChatGPT. Having retail investors have an opportunity to buy one of the most important and largest companies: honestly, that is probably the single most appealing thing about it to me. That would be really nice.
One of the things I've talked to you both about, shifting gears again: as part of the big beautiful bill, Senator Cruz had included federal preemption so that we wouldn't have this state patchwork of 50 different laws that mires the industry in needless compliance and regulation. Unfortunately, it got killed at the last second by Senator Blackburn, because, frankly, I think AI is pretty poorly understood in Washington, and there's a lot of doomerism that has gained traction in Washington.
So now we have state laws like the Colorado AI Act, which goes into full effect in February, I believe, and creates this whole new class of litigants: anybody who claims any unfair impact from algorithmic discrimination in a chatbot, so somebody could claim harm for countless reasons. Sam, how worried are you that having this state patchwork of AI laws poses real challenges to our ability to continue to accelerate and compete around the world? I don't know how we're supposed to comply with that California, sorry, Colorado law. I would love them to tell us, and we'd like to be able to do it, but from what I've read of it, I literally don't know what we're supposed to do. I'm very worried about a 50-state patchwork. I think it's a big mistake. There's a reason we don't usually do that for these sorts of things. I think it'd be bad.
Yeah, I mean, I think there's a fundamental problem with this patchwork approach. Quite frankly, between OpenAI and Microsoft, we will figure out a way to navigate this, right? We can figure this out. The problem is anyone starting a startup and trying to navigate it; it just goes to the exact opposite of, I think, what the intent is. Obviously safety is very important, and making sure that the fundamental concerns people have are addressed, but there's a way to do that at the federal level. And if we don't do this, the EU will do it, and then that'll cause its own issues. So if the US leads, it's better as one regulatory framework, for sure.
And to be clear, it's not that anyone is advocating for no regulation. It's simply saying, let's have agreed-upon regulation at the federal level as opposed to 50 competing state laws, which certainly firebombs the AI startup industry, and I think makes it super challenging even for companies like yours, who can afford to defend all these cases. Yeah, and I would just say, quite frankly, my hope is that this time around it's even harmonized across the EU and the United States. That would be the dream, quite frankly, for any European startup. I don't think that's going to happen. That would be great; I wouldn't hold your breath for that one, but that would be great.
No, but I really do think that if anyone in Europe is thinking about how they can participate in this AI economy with their companies, this should be a main concern there as well. So I hope there is some enlightened approach to it, but I agree with you that today I wouldn't bet on that. I do think that with Sacks as the AI czar, you at least have a president that I think might fight for that, in terms of coordination of AI policy, using trade as a lever to make sure that we don't end up with overly restrictive European policy. But we shall see.
I think first things first: federal preemption in the United States is pretty critical. You know, we've been down in the weeds a little bit here, Sam, so I want to zoom out a little bit. I've heard people on your team talk about all the great things coming up, and as you start thinking about much more unlimited compute, ChatGPT 6 and beyond, robotics, physical devices, scientific research: as you look forward to 2026, what do you think surprises us the most? What are you most excited about in terms of what's on the drawing board?
I mean, you just hit on a lot of the key points there. I think Codex has been a very cool thing to watch this year, and as these go from multi-hour tasks to multi-day tasks, which I expect to happen next year, people will be able to create software at an unprecedented rate and in fundamentally new ways. I'm very excited for that. I think we'll see it in other industries too. I have a bias towards coding, I understand that one better, but I think we'll see that really start to transform what people are capable of. I hope for very small scientific discoveries in 2026, and if we can get those very small ones, we'll get bigger ones in future years. It's a really crazy thing to say that AI is going to make a novel scientific discovery in 2026, even a very small one. This is a wildly important thing to be talking about, so I'm excited for that. Certainly robotics too.
And new kinds of computers in future years, that'll be very important. But my personal bias is that if we can really get AI to do science, that is superintelligence in some sense. If this is expanding the total sum of human knowledge, that is a crazy big deal.
Yeah, I mean, to use your Codex example, I think it's the combination of the model capability and the UI. The magical moment that happened with ChatGPT was the UI that met the intelligence, and it just took off, right? It was just an unbelievably right form factor, and some of it was also that the instruction-following piece of model capability was ready for chat.
I think that's what Codex and these coding agents are about to help us with: the coding agent goes off for a long period of time, comes back, and then I'm dropped into where I should steer. One of the metaphors I think we're all working towards is this macro delegation and micro steering. What is the UI that meets this new intelligence capability? You can see the beginnings of that.
And I think getting Codex right is a very important thing to do. The way at least I use it, inside GitHub Copilot, it's just a different way than the chat interface, and I think it will be a new kind of human-computer interface. Quite frankly, it's probably bigger than that; it might be the departure point.
I'm very excited that we're doing new form factors of computing devices, because computers were not built for that kind of workflow very well. Certainly a UI like ChatGPT is wrong for it. But this idea that you can have a device that is sort of always with you, but able to go off and do things, take micro-steering from you when it needs it, and have really good contextual awareness of your whole life and flow, I think that will be cool.
And what neither of you has talked about is the consumer use case. I think a lot about how, again, we go onto this device and we have to hunt and peck through 100 different applications and fill out little web forms, things that really haven't changed in 20 years. But to just have a personal assistant, which some of us take for granted because we actually have one, and to give a personal assistant virtually free to billions of people around the world to improve their lives, whether it's ordering diapers for their kid, booking their hotel, or making changes in their calendar; I think sometimes it's the pedestrian that's the most impactful. And as we move from answers to memory and actions, and then the ability to interface with that through an earbud or some other device that doesn't require me to constantly be staring at this rectangular piece of glass, I think it's pretty extraordinary.
I think that's what Sam was teasing. Yeah, yeah, right. I've got to drop off, unfortunately. Sam, it was great to see you. Thanks for joining us, congrats again on this big step forward, and we'll talk to you soon. See you, Sam, take care. As Sam well knows, we're certainly a buyer, not a seller. But sometimes, you know, I think it's important, because we're pretty small and we spend all day long thinking about this stuff, right? And so conviction comes from the 10,000 hours we've spent thinking about it. But the reality is we have to bring along the rest of the world, and the rest of the world doesn't spend 10,000 hours thinking about this, and frankly, they look at some things that appear overly ambitious and get worried about whether or not we can pull those things off.
So you took this idea to the board in 2019, to invest a billion dollars into OpenAI. Was it a no-brainer in the boardroom? Did you have to expend any political capital to get it done? Dish for me a little bit what that moment was like, because I think it was such a pivotal moment, not just for Microsoft, not just for the country, but I really do think for the world. Yeah, I mean, it's interesting when I look back at the journey. We were involved even in 2016, when OpenAI started; in fact, Azure was even the first sponsor, I think. They were doing a lot more reinforcement learning at that time; I remember the Dota 2 competition, which I think happened on Azure, and then.
They moved on to other things, and, you know, I was interested in RL. But quite frankly, and this speaks a little bit to your 10,000 hours, or the prepared mind: Microsoft since 1995 was obsessed, I mean, Bill's obsession for the company, was natural language, natural language. After all, we had a coding company, that's right, an information work company. So when Sam in 2019 started talking about text and natural language and transformers and scaling laws, that's when I said, wow, this is interesting. This was a team whose direction of travel was now clear, and it had a lot more overlap with our interests. So in that sense it was a no-brainer.
Obviously, going to the board and saying, hey, I have an idea of taking a billion dollars and giving it to this crazy structure, which we don't even kind of understand, what is it, it's a nonprofit, blah blah blah, and saying go for it, there was a debate. Bill was, rightfully so, kind of skeptical. And then he became convinced once he saw the GPT-4 demo. That was the thing Bill has talked about publicly: when he saw it, he said it was the best demo he'd seen since what Charles Simonyi showed him at Xerox PARC. But, you know, quite honestly, that's not what you could have said at the start.
So the moment for me was, you know, let's go give it a shot. Then seeing the early Codex inside of GitHub Copilot, seeing just the code completions and seeing it work, that's when I felt I could go from one to ten, because that was the big call, quite frankly. The one was controversial, but the one to ten was what really made this entire era possible. And then obviously the great execution by the team and the productization on their part and our part. I mean, if I think about it, the collective monetization reach of GitHub Copilot, ChatGPT, Microsoft 365 Copilot, and Copilot: you add those four things, and that is the biggest set of AI products out there on the planet.
And that's, you know, what obviously has led us to sustain all of this. And I think not many people know that your CTO, Kevin Scott, an ex-Googler, lives down here in Silicon Valley. And to contextualize it, Microsoft had missed out on search, had missed out on mobile; you became CEO having almost missed out on the cloud, right, you've described it as catching the last train out of town to capture the cloud. And I think you were pretty determined to have eyes and ears down here so you didn't miss the next big thing. So I assume that Kevin played a big role for you as well. Absolutely, in finding, you know, OpenAI.
Yeah, I mean, in fact, I would say it was Kevin's conviction. And Kevin was also skeptical. That was the thing: I always watch for people who are skeptical and then change their opinion, because to me, that's a signal. I'm always looking for someone who's a non-believer in something and then suddenly changes and gets excited about it. I have all the time in the world for that, because I'm then curious why. And so Kevin started with it. All of us were kind of skeptical, right? In some sense, it defies what we all learned in school: you say, God, there must be an algorithm to crack this, versus just scaling laws and throwing compute at it. But quite frankly, Kevin's conviction that this was worth going after is one of the big things that drove this.
Well, we talk about, you know, that investment that's now worth $130 billion, and I suppose it could be worth a trillion someday, as Sam says. But in many ways that understates the value of the partnership, right? You have the value in the rev share, billions per year going to Microsoft. You have the profit you make off the $250 billion Azure compute commitment from OpenAI. And of course, you get huge sales from the exclusive distribution of the API. So talk to us about how you think about the value across those domains, especially how this exclusivity has brought to Azure a lot of customers who may have been on AWS.
Yeah, no, absolutely. To us, if I look at it, aside from all the equity parts, the real strategic thing that comes together, and that remains going forward, is that stateless API being exclusive on Azure, which quite frankly helps both OpenAI and us and our customers. Because when somebody in the enterprise is trying to build an application, they want an API that's stateless; they want to mix it up with compute and storage, put a database underneath it to capture state, and build a full workload. And that's where, you know, Azure comes together with this API.
And so that's what we're doing with Azure Foundry, right? Because in some sense, let's say you want to build an AI application; the key thing is how you make sure the evals of what you're doing with AI are great. That's where you need even a full app server, in Foundry. That's what we have done. And so therefore, I feel that that is the way we will go to market in our infrastructure business. The other side of the value capture for us is going to be incorporating all this IP. Not only do we have the exclusivity of the model on Azure, but we have access to the IP.
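The "stateless API plus a database underneath it to capture state" pattern can be sketched concretely. This is a toy, assuming nothing about any real Azure or OpenAI endpoint: `call_model` is a stand-in for a stateless chat-completions-style call, and the application itself persists the conversation in its own database and replays it on every request.

```python
# Sketch: the model API is stateless, so the app captures state (here,
# conversation turns) in its own database and sends full history each call.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE turns (session TEXT, role TEXT, content TEXT)")

def call_model(messages):
    # placeholder for a stateless inference endpoint; just echoes the
    # last user turn so the example is self-contained
    return f"echo: {messages[-1]['content']}"

def chat(session: str, user_text: str) -> str:
    db.execute("INSERT INTO turns VALUES (?, 'user', ?)", (session, user_text))
    # full state travels with each request, because the API keeps none
    history = [{"role": r, "content": c}
               for r, c in db.execute(
                   "SELECT role, content FROM turns WHERE session = ?",
                   (session,))]
    reply = call_model(history)
    db.execute("INSERT INTO turns VALUES (?, 'assistant', ?)", (session, reply))
    return reply

chat("s1", "hello")
chat("s1", "second question")
rows = db.execute("SELECT COUNT(*) FROM turns WHERE session='s1'").fetchone()[0]
print(rows)  # four turns persisted: two user, two assistant
```

The database underneath is what turns a bare API into a full workload: swap the in-memory SQLite for a managed database and the same shape holds.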
I mean, having royalty-free access, even forgetting all the know-how and the knowledge side of it, for seven more years gives us a lot of flexibility, business-model-wise. It's kind of like having a frontier model for free. In some sense, if you're an MSFT shareholder, that's where you should start from: we have a frontier model that we can then deploy, whether it's in GitHub, whether it's in M365, whether it's in our consumer Copilot, then add our own data to it and post-train it. That means we can have it embedded in the weights there.
And so therefore, we are excited about the value creation on both the Azure and infrastructure side, as well as in our high-value domains, whether it is in health, in knowledge work, in coding, or in security. You've been consolidating the losses from OpenAI; I think you just reported earnings yesterday and consolidated four billion of losses in the quarter. Do you think that investors may even be attributing negative value, right, because of the losses, as they apply their multiple to earnings? Whereas I hear this and I think about all of those benefits we just described, not to mention the look-through equity value that you own in a company that could be worth a trillion unto itself.
Do you think that the market is kind of misunderstanding the value of OpenAI as a component of Microsoft? Yeah, that's a good one. So I think the approach that Amy is going to take is full transparency, because at some level, I'm no accounting expert, so the best thing to do is to give all of the transparency. That's why this time around we gave the non-GAAP and GAAP numbers, so that at least people can see the EPS. The common-sense way I look at it, Brad, is simple. If you've invested, let's call it, $13.5 billion, you can of course lose $13.5 billion, but you can't lose more than $13.5 billion. At least the last time I checked, that's what you have at risk. You can also say, hey, the $135 billion that is our equity stake today is sort of illiquid, what have you. We don't plan to sell it.
So it's got risk associated with it. But the real story, the one I think you are pulling on, is all the other things that are happening. What's happening with Azure growth? Would Azure be growing if we had not had the OpenAI partnership, to your point? The number of customers who came from other clouds for the first time: this is the thing we really benefited from. What's happening with Microsoft 365? In fact, one of the questions about Microsoft 365 was, what was the next big thing after E5? Guess what, we found it in Copilot. It's bigger than any suite. Talk about penetration and usage and the pace: it's bigger than anything we have done in our information work business, which we have been at for decades. We feel very, very good about the opportunity to create value for our shareholders.
Then at the same time, we're fully transparent so that people can look through to what the losses are. Who knows what the accounting rules are, but we will do whatever is needed, and people will then be able to see what's happening. A year ago, Satya, there were a bunch of headlines that Microsoft was pulling back on AI infrastructure. Fair or unfair, they were out there. Perhaps you guys were a little more conservative, a little more skeptical of what was going on. Amy said on the call last night, though, that you've been short power and infrastructure for many quarters. She thought that you would catch up, but you haven't caught up because demand keeps increasing. I guess the question is, were you too conservative, knowing what you know now?
What's the roadmap from here? It's a great question, because the thing that we realized, and I'm glad we did, is the concept of building a fleet that is truly fungible: fungible for all the parts of the life cycle of AI, fungible across geographies, and fungible across generations. Because one of the key things is, take even the pace at which Jensen and team are moving. In fact, one of the things I like is the speed of light. We now have GB300s that we're bringing up. So you don't want to have ordered a bunch of GB200s that are getting plugged in only to find the GB300s are in full production.
So you kind of have to make sure you're continuously modernizing, you're spreading the fleet all over, you are truly fungible by workload, and you're adding to that the software optimizations we talked about. To me, that is the decision we made. And we said, look, sometimes you may have to say no to some of the demand, including some of the OpenAI demand, right? Because sometimes Sam may say, hey, give me a dedicated, you know, multi-gigawatt data center in one location for training. That makes sense from an OpenAI perspective, but it doesn't make sense for the long-term infrastructure build-out of Azure.
That's where I thought we did the right thing in giving them the flexibility to go procure that from others, while maintaining, again, a significant book of business from OpenAI, but more importantly, giving ourselves the flexibility with other customers and our own first party. Remember, one of the things we don't want to be short on: we talk about Azure, and in fact, sometimes our investors are overly fixated on the Azure number. But remember, for me, the high-margin business is Copilot. It is Security Copilot, it's GitHub Copilot, it's the healthcare Copilot.
So we want to make sure we have a balanced way to approach the returns that investors get. That's one of the other things perhaps misunderstood in our investor base in particular, which I find pretty strange and funny, because they want to hold Microsoft because of the portfolio we have, but man, are they fixated on the growth number of one little thing called Azure. On that point, Azure grew 39% in the quarter on a staggering $93 billion run rate. And I think that compares to GCP, which grew at 32%, and AWS, closer to 20%. But because you did give compute to first party, and because you did give compute to research, it sounds like Azure could have grown 41% or 42% had you had more compute to sell. Absolutely. Absolutely. There's no question. So that's why I think the internal thing is to balance out what we think, again, is in the long-term interest of our shareholders, and to serve our customers well. And also, one of the other things is that people talk about concentration risk, right? We obviously want a lot of OpenAI, but we also want other customers. And so we're shaping the demand here. We're not demand constrained, we're supply constrained. So we're shaping the demand such that it matches the supply in the optimal way, with a long-term view.
To that point, Satya, you talked last night about $400 billion, an incredible number, of remaining performance obligations. You said that's your booked business today, and it'll surely go up tomorrow as sales continue to come in. And you said you're going to need to build out capacity just to serve that backlog. So how diversified is that backlog, to your point? And how confident are you that that $400 billion turns into revenue over the course of the next couple of years? Yeah, that $400 billion has a very short duration; Amy is the expert, but it's about a two-year duration on average.
So that's definitely our intent. That's one of the reasons why we're spending the capital, quite frankly, with high certainty: we just need to clear that backlog. And to your point, it's pretty diversified across both first party and third party. Our own first-party demand is quite likely pretty high. And even among third parties, one of the things we're now seeing is the rise of all the other companies building real workloads that are scaling. So given that, I think we feel very good. That's one of the best things about RPOs: they can be planned for, quite frankly.
And so therefore, we feel very, very good about building. And this doesn't include the additional demand that we're already going to start seeing, including the $250 billion commitment, which will have a longer duration, and we will build accordingly. Right. So there are a lot of new entrants in this race to build out compute: Oracle, CoreWeave, Crusoe, et cetera. And normally we'd think that will compete away margins, but you've somehow managed to build all this out while maintaining healthy operating margins at Azure.
So I guess the question is, for Microsoft, how do you compete in this world where people are leveraging up and taking lower margins, while balancing profit and risk? And do you see any of those competitors doing deals that cause you to scratch your head and say, oh, we're just setting ourselves up for another boom-and-bust cycle? I mean, at some level, the good news for us is that we've been competing even as a hyperscaler. Every day there's a lot of competition between us and Amazon and Google on all of these, right?
It's sort of one of those interesting things: everything is a commodity, right, compute, storage. I remember everybody saying, wow, how can there be any margin? Except at scale, nothing is a commodity. And so, yes, we have to have a cost structure; our supply chain efficiency and our software efficiencies all have to continue to compound in order to make sure there are margins at scale. And to your point, one of the things I really love about the OpenAI partnership is that it's gotten us to scale, right?
This is a scale game. When you have the biggest workload there is running on your cloud, not only are we going to learn faster what it means to operate at scale; your cost structure is going to come down faster than anything else. And guess what, that will make us price competitive. And so I feel pretty confident about our ability to have margins, and this is where the portfolio helps. I've always said, you know, I'm being forced into giving the Azure numbers, because at some level, I never thought of allocating,
I mean, my capital allocation is for the cloud, whether it is Xbox Cloud Gaming or Microsoft 365 or Azure; it's one capital outlay. And then everything is a meter as far as I'm concerned. It's a question of, hey, the blended average of that should match the operating margins we need as a company. Because after all, we're not a conglomerate; we're one company with one platform logic. We're not running five, six different businesses; we're in these five, six different businesses only to compound the returns on the cloud and AI investment.
Yeah, I love that line: nothing is a commodity at scale. You know, there's been a lot of ink and time spent, even on this podcast with my partner Bill Gurley, talking about circular revenues, including, Satya, the credits to OpenAI that were booked as revenue. Do you see anything going on, like the AMD deal, where they traded 10% of their equity for a deal, or the Nvidia deal? Again, I don't want to be overly fixated on the concern, but I do want to address head-on what is being talked about every day on CNBC and Bloomberg. There are a lot of these overlapping deals going on out there. When you think about that in the context of Microsoft, does any of it worry you as to the sustainability or durability of the AI revenues that we see in the world?
Yeah, I mean, first of all, our investment of, let's say, that $13.5 billion, which was all the training investment, was not booked as revenue. That is the reason why we have the equity percentage; that's the reason why we have the 27%, or $135 billion. So that was not something that somehow made it into Azure revenue. In fact, if anything, the Azure revenue was purely the consumption revenue of ChatGPT and anything else, and of the APIs they put out that they monetize and we monetize. As to your question about others, to some degree this has always been there in terms of vendor financing, right? It's not a new concept that when someone's building something, and they have a customer who is also building something, the customer needs financing. It is taking some exotic forms, which obviously need to be scrutinized by the investment community. But that said, vendor financing is not a new concept. Interestingly enough, we have not had to do any of that, right? We really either invested in OpenAI and essentially got an equity stake in return for compute, or essentially sold them great pricing on compute in order to bootstrap them. But others choose to do it differently.
And I think circularity ultimately will be tested by demand, because all of this will work as long as there is demand for the final output of it. And up to now, that has been the case. Certainly, certainly. Let's shift, you know, to software and agents; as you said, over half your business is software applications. Last year on this pod, you made a bit of a stir by saying that much of application software was this thin layer that sat on top of a CRUD database. The notion that business applications exist, that's probably where they'll all collapse, right, in the agent era. Because if you think about it, they are essentially CRUD databases with a bunch of business logic.
The business logic is all going to these agents. Public software companies are now trading at about 5.2 times forward revenue, below their 10-year average of seven times, despite the markets being at all-time highs. And there's lots of concern that SaaS subscriptions and margins may be put at risk by AI. So how today is AI affecting the growth rates of your software products, of those core products, specifically as you think about database, Fabric, security, Office 365? And then the second question, I guess, is what are you doing to make sure that software is not disrupted, but is instead super-powered by AI?
Yeah, I think that's right. So the last time we talked about this, my point really was that the architecture of SaaS applications is changing, because this agent tier is replacing the old business logic tier. The way we built SaaS applications in the past, you had the data, the logic tier, and the UI, all tightly coupled. And AI, quite frankly, doesn't respect that coupling, because it requires you to be able to decouple. And yet the context engineering is going to be very important. Take something like Office 365. One of the things I love about our Microsoft 365 offering is that it's low ARPU, high usage. Right? Outlook or Teams or SharePoint, Word or Excel: people are using them all the time, creating lots and lots of data, which is going into the graph. And our ARPU is low. So that's what gives me real confidence that I can meet this AI tier by exposing all my data.
In fact, one of the fascinating things that's happened, Brad, with both GitHub and Microsoft 365 is that, thanks to AI, we're seeing all-time highs in terms of data going into the graph or the repo. Think about it: the more code that gets generated, whether by Codex or Claude or whatever, where is it going? GitHub. More PowerPoints get created, Excel models get created, all these artifacts and chat conversations; chat conversations are the new docs. They're all going into the graph, and all of that is needed again for grounding. You turn it into a forward index, into embeddings, and basically that semantics is what you use to ground any agent request.
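The grounding step described here, artifacts going into a graph, getting embedded, and then being retrieved to ground an agent request, can be sketched with a toy retriever. A crude bag-of-words vector stands in for a real embedding model, and the file names and contents are invented for illustration.

```python
# Toy sketch of grounding an agent request against accumulated artifacts:
# embed each document, embed the request, and retrieve the closest match.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # stand-in for a real embedding model: bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# the "graph": artifacts produced by everyday work (all invented examples)
corpus = {
    "q3-forecast.xlsx": "revenue forecast model for q3 azure consumption",
    "billing.py": "python module that computes invoice totals",
    "launch-deck.pptx": "marketing launch deck for the new copilot feature",
}
index = {name: embed(text) for name, text in corpus.items()}

def ground(request: str, k: int = 1):
    """Return the k artifacts most relevant to an agent request."""
    q = embed(request)
    ranked = sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)
    return ranked[:k]

print(ground("update the revenue forecast for azure"))  # ['q3-forecast.xlsx']
```

A production system would swap the word counts for learned embeddings and the dictionary for a vector index, but the shape, index the artifacts once, then ground each request by similarity, is the same.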
And so I think the next generation of SaaS applications will have to adjust: if you are high ARPU, low usage, then you have a bit of a problem. But we are the exact opposite; we are low ARPU, high usage. And anyone structured that way can then use this AI as an accelerant. I mean, if you look at the M365 Copilot price, it's higher than anything else that we sell, and yet it's getting deployed faster and with more usage. So I feel very good. Or take coding: who would have thought? Think about GitHub. What GitHub did in the first 10 or 15 years of its existence was basically done again in the last year, because coding is no longer a tool; it's more a substitute for wages.
And so it's a very different type of business model. I kind of think about the stack and where value gets distributed. Until very recently, clouds largely ran precompiled software. You didn't need a lot of GPUs, and most of the value accrued to the software layer: to the database, to applications like CRM and Excel. But it does seem that in the future these interfaces will only be valuable if they're intelligent, right? If they're precompiled, they're kind of dumb. The software has to be able to think and to act and to advise. And that requires the production of these tokens and dealing with the ever-changing context.
And so in that world, it does seem like much more of the value will accrue to the AI factory, if you will, to, you know, Jensen, in helping to produce these tokens at the lowest cost, and to the models. And maybe the agents or the software will accrue a little less of the value in the future than they've accrued in the past. So steelman for me why that's wrong. Yeah, so I think there are two things that are necessary to drive the value of AI. One is what you described first, which is the token factory. And if you unpack the token factory, it's the hardware, the silicon, the system.
But then it is about running it most efficiently with the system software, with all the fungibility and maximum utilization. That's where the hyperscaler's role is, right? What is a hyperscaler? Everybody says, hey, I want to run a hyperscaler; oh, it's simple, I'll buy a bunch of servers, wire them up, and run them. It's not that, right? If it were that simple, there would have been more than three hyperscalers by now. So the hyperscaler is the know-how of running that at max utilization, the token factories. And it's not the end, by the way; it's going to be heterogeneous. Obviously Jensen is super competitive.
Lisa is going to come, you know, Hock is going to produce things from Broadcom, and we will all do our own. So there's going to be a combination. You want to run, ultimately, a heterogeneous fleet that is maximized for token throughput and efficiency, and so on. So that's one job. The next thing is what I call the agent factory. Remember that a SaaS application in the modern world is driving a business outcome; it knows how to most efficiently use the tokens to create some business value. In fact, GitHub Copilot is a great example of it, right? The auto mode of GitHub Copilot is the smartest thing we've done, right?
It chooses, based on the prompt, which model to use for a code completion or a task handoff, right? And you do that not by choosing in some round-robin fashion; you do it because of the feedback cycle you have: the evals, the data loops, and so on. So the new SaaS applications, as you rightfully said, are intelligent applications that are optimized for a set of evals and a set of outcomes, and that then know how to use the token factory's output most efficiently. Sometimes latency matters, sometimes performance matters, and knowing how to make that trade in a smart way is where the SaaS application value is.
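The routing idea described here can be sketched as a tiny "auto mode": pick a model per request from simple prompt features, and keep a running eval score per model as the feedback loop. The model names, the routing rule, and the moving-average scheme are all invented for illustration, not GitHub Copilot's actual mechanism.

```python
# Hedged sketch of an "auto mode" router: choose a model per request based
# on prompt features plus observed eval feedback, rather than round-robin.

models = {"fast-small": 0.0, "deep-large": 0.0}   # running eval scores

def route(prompt: str) -> str:
    # crude feature: short completion-style prompts go to the small model,
    # longer multi-step task handoffs go to the large one
    if len(prompt.split()) < 20 and "plan" not in prompt:
        return "fast-small"
    return "deep-large"

def record_eval(model: str, score: float, alpha: float = 0.3) -> None:
    # feedback loop: exponential moving average of eval outcomes per model,
    # which a fuller router would feed back into the routing decision
    models[model] = (1 - alpha) * models[model] + alpha * score

choice = route("complete this line of code")
record_eval(choice, 0.9)
print(choice)  # fast-small
```

The latency/performance trade the passage mentions lives in `route`: the small model wins when speed matters, the large one when the task is big enough to justify the cost.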
But overall, it is going to be true that there is a real marginal cost to software this time around. When we were doing, you know, CD-ROMs, there wasn't much of a marginal cost; with the cloud there was; and this time around it's a lot more. And so the business models have to adjust, and you have to do these optimizations for the agent factory and the token factory separately. You have a big search business that most people don't know about, but it turns out to be probably one of the most profitable businesses in the history of the world, because people are running billions of searches, and the cost of completing a search, if you're Microsoft, is many fractions of a penny; it doesn't cost very much to complete a search.
But the comparable query or prompt stack today, when you use a chatbot, looks different, right? So I guess the question is, assuming similar levels of revenue in the future for those two businesses, do you ever get to a point where that chat interaction has unit economics as profitable as search? I think that's a great point, because search was pretty magical in terms of its ad unit and its cost economics: there was the index, a fixed cost that you could then amortize in a much more efficient way. Whereas with this one, for each chat, to your point, you have to burn a lot more GPU cycles on both the intent and the retrieval.
So the economics are different. That's why I think a lot of the early economics of chat have been the freemium model and subscription, even on the consumer side. We are yet to discover, whether it's agentic commerce or whatever the ad unit turns out to be, how it's going to be litigated. But at the same time, at this point I use search only for very, very specific navigational queries. I used to say I use it a lot for commerce, but that's also shifting to my, you know, Copilot; look at the Copilot mode in Edge, and Bing, or Copilot.
Now they're blending in. So yes, I think that is going to be a real litigation. Just like we talked about the SaaS disruption, we're at the beginning of the cheese being moved a little in the consumer economics of that category. Right. And given that this is the multi-trillion-dollar category, this is the thing that's driven all the economics of the internet, right? When the economics of search move, for both you and Google, and it converges on something that looks more like a personal agent, a personal assistant chat, that could end up being much, much bigger in terms of the total value delivered to humanity. But on the unit economics, you're not just amortizing this one-time fixed index.
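The contrast in unit economics can be made concrete with back-of-the-envelope arithmetic: search amortizes a fixed index cost over enormous query volume, while a chat answer pays a per-query GPU cost. Every number below is made up purely to show the shape of the comparison, not an estimate of anyone's actual costs.

```python
# Back-of-the-envelope: amortized-index search vs. per-query GPU chat.
# All figures are illustrative placeholders, not real cost data.

index_fixed_cost = 1_000_000_000       # build + refresh the index, per year
searches_per_year = 2_000_000_000_000  # fixed cost amortized over huge volume
search_serving_cost = 0.0001           # near-zero marginal cost per search

search_cost_per_query = index_fixed_cost / searches_per_year + search_serving_cost

gpu_seconds_per_chat = 2.0             # generation burns compute per answer
gpu_cost_per_second = 0.001
chat_cost_per_query = gpu_seconds_per_chat * gpu_cost_per_second

print(f"search ~ ${search_cost_per_query:.4f}/query")
print(f"chat   ~ ${chat_cost_per_query:.4f}/query")
```

The structural point survives any choice of numbers: the search cost per query falls as volume grows, because the index is a fixed cost, while the chat cost per query stays roughly flat, because each answer consumes fresh GPU cycles.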
That's right. And so, the consumer category: you are pulling a thread on something that I think a lot about, which is that during these disruptions, you have to have a real sense of what the category economics are. Is it winner-take-all or not? Both matter. The problem in the consumer space is always that there's a finite amount of time, and so if I'm not doing one thing, I'm doing something else. And that matters if your monetization is predicated on human interaction in particular. Although, if there were truly agentic stuff, even on the consumer side, that could be different.
Whereas in the enterprise, one, it's not winner-take-all. And two, it is going to be a lot more friendly for agentic interaction. So it's not, for example, per-seat versus consumption; the reality is agents are the new seats. And so you can think of it as: the enterprise monetization is much clearer. The consumer monetization, I think, is a little more murky. You know, we've seen a spate of layoffs recently, with Amazon announcing its big layoffs this week. You know, the Mag 7 has had little job growth over the last three years, despite really robust top lines. You know, you didn't really grow your head count from '24 to '25. It's around 225,000.
You know, many attribute this to normal getting fit, you know, just getting more efficient coming out of COVID. And I think there's a lot of truth to that. But do you think part of this is due to AI? Do you think that AI is going to be a net job creator? And do you see this being a long-term positive for Microsoft productivity? It feels to me like the pie grows, but you can do all these things much more efficiently, which either means your margins expand, or it means you reinvest those margin dollars and you grow faster for longer.
You call it the golden age of margin expansion. I'm a firm believer that the productivity curve does and will bend, in the sense that we will start seeing some of what the work is, and the workflow in particular, change, right? There's going to be more agency for you at a task level to get to job complete, because of the power of these tools in your hand. And that I think is going to be the case. That's why, even internally, for example, when you talked about our allocation of tokens, we want to make sure that everybody at Microsoft has, standard issue, right, Microsoft 365 to the hilt, in the most unlimited way, and has GitHub Copilot, so that they can really be more productive.
But here is the other interesting thing, Brad: we're all learning that there's a new way to even work, right? Which is, you know, how to work with agents, right? It's kind of like when Word, Excel, and PowerPoint all first showed up in Office; we had to rethink, let's say, how we did a forecast, right? I mean, think about it: in the '80s, forecasts were inter-office memos and faxes and what have you. And then suddenly somebody said, oh, here's an Excel spreadsheet. Let's put it in an email, send it around, people enter numbers, and there was a forecast.
Similarly, right now, any planning, any execution starts with AI: you research with AI, you think with AI, you share with your colleagues and what have you. So there's a new artifact being created and a new workflow being created. And it's the pace of change of the business process, matching the capability of AI, that's where the productivity efficiencies come from. And so organizations that can master that are going to be the biggest beneficiaries, whether in our industry or, quite frankly, in the real world.
And so is Microsoft benefiting from that? You know, let's think about a couple of years from now, five years from now. At the current growth rate it will be sooner, but let's say five years from now your top line is twice as big as it is today, Satya. How many more employees will you have if you grow revenue like that? One of the best things right now is these examples that I'm hit with every day from the employees of Microsoft. There was this person who leads our network operations, right? I mean, think about the amount of fiber we have had to put in for, like, this two-gigawatt data center we just built out, Fairwater, right?
And the amount of fiber there, for the AI and whatever, it's just crazy, right? And it turns out this is a real-world asset. There are, I think, 400 different fiber operators we're dealing with worldwide. Every time something happens, we're literally going and dealing with all these DevOps pipelines. The person who leads it, she basically said to me, you know what, there's no way I'll ever get the head count to go do all this. Forget it; even if I approved the budget, I couldn't hire all these folks. So she did the next best thing. She just built herself a whole bunch of agents to automate the DevOps pipeline for how to deal with the maintenance.
That is an example, to your point, of a team with AI tools being able to get more productivity. So to your question, I will say we will grow head count. But the way I look at it is that the head count we grow will grow with a lot more leverage than the head count we had pre-AI. And that's the adjustment, I think, we're structurally seeing first, right? Which is, one, you call it getting fit; I think of it as more getting to a place where everybody is really learning how to rethink how they work. And it's the how, not even the what. Even if the what remains constant, how you go about it has to be relearned.
And it's the unlearning and learning process that I think will take the next year or so. Then the head count growth will come with max leverage. Yeah, I think we're on the verge of incredible economic productivity growth. It does feel like, when I talk to you or Michael Dell, that most companies aren't even really in the first inning, maybe the first batter of the first inning, in reworking those workflows to get maximum leverage from these agents. But it sure feels like, over the course of the next two to three years, that's where a lot of the gains are going to start coming from.
And again, you know, I certainly am an optimist. I think we're going to have net job gains from all of this. But I think those companies will just be able to grow their number of employees slower than their top line. That is the productivity gain to the company; aggregate all that up, and that's the productivity gain to the economy. And then we'll just take that consumer surplus and invest it in creating a lot of things that didn't exist before. 100%. 100%. Even in software development, right? One of the things I look at is, no one would say we're going to have a challenge having, you know, more software engineers contribute to our society, because the reality is, look at the IT backlog in any organization. And so the question is, all these software agents are hopefully going to help us go and take a whack at all of the IT backlog we have.
And think of that dream of evergreen software; that's going to come true. And then think about the demand for software. So I think that, to your point, the levels of abstraction at which knowledge work happens will change. We will adjust to that, the work and the workflow. That will then adjust itself even in terms of the demand for the products of this industry. I'm going to end on this, which is really around the reindustrialization of America. I've said that if you add up the $4 trillion of capex that you and so many of the big US tech companies are investing over the course of the next four or five years, it's about 10 times the size of the Manhattan Project on an inflation-adjusted or GDP-adjusted basis.
So it's a massive undertaking for America. The president has made it a real priority of his administration to recut the trade deals. And it looks like we now have trillions of dollars committed; the South Koreans committed $350 billion of investments just today into the United States. And when you think about what you see going on in power in the United States, both production, the grid, etc., and what you see going on in terms of this reindustrialization, how do you think this is all going? And maybe, just as we're landing the plane here, reflect on your level of optimism for the few years ahead. Yeah, you know, I feel very optimistic, because in some sense, Brad Smith was telling me about the economy around the Wisconsin data center.
It's fascinating. Most people think of a data center as, yeah, it's going to be one big warehouse, and it's fully automated. A lot of that is true. But first of all, look at what went into the construction of that data center and the local supply chain of the data center. That is in some sense the reindustrialization of the United States as well, even before you get to what is happening in Arizona with the TSMC plants, or what is happening with Micron and their investments in memory, or Intel and their fabs, and what have you, right? There's a lot of stuff that we will want to start building. That doesn't mean we won't have trade deals that make sense for the United States with other countries.
But to your point, the reindustrialization for the new economy, and making sure we have all the skills and all that capacity, from power on down, I think is very important for us. And the other thing that I've also said, Brad, and this is something that I've had a chance to talk to President Trump as well as Secretary Lutnick and others about, is that it's important to recognize that we as hyperscalers of the United States are also investing around the world. So in other words, the United States is the biggest investor in compute factories, or token factories, around the world. Not only are we attracting foreign capital to invest in our country so that we can reindustrialize,
we are also helping, whether it's in Europe or in Asia or elsewhere in Latin America and in Africa, with our capital investments, bringing the best American tech to the world that they can then innovate on and trust. And both of those, I think, really bode well for the United States long term. I'm grateful for your leadership. Sam is really helping lead the charge at OpenAI for America. I think this is a moment where, as I look ahead, you know, you can see 4% GDP growth on the horizon. We'll have our challenges. We'll have our ups and downs. These tend to be stairs, you know, stairs up rather than a line straight up and to the right.
But I for one see a level of coordination going on between Washington and Silicon Valley, between big tech and the reindustrialization of America, that gives me cause for incredible hope. Watching what happened this week in Asia, led by the president and his team, and then watching what's happening here is super exciting. So thanks for making the time. We're big fans. Thanks. Thanks, Satya. Thanks so much, Brad. Thank you. As a reminder to everybody: just our opinions, not investment advice.