I would guess that the world is going to get a lot funnier and, like, a lot weirder. If you think that someone is doing something bad, and they think it's really valuable, most of the time in my experience, they're right and you're wrong. I am worried that we're just removing all the friction between us and getting totally reward hacked by our technology. We are trying to build a coding agent that advances Llama research. I would guess that sometime in the next 12 to 18 months, we'll reach the point where most of the code that's going towards these efforts is written by AI. I tend to think that for at least the foreseeable future, this is going to lead towards more demand for people doing work, not less. If you've got the cost of providing that service down to one tenth of what it would have otherwise been, maybe now that actually makes sense to go do.
All right, Mark, thanks for coming on the podcast again. Yeah, happy to do it. Good to see you. You too. Last time you were here, you had launched Llama 3. Yeah. Now you've launched Llama 4? Well, the first version. That's right. What's new? What's exciting? What's changed? Oh, well, I mean, the whole field is so dynamic. I feel like a ton has changed since the last time we talked. Meta AI has almost a billion people using it now, monthly. So that's pretty wild. And I think this is going to be a really big year on all of this, especially once you start getting the personalization loop going, which we're just starting to build in now, from both the context that all the algorithms have about what you're interested in, your feed, all your profile information, all the social graph information, but also just what you're interacting with the AI about.
I think that's just going to be the next thing that's going to be super exciting, so I'm really big on that. The modeling stuff continues to make really impressive advances too, as you know. On the Llama 4 stuff, I'm pretty happy with the first set of releases. We announced four models, and we released the first two, the Scout and Maverick ones, which are kind of the mid-size to small models. Actually, the most popular Llama 3 model was the 8 billion parameter model, so we've got one of those coming in the Llama 4 series too. Our internal code name for it is Little Llama, and that's coming over the coming months. But the Scout and Maverick ones are good. They're some of the highest intelligence per cost that you can get of any model that's out there. They're natively multimodal, very efficient, run on one host, and designed to be very efficient and low latency for a lot of the use cases that we're building for internally.
And that's our whole thing. We basically build what we want, and then we open source it so other people can use it too. So I'm excited about that. I'm also excited about the Behemoth model, which is coming up. That's going to be our first model that is sort of at the frontier. It's more than two trillion parameters, so as the name says, it's quite big. We're trying to figure out how we make that useful for people. It's so big that we've had to build a bunch of infrastructure just to be able to post-train it ourselves, and we're trying to wrap our heads around how the average developer out there is going to be able to use something like this. How do we make it useful for distilling into models that are a reasonable size to run? Because you're obviously not going to want to run something like that in a consumer model.
But yeah, there's a lot to go. As you saw with the Llama 3 stuff last year, the initial Llama 3 launch was exciting, and then we just kept building on that over the year. 3.1 was when we released the 405 billion parameter model. 3.2 was when we got all the multimodal stuff in. So we basically have a roadmap like that for this year too. I just want to hear more about it. There's this impression that the gap between the best closed-source and open-source models has increased over the last year. I know the full family of Llama 4 models isn't out yet, but Llama 4 Maverick is 35th on Chatbot Arena, and on major benchmarks it seems like o4-mini or Gemini 2.5 Flash are beating Maverick, which is in the same class. What do you make of that impression?
Yeah, well, okay, there's a few things. I actually think this has been a very good year for open source overall. If you go back to last year, what we were doing with Llama was like the only real super innovative open source model. Now you have a bunch of them in the field. And I think in general, the prediction that this would be the year where open source generally overtakes closed source as the most used models out there is generally on track to be true. The thing that's been sort of an interesting surprise, positive in some ways, negative in others, but I think overall good, is that it's not just Llama. There are a lot of good ones out there. So I think that that's quite good.
Then there's the reasoning phenomenon, which you're basically alluding to in talking about o3 and o4 and some of the other models. I do think there is this specialization happening where, if you want a model that is the best at math problems or coding or different things like that, these reasoning models with the ability to consume more test-time or inference-time compute in order to provide more intelligence are a really compelling paradigm. We're going to do that too; we're building a Llama 4 reasoning model, and that'll come out at some point.
But for a lot of the things that we care about, latency and good intelligence per cost are actually much more important product attributes. If you're primarily designing for a consumer product, people don't necessarily want to wait half a minute for it to think through the answer. If you can provide an answer that's generally quite good in half a second, then that's great and that's a good trade-off. I think both of these are going to end up being important directions. I am optimistic about integrating the reasoning models with the core language models over time. I think that's the direction Google has gone in with some of the more recent Gemini models, and I think that's really promising.
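A minimal sketch of the trade-off being described here, with hypothetical model names and latency figures (this is illustrative routing logic under assumed numbers, not anything Meta has described):

```python
# Hypothetical illustration: route to a slow "reasoning" model only when
# the task needs it and the caller's latency budget allows the wait.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str
    typical_latency_s: float

FAST = ModelChoice("fast-chat-model", 0.5)        # placeholder name and latency
REASONER = ModelChoice("reasoning-model", 30.0)   # placeholder name and latency

def route(latency_budget_s: float, needs_deep_reasoning: bool) -> ModelChoice:
    if needs_deep_reasoning and latency_budget_s >= REASONER.typical_latency_s:
        return REASONER
    # Default to the low-latency, high intelligence-per-cost model.
    return FAST

print(route(1.0, needs_deep_reasoning=True).name)   # fast-chat-model
print(route(60.0, needs_deep_reasoning=True).name)  # reasoning-model
```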
But I think there's just going to be a bunch of different stuff that goes on. You also mentioned the whole Chatbot Arena thing, which I think is interesting, and it goes to this challenge of how you do benchmarking. Basically, how do you know what models are good for which things? One of the things that we've generally tried to do over the last year is anchor more of our models in our Meta AI product North Star use cases, because the issue with both open-source benchmarks and any given thing like the LMArena stuff is that they're often skewed toward a very specific set of use cases, which are often not actually what any normal person does in your product.
The portfolio of things they're trying to measure is often weighted differently from what people care about in any given product. Because of that, we've found that trying to optimize too much for that stuff has often led us astray and not actually led towards the highest quality products, the most usage, and the best feedback within Meta AI as people use our stuff. So we're trying to anchor our North Star in the product value that people report to us, what they say they want, and what their revealed preferences are, using the experiences that we have.
Sometimes these things don't quite line up, and I think a lot of them are quite easily gameable. On the arena you'll see stuff like Sonnet 3.7, which is a great model, and it's not near the top. And it was relatively easy for our team to tune a version of Llama 4 Maverick that was way at the top, whereas the one that we released, the pure model, has no tuning for that at all, so it's further down. I think you just need to be careful with some of these benchmarks, and we're going to index primarily on the products.
Do you feel like there is some benchmark which captures what you see as the North Star of value to the user, which can be objectively measured between the different models, where you're like, I need Llama 4 to come out on top on this? Well, our benchmark is basically user value in Meta AI. Right, so you can't compare other models on it. Well, we might be able to, because we might be able to run other models in that and be able to tell. And I think that's one of the advantages of open source: you basically have a good community of folks who can poke holes at it, like okay, where is your model not good and where is it good?
But I think the reality at this point is that all these models are optimized for slightly different mixes of things. I think all the leading labs are trying to create general intelligence, or superintelligence, whatever you call it: AI that can lead towards a world of abundance where everyone has these superhuman tools to create whatever they want, which leads to dramatically empowering people and creating all these economic benefits. However you define that, I think that's what a lot of the labs are going for.
But there's no doubt that different folks have optimized towards different things. I think the Anthropic folks have really focused on coding and agents around that. The OpenAI folks, I think, have gone a little more towards reasoning recently. And I think there is a space, which if I had to guess will probably end up being the most used one, that is quick, very natural to interact with, very natively multimodal, and fits throughout your day into the ways that you want to interact with it.
I think you got a chance to play around with the new Meta AI app that we're releasing, and one of the fun things we put in there is the demo for the full duplex voice. It's early, right? There's a reason why we haven't made that the default voice model in the app. But there's something about how naturally conversational it is that is just really fun and compelling, and I think being able to mix that in with the right personalization is going to lead towards a product experience where, I would basically just guess, if you go forward a few years, we're just going to be talking to AI throughout the day about different things we're wondering. You'll have your phone. You'll talk to it while you're browsing your feed apps. It'll give you context about different stuff. It'll be able to answer questions. It'll help you as you're interacting with people in messaging apps. Eventually, I think we'll walk through our daily lives with glasses or other kinds of AI devices and just be able to seamlessly interact with it all day long.
So I think that is the North Star. Whatever the benchmarks are that lead towards people feeling like the quality is what they want to interact with, that is actually the thing that is ultimately going to matter the most to us. I got a chance to play around with both Orion and the Meta AI app, and the voice mode was super smooth. That was quite impressive. On the point of what the different labs are optimizing for, to steelman their view: I think a lot of them think that once you fully automate software engineering and AI research, you can kick off an intelligence explosion, where you have millions of copies of these software engineers replicating the research that happened between Llama 1 and Llama 4, that scale of improvement again, in a matter of weeks or months rather than years. And so it really matters to just close the loop on the software engineer, and then you can be the first to ASI.
What do you make of that? Well, I personally think that's pretty compelling, and that's why we have a big coding effort too. We're working on a number of coding agents inside Meta. Because we're not really an enterprise software company, we're primarily building it for ourselves. So again, we go for the specific goal. We're not trying to build a general developer tool. We are trying to build a coding agent and an AI research agent that advances Llama research specifically, one that's just fully plugged into our toolchain and all of this.
I think that's important and is going to end up being an important part of how this stuff gets done. I would guess that sometime in the next 12 to 18 months, we'll reach the point where most of the code that's going towards these efforts is written by AI. And I don't mean autocomplete. Today you have good autocomplete: you start writing something and it can complete the section of code. I'm talking more like, you give it a goal, it can run tests, it can improve things, it can find issues, and it writes higher quality code than the average very good person on the team already. I think that's going to be a really important part of this for sure.
I don't know if that's the whole game, though. I think that's going to be a big industry, and it's going to be an important part of how AI gets developed. One way to think about this is that this is a massive space. I don't think there's just going to be one company with one optimization function that serves everyone as well as possible. I think there are a bunch of different labs that are going to be doing leading work in different domains. Some are going to be more enterprise focused or coding focused. Some are going to be more productivity focused. Some are going to be more social or entertainment focused.
Within the assistant space, I think there are going to be some that are much more informational or productivity oriented, and some that are more companion focused. There's going to be a lot of stuff that's just fun and entertaining and shows up in your feed. I think there's just a huge amount of space. Part of what's fun about going towards this AGI future is that there are a bunch of common threads for what needs to get invented, but there are a lot of things at the end of the day that need to get created. I think you'll start to see a little more specialization between the groups, if I had to guess.
It's really interesting to me that you basically agree with the premise that there will be an intelligence explosion and something like superintelligence at the end. But if that's the case, and tell me if I'm misunderstanding you, why even bother with personal assistants and so on? Why not just get to superhuman intelligence first and then deal with everything else after that? Well, I think that's just one aspect of the flywheel. Part of what I generally disagree with on the fast takeoff thing is that it takes time to build out physical infrastructure. If you want to build a gigawatt cluster of compute, that is just going to take some time.
It takes Nvidia a bunch of time to stabilize their new generation of systems. Then you need to figure out the networking around it. Then you need to build the building, you need to get permitting, you need to get the energy, and then you need gas turbines or green energy or whatever it is. There's a whole supply chain of that stuff. We talked about this a bunch the last time I was on the podcast with you. Some of these are just physical-world, human-time things, where as you start getting more intelligence in one part of the stack, you'll basically just run into a different set of bottlenecks. That's the way engineering always works: you solve one bottleneck, you get another bottleneck.
Another bottleneck in the system, or another ingredient that's going to make this work well, is people getting used to the system, learning it, and having a feedback loop with it. These systems don't tend to be the type of thing where something just shows up fully formed and people magically know how to use it. I think there is this co-evolution that happens where people are learning how to best use these AI assistants, and at the same time, the AI assistants are learning what those people care about, and the developers of those AI assistants are able to make them better.
Then you're also building up this base of context. You wake up a year or two into it, and the AI assistant can reference things that you talked about a couple of years ago, and that's pretty cool. You couldn't do that if you just launched the perfect thing on day one. There's no way it could reference what you talked about two years ago if it didn't exist two years ago. So my view is: there's this huge intelligence growth, there's a very rapid curve on the uptake of people interacting with the AI assistants, and there's the learning feedback and data flywheel around that.
Then there is also the build-out of the supply chains, infrastructure, and regulatory frameworks needed to scale a lot of the physical infrastructure. At some level, all of those are going to be necessary, not just the coding piece. One specific example of this that I think is interesting: even a few years ago, we had a project, I think on our ads team, to automate ranking experiments. That's a pretty constrained environment. It's not like writing open-ended code. It's basically: look at the whole history of the company, every experiment that any engineer has ever done in the ad system, look at what worked, what didn't, and what the results were, and formulate new hypotheses for different tests we should run that could improve the performance of the ad system.
We basically found we were bottlenecked on compute to run tests based on the number of hypotheses. It turns out that even with just the humans we have right now on the ads team, we already have more good ideas to test than we have either compute or cohorts of people to test them with. Even if you have three and a half billion people using your products, each test needs to be statistically significant. It needs to have some number of, whatever it is, hundreds of thousands or millions of people. There's only so much throughput you can get on testing through that. We're already at the point, even with just the people we have, that we can't really test everything we want. So just being able to generate more things to test is not necessarily going to be additive to that.
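To make that throughput ceiling concrete, here is a back-of-envelope sketch; every number in it is a hypothetical placeholder, not a figure from Meta:

```python
# Rough ceiling on concurrent A/B tests when cohorts must be disjoint
# and each test needs a statistically significant cohort.
user_base = 3_500_000_000      # ~3.5B users, per the conversation
cohort_per_test = 2_000_000    # assumed: 1M treatment + 1M control
test_duration_days = 14        # assumed run length per experiment

concurrent_tests = user_base // cohort_per_test
tests_per_year = concurrent_tests * (365 // test_duration_days)

print(f"concurrent tests: {concurrent_tests:,}")  # 1,750
print(f"tests per year:   {tests_per_year:,}")    # 45,500
```

Even with billions of users, validated-experiment throughput is finite.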
We need to get to the point where the average quality of the hypotheses that the AI is generating is better than everything above the line that we're actually able to test, which the best humans on the team have come up with, before it will even be marginally useful. I think we'll get there pretty quickly. But it's not like, okay, cool, the thing can write code, and all of a sudden everything is improving massively. There are these real-world constraints. First, it needs to be able to do a reasonable job. Then it needs to have the compute and the people to test with. Then over time, as the quality creeps up, I don't know, maybe we're here in five or ten years and no set of people can generate a hypothesis as good as the AI system. I don't know, maybe.
Then I think in that world, obviously, that's going to be how all the value is created. But that's not the first step. Publicly available data is running out. The major AI labs like Meta, Google DeepMind, and OpenAI all partner with Scale to push the boundaries of what's possible. Through Scale's Data Foundry, major labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. Scale's research team, SEAL, is creating the foundations for integrating advanced AI into society through practical AI safety frameworks and public leaderboards around safety and alignment.
Their latest leaderboards include Humanity's Last Exam, EnigmaEval, MultiChallenge, and VISTA, which test a range of capabilities, from expert-level reasoning to multimodal puzzle solving to performance in multi-turn conversations. Scale also just released Scale Evaluation, which helps diagnose model limitations. Leading frontier model developers rely on Scale Evaluation to improve the reasoning capabilities of their best models. If you're an AI researcher or engineer and you want to learn more about how Scale's Data Foundry and research lab can help you go beyond the current frontier of capabilities, go to scale.com/dwarkesh.
If you buy this view that this is where intelligence is headed, the reason to be bullish on Meta is obviously that you have all this distribution, which you can also use to learn things that are useful for training. You mentioned Meta AI now has almost a billion active users. Not the app, not the app. The app is a standalone thing that we're just launching now. I think it's fun for people who want to use it. It's a cool experience. We can talk about that. We're experimenting with some new ideas in there that I think are novel and worth talking through.
But I'm talking mostly about our apps. Meta AI is actually most used in WhatsApp, and WhatsApp is mostly used outside of the US. We just passed like 100 million people in the US, but it's not the primary messaging system in the US; iMessage is. So I think people in the US probably tend to underestimate Meta AI's usage somewhat. That's also part of the reason why the standalone app is going to be so important: the US is, for a lot of reasons, one of the most important countries, and the fact that WhatsApp is the main way people are using Meta AI, and that's not the main messaging system in the US, means we need another way to build a first-class experience that's in front of people.
And I guess to finish the question, the bearish case would be that if the future of AI is less about just answering your questions and more about being a virtual coworker, it's not clear how Meta AI inside of WhatsApp gives you the relevant training data to make a fully autonomous programmer or remote worker. So in that case, does it not matter that much who has more distribution right now with LLMs? Well, again, I just think there are going to be different things. If you were sitting at the beginning of the development of the internet and asking, what's going to be the main internet thing, is it going to be knowledge work or massive consumer apps? It's like, I don't know, you get both, right? You don't have to choose one.
Now, the world is big and complicated, and does one company build all that stuff? I think normally the answer is no. But to your question: people do not code in WhatsApp for the most part, and I don't foresee that people starting to write code in WhatsApp is going to be a major use case. Although I do think people are going to ask the AI to do a lot of things that result in the AI coding without them necessarily knowing it, so that's a separate thing. But we do have a lot of people who are writing code at Meta, and they use Meta AI. We have this internal thing that we call MetaMate, and a number of different coding and AI research agents that we're building around that. That has its own feedback loop, and I think it can get good for accelerating those efforts.
Again, I just think there are going to be a bunch of things. I think AI is almost certainly going to unlock this massive revolution in knowledge work and code. I also think it's going to be kind of the next generation of search, in how people get information and do more complex information tasks. And I also think it's going to be fun. People are going to use it to be entertained. A lot of the internet is memes and humor. We have this amazing technology at our fingertips, and it is amazing and funny, when you think about it, how much human energy just goes towards entertaining ourselves, pushing culture forward, and finding humorous ways to explain cultural phenomena that we observe.
I think that's almost certainly going to be the case in the future. Look at the evolution of things like Instagram and Facebook. If you go back 10, 15, 20 years ago, it was text. Then we all got phones with cameras, and most of the content became photos. Then the mobile networks got good enough that if you wanted to watch a video on your phone, it wasn't just buffering, so that got good. Over the last 10 years, most of the content has moved towards video, and at this point, most of the time spent on Facebook and Instagram is video. But, I don't know, do you think in five years we're just going to be sitting in our feed and consuming media that's video?
No, it's going to be interactive. You'll be scrolling through your feed, and there will be content that, I don't know, maybe looks like a Reel to start, but then you talk to it or interact with it, and it talks back, or it changes what it's doing, or you can jump into it like a game and interact with it. And that's all going to be AI. So my point is, there are just all these different things, and we're ambitious, so we're working on a bunch of them. But I don't think any one company is going to do all of it.
Okay, so on this point of AI-generated content and AI interactions: already people have meaningful relationships with AI therapists, AI friends, maybe more. And this is just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth. People are going to have relationships with AIs. How do we make sure these are healthy relationships? Well, I think there are a lot of questions that you can only really answer as you start seeing the behaviors. So probably the most important thing up front is to just ask that question and care about it at each step along the way.
But I also think being too prescriptive up front, and saying we think these things are not good, often cuts off value. Because, I don't know, people use stuff that's valuable for them. One of my core guiding principles in designing products is that people are smart. They know what is valuable in their lives. Every once in a while, something bad happens in a product, and you want to design your products well to minimize that. But if you think someone is doing something bad and they think it's really valuable, most of the time in my experience they're right and you're wrong, and you just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life.
Yeah, so that's the main way I think about it. I do think people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with people in their lives. It's like: I'm having this issue with my girlfriend, help me have this conversation. Or: I need to have this hard conversation with my boss at work, how do I have that conversation? That's pretty helpful.
And then as the personalization loop kicks in and the AI starts to get to know you better and better, I think that will just be really compelling. One thing from working on social media for a long time: there's a stat I always think is crazy. The average American has, I think, fewer than three people they'd consider friends. And the average person has demand for meaningfully more; I think it's like 15 friends or something, right? I guess there's probably some point where you're like, all right, I'm just too busy, I can't deal with more people.
But the average person wants more connection than they have. So there's a lot of questions people ask, like, okay, is this going to replace in-person connections or real-life connections? And my default is that the answer is probably no. There are all these things that are better about physical connections, when you can have them. But the reality is that people just don't have as much connection as they'd like, and they feel more alone a lot of the time than they would like.
So for a lot of these things that today might carry a little bit of a stigma, I would guess that over time we will find the vocabulary as a society to articulate why they are valuable, why the people doing them are rational for doing it, and how it is adding value to their lives. But also, the field is very early. There are a handful of companies doing virtual therapist stuff, and there's virtual girlfriend type stuff, but it's very early. The embodiment in these things is pretty weak.
In a lot of them, you open it up and it's just an image of the therapist or the person you're talking to. Sometimes there's some very rough animation, but it's not an embodiment. You've seen the stuff we're working on in Reality Labs, where you have the Codec Avatars and it feels like a real person. I think that's where it's going. You'll basically be able to have an always-on video chat with the AI. And the gestures are important too: more than half of communication, when you're actually having a conversation, is not the words you speak. It's all the nonverbal stuff.
Yeah. I did get a chance to try out Orion the other day, and I thought it was super impressive. I'm mostly optimistic about the technology, just because, as you mentioned, I'm generally libertarian about it: if people are doing something, they probably think it's good for them. Although I actually don't know if it's the case that somebody using TikTok would say they're happy with how much time they're spending on TikTok. But I'm mostly optimistic about it, also in the sense that we're going to be living in this future world of AGI.
In order to keep up with that, humans need to be upgrading our capabilities as well, with tools like this. And just generally, there will be more beauty in the world if you can see Studio Ghibli everywhere or something. But I was worried that one of the flagship use cases your team showed me was: I'm sitting at the breakfast table, and on the periphery of my vision is just a bunch of Reels scrolling by. Maybe in the future, my AGI girlfriend is on the other side of the screen or something.
And so I am worried that we're just removing all the friction between us and getting totally reward hacked by our technology. How do we make sure this is not what ends up happening in five years? Again, I think people have a good sense of what they want. That experience you saw was a demo just to show multitasking and holograms. I agree that I don't think the future is stuff trying to compete for your attention in the corner of your vision all the time. I don't think people would like that too much.
So one of the things we're really mindful of in designing these glasses is that probably the number one thing glasses need to do is get out of the way and be good glasses. As an aside, I think that's part of the reason why the Ray-Ban Meta product has done so well: it's great for listening to music, taking phone calls, and taking photos and videos, and the AI is there when you want it.
But when you don't, it's a great, good-looking pair of glasses that people like, and it gets out of the way well. I would guess that's going to be a very important design principle for the augmented reality future. The main thing I see here is that it's kind of crazy that, for how important the digital world is in all of our lives, the only way we can access it is through these physical, digital screens.
You have a phone, you have your computer, you can put up a big TV; it's this huge physical thing. It just seems like we're at the point with technology where the physical and digital worlds should really be fully blended, and that's what the holographic overlays allow you to do. But I agree, a big part of the design principles around that is going to be that you'll be interacting with people and you'll be able to bring digital artifacts into those interactions and do cool things very seamlessly. If I want to show you something, here's a screen, I can show you, you can interact with it, it can be 3D, we can play with it. You want to play a card game or whatever? All right, here's a deck of cards we can play with. The two of us are here physically, and you have a third friend who's just hologramming in, and they can participate too.
But in that world, just like you don't want your physical space to be cluttered, because it wears on you psychologically, I don't think people are going to want their digital space to feel that way either. So that's more of an aesthetic question, and one of these norms that will have to get worked out. But I think we'll figure it out. Going back to the AI conversation: you mentioned how big of a bottleneck the physical infrastructure can be. Related to other open source models like DeepSeek: DeepSeek right now has less compute than a lab like Meta, and you could argue it's competitive with a lot of your models. If China is better at physical infrastructure, industrial scale-ups, getting more power and more data centers online, how worried are you that they might beat us here?
I mean, I think it's a real competition. You're seeing the industrial policies really play out. China is bringing more power online, and because of that, I think the US really needs to focus on streamlining the ability to build data centers and produce energy, or I think we will be at a significant disadvantage. At the same time, some of the export controls on things like chips, I think you can see how they're clearly working in a way. There was all this conversation about how DeepSeek did all these very impressive low-level optimizations. And the reality is they did, and that is impressive. But then you ask, why did they have to do that when none of the American labs did? Well, because they're using partially nerfed chips, the only thing Nvidia is allowed to sell in China because of the export controls.
So DeepSeek basically had to spend a bunch of their calories and time doing low-level infrastructure optimizations that the American labs didn't have to do. Now, they produced a good result on text. DeepSeek is text only, so the infrastructure is impressive and the text result is impressive. But every new major model that comes out now is multimodal: image, voice. Theirs isn't. And the question is, why is that the case? I don't think it's because they're not capable of doing it. I think they basically had to spend their calories on these infrastructure optimizations to overcome the fact that there were these export controls.
When you compare Llama 4 with DeepSeek: our reasoning model isn't out yet, so the R1 comparison isn't clear yet. But we're basically in the same ballpark on all the text stuff as what DeepSeek is doing, with a smaller model. So it's much more efficient; the cost per intelligence is lower with what we're doing for Llama on text. And then on all the multimodal stuff, we're effectively leading; it just doesn't even exist in their stuff. So I think the Llama 4 models, when you compare them to what they're doing, are good, and I think generally people are going to prefer to use the Llama 4 models.
But there is this interesting contour where it's clearly a good team doing stuff over there. And I think you're right to ask about the accessibility of power, compute, and chips, because the kind of work you're seeing the different labs do, and how it plays out, is somewhat downstream of that. Premium products attract a ton of fake account signups, bot traffic, and free-tier abuse. And AI is so good now that it's basically useless to just have a captcha of six squiggly numbers on your signup page.
Take Cursor. People were going to insane lengths to take advantage of Cursor's free credits: creating and deleting thousands of accounts, sharing logins, even coordinating through Reddit. And all this was costing Cursor a ton of money in terms of inference compute and LLM API calls. Then they plugged in WorkOS Radar. Radar distinguishes humans from bots. It looks at over 80 different signals, from your IP address to your browser to even the fonts installed on your computer, to ensure that only real users can get through. Radar currently runs millions of checks per week.
And when you plug Radar into your own product, you immediately benefit from the millions of training examples that Radar has already seen through other top companies. Previously, building this level of advanced protection in-house was only possible for huge companies, but now with WorkOS Radar, advanced security is just an API call away. Learn more at workos.com/radar. All right, back to Zuck. So Sam Altman recently tweeted that OpenAI is going to release an open source, state-of-the-art reasoning model. Part of the tweet was that they will not do anything silly, like say that you can only use it if you have fewer than 700 million users.
DeepSeek has the MIT license, whereas with Llama, I think a couple of the contingencies in the Llama license require you to say "Built with Llama" on applications using it, and any model that you train using Llama has to begin with the word "Llama". What do you think about the license? Should it be less onerous for developers? Look, we basically pioneered the open source LLM thing, and I don't consider the license to be onerous. When we were starting to push on open source, there was this big debate in the industry: is this even a reasonable thing to do?
Can you do something that is safe and trustworthy with open source? Will open source ever be competitive enough that anyone will even care? A lot of the hard work of answering those questions came from the teams at Meta; there are other folks in the industry too, but really the Llama models were the ones that I think broke open this whole open source AI thing in a huge way. We were very focused on: okay, if we're going to put all this energy into it, then at a minimum, if these large cloud companies like Microsoft and Amazon and Google are going to turn around and sell our model, we should at least be able to have a conversation with them before they do that about what kind of business arrangement we should have.
But our goal with the license isn't to stop people from using the model. We just think, okay, if you're one of those companies, or if you're Apple, just come talk to us about what you want to do and let's find a productive way to do it together. So I think that's generally been fine. Now, if the whole open source part of the industry evolves in a direction where there are a lot of other great options, and the license ends up being a reason why people don't want to use Llama, then I don't know, we'll have to reevaluate the strategy and what makes sense to do at that point.
But I just don't think we're there. In practice, we haven't seen companies coming to us and saying, we don't want to use this because your license says that if you reach 700 million people, you have to come talk to us. So far, it's a little more something we've heard from open source purists: is this as clean an open source model as you'd like it to be? And look, that debate has existed since the beginning of open source, with all the GPL license stuff versus other things. Does it need to be the case that anything that touches open source has to be open source, or can people just take it and use it in different ways?
I'm sure there will continue to be debates around this. But if you're spending many, many billions of dollars training these models, asking the other companies that are also huge, similar in size, and can easily afford to have a relationship with us to talk to us before they use it seems like a pretty reasonable thing. Now, suppose there are a bunch of good open source models out there, so that part of your mission is fulfilled, and maybe other models are better at coding.
Is there a world where you just say: look, the open source ecosystem is healthy, there's plenty of competition, we're happy to just use some other model, whether it's for internal software engineering at Meta or deploying in our apps? We don't necessarily need to build with Llama. Well, again, we do a lot of things, so it's possible. But let's take a step back. The reason why we're building our own big models is because we want to be able to build exactly what we want, right?
And none of the other models in the world are exactly what we want. If they're open source, you can take them and fine-tune them in different ways, but you still have to deal with the model architectures, and they make different size trade-offs that affect the latency and inference cost of the models. At the scale we operate at, that stuff really matters.
We made the Llama Scout and Maverick models certain sizes for a specific reason: they fit on a host, and we wanted certain latency, especially for the voice models that we're working on, which we want to basically pervade everything we're doing, from the glasses to all of our apps to the Meta AI app and all this stuff.
So there's a level of control over your own destiny that you only get when you build the stuff yourself. That said, AI is going to be used in every single thing that every company does. When we build a big model, we also need to choose which use cases internally we're going to optimize for.
So does that mean that for certain things we might think, okay, maybe Claude is better for building this specific development tool that this team is using? Then use that. Fine. Great. We don't want to fight with one hand tied behind our back. We're doing a lot of different stuff.
You also asked whether it would matter less because other people are doing open source. I don't know. On this I'm a little more worried, because for anyone who shows up and is doing open source now that we have done it, there's a question: would they still be doing open source if we weren't doing it?
I think there are a handful of folks who see the trend that more and more development is going towards open source and think, crap, we kind of need to be on this train or else we're going to lose. They have some closed model API, and increasingly that's not what a lot of developers want. So I think you're seeing a bunch of the other players start to do some work in open source.
It's just unclear whether it's dabbling or fundamental for them in the way it has been for us. A good example is what's going on with Android. Android started off as the open source thing, and there's not really any open source alternative to it. And I think over time, Android has just been getting more and more closed.
So if you're us, you kind of need to worry that if we stopped pushing the industry in this direction, all these other people, maybe they're only really doing it because they're trying to compete with us in the direction that we're pushing things. They've already revealed their preference for what they would build if open source didn't exist, and it wasn't open source, right?
So I just think we need to be careful about relying on that continued behavior for the future of the technology that we're going to build at the company. Another thing I've heard you mention is that it's important that the standard gets built around American models like Llama.
I wanted to understand your logic there, because with certain kinds of networks, like the Apple App Store, there really is a big lock-in around what gets built on top of it. But it doesn't seem like, if you build some sort of scaffold for DeepSeek, you couldn't easily just switch it over to Llama 4, especially since things change between generations anyway.
Llama 3 wasn't MoE; Llama 4 is. So things are changing between generations of models as well. What's the reason for thinking things will get built out in this contingent way on a specific standard? I'm not sure what you mean by contingent. As in, it's important that people are building for Llama specifically, because that will determine what the standard is in the future. Sure. I mean, I think these models encode values and ways of thinking about the world.
We had this interesting experience early on where we took an early version of Llama and translated it, I think into French or some other language. And the feedback we got, I think it was from French people, was: this sounds like an American who learned to speak French. It doesn't sound like a French person. It's like, what do you mean, does it not speak French well? No, it speaks French fine. It's just that the way it thinks about the world seems slightly American.
So there are these subtle things that get built into the models. Over time, as the models get more sophisticated, they should be able to embody different value sets across the world. Maybe that's not a particularly sophisticated example, but I think it illustrates the point. And some of the stuff we've seen in testing some of the models, especially those coming out of China, is that they sort of have certain values encoded in them.
And it's not just a light fine-tune to get that to feel the way you want. Now, reasoning is different. Language models, or anything that has a kind of world model embedded into it, have more values. With reasoning, I guess there are kinds of values or ways of thinking about it, but one of the things that's nice about the reasoning models is that they're trained on verifiable problems.
So do you need to be worried about cultural bias if your model is doing math? Probably not. I think the chance that some reasoning model built elsewhere is going to incept you by solving a math problem in a way that's devious seems low. But there's a whole set of different issues.
I think, on coding, which is the other verifiable domain, you kind of need to be worried about waking up one day and finding that a model with some tie to another government can embed all kinds of vulnerabilities in code that the intelligence organizations associated with that government can then go exploit. In some future version, you have some model from some other country that we're using to secure or build out a lot of our systems, and then all of a sudden you wake up and everything is vulnerable in a way that country knows about but you don't, or it turns on a vulnerability at some point.
Those are real issues. Now, I'm very interested in studying this, because I think one of the main things that's interesting about open source is the ability to distill models. For most people, the primary value isn't just taking a model off the shelf and saying, okay, Meta built this version of Llama, I'm going to take it and run it exactly as-is in my application.
It's like, no, your application isn't doing anything different if you're just running our thing. You're at least going to fine-tune it or try to distill it into a different model. And when we get to stuff like the Behemoth model, the whole value in that is being able to take this very high amount of intelligence and distill it down into a smaller model that you're actually going to run.
That's the beauty of distillation, and it's one of the things that has really emerged as a very powerful technique in the last year, since the last time we sat down. It's worked better than most people would have predicted: you can basically take a model that is much bigger, take probably 90 or 95% of its intelligence, and run it in something that's 10% of the size.
Now, do you get 100% of the intelligence? No. But 95% of the intelligence at 10% of the cost is pretty good for a lot of things. The other thing that's interesting is that with this more varied open source community, where it's not just Llama and you have other models, you have the ability to distill from multiple sources.
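For concreteness, here is a minimal sketch of standard logit distillation, the kind of technique being described. It assumes Hugging Face-style causal language models whose forward pass returns `.logits`; the hyperparameters are placeholders, and none of this is Meta's actual pipeline:

```python
# Minimal logit-distillation sketch: a small student learns to match a
# large teacher's output distribution at a softened temperature.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, batch, T=2.0, alpha=0.5):
    with torch.no_grad():
        teacher_logits = teacher(batch["input_ids"]).logits
    student_logits = student(batch["input_ids"]).logits

    # Soft loss: KL between temperature-smoothed teacher/student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard loss: ordinary next-token cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        batch["labels"].view(-1),
    )

    loss = alpha * soft + (1 - alpha) * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```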
So now you can say: okay, Llama is really good at this. Maybe the architecture is really good because it's fundamentally multimodal and fundamentally more inference-friendly and efficient. But let's say the Sonnet model's better at coding. Well, you can distill from both of them and build something that's better than either of them for your own use case. That's cool. But you do need to solve the security problem of knowing that you can distill in a way that is safe and secure. This is something we've been researching and have put a lot of time into, and what we've basically come to is: look, anything that's language is quite fraught, because there are a lot of values embedded in that.
So unless you don't care about inheriting the values of whatever model you got, you probably don't want to distill the straight language world model. On reasoning, I think you can get a lot of the way there by limiting it to verifiable domains and running code cleanliness and security filters, whether it's the Llama Guard open source or the Code Shield open source things that we've done, which basically allow you to incorporate different input into your models and make sure that both the input and the output are secure.
And then just a lot of red teaming, so you have people or experts looking at this to guard against the model doing anything that isn't what you want after distilling from something. With the combination of those techniques, you can probably distill on the reasoning side for verifiable domains quite securely. That's something I'm pretty confident about, and something we've done a lot of research around. But I think this is a very big question: how do you do good distillation? There's just so much value to be unlocked, but at the same time, I do think there is some fundamental bias in the different models.
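A minimal sketch of the input/output filtering pattern being described. `classify_with_guard` is a placeholder for a call into a safety classifier such as Llama Guard (or Code Shield for code outputs); the real prompt format and label set depend on the classifier you deploy:

```python
def classify_with_guard(text: str) -> bool:
    # Placeholder: wire this to a Llama Guard-style safety classifier.
    # Returns True if `text` is judged safe.
    raise NotImplementedError

def guarded_generate(model_generate, user_prompt: str) -> str:
    # Screen the input before it ever reaches the model.
    if not classify_with_guard(user_prompt):
        return "Request declined by input filter."
    # Generate, then screen the output before returning it.
    draft = model_generate(user_prompt)
    if not classify_with_guard(draft):
        return "Response withheld by output filter."
    return draft
```

Red teaming then sits on top of this loop, with humans probing the distilled model for behavior the automated filters miss.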
Speaking of value to be unlocked: what do you think the right way to monetize AI will be? Obviously digital ads are quite lucrative, but as a fraction of total GDP, it's small in comparison to all the remote work. Even if you just increase its productivity and don't replace the work, that's still worth tens of trillions of dollars. So is it possible that ads might not be it? How do you think about this? Well, like we were talking about before, there are going to be all these different applications, and different applications tend towards different things.
Ads is great when you want to offer people a free service. Because it's free, you need to cover it somehow. Ads solves the problem of a person not needing to pay for something and still getting something amazing for free. And by the way, with modern ad systems, a lot of the time the ads add value to the thing, if you do it well. You need to be good at ranking, and you need to have enough liquidity of advertising inventory.
That way, if you only have five advertisers in the system, no matter how good you are at ranking, you may not be able to show something to someone that they're interested in. But if you have a million advertisers in the system, then you're probably going to be able to find something pretty compelling, if you're good at picking out the needles in the haystack that that person is going to be interested in.
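The liquidity point can be illustrated with a toy simulation: if each ad's relevance to a given user is an independent draw, the best available match improves as the candidate pool grows. Scoring ads this way is a deliberate oversimplification for illustration, not Meta's actual ranking function:

```python
import random

def expected_best_match(num_advertisers: int, trials: int = 10_000) -> float:
    # Average relevance of the single best ad out of num_advertisers,
    # with each ad's relevance drawn uniformly from [0, 1).
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(num_advertisers))
    return total / trials

print(expected_best_match(5))      # ~0.83: thin inventory, mediocre best match
print(expected_best_match(1_000))  # ~0.999: deep inventory, near-perfect match
```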
So I think that definitely has its place. But there are also clearly going to be other business models, including ones that just have higher costs, so it doesn't even make sense to offer them for free. And by the way, there have always been business models like this. There's a reason why social media is free and ad-supported, but if you want to watch Netflix or ESPN, you need to pay for it.
Why? Because the content going into that, they need to produce it, and it's very expensive for them to produce. They probably could not put enough ads in the service to make up for the cost of producing the content. So you just need to pay to access it, and then the trade-off is fewer people use it. We're talking about hundreds of millions of people using those instead of billions. There's kind of a value switch there.
I think it's similar here. Not everyone is going to want a software engineer, or a thousand software engineering agents, or whatever it is. But if you do, that's something you are probably going to be willing to pay thousands or tens of thousands or hundreds of thousands of dollars for. So this just speaks to the diversity of different things that need to get created: there are going to be business models at each point along the spectrum. And at Meta, for the consumer piece, we definitely want to have a free thing, and I'm sure that will end up being ad-supported. But I also think we're going to want a business model that supports people using arbitrary amounts of compute to do even more amazing things than what it would make sense to offer in the free service, and for that, I'm sure we'll end up having a premium service.
But I mean, our basic values on this are that we want to serve as many people in the world as possible.

Lambda is the cloud for AI developers. They have over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. Compute seems like a commodity, though, so why use Lambda over anybody else? Well, unlike other cloud providers, Lambda's only focus is AI. This means their GPU instances and on-demand clusters have all the tools that AI developers need pre-installed: no need to manually install CUDA drivers or manage Kubernetes. And if you only need GPU compute, you can save a ton of money by not paying for the overhead of general-purpose cloud architectures.
Lambda even has contracts that let enterprises use any type of GPU in their portfolio and easily upgrade to the next generation. For all of you wanting to build with Llama 4, Lambda has a serverless API without rate limits, built with rapid scaling in mind: users can scale up their inference consumption without ever having to apply for a quota or even speak to a human. Head to lambda.ai/dwarkesh for a free trial of their inference API, serving the best open-source models like DeepSeek and Llama 4 at the lowest prices in the industry.
All right, back to Zuck. How do you keep track of all these different projects? There are some we've talked about today, and I'm sure there are many I don't even know about. As the CEO overseeing everything, there's a big spectrum between going to the Llama team and saying "here are the hyperparameters you should use" and just giving a mandate like "go make the AI better." And there are many different projects. How do you think about the way in which you can best deliver your value-add and oversee all these things?
Well, a lot of what I spend my time on is trying to get awesome people onto the teams. So there's that. And then there's stuff that cuts across teams. It's like, all right, you build Meta AI and you want to get it into WhatsApp or Instagram; okay, now I need to get those teams to talk together. And then there are a bunch of questions like: do you want the thread for Meta AI in WhatsApp to feel like other WhatsApp threads, or do you want it to feel like other AI chat experiences? There are different idioms for those.
And so there are all these interesting questions that need to get answered around how this stuff fits into everything we're doing. Then there's a whole other part of what we're doing, which is pushing on the infrastructure. If you want to stand up a gigawatt cluster, that has a lot of implications for the way we're doing infrastructure buildouts. It has political implications for how you engage with the different states where you're building that stuff. It has financial implications for the company: all right, there's a lot of economic uncertainty in the world.
Do we double down on infrastructure right now? And if so, what trade-offs do we want to make around the company? Those are the kinds of decisions that are tough for other people to make. And then I think there's this question around taste and quality: when is something good enough that we want to ship it? I do feel like, in general, I'm the steward of that for the company, although we have a lot of other people with good taste as well, who are also filters for different things.
But yeah, I think those are basically the areas. AI is interesting because, more than some of the other stuff we do, it's more research- and model-led than purely product-led. You can't just design the product you want and then try to build the model to fit into it. You really need to design the model and the capabilities you want first, and then you get some emergent properties; you end up building somewhat different stuff because the model turned out a certain way. And at the end of the day, people want to use the best model, right? That's partially why, when we talk about building the most personal AI, with the best voice, the best personalization, and a very smart experience with very low latency, those are the things we need to design the whole system around. It's why we're working on full-duplex voice, and why we're working on personalization, to both have good memory extraction from your interactions with the AI and be able to plug into all the other Meta systems.
And it's why we designed the specific models we did, with the size and latency parameters that they have. Speaking of politics, there's been this perception that some tech leaders have been aligning with Trump. You and others donated to his inaugural event and were on stage with him. And I think you settled a lawsuit, which resulted in him getting $25 million. I wonder what's going on here. Does it feel like the cost of doing business with the administration, or what's the best way to think about this? My view on this is: he's the president of the United States. Our default as an American company should be to try to have a productive relationship with whoever is running the government. We've tried to offer to support previous administrations as well. I've been pretty public with some of my frustrations with the previous administration, how they basically did not engage with us or the business community more broadly, which I think, frankly, is going to be necessary to make progress on some of these things.
We're not going to be able to build the level of energy that we need if you don't have a dialogue and they're not prioritizing trying to do those things. But fundamentally, look, a lot of people want to write the story about what direction people are going. I just think we're trying to build great stuff, and we want to have a productive relationship with the people running the government. That's how I see it, and it's also how I would guess most others see it, but obviously I can't speak for them. You've spoken about how you've rethought some of the ways in which you engaged with and deferred to the government on moderation in the past. How are you thinking about AI governance? Because if AI is as powerful as we think it might be, the government will want to get involved. What is the most productive approach to take there?
And what should the government be thinking about here? Yeah. In the past, most of the comments I made were in the context of content moderation. It's been an interesting journey over the last 10 years, and obviously an interesting time in history. There have been novel questions raised about online content moderation. Some of those have led to productive new systems getting built, like our AI systems that detect nation states trying to interfere in each other's elections. I think we will continue building that stuff out, and that has been positive. Other stuff, we went down some bad paths. I just think the fact-checking thing was not as effective as Community Notes, because it's not an internet-scale solution: there weren't enough fact checkers, and people didn't trust the specific fact checkers. You want a more robust system.
So I think what we got with Community Notes is the right one on that. But my point was more that, historically, I probably deferred a little too much to either the media and their critiques or the government, on things they did not really have authority over, just because we were a central figure. We tried to build systems so that maybe we wouldn't have to make all the content moderation decisions ourselves. And I guess part of the growth process over the last 10 years is just: okay, we're a meaningful company, and we need to own the decisions we make. We should listen to feedback from people, but we shouldn't defer too much to people who do not actually have authority over this, because at the end of the day we're in the seat and we need to own the decisions we make.
So it's been a maturation process, in some ways painful, but I think we're probably a better company for it. Will tariffs increase the cost of building data centers in the US and shift buildouts to Europe and Asia? It is really hard to know how that plays out. I think we're probably in the early innings on that, and it's very hard to know. Got it. What is your single highest-leverage hour in a week? What are you doing in that hour? I don't know. Every week is a little different. And it's probably got to be the case that the highest-leverage thing you do in a week is not the same thing each week, or else, by definition, you should probably spend more than one hour a week doing that thing.
But yeah, I don't know. That's part of the fun of this job, and of the industry being so dynamic: things really move around. The world is very different now than it was at the beginning of the year, or in the middle of last year. A lot has advanced meaningfully, and a lot of cards have been turned over since the last time we sat down, which I think was about a year ago, right? Yeah, yeah. But I guess you were saying earlier that recruiting people is a super high-leverage thing you do. It's very high leverage. Yeah.
You've talked about these models being mid-level software engineers by the end of the year. What would be possible if, say, software productivity increased 100x in two years? What kinds of things could be built that we can't build right now? That's an interesting question. I think one theme of this conversation is that the amount of creativity that's going to be unlocked is going to be massive. If you look at the overall arc of human society and the economy over the last 100 or 150 years, it's basically people going from being primarily agrarian, with most human energy going towards just feeding ourselves, to a world where the things that take care of our basic physical needs consume a smaller and smaller percent of human energy. That has led to two impacts.
One is that more people are doing creative and cultural pursuits. And two is that people in general spend less time working and more time on entertainment and culture. I think that is almost certainly going to continue as this goes on. This isn't the one-to-two-year question of what happens when you have a super powerful software engineer, but over time, everyone is going to have these superhuman tools to create a ton of different stuff, and you're going to get this incredible diversity. Part of it is going to be solving the things we hold up as hard problems: curing diseases, advancing science, building technology that makes our lives better.
But I would guess that a lot of it is going to end up being cultural and social pursuits and entertainment. I would guess the world is going to get a lot funnier, weirder, and quirkier, the way the memes on the internet have over the last 10 years. And I think that adds a certain kind of richness and depth that, in kind of funny ways, actually helps you connect better with people. I don't know, all day long I just find interesting stuff on the internet and send it in group chats to the people I care about who I think are going to find it funny.
The media that people can produce today to express very nuanced, specific cultural ideas, I don't know, it's cool. I think that will continue to get built out, and I think it does advance society in a bunch of ways, even if it's not the hard-science way of curing a disease. And I guess this is sort of the Meta, social-media view of the world, but yeah, I think people are going to spend a lot more time doing that stuff in the future.
And it's going to be a lot better, and it's going to help you connect, because it's going to help express different ideas. The world is going to get more complicated, but our cultural technology for expressing these very complicated things in a very funny little clip is just going to get so much better. One other thought that I think is interesting to cover: I tend to think that, for at least the foreseeable future,
this is going to lead towards more demand for people doing work, not less. Now, people have a choice of how much time they want to spend working. But I'll give you one interesting example of something we were talking about recently. You have almost three and a half billion people using our services every day, and one question we've struggled with forever is how we provide customer support.
Today you can write an email, but we've never seriously been able to contemplate having voice support, where someone can just call in. I guess that's maybe one of the artifacts of having a free service: the revenue per person is not so high that you can have an economic model where people can call in. But also, with three and a half billion people using your service every day,
there would be a massive, massive number of people calling, the biggest call center in the world type of thing. It would cost some ridiculous amount, like 10 or 20 billion dollars a year, to staff that. So we never really thought too seriously about it, because it was always just, no, there's no way this makes sense. But now, as the AI gets better, you're going to get to this place where the AI can handle a bunch of people's issues. Not all of them, right?
Maybe 10 years from now it can handle all of them, but on a three-to-five-year time horizon it will be able to handle a bunch, kind of like self-driving cars can handle a bunch of terrain but, in most cases, aren't doing the whole route by themselves yet. It's like how people thought truck driving jobs were going to go away.
There are actually more truck driving jobs now than there were when we started talking about self-driving cars, whatever it was, almost 20 years ago. Going back to the customer support thing: it wouldn't make sense for us to staff out calling for everyone. But let's say the AI can handle 90% of it; then, if it can't handle something, it kicks that off to a person.
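(A rough sketch of the arithmetic behind the "one tenth" figure that follows, with hypothetical numbers; the transcript only gives a 10-to-20-billion-dollar range for full human staffing.)

```python
# Hypothetical figures for illustration; the conversation only mentions
# a "10 or 20 billion dollars a year" range for all-human voice support.
FULL_STAFFING_COST = 15e9    # assumed annual cost of all-human voice support
AI_DEFLECTION_RATE = 0.90    # share of calls the AI resolves on its own

# Only the calls the AI can't handle get escalated to a person.
residual_human_cost = FULL_STAFFING_COST * (1 - AI_DEFLECTION_RATE)
print(f"Residual human staffing cost: ${residual_human_cost / 1e9:.1f}B/year")
# -> $1.5B/year, one tenth of the original: a service that never penciled
#    out before can suddenly make sense to offer.
```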
Okay, now if you've gotten the cost of providing that service down to one tenth of what it would have otherwise been, then maybe that actually makes sense to go do, and that would be kind of cool. So the net result is that I actually think we're probably going to hire more customer support people. The common belief people have is that this is clearly just going to automate jobs and all these jobs are going to go away.
That has just not really been how the history of technology has worked. You create things that take away 90% of the work, and that leads you to want more people, not fewer. Yeah. To close off the interview: I've been playing devil's advocate on a bunch of points, and I really appreciate you being a good sport about it.
But I do think there's not really an upper bound on how much beauty there can be in the world, especially with billions of AIs optimizing for beauty, and on the amount of connection you can have, and so forth. So yeah, I'm pretty optimistic about it. Final question: who is the one person in the world today whose advice you most often seek?
Oh, man. Well, part of my style is that I like having a breadth of advisors, so it's not just one person. We've got a great team. There are people at the company, people on our board, and a lot of people in the industry who are doing new stuff.
There's not a single person. But it's fun. When the world is this dynamic, having a reason to work with people you like on cool stuff, to me, that's what life is about.
Yeah. All right. Great note to close on. Awesome. Thanks for doing this.
Yeah. Thank you. I hope you enjoyed this episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. Send it to your friends, your group chats, Twitter, wherever else. Just let the word go forth.
Other than that, it's super helpful if you can subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify. Check out the sponsors in the description below. If you want to sponsor a future episode, go to dwarkesh.com/advertise.