We've got a special guest who's gonna come join us. This always happens. A nutty ass chicken brain, everybody. Oh my god. Somebody told me you started submitting code and it kind of freaked everybody out that daddy was home. Well, models tend to do better if you threaten them. Like, be threatened? Like with physical violence. Yes. Management is like the easiest thing to do with AI. Absolutely. It must be a weird experience to meet the bureaucracy in a company that you didn't hire. On the other side of it, I would say it's pretty amazing that some junior muckety-muck can basically look at you and say, hey, go fuck yourself. But I'm serious. That's a sign of a healthy culture, actually. You're punching a clock, man. I hear the reports — you and I have talked about it. You're going to work every day.
Yeah, it's been, you know, some of the most fun I've had in my life, honestly. And I retired, like, a month before COVID hit, in theory. Yeah. And I was like, you know, this has been good. I want to do something else. I want to hang out in cafes, read physics books. And then, like, a month later, I was like, that's not really happening. So then I just started to go to the office, you know, once we could go to the office. And actually, to be perfectly honest, there was a guy from OpenAI, this guy named Dan, and I ran into him at a little party, and he said, look, what are you doing? This is, like, the greatest transformative moment in computer science ever.
And you're a computer scientist. I'm a computer scientist. Forget that you're a founder of Google — you're a PhD student in computer science. I haven't finished my PhD yet, but I'm working on it. Keep working, you'll get there. Technically on leave of absence. Right. And he told me this, and I'd already started kind of going into the office a little bit, and I was like, you know, he's right. And it has been just incredible. Well, you guys all obviously follow all the AI technology. But being a computer scientist, it is, you know, the most exciting thing of my life, just technologically.
And the exponential nature of this, the pace of it — it dwarfs anything we've seen in our careers. It's almost like everything we did over the last 30 or 40 years has led up to this moment, and it's all compounding on itself. Maybe you could speak to the pace. You had a company, Google, that grew from, you know, a hundred users and 10 employees. Right now you have, I think, five or six products with over two billion users each. It's not even worth counting, because the majority of people on the planet touch Google products. Describe the pace.
Yeah, I mean, the excitement of the early web — like, I remember using Mosaic and then, later, Netscape. How many of you remember Mosaic? Ah, my weirdos. And remember there was a What's New page? The What's New page, right. You'd go to every new web page. Every new web page was on it? Yeah, it was like: this last week, these were the new websites. Yes. And it was like such-and-such elementary school, such-and-such a fish tank. Yeah. Like a Michael Jordan appreciation page. Yeah, whatever it was, these were the three new sites on the whole internet.
So obviously the web, you know, developed very rapidly from there, and that was very exciting. And then we've had smartphones and whatnot. But the developments in AI are just astonishing by comparison, I would say, just because the web spread, but didn't technically change so much from month to month, year to year. These AI systems actually change quite a lot — like, if you went away somewhere for a month and you came back, you'd be like, whoa, what happened?
Somebody told me you started submitting code and it kind of freaked everybody out that daddy was home. Okay. Daddy did a PR. What happened? Well, the code I submitted wasn't very exciting. I think I needed to, like, add myself to get access to some things — you know, a minor CL here or there, nothing that's going to win any awards. But you need to do that to do basic things, run basic experiments and things like that.
And I've tried to do that and touch different parts of the system so that, first of all, it's fun, and secondly, I know what I'm talking about. It really feels like a privilege to be able to kind of go back to the company, not have any real executive responsibilities, but be able to actually go deep into every little pocket. Are there parts of the AI stack that interest you more than others right now? Are there certain problems that are just totally captivating you?
Yeah. Starting, I don't know, a couple of years ago, and until maybe a year ago, I was really very close to what we call pre-training. Yeah. Actually, most of what people think of as AI training is, for various historical reasons, called pre-training. But that's the big, super-sized stage — you throw huge amounts of compute at it. And I learned a lot, you know, just being deeply involved in that and seeing us go from model to model and so forth, and running little baby experiments, kind of just for fun, so I could say I did it.
And more recently, the post-training, especially as the thinking models have come around. And that's been, you know, another huge step up in AI generally. So, you know, we don't really know what the ceiling is. How would you explain to a civilian the step function from prompt engineering to deep research and what's happening there? Because I think people are not hitting the down caret and watching deep research work in the Gemini mobile app — you've got a mobile app, and it's pretty great.
And by the way, I got the Fold after you and I were talking about it. Okay Google kicks Siri's ass now. Like, it actually does what you ask it to do when you ask it to open up and stuff. But the number of threads, the number of queries, the number of follow-ups that it's doing in that deep research — is it 200, 300? Maybe explain that jump, and then what you think the jump after that is. To me, the exciting thing about AI, especially these days — I mean, it's not quite AGI yet, as people are seeking, and it's not superhuman intelligence.
But it's pretty damn smart and can definitely surprise you. So I think the superpower is when it can do things in a volume that I cannot. Yes. Right? So by default, when you use some of our AI systems, it'll suck down whatever the top 10 search results are and kind of pull out what you need from them, something like that. But I could do that myself, to be honest — maybe it would take me a little bit more time. But if it sucks down the top 1,000 results and then does follow-on searches for each of those and reads them deeply, that's, you know, a week of work for me.
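For readers who want to picture what that "fan out over a thousand results, then follow up on each" pattern looks like, here is a minimal sketch in Python. It is an illustration of the general deep-research loop being described, not Gemini's actual implementation; the search() and ask_llm() helpers are placeholders I've assumed for the example.

```python
# Hypothetical sketch of a "deep research" loop: fan out over many search
# results, let the model propose follow-on searches, then synthesize.
# search() and ask_llm() are placeholder stubs, not a real API.

def search(query: str, n: int) -> list[str]:
    # Placeholder: wire this to a real search backend.
    return [f"stub document {i} for: {query}" for i in range(min(n, 3))]

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to a real model endpoint.
    return f"(model output for a {len(prompt)}-char prompt)"

def deep_research(question: str, breadth: int = 1000, followups: int = 3) -> str:
    notes = []
    for doc in search(question, n=breadth):
        # Read each top-level result "deeply".
        notes.append(ask_llm(f"Question: {question}\nDocument:\n{doc}\nExtract the relevant facts."))
        # Follow-on searches derived from what was just read.
        queries = ask_llm(f"Notes:\n{notes[-1]}\nSuggest {followups} follow-up queries, one per line.")
        for q in queries.splitlines()[:followups]:
            for extra in search(q, n=5):
                notes.append(ask_llm(f"Question: {question}\nDocument:\n{extra}\nExtract the relevant facts."))
    # Final synthesis over everything gathered -- the part that would take a
    # human "a week of work" at this volume.
    return ask_llm(f"Question: {question}\nNotes:\n" + "\n".join(notes) + "\nWrite a research report.")
```

The point of the sketch is the volume: each of the thousand documents spawns its own follow-up searches, which is exactly the kind of breadth a person can't do by hand.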
Like, I can't do that. This is the thing I think people who are not using the deep research products have not fully appreciated. Earlier we had our F1 driver on stage. I'm a neophyte — I don't know anything about it. I asked, how many deaths occurred per decade? And I said, I want to get to deaths per mile driven. And it first was like, that's going to be really hard. I was like, I give you permission to take your best shot at it and come up with your best theory. Let's do it. And it was like, okay.
And it was like, there are this many teams, there are this many races. Which model did you use? OpenAI's? No, I used Gemini — fabulous. Gemini first, the fabulous one. And it was like, let's go. I treat it like — I get sassy with it, and it kind of works for me. You know, it's a weird thing. We don't circulate this too much in the AI community, but not just our models — all models tend to do better if you threaten them.
Be threatened? Like with physical violence? Yes. But, like, people feel weird about that, so we don't really talk about it. Yeah. I was threatening it with not being fabulous, and it responded to that as well. Yeah. So, historically, you just say, like, oh, I'm going to kidnap you if you don't do all of these things. Yeah, they actually — can I ask you a more — hold on, but it went through it. Okay.
And it literally came up with a system where I said, I think we should include practice miles. So let's say there are a hundred practice miles for every mile on the track. And then it literally gave me the deaths-per-mile estimate. And then I started cross-referencing it and I was like, oh my God, this is like somebody's term paper for undergrad. You know, like, whoa. Done, in minutes.
Yeah, I mean, it's amazing. And all of us have had these experiences where you suddenly decide, okay, I'll just throw this at it — I don't really expect it to work — and then you're like, whoa, that actually worked. Yeah. So as you have those moments and then you go home to just your life as a dad, have you gotten to the point where you're like, what will my children do? And are they learning the right way? And should I totally change everything that they're doing right now?
Have you had any of those moments yet? Yeah. I mean, I don't really know how to think about it, to be perfectly honest. I don't have, like, a magical answer. I mean, I have a kid in high school and one in middle school. And, you know, the AIs are basically already ahead. I mean, obviously there are some things AIs are particularly dumb at, and they make certain mistakes a human would never make. But generally, if you talk about math or calculus or whatever, they're pretty damn good. They can win math contests and coding contests against some top humans. And then I look at it — okay, my son's going to go from sophomore to junior, and what is he going to learn?
And then I think in my mind — and I talked to him about this — well, what is the AI going to be in a year? Exactly. Yeah. And it's, like, comparable, right? So would you tell your son, look, don't bother? No, not yet. I don't know if you can, like, plan your life around this. I mean, I didn't particularly plan my life to, I don't know, be an entrepreneur or whatever. I just liked math and computer science. I guess maybe I got lucky and it worked out to be useful in the world. I don't know, I guess I think my kids should do what they like. Hopefully it's somewhat challenging and they can overcome different kinds of problems and things like that.
What about specifically college? Do you think college is going to continue to exist as it is today? I mean, it seems like college was already undergoing a kind of revolution even before this AI challenge — people asking, is it worth it? Should I be more vocational? What's actually going to be useful? So we were already entering a situation where questions were being asked about colleges. Yeah, and I think AI obviously puts that at the forefront. As a parent, I think a lot about how so much of education in America, in the middle class and upper class, is all about what college and how do you get them there. And honestly, lately I'm like, I don't think they should go to college. Like, it's just fundamentally...
You know, my son is a rising junior and his entire focus is that he wants to go to an SEC school because of the culture. And two years ago I would have panicked and thought, should I help him get into this school or that school? And now I'm like, that's actually the best thing you could do: be socially well adjusted, psychologically deal with different kinds of failures, you know? Enjoy a few years of exploration. Yeah. Yeah. I wanted to ask you about hardware. Years ago Google owned Boston Dynamics — maybe a little bit ahead of its time. But the way these systems are learning through visual information and sensory information, and basically learning how to adjust to the environment around them, is triggering some pretty profound learning curves in hardware.
And there are dozens of startups now making robotic systems. What do you see in robotics and hardware? Is this the year — are we in a moment right now where things are really starting to work? I mean, I think we acquired, and then later sold, like five or so robotics companies, Boston Dynamics being one of them. I guess if I look back on it, we built out the hardware. More recently, we also built out Everyday Robots internally and then later had to transition that. You know, the robots are all cool and all, but the software wasn't quite there. That's been the case every time we've tried to make them truly useful. And presumably one of these days that'll no longer be true. Right.
But have you seen anything lately? Do you believe in the humanoid form factor for robots, or do you think that's a little overkill? I'm probably the one weirdo who's not a big fan of humanoids. But maybe I'm jaded, because we acquired at least two humanoid robotics startups and later sold them. But the reason people want to do humanoid robots, for the most part, is that the world is kind of designed around this form factor, and, you know, you can train on YouTube, on videos of people doing all the things. I personally don't think that's giving the AI quite enough credit.
Like, AI can learn pretty quickly, through simulation and through real life, how to handle different situations. And I don't know that you need exactly the same number of arms and legs and wheels — which is zero, in the case of humans — to make it all work. So I'm probably less bullish on that. But to be fair, there are a lot of really smart people who are making humanoid robots, so I wouldn't discount it.
What about the path of being a programmer? That's what we're seeing with that finite data set. And listen, Google's got a 20-year-old code base now, so it actually could be quite impactful. What are you seeing, literally, in the company? Is the 10x developer always just an ideal — you get a couple of unicorns once in a while — or are we going to see all developers' productivity hit that level 8, 9, 10? Or is it going to be all done by computers?
And we're just going to check it and make sure it's not too weird. Because it could get weird if you vibe code, yeah. I'm embarrassed to say this. Okay, recently I had a big tiff inside the company, because we have this list of what you're allowed to use to code and what you're not allowed to use to code, and Gemini was on the not-allowed list. You have to be pure? You couldn't, for a bunch of really weird reasons that boggled my mind, vibe code with Gemini on the Gemini code. I mean, nobody would actually enforce this rule, but there was this actual internal web page, for whatever historical reason.
Somebody had put this up, and I had a big fight with them. I cleared it up after a short — did you tell your boss? — period of time. You escalated to your boss? Oh, I definitely told him. I'm like — I don't know if you remember, but you've got super-voting founder shares. You are the boss. You can do what you want. It's your company still. No, no, he was very supportive. It was more like, I talked to him and I was like, I can't deal with these people.
You need to deal with these people — I'm, like, beside myself at what they're saying. It must be a weird experience to meet the bureaucracy in a company that you didn't hire. No, but on the other side of it, I would say it's pretty amazing that some junior muckety-muck can basically look at you and say, go fuck yourself. But I'm serious. That's a sign of a healthy culture, actually. I guess so.
Anyway, it did get fixed and people are using it. They got fired — that person's working in Google's Siberia. No, we're trying to roll out every possible kind of AI, and trying external ones too — whatever, the Cursors of the world, all those — to just see what really makes people more productive. I mean, for myself, it definitely makes me more productive. Because I'm not coding.
Do you have a view on the number of foundational models? Like, if you look three years forward, will they start to cleave off and get highly specialized? Beyond the general and the reasoning models, maybe there's a very specific model for chip design; there's clearly a very specific model for biologic precursor design, protein folding. Is the number of foundational models in the future, Sergey, a multiple of what it is today, the same, or something in between?
That's a great question. I mean, look, I don't know — you guys could take a guess just as well as I can. But if I had to guess, things have been converging, and this is broadly true across machine learning. You used to have all kinds of different models — convolutional networks for vision things, RNNs for text and speech and stuff.
And all of this has shifted to transformers, basically, and increasingly it's also just becoming one model. Now, we do occasionally do specialized models, and it's definitely, scientifically, a good way to iterate — you have a particular target, you don't have to do everything in every language and handle images and video and audio all in one go. But we are generally able, after we do that, to take those learnings and basically put that capability into a general model.
So there's not that much benefit. You can get away with somewhat smaller specialized models — a little bit faster, a little bit cheaper — but the trends have not gone that way. What do you think about the open-source versus closed-source thing? Have there been big philosophical movements that changed your perspective on the value of open source?
We're still waiting on this OpenAI open-source model. We haven't seen it yet, but theoretically it's coming. I mean, you have to give credit where credit is due: DeepSeek released a really surprisingly powerful model back in January or so, and that definitely closed the gap to proprietary models. We've pursued both. We released Gemma, which are our open-source — or open-weight — models, and those perform really well. They're small, dense models, so they fit well on one computer. They're not as powerful as Gemini. But I mean, the jury's out.
Which way is this going to go? Do you have a point of view on what human-computer interaction looks like as AI progresses? It used to be, thanks to you, a search box: you type in some keywords or a question, and you click on links on the internet and get an answer. Is the future typing in a question, or speaking to an earbud, or thinking — and then the answer is just spoken to you? I mean, by the way, just to build on this — it was Friday, right? Neuralink got breakthrough designation for their human brain interface. That's a very big step toward the FDA clearing everybody to get it implanted.
Yeah, and is it — if you could just summarize what you think is the most commonplace human-computer interaction model in the next decade or whatever — is it, you know, there's this idea of glasses with a screen in them, and you tried that a long time ago. Yeah, I kind of messed that up, I'll be honest. Got the timing totally wrong on that. Too early. Yeah. Right. But early. There are a bunch of things I wish I'd done differently, but honestly, the technology just wasn't ready for Google Glass. But nowadays these things, I think, are more sensible. I mean, there are still battery life issues that we and others need to overcome.
But I think that's a cool form factor. I mean, when you say 10 years, though, a lot of people are saying, hey, the singularity is like five years away. So your ability to see through that into the future — yeah, I don't know, it's very hard to guess. But do you have any — sorry, just let me ask about this. There was a comment that Larry made years ago that humans were a stepping stone in evolution. Can you comment on this? Do you think that this AGI, superintelligence — really, silicon intelligence — exceeds human capacity, and humans are a stepping stone in the progression of evolution?
Boy, I think, like, sometimes us nerdy guys go and have a bit too much wine. I know — I'm one of them. I've had two glasses and I'm ready to go. I'm ready for that conversation. Human implants, let's go. I mean, I guess we're starting to get experience with these AIs that can do certain things much better than us. And definitely, with my skill at math and coding, I feel like I'm better off just turning to the AI now. And how do I feel about that?
I mean, it doesn't really bother me. You know, I use it as a tool, so I feel like I've gotten used to it. But maybe if they get even more capable in the future, I'll look at it differently. Yeah, there's a lot of insecurity, maybe. I guess so. As an aside, management is, like, the easiest thing to do with AI. Absolutely. And I did this at Gemini on some of our work chats — kind of like Slack, but we have our own version.
We had this AI tool that actually was really powerful. We unfortunately, anyway, temporarily got rid of it; I think we're going to bring it back and bring it to everybody. But it could suck down a whole chat space and then answer pretty complicated questions. So I was like, okay, summarize this for me. Okay, now assign something for everyone to work on. And then I would paste it back in, so people didn't realize it was the AI. Although I admitted it pretty soon, and there were a few giveaways here or there, but it worked remarkably well.
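As a rough illustration of that workflow — dump the whole chat space into a model, ask for a summary, then ask it to propose assignments — here is a minimal Python sketch. It assumes a generic ask_llm() placeholder and is not Google's internal tool.

```python
# Hypothetical sketch of the chat-space workflow described above: feed the
# whole conversation to a model, ask for a summary, then ask for task
# assignments. ask_llm() is a placeholder, not Google's internal tool.

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to any chat-completion endpoint.
    return f"(model output for a {len(prompt)}-char prompt)"

def summarize_chat(messages: list[tuple[str, str]]) -> str:
    transcript = "\n".join(f"{author}: {text}" for author, text in messages)
    return ask_llm(f"Summarize the key decisions and open questions in this chat:\n{transcript}")

def assign_tasks(messages: list[tuple[str, str]]) -> str:
    transcript = "\n".join(f"{author}: {text}" for author, text in messages)
    return ask_llm(
        "Based on this chat, propose one concrete task per participant, "
        f"formatted as 'name: task':\n{transcript}"
    )

if __name__ == "__main__":
    chat = [("alice", "The eval numbers regressed on long-context prompts."),
            ("bob", "I think it's the new tokenizer; I can bisect tomorrow.")]
    print(summarize_chat(chat))
    print(assign_tasks(chat))
```

The output would then be pasted back into the chat, which is why a human can pass it off as their own until they admit it.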
And then I was like, well, who should be promoted in this chat space? And it actually picked out this young woman engineer who, you know, I hadn't noticed — she wasn't very vocal, particularly in that group. Her PRs kicked ass? No, no, it was, I don't know, something that the AI had detected. And I went and talked to the manager, and he was like, yeah, you know what, you're right — she's been working really hard at all these things. Wow. I think that ended up happening, actually.
So I don't know, I guess after a while you just kind of take it for granted that you can do these things. Do you think there's a use case for, like, an infinite context length? Oh, 100%. I mean, all of Google's code base going into the context — yeah, for sure, you should have access to that. Yeah, it's painful. And then multiple sessions, so that you can have, like, 19 of these things, 20 of these things running. Or it just evolves in real time — eventually it'll evolve itself.
Yeah, I mean, I guess if it holds everything, then you can have just one, in theory. But yeah, for sure, there's no limit to the usefulness of context, and there are a lot of ways to make it larger and larger. There's a rumor that internally there's a Gemini build that has a quasi-infinite context length. Is it a valuable thing? Like, I don't know — well, you can say what you want to say, but I mean, for any such cool new idea in AI, there are probably five such things internally.
And, you know, the question is how well do they work. And yeah, I mean, we're definitely pushing all the bounds — in terms of intelligence, in terms of context, in terms of speed, you name it. And what about the hardware? When you guys build stuff, do you care that you have this pathway to Nvidia? Or do you think eventually that'll get abstracted — there'll be a transpiler and it'll be Nvidia plus 10 other options, so who cares, let's just go as fast as possible? Well, for Gemini we mostly use our own TPUs.
But we also support Nvidia — we were one of the big purchasers of Nvidia chips, and we have them in Google Cloud available for our customers in addition to TPUs. At this stage, it's better for us not to abstract it, and maybe someday the AI will abstract it for us. But, you know, given just the amount of computation you have to do on these models, you actually have to think pretty carefully about how to do everything — exactly what kind of chip you have, how the memory works, how the communication works, and so forth are actually pretty big factors.
And maybe one of these days the AI itself will be good enough to reason through that; today it's not quite good enough. I don't know if you guys are having this experience with the interface, but I find myself, even on my desktop and certainly on my mobile phone, going immediately into voice chat mode and telling it: nope, stop, that wasn't my question, this is my question. Nope, say that again in short bullet points. Nope, I want to focus on this. It's so quick now.
Last year it was unusable — it was too slow — and now it, like, stops, okay, and then you tell it what you want it to go do. I don't want to type; I want to use voice. And then, concurrently, I'm watching the text as it's being written on the page, and I have another window open — I'm doing Google searches or second queries to an LLM, or writing a Google Doc or a Notion page, or typing something. It's almost like that scene in Minority Report where he has the gloves, or in Blade Runner where he's in his apartment saying zoom in, zoom in, closer, to the left, to the right.
And there's something about these language models and their response time — which was always something you focused on, response time. Is there, like, a response-time threshold where it's actually worth doing voice, where it wasn't previously? Everything is getting better and faster. The smaller models are more capable, and there are better ways to do inference on them that are faster. You can also stack them. This is Niko's company, ElevenLabs — it's an exceptional TTS, STT stack.
There are other options — Whisper is really good at certain things. But this is where I kind of believe you're going to get this compartmentalization, where there'll be certain foundational models for certain specific things. You stack them together, you kind of deal with the latency, and it's pretty good because they're so good. Whisper and Eleven, for those speech examples you're talking about, are fucking kick-ass. They're exceptional. Wait till you turn on your camera and it sees your reaction to what it's saying.
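To make that "stacking" concrete, here is a minimal sketch of a speech-to-text → LLM → text-to-speech turn in Python. The STT stage uses the open-source openai-whisper package; ask_llm() and synthesize_speech() are placeholders I've assumed, not the actual ElevenLabs or Gemini APIs.

```python
# Minimal sketch of the stacked voice pipeline described above:
# speech-to-text -> language model -> text-to-speech.

import whisper  # pip install openai-whisper

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to whatever chat model you use.
    return f"(model reply to: {prompt[:60]}...)"

def synthesize_speech(text: str, out_path: str) -> str:
    # Placeholder: wire this to a TTS provider (e.g. an ElevenLabs-style API).
    with open(out_path, "wb") as f:
        f.write(b"")  # a real implementation would write audio bytes here
    return out_path

def voice_turn(audio_path: str) -> str:
    stt = whisper.load_model("base")               # STT stage (specialized model)
    user_text = stt.transcribe(audio_path)["text"]  # transcribe the user's speech
    reply = ask_llm(user_text)                      # reasoning stage (general model)
    return synthesize_speech(reply, "reply.wav")    # TTS stage (specialized model)
```

Each stage is a separate, swappable model, which is the compartmentalization being described; keeping each stage small and fast is how the latency stays tolerable for conversational use.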
And you go — and before you even say that you don't want it, you put your finger up and it pauses. Oh, did you want something else? Oh, I see you're not happy with that result. It's going to get really weird. It's a funny thing, but we have these big open, shared offices, so during work I can't really use voice mode too much. I usually use it on the drive. On the drive as well — I mean, I could get its output in my headphones.
But if I want to speak to it, then everybody's listening to me. It's weird. I just think that'll be socially awkward. But I should try that. On my car ride, I do chat to the AI, but then it's audio in and audio out. Yeah, but honestly, maybe it's a good argument for a private office. I should spend more time with you guys. I think you could talk to your manager — you might get one. I like being out in the open with everybody. Yeah. But I do think there's this use case that I'm missing that I should probably figure out how to try more often.
If people want to try your new product, is there a website they can visit? A special code? Go check it out. I mean, honestly, there's a dedicated Gemini app. If you're just using Gemini through the Google navigation from search, go download the actual Gemini app. It's kick-ass. It really has the best models. And you should use 2.5 Pro. 2.5 Pro — pay the fee. You've got to pay, right? Yeah, you get a few queries — a few prompts for free — but if you do it a bunch, you need to pay. Are you just going to make all of these free?
It's 20 bucks a month. Yeah. Do you have a vision for making it free and throwing some ads on the side? Once you've got the hardware cost down, will the whole thing be free? Okay, it's free today, without ads on the side — you just get a certain number of queries on the top model. I think we're likely always going to have top models that we can't supply infinitely to everyone right off the bat. But wait three months, and then the next generation...
To me, if I'm asking all these queries, just having a little sidebar — a running list that changes in real time of things I might be interested in — oh, and also really good AI advertising. I just don't think our latest and greatest models, which take a lot of computation, are necessarily going to be free to everybody right off the bat. But every time we've gone forward a generation, the new free tier is usually as good as the previous pro tier, and sometimes better.
All right, give it up for Sergey Brin. Thank you. Okay, thanks, everybody, for watching that amazing interview with Sergey Brin, and thanks, Sergey, for joining us in Miami. If you want to come to our next event, it's the All-In Summit in Los Angeles — the fourth year for the All-In Summit. Go to all-in.com/events to apply.
A very special thanks to our new partner, OKX, the new money app. OKX was the sponsor of the McLaren F1 team, which won the race in Miami. Thanks to Hyder and his team — an amazing partner and an amazing team. We really enjoyed spending time with you. And OKX launched their new crypto exchange here in the US.
If you love All-In, go check them out. And a special thanks to our friends at Circle. They're the team behind USDC — yes, your favorite stablecoin in the world. USDC is a fully backed digital dollar, redeemable one-for-one for USD. It's built for speed, safety, and scale. They just announced the Circle Payments Network, enterprise-grade infrastructure that bridges the gap between the digital economy and outdated financial rails. Go check out USDC for all your stablecoin needs. And special thanks to my friends, including Shane over at Polymarket, Google Cloud, Solana, and VVNK.
We couldn't have done it without y'all. Thank you so much. You should all just get a room and just have one big huge orgy, because it's like this sexual tension that we just need to release somehow.