The deadline to apply for the first YC Spring batch is February 11th. If you're accepted, you'll receive $500,000 in investment plus access to the best startup community in the world. So apply now and come build the future with us. I think with AI there are sort of two forks in the road: the bad direction and the good direction. The good path, which I think we're moving towards, is asking how do we maximize human agency, freedom, and our potential to be the best versions of ourselves. This is the first time no one's saying no; everyone is saying yes, and more. There's just an unprecedented amount of demand for AI stuff. There's a whole category of businesses or products that would not have been economically viable or even possible to create before that are now possible. So we've actually expanded the universe of possible businesses. Never been a better time to be a founder, that's for sure.
Welcome back to another episode of The Lightcone. And we've got a special one today, because we are in Sonoma and we just wrapped up a 300-person retreat of some of our top AI founders. And we also have a very special guest today, the creator of Gmail and our partner at YC, Paul Buchheit. Harj, why is this such a special episode? What are we doing? Well, we're filming from a different place. This weekend we put on an AI retreat for some of our alumni companies to share ideas about AI and what they're seeing as they're building their startups. And we learned a bunch of really interesting stuff. So we thought we would film an episode to talk about it.
So PB, back in the day when we were working with companies, what was sort of an aspirational growth rate? What would we tell people to try to do week on week? Well, 10% week on week is an amazing metric to hit. And back then, if you were maybe the top one or two companies in the whole batch, you'd be able to achieve that. And since summer of last year, the wildest thing is realizing that both the summer and fall batches, in aggregate, averaged 10% week-on-week growth over the 12 weeks of the batch. So not just the very best, the Airbnb of the batch, but the batch overall. It's amazing. And it's not just during the batch. Diana and Harj, you guys have companies that you've worked with that have continued an insane growth rate long after the batch is over. Do you want to talk about those ones?
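To make concrete why 10% week on week is such a striking benchmark, here is a minimal sketch of the compounding arithmetic — nothing from the episode beyond the growth rate itself, just the math played forward:

```python
# Why steady 10% week-on-week growth is such a high bar: it compounds
# to ~3x over a 12-week batch and over 100x across a full year.
def compounded_growth(weekly_rate: float, weeks: int) -> float:
    """Return the overall growth multiple after `weeks` of steady weekly growth."""
    return (1 + weekly_rate) ** weeks

batch_multiple = compounded_growth(0.10, 12)  # one 12-week YC batch
year_multiple = compounded_growth(0.10, 52)   # a full year

print(f"12 weeks at 10%/week: {batch_multiple:.1f}x")  # ~3.1x over the batch
print(f"52 weeks at 10%/week: {year_multiple:.0f}x")   # ~142x over a year
```

So a company averaging 10% week on week roughly triples during the batch, which is why a batch-wide average at that rate is so unusual.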
One of the ones that really stands out is a particular company that went from zero to 12 million in 12 months. I'd never seen any growth like that. And I think we've seen this not be just the exceptional outlier of the batch, but actually more of them doing that as well. Right? Yeah, that was my general pickup from this weekend: the rate of execution of startups is going much higher. And you can see it in how quickly companies are hitting a million dollars in ARR. We used to, I think, say you should aim for that 12 to 18 months out of the batch, and that's the equivalent of the 10% week-on-week growth — that's what you should aspire to. Now it feels to me like that's probably the minimum. I've started to hear of companies hitting it within six months.
And then I was just talking to some founders about their goals for this year. Some of them have hit a million dollars in ARR just now. I had one company say that their goal is 20. Another company said they're aiming for 10 at least. Just going from one to 20 million in one year. Yes. Yeah. And these are goals — founders have goals, and we hope they hit them. But even a couple of years ago, saying, hey, go from one to 20 — you would have been told that's total nonsense, this is never going to happen, or you just wouldn't have said it at all. And I just think the general level of ambition has gone way up because of AI. Things are starting to work.
And let's talk about why that is the case. Does anyone have any thoughts on that? Well, I guess I have a meme that I showed you guys earlier. You know, I think the classic thing is you have a boss who's sort of slave-driving — and then I still believe this: if you're a leader, you're not slave-driving from the back, you're way out in the front, leading. And then the meme has this one person pulling the cart alone. And that's the introvert. And what might happen now is the introvert with AI can pull three times as many carts all alone, actually. Once intelligence is truly on tap, it's a force multiplier for founders and people with really, really strong senses of agency. Sure.
Well, why specifically is this happening? One of the interesting talks I heard was from Aaron Levie, the CEO of Box. He's been through multiple cycles of enterprise software. And he said that usually when there's a new platform shift, like cloud or mobile, there are always people in the room — decision makers at the big enterprise software companies — saying no, we're never going to shift to cloud. There's apparently a famous quote from Jamie Dimon along those lines, or that mobile is not going to be a thing, it's not that important. But with AI, it's different. This is the first time no one's saying no. Everyone is saying yes, and more. So there's just an unprecedented amount of demand for AI stuff.
Yeah, it's notable that all these companies that are having these incredible growth rates are the same flavor of startup, right? They're all basically selling AI agents to businesses. There are other companies that were funded that are doing well, but all of the ones you guys were talking about are agents for businesses. And so they're all essentially riding this wave of enterprises having enormous internal pressure to adopt AI. This seems like it goes back to our fundamental piece of advice, which is make something people want. Traditionally, the challenge was convincing people that they wanted the product. And it sounds like what's driving the growth is that the demand is already there.
Yeah. And so you just have to show up with a product that works, and you don't even have to be that good at sales. The point is that actually building the product that works is quite hard. A lot of the demand we're seeing is for software that can actually do the work of a person — essentially services — and doing that to the equivalent level of a human doing the job, whether it's customer support, sales, phone calls, whatever it is, is actually very, very hard. So a trend I noticed: a lot of our heavily technical CEOs who aren't necessarily the strongest at sales are able to win big enterprise contracts now, because although there are 10 or 15 other companies competing for the same contract, it's very, very hard to build the product.
And so just building the thing that actually does the work well is enough to win these huge deals. And in the details of how they build these products, they're really inventing a lot of new patterns. Because nobody knew how to get an LLM to behave, let's say, correctly and give very predictable results. People thought that was impossible. That's because they only tried at a surface level: they play with ChatGPT, sometimes it hallucinates, and then they give up. The random person does that. But a lot of the technical founders don't. They find ways, and the wizardry is in how to really state a problem and how to properly prompt it to actually be very accurate. And it is possible, because we're actually seeing a lot of these products getting bought by businesses to handle all these complex tasks.
One thing I noticed this weekend is that a lot of the talks the founders gave were around evals and testing, which I don't think would have ever been true at a previous YC conference. Testing was sort of this afterthought thing that you try to do as little of as possible. I heard one really interesting comment from a founder who's building an AI agent, who said he thinks the most valuable thing his company has built is not the code base — it's the eval set. It's a gold-standard labeled set of data: this is the correct answer for the AI to produce. And that was a mental change for me. There's this perception that companies have data assets, like general brand data, but general data is actually not that valuable. The thing that's really valuable is a gold-standard, meticulously labeled eval set.
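The eval-set idea above can be sketched in a few lines. This is a hypothetical, minimal harness, not the founder's actual system: `run_agent` is a stand-in for whatever model or agent is under test, and the gold set here is invented for illustration.

```python
# A minimal sketch of an eval harness: a gold-standard labeled set,
# an agent under test, and a score. `run_agent` is a hypothetical
# stand-in for a real model call.
GOLD_SET = [
    {"input": "Customer asks for a refund after 45 days", "expected": "escalate"},
    {"input": "Customer asks to change shipping address", "expected": "handle"},
    {"input": "Customer threatens legal action", "expected": "escalate"},
]

def run_agent(prompt: str) -> str:
    # Stand-in for a real model call; a trivial rule keeps the sketch runnable.
    return "escalate" if ("legal" in prompt or "45 days" in prompt) else "handle"

def evaluate(gold_set) -> float:
    """Score the agent against the labeled set; returns accuracy in [0, 1]."""
    correct = sum(run_agent(case["input"]) == case["expected"] for case in gold_set)
    return correct / len(gold_set)

print(f"accuracy: {evaluate(GOLD_SET):.0%}")
```

The point of the pattern is that the labeled examples, not the harness code, carry the value: every prompt or model change gets re-scored against the same gold answers.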
I mean, this is exactly why the whole ChatGPT wrapper meme is wrong: it's the model that is changing very quickly. There are clearly five or more AI labs, all of which are right there at the frontier. So now there are a lot of alternatives for which model to use, but the thing that nobody has, that is actually hard to get, is the eval set. And I'd argue the prompting, which basically mirrors all of the opportunity for the people watching right now. It's basically agency and taste. Prompting is just knowing what to tell someone to do, or what to tell the agent to do. And then evals is taste: is it good? Is it beautiful? Is it useful?
I heard a very interesting tidbit from a founder who said that their designer has actually stopped using Figma mock-ups, and the workflow is interesting. The designer is designing entirely with Claude, going from text straight to JavaScript. It was just counterintuitive to me, because you assume design is a very visual thing, but apparently their designer has enough taste to turn it into text prompts and, via prompt engineering, get to actual lines of code that are as tasteful and as good as any Figma mock-up would have been. So the pattern, as always, is whoever can iterate the fastest wins. Yeah. And AI is an incredible tool for rapid iteration.
I guess people are, you know, sort of worried that the jobs are going to go away. But earlier we were talking about this great Milton Friedman quote, where he's visiting a developing country and he sees this large group of workers digging a canal using shovels, and he asks his government official host: why are you not using machinery? What's going on? And the guy says, it's a jobs program, actually. And what does Friedman say? He says, if it's a jobs program, you should give them spoons, not shovels. I think that's actually the most useful mental model for the fear about job loss, at least for now.
Certainly, you know, the potential of AI is that it is this incredible tool where we're moving not from spoons to shovels, or shovels to bulldozers, but to the point where the AI can do so much work that we're actually able to create dramatically more wealth. And I think that's really the dream we have, peering 10 years into the future: there's the potential for an unprecedented level of scientific discovery, certainly.
The AI is incredibly good at reading thousands of papers, digesting textbooks; it's very good at chemistry. So I think we're going to see incredible levels of productivity. The story is fascinating to me because the alternative is, what, replacing people's shovels with spoons? I think it's absurd on its face. That actually is a little bit like torture. Growing up, my dad would force me to work in the garden, and that alone felt barbarous to me. But if you made me do it with a spoon, what would we call that? That'd be torture, actually.
I think the question I posed to Sam that everyone seemed most interested in is essentially: are any of these startups actually going to exist in 10 years? That seemed very relevant to the audience. Yeah. Everyone felt that was relevant because, if we do achieve AGI, what does that mean? How quickly can that actually displace all of the work that we're doing here? And honestly, no one's quite sure. Which makes this a very exciting time in technology.
But, you know, it's very clear that we're able to achieve a lot more. And I think like throughout history, every time we've found ways to create more wealth, more rapidly, that's actually worked out really well. You know, historically, 97% of people were farmers or something like that. And now it's, you know, I think maybe 3% or even less. And so we seem to be very good at inventing new work for ourselves and new ways to find purpose and meaning.
Well, what was the answer to your question? I think luxury real estate. You know, what are the things that people will value in the future, if we get to the point where we really do have an abundance of the kinds of things that machines are good at creating? This is actually an idea I've been talking about for 10 or 15 years: almost thinking about the world in terms of machine money and human money. Really what we want to do is take the products of technology and create massive deflation.
We want to drive the prices down to zero, so that we're all able to afford, certainly, things like medical care. I think a lot about how it's really hard for most people to get really great medical care today. And I think that's something where, in 10 years, we're going to be able to make it so that the majority of humans on earth have probably better medical care than we here at the table have today, which is, I think, going to be a huge achievement. But at the same time, that's kind of on the machine money side of things.
But then you think about the human money. What are the things that we get that we really value from humans? Like, you know, if you go to see live music, we seem to have a preference to see a band live instead of just sit in front of a bunch of speakers, you know, or maybe robots playing music. Human money, I think, might be something that comes closer to just like an hour of your time, right? And that we actually have almost like a dual economy. That's super interesting. Embedded in that is actually maybe a better version of UBI. I mean, a bunch of the studies around UBI are sort of showing that there are sort of nice benefits here and there, but fundamentally, it's not creating a greater sense of well-being in the way that people hoped, maybe like five or 10 years ago. Yeah, it's definitely had mixed results.
And I think a lot of that really comes down to, you know, people still need guidance in life. And especially a lot of the people who are targeted with UBI are not necessarily people who are in a great sort of social position to begin with. And again, I think that there's a lot of potential for AI to actually kind of act as like a life coach. And again, like, you know, if you're fortunate enough to grow up with great parents and a great culture or something, you have a lot of advantages that a lot of other people maybe didn't have. And so again, I think like the great promise of AI is kind of taking like the best of what we have available and then just making it universally accessible. Because we're able to drive the cost so low.
I mean, honestly, this past December I spent a couple of weeks in Vietnam, and you're in a developing country and you realize there is so much that needs to be developed. There's roads, there's infrastructure — the whole country seems like it's under construction. I imagine that's what China probably looked like in maybe the mid 80s or mid 90s. But there's also this crazy optimism as it's building. If you have a robot, it could build your house, it could clean your house, it could take care of all of these things for you. And that would radically change your day to day, your standard of living. And how much more direct can you be if, rather than just giving people more human money, you give them a better way to live?
And that's sort of a place everyone can get to. But then I think there's a special thing around the human money, for the really remarkable things — nobody's guaranteed to have, you know, beachfront property in California or something like that. And that's where human money might go. Everyone has the basics, and actually something that's probably five or 10 times better than what even the most wealthy people have today. Yeah. The way I like to think about it is that with AI, there are sort of two forks in the road. There's the bad direction and there's the good direction. And I think the bad direction is one where it's used to constrain and control and essentially imprison us.
And the good path, which I think we're moving towards, is asking: how do we maximize human agency and freedom and our potential to be the best versions of ourselves? And we even have that today with some of the creative tools, where, you know, I don't have a lot of artistic ability, but with AI image generation I can convey funny concepts or whatever. And we see that again with the design tools: someone who can't code can now all of a sudden create basic apps and things like that. We're able to realize our visions in a way that we were never able to before.
A conversation we were having earlier is how we are actually now on the good timeline of how AI is shaping up. Yeah, 10 years ago there was a very different view of how AI could turn out. Do you want to talk about that? Yeah, exactly. So I like to think about things on a 10-year time scale, in part because that's kind of how our startups work, roughly speaking: we seed fund them, they come through YC, and then 10 years later they IPO. And so I've been asking a lot of people about 2035 — what do you want to see in 2035? — but also thinking backwards to 2015.
And so if we go back to 2015, 10 years ago, we were having discussions inside of YC about artificial intelligence, because we believed that we had crossed a threshold. Basically, in the early teens, somewhere around 2012, is where we started to really believe that we had broken through. I think everything prior to 2012 was fake, in my opinion. But it was really deep learning that started to deliver on AI.
But when we were looking at this 10 years ago, in 2015, one of the big questions was: it was all reinforcement learning, and what is the thing that we were reinforcing? Because at the time, they were playing video games and trying to make the score go up. And this is, I think, also where the paperclip maximizer concept and fear came from: what if you gave it the wrong objective function? And so we had a lot of fear, based on our own evolution — our intelligence arose as a survival mechanism; we became intelligent, and other animals became intelligent, as a way to survive and perpetuate themselves.
And we thought that if AI did the same thing, it would, by its very nature, want to wipe us out in order to maximize its own odds of survival. And what's happened in the last 10 years is we actually found the right objective function, which is simply to predict the next token. Intelligence in its most raw form is simply predicting what comes next. At a root level, what the reinforcement function rewards is simply predicting what comes next.
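As a toy illustration of that objective, here is a bigram next-token predictor — real LLMs do vastly more, but the shape of the task is the same: given what came before, predict what comes next. The training text is invented for the example.

```python
# A toy "next-token prediction" model: count which token most often
# follows each token in the training text, then predict accordingly.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count which token follows which in the training text."""
    tokens = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model: dict, token: str) -> str:
    """Return the most frequent successor of `token` seen in training."""
    return model[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # -> "cat" (seen twice, vs "mat" once)
```

Scaling this idea from counting word pairs to a neural network predicting tokens over the whole internet is, at its root, the same objective function.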
And that is the fundamental core of intelligence. And the great thing about that is we've been able to create this intelligence that doesn't have this drive to survive. It doesn't mind that we spin up an intelligence, it does its work, and then it disappears, because it's just based on that ability to predict patterns. I would argue the most important part of this is actually the agency piece. Venkatesh Rao has this crazy thing he talks about.
And this is sort of a function of maybe the Uber and DoorDash era of things, where in society there's this API line. Either you are above the line, meaning you create Uber, or you drive for Uber. And obviously that's a distillation of the idea. And in this AI world, basically, if you're below the API line in the old model, you don't have agency. You sort of have to play this never-ending game, where the human being is the one doing the paperclip maximizing.
And so there's this other sort of world — the one I'm hoping we avoid — where humans just write the prompt once, and then the machine runs the software, this vast machinery, and that's it. You can never change the prompt. That would probably be tyranny. It's conceivable that in the future — and I don't know if this is the right thing, but this sounds like something the EU would do, for instance — regulators would mandate there to be a human in the loop, maybe as the CEO of a company.
Spoons. Yeah. And that might be the case, right? Yeah. It might be a form of: we cannot use shovels here, we must use tiny little spoons, just for this one part. Right. Yeah. I think the fundamental error they keep making over and over again is taking a very static view of the world, and then essentially trying to regulate the current structure into place.
And that closes off our ability to evolve and see into the future. And it's, you know, very difficult and oftentimes impossible. So, going back to 2015, the conclusion of our thinking was actually that we needed to create our own AI lab, because at the time all of the best AI work was being done at Google, and Google had all the money, all the data, all the users, all the researchers.
And it seemed possible that they were going to have essentially a monopoly, and it was all going to be locked up inside of that system. And so we had this loony moonshot idea to start — at the time we called it YC Research, but it eventually got renamed OpenAI. And so with OpenAI, we were going to take on Google with a small nonprofit, which sort of doesn't quite pass the laugh test, right?
Like, how is this little nonprofit going to be the one that actually develops AGI when the other companies have dramatically more resources? And then here we are 10 years later, and it actually happened. At the time, it just seemed incredibly implausible — no one would have believed it. Yet here we are on, I think, basically the best timeline.
Like, we actually delivered it. We have an open and competitive market with, I would say, at least six foundation models that are competing, including an open-source one from Meta. And I think that's our best shot for preserving freedom: choice and competition. And speaking of Google, its traffic is actually dipping too. Do you want to talk a bit about that? About the stats that were shared? I mean, some of it isn't out there in the annual reports yet, and certainly we did some research prior to this episode and couldn't really find anything that conclusive. But maybe purely anecdotally — because we're in this pool of people who are very, very early adopters, very much software engineers — our behavior interacting with the internet has changed already.
It's not a surprise to me. Some people are starting to report that in their referral traffic, Google referrals are down maybe 15% in the last year. And that certainly mirrors my own behavior. I still use Google, but I'm increasingly not clicking on any links in Google because of the snippet at the top. Or the first thing I think of is using ChatGPT with web, or using Perplexity directly. Yeah, exactly. I mean, if you want to understand the future, I think you always have to look at where the early adopters are. So if we go back 25 years, to the year 2000 or 1999, the early adopters were the people using Google.
At the time, people were like, well, Google is just kind of this fringe thing that maybe techy people use or something. But at this point in history, the same people — or those same kinds of people — who were the early adopters of Google are now switching their behavior, to where your default action if you're looking for information is ChatGPT or Perplexity or one of these things. And even just observing my own behavior, I'll use Google mostly for navigational queries: I'm just looking for a specific website and I know it's going to give me the same thing. But it's starting to have that weird legacy-website vibe to it, like I'm using eBay or something.
An even earlier sign was the drop in traffic for Stack Overflow, which actually started back in 2022, even before ChatGPT. And that was primarily because of GitHub Copilot. And they're down 60% this year. Yeah. The pool of people here has quite a good track record of predicting trends, right? I mean, just technical startup founders at YC. I remember in 2007, Apple was back on the rise, and you could tell because just about everybody in a YC batch was using a Mac. You could see the rise of AWS and the shift from rack servers to everything being in the cloud, because all of the founders in the batch just started by using AWS.
Same thing now. I've spoken to a bunch of founders about just personal productivity. They just have ChatGPT open all day. Founders say they're constantly screenshotting their desktop and sending it to ChatGPT if they need to debug something or figure out how to navigate a government website. That's one random example: I need to set up some registration — here's a screenshot, just tell me exactly where I need to click to do this quickly. One thing that we saw last year in the summer batch was how much of the batch was using Cursor, which is one of the companies that's been growing a lot very quickly.
Totally — they hit 50 million in revenue, I think. We may have mentioned this in another episode. But yeah, I can't think of another tool that's gotten adoption so quickly within a YC batch. Cursor just went from nothing from one batch to the next: up to something like 80% of the batch using it, where the previous batch was single-digit percent. Some people mentioned the retreat felt a little bit like a technical conference, and a lot of people were trading notes on how to hire the best engineers. And a few people said, you know what, if someone comes in and I ask them if they use Cursor or any codegen tools and they say no —
right now I can't hire them, because they're not going to be as productive as the rest of my team. I think that's an extension of something Stripe started a decade ago, actually, with engineering interviews and technical interviews in general. Most of the Valley copied Google, I would say — whiteboard CS problems — which probably made sense for what Google was looking for. But Stripe, around 2011, I think, was among the first to do this differently: we don't really need you to whiteboard CS problems, we need you to develop web apps really fast. So just give someone a laptop. The idea was you basically sit in the room and build a to-do-list app or whatever you can, as quickly as you can, and you're measured on your max output in those two or three hours.
And so if you follow that line through, then it doesn't really matter whatever tools they use — the bar just moves higher. You've got three hours, build what you can build, and you should be able to build a lot more with Cursor than before. If you still believe you're fundamentally looking for how clearly people can think or solve hard architecture problems, then you're sticking to whiteboards. What do you think this means for SaaS? Because one of the crazier things we've been seeing is that Klarna claims they're not even buying new SaaS tools anymore. They're using codegen — and not even hiring new engineers, just using their existing set of engineers — to replace all the SaaS tools they use to run their fintech.
And I definitely heard stories like that. One of the unconference talks was actually specifically about that. This is a company I think we've mentioned before, a company called Jerry, that is now halfway to $100 million a year in revenue. But a few years ago they were still burning five or $10 million a year. They had crazy customer support problems, and basically GPT-4 dropped, they implemented it, and it totally changed the way they hire. The prompting itself is actually in the hands of their head of customer support. So they have a PM and the head of customer support; the engineers built it and don't have to touch it. It's mainly a prompt management and workflow tool, and it literally cut their customer support team and budget in half.
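The "prompt management and workflow" pattern described above can be sketched as follows. This is a hypothetical illustration, not Jerry's actual system: the prompts live as plain editable data owned by the support lead, the engineers own only the thin loop around the model, and `call_llm` is a stand-in for a real model API.

```python
# Sketch of a prompt-management workflow: prompts are data, editable by
# non-engineers; engineers maintain only the loop. `call_llm` stands in
# for a real LLM API call.
PROMPTS = {
    # Editable by the head of customer support, no engineer required.
    "triage": "Classify this support ticket as 'refund', 'bug', or 'other': {ticket}",
    "reply": "Draft a polite reply to this {category} ticket: {ticket}",
}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; trivial rules keep the sketch runnable.
    if prompt.startswith("Classify"):
        return "refund" if "money back" in prompt else "other"
    return "Thanks for reaching out..."

def handle_ticket(ticket: str):
    """Triage the ticket, then draft a reply using the stored prompts."""
    category = call_llm(PROMPTS["triage"].format(ticket=ticket))
    reply = call_llm(PROMPTS["reply"].format(category=category, ticket=ticket))
    return category, reply

category, reply = handle_ticket("I want my money back for last month")
print(category)  # -> "refund"
```

The design point is the division of labor: changing how tickets get classified or answered means editing a string in `PROMPTS`, not shipping code.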
And it turned a company that was not able to grow and was burning $10 million a year into a profitable company that is cash-flowing and compounding its growth at north of 50% a year. Which is a dream scenario. Yeah, this is a great example, actually, of the way in which AI is creating wealth, because there's a whole category of businesses or products that would not have been economically viable or even possible to create before that are now possible. So we've actually expanded the universe of possible businesses. Yeah, it's never been a better time to be a founder, that's for sure. There's definitely been a vibe shift in the usual way of building companies — starting with how many people you hire, for example. Certainly 10 years ago, the general sense was that if your company was growing fast and revenue was ramping up, you would go and raise a round, and a metric you'd hear a lot was: how many people are you at?
How many people did you hire this year? How many people are you going to hire next year? It was a bit of a vanity metric. It seems to me now that the companies reaching the numbers we're talking about, at a million in revenue and trying to get to 10 or 15 or 20, are doing it with fewer people and expect to keep doing it with fewer people. That's the new thing. This is why so many of them haven't even raised a Series A: there's less need to hire a lot of people to do the operations. Or, going back to your analogy, Gary, the previous generation of startups had this concept of below the API versus above the API.
So you had a bunch of people who had to build and operate the API. If you were building a business like Uber or Lyft or DoorDash, marketplaces, you had to do that hyperscale hiring of lots of people. The funny thing about that era is there was a concept, probably appropriate for its time, called blitzscaling. There was an entire book about it, and the idea was basically born out of a descending-interest-rate world where, if you put more money into something, you got these network effects. So if you played that out, yeah, you wanted to blitzscale. You wanted to hire as many people as possible and grow faster than everyone else. And then, because of the winner-take-all dynamic, the world's capital markets were going to funnel you tens of billions of dollars, hundreds of billions even, to subsidize your growth so you'd be the winner.
And that was the game. From what we can tell from the more than 300 founders right here sharing their stories, I don't think I heard blitzscaling, or "I'm trying to hire as many people as possible," at all. Nobody is bragging, "hey, you know who I'm hanging out with, these unicorns, I'm going to be a unicorn." People are literally not bragging about that. It's all about leverage now. The real thing is how much you can do with a little bit of resources, because we have these magical tools that give us superhuman leverage. Part of it is that there's going to be a longer tail of businesses that are possible only now because of AI. And this longer tail is going to be fatter too: not just companies doing 20 or 30 million in revenue, but hundreds of millions.
And it goes back to the episode where we talked about vertical SaaS. There's just more willingness to pay for this new category that people are still trying to figure out how to price. That's why there's so much willingness to pay: people want it. It doesn't come out of just the software budget for a company; there's budget from the chief AI officer or something. I don't know if that's a title that has come out yet, but someone really made this point too. One thing I'm certainly noticing is that the companies hitting these big revenue numbers and signing these big contracts are using usage-based pricing. It's not necessarily pay-per-use, but the pricing is tied to how much you use the product, which is closer to how you would think about selling services than software per se. Obvious ROI. Yes.
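That "obvious ROI" argument can be made concrete with a quick back-of-the-envelope calculation. A minimal sketch with purely hypothetical numbers (a support team halved, echoing the Jerry story; the per-agent cost and tool price are invented for illustration):

```python
def monthly_net_savings(agents_replaced: int, cost_per_agent: int, tool_cost: int) -> int:
    # Net monthly impact of an AI support tool: salaries saved minus the tool's price.
    return agents_replaced * cost_per_agent - tool_cost

# Hypothetical: a 20-person support team cut in half, each agent costing
# $4,000/month fully loaded, against a $15,000/month tool.
net = monthly_net_savings(agents_replaced=10, cost_per_agent=4_000, tool_cost=15_000)
print(net)  # 25000
```

If the net is positive in the very first month, the buyer sees payback immediately, which is why usage-tied pricing with obvious ROI shortens the sales cycle so dramatically.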
So the problem a lot of times with selling a product is that the customer doesn't really know if they're getting the ROI, and that makes for a long and painful sales cycle. But if you're able to drop in something that pays for itself in the same month, that's an easy sale. Right. I think the way they've priced it is more like services; it's really akin to how intelligence is getting priced. At one end of the spectrum, people are worrying about the big picture: is AI going to make us all obsolete, and other existential, philosophical conversations. Some of the stuff I thought was interesting in the middle is that it's just hard to predict the timeline of the tools themselves.
There were some interesting talks about RAG, for example. I think Sam maybe seeded this with his talk: if you have infinite context, or huge context windows, do you even need RAG or retrieval tools at all? That's the kind of question where, as a startup or a builder right now, people are more concerned about: am I using the right tools, and is this still going to make sense in three to six months? I think that's actually a direct consequence of the fact that if you're an AI lab, you're on the frontier, and the way you know your thing is working is that your model is bigger, that you're farther along on the scaling law.
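The "do you even need RAG with huge context windows" question boils down to a token-budget check. Here's a toy sketch of that decision; the tokenizer and retriever are deliberately crude stand-ins, and `context_budget`, `top_k_by_overlap`, and all data are made up for illustration:

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def top_k_by_overlap(query: str, docs: list[str], k: int) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_context(query: str, docs: list[str], context_budget: int, k: int = 2) -> list[str]:
    # If every document fits in the model's context window, skip retrieval
    # and stuff them all in; otherwise fall back to top-k retrieval (RAG).
    if sum(count_tokens(d) for d in docs) <= context_budget:
        return docs
    return top_k_by_overlap(query, docs, k)
```

With a large enough `context_budget` the retriever is never called, which is exactly the worry: bigger context windows can make a whole layer of retrieval tooling optional, and the RAG path only earns its keep once the corpus outgrows the window.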
And so when I meet people from AI labs, they almost all talk about bigger, better models, but they're model makers. And then obviously we also spend a lot of time with very scrappy founders who have very little capital. There were just as many talks on the other side. I went to one that was very much about systems-level programming. I think it was Tavus. Tavus is building very realistic real-time AI avatars with video and audio. Part of the trick is they got it to very low latency, 600 milliseconds, which is really fast. It was even too fast for some of the customers: oh no, no, no, it's too fast. It's a bit uncanny when it's too fast.
It's like, now it's being rude, it's just interrupting me. They also build an SDK for other companies, so a lot of the products getting built with this Zoom-style video interface with another human are using them. I love their talk because it's a good illustration that, yes, the labs are going to continue to do their thing, maybe on an even faster timeline than we imagine: nine months, 18 months, maybe even every three months there are these breakthroughs.
So if I had to guess, that's what people have in their heads when they think: why should I do any of this? Because the OpenAI models are just going to be infinitely smart and I should just lie down in bed. But what I would say is, I'm actually heartened by all the stories that I heard. Will the models change? Will the technology change? Will Tavus change its stack?
Yes: they've already seemingly rewritten their stack multiple times to take advantage of what's been going on, and their product has only gotten better in the marketplace as time goes on. And what will it look like? Will there be a model moat? Maybe it won't be the same AI labs we have today that are talking about a trillion-token context.
It's like, man, how much is that going to cost? Ultimately, engineering and systems like those matter; those are actually the most valuable things right now. And along the way, you're going to have these golden evals. I hate to bring in consulting speak, but what are the moats, right? And the moats in the end are brand.
It's data that no one else has. Sometimes it's literally caring about customers that the giant company will never care about, right? Actually, I think the other moat is ultimately that startups move quickly. One of the remarkable things I observed is that a lot of the founders have rebuilt a lot of their tech stack.
To stay current with the latest, they were very willing to say: this particular approach to RAG doesn't work, or this vector database isn't working, throw it away; pgvector became the better thing, so let's use that. Just throw it away and use the best thing. So what was fun to see is that I think the best startups are going to be the ones that can build the fastest, are willing to be at the bleeding edge, and are willing to reevaluate their assumptions about the best approach.
And I heard a lot about how things got built: they redo it, or do it again, with the latest and best. We should also explain another reason why they're securing enterprise contracts, and these big contracts, faster than ever. Big companies have never been great at continuing to build great software.
But now, if you need to constantly rip and replace the tool you're using every three months to stay at the bleeding edge, it's going to take a big company three months just to get the meeting scheduled to discuss whether they should reevaluate the rules. "We're going to plan it in the next sprint."
"We're going to take it in, whatever. We can get to that in, you know, 2029 for sure." Yeah, and these companies getting to six or twelve million, for example, have rewritten a lot of their tech stack many times, and the architecture changes every time I actually talk to them.
"Oh yeah, we threw away that thing we told you about; here's the new way of doing it." And that's every month, every other month. From talking to founders this weekend, what was your sense of the overall vibe? It's pretty exciting. I mean, I don't know that there's ever been a better time.
You know, again, just looking back historically at the foundation of YC: if we jump back not 10 years to when we were starting OpenAI, but 20 years to the Summer Founders Program, the thesis behind why Paul and team started YC was the realization that it was getting easier to build startups.
You didn't need to raise a mountain of capital and hire a giant team; a couple of smart kids could build a web app. And that trend has only accelerated with AI, where you can build an entire $12 million business with just a handful of employees.
And so it again goes back to technological leverage enabling people who have ambition and insight to do incredible things. Well, that's all we have time for today, but we'll catch you next time on The Lightcone.