Hi listeners, welcome back to No Priors. Today, Elad and I are here with Andrew Ng. Andrew is one of the godfathers of the AI revolution. He was the co-founder of Google Brain, Coursera, and the venture studio AI Fund. More recently, he coined the term agentic AI and joined the board of Amazon. Also, he was one of the very first people, a decade ago, to convince me that deep learning was the future. Welcome, Andrew. Thank you so much for being with us. Always great to see you.
I'm not sure where we should begin because you have such a broad view of these topics, but I feel like we should start with the biggest question, which is: if you look forward at capability growth from here, where does it come from? Does it come from more scale? Does it come from data work? Multiple vectors of progress. So I think there is probably a little more to be squeezed out of scaling, but it is getting really, really difficult. Society's perception of AI has been very skewed by the PR machinery of a handful of companies with amazing PR capabilities.
And because that handful of companies drove the scaling narrative, people think of scale as the first vector of progress. But I think agentic workflows, the way we build multimodal models, and all the work to build concrete applications are multiple vectors of progress, as well as wild cards like brand-new technologies such as diffusion models, which are mostly used to generate images today. Will they also work for generating text? I think that's exciting. So I think there will be multiple ways for AI to make progress.
You actually came up with the term agentic AI. What did you mean then? So when I decided to start talking about agentic AI, which wasn't a thing when I started using the term, my team was slightly annoyed at me. One of my team members, who shall remain nameless, said, Andrew, the world does not need you to make up another term. But I decided to do it anyway, and for whatever reason it stuck. And the reason I started to talk about agentic AI was that a couple of years ago, I saw people spending a lot of time debating: is this an agent? Is this not an agent? What is an agent?
And I felt there was a lot of good work across a spectrum of degrees of agency, from highly autonomous agents that can plan, take multiple steps, and use tools to do a lot of stuff by themselves, down to things with lower degrees of agency, where you prompt a model and reflect on its output. And I felt that rather than debating whether this is an agent or not, let's just acknowledge the degrees of agency, say it's all agentic, and spend your time actually building things. So I started to push the term agentic AI.
What I did not expect was that several months later, marketers would get hold of the term and use it as a sticker to stick on everything in sight. And so the term agentic AI really took off. I feel that the marketing hype has grown insanely fast. The real business progress has also been growing rapidly, but maybe not as fast as the marketing.
What do you think are the biggest obstacles right now to true agents actually being implemented as AI applications? Because to your point, we've been talking about this for a little while now. There are certain things that were missing initially that are now in place, everything from certain forms of inference-time compute through to forms of memory and other things that allow you to maintain some sort of state against what you're doing. What do you view as the things that are still missing, or that need to get built, to most foment progress on that end?
At the technology component level, there's stuff that I wish worked better. For example, computer use kind of works but often doesn't. I think guardrails and evals are a huge problem: how do you quickly evaluate these things and drive evals? So at the component level there is room for improvement. But what I see as the single biggest barrier to getting more agentic AI workflows implemented is actually talent. When I look at the way many teams build agents, the single biggest differentiator I see in the market is: does the team know how to drive a systematic error analysis process with evals?
So you're building the agent by analyzing, at any moment in time, what's working, what's not working, and what to improve, as opposed to less experienced teams that try things in a more random way, which just takes a long time. And when I look across a huge range of businesses, small and large, it feels like there's so much work that could be automated through agentic workflows, but the skills, and maybe the software tooling, just aren't there to drive that disciplined engineering process to get the stuff built. How much of that engineering process could you imagine being automated with AI? You know, it turns out that a lot of this process of building agentic workflows requires ingesting external knowledge, which is often locked up in the heads of people.
So until and unless we build, you know, AI avatars that can interview the employees doing the work, and visual AI that can look at the computer monitor, maybe eventually. But at least for the next year or two, I think there's a lot of work for human engineers to do to build agentic workflows. Is it mostly the collection of data, feedback, and so on for the specific moves people are making, or is it other things? I'm curious what that translates into tangibly. Yeah, let me give one example.
So I see a lot of workflows like: maybe a customer sends you a document, you convert the document to text, then maybe do a web search for some compliance reason to see whether you're working with a vendor you're not supposed to, then look at the database records, check the pricing, and save it somewhere else. These are multi-step agentic workflows, kind of next-gen robotic process automation. So when the implementers find it doesn't work: is it a problem if you got the invoice date wrong? Is that a problem or not? Or if you routed the message to the wrong person for verification?
So when we implement these things, they almost never work the first time, and you need to know what's important for your business process. Is it okay if, I don't know, I bother the CEO of the company too many times, or does the CEO not mind verifying some invoices? All that external contextual knowledge, at least right now, I see thoughtful human product managers or human engineers having to think through in order to make these decisions. So can an AI agent do that someday? I don't know. It seems pretty difficult right now. Maybe someday.
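To make this concrete, here is a minimal illustrative sketch of the kind of multi-step document workflow described above, with a crude error-analysis pass that tallies which step fails most often, in the spirit of the "systematic error analysis with evals" Andrew calls the biggest differentiator. This is not code from Andrew or AI Fund; the step functions, the failure-tagging scheme, and the sample invoices are all hypothetical placeholders.

```python
# Hypothetical sketch: a multi-step "invoice" agentic workflow plus a simple
# error-analysis pass over traced runs. All step implementations are stubs.

from collections import Counter
from dataclasses import dataclass

@dataclass
class StepResult:
    step: str
    ok: bool
    detail: str = ""

def extract_text(document: str) -> StepResult:
    # Placeholder: in practice this would be OCR or PDF-to-text.
    return StepResult("extract_text", ok=bool(document.strip()))

def check_vendor_compliance(vendor: str) -> StepResult:
    # Placeholder: in practice a web search or sanctions-list lookup.
    blocked = {"Acme Shell Co"}
    return StepResult("vendor_compliance", ok=vendor not in blocked, detail=vendor)

def lookup_pricing(vendor: str, amount: float) -> StepResult:
    # Placeholder: in practice a database query against agreed pricing.
    agreed = {"Globex": 1000.0}
    return StepResult("pricing_check", ok=vendor in agreed and amount <= agreed.get(vendor, 0.0))

def route_for_review(all_ok: bool) -> StepResult:
    # Placeholder: decide whether a human (and which human) must verify.
    return StepResult("routing", ok=True, detail="auto_approve" if all_ok else "human_review")

def run_workflow(invoice: dict) -> list[StepResult]:
    steps = [
        extract_text(invoice["document"]),
        check_vendor_compliance(invoice["vendor"]),
        lookup_pricing(invoice["vendor"], invoice["amount"]),
    ]
    steps.append(route_for_review(all(s.ok for s in steps)))
    return steps

def error_analysis(traces: list[list[StepResult]]) -> Counter:
    # Tally the first failing step per run: this tells you which component
    # to improve next, instead of guessing at random.
    failures = Counter()
    for trace in traces:
        first_bad = next((s.step for s in trace if not s.ok), None)
        if first_bad:
            failures[first_bad] += 1
    return failures

if __name__ == "__main__":
    invoices = [
        {"document": "Invoice #1 ...", "vendor": "Globex", "amount": 900.0},
        {"document": "", "vendor": "Globex", "amount": 900.0},
        {"document": "Invoice #3 ...", "vendor": "Acme Shell Co", "amount": 50.0},
    ]
    traces = [run_workflow(inv) for inv in invoices]
    print(error_analysis(traces))  # e.g. Counter({'extract_text': 1, 'vendor_compliance': 1})
```

Tallying the first failing step across traced runs is the simplest version of the "what's working, what's not" discipline described above; in a real system each step would call a model or an external service and the evals would be richer.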
But it's not in the internet pre-training data set, and it's not in a manual that we can automatically extract. I feel like for a lot of the work of building agentic workflows, that data is proprietary. It's just not general knowledge on the internet. So figuring it out is still exciting work to do. If you just look at this spectrum of agentic AI, what's the strongest example of agency you've seen? At the leading edge of agentic AI, I've been really impressed by some of the AI coding agents.
So in terms of economic value, I feel like there are two very clear, very apparent buckets. One is answering people's questions, where OpenAI's ChatGPT is clearly the market leader, with real take-off, lift-off velocity. The second massive bucket of economic value is coding agents, where my personal favorite du jour right now is Claude Code. Maybe that will change at some point, but I just use it. Love it. It's highly autonomous in terms of planning out what to do to build the software, building a checklist, going through it one item at a time. So there's the ability to plan a multi-step thing and execute the multi-step plan.
It's one of the most highly autonomous agents out there being used that actually works. There's other stuff that I think doesn't work: say, computer use to go shop for something for me and browse online. Some of those things are really nice demos, but not yet production. Is that because there are clearer criteria for what needs to be done and more variability around actions elsewhere, or do you think there's a better training set or set of outputs for coding? I'm curious why one works so well that it almost feels magical at times, while the others are really struggling as use cases so far.
I think, you know, engineers really like getting all sorts of stuff to work, but the economic value of coding is just clear and apparent and massive. So the sheer amount of resources dedicated to this has led to a lot of smart people, who are themselves the users and so have good instincts on the product, building really amazing coding agents. And then, I don't know. So you don't think it's a fundamental research challenge; you think it's capitalism at work, plus the domain knowledge being in the lab.
Oh, I think capitalism is great at solving fundamental research problems. Yeah. At what point do you think models will effectively be bootstrapping themselves, in the sense that 90% of the code of a model will be written by coding agents? I feel like we're slowly getting there. Some of the leading foundation model companies have said publicly, and it's clearly well deserved, that they're using AI to write a lot of the code. One thing I find exciting is AI models using agentic workflows to generate data for the next generation of models. In some of the Llama research, for example, an older version of Llama would be made to think for a long time to work through problems, and then you train the next generation of the model to solve them really quickly without needing to think as long. So I find that exciting too.
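As a rough illustration of the data-generation idea just mentioned, an older model thinking for a long time so the next model can answer quickly, here is a hypothetical distillation sketch. The teacher and student functions, the thinking budget, and the prompts are placeholders, not any lab's actual pipeline.

```python
# Hypothetical sketch of using a slow, long-thinking "teacher" model to create
# training data that a faster "student" model is then trained to reproduce.
# teacher_generate, student_finetune, and the prompts are illustrative stubs.

def teacher_generate(prompt: str, thinking_budget_tokens: int) -> str:
    # Placeholder: call an older or larger model with a long reasoning budget
    # and return only its final answer.
    return f"final answer for: {prompt}"

def student_finetune(examples: list[dict]) -> None:
    # Placeholder: fine-tune the next-generation model to map each prompt
    # directly to the teacher's answer, without the long chain of thought.
    print(f"fine-tuning on {len(examples)} distilled examples")

def build_distillation_set(prompts: list[str]) -> list[dict]:
    dataset = []
    for p in prompts:
        answer = teacher_generate(p, thinking_budget_tokens=32_000)  # slow but thorough
        dataset.append({"prompt": p, "target": answer})              # fast to imitate
    return dataset

if __name__ == "__main__":
    prompts = ["Solve puzzle A", "Solve puzzle B"]
    student_finetune(build_distillation_set(prompts))
```

The point is the asymmetry: the teacher spends a large thinking budget once, offline, and the student is trained to reproduce the final answers directly at low latency.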
Yeah, multiple vectors of progress. It feels like there's not just one way for AI to make progress; there are so many smart people pushing forward in so many different ways. I think you have rejected the term vibe coding in favor of AI-assisted coding. What's the difference? You know, I know that you do the latter. You're not vibing.
Yeah. Vibe coding leads people to think, you know, I'm just going to go with the vibes and accept all the changes it creates and suggests, or whatever. And it's fine that sometimes you can do that and it works, but I wish it were that easy. When I'm coding for a day or for an afternoon, I'm not just going with the vibes. It's a deeply intellectual exercise. And the term vibe coding makes people think it's easier than it is. Frankly, after a day of AI-assisted coding, I'm exhausted mentally, right? So I think of it as rapid engineering, where AI is letting us build serious systems, build products, much faster than ever before. But it is, you know, engineering, just done really rapidly.
Do you think that's changing the nature of startups? How many people you need, how you build things, how you approach things? Or do you think it's still the same old approach, but people just get more leverage because they have these tools now? So, you know, we build startups, and it's really exciting to see how rapid engineering, AI-assisted coding, is changing the way we build them. There are so many things that would have taken a team of, say, six engineers three months to build, and now one of my friends or I can build them in a weekend.
And the fascinating thing I'm seeing is, if we think about building a startup, the core loop of what we do is: I want to build a product that users love. So the core iteration loop is to write software, which is the software engineering work, and then the product managers maybe go do user testing, look at it, go back, and decide how to improve the product. When we look at this loop, the speed of coding is accelerating and the cost is falling. So increasingly, the bottleneck is actually product management.
So the bottleneck is now deciding what we actually want to build, and prioritizing, much faster. If it took you, say, three weeks to build a prototype, then needing a week to get user feedback was fine. But if you can now build a prototype in a day, then boy, having to wait a week for user feedback is really painful. So I find my teams, frankly, increasingly relying on gut, because we've collected a lot of data that informs our very human mental model, our brain's mental model, of what the user wants.
And then we often have to have deep customer empathy, so you can just make product decisions like that, really, really fast, in order to drive progress. Have you seen anything that actually automates some aspects of that? I know there have been some attempts; for example, people are trying to generate market research by having a series of bots react in real time, so that they almost form your market or your user base as a simulated environment of users. Have you seen any tooling like that work or take off, or do you think that's coming, or do you think that's too hard to do?
Yeah, there are a bunch of tools that try to speed up product management. The recent Figma IPO is one example; they're really good at design, with AI aiding it, and doing a great job. Then there are tools that try to use AI to interview prospective users. And as you say, we've looked at some of the scientific papers on using a flock of AI agents to simulate a group of users, and how to calibrate that. It all feels promising and early, and hopefully it will be exciting in the future, but I don't think those tools are accelerating product managers nearly as much as coding tools are accelerating software engineers.
So this keeps the bottleneck more on the product management side. It makes sense to me. My partner Mike has this idea, which I think is broadly applicable in a couple of different ways, that computers can now aggregate humans at scale. So there are companies like Listen Labs working on this for consumer-research-type tasks, right? But you could also use it to understand tasks for training, or for the data collection piece that you described. When you think about your teams that are in this iteration loop, has the founder profile that makes sense changed over time?
To me, there are so many things the world used to do in 2022 that just do not work in 2025. So I often ask myself, is there anything we're doing today the same way we were doing it in 2022? And if so, let's take a look and see whether it still makes sense today, because a lot of those workflows don't make sense anymore. The technology is moving so fast that founders who are on top of what gen AI technology can do, tech-oriented product leaders, are much more likely to succeed than someone who is maybe more business-oriented, more business-savvy, but doesn't have a good feel for where AI is going.
Unless you have a good feel for what the technology can and cannot do, it's really difficult to think about strategy or how to lead the company. We believe this too. Yeah. I think that's old-school Silicon Valley, even. If you look at Gates, or Steve Jobs slash Wozniak, or a lot of the really early pioneers of the semiconductor, computer, and early internet eras, they were all highly technical. It almost feels like we lost that for a little bit of time, and now it's very clear that you need technical leaders for technology companies.
I think we used to think, oh, they've had one exit before, or even two, so let's just back that founder again. And if that founder has stayed on top of AI, that's fantastic. But if not... And I think part of it is that in moments of technological disruption, with AI rapidly changing, that's the rare knowledge. Take mobile technology: everyone kind of knows what a mobile phone can and cannot do, right? What a mobile app is, GPS, all that. Everyone kind of knows that.
So you don't need to be very technical to have a gut feel for whether you can build a mobile app for something. But AI is changing so rapidly. What can you do with voice? What can you do with agents? How quickly can you fine-tune a model, and when do you use a reasoning model? Having that knowledge is a much bigger differentiator than knowing what a mobile app can do when you're building a mobile app. Yeah, it's an interesting point, because when I look at the biggest mobile apps, they were all started by engineers.
WhatsApp was started by an engineer, Instagram was started by an engineer. I think Travis at Uber was technical-ish. Technically adjacent. Technically adjacent. And Instacart: Apoorva was an engineer at Amazon. Yeah. And Travis had the insight that GPS enabled a new thing, but you had to be one of the people who saw GPS on mobile coming early to go and do that. Yeah. You have to be really aware of the capabilities.
Yeah. You have to know the technology. Super interesting. What other characteristics do you think are common? I mean, people have been talking about, for example, how it almost felt like there was an era where being hard-working was kind of pooh-poohed. Do you think founders have to work hard? Do you think people who succeed do? I'm just curious: aggression, hours worked, what else may correlate or not correlate in your mind?
You know, I work very hard. There have been periods in my life where, you know, I've encouraged others who want a different balance to take that path rather than work hard. But even now, I feel a little bit nervous saying this, because in some parts of society it's considered not politically correct to say that working hard correlates with personal success. I think it's just a reality. I know that not everyone at every point in their life is at a time when they can work hard.
When my kids were first born, that week I did not work very hard. It was fine, right? So, acknowledging that not everyone is in a circumstance where they can work hard, the factual reality is that people who work hard accomplish a lot more. But of course, you need to respect people whatever they're facing. Yeah, I'd say something maybe a little less politically correct, which is: I think there was an era where people made the statement that startups are for everyone, and I do not believe that's true. You're trying to do a very unreasonable thing, which is to create a lot of value, impacting people, very quickly. And when you're trying to do an unreasonable thing, you probably have to work pretty hard, right? And so I think the sense of the work ethic required to move the needle in the world very quickly kind of disappeared for a while.
Yeah. I wish I remembered who said this, but wasn't it that the only people who change the world are the ones crazy enough to think they can? I think it does take someone with the boldness, the decisiveness, to go and say, you know what, this is the state of the world, and I'm going to take a shot at changing it. And it's only people with that conviction, I think, who can do this. That strikes me as being true in any endeavor. You know, I used to work as a biologist, and I think it's true in biology. I think it's true in technology. I think it's true in almost every field I've seen: the people who work really hard do very well. And in startups, at least, the thing I tended to forget for a while was just how important competitiveness was, how much people who really wanted to compete and win mattered.
And sometimes people come across as really low-key, but they still have that drive and that urge, and they want to be the ones who win. So I think that matters. And similarly, that was kind of put aside for a little bit, at least from a societal perspective relative to companies. Actually, I feel I've seen two types. One is people who really want their business to win. That's fine; some do great. The other is people who really want their customers to win, and they're so obsessed with serving the customer that it works out. I used to say in the early days of Coursera: yes, I knew about the competition, blah blah blah, but I was really obsessed with the learners, with the customers, and that drove a lot of my behaviors. That's a really good framework.
And when I say competition, I don't necessarily mean with other companies; it's almost with whatever metric you set for yourself, or whatever thing you want to win at or be the best at. Well, what I've found in a startup environment is that you just have to make so many decisions every day. You have to go by gut a lot of the time, right? I feel like building a startup is more like playing tennis than solving calculus problems. You just don't have time to think, so make a decision. And I feel like this is why people who are obsessed day and night with the customer and the company, who think really deeply and have that contextual knowledge, so that when someone says, do I ship product feature A or feature B...
Yeah, you just have to know, a lot of the time. Not always. And it turns out there are so many, to use the Bezos term, two-way doors in startups, because frankly, you have very little to lose. So just make a decision, and if it's wrong, change it a week later; it's fine. So I find that to be really decisive and move really fast, you need to have obsessed, usually about the customer, maybe about the technology, to have the knowledge to make really rapid decisions and still be right most of the time. How do you think about that bottleneck in terms of product management that you mentioned, or people who have good product instincts? Because I was talking to one of the best-known tech public company CEOs.
And his view was that in all of Silicon Valley, or in all of tech globally, there are probably a few hundred great product people at most. Do you think that's true? Or do you think there's a broader swath of people who are very capable at it? And then how do you find those people? Because I think it's actually a very rare skill set; just as there's the 10x engineer, there are 10x product insights, it feels. Boy, that's a great question.
I feel it's got to be more than a few hundred great product people, maybe just as I think there are way more than a few hundred great AI people. I think there are. But one thing I find very difficult is that user empathy, that customer empathy. Because, you know, to form a model of the user or the customer, there are so many sources of data: you run surveys, you talk to a handful of people, you read market reports, you look at people's behavior on parallel or competing apps, whatever.
There are so many sources of data, but to take all that data and get it into your own head to form a mental model of whoever your ideal customer profile is, or the user you want to serve, so you can think and act and very quickly make decisions that serve them better, that takes human empathy. One of my failures, one of the things I did not do well in an early phase of my career: for some dumb reason, I tried to make a bunch of engineers into product managers. I gave them product management training, and I found that I just foolishly made a bunch of really good engineers feel bad for not being good product managers, right?
But I found that one correlate of whether someone will have good product instincts is very high human empathy, where you can synthesize lots of signals to really put yourself in the other person's shoes, and then very rapidly make product decisions and all the rest. You know, going back to coding assistants, it's really interesting. I think it's reasonably well known that the Cursor team make their decisions very instinctively, versus spending a lot of time talking to users.
And I think that makes sense if you are the user, and your mental model of yourself and what you want is actually applicable to a lot of people. And similarly, these things change all the time, but I don't think Claude Code incorporates, despite its scale of usage, feedback data today, from a training-loop perspective. And I think that surprises people, because it really is just: what do we think the product should be at this stage?
So that's also one advantage that startups have: when you're early, you can serve kind of one user profile. Today, if you're, I don't know, Google, right? Google serves such a diverse set of user personas, so you really have to think about a lot of different personas, and that adds complexity as the product changes. But when you're just starting to make your initial inroads into the market...
You know, if you pick even one human who is representative enough of a broad set of users, and you just build the product for that one user, one ideal customer profile, one hypothetical person, then you can actually go quite far. And I think that for some of these businesses, be it Cursor or Claude Code or something, if they have an internal mental picture of a user that's close enough...
...to a very large set of your prospective users, you can go really far that way. The other thing that I've observed, and I'm curious whether you see this in some of our companies, is that the floor is lava, right? The ground is changing in terms of capability all the time. And the competition is also very fierce in the categories that are already obviously important and have multiple players.
So leaders who were really effective in companies a generation ago are not necessarily that effective when recruited into these companies as they're scaling, because of the velocity of operation and the pace of change. It's interesting to hear you say, I'm looking at what I was doing in 2022 versus today and asking whether that's still right, whereas if you're an engineering leader or a go-to-market leader and you've built your career being really great at how that used to be done, that may not be applicable anymore.
I think it's a challenge for a lot of people, and there are many great leaders in lots of different functions still doing things the way they did in 2022. I think it's just a hard change when new technology comes. I mean, once upon a time there was no such thing as web search. Today, would you hire anyone for any role who doesn't know how to search the web?
I think we're well past the point where, for a lot of job roles, if you can't use LLMs in an effective way, you're just much less effective than someone who can. And as a result, everyone on my team at AI Fund knows how to code. Everyone. It's a good outcome. And I see, for a lot of my team members, you know, when my assistant general counsel or my CFO or my front desk person learns how to code, they're not software engineers, but they do their job function better, because by learning the language of computers, they can now tell a computer more precisely what they want it to do for them, and this makes them more effective in their job function.
I think the rapid pace of change is disconcerting to a lot of people, but I feel like when the world is moving at this pace, we just have to change at the pace of the world. Yeah, and to your point, it shows up in hires, particularly around product, or product and design. One later-stage AI company I'm involved with was doing a search for somebody to run product and somebody to run design, and in both cases they selected for people who really understood how to use some of the vibe coding, or rather AI-assisted coding, tools, because, to your point, you can prototype something so rapidly.
And if you can't even mock something up really quickly to show what it could look like or feel like or do in a very simple way, you're wasting an enormous amount of time talking and writing up the product requirements document and everything else. So I do think there's a shift in how you even think about what processes you use to develop a product, or even pitch it, right? Like, what should you show up with to a meeting when you're talking about a product that's going to be built? Yeah, you should have a prototype in some cases. Actually, let me give you an example. We were sourcing engineers for a role and interviewed someone with about 10 years of experience, you know, full-stack, very good resume, and also interviewed a fresh college grad.
But the difference was that the person with 10 years of experience had not used AI tools much at all. The fresh college grad had, and my assessment was that the fresh college grad who knew AI would be much more productive, so I decided to hire them instead. It turned out to be a great decision. Now, the flip side of this is that the best engineers I work with today are not fresh college grads. They're people with, you know, 10, 15 or more years of experience who are also really on top of AI tools, and that combination is in a class of its own. I actually think software engineering is a harbinger of what will happen in other disciplines, because the tools are most advanced in software engineering.
It's interesting. One company that I guess both of us are involved with is called Harvey, and I led their Series B. When I did that, I called a bunch of their customers, and the thing that was most interesting to me about some of those calls was that legal is notorious for being a tough profession for adopting new technology, right? There aren't a dozen great legal software companies. The customers I called, which were big law firms or people who were quite far along in adopting Harvey, all thought this was the future. They all thought that AI was really going to matter for their vertical.
And the main thing they would raise is questions like: in a world where this is ubiquitous, suddenly instead of hiring 100 associates, I only hire 10. And how do I think about future partners and who to promote if they don't come from a big pool? I thought that mindset shift was really interesting. And to your point, I feel like it's percolating into all these markets and industries. It's happening slowly, but industry by industry, people are starting to rethink aspects of their business in really interesting ways. It'll take a decade or two for this transformation to happen, but it's compelling to see how the earliest-adopting verticals are the ones where people are thinking most deeply about it.
It will be really interesting. I would say another legal startup, Callidus AI, which AI Fund helped build, is doing very well as well. I think the nature of work in the future will be very interesting. A lot of teams wound up outsourcing a lot of work, partly because of the costs. But with AI and AI assistants, part of me wonders: will a really small, really skilled team with lots of AI tools outperform a much larger and maybe lower-cost team? And they have less coordination cost.
Actually, some of the most productive teams I'm a part of now are some of the smallest: very small teams of really good engineers with lots of AI enablement and very low coordination costs. So we'll see how the world evolves. It's too early to make a call, but you can see where I think the world may or may not be headed. I work with several teams now, one of which is called OpenEvidence and has pretty good penetration, like 50% of doctors in the US now, where it's an explicit objective of the company to try to stay as small as possible as they grow their impact. And we'll see where these companies land, because there are lots of functions that need to grow in a company over time. But that certainly wasn't an objective five years ago.
I've heard that objective a lot. I actually heard that objective a lot in the 2010s, and there are a bunch of companies that I think underhired pretty dramatically, or stayed profitable and would brag about being profitable when growth wasn't as strong as it could be. So I actually feel like that's a trap. How would you calibrate that? Yeah. It's basically: are you being lackadaisical, or too accepting of the progress your company is making because it's going just fine? It could be going much better, but it's still going great on a relative basis. And so you're like, oh, I'll keep the team small, I'll be super lean, I won't spend any money, look at how profitable I am.
And sometimes that's amazing, right? Capital efficiency is great. But sometimes you're actually missing the opportunity or not going as fast as you can. Usually what happens is, in the early stage of a startup, you're competing with other startups, and if you're way ahead, it feels great. But eventually, if there are incumbents in your market, they come in, and the faster you capture the market and move upmarket, the less time you give them to realize what's going on and catch up. So often five, six, seven years into the life of a startup, you're suddenly competing with incumbents, and they just kill you with distribution or other things. And so I think people really miss the mark.
And you could argue that was kind of Slack versus Teams. There are a few companies I won't name, but I feel like they were so proud of their profitability and then they kind of blew up. I guess on the design side, that was Sketch, right? Yeah, and without naming the coding ones, there were a bunch of others. They were super happy, they were profitable, they were doing great, and then the Figma wave came. Do you think your companies stay this small? Do you think your teams stay this small? Do my teams stay this small?
What do you mean? In terms of efficiency: can you actually get to, you know, affecting millions and billions of people with 10-, 50-, 100-person teams? I think teams can definitely be smaller now than they used to be, but are we over-investing or under-investing in them? And then also, I think to your point, it's an analysis of market dynamics, right? If it's a winner-take-all market, then the incentives say you've just got to go.
Yeah, yeah. Minecraft, I think, when it sold to Microsoft, how many people was it, like five people or something? And it sold for a few billion dollars and was massively used. I think people forget all these examples, right? It's this idea that suddenly you can do things really lean, but you could always do some things lean before. The real question is how much leverage you had per headcount, how you distributed, and what you actually needed to invest money behind.
And then I would almost argue that one of the reasons small teams are so efficient with AI is that small teams are efficient in general. You can even hire 30 extra people who just get in the way, and I think often people do that. If you look at the big tech companies right now, for example, many of them, not all, could probably shrink by 70% and be more effective, right? So I do think people also forget that, A, there's AI efficiency; B, there's high-value human capital being arbitraged into markets that normally wouldn't have it. Legal is a good example: great engineers didn't want to work in legal, and now they do because of things like Harvey. Or healthcare, where again, suddenly you have these great people showing up.
But I think the other part of it is just that small teams tend to be more effective, and AI gives you yet another reason to keep teams small and high-performing, which I think is kind of under-discussed. I feel like that's one of the other reasons why AI is so important. I remember one week I had two conversations with two different team members. One person came to me and said, hey, I'm going to do this, can you give me some more headcount to do it? I said no. Later that week, I think independently, someone else said something very similar: hey, Andrew, can you give me some budget to use AI to do this? Yes. And so the realization is: hire AI, not a lot more humans, for this. You just have to have those instincts.
Yeah, that's interesting. If you think of what's happening in software engineering as the harbinger of the next industry transformations, and you spend a lot of time investing at the application level or building things there, what do you think is next? What do you want to be next? I feel like there's just a lot, at the tooling level and elsewhere. I almost wish I had a ranked list for all the investing in this stuff.
Yeah. One thing I find really interesting is that there are economists doing all these studies on which jobs are at high risk of AI disruption. I think you're skeptical. I actually look at them sometimes for inspiration on where we should find ideas to build projects. One of my friends, Erik Brynjolfsson, and his company Workhelix, which we're involved in, are very insightful on the nature of work.
Yeah, I like him. Yeah, good. I find talking to them sometimes useful. Although actually, one of the lessons I've learned is that top-down market analysis only goes so far with AI right now, because there are so many ideas that no one's working on yet, given how new the technology is. So one thing I've learned at AI Fund: we have an obsession with speed. All my life I've had an obsession with speed, but now we have tools to go even faster than we could before.
And so one of the lessons I've learned is that we really like concrete ideas. If someone says, I did a market analysis and AI will transform healthcare, that's true, but I don't know what to do with it. But if someone with subject matter expertise, or an engineer, comes and says, I have an idea: look at this part of healthcare operations and drive efficiency in it, then great, that's a concrete idea. I don't know if it's a good idea or a bad idea, but it's concrete. At least we can very efficiently figure out whether it's what customers want and whether it's technically feasible, and get going.
So at AI Fund, when we're trying to decide what to do, we'll have a long list of ideas and try to select the small number we want to go forward on. We don't like looking at ideas that are not concrete. What do you think investing firms or incubation studios like yours will not do two years from now? Not do manually, sorry. I think a lot could be automated.
But the question is whether the task should be automated. So for example, we don't make follow-on decisions that often, right? Because the portfolio is only some dozens of companies. So do we need to fully automate that? Probably not, because, frankly, it's very hard to automate. But doing deep research on individual companies, and competitive research, seems ripe for automation.
I personally use OpenAI's deep research and other deep-research types of tools a lot, just to do at least cursory market research. LP reporting, that is a massive amount of paperwork that maybe you could simplify. Yeah. I'm taking the strategy of general avoidance there, besides, you know, basic compliance. You know, one of my partners, Bella, worked at Bridgewater before, where they had an internal effort to take a chunk of capital and try to disrupt what Bridgewater was doing with AI.
And, you know, macro investing is a very different style, but I think it probably gives us some indications. The human judgment piece of our business, I think, is not obvious to automate. Like, does an entrepreneur have the qualities that we're looking for? When you're a new grad, your resume on paper, or your GitHub, or whatever minor work history you have is not very indicative.
And so people have other ideas for doing this. I know investors who are, you know, looking at recordings of meetings with entrepreneurs and seeing if they can get some signal out of communication style, for example. But I think that part is very hard. I do think you can be programmatic about looking at materials, for example, and ranking the quality of teams overall.
Here's one thing: I feel like our AI models are getting really intelligent, but the places where humans still have a huge advantage over the AI are often where the human has additional context that, for whatever reason, the AI model can't get at. It could be things like meeting the founder and sussing out how they are as a person, the leadership qualities, the communication, or whatever. Maybe by reviewing video, maybe eventually, we can get that context to an AI model. But I find that with all these things, as humans, you know, we do a background reference check and someone makes an offhand comment that we catch, and it affects the decision. How does the AI model get that information, especially when a friend will talk to me but won't really talk to my AI model? So I find there are a lot of these tasks where humans still have a huge information advantage, because we haven't figured out the plumbing, or whatever is needed, to get that information to the AI model.
The other thing I think is very durable is anything that relies on a relationship advantage, right? If I'm convincing somebody to work at one of my companies, and they worked with me at a previous company and trust me because of it, or whatever the reason, all the information in the world about why this is a good opportunity isn't the same thing as me saying, Sally, you've got to do this, it's going to work. It remains to be seen whether company building is actually that correlated with investment returns, but I do think that side of it feels harder to fully automate. Yeah, I think trust matters, because people know you and people do trust you, right? And because you can say so many things, it's very easy to lose trust.
Yeah, that makes sense. Actually, one thing I'm curious to get your take on: we increasingly see highly technical people trying to be first-time founders. How do you set up the processes for first-time founders to learn all the hard lessons and all the craziness needed to be successful? I spend a lot of time thinking through that: how to set founders up to succeed when they have, you know, 80% of the skills needed to be really great, and there's another little bit that we can help them with. That's a very manual process. I don't sweat it. You don't sweat it? I just view it as a mix of peer groups: can you surround people with other people who are either similar or one or two steps ahead of them on the founding journey? And then the second thing is complementary hires. In general, one of my big learnings is that early in their careers, people try to compensate, to build out the skills they don't have.
And later in their careers, they lean into what they're really good at and hire people to do the rest. So if the company is working, I think you just hire people. Bill Gates would notoriously talk about how his COO was always the person he learned the most from, and once he reached a certain level of scale, he'd hire his next COO. And so I mostly view it through that lens for founders. Yeah, complementary hires, we've seen that. But I think the best way to learn something is to do it. So therefore just go; you'll screw it up, and it's fine as long as it's not existential to the business. So I tend to be very lackadaisical about it. I probably think too many things are existential for companies.
Yeah, exactly. It's like: do you have customers, and are you building product? Most of the time, yeah. Are you building a product that users love, right? And then of course go-to-market is important and all of that is important, but if you just solve for the product first, you can usually figure out the rest. I agree most of the time, but not always. Yeah, I think there are some counterexamples, but I generally agree with you. Yeah, sometimes you can build a sucky product and you have a sales channel you can force it through, but I'd rather not; that's not my default model. I'm not saying it's good, I'm just saying that it does work. There's a lot of really bad technology that has built big companies.
Okay, so if you have these first-time, very technical founders with gaps in their knowledge or skill set as the core profile of folks you're backing, do you augment them somehow? What helps them when they begin? I think a lot of things. One thing I've realized is that at venture firms and venture studios, we do so many reps that we just see a lot that even repeat founders have only done once or twice in their life.
So I find that when my firm sits alongside the founders and shares its instincts on, you know, when to get customer feedback faster, whether you're really on top of the latest technology trends, how to just speed things up, how to fundraise (most people don't fundraise that much in their lives, right? most founders just do it a handful of times), that helps even very good founders with things that we, because of what we do, have more reps on. And then I think forming a peer group around them; I know these are things that you guys do.
I think there's a lot we can do. It turns out even the best founders need help, so hopefully VCs and venture studios can provide that to great founders. Elad is wiser about this than I am. I mean, I can't help myself: I want to specifically try to upskill founders on a few things they have to be able to do, like recruiting, right? But I would agree that the higher-leverage path is absolutely that you can put people around yourself to do this and learn it on the job.
Last question for you. What do you believe about the broad impact of AI over the next five years that you think most people don't? I think many people will be much more empowered and much more capable in a few years than they are today. And the capability of individuals, of those who embrace AI, will probably be far greater than most people realize.
Two years ago, who would have realized that software engineers would be as productive as they are today when they embrace AI? I think in the future, for people in other job functions and also for personal tasks, people and businesses will just be so much more powerful and so much more capable than they can even imagine today.
Awesome. Thanks, Andrew. Thank you. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.