Hi everyone, welcome to Gray Matter, the podcast from Greylock where we share stories from company builders and business leaders. I'm Heather Mack, head of editorial for Greylock.
Today, we're re-broadcasting Greylock General Partner Reid Hoffman's interview with OpenAI CEO Sam Altman. Founded in 2015, OpenAI is now generally regarded as one of the most advanced artificial intelligence companies operating today.
In the past year, OpenAI has released several products that have drawn widespread attention and, some say, set off a sort of arms race in the field of AI. In short succession, the company released its generative pre-trained transformer model GPT-3, which uses deep learning to produce human-like text; its image creation platform DALL-E; and most recently, ChatGPT. Trained on massive large language models, the highly sophisticated chatbot can mimic human conversation and speak on a wide range of topics.
As a note, this interview was recorded a few months before ChatGPT was released, and the company recently shared updated information about how the technology has been further developed based on its performance and public response thus far.
You can watch the video of this interview on our YouTube channel, and you can read the transcript in the content section of our website at greylock.com/blog; both are linked in the show notes.
Sam, close friend, many, many things. I think we actually first met on the street, on El Camino, bumping into each other when you were doing Loopt. We've done a number of things together, including a good portion of my nuclear investments, because you call me and say, hey, this is really cool, and I agree.
Let's start a little more pragmatically, and then we'll branch out. One of the things I think a lot of folks here are interested in is: based on the APIs that very large models will create, what are the real business opportunities? What are the ways to look forward?
And then, given the APIs will be available to multiple players, how do you create distinctive businesses on top of them? Yeah.
So I think so far we've been in the realm where you can do an incredible copywriting business, or you can do an education service, or whatever. But I don't think we've yet seen the people go after the trillion-dollar take on Google. And I think that's about to happen.
Maybe it'll be successful, maybe Google will do it themselves. But I would guess that with the quality of language models we'll see in the coming years, there will be a serious challenge to Google for the first time for a search product. And I think people are really starting to think about how do the fundamental things change? And that's going to be really powerful.
I think a human-level chatbot interface can actually work this time around. Many of these trends that we all made fun of were just too early. The chatbot thing was good, it was just too early. Now it can work.
You know, having new medical services that are done through that, where you get great advice, or new education services, these are going to be very large companies. I think we'll get multimodal models before much longer, and that'll open up new things.
I think people are doing amazing work with agents that can use computers to do things for you, use programs, and with this idea of a language interface where you say in natural language what you want, in a dialogue back and forth. You can iterate and refine it, and the computer just does it for you.
You see some of this with DALL-E and Copilot in very early ways. But I think this is going to be a massive trend, and very large businesses will get built with this as the interface. More generally, these very powerful models will be one of the genuine new technological platforms, which we haven't really had since mobile, and there's always an explosion of new companies right after. So that'll be cool.
And what do you think the key things are, given that the large language model will be provided as an API service? What are the things that folks who are thinking about these kinds of AI businesses should think about, in terms of how you create an enduring, differentiated business?
So I think there will be a small handful of fundamental large models out there that other people build on. But right now, what happens is a company makes a large language model, makes an API, and others build on top of it.
And I think there will be a middle layer that becomes really important, where I'm skeptical of all of the startups that are trying to train their own models. I don't think that's going to keep going. But what I think will happen is there will be a whole new set of startups that take an existing very large model of the future and tune it, which is not just fine-tuning, but all the things you can do.
I think there will be a lot of access provided to create the model for medicine, or for using a computer, or the kind of friend model, or whatever. And then those companies will create a lot of enduring value, because they won't have had to create the base model, but they will have created something they can use just for themselves, or share with others, that has this unique data flywheel going that improves over time. So I think there will be a lot of value created in that middle layer.
And what do you think some of the most surprising ones will be? For example, a surprise a couple of years ago, and we talked a little bit with Kevin Scott about this this morning as we opened up, was that if you train on the internet, you can do code. Right?
So what do you think some of the surprises will be, the ones where you didn't realize it reached that far? I think the biggest systemic mistake in thinking people are making right now is: all right, maybe I was skeptical, but this language model thing is really going to work, and sure, images and video too, but it's not going to be generating net new knowledge for humanity. It's just going to do what other people have done, and that's still great; that still brings the marginal cost of intelligence very low. But it's not going to go create something fundamentally new.
It's not going to go cure cancer. It's not going to add to the sum total of human scientific knowledge. And that is what I think will turn out to be wrong, and what will most surprise the current experts in the field.
Yep. So let's go to science then as the next thing. Talk about the general tooling that really enhances science. Whether it's building on the APIs, or use of APIs by scientists, what are some of the places where science will get accelerated now?
So I think there are two things happening now, and then a bigger third one later. One is there are these science-dedicated products, like AlphaFold, and those are adding huge amounts of value, and you're going to see way more and way more of them. If I had time to do something else, I would be so excited to go after a bio company right now. I think you can just do amazing things there.
Anyway, there's another thing that's happening, which is tools that just make us all much more productive, that help us think of new research directions, that write a bunch of our code so we can be twice as productive. And that impact on the net output of one engineer or scientist, I think, will be the surprising way that AI contributes to science outside of the obvious models.
But even just seeing now what I think these tools are capable of doing (Copilot as an example, and there will be much cooler stuff than that), that will be a significant change to the way that technological and scientific development happen.
So those are the two that I think are huge now and lead to an acceleration of progress. But then the big thing that I think people are starting to explore is something I hesitate to name, because there's one way the word is used which is fine, and one that is more scary.
It's AI that can start to be like an AI scientist, and self-improve. Can we automate our own jobs as AI developers, as the very first thing we do? Can that help us solve the really hard alignment problems that we don't know how to solve? That, honestly, is how I think it's going to happen.
The scary version of self-improvement, the one from the science fiction books, is editing your own code and changing your optimization algorithm and whatever else. But there's a less scary version of self-improvement, which is kind of what humans do when we try to go off and discover new science.
We come up with explanations, we test them; whatever that process is, it's a human specialty. Getting an AI to do that, I'm very excited to see what that does for the total. I'm a big believer that the only real driver of human progress and economic growth over the long term is the societal structure that enables scientific progress, and then scientific progress itself. And I think we're going to make a lot more of that.
Well, especially science that's deployed through technology. Say a little bit about the alignment problem. I think probably most people here understand what it is, but it's probably worth four sentences.
Yeah, so the alignment problem is: we're going to make this incredibly powerful system, and it would be really bad if it doesn't do what we want, or if it has goals that are either in conflict with ours (there are many sci-fi movies about what happens there) or goals where it just doesn't care about us that much. So the alignment problem is: how do we build AGI that does what is in the best interest of humanity? How do we make sure that humanity gets to determine the future of humanity?
And how do we avoid both accidental misuse, where something goes wrong that we didn't intend, and intentional misuse, where a bad person is using an AGI for great harm, even if that's what that person wants? And then there are the inner alignment problems, where this thing just becomes a creature that views us as a threat.
The way that I think self-improving systems help us is not necessarily by the nature of self-improvement itself. We have some ideas about how to solve the alignment problem at small scale, and we've been able to align OpenAI's biggest models better than we thought we would at this point, so that's good. We have some ideas about what to do next. But we cannot honestly look anyone in the eye and say we see out a hundred years how we're going to solve this problem. But once the AI is good enough, we can ask it: hey, can you help us do alignment research?
I think that's going to be a new tool in the toolbox. Yeah, for example, one of the conversations you and I had is: could we tell the agent, don't be racist? And it would try to figure out all the different places where there's weird correlative data in the training settings that everyone knows may lead to racist outcomes; it could actually, in fact, do a self-cleansing. Totally. Once the model gets smart enough that it really understands what racism looks like and how complex that is, you can say, don't be racist. Yeah, exactly.
What do you think are the kinds of moonshots, in terms of the evolution of the next couple of years, that people should be looking out for? In terms of the evolution of where AI will go.
I'll start with the higher-certainty things. I think language models are going to go much, much further than people think, and we're very excited to see what happens there. A lot of people talk about running out of compute, running out of data; that's all true, but I think there's so much algorithmic progress to come that we're going to have a very exciting time.
Another thing is I think we will get true multimodal models working. So not just text and images, but every modality you like in one model, where you're able to easily and fluidly move between things. And I think we will have models that continuously learn. Right now, if you use GPT-whatever, it's stuck in the time that it was trained, and the more you use it, it doesn't get any better. I think we'll get that changed.
So I'm very excited about all of that. If you just think about what that alone is going to unlock, and the applications people will be able to build with it, that would be a huge victory for all of us, a massive step forward, and a genuine technological revolution, if that were all that happened. But I think we're likely to keep making research progress into new paradigms as well. We've been pleasantly surprised on the upside about what seems to be happening. And on all these questions about new knowledge generation and how we really advance humanity, I think there will be systems that can help us with that.
So one thing I think would be useful to share, because folks don't realize it, is that you're actually making these strong predictions from a fairly critical point of view, not just "we can take that hill." Say a little bit about some of the areas where you think the current talk is illusory, like, for example, AI and fusion.
Oh yeah. One of the unfortunate things that's happened is AI has become the mega-buzzword, which is usually a really bad sign. I hope it doesn't mean the field is about to fall apart, but historically that's a very bad sign for new startup creation or whatever, when everybody is saying "I'm this, with AI." And that's definitely happening now.
So, a lot of what we were talking about: there are all these people saying, I'm doing these RL models for fusion or whatever, and as far as we can tell, they're all much worse than what smart physicists have figured out.
I think it is just an area where people are going to say everything is now "this, plus AI." Many things will be true; I do think this will be the biggest technological platform of the generation. But we like to make predictions where we can be on the frontier, understand predictably what the scaling laws look like, or where we've already done the research, so we can say, all right, this new thing is going to work, and make predictions out from there. That's sort of how we try to run OpenAI: do the next thing in front of us when we have high confidence in it, and take 10% of the company to just totally go off and explore, which has led to huge wins. And, I feel bad to say this, but will we still be using transformers in five years? I hope not; I hope we find something way better. But the transformer has obviously been remarkable.
So I think it's important to always look for where the next totally new paradigm will come from. That's the way to make predictions. Don't pay attention to the "AI for everything." Can I see something working, and can I see how it predictably gets better? And then, of course, leave room open for the fact that you can't plan for greatness; sometimes the research breakthrough just happens.
Yep. So I'm going to ask two more questions and then open it up, because I want to make sure that people have a chance at the broader discussion, although I'm trying to paint the broad pictures so you can get the crazy-ass questions as part of this.
What do you think is going to happen vis-a-vis the application of AI to these very important systems, like, for example, financial markets? Because the very natural thing would be to say, well, let's do a high-frequency quant trading system on top of this, and other kinds of things. Is it just going to be a neutral arms race? What's your thought? It's almost like the Life 3.0 Omega point of view. Yeah, um.
I mean, I think it is going to just seep in everywhere. My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly towards zero, surprisingly far. And those, I think, are two of the major inputs into the cost of everything else, except the cost of things we want to be expensive, the status goods, whatever. And I think you have to assume that's going to touch almost everything, because with these seismic shifts that happen when the whole cost structure of society changes, which has happened many times before, the temptation is always to underestimate them. So I wouldn't make a high-confidence prediction about anything that doesn't change a lot, or anywhere it doesn't get applied. But one of the things that's important is that these things don't trend all the way to zero; they just trend towards it. So someone will still be willing to spend a huge amount of money on compute and energy; they will just get unimaginable amounts of intelligence and energy for it. So who's going to do that, and where is it going to get the weirdest? Not just because the cost comes way down, but because the amount spent actually goes way up. Yes, the intersection of the two curves. Yeah: the thing got a hundred times cheaper in the cost of energy, a hundred million times cheaper in the cost of intelligence, and people were still willing to spend a thousand times more in today's dollars. What happens then? Yep.
And then, the last of the buzzword-bingo part of the future questions: the metaverse and AI. What do you see coming here? You know, I think they are both independently cool things, and it's not totally clear to me, other than how AI will impact all computing. Well, obviously computing, simulation environments, agents, possibly entertainment, certainly education, you know, an AI tutor and so forth; those would be baseline. But the question is, is there anything that's occurred to you? I would bet that in the upside case, which I think has a reasonable chance of happening, the metaverse turns out to be something on the order of the iPhone, a new container for software and a new computer interaction thing, and AI turns out to be something on the order of a legitimate technological revolution. So I think it's more about how the metaverse is going to fit into this new world of AI than how AI fits into the metaverse. But low confidence, TBD. All right, questions?
Hey there. How do you see foundational technologies like GPT-3 affecting the pace of life science research specifically (you can group in medical research there), in quickening the iteration cycles? And what do you see as the rate limiter in life science research, where we won't be able to get past because there are just laws of nature? Yeah, something like that.

So I think the currently available models are kind of not good enough to have made a big impact on the field. At least, that's what most life science researchers have told me. They've all looked at it, and I guess it's a little helpful in some cases. There's been some promising work in genomics, but stuff on a bench top hasn't really been impacted. I think that's going to change, and I think this is one of these areas where there will be these new hundred-billion to trillion-dollar companies started. Those areas are rare, but when you can really make, say, a future-of-pharma company that is just hundreds of times better than what's out there today, that's going to be really different. As you mentioned, there still will be the rate limit that bio has to run at its own pace, and human trials take however long they take. So I think an interesting cut of this is: where can you avoid that? The synthetic bio companies that I've seen that have been most interesting are the ones that find a way to make the cycle time super fast, and that benefits an AI that's giving you a lot of good ideas, but you've still got to test them, which is where things are right now. I'm a huge believer, for startups, that the thing you want is low costs and fast cycle times, and if you have those, you can then compete as a startup against the big incumbents. So I wouldn't go pick, say, cardiac disease as my first thing to go after right now with this kind of new company. But using bio to manufacture something, that sounds great. I think the other thing is the simulators are still so bad, and if I were a bio-meets-AI startup, I would certainly try to work on that somehow.

When do you think the AI tech will help create itself? It's almost like self-improvement will help make the simulator significantly better. People are working on that now. I don't know quite how it's going, but very smart people are very optimistic about it. Yep. Other questions? And I can keep going on questions; I just want to make sure you guys have a chance at this. Oh, here, yes. Great, the mic is coming. Awesome, thank you.
I was curious what aspects of life do you think won't be changed by AI? Um, sort of all of the deep biological things like I think we will still really care about interaction with other people, we'll still have fun. And the reward systems of our brain are still going to work the same way. We're still going to have the same drives to create new things and compete for silly status and form families and whatever. So I think the stuff that people cared about 50,000 years ago is more likely to be the stuff that people care about 100 years from now than 100 years ago.
To amplify on that, before we get to whatever the next question is: what do you think are the best utopian science fiction universes so far?
Good question. Star Trek is pretty good, honestly. Like, I do like all of the ones that are sort of like, we turn our focus to exploring and understanding the universe as much as we can. It's not a utopian one. Maybe.
I think "The Last Question" is an incredible short story. Yeah, that came up. Yeah, mine. Yep. I was expecting you to say Iain Banks, the Culture. Those are great. With science fiction, there's not one sci-fi universe that I could point to and say, I think all of this is great. But the optimistic corner of sci-fi, which is a smallish corner, I'm excited about. Actually, I took a few days off to write a sci-fi story, and I had so much fun doing it, just about sort of the optimistic case of AGI, that it made me want to go read a bunch more. So I'm looking for recommendations of more to read now, the less-known stuff, if you have anything. I'll get you some great recommendations.
So in a similar vein, one of my favorite sci-fi books is Childhood's End by Arthur C. Clarke, from the '60s, I think. And I guess the one-sentence summary is: aliens come to Earth to try to save us, and they just take our kids and leave everything else. So I'm not sure how optimistic that is. But yes, the ascension into the Overmind is meant to be more utopian. But yes. OK.
Well, also in our current universe, our current situation, a lot of people think about family building and fertility, and different people have different ways of approaching this. But from where you stand, what do you see as the most promising solutions? It might not be a technological solution, but I'm curious what you think, other than everyone having 10 kids. Other than everyone having 10 kids? Yeah. How do you see family building coexisting with AGI and high tech?
This is a question that comes up at OpenAI a lot: how should one think about having kids? I think there's no consensus answer to this. There are people who say, I thought I was going to have kids and I'm not going to, because of AGI, for all the obvious reasons and, I think, some less obvious ones. There are people who say, well, it's going to be the only thing for me to do in 15 or 20 years, so of course I'm going to have a big family; that's what I'm going to spend my time doing; I'll just raise great kids, and I think that's what will bring me fulfillment. I think, as always, it is a personal decision.
I get very depressed when people are like, I'm not having kids because of AGI. Part of the EA community is like, I'm not doing that, because they're all going to die. The techno-optimists are like, well, I just want to merge into the AGI and go off exploring the universe, and it's going to be so wonderful, and I just want total freedom. I find all of those quite depressing. I think having a lot of kids is great. I want to do that now more than I did even when I was younger, and I'm excited for it.
What do you think will be the way that most users interact with foundation models in five years? Do you think there'll be a number of verticalized AI startups that essentially focus on niche applications of these models, or will we see broader platforms that enable easier access to these models for a wider array of use cases? Do you think companies will have adapted and fine-tuned models to an industry, or do you think prompt engineering will be something many organizations have as an in-house function?
I don't think we'll still be doing prompt engineering in five years. I think it'll just be integrated everywhere. Either with text or voice, depending on the context, you will just interface in language and get the computer to do whatever you want. And that will apply to generating an image, where maybe we still do a little bit of prompt engineering, but it's mostly going to be: go off and do this research for me and do this complicated thing, or just be my therapist and help me figure out how to make my life better, or go use my computer for me and do this thing, or any number of other things. But I think the fundamental interface will be natural language.
Let me actually push on that a little bit before we take the next question. To some degree, just like we have a wide range of human talents right now, taking DALL-E as an example, when you have a great visual thinker, they can get a lot more out of DALL-E because they know how to think more, they know how to iterate the loop through the test. Don't you think that will be a general truth about most of these things? So while natural language will be the way you're doing it, there will be an evolving set of human talents about going that extra mile.
100%. I just hope it's not figuring out how to hack the prompt by adding one magic word to the end that changes everything else. What will matter is the quality of ideas and the understanding of what you want. So the artist will still do the best with image generation, not because they figured out how to add this one magic word at the end, but because they were able to articulate it with a creative eye that I don't have. They have it as a vision, in how they think visually and iterate through it. Obviously, it'll be that word or prompt now, but it'll iterate to better.
All right, we have a question here. Hey, thanks so much. I think the term AGI is thrown around a lot, and sometimes I've noticed in my own discussions that the source of confusion just comes from people having different definitions of AGI, so it can kind of be the magic box where everyone just projects their own ideas onto it. I just want to get a sense: how would you define AGI, and how do you think you'll know when we've reached it?
It's a great point. I think there are a lot of valid definitions of this. But for me, AGI is basically the equivalent of a median human that you could hire as a coworker. They could do anything that you'd be happy with a remote coworker doing, just behind a computer, which includes learning how to go be a doctor, or learning how to go be a very competent coder. There's a lot of stuff that a median human is capable of getting good at. And I think one of the skills of an AGI is not any particular milestone, but the meta-skill of learning to figure things out, so that it can go decide to get good at whatever you need. So for me, that's kind of AGI. And then superintelligence is when it's smarter than all of humanity put together.
So, do you have a question? Yep. Great. Thanks. What would you say are, in the next 20 or 30 years, some of the main societal issues that will arise as AGI continues to grow, and what can we do today to mitigate those issues?
Obviously, the economic impacts are huge. And if it is as divergent as I think it could be, with some people doing incredibly well and others not, I think society just won't tolerate it this time. So we have to figure out what to do as we disrupt so much of economic activity. Even if it's not all disrupted 20 or 30 years from now, I think it'll be clear that it's all going to be.
And what is the new social contract? My guess is that the things that we'll have to figure out are how we think about fairly distributing wealth, access to AGI systems, which will be the commodity of the realm, and governance, how we collectively decide what they can do, what they don't do, things like that. And I think figuring out the answer to those questions is going to just be huge.
I'm optimistic that people will figure out how to spend their time and be very fulfilled. I think people worry about that in a little bit of a silly way. I'm sure what people do will be very different, but we always solve this problem. But I do think the concepts of wealth and access and governance are all going to change, and how we address those will be huge.
Actually, one thing, and I don't know if you'd love to share this, but one of the things I love about what OpenAI and you guys are doing is that you think about these questions a lot yourselves, and you initiate some research. So you've initiated some research on this stuff. Yeah, so we run the largest UBI experiment in the world. We have a year and a quarter left in a five-year project. I don't think that's the only solution, but I think it's a great thing to be doing.
And I think we should have 10 more things like that that we try. We also try different ways to get input from a lot of the groups that we think will be most affected, and to see how we can do that early in the cycle. We've explored more recently how this technology can be used for reskilling people that are going to be impacted early. We'll try to do a lot more stuff like that, too. So the organization is, in fact, asking these great questions, addressing them, and doing a bunch of interesting research on it.
So next question. Hi, yes. Creativity came up today in several of the panels. And it seems to me that the way it's being used, these are tools for human creators that expand human creativity. So where do you think the line is between tools that allow a creator to be more productive and artificial creativity, creativity itself?
So I think, and we're seeing this now, that tools for creatives are going to be the great application of AI in the short term. People love it. It's really helpful. And at least in what we're seeing so far, it is mostly enhancing, not replacing. It does replace in some cases, but for the majority of the kind of work that people in these fields want to be doing, it's enhancing. And I think we'll see that trend continue for a long time.
Eventually, yeah, if we look out 100 years, it probably can do the whole creative job. I think it's interesting that if you had asked people 10 years ago how AI was going to have an impact, most people would have said with a lot of confidence: first it's going to come for the blue-collar jobs, working in the factories, truck drivers, whatever. Then it will come for the low-skill white-collar jobs, then the very high-skill, really high-IQ white-collar jobs, like a programmer or whatever. And then, very last of all and maybe never, it's going to take the creative jobs.
And it's going exactly the other direction. I think there's an interesting reminder in here, generally, about how hard predictions are.
But more specifically, we're not always very aware, maybe even about ourselves, of what skills are hard and easy, what uses most of our brain and what doesn't, or how difficult bodies are to control or make, or whatever.
We have one more question over here. Hey, thanks for being here. So you mentioned that you would be skeptical of any startup trying to train its own large language model, and I would love to understand more.
What I have heard, and this might be wrong, is that large language models depend on data and compute. Any startup can access the same data, because it's just internet data.
And as for compute, different companies might have different amounts of it, but I guess the big players all have roughly the same amount. So how would one large language model startup differentiate from another?
I think it'll be this middle layer. I think in some sense, the startups will train their own models, just not from the beginning. They will take base models that are hugely trained with a gigantic amount of compute and data, and then they will train on top of those to create the model for each vertical.
And so in some sense, they are training their own models, just not from scratch. They're doing the 1% of training that really matters for whatever the use case is going to be. In that space, I think there will be hugely successful and very differentiated startups.
But that differentiation will be about the data flywheel the startup is able to build and all of the pieces on top, which could include prompt engineering for a while, or whatever, but not the core base model. I think that's just going to get too complex and too expensive, and the world also just doesn't make enough chips.
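The "train on top of a base model" idea Altman describes can be sketched as a toy example: freeze a large pretrained network and train only a small vertical-specific head on a small dataset. This is a minimal illustration in PyTorch under assumed toy shapes and synthetic data; the model, dimensions, and dataset here are placeholders, not anything OpenAI-specific.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class BaseModel(nn.Module):
    """Stand-in for a huge pretrained language model."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.Linear(dim, dim)
    def forward(self, x):
        # Embed tokens, encode, then mean-pool over the sequence.
        return torch.relu(self.encoder(self.embed(x))).mean(dim=1)

base = BaseModel()
for p in base.parameters():
    p.requires_grad = False      # the startup does not retrain the base

head = nn.Linear(32, 2)          # the small vertical-specific layer

# Synthetic "vertical" dataset: 64 token sequences, binary labels.
x = torch.randint(0, 100, (64, 10))
y = torch.randint(0, 2, (64,))

opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):              # the "1% of training that matters"
    opt.zero_grad()
    loss = loss_fn(head(base(x)), y)
    loss.backward()
    opt.step()

trainable = sum(p.numel() for p in head.parameters())
frozen = sum(p.numel() for p in base.parameters())
print(trainable, frozen)         # only a tiny fraction is trained
```

Even in this toy, the trained head is a small fraction of the total parameters; at real scale the ratio is far more extreme, which is why the base model stays fixed and the differentiation lives in the layer on top.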
So Sam has a work thing he needs to get to. As you can probably tell from this very far-ranging conversation, Sam always recharges my batteries. And, a little improbably, when I'm feeling down about where the world is headed, you're the person I turn to to feel better. I appreciate that. Yes. So anyway.
I think no one knows. We're sitting on this precipice of AI, and it's either going to be really great or really terrible. You've got to plan for the worst.
It's certainly not a strategy to just say it's all going to be OK. But you may as well emotionally feel like we're going to get to the great future, and play whatever part you can to get there, rather than act from a place of fear and despair all the time.
That concludes this episode of Grey Matter. If you're interested in all things AI, check out the rest of our Intelligent Future content devoted to the topic.
You can even hear the technology in action with Hoffman's Fireside Chatbot series, where he talks about AI with ChatGPT. And if you aren't already a subscriber to Grey Matter, please sign up wherever you get your podcasts. I'm Heather Mack. Thanks for listening.