You said the model that you use today is the worst AI model that you'll ever use for the rest of your life. Kevin, so on this slide, in five years, they have OpenAI as the ninth most valuable company in the world. OK, so $300 billion in enterprise value today, as measured by the markets. They have it going to $1.6 trillion. I remember when you joined OpenAI and I asked you the question, can OpenAI get to a trillion dollars, right, in total value? And you said, if I thought it was just a trillion dollars, I wouldn't be nearly as excited as I am about going to OpenAI. So when you just think about that. And by the way, less about market cap and.
Right, right, right. You know, more about the ambition and the opportunity and the impact on the market. Yeah. And so that's what I want you to do. Just blue sky for us today. How long have you been at OpenAI now? About a year. OK. So you're a year into this. A year in, do you think the opportunity is bigger than you thought when you started? And why? Talk us through that a little bit. I think it's bigger for sure than I realized. I think it's been in Sam's head the whole time, so I'm not sure that there's been a revelation over the past year, but it's certainly opened my eyes.
Right, right, right. I mean, for one, we have grown faster than anything I've ever seen. Yeah. And I thought I had seen fast growth when I was at Instagram, for example. And for those following along at home, on slide 24 in the deck, they show the user growth. And we talked about it earlier, but it's just phenomenal. Yeah. So as we've grown, we're also deepening usage with the platform. People are using it more. So just from that perspective, we're, I think, bigger today than I would have imagined. We walked through the data slide on that also. And you made some comments on stage about, as people use it more, the minutes go up.
Yeah. Well, I mean, we look at weekly active users. That's sort of the metric that we goal ourselves on from a growth perspective. Obviously, there are a lot of other things that we look at as well. But the reason we do weekly active users versus monthly is, we don't want someone that just comes once a month; that means we're probably not adding all that much value in their life if they're coming once a month. And over time, I think it's going to be more natural for us to measure daily active users, because really, if we're doing our job, then you're using ChatGPT on a daily basis.
And hopefully multiple times per day, because it can help you multiple times per day with all the different things that you encounter in your life. There's sort of the, are you a monthly user, are you a weekly user, a daily user, and then, you know, on a given day, how much time are you spending in the product? And you guys had a chart, your data, not ours, that showed that people were using ChatGPT a whole lot more, you know, time in per day.
Yeah. And that's awesome. That's super exciting. That just means that we're actually helping people solve problems and they're not just using it. You know, if you go back a couple of years to when it launched, it was really good at various sorts of writing things, if you were doing copywriting, or if you were going to have it summarize emails or things like that. And you look today and it's so much broader. It can help in so many other areas. People are uploading, you know, actually, my own son had a health thing.
Yeah. He had to have a minor surgery, which was not that big of a deal. At least it was not supposed to be that big of a deal. But there was a tiny chance that there was something more serious, you know. And so he has the surgery. He's eight. The next day, he's playing a soccer game. He doesn't know any better. Yes. Yes. But then we're still waiting for the biopsy of this thing to come back. Yes. And we end up getting a letter in the mail that has a whole bunch of complex medical stuff on it.
Right. Right. I'm, like, halfway educated. I couldn't make sense of it. I didn't know what it meant. And I called the doctor and I couldn't get a hold of her. She was in a surgery, I think. Right. You know, she's busy. She's got a million things going on. And I'm looking at this and there's a bunch of scary-looking medical terms. Yeah. And so the first thing I do, of course, is take a picture of it, put it into ChatGPT, and say, hey, should I be worried about this? And it comes back immediately.
No, you're fine. This is completely benign. Don't worry about it. You're all good. But I actually wasn't able to get a hold of the doctor for 72 hours. Right. That would have been a really bad 72 hours, as a parent sitting there stressing about this. You know, I think you're describing, as someone who's gone from once a day to 20 times a day, you learn what it can do. You learn the kinds of things it can do. Like you can take a photo of an appliance and ask it how to set the clock or something.
Yeah. And so one of the reasons I think it expands is people come to understand all the things it can do. Yeah. And all the kinds of prompts. You know, another one is, I used to do a Google search, go to a web page, find a number, and then put that into a formula and multiply it on a calculator. It can just do all that. Right. You can just say, go get this and run this. And now you can connect it to your email and your docs and your calendar.
And it can start to be useful in so many other ways. You know, what you used to have to go search through 20 docs for, now you can just ask for answers. So it's just so much more useful. And I totally agree the way to sort of understand it, because it's also constantly evolving, right? The models get step changes better every two or three months. Yes. The way to understand it is you just start using it.
You know, the number one piece of advice I have for people when they ask what they should do with ChatGPT is just start using it, because when you start using it for things, and you take little tiny risks and try it for new things that you're not sure it can do, you realize in most cases it can. Yeah. And then it just becomes more and more helpful to you over time.
And I think the numbers reflect that. So you're running product; your goal is to drive engagement, drive utility, thrill your customers. So talk to us about the things you're actively doing to take somebody from a monthly to a weekly, from a weekly to a daily. I'm sure a lot of this is happening organically, as Bill describes, but I'm sure you're pressing on the flywheel.
And then if we're at a billion monthlies today, how do you set a measure for success for you and your team, right? Like, do you have a goal, a big audacious goal? I want to get to two billion in three years. How are you thinking about that? Yeah. We do less of the sort of mechanical things of how do I get a monthly to a weekly. It's more like, we have an incredible research team, just an absolutely world-class research team, that's constantly iterating on models, making models that are suddenly able to do things that computers have never been able to do before in the history of computers.
And so we think a lot more about, how do we stay really tight with the research team? Because it's just this incredible nucleus of creativity and discovery. And how do we stay really close to them so that we can constantly be building products just on the edge of what the models can do? Right. Our feeling is, if we're building just on the edge of what models can do, then we suddenly have this product that can do things that no one's ever seen a product do before.
And even if it's not perfect right when it starts, because we believe in iterative deployment. We release early, we release often. Better to make lots of small mistakes, and, you know, collectively as a society we come to understand what AI is good at, what it's bad at, how we work through it. We'd rather do that.
And if you build a product that can do amazing things, even if it's only 70% right at that thing, in two or three months the next model is going to come along, and it's going to be 95% good at that thing. And all of a sudden you've got something great, and we've brought everybody along with us and learned from them at the same time, learned from the way that they're using it.
So I think the thing that we've really tried to get right, and we're not perfect at this, but I think we're a lot better than we used to be, is the really tight loop between research and product. Yes. Because when we have that loop right, and we're solving the problems that people have in the product, feedback goes back to research, and the research team goes, oh, it can't do that very well. We can fix that.
Right. And then the product gets better. And that's when magic happens. You said something prophetic on stage today, and simple, but it really caught me. You said the model that you use today is the worst AI model that you'll ever use for the rest of your life. Yeah. Which is really, it's a simple thing, and it's really kind of obviously true when you think about it. But it just changes the way you think about building products. Because if you think about it the right way, it makes you much more open to building products that only kind of work. Whether you're us building ChatGPT and other products, or whether you're an enterprise building some internal tool, because if the model can only kind of do it, then it's going to be great at it in a few months.
What things are you most excited about that are on the horizon? In order for ChatGPT to be truly useful, you need to go from it just answering questions that you have to it actually doing things for you in the real world. Ideally even proactively, so it understands the things that you're going to need to do and helps you do them, or suggests them, queues up a bunch of actions for you before you even get there. And so we've been really focused on adding things like connections into Google Docs and, you know, the products and services that you use every day. We've been really focused on personalization and memory, so that ChatGPT gets to know you and what your preferences are, how you like to interact with it.
In my case, I've made sure that it knows about my wife and what she does, you know, she's a seed investor. My kids, how old they are. Because then when I ask it, hey, we have some free time, what should we go do, or we're going to go on this trip, it doesn't just recommend random hotels, it recommends hotels that are great for my family. Yeah, yeah, yeah. And that kind of thing, each one is small, but it adds up. Right. And especially if you think about getting proactive, and, you know, really getting to know you and helping with something that you're going to realize you need to do four hours from now, personalization is going to be a big deal. No doubt.
One quick question on the kind of interactivity between different apps. There was some kerfuffle last week, I think Salesforce changed their terms of service in a way that would make it more difficult for other people to build models. How do you think that's going to play? Do you think people will put up walls or not? You know, will they be friendly with you running a widget on top of their app or not? Yeah, it's a good question. I think we're going to kind of feel our way there as an industry, as a society. It's one of the reasons that we believe so much in iterative deployment, where we release early, release often, because AI is going to change just about everything.
Yeah, it's going to change the way the web works, it's going to change the way we interact with services, and we don't want to do that unilaterally. I think the way to do this smoothly is to kind of iterate together on it. That makes total sense. And so, you know, we figure the more we release and put this stuff out in the world, we all kind of get a sense, it's not like we know all the answers, right? We all kind of get a sense of where things are going, and we can co-evolve in the right way.
I want you to have access to my contacts, my address book, my email, my texts, my everything. And that's your data. You should be able to take it to whatever AI you want to take your data to. You're paying for that data. On those lines, Kevin, one of the things that Bill and I have talked a lot about on this pod is our concern about kind of regulatory capture, regulatory intervention. Folks who are alarmist and saying this is dangerous, we need to slow it down, we need to really regulate it. And along those same lines, you know, whether or not we're going to have open source models or every model is going to be closed.
It seems to me that OpenAI has been on a journey over the course of, let's call it, the last year or two, where there was a belief, I think, that OpenAI was in the go-to-Washington-and-regulate-everything-and-slow-it-down camp. And now I think the distinction that's being drawn is more clear. OpenAI is talking about launching an open source model here shortly. You seem to be in the camp of, it's too early to regulate, get out there, accelerate, make sure that we distribute around the world and America's models are doing well. Is that a fair characterization, do you think, of where OpenAI is today?
I think no matter what, it's important to be engaging, right? Like I said, AI is going to change everything, and I think we do better when we co-evolve. It's one of the reasons I was saying earlier, people should just be trying the models. I think the same is true of our lawmakers and folks like that. They should just be getting to use AI. You start using it and you're like, oh, this isn't so scary. This is helpful. And you understand the nuances. You understand what it's good at, what it's not good at, right? You get a sense for how it's improving. I think that's really important. Context really matters.
I think that's one of the things we do when we're in DC. Part of it is just getting together and kind of giving people a sense of, here's where it is today and here's where we think it's going, right? So that people can make informed decisions. I don't know if you read Sam Altman's blog that was out this week, called The Gentle Singularity. I would encourage everybody to actually go read it. And it's this idea, Bill, that it's not this scary thing that all of a sudden is going to show up, which I think a lot of people were thinking about with AGI, that there was going to be this moment when everything was going to change.
Sam's point is, the world has already changed dramatically over the course of the last 24 months, but it hasn't changed that much about our daily lives. We just switched from using Google to get ten blue links to using ChatGPT to get answers to our questions. I'm now uploading my blood test information to ChatGPT because I want it to tell me things that are useful about it. I'm still at the point where I find those things pretty extraordinary. I can't believe that it can do these things and that I'm getting it for next to free.
But it has come. That level of profound change has come without that big of a societal disruption. And, you know, I think we definitely are in the camp that it is way too early to obstruct American AI with, you know, useless regulations. We don't even know what we should be regulating. It would be like regulating the auto industry out of existence early. It took us 40 years to realize we needed seat belts and airbags and that we needed speed limits and things like that. And we'll need all those things for AI.
But I think it's way too early to be doing that. So it's been a welcome development on our end to see OpenAI increasingly vocal about the need to make sure that we stay open. Kevin, one thing, and I know you didn't give any specifics, but I think you had a question from the audience about the hardware device, and you did share some philosophical thoughts. Good. I think the audience would be thrilled to hear what you said earlier. This is about the acquisition of Jony Ive's, you know, business and what you guys intend to do in hardware.
Yeah. I just said, and I think this is as true of hardware as it is of software, that AI is going to change everything about the way that we do our jobs, the way that we get stuff done in our personal lives. And I think basically every product, service, device, et cetera, that we use will need to be reinvented. Yes. And that doesn't mean that the incumbents can't reinvent it. I think sometimes they will, right? But I doubt in every case they will; certainly history doesn't tell us that that would be true. And so I think that means there's a huge opportunity for reinvention.
And that's an opportunity for us. In places where we think we can play a role, where we think we have a perspective, we'll go compete. It also means great opportunities for startups. Yes. And by the way, I do think it means good opportunities for incumbents if they can move quickly and sort of overcome the innovator's dilemma. That's a really hard thing to do. No doubt. So I just think if you look at the products that you spend your time in on a daily basis, and compare them to five years from now, I think they're going to be dramatically different.
Are you allowed to share any time window where we might find out more? You'll have to find out. Okay. You know, we had to do a flyby today. We're going to have you come on the pod and we'll do a full-blown podcast together. So we're here celebrating the 10th anniversary of East Meets West. Yep. We were saying earlier, it's an event that we all look forward to coming to. I'm going to ask you this final question, kind of big picture. They forecasted it out five years. Right.
What's possible for OpenAI? Skip the time window, whether it's three years or five years or seven years. Given the value that OpenAI and ChatGPT are delivering into the world, and we haven't even really gotten into enterprise and everything you're doing there, you just announced a big deal with the government, the Defense Department, et cetera, what's your level of confidence that you guys will reach a hundred billion of revenue in the future and kind of be that AI-defining company? Because at a hundred billion of revenue, it seems to me that this company could be on a path to being the most valuable company in the world, right? A five to ten trillion dollar business in the fullness of time, which frankly would not surprise me for the winner in the age of AI, right? The winner in consumer and a big company in enterprise.
But now you've been here a year, right? You joined something that you thought was going to be the biggest thing of your career. You've been at extraordinary, large, and successful companies. What's your level of confidence? Okay. I think the opportunity is all there in front of us. I've said like three times now how I think AI is going to change everything. Everything. And the path is there, and it's up to us to execute.
Yeah. It's why, I mean, this place, OpenAI, moves faster than any company I've ever worked at in my life. Wow. And I thought I had worked at places that move pretty fast. Faster than Twitter and Instagram? Yeah. That's a big statement. Nothing compares. I think, you know, we have amazing people on the team. We really try and push decision-making and responsibility down into the teams. I think in an AI world you need to do that, because the LLMs are so new. You can't know all of the capabilities top-down. You see them kind of coming through the mist, and we're better at finding all of the opportunities when we have a lot of smart people thinking about what they can do with the model in their area.
Yeah. And we do it, by the way, publicly. It's why we release early and often. We try and get it in people's hands, because then they also get a chance to figure out what the models can do. So I have a lot of confidence in us. We've got to execute. Yeah. We've got to be on top of our game, because we have a very serious set of competitors, and that's real. But, man, I just feel good.
When you look at this team, you, Sarah, Fidji, Greg Brockman, Sam, et cetera, even though the company's small, Bill, it's an extraordinarily deep bench of talented people who've done this in other places. So it comes down to execution. As an investor, I of course want them to succeed. But in terms of defining the company that's going to do the right things in this moment, I feel good to have this company in the lead, because these are great people that I think are going to do the right things.
And so it's great to have you at the helm. You're nice to say that. You know, the people that you named, and I'm beyond excited for Fidji to start, and all the other people you named, we tend to get unfair amounts of credit. I could give you 100 names of the people that are doing the real work and are actually the reason it's successful.
And, you know, hopefully we can tell their stories over time too. We will. All right. Thanks for being here. Thanks so much. Appreciate it. Yeah. As a reminder to everybody, these are just our opinions, not investment advice.