finding people who are high agency and work with urgency. If I was hiring five people today, those are two of the top characteristics I would look for, because you can take on the world if you have people with high agency who don't need to get 50 different people's consensus. They hear about a challenge from our customers and they're already pushing on what the solution is, not waiting for all the other things to happen. People just go and do it and solve the problem. I love that. It's so fun to be a part of those situations. Today my guest is Logan Kilpatrick.
Logan is head of developer relations at OpenAI, where he supports developers building on OpenAI's APIs and ChatGPT. Before OpenAI, Logan was a machine learning engineer at Apple and advised NASA on their open source policy. If you can believe it, ChatGPT launched just over a year ago and transformed the way that we think about AI and what it means for our products and our lives. Logan has been at the front lines of this change and every day is helping developers and companies figure out how to leverage these new AI superpowers.
In our conversation, we dig into examples of how people are using ChatGPT and the other OpenAI APIs in their work and their lives. Logan shares some really interesting advice on how to get better at prompt engineering. We also get into how OpenAI operates internally, how they ship so quickly, and the two key attributes they look for in the people they hire, plus where Logan sees the biggest opportunities for new products and new startups building on their APIs. We also get a little bit into the very dramatic weekend that OpenAI had with the board and Sam Altman, and so much more. A huge thank you to Dan Shipper and Dennis Yang for some great question suggestions.
With that, I bring you Logan Kilpatrick after a short word from our sponsors. This episode is brought to you by Hex. If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of screenshots and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no code in any combination and work together with live multiplayer and version control. And now, Hex's AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you, all from natural language prompts. It's like having an analytics co-pilot built right into where you're already doing your work.
Then, when you're ready to share, you can use Hex's drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel, and Algolia using Hex every day to make their work more impactful. Sign up today at hex.tech/lenny to get a 60-day free trial of the Hex team plan. That's hex.tech/lenny. This episode is brought to you by Whimsical, the iterative product workspace. Whimsical helps product managers build clarity and shared understanding faster with tools designed for solving product challenges.
With Whimsical, you can easily explore new concepts using drag-and-drop wireframe and diagram components, create rich product briefs that show and sell your thinking, and keep your team aligned with one source of truth for all of your build requirements. Whimsical also has a library of easy-to-use templates from product leaders like myself, including a project proposal one-pager and a go-to-market worksheet. Give them a try and see how fast and easy it is to build clarity with Whimsical. Sign up at whimsical.com/lenny for 20% off a Whimsical Pro plan. That's whimsical.com/lenny.
Logan, thank you so much for being here and welcome to the podcast. Thanks for having me, Lenny. I'm super excited. I want to start with the elephant in the room, which I think is actually leaving the room, because this was months ago at this point, but I'm still just really curious. What was it like on the inside of OpenAI during the very dramatic weekend with the board and Sam and all of that? Is there a story you could share that maybe people haven't heard about what it was like on the inside? Yeah, it was definitely a very stressful Thanksgiving week. For broad context, OpenAI had been pushing for a really long time since ChatGPT came out, and that was supposed to be the first week the whole company had taken time away to actually reset and have a break. Very selfishly, I was super excited to spend time with my family, all that stuff. Then that afternoon, we got the message that all of the changes were happening.
I think it was super shocking because, and this is a perspective a lot of folks here have had, everybody has had and continues to have such deep trust in Sam and Greg and our leadership team that it was just very surprising. As far as company cultures go, ours is very transparent and very open. When there are problems or things going on, we tend to hear about them. So it was the first time that a lot of us had heard some of the things that were happening between the board and the leadership team. Very surprising.
Being someone who's not based in San Francisco, I was, again very selfishly, happy that it happened over the Thanksgiving break, because a lot of folks had actually gone home to different places. It gave me a little bit of comfort knowing I wasn't the only one not in San Francisco, because everybody was meeting up in person to do a bunch of stuff and be together during that time. It was nice to know there were a few other folks who were out of the loop with me.
I think the thing that surprised me the most was just how quickly everybody got back to business. I flew to San Francisco the week after Thanksgiving, which I wasn't planning to do, to be with the team in person. Walking into the office Monday morning, I was expecting something weird to be going on. And really, people were laser-focused and back to work. I think that speaks to the caliber of our team and everybody being so excited about building towards the mission. That was the most surprising thing of the whole incident. A lot of companies would have had the potential to truly be derailed for some non-trivial amount of time by this. And everybody was just right back to it, which I love.
I feel like it also maybe brought the team closer together. It feels like it was a kind of traumatic experience that may bring folks together because it was something they all shared. Is there anything along those lines that's like, wow, things are a little different now? One of my takeaways was that I'm actually very grateful this happened when it happened. Today the stakes are still relatively high. People have built their businesses on top of OpenAI. We have tons of customers who love ChatGPT. So if something happens to us, we definitely impact our customers. But on the world scale, if OpenAI disappeared, somebody else would build a model and continue this progress toward general intelligence. If something like this had happened five or 10 years from now, after we had gone through the hopefully upcoming world transformation and all those changes that are going to happen,
I think it would have been a little bit, or potentially much, worse of an outcome. So I'm glad that things happened when the stakes were a little bit lower. And I totally agree with you. The team has been growing so rapidly over the last year since I joined. It's been crazy to think about how many new folks there are.
And I really think this brought people together. Historically, for many of the folks around when I joined, what banded us all together was the launch of ChatGPT, or the launch of GPT-4. For folks who weren't around for some of those launches, it was perhaps DevDay, and for folks who weren't around for DevDay, it was probably this event. So we've had these events that have really brought the company together cross-functionally. Hopefully all the future ones will be really exciting things like GPT-5, whenever that comes, and stuff like that. Awesome.
We're going to talk about GPT-5. Going in a totally different direction, what is the most mind-blowing or surprising thing that you've seen AI do recently? The things that are getting me the most excited are these new interfaces around AI. The Rabbit R1, I don't know if you've seen that, the consumer hardware device. Or this company called tldraw. I don't know if you've seen tldraw. You sketch something and then it makes it into a website. Yeah. And that's only a small piece of what tldraw is actually working on. But there are all of these new interfaces to interact with AI. I was having a conversation with the tldraw folks a couple of days ago.
It really blows my mind to think about how chat is the predominant way that folks are using AI today. And this is my bull case for the folks at tldraw. I'm super excited for them to build what they're building: this infinite canvas experience. You can imagine how, as you're interacting with an AI on a daily basis, you might want to jump over to your infinite canvas, which the AI has filled in with all the details, and you might see a reference to a file and to a video and all of these different things. It actually makes a lot more sense for us as humans to see stuff in that type of format than just listing out a bunch of stuff in chat. So I'm really, really excited to see more of that.
I think 2024 is the year of multimodal AI, but it's also the year that people really push the boundaries of some of these new UX paradigms around AI. It's funny. As a PM for many years, it felt like in every brainstorming session we had about new features, someone would say, hey, we should build a chatbot to solve this problem. It was the perennial suggestion. And now they're actually useful and working, and everyone's building chatbots, a lot of them based on OpenAI APIs.
There's not really a question there, but maybe the question I was going to get to later is: when people are thinking about building a product like, say, tldraw, how should they think about where OpenAI is not going to go, versus, here's what OpenAI is going to do for us? Like, we shouldn't worry about them building a version of tldraw in the future. What's the way to think about where you won't be disrupted by OpenAI, knowing also they may change their mind? That's a great question. We're deeply focused on these very, very general use cases: the general reasoning capabilities, the general coding, the general writing abilities.
Where you start to get into some of these very vertical applications, a great example is actually Harvey. I don't know if you've seen Harvey, but it's this legal AI use case where they're building custom models and tools to help lawyers and people at legal firms and stuff like that. That's a great example where our models are probably never going to be as capable as some of the things Harvey's doing, because our goal and our mission is really to solve this very general use case. And then people can do things like fine-tuning and build all their own custom UI and product features on top of that. I have a lot of empathy and a lot of excitement for people who are building these very general products today.
I talk to a lot of developers who are building just general-purpose assistants and general-purpose agents and stuff like that. I think it's cool and it's a good idea. The challenge for them is they're going to end up directly competing against us in those spaces. And I think there's enough room for a lot of people to be successful. But you shouldn't be surprised when we end up launching some general-purpose agent product, because, again, we're sort of building that with ChatGPT today. Versus, we're not going to launch some of these very verticalized products. We're not going to launch an AI sales agent. That's just not what we're building towards. Companies who are, and who have some domain-specific knowledge
and are really excited about that problem space, they can go into that and leverage our models, and end up continuing to be on the cutting edge without having to do all that R&D effort themselves. Got it. So the advice I'm hearing is: get specific about use cases. And that could be either models that are tuned to be especially useful for a use case like sales, or an interface or experience solving a more specific problem.
And if you're going to try to build the next general assistant to compete with something like ChatGPT, it has to be so radically different. People have to really say, wow, this is solving these 10 problems that I have with ChatGPT, and therefore I'm going to go try your new thing. Otherwise, you know, we're putting a ton of engineering effort and research effort into making that an incredible product.
And then there are just the normal challenges of building companies. It's hard to compete against somebody like that. Awesome. Okay, that's great. I was going to get to that later, but I'm glad we touched on it. I imagine that's on the minds of many developers and founders. Along the same lines, there's a lot of talk about how ChatGPT and GPTs and many of the tools you guys offer are going to make companies much more efficient.
They won't need as many engineers, data scientists, PMs, things like that. But I think it's also hard for companies to think about: what can we actually do to make our company more efficient? I'm curious if there are any examples you can share of how companies have built, say, a GPT internally to do something so that they don't have to spend engineering hours on it, or generally just used OpenAI tooling to make their business internally more efficient.
Yeah, that's a great question. I wonder if you can put this in the show notes, but there's a really great Harvard Business School study about, and I forget which consulting firm it was, Boston Consulting Group or something like that, but it might have been one of the other ones. They talk about the order of magnitude of efficiency gained for the folks who were using AI tools.
And I think it was ChatGPT specifically that they were using in those use cases, compared against folks who weren't using AI. As more time passes since the release of this technology, I'm really excited for us to get more empirical studies. Because I feel this for myself, as somebody who's an engineer today.
I use ChatGPT and I can ship things way faster than I would otherwise be able to. I don't have any good metrics for myself to put a specific number on it, but I'm guessing people are working on those studies right now. I think engineering is actually one of the highest-leverage things you could be using AI for today.
It's really unlocking probably on the order of at least a 50% improvement, especially for some of the lower-hanging-fruit software engineering tasks. The models are just so capable at doing that work. And I'm guessing GitHub probably has a bunch of really great studies published around Copilot. You could use those as an analogy for what people are getting from ChatGPT as well.
But those are probably the highest-leverage things. I think now with GPTs, people are able to go in and solve some of these more tactical problems. One of the general challenges with ChatGPT is that it gives a decent answer for a lot of different use cases, but oftentimes it's not particular enough to the voice of your company or the nuance of the work that you're doing.
Now with GPTs, people who are using ChatGPT Team and ChatGPT Enterprise can actually build those things, incorporate the nuance of their own company, and make solving those tasks much more domain-specific. We literally just launched GPTs a couple of months ago.
So I don't think there have been any good public success stories yet. But I'm guessing that success is happening right now at companies, and hopefully we'll hear more about it in the months to come as folks get excited about sharing those case studies. I'll share an example.
So I have this good friend named Dennis Yang. He works at Chime. And he told me about two things they're doing at Chime that seem to be providing value. One is he built a GPT that helps write ads for Facebook and Google; it gives you ideas for ads. And so that takes a little load off the marketing team or the growth team.
And then he built another GPT that delivers experiment results, kind of like a data scientist: here's the result of this experiment. And then you can talk to it and ask, hey, how much longer do you think we should run this? Or what might this imply about our product? Things like that. Is there anything else that comes to mind, things you've heard people do where you thought, wow, that was a really smart way of using these tools? I get that there's the engineering, copilot-type tooling, but is there anything else that comes to mind, just to give people a little inspiration, like, that's an interesting way I should be thinking about using some of these tools?
I've seen some interesting GPTs around planning use cases, like OKR planning for your team or something like that. I actually just saw someone retweet one literally yesterday. I've seen some cool venture capital ones for doing diligence on deal flow, which is kind of interesting, getting some different perspectives. All of those horizontal use cases where you can bring in a different personality and get perspective on different things, I think that's really cool. I've personally used a private GPT that helps with some of the planning stuff for different quarters, making sure that I'm being consistent in how I'm framing things, driving back to individual metrics, stuff that people often miss and are bad at when they do planning. It's been super helpful for me to have a GPT that forces me to think about some of those things.
Wait, can you talk more about this? What does this GPT do for you, and what do you feed it? Yeah, I forget what article I saw online, but it was some article talking about the best ways to set yourself up for success in planning. I'll see if I can make it public after this and send you a link, but I took a bunch of the examples from that and put some of those suggestions into the GPT. And now when I do any of my planning, like I want to build this thing, I put it through and have it generate a timeline, generate all the specifics of what the metrics and success criteria are that I'm working towards, who might be some important cross-functional stakeholders to include in the planning process, all that stuff. It's been helpful.
Wow, that is very cool. It would be awesome if you made it public, and if you do, we'll link to it and make it the number one most popular GPT in the store. I love it. Going in a slightly different direction, there's this whole genre of prompt engineering. It feels like one of these really emerging skills. I actually saw a startup hiring a prompt engineer, one of the startups I've invested in, and I think it's going to blow a lot of people's minds that this huge job is emerging. And I know the idea is this won't last forever, that in theory AI will be so smart you won't need to think about how to be smart about asking it for things. But can you describe this idea of prompt engineering, this term that people might be hearing? And then, even more interesting, what advice do you have for people to get better at writing prompts, for say ChatGPT or through the API in general?
Yeah, this is such an interesting space, and it's another space where I'm excited for people to do more scientific and empirical studies, because there's so much gut-feeling best practice that maybe isn't actually true in certain ways. I think the reason prompt engineering exists and comes up at all is because the models are so inclined, because of the way they're trained, to give you just an answer to the question that you asked. Crap in, crap out. If you ask a pretty basic question, you're going to get a pretty basic response. The same thing is true for humans, and you can think of a great example of this. When I go to another human and ask, how's your day going, they say, hey, it's going pretty good. Literally zero detail, no nuance, not very interesting at all. Versus if you have some context with the person, if you have a personal relationship with them, I'm going to ask, hey, Lenny, how's your day going? How did the last podcast go, et cetera. You just have a little bit more context and agency to go and answer my question.
My whole position on this is that prompt engineering is a very human thing. When we want to get some value out of a human, we do this prompt engineering. We try to effectively communicate with that human in order to get the best output. And the same thing is true of models. Because we're using a system that appears to be really smart, we assume that it has all this context. But really, imagine a human level of intelligence with literally no context: it has no idea what you're going to ask it, it's never met you before, it has no idea who you are, what you do, what your goals are. The reason you get super generic responses sometimes is that people forget they need to put that context into models.
So I think there's a thing that's going to help solve this problem, and we already kind of do it in the context of DALL·E. When you go to our image generation model, DALL·E, and you say, I want a picture of a turtle, it actually takes that description and changes it into something high fidelity, like: generate a picture of a turtle with a shell, with a green background, and lily pads, and water. It adds all this fidelity because of the way the model is trained; it's trained on examples with super high fidelity. This will happen with text models too. You can imagine a world where you go into ChatGPT and say, write me a blog post about AI, and it automatically goes, let me generate a much higher-fidelity description of what this person really wants, which is: generate me a blog post about AI that talks about the trade-offs between these different techniques, with some example use cases, referencing some of the latest papers. It does all that for you. And then you as the user will hopefully be able to say, yep, this is kind of what I wanted, let me edit this here and this here.
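The expansion step described here can be sketched as a two-step pattern: first rewrite the terse request into a higher-fidelity prompt, then send the expanded prompt to the model. A minimal sketch in Python, where the template-based `expand_prompt` helper is an illustrative stand-in (in DALL·E the rewriting is itself done by a model, not a fixed template):

```python
def expand_prompt(terse_request: str) -> str:
    """Rewrite a terse request into a higher-fidelity image prompt.

    A fixed template stands in here just to show the shape of the
    pattern; the real systems use a model for this rewriting step.
    """
    return (
        f"Generate {terse_request}, rendered in high detail, "
        "with a clearly described setting, natural lighting, "
        "and a coherent color palette; no text or watermarks."
    )

# The expanded prompt is what would actually be sent to the image model.
expanded = expand_prompt("a picture of a turtle")
print(expanded)
```

The same shape applies to the hypothetical text-model example: the user's one-liner becomes the `terse_request`, and the system fills in trade-offs, use cases, and references before answering.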
And again, the inherent problem is that we're lazy as humans. We don't really want to type out what we mean. And I think AI systems are actually going to help solve some of that problem. So until that day, what can people do better when they're prompting, say, ChatGPT? I'll give you an example. Tim Ferriss suggested this really good idea that I've been stealing, which is when you're preparing for an interview, go to ChatGPT. And so I did this for you. I was like, hey, I'm interviewing Logan Kilpatrick, head of developer relations at OpenAI, on my podcast. Give me 10 questions to ask him in the style of Tyler Cowen, who I think is the best interviewer. He's so good at very pointed, original questions. So what advice would you have for me to improve on that prompt to get better results? Because the questions were fine. They were interesting enough, but they weren't like, holy shit, these are incredible.
So what advice would you give me in that example? Yeah, that's a great example. Think in the context of who it is you're asking questions about. I'm probably not somebody who has enough information about me on the internet for the model to have been trained on the nuances of my background. There are probably much more famous guests where there's enough context on the internet to answer the question. So you actually have to do some of that work. If you're using Browse with Bing, for example, you could say: here's a link to Logan's blog and some of the things he's talked about, here's a link to his Twitter, go through some of his tweets, go through some of his blogs, and see what his interesting perspectives are that we might want to surface on the podcast, or something like that.
And again, it's giving the model enough context to answer the question. That prompt actually might work really well for somebody who has a lot of information about them on the internet, like if you were interviewing Tom Cruise or something; it probably works a little bit better there. So the advice is just give more context. The model doesn't tell you, hey, I don't actually know that much about Logan, so give me some more information. It just goes, here's a bunch of good questions. Exactly. It so deeply wants to answer your question. It doesn't care that it doesn't have enough context. It's the most eager person in the world you could imagine to answer the question, and without that context, it's just hard for it to give you anything of value. If we got T-shirts printed, they should say, context is all you need. Context is the only thing that matters. It's such an important piece of getting a language model to do anything for you.
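The "context is all you need" advice maps directly onto how you structure messages for the chat API: front-load the background material before asking the question. A minimal sketch in Python; the helper name, wording, and placeholder URLs are illustrative assumptions, and only the role/content message shape comes from OpenAI's chat-completions format:

```python
def build_interview_messages(guest, source_links, interviewer_style):
    """Pack background context into a chat-completions message list.

    Front-loading sources in the system message is what turns a generic
    "give me 10 questions" prompt into a grounded one.
    """
    sources = "\n".join(f"- {link}" for link in source_links)
    system_msg = (
        f"You write interview questions in the style of {interviewer_style}. "
        "Ground every question in the background sources below; if they "
        "don't cover a topic, say so rather than guessing.\n"
        f"Background sources:\n{sources}"
    )
    user_msg = f"Write 10 pointed, original interview questions for {guest}."
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

messages = build_interview_messages(
    "Logan Kilpatrick",
    ["https://example.com/logan-blog", "https://example.com/logan-twitter"],
    "Tyler Cowen",
)
# This list is what you'd pass as the `messages` argument of a chat completion call.
print(messages[0]["role"])  # → system
```

The same pattern works in the ChatGPT UI: paste the links and background into the top of your prompt rather than relying on the model to know the guest.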
Any other tips as people are sitting there, maybe with ChatGPT open right now as they're crafting a prompt? Is there anything else you'd say would help them get better results? We actually have a prompt engineering guide, which folks should go check out, that has some of these examples. It depends on the order of magnitude of performance increase you can get. There are a lot of really small, silly things. Like, adding a smiley face increases the performance of the model. I'm sure folks have seen a lot of these silly examples, like telling the model to take a break and then answer the question. All these kinds of things.
And if you think about it, it's because the corpus of information that trained these models is the same stuff that humans have sent back and forth to each other. When I take a break and then come back to work, I'm fresher, I'm able to answer questions better and do better work. Very similar things are true for these models. And when I see a smiley face at the end of someone's message, I feel empowered that this is going to be a positive interaction, and I'm more inclined to give them a great answer and spend more effort on the thing they asked me for. Wow. Wait. So that's a real thing? If you add a smiley face, it might give you better results? Again, the challenge with all this stuff is that it's very nuanced. And it's also a small jump in performance, on the order of one or two percent, which for a few-sentence answer might not even be a discernible difference. If you're generating an entire saga of text, the smiley face could actually make a material difference for you, but for something small and textual, it might not.
Okay. Good tip. Amazing. Okay. We've talked about GPTs. Maybe it would be helpful to describe: what is this new thing that you guys launched, GPTs? And I'm curious how it's going, because this is a really big change and element of OpenAI now, this idea that you can build your own, and I'm almost explaining it, your own mini ChatGPT. And I think you can pay for it, right? You can charge for your own GPT, or is it all free right now? It's all free right now. Okay. In the future, I imagine people will be able to charge. So there's this whole store now, basically a whole app store that you guys have launched. How's it going? What's happening? What surprised you? What should people know? Yeah, it's going great. Historically, if you had, for example, a really cool ChatGPT use case, what you would have to do to share it with somebody else was actually go in and start the conversation with the model, prompt it to do the things you wanted, and then share that link with somebody else before the action had actually happened, like: here, now you can essentially finish this conversation with ChatGPT that I started. GPTs change this. You take all that important context, you put it into the model to begin with, and then people can go and chat with essentially a custom version of ChatGPT. And the thing that's really interesting is you can upload files, you can give it custom instructions, you can add all these different tools. Code interpreter is built in, which allows you to essentially do math; browsing is built in; image generation is built in. And for more advanced use cases, if you're a developer, you can connect it to external APIs.
So you can connect it to the Notion API or Gmail or all these different things, and have it actually take actions on your behalf. There are so many cool things that people are unlocking. What's been most exciting to me is that the non-developer persona is now empowered to go and solve really, really challenging problems by giving the model enough context on what that problem is. Going back to context is all you need: this is very true in the context of GPTs. If you've given enough context, you can solve much more interesting problems. There are so many things I'm excited about with this. I think monetization, when it comes to the store later this quarter, is going to be extremely exciting. When people can get paid based on who's using their GPTs, that's going to be a huge unlock and open a lot of people's eyes to the opportunity here. I also think continuing to push on making more capabilities accessible to GPTs for people who can't code is really exciting. Even for me, as someone who was a software engineer, it's not super easy to connect the Notion API or the Gmail API to my GPT. I'd love to just be able to do one quick sign-in with Gmail and then all of a sudden my Gmail is accessible, or someone else can sign in with their Gmail and make it accessible. Over time, all those types of things will come. But today, custom prompts are essentially one of the biggest value adds with GPTs.
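For context, connecting a GPT to an external API works by giving the GPT builder an OpenAPI schema describing the actions it may call. A minimal, hypothetical fragment of such a schema (the server URL, path, and fields are invented for illustration, not a real Notion or Gmail integration):

```yaml
openapi: 3.1.0
info:
  title: Example task API        # hypothetical service
  version: "1.0"
servers:
  - url: https://api.example.com
paths:
  /tasks:
    post:
      operationId: createTask    # the name the GPT refers to the action by
      summary: Create a task on the user's behalf
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                title:
                  type: string
              required: [title]
      responses:
        "200":
          description: The created task
```

Writing and hosting a schema like this is exactly the step that's hard for non-developers today, which is the gap the one-click sign-in idea above would close.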
Awesome. I have it pulled up here on the other monitor, and Canva has the top GPT currently. I was trying to play with it as you were chatting, just to see. I was going to make a big banner that said, it's the context, stupid. It's not working, I'm not doing something right, but I'm not paying that much attention to it because we're talking. But this is very cool. Maybe a final question there: is there a GPT that you saw someone build that was like, wow, that's amazing, that's so cool? Something that surprised you? I'll share one that was really cool, but is there anything that comes to mind? My instinct is Zapier. All of the stuff that Zapier has done with GPTs is the most useful stuff you can imagine. You can go so far with it, and I don't know exactly how it's packaged in Zapier's GPT right now, but you can actually, as a third-party developer, integrate Zapier into your GPT without knowing how to code. So they're pushing a lot of this stuff, and basically all 5,000 connections that are possible with Zapier today, you can bring into your GPT and essentially enable it to do anything. I'm incredibly excited for Zapier and for the people who are building with them, because there are so many things you can unlock using that platform. I think that's the most exciting thing to me for people who aren't developers.
Awesome. Zapier is always in there connecting things. Yeah, they're great. So the one that I had in mind: a buddy of mine, Siqi, who's the CEO of a company called Runway, built this thing called Universal Primer, which helps you learn. It's described as learn everything about anything. He basically uses this kind of Socratic method of helping you learn stuff. So it's like, explain how transformers work in LLMs, and then it goes through the material and asks you questions, I think, and helps you learn new concepts. And I think it's the number two education GPT. I love that. Siqi's incredible.
So yes, it's true. Let me tell you about a product called Arcade. Arcade is an interactive demo platform that enables teams to create polished, on-brand demos in minutes. Telling the story of your product is hard, and customers want you to show them your product, not just talk about it or gate it. That's why product teams such as Atlassian, Carta, and Retool use Arcade to tell better stories within their home pages, product changelogs, emails, and documentation. But don't just take my word for it. Quantum Metric, the leading digital analytics platform, created an interactive product tour library to drive more prospects.
With Arcade, they achieved a 2x higher conversion rate for demos and saw five times more engagement than videos. On top of that, they built a demo 10 times faster than before. Creating a product demo has never been easier. With browser-based recording, Arcade is the no-code solution for building personalized demos at scale. Arcade offers product customization options, designer-approved editing tools, and rich insights about how your viewers engage every step of the way. Ready to tell more engaging product stories that drive results? Head to arcade.software/lenny and get 50% off your first three months. That's arcade.software/lenny.
I want to talk about what it's like to work at OpenAI, and how the product team and the company operate. Your two previous companies were Apple and NASA, which are not known for moving fast. And now you're at OpenAI, which is known for moving very fast, maybe too fast for some people's taste, as we saw with the whole board thing. So what I'm curious about is: what is it that OpenAI does so well that allows them to build and ship so quickly, at such a high bar? Is there a process or a way of working that you've seen that you think other companies should try in order to move more quickly and ship better stuff? There are so many interesting trade-offs and all this tension around how quickly companies can move.
I think for us, if you think about Apple as an example, or NASA as an example, these are just older institutions, and over time the tendency is to slow down. Additional checks and balances get put in place, which sort of drag things down a little bit. We're a young, new company, so we don't have a lot of those institutional legacy barriers.
I think the biggest thing, and there's a good Sam tweet somewhere in the ether about this, from 2022 I think, is finding people who are high agency and work with urgency. If I was hiring five people today, those are the top two characteristics I would look for in people, because you can take on the world if you have people who have high agency and don't need to get 50 people's consensus. You have people you trust with high agency, and they can just go and do the thing.
I think that is the most important thing, if you were to distill it down. And I see this in the folks I work with. People are so high agency: they see a problem and they go and tackle it. They hear something from our customers about a challenge they're having, and they're already pushing on what the solution for them is, not waiting for all the other things to happen that I think traditional companies get stuck behind, because they're like, oh, let's check with all these seven different departments.
You know, let's try to get feedback on this. People just go and do it and solve the problem. And I love that. It's so fun to be able to be a part of those situations. That is so cool. I really like these two characteristics, because I haven't heard this before as maybe the two most important things you guys look for: high agency, high urgency. To give people a clear sense of what these actually look like when you're hiring, you shared this example of customer service, someone hearing about a bug and then going to fix it.
Is there anything else that can illustrate what high agency looks like, and then a similar question on urgency, other than just move, move, ship, ship? I think the Assistants API that we released for DevDay is a good example. We continued to get this feedback from developers that people wanted higher levels of abstraction on top of our existing APIs, and a bunch of folks on the team just came together and said, let's put together what the plan would look like to build something like this. And then they very quickly came together and actually built the API that now powers so many people's assistant applications out there.
And I think that's a great example of how it wasn't top-down, like someone sitting there saying, let's do these five things, and then, okay, team, go and do that. It's people really seeing these problems as they come up and knowing that they can come together as a team and solve them really quickly. And I think the Assistants API, and there are a thousand and one other examples of teams taking agency and doing this, but that's a great one off the top of my head. That makes me want to ask: how does planning work at OpenAI?
So in this example, it was just, hey, we think we need to build this, let's just go and build it. I imagine there's still a roadmap and priorities and goals that that team had. How does roadmapping and prioritization and all of that generally work to allow for something like that? I think this is one of the more challenging pieces at OpenAI. Everyone wants everything from us. And today, especially in the world of ChatGPT, and with how large and well-used our API is, people will just come to us and say, hey, we want all of these things.
I think there are a bunch of core guiding principles that we look at. One, going back to the mission: is this actually going to help us get to AGI? So there's a huge focus on that. There's this potential shiny reward right in front of us, like optimizing user engagement or whatever it is. And is that really the thing? Maybe the answer is yes, maybe that is what's going to help us get to AGI sooner. But looking at it through that lens, I think, is always the first step of deciding any of these problems.
I think on the developer side, there are also these core tenets, like reliability. Hey, it would be awesome if we had additional APIs that did all these cool things, new endpoints, new modalities, new abstractions, but are we giving customers a robust and reliable experience with the API? That's often the first question. And there have been times where we've fallen short on that, where there were a bunch of other things we'd been thinking about doing, and we really brought the focus and priority back to that reliability piece. Because at the end of the day, nobody cares if you have something great if they can't use it robustly and reliably. So there are these core tenets. And other than the principles around how we make the decisions, I think the actual planning process is pretty standard: we come together, there are H1 and Q1 goals, and we all sprint on those.
I think the really interesting thing is how stuff changes over time. You think you're going to do these very high-level things, new models, new modalities, whatever it is, and then as time goes on there's all this turmoil and change. It's interesting to have mechanisms for, hey, how do we update our understanding of the world and our goals as the ground changes underneath us, as is happening in the craziness of the AI space today. It's interesting that it sounds a lot like most other companies.
There's H1 planning, there's Q1 planning. Are there metrics and goals like that that you guys have, OKRs or anything like that? Or is it just, here, we're going to launch these products? I think it's much higher level. I actually don't think OpenAI is a big OKR company. I don't think teams do OKRs today, and I don't have a good understanding of why that's the case. I don't even know whether OKRs are still the industry standard. You're probably talking to a lot more folks who are making those decisions.
So I'm curious, is that something that you're seeing? Is it still common for people to do OKRs? Yeah, absolutely. Many companies use OKRs and love OKRs. Many companies hate OKRs. I am not surprised that OpenAI is not an OKR-driven company. Along those lines, I don't know how much you can share about all this stuff, but how do you measure success for things that you launch? I know there's this ultimate goal, AGI. Is there some way to track that you're getting closer? What else do you guys look at when you launch, say, the GPT Store or Assistants or anything, to say, cool, that was exactly what we were hoping for? Is it just adoption?
Yeah, adoption is a great one. I think there are a bunch of metrics around revenue, the number of developers building on our platform, all those things. A lot of these, and I don't want to dive in, I'll let Sam or someone else on our leadership team go more into the details, but a lot of these are actually abstractions towards something else. Even if revenue is a goal, revenue is not actually the goal. Revenue is a proxy for getting more compute, more GPUs, so that we can train better models and actually get to the goal. So there are all these intermediate layers, where even if we say something is the goal, and you hear that in a vacuum, you're like, oh, well, OpenAI just wants to make money. Really, money is the mechanism to get better models so that we can achieve our mission. I think there are a bunch of interesting angles like that as well.
I don't know if I've heard of a more ambitious vision for a company than to build artificial general intelligence. I love that. I imagine many companies are like, what's our version of that? Before we leave this topic, is there anything else that you've seen OpenAI do really well that allows it to move this fast and be this successful? You talked about hiring people with high agency and high urgency. Is there anything else that's just like, oh, wow, that's a really good way of operating? I imagine part of it's just hiring incredibly smart people. I think that's probably an unsaid thing, but yeah, anything else?
I think there's a non-trivial benefit to using Slack. Maybe that's controversial, and maybe some people don't like Slack, but OpenAI is such a Slack-heavy culture. The instantaneous, real-time communication on Slack is so crucial, and I just love being able to tag in different people from different teams and get everybody coalesced. Everybody is always on Slack. So even if you're remote, or on a different team, or in a different office, so much of the company culture is ingrained in Slack, and it allows us to coordinate really quickly. It's actually faster to send somebody a Slack message sometimes than it would be to walk over to their desk, because they're on Slack and they're going to be using it.
I don't know if you saw the recent Sam and Bill Gates interview, but Sam was talking about how Slack is his number one most-used app on his phone. I don't even look at the screen time. I don't want to know how long I'm using Slack. But I'm sure the Salesforce people are looking at the numbers and saying, this is exactly what we wanted. I also love Slack. I'm a big promoter of Slack. I think there's a lot of Slack hate, but it's such a good product. I've tried so many alternatives and nothing compares. I think what's interesting about Slack for you guys is you don't know if someone in there is just an AGI, not actually a person working at the company. I know they're real people. There are no AGIs yet. But even Slack is building a bunch of really cool AI tools, which I'm excited about. There's so much cool AI progress. At the end of the day, it's so exciting being a consumer of all these new AI products. Google's a great example. I'm so happy that Google's doing really cool AI stuff, because I'm a Google Docs customer. I love using Google Docs and a bunch of their other products. It's awesome that people are building such useful things around these models.
How big is the OpenAI team at this point, whatever you can share, just to give people a sense of the scale? Yeah, I think the last public number was something around 750 near the end of last year, 780 or something like that. We're still growing so quickly. I won't be the messenger to share the specific update. The team is growing like crazy, and we're hiring across all of our engineering and PM teams, so if folks are curious about joining, we'd love to hear from them. Maybe one last question here. You're growing, maybe getting to 1,000 people, and clearly still very innovative and moving incredibly fast. Is there anything you've seen about what OpenAI does well to enable innovation and not slow down new big ideas? Yeah, there are a couple of things. One of which is that the actual research team, where most of the innovation at OpenAI happens, is intentionally small. Most of the growth that OpenAI has seen is around customer-facing roles, or engineering roles to provide the infrastructure to power ChatGPT and things like that. The research team is intentionally kept small. It's really interesting: I just saw this thread from one of our research folks who was talking about how, in a world where you're constrained by the amount of GPU capacity that you have as a researcher, which is the case for OpenAI researchers and also researchers everywhere else, each new researcher that you add is actually a net productivity loss for the research group, unless that person is up-leveling everyone else in such a profound way that it increases the overall efficiency.
If you just add somebody who's going to tackle some completely different research direction, you now have to share your GPUs with that person, and everyone else is now slower on their experiments. That's a really interesting trade-off that research folks have that I don't think product folks do. If I add another engineer to our API team or to some of the ChatGPT teams, they can actually write more code and do more. That's a net beneficial improvement for everybody. That's not always the case with researchers, which is interesting, in a GPU-constrained world, which hopefully we won't always be in.
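To make that trade-off concrete, here's a toy model of the GPU-bound dynamic Logan describes. Everything here is invented for illustration (the numbers, the 1.2x "up-leveling" factor, the function itself); it's a sketch of the reasoning, not anything OpenAI actually computes.

```python
# Toy model: a research group's total experiment throughput is capped by a
# fixed GPU pool, not by headcount. All numbers are invented for illustration.
def group_throughput(total_gpu_hours, n_researchers, efficiency=1.0):
    gpu_hours_each = total_gpu_hours / n_researchers   # the pool is shared
    experiments_each = gpu_hours_each * efficiency
    return n_researchers * experiments_each            # == total * efficiency

baseline = group_throughput(10_000, 5)
one_more = group_throughput(10_000, 6)                    # same GPUs, more people
uplifted = group_throughput(10_000, 6, efficiency=1.2)    # new hire up-levels everyone

# Adding a researcher alone changes nothing; raising everyone's
# efficiency is the only way the group's output goes up.
print(baseline, one_more, uplifted)
```

The contrast with product engineering is that an added engineer brings their own throughput rather than splitting a fixed pool, which is the asymmetry Logan points at.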
I want to zoom out a bit, and there are going to be a couple of follow-up questions here. Where are things heading with OpenAI? What's in the near future of what people should expect from the tools you guys are going to launch? Yeah, new modalities. I think ChatGPT will continue to push on all of the different experiences that are going to be possible. Today, ChatGPT is really just text in, text out. Or I guess three months ago it was just text in, text out, and we've started to change that: now you can use voice mode, you can generate images, and you can take pictures.
I think continuing to expand the ways in which you interface with AI through ChatGPT is coming. I think GPTs are our first step towards the agent future. Today, when you use a GPT, you send a message, you get an answer back almost right away, and that's kind of the end of your interaction. As GPTs continue to get more robust, you'll actually be able to say, hey, go and do this thing, and just let me know when you're done. I don't need the answer right now. I want you to really spend time and be thoughtful about this.
Again, if you think back to all these human analogies, that's what we do with humans. When I ask somebody to do something meaningful for me, I don't expect them to do it right away and give me the answer back immediately. I think pushing more towards those experiences is what's going to unlock so much more value for people. And the last thing is GPTs as this mechanism to get the next few hundred million people into ChatGPT and into AI.
If you have conversations with people who aren't close to the AI space, oftentimes, even if they've heard of ChatGPT, and a lot of people haven't, they show up in ChatGPT and they're like, I don't really know what I'm supposed to do with this. It's this blank slate. I can kind of do anything. It's not super clear how this solves my specific problem. But the cool thing about GPTs is you can package up, here's this one very specific problem that AI can solve for you, and do it really well. And I can share that experience with you.
Now you can go and try that GPT, have it actually solve the problem, and be like, wow, it did this thing for me. I should probably spend the time to investigate these five other problems that I have, to see if AI can also be a solution to those. I think so many more people are going to come online and start using these tools, because very narrow, vertical tools are what's going to be a huge unlock for them. ChatGPT has the classic horizontal product problem: it does so many things that people don't know exactly what it should do for them.
That makes a ton of sense. Just being a lot more template-oriented and use-case-specific, helping people onboard, makes tons of sense. It's a common problem for so many SaaS products out there. The other one you mentioned, which is really interesting, is basically more interfaces to more easily interact with OpenAI: voice, you mentioned, audio, and things like that. That makes tons of sense. And then there's this agents piece, where the idea is that instead of just a chat, it's, go do this thing for me.
Kind of along those lines, GPT-5, we touched on this a bit. There's a lot of speculation about how much better the next version will be. People just have these wild expectations, I think, for where GPT is going: GPT-5 is going to solve all the world's problems. I know you're not going to tell me when it's launching and what it's going to do, but I heard from a friend this tip that when you're building products today, you should build towards a GPT-5 future, not based on the limitations of GPT-4 today.
To help people do that, what should people think about that might be better in a world of GPT-5? Is it just faster, just smarter, or is there anything else that might make people say, oh, wow, I should really rethink how I'm approaching my product? If folks have looked through the GPT-4 technical report that we released back in March when GPT-4 came out, GPT-4 was the first model we trained where we could reliably predict the capabilities of that model beforehand, based on the amount of compute we were going to put into it.
We did a scientific study to show, hey, this is what we predicted, and here is what the actual outcome was. It'll be interesting, just as somebody who's interested in technology, to see whether that continues to hold for GPT-5. Hopefully, we'll share some of that information whenever that model comes out. I also think you can probably draw a few observations. One of them: GPT-4 came out, and the consensus from the world was, everything is different. All of a sudden, everything is different. This changes the world, this changes everything. And then slowly but surely we come back to the reality of, this is a really effective tool, and it's going to help solve my problems more effectively. I think that is undoubtedly the lens through which people should look at all of these model advancements.
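The capability-prediction idea Logan references is, at its core, scaling-law extrapolation: fit a power law relating training compute to loss on smaller runs, then predict the bigger run before spending the compute. Here's a minimal sketch of that fit. The (compute, loss) pairs are invented for illustration; this is not the actual methodology or data from the GPT-4 technical report.

```python
import math

# Invented (training compute in FLOPs, final loss) pairs for four smaller
# runs. Purely illustrative numbers, not real OpenAI data.
runs = [(1e19, 2.80), (1e20, 2.45), (1e21, 2.15), (1e22, 1.90)]

# Least-squares fit of log(loss) = slope * log(compute) + intercept,
# i.e. a power law: loss ~ exp(intercept) * compute ** slope.
xs = [math.log(c) for c, _ in runs]
ys = [math.log(l) for _, l in runs]
n = len(runs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
intercept = y_mean - slope * x_mean

def predicted_loss(compute):
    # Extrapolate the fitted power law to a larger, not-yet-trained run.
    return math.exp(intercept + slope * math.log(compute))

# Predict the outcome of a 10x larger run before training it:
print(predicted_loss(1e23))
```

The point of the exercise is the workflow, not the numbers: if the small-run points sit on a clean power law, the big run's loss becomes predictable in advance, which is what "reliably predict the capabilities beforehand" means in practice.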
GPT-5 is going to be extremely useful and solve some whole new echelon of problems. Hopefully, it'll be faster. Hopefully, it'll be better in all these ways. But fundamentally, the same problems that exist in the world are still going to be the same problems. You now just have a better tool to solve them. Going back to vertical use cases, I think people who are solving very specific use cases are just going to be able to do that much more effectively. People have these unrealistic expectations that GPT-5 is going to be doing backflips in the background of my bedroom while it also writes all my code for me and talks on the phone with my mom or something like that. That's not the case. It is just going to be a very effective tool, very similar to GPT-4.
It's also going to become very normal very quickly. That's actually a really interesting piece: if you can plan for the world where people become very used to these tools very quickly, I actually think that's an edge. Assuming that this thing is going to absolutely change everything is, in many ways, actually a downside. It's the wrong mental framing to have of these tools as they come out. Along these lines, you guys are investing a lot into B2B offerings. I think half the revenue, last I heard, was B2B and half is B2C. I don't know if that's true, but that's something I heard. What is it that you get if you work with OpenAI as a company, as a business? What's the unlock? Is it just called OpenAI Enterprise? What's it called? What do you get as a part of that? I think a lot of our B2B customers are using the API to build stuff. That's one angle of it.
I think if you're a ChatGPT B2B customer, we sell Teams, which is the ability to get multiple subscriptions of ChatGPT packaged together. We also have an enterprise version of ChatGPT. There are a bunch of enterprise-y things that enterprise companies want, around SSO and stuff like that, related to ChatGPT Enterprise. I think the coolest thing is actually being able to share some of these prompt templates and GPTs internally. Again, you can make custom things that work really well for your company, with all of the information that's relevant to solving problems at your company, and share those internally. To me, you want to be able to collaborate with your teammates on the cool things you create using AI. That's a huge unlock for companies. Those are the two biggest value adds. There are higher limits and stuff like that on some of those models, but I think being able to share your very domain-specific applications is the most useful thing.
I think if you're a company listening and a lot of your employees are using ChatGPT, basically the simplest thing you could do is just roll it up into a business account with single sign-on. That probably saves you money and makes it easier to coordinate and administer. There's also a bunch of security stuff too. Say you don't want people to use certain GPTs from the GPT Store because you're worried about security or privacy, you don't want your private data going places. It makes a lot of sense to sign up for that so you have a little bit more control over what's happening. There's a launch happening tomorrow, I think, after we're recording this. Can you talk about what's new, what's coming out? I think this episode is going to come out a couple of weeks after recording, but what should people know that's new, that's coming out from OpenAI tomorrow, in your world?
Yeah, there are a few different things. A couple of quick ones: there's an updated GPT-4 Turbo model, the preview model that we released at DevDay. If folks have seen people online talking about this laziness phenomenon in the model, we improved on that, and it fixes a lot of the cases where that was happening. Hopefully the model will be a little bit less lazy. The big thing is the third-generation embeddings model. We were talking off-camera before recording about all of the cool use cases for embeddings. If you've used embeddings before, you know it's essentially the technology that powers a lot of this question answering with your own documentation or your own corpus of knowledge. You were saying you actually have a website where people can ask questions about recordings of the podcast.
Lennybot.com, check it out. Yeah, Lennybot.com. My assumption is that Lennybot.com is actually powered by embeddings. You take the whole corpus of knowledge, all the recordings, your blog posts, and you embed them. When people ask questions, you can actually go in and measure the similarity between the question and the corpus of knowledge, then provide an answer that references an empirical fact, something that's true from your knowledge base. This is super useful, and people are doing a ton of this.
It's about grounding these models in reality, in what they know to be true. We know all the things from your podcast, or at least things you've said before, to be true in that sense, and we can bring them into the answer that the model is actually generating in response to a question. That'll be super cool. These new V3 embeddings models have state-of-the-art performance, and the cool thing is the non-English performance has increased really significantly. Historically, embeddings only worked really well for English. I think now you can use them across so many new languages, because they're just so much more performant across those languages. It's also about five times cheaper, which is wonderful. There's no better feeling than making things cheaper for people. I love it. I'm pretty sure you can now embed something like 62,000 pages of text for $1, which is very, very cheap. There are lots of really cool things you can do with embeddings, and I'm excited to see people embed more stuff.
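For a concrete picture of the flow Logan describes (embed a corpus, embed a question, rank by similarity), here's a minimal sketch. The tiny three-dimensional vectors are hand-written stand-ins; in a real app they would come from an embeddings model and have hundreds or thousands of dimensions. The cost check at the end assumes a price of $0.02 per million tokens and roughly 800 tokens per page, both of which are assumptions for illustration, not quoted figures.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "knowledge base": text -> embedding vector. In practice each vector
# would come from an embeddings endpoint, not be hand-written like this.
corpus = {
    "Hire for high agency and urgency.": [0.9, 0.1, 0.0],
    "Embeddings power question answering.": [0.1, 0.8, 0.2],
    "GPTs package context into a custom ChatGPT.": [0.2, 0.3, 0.9],
}

def top_match(query_vec):
    # Return the corpus text whose embedding is most similar to the query.
    return max(corpus, key=lambda text: cosine_similarity(query_vec, corpus[text]))

print(top_match([0.0, 0.9, 0.1]))  # picks the "embeddings" document

# Back-of-envelope check on the "62,000 pages for $1" figure
# (assumed price and page size):
tokens_per_dollar = 1_000_000 / 0.02        # $0.02 per 1M tokens -> 50M tokens
pages_per_dollar = tokens_per_dollar / 800  # assuming ~800 tokens per page
print(round(pages_per_dollar))              # 62500
```

The answer-generation step would then pass the top-matching passages to a chat model as grounding context, which is the "bring them into the answer" part of what Logan describes.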
What a deal. Final question before we get to a very exciting lightning round. Say you're a product manager at a big company, or even a founder. What do you think are the biggest opportunities for them to leverage the tech you guys are building, GPT-4 and all the other APIs? How should people be thinking about leveraging this power in an existing product or a new product, whichever direction you want to go? Yeah, going back to this theme of new experiences is really exciting to me. I think you're going to have an edge on other people if you're providing AI in a way that's not just accessible through a chatbot. People are using a ton of chat, and it's a really valuable surface area. It's clearly valuable, because people are using it. But I think products that move beyond this chat interface really are going to have such an advantage, along with thinking about how to take your use case to the next level.
I've tried a ton of chat examples that are very basic and provide a little bit of value to me. But really, this should go much further: actually build your core experience from the ground up. I've used this product that allows you to essentially manage or view the conversations that are happening online around certain topics and things like that. So I can go and look online: what are people saying about GPT-4? And what I just said out loud, what are people saying about GPT-4, is the actual question that I have. In a normal product experience, I have to go into a bunch of dashboards and change a bunch of filters and stuff like that. What I really want is to just ask my question, what are people saying about GPT-4, and get an answer to that question in a very data-grounded way.
I've seen people solve part of this problem, where it'll say, oh, here are a few examples of what people are saying. Well, that's not really what I want. I want a summary of what's happening. And I think it just takes a little bit more engineering effort to make that happen. But that is the magical unlock of, wow, this is an incredible product that I'm going to continue to use, instead of, yeah, this is kind of useful, but I really want more. Awesome. I'll give a shout-out to a product. I'm not an investor, but I know the founder. It's called visualelectric.com, and I think it's doing exactly this. It's basically a tool specifically built for creatives, I think specifically graphic design, to help them create imagery. So there's DALL-E, obviously, but this takes it to a whole new level, where it's kind of this infinite canvas where you can just generate images, edit them, tweak them, and continue to iterate until you have the visual that you need. It's not dissimilar to Canva.
It's even more niche, I think, for more sophisticated graphic design. That's the use case. But I'm not a designer, so I'm not the target customer. But I will say my wife is a graphic designer, and she'd never used AI tools. I showed her this and she got hooked on it. She paid for it without even telling me that she was going to become a paid customer. And she just started creating imagery of our dog, all this art. And now it's on our TV. The art she created is now on our Frame TV, and that's the image on our TV. So anyway, I love that. What was it called again? Visualelectric.com.
Anyway, anything else you wanted to touch on or share before we get to a very exciting lightning round? I've made this statement a few times in other places, but for people who have cool ideas that they should build with AI: this is the moment. There are so many cool things that need to be built for the world using AI. And again, if I or other folks on the team at OpenAI can be helpful in getting you over the hump of starting that journey of building something really cool, please reach out. The world needs more cool solutions using these tools, and I'd love to hear about the awesome stuff that people are building. I would have asked you this at the end, but how would people reach out? What's the best way to actually do that? Twitter, LinkedIn. My email should be findable somewhere. I don't want to say it out loud and end up with a bunch of emails, but you should be able to find my email online if you need it. But yeah, Twitter and LinkedIn are usually the easiest places. And how do they find your Twitter? It's just Logan Kilpatrick, or I think my name shows up as Logan.GPT, at OfficialLoganK. Yeah, awesome. Okay. And we'll link to it in the show notes. Amazing. Well, Logan, with that, we've reached our very exciting lightning round. Are you ready? First question: what are two or three books that you've recommended most to other people?
I think the first one, which I read a long time ago and came back to recently, is The One World Schoolhouse by Sal Khan. Incredible. Yeah, it's a lightning round, so I won't say too much. But it's an incredible story, and AI is what is going to enable Sal Khan's vision of a teacher per student to actually happen. So I'm really excited about that.
And the other one that I always come back to is Why We Sleep. Sleep and sleep science are so cool. If you don't care about your sleep, improving it is one of the biggest up-levels that you can do for yourself.
What is a favorite recent movie or TV show that you really enjoyed? I'm a sucker for a good inspirational human story. So I watched with my family recently, over the holidays, this Gran Turismo movie. It's a story about this kid from London who grew up doing sim racing, which is virtual race car driving, and through a competition ended up becoming a real professional race car driver.
And it's just really cool to see someone go from driving a virtual car to driving a real car and competing in the 24 Hours of Le Mans and all that stuff. I used to play that game and it was a lot of fun, but I don't think I have any clue how to drive a real race car. So that's inspiring.
Do you have a favorite interview question you like to ask candidates that you're interviewing? Yeah, I'm always curious to hear the thing that people so strongly believe that others disagree with them on. What do you look for in an answer that seems like a really good signal? Oftentimes it's just an entertaining question to ask in some sense, but it's also interesting to see what somebody's deeply held strong belief is.
I think that's it. And, you know, not to judge whether or not I believe in that. I'm just curious to see why people feel that way.
What is a favorite product that you've recently discovered that you really like? Continuing the sleep narrative: I have this really nice sleep mask from this company, and they're not paying me to say this, but it's called Manta Sleep or something like that. It's a weighted sleep mask, and it feels incredible. I don't know, maybe I just have a heavy head or something like that, but it feels good to wear a weighted sleep mask.
And I really appreciate it. I have a competing sleep mask that I highly recommend. I'm trying to find it. I've emailed people about it a couple of times in my newsletter gift guides. Okay, my favorite is called the Wawa sleep mask. I love everything about it. It's W-A-O-A-W.
I'll link to it in the show notes. It leaves a lot of room. It's very large, and there's space for your eyes, so your eyelashes and eyes aren't pressed on. And it just fits really nicely around the head. My wife and I are both wearing masks at night, and speaking of sleep, it really helps to sleep. Yeah, it doesn't have the weightiness piece, so yours might be worth trying. But everyone I've recommended this to is like, that changed my life, thank you for helping me sleep better.
And so we'll link to both sleep masks. Look at us.
Two more questions. Do you have a favorite life motto that you often come back to and share with friends or family, either in work or in life? Yeah, I've got it on a Post-it note behind my camera, and it's "measure in hundreds." I love this idea of measuring things in hundreds.
And it's for folks who are at the beginning of some journey. I talk to people all the time who are like, yeah, I've tried this thing and it hasn't worked. And if your mental model is to measure in hundreds, then the five times that you failed at something means you've basically tried zero times.
And I love that. It's such a great reminder that everything in life is built on compounding and multiple attempts at stuff. And if you don't try enough times, you're never going to be successful at it. I love that. I can see why you're successful at OpenAI and why you're a good fit there.
Final question. So I asked ChatGPT a very silly question: give me a bunch of silly questions to ask Logan Kilpatrick, head of developer relations at OpenAI. And I went through a bunch. I have three here, but I'm going to pick one.
If an AI started doing standup comedy, what do you think would be its go-to joke or funny observation about humans? I think today, if you were to do this, the go-to joke would be something like "so an AI walks into a bar," likely because, again, it's trained on some distribution of training data, and that's the most common joke that comes up. And I'm wondering, if you came up with a joke right now, whether or not it would show up in one of those examples.
I love it. What would be the joke, though? We need the joke. We need the punch line. I'm just joking. I know you can't come up with one on the spot. That's what we have ChatGPT for. We're all really irrelevant. Amazing. Logan, thank you so much for being here. Two final questions, even though you've already shared this information, but just to remind folks: where can folks find you if they want to reach out and ask you more questions, and how can listeners be useful to you? Yeah, Twitter and LinkedIn, Logan Kilpatrick, or Logan.GPT on Twitter. Please, please shoot me messages. I get a ton of DMs from people.
And it's always really, really interesting stuff. The thing that I would love to harp on is: if people find bugs and things that don't work well in ChatGPT, I oftentimes see people say, this thing didn't work really well. And I think we as OpenAI could do a better job of messaging this to people, but having shared chats or actual tangible, reproducible examples are the two things that we need in order to actually fix the problems that people have.
The model laziness was a good example, where it was kind of hard to figure out what was going on, because people would just say, God, the model is lazier. But it's hard to figure out: what were the prompts they were using? What were the examples, all that stuff? So send those examples as you come upon things that don't work well, and we'll make stuff better for you. Amazing.
And I'll also just remind people: if you're listening to this and you're like, oh, okay, cool, a lot of cool ideas for OpenAI and ChatGPT, what you need to do is actually just go to chat.openai.com and try the stuff out. There's a lot of just theorizing, but I think once you actually start doing it, you start to see things a little differently. And at this point, every day I'm doing something: asking for ideas for questions, doing research on a newsletter post. It's just a tab I'm always coming back to.
And I know there's a lot of people just talking about this sort of thing, and I just want to remind people: go sign in, play with it, ask a question, let it work on something, see how it goes, and keep coming back to it. Is there anything else you want to share along those lines to inspire people to give this a shot? I love it. There's this phrase about people being worried about humans being replaced by AI, and I've seen this narrative online that it's not AI that's going to replace humans. It's other humans who are being augmented by AI tools who are going to be more competitive in the job market and stuff like that.
So go and try these AI tools. This is the best time to learn. You're going to be more productive and empowered in your job and the things that you're excited about. So yeah, it's exciting to see what people use ChatGPT for. And then you can expense your account. I think it's 10 or 20 bucks a month, and a lot of companies are paying for this for you. So ask your boss if you can have it expensed, to make sure you use the latest version. Anyway, Logan, thank you again so much for being here. This was awesome.
I mean, thanks for having me, and thoughtful questions. Hopefully those weren't all from ChatGPT. Nope, only the last one. I did have a bunch of others I had in the belt, or in the pocket, I don't know the metaphor. In the back pocket, that's the metaphor. But I did not get to them because we had enough great stuff. So no, that was all me. Human. Thank you. Thanks, Logan. Lennybot, I love it. Lennybot.com, check it out. Okay, thanks, Logan. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app.
Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.