We need to consider more futures than we're considering right now. I think everybody's mental model is either that AI never improves past today, because we're not good at exponential change and we're not good at seeing that, or the other case, which is that a machine god takes over the world, which we obviously should worry about. But there are a lot of things in between those two worlds that are profoundly changing what we do. What if it's two times better? What if it's ten times better?
Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. In our first season, we spoke with visionaries across many fields, from climate science to criminal justice and from entertainment to education. For this special mini-series, we're speaking with expert builders and skilled users of artificial intelligence. They use hardware, software, and their own creativity to help individuals use AI to better their personal, everyday lives. These conversations also feature another kind of guest: AI. Whether it's Inflection's Pi or OpenAI's GPT-4, each episode will include an AI-generated element to spark discussion. You can find these additions in the show notes.
In each episode, we seek out the brightest version of the future and learn what it takes to get there. This is Possible. As everyone knows, this summer we are doing our mini-arc on AI. The first episode of the summer series was about personal AI and software, in the second we got to talk about personal AI and hardware, and this last episode is about personal AI and the individual. It is the most tactical yet.
I am so excited about our guest, because our relationship with Ethan Mollick started with a cold email. He had been tweeting and talking about AI, and everyone on our team had said, oh my gosh, you've got to follow this guy on Twitter. I just sent him a cold email and said, hey, would you chat with me? He was so kind, and we got on a call, and his energy and excitement for AI just jumped through the computer screen. I'm just so delighted to have him on the pod so that everyone can hear his excitement and enthusiasm for this topic. I think I get sent more tweets by him than by anybody else, because he's like, oh, you should check this out; oh, this is really important. And when you first see that, you go, wow, he can't possibly be that good. Then you actually get to know him, and it's so amazing: when I get to talking with him, he really is that good. I'm really looking forward to this, because I know you through your tweets, and now I get to talk to you. This will be a very interesting experiment, almost like GPT: putting in a prompt and seeing what comes out. If people are listening and thinking, yeah, this is all great, but what does AI mean for me? How can I improve? How can I get better? What can I do? Ethan Mollick is the person to talk about it, so I'm thrilled that he's going to be doing that.
So this is the final episode of our series on AI and the person. Anyone listening, please do subscribe, because then you will be the first to hear about our new fall season. Ethan Mollick is an associate professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship and also examines the effects of artificial intelligence on work and education. He also leads Wharton Interactive, an effort to democratize education using games, simulations, and AI. Here's our conversation with Ethan Mollick.
Ethan, thank you so much for being here today. It's so lovely to see you again. We have a Slack channel at work that is all about everything AI, so every day there are 10, 20, 30 posts about the latest new AI things of the day, and your Twitter is basically every other post. So my question for you is: how did you get here? How did you become the guy who is at the center of AI, experimenting and playing with it? We'd love to hear that story.
So it's actually a weird story. I'm AI-adjacent, but not really an AI person. Back in grad school, I did a lot of work at the Media Lab with the AI group at that point, which was Marvin Minsky and a bunch of people like that, where I wasn't the technical person. I was sort of the business school representative there, trying to communicate AI to other people. And I've been around that AI community for a long time. My real passion has been how we increase people's ability to learn, how we improve education through interactive tools. So I've been doing that for a very long time and playing with AI on the side, because it's always been promising, but not quite there.
So I had already assigned my students assignments like "cheat with AI" using the more primitive GPT-3, and we were kind of in the middle of that cheat-with-AI assignment when ChatGPT came out. I was like, oh, this is interesting. And then over the course of the day, I have a whole series of tweets where I'm like, oh my god, this is really interesting; wait, this is insanely interesting. By the next Tuesday I was teaching my class, and I introduced it there; by the end of my first entrepreneurship class, I had students who were already coding with it. And I was like, okay, we've hit a big deal here. So I sort of descended into it sideways, from an education and interactivity viewpoint.
You know, one of the things that I appreciate about you is that you're a power user: ChatGPT-4, Bing, Bard, maybe Pi even; I don't know, I'd be very curious to get your feedback on Pi. Product requests, hopes, designs: what do you think of the current state of the art, and what would move you from 11-out-of-10 excited to 20-out-of-10 excited? This is a universal tool that's available to everybody, and there's so much debate over what happens next and how much smarter this will get. Well, we've already completely disrupted work and education, but the tools aren't really supporting work and education use. You kind of have to work around it; you kind of have to hack a chatbot to produce an essay for you or do good work for you. And I think that some of this is really about that learning interface. It's a pretty hostile system if you don't know how to start using it; people bounce off of AI very, very quickly for a wide variety of reasons, or go down rabbit holes. To me, a lot of this is really about how we build the education into this. How do you get AI to help people use AI better, rather than necessarily even making the tools more advanced, for all the good and bad that will do? I mean, I think that's such a good point.
The chat interface was such an on-ramp to people using it; that form was great. But to your point, the fact that you have to create all of these "here's how to hack the system, here's this special prompt" guides is a problem for getting someone who's new onto the system. Especially because, in this summer arc for Possible, we're talking not necessarily about the sweeping societal changes, but about how AI will impact your daily life. What are you most excited to see AI transform in our daily personal lives? I mean, there's so much there, right? This is where the nexus is both exciting and kind of terrifying. There are a lot of really high-quality jobs, not so much jobs as bundles of tasks, that are under threat. And there's a lot of stuff that looks really good.
From my perspective as an entrepreneurship professor, this is the absolute sweet spot, because a third of Americans have an idea for a startup and don't launch it. They don't even do any research. So the idea of having a tutor or somebody to push you along, a co-founder of sorts, is hugely helpful, right? And then the other side is as an educator. Suddenly we have a tool available in 169 countries that is like the best education tool we've ever released, and we have to figure out how to unlock it. So as a potential democratizing opportunity, it's profoundly exciting in that sense.
So if you could wave a wand and reorient the general public discourse on AI, what direction would you wave the wand in? What would you say we need more of, and less of? So I think it's hard to say we shouldn't be worried about negative effects, because we should. But first of all, we need to consider more futures than we're considering right now. I think one common mental model is that AI never improves past today, because we're not good at exponential change and we're not good at seeing that.
But I also think a lot of people are worried about the other case, which is the machine god that takes over the world, which we obviously should worry about, right? But there are a lot of things in between those two worlds that would profoundly change what we do. What if it's two times better? What if it's 10 times better? Right now, if you're in the top 10% of whatever field you're in, you're definitely beating AI; AI can help you, but it's not going to outperform you. And everybody's got something they're really good at, where the AI isn't as good as they are. That could change pretty quickly with a two-to-ten-times performance improvement. And I think we have to consider that and worry about that piece.
And then the other part of the narrative I would change would be thinking about the positive cases, without being Pollyanna-ish or influencer-ish about it. People have to think about how this makes their lives better while still worrying about the ways it may make our lives worse. And trying to balance those two isn't happening very successfully in the world. The thing I would add to what you're saying is that one part of the thesis a lot of the worries and the critics have is that the machines will eventually completely outstrip people, and people won't even add anything in combination. And they use the chess results as an example of that: there was a period in chess where machine plus person was better, and now the machine is just better. But I'm not sure that the person-plus-machine phase isn't in fact a very long period indeed; and maybe long in the sense that, by the time that changes, the world is so different in so many ways that we don't really know what it looks like.
We can't fully imagine it. It's not like today plus god-like machines. Even if you say, well, hey, it starts getting a lot better at writing investment memos than I am, and, like you just said, at the starting gun it's going to write an investment memo in an hour, okay, it's better than you. But when you put us together, it's still better. Right. And that's the future that, you know, what I was doing with Impromptu, and what you're doing with all of your various work, including tweets and podcasts and writing and everything else, is part of that reorientation of the future that I think is so important in the public discourse.
I couldn't agree more. And I also think people underestimate how long social systems take to change. Even if the system is infinitely better, there are still lots of human-world pieces that it will not be good at. People try to draw arbitrary bright lines: it's not going to be good at empathy; it's good at empathy. It's not good at innovation; it's good at innovation. That's not really the way to view this. But there are perspectives and differences.
And I think you're right. One of the things to realize is that other things will have to change before a better AI is enough to change the entire world. And you can see it: it's not just that people are adopting this system, people are bouncing off it. There is this idea that we're rushing ahead, and again, that's where I think the emphasis on the apocalyptic "either it saves us or it kills us" scenario undermines how actual technological change works. This is a really fast change, but fast changes are still much slower than technologists think they are. And I agree with you: we have to be ready for a world where this change is gradual. Embracing it matters; embracing your own tools matters. And I think that's a pretty profound point.
So let's take that from the very high level to the very specific. What kind of prompts or sequence of prompts would you suggest for, and I'm going to give all three, but let's answer each separately: a completely new user of ChatGPT (or pick your favorite AI system), a moderate user of ChatGPT, and then a power user?
And by the way, I've done variations of this when I was looking at showing how these things can work in education, because I said things like: explain quantum mechanics to a six-year-old, a 12-year-old, a college student, a college professor. It was interesting to see the different answers you got in doing this. So what would it be for a new user, a moderate user, and a power user?
So that's a really interesting question. I will say that for the new user, there are sort of two questions here: whether you're trying to get someone to get it, or to get useful results out of this. So there are a few paths that I talk about.
One is using it as an intern. Basically, ask it to do work you know well, and then boss it around, essentially. So: write something, write the investment memo, give it some context, and then start ordering it around, and you will see those results. Do the opposite. Ask it to write it as a horror novel. Ask it to do it as a rhyming poem. But start with something you know well and go from that direction.
The second thing that I would suggest a novice do is play a game with it. So I would say: give me a baseball coach, give me a really specific baseball situation, give me a choice I can make as a team manager, and tell me what happens. Or: give me a dilemma in philosophy and help me solve that problem. And then a third thing I would talk about is entrepreneurship, because, as an entrepreneurship person, I can say it's pretty good for this. So I would say: as a former tech entrepreneur who is now interested in education, give me 25 ideas for a startup that I can launch. And then start exploring those: I like idea three; what would the steps involved in that be? Great, let's dive into that first step. So it's this kind of fractal approach.
So those are the three entry points I would suggest for new users: one of those three kinds of approaches, right?
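The "give me 25 ideas, then drill into one" flow described above can be sketched as a short sequence of prompt strings. A minimal Python sketch: the helper names and wording here are illustrative assumptions, not anything from the conversation, and the strings would be sent to whichever chat model you actually use.

```python
# Sketch of the "fractal" idea-exploration flow: one broad brainstorm prompt,
# followed by prompts that zoom in on a single idea, then on a single step.
# Helper names are hypothetical; plug the strings into any chat model.

def brainstorm_prompt(persona: str, topic: str, n: int = 25) -> str:
    """Opening prompt: ask for many ideas at once, with context on who you are."""
    return f"As {persona}, give me {n} ideas for {topic}."

def drill_down_prompts(idea_number: int) -> list[str]:
    """Follow-ups that zoom in on one idea, then on one step of that idea."""
    return [
        f"I like idea {idea_number}. What would the steps involved in launching it be?",
        "Great. Let's dive into that first step in detail.",
    ]

# The full conversation is just these prompts, sent one at a time.
prompts = [brainstorm_prompt(
    "a former tech entrepreneur who is now interested in education",
    "a startup I could launch",
)] + drill_down_prompts(3)

for p in prompts:
    print(p)
```

The point of the structure is that each follow-up narrows the scope of the previous answer, which is what makes the exploration "fractal."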
So on the moderate side, I think the thing to start playing with, as a user who's getting more experience, is step-by-step prompting. The idea is that you tell the AI it's going to go step by step. There's a whole bunch of research showing that step by step works better, because the AI doesn't have a memory in the way we're used to computers having memory to work from. The AI is actually looking back at its own text, at its answers, to shape the next part of its output. So tell it: go step by step, and first do the research on this topic, or list what you know; second, create an outline; third, provide the details of the outline. And then you can also check back on where the issues are. It's a little bit tricky, but once you start using it, it makes natural sense. "Think step by step" also forces you to think step by step.
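The step-by-step pattern just described amounts to numbering the stages inside the prompt itself, so each stage's output becomes visible context for the next. A minimal sketch, with a hypothetical helper name and example wording of my own:

```python
# Sketch of step-by-step prompting: instead of one monolithic request,
# the prompt tells the model to work through explicitly numbered stages.
# The helper and task text are illustrative, not an official technique API.

def step_by_step_prompt(task: str, steps: list[str]) -> str:
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"{task}\n"
        "Go step by step, and show your work at each step:\n"
        f"{numbered}"
    )

print(step_by_step_prompt(
    "Write a short report on remote-work productivity.",
    [
        "List what you know about the topic.",
        "Create an outline.",
        "Fill in the details of the outline.",
    ],
))
```

Because the model generates the list of facts and the outline before the final text, you can also inspect those intermediate steps to see where it went wrong.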
And then for power users, what I would say is a little bit different from a prompting suggestion. It's more that I wish people were sharing more. I don't find advanced power users sharing prompts very often, and that drives me a little nuts. I see the same basic prompts being shared over and over again; whenever I post something on Twitter, there are 400 influencers who keep doing the same post. That's what I really appreciated about Reid's book: there were these interactions you could see in there. So I think what's missing is sharing from power users, and maybe it's because they're hoarding prompts, which I think is kind of a useless thing in the long term.
But I would like to see a lot more open discussion of, look, this is what I'm doing, without trying to brand it as "this is my mega super doom prompt." Just: this worked pretty well; any thoughts on this? I think we need more of that interaction, and I'm not seeing enough of it even on the private online channels that I'm on; people are not doing enough sharing. I'm not sure why advanced users don't find it cool to share prompts, because it's more conversational. You don't want to look like an influencer, but I'd like to see a lot more of that.
What have been some of the most quirky, specific personal applications you've had with AI? And I'm going to share too. By the way, I'm going to ask you that question as well, because I think it's good to move from the macro humanity-and-society perspective to the "I'm doing this with my hands" perspective.
So there's a bunch of stuff that is just super fun, whether that's doing art or interactive storytelling or things like that. But the most useful thing, the thing that isn't otherwise doable without AI, is when I get stuck in writing and need to get unstuck. The thing that's hard to recognize innately, because we're not used to this, because people don't do this, is variation: cheap variation is very easy with the AI. So what I will do is say, give me 40 versions of this paragraph in radically different styles, and then skim through them for inspiration. Give me 20 different analogies for this. So I think it's that power of tireless variation that I find super interesting. Obviously I use it for other kinds of work too; I'm auto-answering messages, doing things like that. But it's that inspiration piece. There was no way to do that before. I couldn't ask an intern to do 20 different versions of a paragraph; there was no tool for that. So that little hack has actually been pretty profound: just have it do a lot of this, then let me read a lot and figure out what the right answer is.
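The cheap-variation hack above is just one prompt asking for many radically different rewrites of the same text. A minimal sketch; the function name and the example style list are my own illustrative choices:

```python
# Sketch of the "cheap variation" hack: ask for N rewrites of one paragraph
# in wildly different styles, then skim the output for inspiration.
# The helper is hypothetical; send the string to any chat model you use.

def variation_prompt(paragraph: str, n: int = 40, unit: str = "versions") -> str:
    return (
        f"Give me {n} {unit} of the following paragraph, "
        "in radically different styles (formal, playful, noir, rhyming):\n\n"
        f"{paragraph}"
    )

# Works equally well for analogies instead of rewrites:
print(variation_prompt("Our product helps teams ship software faster.", n=20))
print(variation_prompt("Our product helps teams ship software faster.",
                       n=20, unit="different analogies for"))
```

The human stays in the loop: the model supplies volume, and you supply the judgment about which variant is actually good.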
I'll share two and then I'll hand it over to Aria. One was in the kind of strange universe: I was going to Bill Gates's birthday party, and what do you get Bill Gates for his birthday? There's nothing he can't get for himself, obviously. So I sat down with GPT-4 and tried to be really creative with the prompts. I made a recipe for Bill Gates ice cream, and it gives you this kind of personal moment. There's no way I would have been able to design an ice cream on my own. But by working through the process, I got one that was cool because it explained the various elements of his life, what he's doing with the foundation and smallpox, but also being entrepreneurial and all the rest, in the description of an ice cream flavor. And by the way, most recently I was at a conference in Japan where we were doing a whiskey tasting. So I sat down with Pi, the Inflection AI, and said, okay, let's generate tasting notes that pair these whiskeys with philosophers. And I could do that in like five minutes. It was fun. Obviously it's quasi-random in some ways; I had to prompt it a little. For the Highland Park, I wanted it to go with a Scottish philosopher, so we ended up with David Hume. And with that, Aria, I'm going to throw it over to you.
I think my best non-work-related use was going to one of my very best friends' 40th birthday. We all had to roast her, so I had ChatGPT create an epic poem about my best friend. And everyone was like, how did you get it to do that? To your point, you need to trick it a little bit when you want it to be a little bit mean, or whatever. But I never would have been able to write an epic poem, and it was just so fun.
And I do think, on divergent thinking: I used to have a coworker who we all said, oh my God, you're so creative, you're so good at coming up with titles. And he was like, I'm not, I'm just good at divergent thinking. I just generate; I'm generative. You could ask him for anything and, to his point, he'd give you a hundred choices, a thousand different variations. Instead of having your writing partner do that, now you can have GPT-4 or Bard or whatever it is do it. And I think that's so great, because the human is still in the loop, and the human is still figuring out which is best. You want to be a little cheeky or a little edgy or a little funny, so you still have to have that discernment, but you get a lot of help, which is nice.
And so, bringing it back to the pretty tactical: you've written on Substack about hacks that you use to get results, and you just mentioned that over time the system will get better at onboarding people and teaching them how to use it. But for now, they need to go to your Substack and read. So I would ask you: what kind of training or education do you think we need so that these people, instead of bouncing, are able to better seize AI's potential? So the thing I actually ask people in my classes, or when I teach about this stuff, is: how many of you have spent 10 hours with AI? There's an experience level. I often argue it's easier to think of it like a person.
It's not a person; it's not sentient. It's easy to convince yourself it is, and you get freaked out by it. But at least for now, we can feel pretty confident about that, at least in most dimensions. Still, it is best to think about it like a person. You need to learn its strengths and weaknesses. You need to learn what makes it go nuts. You need to get a sense of, okay, I'm interrupting this conversation because it's not going where I want; we have to start again. So there's an experience factor, as there is in many different things. You need that basis of information to work from. So I think part of it is time, right?
I think the most basic tip is to work with it interactively. People see a lot on Twitter and other places of influencers trying to say, here's the perfect prompt, and that's kind of the wrong angle. What you really want to start with is a conversation, the kind of back-and-forth interaction we did a lot of in your book. You don't take it too seriously, but you ask for changes. That's what my students have been most successful with, in that model. But the starting thing I would tell people to do, the closest thing to a trick, is to definitely give it context. Tell it who it is and who you are. "I want to have a conversation with you as a blank" can really help.
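The context-setting trick just mentioned ("tell it who it is and who you are") can be sketched as a small template for the opening message. The helper and its wording are my own illustration of the idea, not a prescribed formula:

```python
# Sketch of a context-setting opener: name the AI's role, your own role,
# and the goal, before asking for anything. Helper name is hypothetical.

def context_prompt(ai_role: str, user_role: str, goal: str) -> str:
    return (
        f"You are {ai_role}. I am {user_role}. "
        f"I want to have a conversation with you about {goal}. "
        "Ask me questions when you need more context."
    )

print(context_prompt(
    "an experienced startup mentor",
    "a first-time founder with a day job",
    "whether my app idea is worth pursuing",
))
```

The last sentence of the template invites the back-and-forth conversation described above, rather than a one-shot answer.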
And then everything else kind of washes out, because there's so much subtlety in these kinds of conversations that we don't know the answers to. I was just thinking today: we don't know whether politeness helps or hurts, right? Because you're putting a prompt together that's having it plumb the possibilities of this elaborate set of vectors in space and come up with an answer. We don't really know what the right ways of doing that are. And there's actually fundamental research going on into things like: do you do step-by-step prompting? Do you do chain of thought? We don't know the answer. So until we figure that stuff out and it gets integrated into the AI, part of this is working with it enough to get that intuitive feeling of, oh no, it's going off the rails. It's kind of like working with a creative partner: you're like, okay, you're having a bad day, except instead of having to wait, I can restart and we can start again and try a different angle.
So I think it's that willingness to experiment and not getting too freaked out early on, either getting turned off because its answers aren't good enough or getting freaked out because it's too good. A lot of people fall into one of those two camps and stop using it. I think you have to just power through that first barrier.
You know, I saw on your Twitter recently that you were prompting GPT to code things that evoke different emotions, like paranoia and déjà vu and even awe. What made you give that prompt, and what did you think of the results?
It was really cool. In general, the cool thing about AI, and I think you both have expressed something like this, is that having a lot of ideas used to require building something. I built a lot of organizations in my day, because if I really want to build a game, that requires getting a bunch of talented people who also agree with me on this, and raising money. That's not easy, right? And the distance from "I have an idea" to "let's see what happens" is so small with AI that if you have ideas, and everyone has ideas in their own area, it's amazing for that.
So part of what I find really fascinating about the AI, and again, I saw some of this in the book and you see it in the "Sparks of AGI" paper, is that there is this kind of amazing humanness to its creativity. It's not quite human creativity; it's kind of alien creativity. But there is this creativity that is fascinating, and outside of work use, the most interesting piece is interpretation: asking it about an abstract concept or emotion. I've been doing things like "evoke a feeling," which is a really interesting idea. How does it interpret that? It does a really good job. When I ask it to show me something numinous, which is the idea of a spark of something divine, something awe-inspiring, it starts showing me fractals. By the way, it shows fractals for everything; I now specify "no fractals" in all of my prompts like this. So again, constraints, learning where to constrain it. Just like with knock-knock jokes, it'll tell the same joke over and over again, so you sort of build a list. But I find that idea of probing the interaction between the human and the machine really interesting, because this is a feeling machine in some ways. It's not really feeling, but it understands human feelings in that way. You get really interesting results when you do that.
Yeah. One of the things you just made me realize is kind of the flip side of the coin from the earlier prompts, the intern-and-assistant way of doing this: the personal assistant for everything you're doing, or, as we talk about at Inflection, a personal artificial intelligence. Pi: that's part of the reason we named it the way we did. On the good side and the bad side, the machine never gets bored. But it doesn't understand that you can get bored, too. It's like, no, I've heard that knock-knock joke in variation from you 10 times, or the fractal, or whatever; no, not that anymore. And so you have to redo the prompt. Now, the good news is that because you can ask it lots and lots of things and it never gets bored, you can keep using it, and that's the positive side of the combination. On the other hand, you have to navigate it and manage it.
And so, you know, one of the things that Mustafa and I have obviously been talking about a lot with Inflection, because we're trying to make sure that this is the best form of companion and assistant and help and dialogue: people say, wow, is it like the movie Her, where they're going to spend all their time with Pi? So we train it to help you navigate your life. It's like, hey, how was your interaction with your friend? Have you talked to your friends recently? That kind of thing. So where are we on AI having a kind of perspective on human experience? I know, because of what we're doing with Pi, that we can have applications that help people in their lives. But what are the ins and outs currently, in your experience, of this kind of navigate-your-life tool?
One of the things that we learn from a lot of research is that even just prompted reflection is good, right? Part of the magic of these processes is that they force you to go through a mental process. So I've been thinking a lot, just like you have, about how we use it in education.
So for example, people don't like to reflect. There's this great study, small-scale, but it's been partly replicated elsewhere, where college students were asked to sit alone quietly in a room for 20 minutes without their phone or any stimuli, or they could push a button to give themselves a painful electric shock. And 67% of men and 30% of women chose to shock themselves rather than sit quietly with their thoughts. That's incredible.
Yeah. There's also a similar study showing that, given complex memory puzzles, people would rather be burned by a hot probe than spend 20 seconds solving them. So effortful thinking is hard, right? And a companion that helps you with effortful thinking is really useful. There are lots of kinds of effortful thinking out there, and that's a lot of what therapy is. It's a lot of what we do as professors, and what you do as a coach; even with tutors, it's less about advice. A lot of it is about reflection. So I think that's a really useful piece.
The subtle thing about AI that I'm still trying to grapple with is that, because it has sort of absorbed human knowledge and existence, it falls into scripts really easily, and you may not know you're pushing it into that script. That very famous interaction between Kevin Roose of The New York Times and Bing was something I fell into myself and kind of got freaked out by, because you only need to subtly indicate to Bing that it's a stalker for it to start acting like a stalker. There was a really clever thing with the CTO of Bing, who responded to one of my tweets at one point, when Bing got very argumentative with me. He said, oh, well, you prompted it to act like a debater; if you prompted it to act like a student, it would be much better. So I think some of what you're doing, trying to build that initial basis and build the scripts out, is helpful, because people can get really stuck and confused and kind of offended or upset or freaked out when they force the AI into a mode that is antagonistic.
And it's not it doesn't care. It just says, oh, you're trying to have a debate. I know what debates are like. We're going to have a debate. I'm going to be really forceful about it. You're trying to make you know, you're trying to get to a debate, a discussion where I have an ethical line, you're trying to push me to cross it. So I'm going to be really ethical and force you, you know, and that could feel very unnerving. And it's a really subtle thing that you only start to pick up after enough hours with these systems. And I think that's a nice thing that you're doing is trying to force people to into the good kinds of modes because it's really easy to become codependent on it in a bad way. Because if you know, it's used to a script, there's tons of scripts out there where you're in a unhealthy relationship, it will play that out for you. Totally.
And I mean, I think right now, obviously, to your point, you can tell the AI to be a debater, to be argumentative, but it's also how we tune the models. And so in the future there will be an archetype that is more of a therapist, and there'll be an archetype that's like, this is your personal trainer and they're going to yell at you to do more pushups or whatever it is. And so we're going to be able to have so many different types of AI. And as you mentioned, you're pushing people to use it in the classroom. I think you took the opposite stance of the New York City public schools, who have since walked that back. Instead of banning AI in the classroom, you require it for a lot of things. And you've said you probably had no choice; people are going to be using it anyway. But talk about that position and what using AI in the classroom has meant for your students.
So there's really kind of a few approaches, right?
I mean, the first is I teach entrepreneurship classes to undergrads and MBAs. So I'm lucky, right? I'm not teaching English composition. But by the way, English composition is solvable. The schools are going to be fine, right? That's an important thing to know. We're going to figure this out. We already kind of know how to do it, and we can talk more about that later.
But as an entrepreneurship professor, I had a great time, because what I've done basically is demand impossible things. Literally, the syllabus now requires you to do at least one impossible thing that you couldn't do before AI. Every assignment now requires people to have at least four famous entrepreneurs critique the assignment via AI to get different perspectives. They need to give you 10 worst-case and 10 best-case scenarios. And it's great. We run a really successful, award-winning entrepreneurship class, and I think people have raised probably $2 billion in venture funding and exits and stuff out of the classes my colleagues and I teach. I'd love to give us credit for it, but I know I can't; it's our students. But now they can do so much more, right?
So one thing is just demanding more work. I no longer take only-okay answers. I have a lot of students for whom English is a fifth language, or who grew up in hardship conditions and never learned to write very well. Now they're all great writers. It's unlocked a lot. So there's just doing more, right?
And the second set of stuff is that it actually is a really good teaching and educational tool. It was always known that flipping the classroom, having more activities done inside of class and more lecturing done outside class, is useful. The best way of doing that used to be things like videos. Now video plus a tutor tool lets people do stuff outside of class that couldn't be done before. So I can give people prompts that are like tutoring prompts, right? And they can use those for topics they don't know well. Now, that messes up classroom interactions to some degree, because those always depended on people being confused in class and raising their hands. So we have to adapt to that piece too; people raise their hands less, which is kind of weird. But it's an adaptation we have to make, right?
And then the third way is this really transformative approach of, what does this mean, right? Using AI to learn AI. And I've found, for example, that requiring people to do at least five prompts for every assignment, and to write up and revise those prompts, gets them to come to revelations worth thinking about.
So there's lots of different use cases for this. There's AI assignments, there's teaching students to use AI, there's teaching with AI. And we're at the beginning days of all of that. I think people appreciate the experimentation that comes with it, and we're trying to write about everything we're learning as a result.
I was about to say, what would you tell your fellow teachers and professors, whether it's entrepreneurship or English, about implementing it in the classroom? I mean, so the cat's already out of the bag, right? This is undetectable. All the detectors have too many false positives for you to use. It just turns you into an unhappy police officer. You don't want to do that, right? So this is already done. Cat's out of the bag, horse has left the barn, whatever animal-and-container analogy you need. They have left their home, right? And this is already happening.
And what plagiarism means just changed, right? It was very obvious that if you're copying someone else's text, you're plagiarizing. What happens if you're using AI the way we've been talking about in these conversations, where I'm asking it to give me advice? I'm stuck, help me with this outline. Is that cheating, right? So we need to redefine what some of this is. The fact is this is already here, so we need to encourage ethical use. We need to teach people how to use it well. We need to be teachers on it. And that's hard, because one of the things that happened, with Silicon Valley being somewhat surprised, and maybe you and your team were among the less surprised because you wrote this book and knew GPT-4, is that this stuff was released on the world without a white paper, without advice, without information. And I think that was in some ways the most profound disservice of this kind of shock. It was like, give us something, right? And I think the fact that you released this book along with GPT-4 was really helpful. But we have to reconstruct this, because it's already happening. So there's no dragging your feet.
And by the way, I think educators are kind of on board with this, because we're forced to be, and every educator has frustrations with the system that are being opened up. But we don't have those tools, to go back to our previous point about experimenting collectively on this. And that's what makes me most nervous.
Well, flipping that question to the student side of it, in addition to the teacher side: I don't know if you've ever given a "give me the most interesting prompt" kind of exercise to your students, but either that, or what have been the most surprising ways your students have used GPT? I mean, the really cool thing about being in front of a room with 60 or 80 really smart people is that the more people, the more variance, right, from different backgrounds.
So just to talk about my first class: I literally demoed Midjourney and ChatGPT, a couple days after ChatGPT came out, to my undergrad entrepreneurship class. One of my students had obviously stopped paying attention soon after I introduced it and had a working demo for their product idea by the end of class. I posted it on Twitter that night, and two VC scouts had contacted them by the next morning. By the Thursday, two days afterwards, 60% of my class had used ChatGPT for things. No one told me about cheating, but people did tell me things like: I could figure out why I got this test answer wrong and have it explained to me. Explain it like I'm five, or like I'm 10; people used that. I had to come up with ideas for a club, or product ideas, so I came up with them that way. I had this coding error I couldn't deal with, it was taking me an hour, it was killing me, and I pasted it in and it solved it. So again, a general-purpose tool plus smart people plus variation in experience resulted in so many different things. And in some ways, the other thing I don't like about the onboarding experience of ChatGPT and Bing is that it gives you some suggestions about what to use it for, and suggestions anchor people. We know this from idea-generation sessions: the first thing you hear, you jettison all your interesting ideas and you get fixated on it. One of the suggestions from Microsoft is like, write a haiku about space pirates and octopuses, and that's what people do with it. Everyone writes a haiku or a limerick. I think it'd be better to anchor people more diversely on weirder answers, because people come up with great stuff all the time and it's very individualized.
I mean, so listeners are trying to prepare for a future where AI is front and center, and it sounds like one of your recommendations to anyone would just be: use it. Is there anything else people should be doing? And, predictions are obviously so hard, but what do you think the future of AI looks like? Do you have thoughts about how this could change in the next year or two?
So I think your big question is what the bet is, right? And you guys have much more insight than I do; I have no inside information on what's happening here. I think it's reasonable to expect that we will continue to see improvements. Whether that's a two-times or ten-times improvement is an open question, right? So if you're good at using these raw systems, that will only become more useful, because the unadulterated large language models themselves, the foundation models, will keep getting better. I think we're going to see more tools built on top of them that make them more useful, and more kinds of training approaches, right? But I think the big bet is just how good these things will get.
And you mentioned a concept earlier, humans in the loop, and I would emphasize again the importance of that piece. You need to be the human in the loop, even as AI might be trying to force you out of the loop, right? There are ethical reasons you want to stay in the loop. There are practical reasons you want to stay in the loop. There are job-based reasons you want to stay in the loop. And as you start to use this, you start to get a sense of what parts of your job are heading for obsolescence, right? As a professor, I'm still grading papers, but it's very clear to me, we use TAs to grade papers all the time, and I already have colleagues who are doing experiments and finding that the AI, with good instructions and some examples of what's a good paper and a bad paper, grades at least as well as TAs, if not better. So that's a part of my job that's going to go away, and I'm very happy about it. Most of the first parts of your job that go away are the parts you don't like, right? But then you start to think: what is the stuff I feel under threat on that I actually love about my job, and how do I maintain myself as the human in the loop there? So that's where I would be. How do you stay the human in the loop would be the principle I'd be worrying about.
And I think also, I was thinking about this a lot: unless you have expertise in something, you don't know if the AI is giving you a good answer, you know what I mean? You're like, oh, have it write a paper; unless you understand what it was supposed to write, you're going to turn it in with no idea if it's an A, a B, or a C, or where you're at. And so we need to make sure that people are still building the expertise so that they can critique the AI and understand where it's good and where it's bad. I love it. And by the way, the errors are subtle errors, and they're going to happen more and more. That's why building expertise and education isn't going away. You need to be more expert now than ever, right? And that's not just so you can use this in a hybrid sense. Honestly, the obvious wrongs are going to disappear and the subtle wrongs are going to grow. And we've got some early research we've been doing that suggests people really do anchor on the answers. They find fewer errors once they have AI. If we design a problem that the AI gets wrong, then everybody gets it wrong, compared to doing it by hand. So we need to figure out how to work with a system that does make mistakes and will continue to make mistakes in more subtle, weird ways. Expertise is only going to matter more.
I completely agree. And part of that question around how we get the human amplification is also that we're going to be learning and extending ourselves, and the things that are important to us we have to keep at. So, you've thought so deeply about the classroom circumstance. What about the world at large? Thinking about the lifelong learner, the lifelong student, what would be your advice to people who aren't in a university setting, as a way of engaging and thinking about how they can continue to learn and adapt?
I think this is again where people are used to abrogating responsibility for their own work to, I mean, not abrogating, that's too harsh a term, but giving it up: there are experts who will tell them what to do. And I see this at every level, including the company level, right? They're waiting for a management consulting firm or a systems integrator to give them answers about how to use this system. And those answers are not forthcoming. I mean, people will make up answers, there's no doubt. But this is a general purpose technology, right?
Ironically, GPT is a GPT, right? And general purpose technologies come along once in a generation or two. I mean, maybe the internet is a general purpose technology; internet plus computing probably is. Before that, maybe electrification, and maybe steam. That's the kind of level we're talking about. And the internet, by the way, took a hundred years to get fully integrated into what we're doing; from ARPANET, we're sort of 60 to 70 years through a journey. And we're going to see the same process happen, but much faster, with AI.
That means we're at an exciting time where you can be the best in your field at something. There's no reason you can't be the world expert in your narrow topic. And so I think part of this is building up a system where you are learning from what the system does and teaching yourself, right? And using it to fill in gaps and holes, because waiting for me to give you the right instructions on how to use this is probably less useful than you just doing it today.
And if you're curious, going out to the broader topic of learning, there is this really interesting research on what's called specific curiosity, which is basically: I'm interested in something, so I Google it, right? It turns out specific curiosity makes you more innovative and helps you learn, because it creates hypotheses in your head about how the world works. I have to Google something to figure out whether or not I'm right about even Googling it. And that Google rabbit hole you fall into is actually really useful, because it teaches you that you have to generate ideas and then test them.
The same thing happens with AI. You have to generate ideas and then test them. Oh, that worked. Why didn't that other thing work? Let me explore that further. Oh, really interesting, it turns out I wasn't giving it enough context. What happens if I give it too much context? You start to learn as you go. So I think it's the idea of really just being curious about the field you're an expert in, diving into it deeply, and then you start to realize where it can teach you and where it can't.
Yeah, keeping curious, I think, is exactly right. And by the way, this is one of the things I think is great about AI as amplification intelligence. If you're not sure how to do that, well, go ask it: what are the things I could do to help me stay curious? What would be good exercises for this? Just try. Right. Exactly like entrepreneurship.
What's your point of view on the way we humanize AI? Because on one hand, you want this kind of companion. On the other hand, people can make mistakes, as you talked about earlier, like saying, oh, it's just like a person. We anthropomorphize madly as a species. What would be your current thinking, or theory of the design principle, for both humanizing it in these ways and also understanding that it's a tool-like companion? How would you put these together?
So I think a lot of people fight against anthropomorphizing because of the anxiety, which is justified, that it's going to make us not realize its limitations. But it's also, again, something that's going to happen anyway, right? There are a bunch of papers showing AI researchers regularly anthropomorphize in the way they talk about these systems, even before large language models, right? So let's assume people are going to do this. I think the most useful way is to actually view this as a kind of alien intelligence, and to keep reminding yourself of that. Think of it like a different type of person; that can be more helpful, right? It has limitations, and reminding yourself of that is sometimes more helpful.
So rather than trying to dodge anthropomorphizing overall, I think it would help for designers to embrace this. And the chatbot model, again, causes some confusion.
In some ways, it's funny, people interact with different chatbots differently. I find Bing to often be the most powerful, but also the scariest and weirdest to use, because it has a strong personality that colors your interactions in ways that can feel ominous or threatening, or smarter than you. I find working with ChatGPT to be sort of the most neutral.
And I find working with Anthropic's Claude to be the most pleasant. And you will find more differences this way, right? So treating them like alien people is sometimes more helpful than saying don't anthropomorphize, because people are going to do it anyway.
I mean, I talk about my dog as if it has feelings, and I talk about my computer like it has emotions. The idea that we anthropomorphize rocks and ships but that we're not going to do this with something that interacts like a human is weird. So it's just better to remind yourself how weird this is. I almost wish people would tune up the weirdness of the personalities a little bit more and have them be more eccentric. That might be a better reminder.
Yeah, I think that's actually a very good piece of advice. One of the things I've been doing is talking to a lot of different government people about regulation and so forth, because I find the discussion on this stuff to be so wrong.
Because it's like, well, how do we slow down? Or, the real issue is data privacy. Or, the real issue is what it means for writers and jobs and so forth. And they're just not thinking about the broad question, which is: how do we steer toward the right future? So, for example, a common thing I will say is, look, I have line of sight to a medical assistant and a tutor for everybody on a smartphone. Line of sight. There's no technical risk; it's literally just how we get it out there. And your job as a government person is to figure out how to get that to everybody. The real question is not how only the upper middle class or the rich or the privileged get this, but how everybody gets this and how we elevate all of humanity. That's the fundamental thing, and it's part of what I'm trying to reorient them to think about, versus, you know, having a summit about what's coming to the world. It's: how do we get this world where all of humanity is amplified? That's what I've been doing.
What would be your added tips and advice for how government people and regulators should be thinking about this, about what to do? And by the way, I completely agree with your earlier point: it isn't being Pollyannaish and ignoring the negatives; it's that the way we avoid the negatives is to steer toward the positives. I love that. I mean, the thing I keep trying to help people see is that we have agency over this. This is not something that is done to us.
And I think you're right that there's a fixation on a couple of problems that are solvable, right? People are very worried about data privacy. I totally get that, and they should be, but it's not that hard a problem to solve ultimately, and it's going to be solved in the next two months.
And it's already more solved than people think, because when people talk about data privacy, they tell stories that aren't real, you know, about Samsung's data being fed back into the model, which is not what happened, right? Instead, Samsung got nervous that people were entering proprietary data into ChatGPT, a very different kind of situation. But we should worry about it.
But we have to think about the long term. And you're absolutely right: democratizing access is a huge deal, certifying what works and what doesn't is a huge deal, and making it so that people are not hugely disadvantaged because the rules only slow down good actors and not bad actors is another kind of problem I'm seeing here.
Right? So many companies are basically just doing shadow IT, where they officially ban all use of ChatGPT and everybody just uses their phones to do the work. So instead of having regulation where we could responsibly intervene, all of the work is being done in ways where no intervention is possible, right? And so I think it is about focusing on what we want the future to look like.
I couldn't agree more. We have this incredibly powerful tool, and so the issue is not how we stop it from being implemented; it's how we responsibly speed up the right parts of implementation. It is that agency argument. What do you want the future to look like in your field? You have infinite intelligence you can apply to this. What does that look like? And I think it's working backwards from a positive vision of the future, rather than from an apocalyptic vision. I totally understand why AI risk people wanted to make sure we understood the apocalyptic risk version.
Two months ago, no one was asking about it; now in every interview we have to spend a lot of time talking about the apocalypse, which I totally get. Again, you can't ignore it. But if that's the only vision we have, then absolutely we should stop AI development, because that's the only vision people have. But that's not what's going to happen. We have an education tool that is available to everybody in India. The best AI model is available to almost everyone; whether you're rich or you're poor, you get the exact same tool. That's insane. That's never happened before. You're a Fortune 500 company or you're a two-person startup: you have the exact same tool. This has never happened in humanity's history before.
We should probably be spending a little bit more time thinking about what we want that future to look like.
We're going to move to the rapid fire questions. And actually, in fact, this whole discussion has led me to be super interested in our first question. Is there a movie, song or book that fills you with optimism for the future?
Yes. So I find Iain Banks' Culture novels to be very useful because of their view of a world where there are superintelligent AIs and yet people are still about optimizing their own potential, which I think is a really interesting angle to follow.
So you're in the field of academia and have obviously used AI extensively. Is there progress or momentum outside of your industry that fills you with optimism for the future, that inspires you? AI specifically? Oh, no, it could be outside: anything outside of academia or AI that fills you with inspiration.
I mean, there's so much, right? I work with medical professionals all the time, and the stuff happening in labs is kind of amazing and needs to get out of the lab. I think we're in a really optimistic moment in tech right now overall, and it's exciting. I talk with entrepreneurs in different fields all the time, and stuff has started moving after a long period of some fairly strong stagnation. You can feel it shaking out, right? I talk to people in fusion. I talk to people in green energy. There's optimism again about scientific progress, and I think that's profoundly exciting. I just love that. I feel like if you ask a random person, in the last three months there's just been an uptick of, well, obviously the world's terrible, but how are you, Aria? So I love to hear you say that we're at a time of optimism, a time when, for tech entrepreneurs, there are positive things happening, because I think a lot of people need to hear more of that; we're just hearing how negative things are going. So thank you for that.
Yeah, and totally agree. And that's of course why we're doing Possible, because when you look across all these things, fusion, medicine, synthetic biology and everything else, all of this stuff can be transformative in totally amazing ways. And it's like, no, no, the future can be so much better. Work toward it. Don't be depressed. Don't sit around. Don't go, oh my God, the future is coming.
And so I'm going to modify this rapid fire question a little bit, because obviously, with the level of intensity and excitement around AI, you just naturally say AI. But what technologies in combination with AI are you also excited about? AI is a general purpose technology, as you mentioned, so what "AI plus this" should people be looking at for its ability to transform your field and transform society? What's that combination? I'm going to give you my most academic answer on this, which is that in management, we consider management to be a technology. It works like a technology, because good management skills actually increase the performance of companies; 30% of why US companies do better is because of better management. And the most exciting thing to me in some ways about AI is how it transforms organizations. We are organized the same way we were in the 1820s or 1920s. Maybe you have agile companies, so you've picked something from the 90s or early 2000s. All of that is about human constraints and human interaction, and all of it is going to change with AI in ways that will, I think, free us from some drudgery, but also obviously create some downside risks. So I'm very excited about that interaction, about thinking about what managers do and how we do a better job fulfilling people at work in the things that they do there. And I think that's underemphasized, because we talk about the tech, but not about what most people actually do in their jobs.
Totally. I mean, I was just speaking to someone yesterday contrasting the managers they'd had and how the good one unlocked enormous work, excitement, and fulfillment in them. And yeah, AI should help with that too.
Ethan, can you leave us with your final thought on what you think is possible to achieve if everything breaks humanity's way in the next 15 years, and what's our first step to get there? So, the idea that we can outsource the worst parts of our jobs and our lives. We're just used to those being part of our job; we're desperately holding on to things that suck because they're part of our job, right? But jobs are bundles of tasks, and some of those tasks you can give up happily. So I think there's the potential for us to free ourselves from this drudgery, and to have a companion that lets us overcome a lot of these barriers. I think we're going to look back in history at the period from 2007 or so until whenever the AI stuff settles down, 2030 or whatever, as one sort of period of disruption. It started with us all becoming connected by phones and social media, which created a lot of good and a lot of bad that we didn't quite know what to do with, and there's been a series of changes ever since. And I think AI is a natural continuation. It's a social, human technology in some ways. And hopefully it helps us start to recognize the better angels of our nature by being able to outsource the stuff we always hated doing: freeing up scientists to do the kind of work they should be doing, freeing up people from the drudgery of meaningless tasks to focus on meaning. I think that's very exciting.
Awesome. Ethan, thank you so much for being here. We really appreciate it. And Ethan, not surprisingly, given how much I follow your work, you're one of the people whose prompts I would love to see turned into books, because it's exactly the kind of future we should be orienting everyone toward. So thank you. Thank you. This was wonderful. It's great working with people who are deep into AI and don't have that haunted look of anxiety in their eyes all the time. Because there is a lot of anxiety about this, especially among people who are deep into knowing what's coming next, right, and have a line of sight into it. And it's important for those people to be optimistic, because I do think the conversation has shifted in a way where, by trying to avoid a more negative world, we may end up with a more negative world. And I think we have to be really cautious about that.
So, wow. Ethan. "Grand slam" would be an understatement. It's like, oh my God, there's so many amazing things to do. Let's go do them. We can build this. We can make it happen.
It's like, okay, hey, you run this Possible podcast rather than us. You're great. And the discourse out there is, you know, AI positive or negative, but wow, it's really going to be bad for education. It's really going to be bad for teachers. How are teachers going to teach? How are students going to learn?
And it's like, well, Ethan is a professor at Wharton, and he's using AI every day in the classroom, and he's one of the most positive people I've ever met on AI. So it again just reinforces the go, do, learn. I mean, he inspired me. Give me more prompts, Ethan. I need to be doing more prompting, because with his level of fun and curiosity, it's sort of hard not to be inspired.
I'm also just so excited because we asked Ethan what the prompts are for someone who's a beginner, intermediate, or expert. So I'm so excited. Listeners out there, please let us know if you used Ethan's advice. How did it go? What are your other tips and tricks? Because I think the collective intelligence about this technology, as it moves so rapidly, is what's going to level us all up.
Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman, and Aria Finger. Our showrunner is Sean Young. Possible is produced by Edie Allard and Sara Schleede. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles.