How do you even develop machines that are going to be really helpful to a certain population? For example, having these robots for kids with autism, and therapists were worried. They said, you're trying to replace therapists. Not at all. But then they actually saw it from the other side. They said, oh my goodness, this is great. I spend like one hour or two hours a day with a particular child, and then I want them to practice things at home. And so isn't it grand if you can have this robot that ends up playfully doing this stuff? So that's the vision. The vision is that it's complementing the ecosystem of human support, which is never perfect.
Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. In our first season, we spoke with visionaries across many fields, from climate science to criminal justice and from entertainment to education. For this special mini-series, we're speaking with expert builders and skilled users of artificial intelligence. They use hardware, software, and their own creativity to help individuals use AI to better their personal everyday lives. These conversations also feature another kind of guest, AI. Whether it's Inflection's Pi or OpenAI's GPT-4, each episode will include an AI-generated element to spark discussion. You can find these additions down in the show notes.
In each episode, we seek out the brightest version of the future and learn what it takes to get there. This is Possible. So excited for the second episode of our summer arc on AI, and especially because we've been talking so much about large language models, and how people used to talk about robotics, but now everyone's only talking about software. Not true. The guest today is an incredible researcher and practitioner who is talking about AI and hardware, which means robotics. While of course we live in the digital world, and so with the rapid evolution in large language models, ultimately a lot of our lives are embodied. And a lot of the things we need to learn and need to do are embodied. And part of what's particular about our selection of today's guest is the understanding that embodied doesn't mean like an arm robot, but actually something that is there with us and in fact focuses on things like social interaction and human assistance.
And first, everyone talks about LLMs, not robots. And then no one's talking about this really important part of robots that I think is an important theme. It's not all of robotics, but it's a super important theme to add strongly to the discourse. And that's part of the reason why we couldn't be more delighted to talk to Maja. Maja Matarić is a renowned computer scientist, roboticist, and professor at the USC Viterbi School of Engineering. She is founding director of the USC Robotics and Autonomous Systems Center and co-director of the USC Robotics Research Lab. Maja, we are so thrilled to have you here today.
You are known as a pioneer in the field of robotics. Can you talk about your earliest memory of seeing or engaging with a robot? And like, what about that moment made you want to pursue robotics?
Well, my path to robotics is actually, I would say, unusual in the sense that I'm not one of those people that tinkered in the basement. We didn't have basements. I come from the former Yugoslavia; we lived in a high-rise. So there was no option for a basement. Instead, I came upon the path because when I was in college, AI had just started to emerge. It was at that time in what was called an AI winter. But I thought it was really interesting. But for some reason, even from then, from my college days, which now seem like they were in the dark ages, what I really was interested in is behavior in the real world. And so I was interested in psychology and people, but I was also interested in AI. And if you think about bringing together a lot of aspects of AI and the real world and people, well, that's really robotics, because robotics is AI in the physical world around us. And I thought, oh, I came upon it and I read up on it. And I really read books and stuff. And I thought they were super boring. I'll be very honest about that. And I did not see my first robot really until I was in graduate school, looking around at possible labs. And it was the robotics lab of my former advisor that caught my eye, because they had the funnest robots. So that's why. So a very unusual path, no early dreams of this at all. Oh, but I love it.
I think you said you were interested in the real world. You were interested in how to help people. You were interested in having fun. We shouldn't forget that those are things that can lead to amazing careers in STEM and computer science and AI and all the things. In fact, what I would love to tell people is that there is no standard path. And even though I think often in the media in particular, we get this kind of typical path showcased, like, oh, you know, I've been dreaming about this since I was a child, in fact, I think it's perfectly lovely to dream of many things and arrive wherever you arrive and keep moving. And so I think it's not at all necessary to have thought about this before. My dad was an engineer, so I have to say I probably had some early influences, and it turns out I'm really an engineer at heart. But how that was going to manifest itself? I didn't know. And it's still unfolding. And that's a great thing about just learning continually. So no, no strict paths, people. Come on. Let's keep it open.
Love that. Yeah, indeed. And, you know, part of it is to think about the actual interesting space of robotics because, you know, probably most of our listeners are open to robots in home and workspaces, but, you know, kind of not primed for the full range and some of the stuff that you really do, which we're going into. You know, pop culture shows us doomsday scenarios; killer robots go all the way back to the eponymous Terminator. So why do you think people are more fearful of physical robots than a non-physical AI? How does that affect how you do your research, your work, what kinds of things you're doing?
You know, it's really interesting that you would say that people are more fearful of physical robots, because all studies show that when you take people who have not interacted with robots and you put them next to real robots, people are curious. We project our internal expectations onto machines. If the machine behaves in any even remotely lifelike way, no matter what it looks like, it may even be just a boring box, but if it moves in a particular way, we're going to be interested, we're going to be engaged, we're going to expect it to be intelligent, emotional, intentional. And so in fact, in reality, people aren't scared. Now, of course, in the media, there have been a lot of portrayals. I used to just call it so boring. And I've worked with really fun folks in the entertainment industry. And I would always start out by saying, please don't lead with killer robots. So boring. Come on, be interesting. So I think the media will tell us that people are afraid of robots. But in reality, when people are confronted with a physical machine, we're actually quite interested. We are a very social species, and we're social with machines as well. So that is the truth.
So that actually tees me up for my next question so well. You pioneered a subfield of robotics called socially assistive robotics. Can you tell us about that space and how it fits into the broader landscape of robotics?
I am always happy to talk about my field. So of course. So when I started out in robotics, of course, when you start out, you work on stuff that's out there. And so my first work was on getting a robot to navigate. And people are still working on robots navigating, although we're pretty darn good at it. We have autonomous cars, so the field is moving along.
And then I worked on teams of robots, because I always liked social interaction. And I got a team of little robots; we called it the Nerd Herd. And I had 20 because I was very fortunate that my advisor could make that happen. And so I was herding these robots; it was crazy.
And it was really only after I had kids, or at least my first kid, that I started to think about not only what I'm doing, which is fun and intellectually curious, but really why. Kids will do that, or kind of the real world and life will do that. And so I started thinking about why am I doing this, and I wanted to have a really good answer when my kid asked me, not just what do you do, mommy, but why do you do it?
And I realized that why had to be more than I get a lot of papers published and people think it's nice, but who are these people? I want to be able to tell my kids that I make robots to help people. And so I then really started to look at how can we actually make robots to help people, and that's hard. And I looked at rehabilitation robotics, I looked at people who needed physical help right after an injury or a disability.
And that was really hard. It was a field that existed, but I really wanted to do something that people would have in their lives continually, because even back then, I had the sense that people are on different paths all the time, and we need help occasionally, if not all the time, different kinds of help. And I don't mean physical help, I mean really sort of emotional support.
And as I was working through that, I realized that, oh, and this is 20 years ago, you know, AI and robotics was about ready to go into a field where you can create machines that will be with you in your home, even every day. Not rolling around your home, not manipulating objects in your home. We're still not there even now, 20 years later, but we can have machines that will be there and support you on your journey of the specific thing you need, whether you're trying to teach a child with autism social skills like eye gaze, whether you're trying to help an Alzheimer's patient just exercise their brain and talk to someone, or a stroke patient to just do the boring daily rehab exercising. That's the niche that I wanted to get into. And you know, it's great, because it's very easy for me now to say, hey, mommy makes robots to help people. So I got there.
Well, yes, for sure. But this is a great setup to one of the things I think is really a key part of your work that made us really excited to have this podcast and talk to you, which is: how is that helping people? The interaction, the tracking of eye gaze, how does that help us become more human? Like, what's in that helping? It obviously does help in tasks and in navigation of the home environment or other kinds of things. But what's also the way that it extends our humanity?
I am so glad you asked. So first of all, my first premise, which I arrived at by talking to a lot of stroke patients and their families, and families with kids with autism, you know, we spent a month and more in homes with kids with autism collecting data. So, learning from what people said they needed.
So there was a very loud and clear message from everyone that people wanted to have a sense of purpose. They wanted to have a sense of ability and autonomy like, you know, I am who I am because of all of these things that I can do. It's difficult to know who you are if you don't have a sense of anything that you can do and feel good about doing.
The thing is that when people are empowered, they also start to think outward. And so we have had robots, for example, that just behave empathetically to the user and in that way, make the user more empathetic. And the interesting kind of positive cycle that gets created there is, you know, studies show that if you're behaving empathetically, you're actually healthier. You get a health benefit from being a nice person. So, you know, even if you're totally selfish and don't even care about anyone else, you should still be nice because it benefits you. But as it happens, it also benefits other people. So, you know, we're wired to be these social helpful empathetic entities.
And so when we create these machines, we're very conscious that we're not just trying to help one person with the exclusion of others. But we're saying, you know, look, look, the robot is modeling good behavior for you, positive behavior, helpful behavior that empowers you, which makes you the kind of person who does that in return.
I know it sounds kind of earthy-crunchy, but we are really completely based on the literature; we read a lot of behavioral economics, behavioral science, social science, and neuroscience so that we can understand what works here in the human head, and what makes us humans feel better and behave better. And then we implement that on our robots. That's fantastic. I love the conversation about empathy, and it's interesting how it sort of spans hardware and software. And everyone always says, to your point, the best engineers and the best product managers are empathetic. They understand their users, the consumers. And so when we're thinking about empathy with the robots, like if these are empathetic-acting robots, what relationship should the robots have with empathy? Should they feel it? Should they understand it? Should they recognize it? Like, how does that work on the robotic side? In addition to, of course, helping humans to be more empathetic themselves.
So, empathy is a really interesting challenge because, first of all, if you look at the science of empathy, there's disagreement there already. So in the neuroscience of empathy and the cognitive science of empathy, some people will say empathy is what you feel, right? So I feel empathetic towards you. Other people, like Simon Baron-Cohen, who is a well-known neuroscientist from Europe, have written a lot about empathy being actually what you do. So it's about the behavior, not about what you feel. That's really important, because that means if we believe that empathy is how we behave, then robots can be empathetic, because robots cannot feel. There's no feeling. Why can't they feel? They don't feel because they don't have the mush that we have. You know, they don't have neurotransmitters and they don't have hormones and they don't have all these other things that make us feeling creatures.
We've done a bunch of studies where we've, for one, created robots that are empathetic, and that's very easy. Others have done this as well, right? You can basically have a robot that says things like, oh, I'm so sorry you feel that way, and I know how you feel, and I've been through that as well. So you can make a robot appear and behave empathetically and therefore, according to Baron-Cohen, also be empathetic. What's more interesting to me is not how a robot should be empathetic, but how a robot can get a person to be empathetic, because as we know from science, again, if the person is empathetic towards themselves, right, self-accepting, and towards others, their health outcomes will be better, and, you know, they will be a better person to be around. So it's altogether a good thing.
So we've been really working with what the robot should do to make you empathetic. And it's really interesting. We've done some studies with relatively pathetic robots, like really needy and kind of like, oh, I'm failing. Oh, I'm failing again. It's very funny, and it's amazing. People are just really helpful. Frankly, I was surprised to find out that a kind of snarky, funny robot was very unpopular, and that kind of just blah, "I have failed again. My cameras are not receiving information," was better received. And then the really needy one: oh no, I can't, the force is not with me, I'm banging my head against the wall. That robot got the most help for the longest time. So I don't want to overgeneralize. I think if that were in your home, you might grow tired of it. Sure. But if you're just encountering a robot in the real world and it's needy, it turns out that is actually better received than a very transactional, cold robot. Or one that's really like, oh, well, I guess I lost my leg, but whatever. Right. So it's very interesting how empathy, I'm just going to use Star Wars terminology, is strong within us. And it is something that we can evoke and we can reinforce.
Well, I love it. I think that part of the notion of the human condition, the human elevation, is we also like to feel wanted, like to feel needed, like to feel that we're important. And that's part of the work you've done and are on a path to. One of the things that I think is also super interesting here, because obviously the huge amount of public discourse is around the software of AI right now, you know, LLMs and such: what are the things that are particularly important for the hardware side to contribute?
Well, thank you for asking about the hardware side, because right now, you know, the world is so embroiled in AI, which is purely the software side, that we're kind of pushing this aside; robotics is almost still fringe. But the reality is, it's the physical embodiment. It's the physical manifestation of intelligence in the world.
And as we know, how people appear and how we physically behave has a huge impact on how we relate to one another. Every little bit about how we design and build a robot is important. And that's why it's so hard. So first of all, you know, there's the basic stuff about safety. And I don't even want to talk about that because we already know robots must be safe. Okay, I mean, that's a given, right? And that's still hard. And that's why we still don't have robots all over the place, but we're getting there. And that's why the robots that my lab works with are small and safe and often soft, because we want to make sure safety is not even an issue.
But after safety, now begins the hard part. So what should it look like? How tall should it be? It turns out just how big the robot is has a huge impact on how you psychologically perceive it and how you respond to it physiologically, without even realizing it. If it's more than about three quarters of your size, it's going to impact you physically differently. And so you'll be reluctant at a certain level. You can accustom yourself to it, but you'll be more reluctant than if it's smaller.
How does it look? Does it look like an animal? Does it look like a biologically existing creature? Does it look like a human, or does it look like nothing at all that's really familiar to you? This is incredibly important. I'm engaged in conversations, both with companies and obviously in research, about, you know, what would you want a robot to look like? And it really depends, right? If you start talking about humanoid robots, then you have this huge load of human expectations.
If you build something that people really have to get used to, and they have to get over certain things, well, you know, that's not good design. Good design is that it has the metaphors that evoke just the right expectations, that you lean into and really enjoy. So that's why the design is really important, and it's very wide open. So now it has an open field to disappoint you or, you know, engage you and endear you. And that's a lot of expectations on that robot.
Like, wow, but if you do it right. And so my favorite, favorite example is WALL-E. WALL-E is just, you know, one of my favorite robotics movies. Actually, it is my favorite robotics movie, because here's a thing that looks like, what is that? Like some kind of old rusty mechanical garbage collector, but it's so completely endearing and you've just got to love it. Now that is good design. Whereas in comparison, in the same movie, you have EVE, which is egg-like. So eggs, we get; that's sort of biological, cool. It's also kind of Apple-white plasticky. Nothing warm about EVE. Nothing. Yeah, she's very out there. Oh, high tech, but, you know, not a hugger. But you want to hug WALL-E, and you know he smells, but you still want to hug him. So that's what I mean by good design. And, you know, those are creative people who drew animation. But in the end, when we start designing robots that will really be wonderful for people, that's the kind of creativity that we need.
No, I mean, it's so interesting. Like you said, it's, it's obvious, but still so critical. Like what is the design of this robot going to be? And so let's like go down a level.
If you're an average listener of the Possible podcast, how could their lives look different if they were using, you know, socially assisted robots on a daily basis? Socially assistive.
Oh, socially assistive robots on a daily basis. Just to explain that. So the idea is that we were looking at assistive robots, assistive robots to help people, usually people who really need help, right? So I don't mean like, yeah, it's going to go fetch you a beer. You should do that yourself. But socially assistive is that they're assistive. They're helping you. They're assisting you. And they're doing that socially, through social interaction, rather than physically.
For example, if you've had a stroke and you can't reach something, instead of it reaching for you, what we would like to say is, can we, if at all possible, get you to reach it yourself? That's what we'd like: I'm going to give you grit and support and make you feel better.
So here's the idea. Imagine that, you know, you're getting up in the morning and you'll have some challenge. Maybe you've had a stroke, and so a part of your body is disabled. Maybe it's just your dominant arm, right? So you're supposed to exercise; you're supposed to do things like reach for your coffee maker with your stroke-affected limb.
Well, that's going to be really inefficient, and it's going to look like crap, and it's going to demoralize you every time, because, you know, you'd just like to be the self that you were before the stroke happened. Now, maybe you're fortunate and you have amazing people in your life 24/7 who are going to be like, you can do it. And by the way, they're not going to enable you by reaching for it instead of you, because if everybody gets it for you, you will forever be disabled. And if you use the other arm, you'll forever be disabled.
So you have to fight your brain to get better. Who is going to be there to constantly support you and sometimes say, that is fantastic, great job? Or, you dropped it? You know what, you tried. That's better than not trying. And sometimes, when necessary, will say, okay, really? Really? Are we going to sit now again? Come on, get off the chair. Come on, let's go reach for that thing.
So you need a coach, and people in your life are not necessarily always available or able. They have their own stuff that they have to deal with. So the idea is we want to have this companion robot that's going to help you through what is these days called the journey. So there's technology that can help you, you know, get out the door.
Now we've also worked with kids with autism and they would want to understand things like, is this person interested even in talking to me? How can I tell? And so they can have this companion robot to talk them through it, to practice social gaze, to practice like, okay, look at me, but now don't keep looking at me because that's creepy, occasionally look away. Where do you look away? Doesn't matter. Just look somewhere and then look back and they can practice because unfortunately other kids are not going to practice with them.
So you can imagine, you know, elderly people with Alzheimer's: they're lonely, they're isolated, they could be staring at some screen, whatever, and they can look at pictures of their family, but, you know, for how many hours a day? So this is a thing that can talk to them, and talk to them about their family, and always be there, always be pleasant, always be happy, never get tired. It can tell jokes, just the right vintage of jokes. We've done that.
The point is, in everyone's lives, right, there are many challenges, and there's a lot of expectation that other people will help us with these challenges, while every person has their own challenges. And so here the idea was, we want to create these socio-emotional but physically embodied companions. They're not lovers. They're not friends in the sense of, this is not your friend, although people sometimes perceive them as friends. So they fill that certain niche in people's lives, but they're not replacing therapists. They're not replacing friends. They're not replacing teachers, because they can't. That's not their purpose.
Right. Absolutely. It's so interesting. Last week we spoke to Mustafa Suleyman, who is building a personal intelligence through software, not hardware, and talking about AI and how, to your point, this is not a replacement for humans. This is for the millions of people who, to your point, don't have someone for them 24 hours a day.
And so you hit on this a little bit, but can you talk about why it's important for the physical embodiment of a robot to be there as opposed to just a screen or just a speaker? What is the importance of that actual robot versus just the software?
So there's been a very long debate, you know, by people who are not in robotics and also people who are not in neuroscience, about why we need physically embodied companions and social partners. And if anything is going to demonstrate to us why, the after-effects of the quarantine during the pandemic will. So we will see effects in early child development, in teen development, in adult isolation. We're seeing all of that. We can be fully connected through various social media, fully connected through screens and video conferencing.
And yet what you're seeing is extremely increased rates of anxiety, depression, isolation, child development delays. Kids who missed one or two years by not being around their peers, you know, their social development is two years delayed. This is happening because we humans evolved to be social creatures. So from day one, we need to look, you know, if we're sighted, we need to look at human faces. We need to see the smiles. We need to see the crinkled eyes, the Duchenne smile.
We need that for feedback. And you know, back in the 60s, there were these wonderful experiments, or maybe slightly cruel and sad experiments, where they took baby monkeys and put them with artificial monkey mothers, some that had wonderful fur on them and others that were just metal but had a bottle of milk. And what did the baby monkeys prefer? If we were just transactional creatures, we'd just go for the milk. But no, all the monkeys went to the furry mother, even if they were starving.
We humans are fundamentally social creatures. We need social support around us. And by that, I mean really around us in the physical world. And there have been, I would say, thousands of studies in science to show this.
So we actually did a meta-review. This is something we do in science, right? We look at all the studies and do a summary of it all statistically. And they show basically side-by-side comparisons. If I take a human of any age and have them compare a screen-based interaction with a real robot interaction in the real world, the real robot interaction is going to make them learn more, retain the information longer, and report enjoying it more.
I mean, I have a friend who's an occupational therapist at a local elementary school. And I can imagine so much augmentation, amplification, positive things happening if she had a robot in her classroom to help with some of her students.
We've actually found that, after the initial stuff of talking about, for example, having these robots for kids with autism, therapists were worried. They said, you're trying to replace therapists. Not at all. But then they actually saw it from the other side. They said, oh my goodness, this is great. Because I spend like one hour or two hours a day with a particular child, and then I want them to practice things at home. And either the parents have to do it, and parents have enough, you know, parents don't actually want to practice therapy with their kids. They'd like to just be parents, but they don't have the luxury of being parents, right? So now they also have to be therapists. So isn't it grand if you can have this robot that ends up playfully doing this stuff, and then parents can be parents? So that's the vision. The vision is that it's complementing the ecosystem of human support, which is never perfect. As a parent, I will second that notion.
So this brings us to our story generated by ChatGPT. It's about a family with three generations under one roof. And it spotlights three family members. There's Mr. Johnson, an 80-year-old grandfather who needs help with his medication; Lisa, a 40-year-old mom who needs support with preparing to repair a roof; and eight-year-old Ethan, who needs assistance with his homework and getting to soccer practice. And so this robot steps in to support each of their needs.
And so first of all, if you hate this story, that's fine. I didn't write it. You can critique it. But I want to ask like, what did you see in this story that you were like, oh, that's interesting. That could happen. That could be in our future. Or what was wrong, what seemed promising, what seemed way off. Like what are your reflections?
Actually, I love the story. And in fact, you know, it's interesting, and in some ways not surprising, that this vision comes out, because among other things, I was part of a recent grant proposal in which we had a very similar vision, except it wasn't necessarily one robot, because the state of the art now is that you will not, any time really soon, have a robot that can do many physical things. But it can certainly, in terms of intelligence, talk to various different people. As long as it can uniquely recognize you, it will be able to help you and talk to you.
And so I think that's very likely and very realistic and very needed. And so we really wrote a grant proposal in which we literally came up with a vision of a family, very much like in the story. So, you know, you have the busy mom who is trying to take care of her elderly parents, but also her kids. And the kids are maybe having one of these very, very common issues now, right? They might be suffering from anxiety, you know, all this kind of stuff, or bullying or something like that. So I would say the story is spot on.
And then, so what is the solution? The idea of having one robot, you see that in the movies, you see that in a lot of, you know, literature, and it makes sense. It's kind of the butler notion that people had, or a maid notion. I'm not a huge fan of those, because it puts a tremendous burden on this one entity, maybe a robot. But think about it: what if three of us are in the house at the same time and we all need something now? You know, who gets to go first?
Right. There's actually a whole debate about whether we are going to ultimately be creating this other race of servants, right? And I prefer to think of it as your buddy. And so it seems much more likely to me that the kid will have a buddy that they can play with, and what the mom will have is something else, maybe kind of an assistant entity. So I think there are different versions of who fills what roles. But what does the future look like? Maybe people will be filling these roles and their robots will be doing something else. And I think it's just really important to keep thinking about this, and not just drill down one path because, oh gee, we can, and not consider possible outcomes really, you know, a bit long term.
I love all that. I mean, this is part of the reason why I co-wrote Impromptu, this idea of human amplification and such. I'm actually quite bullish on the fact that we'll always figure out things to do, because even if the robot was doing all the manufacturing, we'll go out and play pickleball, or do other kinds of things, because of that human-to-human connection. It's also why Mustafa and I... I'm very curious if you've ever played with Pi, and what your take is, because it's the same thing. It's kind of like, how does it help you in your life versus draw you away from it? And I completely agree with the whole embedded, keep-you-engaged-in-your-life idea. Have you also thought about this in VR and AR? Because there, obviously, there's been an ongoing discussion of the metaverse and other kinds of things. I'm curious if you've thought about that environment as well as the real-world one, and what your reflections are between those.
Indeed. You know, early on I made this point that there's a big difference in how our brains perceive interacting with a screen versus interacting in the real world. Now, if you go into virtual reality, and the virtual environment and immersion are really well done, then you can almost trick your brain, so it feels like you really are interacting almost in a physical world. But you don't have touch, which matters, because touch is incredibly important. I mean, the lack of physical touch is part of the loneliness epidemic that we have, actually. Our brains are really wired for this physical experience. We want touch, we want smell. But I do think the metaverse is coming no matter what; it's just a question of how soon, and what economic bumps will happen along the road.
The issue is that our physical bodies are not just vessels. They are how we experience the world. The science of embodiment shows us that. That's why things like mindfulness work so well: when you're in the moment and you're experiencing things, you're happier. Lo and behold, how come? Well, that's what we're built to do. We're built for experiences in this physical world. And so that's why I think there are a lot of wonderful things you can do in a metaverse kind of virtual reality. The things that excite me about it are, for example, you could teach tolerance, right? If you want to understand what it's like to be an elderly person, you can be put in an environment that's immersive, and maybe put on a suit, and you can really feel like you're an 85-year-old. In 30 minutes, that is going to make you understand, and possibly be a better engineer for people who are 85, than any amount of books you can read. I'm excited about that. People have done virtual reality training for understanding the climate crisis, just putting yourself in various places and experiencing it. Fantastic. We can really expand our space of experiences. And yes, we can democratize experience, where now everyone can go to Everest, right? And if everyone can go to Everest in VR and no one goes in person, well, maybe that's better for Everest; let's protect Everest. But some other things, right?
If you never get out of your house, that's a loss. Anyway, it's actually really exciting. There's a whole new field that just arose in the last two years. And I just know because one of my PhD students, Tom Greschel, was literally doing the work. That's how we luck out. We professors seem to know everything, but really we just have a lot of smart students.
So there's a new field called VAM-HRI, which stands for virtual, augmented, and mixed reality for human-robot interaction. It's really interesting, right? What I love is augmented reality. Augmented reality is the idea that you can put on lightweight goggles, like glasses, and your perception of the world is augmented. We could be augmented in a shared way, so you and I can now have a shared world, and we can be in this world, but this world is also much more interesting because we have the shared world between us. Now, that's exactly what's happening with humans and robots.
A human user can wear these lightweight glasses and see things that the robot can also see. And the beauty of that is that we humans experience the world in much richer ways than robots do. But when we create this shared, mixed reality world, the robot can experience so many more things. For example, we had kids playing with physical robots in a shared world in which there were these floating code blocks, and they were coding. They were moving the blocks around, pushing them and shoving them and throwing them. Can you imagine? That's pretty fun. How fun is coding, usually? Not this fun. These kids, fifth graders, were having a great time. But the most important thing is that later we tested them on their coding skills, and they were way better coders.
By integrating play and freedom into what they were doing, they were not afraid. They were more creative, more curious, and they learned way more. So this is just an interesting way to think about learning in this augmented world with companions. For example, you go to school, you interact with your friends, you do all the stuff, and then you come home and you interact with your learning buddy in this interesting augmented world, and you're not missing out on the physical experience, but you get this extra layer. So I think augmented reality is going to do a lot to improve our experiences without leaving our full brains behind. I worry about that in complete immersion.
One of the things your answer reminded me of, as kind of a follow-up, is that you're one of the very few people I know who, when asked about the metaverse, goes immediately to: here's how we can increase empathy. Empathy for older people, et cetera. And it reminded me of one of the questions I had for you, which is: what learnings are there for builders and designers about how to really increase empathy? What would be a couple of bullet points, like, here is what's really important for getting this empathetic interaction?
That's a really good and hard question. I'll try to get it right, but there are bigger experts on this. At least I would say two things have been shown to work well. One is, and we all know this, listening, right? Asking a lot of questions like, how do you feel? How did it go? And then not solving problems. Empathy is all about: tell me about you. It's about you. And the other part is using feeling-oriented language.
This was actually very surprising to us. We ran a study to see if it was okay for robots to talk about feelings, because remember, they don't have any. We can pretend that they do, but they really don't. So is it okay for a robot to say, I know how you feel, when it really doesn't? I was actually surprised, and this is good, because we as researchers should be surprised. If I'm never surprised, then it feels like I'm biasing my studies. I was surprised because I thought that if a robot said to me, oh, I know how you feel, I'd be like, do you really? Because I don't think you do.
But actually, people like it when a robot says, well, how did that feel? Or, I know how you feel. We were dealing with a group of users who were suffering from anxiety, and with users who were grappling with recovery from cancer. They actually liked the robot companion referring to understanding their feelings. So feelings-oriented language really comes across as empathetic and is well received.
And people often think, intuitively, that if you have an agent that is consistently supportive, then you will get bored of it and won't like it. But think about the humans in your life. Oh my gosh, I have this parent slash friend who is always supportive. I'm so bored with that; I don't want it anymore. Right? Who ever says that? So the point is, if you have an empathetic agent that is consistently empathetic, it just cannot be rote repetition. If it is meaningfully empathetic, people do not get bored with it. Everybody needs support. Totally.
Oh, Maya, we could talk for so much longer, but we want to get to our rapid-fire questions. You actually already mentioned some movies that I also love, but is there a movie, song, or book that fills you with optimism for the future? So, WALL-E, actually, I'm going to go back to that. WALL-E does fill me with optimism, because WALL-E does a lot to show some bad things that people can get themselves into by not planning ahead, and then the way that we are infinitely malleable as a species. We can do better and be better, and so can the machines that we create. So I'm going to stick with WALL-E. But if you want another robot movie, which is more fun than optimistic, I do like Robot & Frank. That's an often overlooked robotics movie, and I think it's quite well done: really great acting, a great understanding of what robots are like. It has a scene in which two robots come together and, instead of some kind of taking over the world, one says, I'm operating at expectation level. And the other one says, me as well. And then they just go their separate ways. I thought that was good.
Where do you see progress or momentum outside of your industry that inspires you? Yeah, I'm actually really, really excited. I feel like if I had taken a different path, I would have loved to have been in bioengineering, this intersection now of biology and engineering, where we're looking at everything from, on the one hand, prosthetics (okay, that's obvious, that's even closer to what I do) to restoring vision, restoring physical ability, and this whole continuum where we're going from genes to cells to physical ability. So even gene therapies and things like that, I'm extremely excited about. And I think there are areas there where AI will really make huge impacts. That gets into the whole area of personalized medicine, where we're going away from one-size-fits-all, this "oh my God, we ran this trial, and now we have to use this," to understanding you as a human so thoroughly that we can not only help you with a specific issue that you're having, cancer or something horrible like that, but also predict and anticipate and hopefully prevent. That's huge. So the field of medicine is just so exciting. I could see an alternate reality in which I do that, but I'm good with what I'm doing. I love that. Personalized medicine is so fascinating and could make such a huge impact.
And so, our final question: can you leave us with a final thought on what it's possible to achieve in the next 15 years, if everything goes humanity's way, and what's the first step to get there?
I thought about this, and it worries me. It worries me because I think it's very rare that everything goes humanity's way for all of humanity. I want to be optimistic, but I'm a bit concerned about the particular place we're in with AI, because it's going to disrupt the economy in a massive way, and it could be really positive, but it may not be.
So I would really like us to do some serious thinking. I'm not necessarily saying we pause, but I will say one thing about the step you asked about. Here's the thing we should do: the big tech folks, OpenAI, Google, all the other folks, need to not just say that they welcome regulation. They need to tell us what needs to be regulated, because they know best.
It is not the job of academics, and it is most definitely not the job of politicians, because they have no clue. The people who are creating the technology need to be responsible for also suggesting specific regulation. I understand this will be biased, obviously, but everybody's biased. It is their responsibility. I know that on the inside there are a lot of really responsible folks who care, but they aren't proposing what should be done. I want them to work with us in academia, because we can't do this alone.
So I think if we do that now, if we think very hard about how to put proper guardrails in place, driven by the very people who are creating the systems, that's when we can end up somewhere much, much better. The step in the right direction is government, academics, and industry all working together to be inclusive and think about everyone in AI. I'll take it.
In terms of vision, and I don't think it's 15 years, but in terms of this grand vision, there's always this discussion: oh well, people need to be taken care of, and people will take care of people while technology does all the other things. The part I don't know is, there's a big trench between where we are now and that. And I don't know how we get through that trench, right? Because you cannot just take 60-year-old people who have worked in, let's say, food delivery with trucks, and suddenly make them caregivers for people with Alzheimer's. So what do you do? How do you transition to eventually get to this other world, in which it would be fantastic to think that we have a lot of leisure time, we're taking care of one another, and machines are doing all the crappy stuff, and then some?
And so I want us to think about that trench: how are we going to get through it? If we can figure out bridging it, then we can get to the other side, which is going to be really awesome, I hope. I love it. Figure out how to transition to more care.
Maya, thank you so much for being here. It was eye opening and I loved hearing about robots. Really wonderful.
Oh, thank you. And thank you for asking me such wonderful questions. I don't get to talk about this often, so this is great. That was super exciting. It isn't often that you talk to people who are very deeply engineering-sophisticated, who are solving problems like engineers, and the engineering problem they're focused on is empathy, and the amplification of humanity through it. As you noted, Aria, on the pod, we could have talked to her for another hour or two; we completely lost track of the time. And to your point, she literally was using the same words as Mustafa. It was so interesting to hear someone working on software and AI and large language models talking about empathy, how we can help people such as therapists, how we're definitely not replacing humans, how this is an addition, a complement. And Maya was saying the exact same thing; she was just talking about hardware and robots, and about how we can have them present in our everyday 3D lives. And I thought it was especially interesting when she wove in AR and VR, and how, when you look at people's brain scans, that actually does give the same stimulation at times as in-person experience. So that could be such an iteration on the field of empathy and helping folks out, having therapists and coaches and all that. Well, that's definitely one of the things we're going to get, both from her robotics and from the various AI chatbots, Pi and others: we're going to actually be learning the real typology of empathetic interactions, compassionate interactions, understanding interactions. And we're going to begin to understand this in a much broader way.
And, you know, I think her neuroscience point was kind of as simple as: empathy is as empathy does. I thought that was a very important lesson to remember. I mean, if you'd asked me before this episode, can a robot be empathetic, I would have said, really not, because that has to do with intent, that has to do with, you know, XYZ.
And it's like, right, well, actually, all that matters is the person who's feeling it. If you are a stroke victim and this robot can perform tasks that are empathetic to you, then that's incredible. And also, she talked about this so many times, it's like the classic teach-a-man-to-fish tale: for certain things, we're not doing anyone a service when we're just doing everything for them. We certainly know that when you have five-, six-, and seven-year-olds, but it's also true as people age, or whatever it might be.
And how do we use these robots to help people help themselves? How do we use these robots to amplify what everyone wants to be doing? Again, take away the drudgery, but keep the stuff we want to be doing, for our own independence. How wonderful to have someone right there with us, helping us along on that journey. And it makes sense that she's starting with those most in need, whether it's children on the spectrum, or people who are injured or experiencing some kind of disability, maybe in recovery, because that's obviously the most important place to start.
I was super interested in how that work also broadens out to the average Susie and Joe. One of the questions we had on our long question list was about commercialization and scale: how does she take that from the lab, from helping a few folks, to broad adoption, and who are the future customers? So we'll have to have her on the pod again, because in a year or two we've got to hear how that scaling is working. I'll be interested to watch the space.
Me too. Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman, and Aria Finger. Our showrunner is Sean Young. Possible is produced by Edie Allard and Sarah Schlee. Jenny Kaplan is our executive producer and editor. Special thanks to Dennis Collins.