Welcome to the Index Ventures AI Summit. I'm Brian Walsh, the editor of Future Perfect at Vox.com, and I'm very happy to be here discussing the present and future of natural language processing with two of the smartest people in the field. Sam Altman is the CEO of OpenAI, which brought the world the GPT-3 model, among other innovations. Kevin Scott is the Chief Technology Officer and Executive Vice President of Microsoft. So let's dive right in.
2021 was really another landmark year in NLP, as models like GPT-3 continued to mature, and we're going to see some real commercial applications arising in the space. Given that, what are you both expecting from NLP in 2022, both in the collaboration between your two companies, potentially, but also in the larger industry? Sam, we can start with you, and then Kevin.
Sure. GPT-3 is a model that we're still somewhat embarrassed about. We think it shows the promise of what's going to happen here, but it's still extremely early days. I think 2022 will be a year where we see the language models get good enough for a very broad swath of business applications. The market did a good job of finding where GPT-3 is strong enough, but these models are getting more robust, and we're able to do new things, like have these models do a better job of following humans' instructions, preferences, and intent. Instead of getting a great result one out of 100 times, you get it every time. I think we're going to see that the applications are immense for what people can do with these models. So I'm excited to see 2022 be the year where natural language goes from this incredibly promising glimpse of the future to a technology that we depend on for lots of things.
I so totally agree with everything that Sam just said. The thing that I will add is that for a while we've been both hoping and expecting that these large models would start behaving like proper platforms: that you could invest in the training once and then be able to use the model very broadly across a huge number of applications and use cases. We saw that more in 2021 than we ever have before, and I'm really excited to see that trend continue in 2022. I think things like the Codex model that OpenAI built and GitHub Copilot are good examples of what I mean by a platform model, where you're leveraging all of the great work that went into building the model to do something that surprises folks when it actually manifests.
Well, just to drill down on that a little bit in terms of those commercial applications: Microsoft introduced GPT-3 functionality for its cloud customers this year, and OpenAI opened up GPT-3 somewhat more as well. You mentioned Codex, and that's a great example. What are companies, maybe outside those sectors, actually using these models for? Is it for customer service, or for something else? What sort of uses jump out now?
Well, first of all, to echo what Kevin said about GitHub Copilot: almost everyone I talk to who uses GitHub Copilot says something like, "I cannot believe that I used to work without this kind of tool. This has become so important to what I do, and I'm so dependent on it." I think that's just going to keep going. That happens to be my favorite example because I think it's so amazing.
But I think existing areas like Copilot are just going to get better and better, and then we're going to see more and more areas where the AI tooling that people use as a platform for everything else they do becomes an incredibly integral part of their workflows. I can give a bunch of specific examples, and I'm happy to do that if you'd like, but what I would say is that there's a general trend I'm finding most interesting right now among all the different successes people are having.
Search, copy generation, automated A/B testing, really good classification, customer service, whatever you want. The trend, this idea that the way we're going to interact with computers, and with most of AI, is natural language as the interface: this is super exciting to me. And I think a thing that you're seeing a lot among people who are deploying GPT-3 very effectively is that what people actually want is some version of the Star Trek computer. You tell the computer what you want, and it goes off and, nearly perfectly, does it. You maybe have some dialogue back and forth if you realize, oh, I actually wanted this other thing, or the computer says, "I didn't quite get what you mean, can you specify?"
That would be a great computer interface, and it hasn't really been possible until now. Now, across many different applications, you are seeing people start to just talk to their computers about what they want, and the computer has enough real intelligence and understanding to go off and do that.
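As an illustration of the "natural language as the interface" idea Sam describes, here is a minimal sketch using the OpenAI Python client as it existed around the time of this conversation (the pre-1.0 Completion API); the engine name, prompt format, and helper function are assumptions for illustration, not anything from the talk.

```python
# Minimal sketch: plain-English request in, model response out.
# Assumes the pre-1.0 openai package; engine name is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_computer(request: str) -> str:
    """Send a natural-language instruction and return the model's reply."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine name from that era
        prompt=f"Instruction: {request}\nResponse:",
        max_tokens=150,
        temperature=0.2,  # keep answers fairly deterministic
    )
    return response["choices"][0]["text"].strip()

print(ask_computer("Draft a polite reply declining the meeting on Friday."))
```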
To build on what Sam said: yeah, I think it's this whole idea that leveraging the full power of digital technology, your computers, the cloud, can be made better by having a dialogue with your technology about what it is you want it to do for you. It's a really powerful idea, and you see it concretely with GitHub Copilot. You are in this task of programming, which is inherently about telling a computer, in very specific terms, exactly what you want it to do. And now you have a way to get past all of the technological arcana that is involved with programming, which makes it an inaccessible set of capabilities for most folks; you have to have a particular mindset and go through a lot of training to figure out how to program a computer on its terms.
And I think with these language models, and with things like Codex, where you can have a natural language conversation with your tech to tell it, hey, here is the set of things I would like you to do, and just iteratively describe those things, it's a really, really powerful conceptual shift in how we've been used to using computing technology for the past many decades.
So I think that point Sam made is a really important one, and just to reiterate what he said, the places where we're seeing the greatest success are where entrepreneurs and creative thinkers are taking these models and figuring out how to do that in a whole variety of different domains and use cases. Ultimately it's not just about Codex and Copilot; it's about all of the places where you can have these dialogues with your technology to get it to do complicated things for you.
You both mentioned Codex and Copilot a few times, and Sam, you said you've talked to people using them now who say, "I can't believe how I was doing this work beforehand." Is that a herald of what it will be like for the rest of us, those of us who aren't programmers but who use computers for work or research where we need to gather information? Is that an example of how the rest of us will ultimately be working with these models as they continue to mature?
I think so. There are a bunch of reasons why coding is a really good environment, and why we took it on first. It has a lot of advantages: there's a lot of training data, you can evaluate what's right and what's wrong, and there's some structure to it. But I hope that, for example, graphic design at some point goes the same way: instead of talking to the computer to create code, you're talking to the computer to create an image that you want. And there are lots of other tasks like this.
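For readers who haven't used Codex or Copilot, here is a hedged sketch of the "describe the code you want" interaction Sam refers to, again against the Completion API of that era; the engine name and prompt format are assumptions.

```python
# Sketch of a Codex-style request: a natural-language description of the
# desired program becomes a code suggestion. Engine name is an assumption.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_code(description: str) -> str:
    """Turn a plain-English description into a Python code suggestion."""
    prompt = f"# Python 3\n# {description}\ndef solution():\n"
    response = openai.Completion.create(
        engine="code-davinci-002",  # assumed Codex engine name
        prompt=prompt,
        max_tokens=200,
        temperature=0,  # deterministic suggestion
        stop=["\n#"],   # stop before the next comment block
    )
    return "def solution():\n" + response["choices"][0]["text"]

print(draft_code("return the ten most common words in a text file"))
```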
Yeah, my wife is a historian, and when we first met she was doing archival research, so her job was to go to these archival facilities in Germany to find 400- or 500-year-old documents that had the information she needed. It was this process where you go to the information and get a whole bunch of experts to help you retrieve it. With technology like this, you can imagine having greater access to the sorts of things that used to require a whole bunch of hand work and labor. And the thing that's really exciting to me, again going back to this programming paradigm, is that programming is a thing that only a specialized subset of the human population can do, whereas if you think about what you're doing with your computer as teaching it how to do things for you, teaching is something that even a toddler knows how to do. So I do think that this mode of interacting with your technology means that it becomes way more accessible to everyone, and that's my genuine hope: I just really want more people doing more complicated things with tech.
Sam, you mentioned graphic design as an example, and obviously other products that OpenAI came out with over the last year, DALL·E and CLIP, both involve multimodal learning. Can you talk a little bit, and Kevin, you too, about the importance of that approach, going beyond just text, seeing how text, images, video, and other kinds of media can be learned together in a contextual way, as a way to make these models smarter and more effective as well?
Yeah, text is super powerful; language is super powerful. There are many people I really respect in the field who think you can get all the way to AGI just with language, and clearly it's such a compressed format of information, so rich. There are hugely valuable, useful things you can do for people just with language, but it's not everything. If you really want these systems that we're going to build to be maximally useful to people and do all the tasks they'd like them to do, I think you do need to understand and be able to create visual stuff, audio stuff, and way more. So I think it's important that we push to make multimodal models as good as the text-only models can get.
Yeah, the other thing to think about is that there are more mathematical domains these models may be applicable to as well. One of the really interesting trends over the past handful of years is that people in scientific disciplines are beginning to apply machine learning models to do things like simulating computational fluid dynamics systems or doing finite element analysis: things where typically you've got nonlinear partial differential equations, or some set of hard combinatorial optimizations, where you're constantly making tradeoffs among time scale, resolution, and accuracy of the results just because the computations are so hard. We're beginning to see these models getting built for some of these domains, where they can help predict molecular structure or do quantum-accurate simulation, and that to me is also really quite exciting.
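The scientific-computing trend Kevin mentions usually takes the form of a learned surrogate: sample an expensive solver sparsely, fit a model to those samples, then query the model instead of re-running the solver. Here is a toy sketch of that pattern, with a cheap stand-in function playing the role of the expensive simulator; everything in it is illustrative.

```python
# Toy surrogate-model sketch: the "simulator" is a stand-in for a costly
# solver (e.g. one fluid-dynamics time step).
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a costly numerical solve."""
    return np.sin(3 * x) * np.exp(-0.5 * x ** 2)

# Sample the solver sparsely (the costly part) ...
X = np.random.uniform(-3, 3, size=(500, 1))
y = expensive_simulation(X).ravel()

# ... then fit a cheap learned approximation.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X, y)

# The surrogate now answers instantly instead of re-running the solver.
x_new = np.array([[0.5], [1.7]])
print(surrogate.predict(x_new))
print(expensive_simulation(x_new).ravel())  # compare against ground truth
```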
Actually, Sam, along those lines: are we going to see domain-specific models (here's our medical model, here's our science model, here's something else), or are there general models powerful enough that, with some fine-tuning, they can handle all of this?

Kevin and I have talked about this a bunch, and I'm curious what his current thoughts are. An analogy I would give is that for a long time people had all these specialized computer chips, and it turned out the CPU was better for everything, and people bet on that; but then it turned out that if you missed the GPU thing, that would have been really bad for this one super important area. So hyper-specialization, or the right degree of specialization, does matter sometimes. What I think is that building very powerful base models that possess some of these meta-learning capabilities seems super important, but then, at least in the short and medium term, fine-tuning them for the AI doctor, the AI lawyer, whatever, is going to be really important. That's how I would guess the picture goes for a while, but eventually I think these single super powerful models will just have a lot of advantages.
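A hedged sketch of the "powerful base model, then specialize" picture Sam describes, using the fine-tuning flow the OpenAI API exposed at the time; the file name, example format, and model name are assumptions for illustration.

```python
# Sketch: specialize a general base model for a domain via fine-tuning.
# Uses the pre-1.0 openai package's fine-tune flow; names are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1. Upload domain examples as prompt/completion pairs (JSONL), e.g.
#    {"prompt": "Patient reports ...", "completion": " Differential: ..."}
upload = openai.File.create(
    file=open("medical_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Fine-tune a general base model on the narrow domain.
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print("fine-tune job:", job["id"])

# 3. The resulting specialized model is then called exactly like the base
#    model, which is the "platform" property discussed above.
```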
Yeah, I completely agree with that. The thing that you've seen over the past couple of years is that researchers and technologists sometimes present themselves with a false dichotomy: that there's some kind of clean separation between a general model that can do everything and specialized models that are good at one particular task, and it has to be either one or the other. The reality is that these large general models are amazing and have done things that no narrow model has been able to do, but when you specialize them in specific ways, you can make them even more powerful. I think you'll just continue to see that pattern emerge, and the thing I would encourage folks to think about is: don't believe that you can do one without the other.
That's so interesting.
Kevin, we're talking about large models here, and these are very large models: very big in their training data, very big in their computational demands. They currently really are the province of pretty big companies. First off, is there a limit to that kind of scaling? Do you continue to get returns as they get bigger and bigger, or is there a limit at some point as we're building these? And obviously, Sam, I want to hear from you after that.
Well, there's always a limit; I guess when we'll hit those limits is the better question. This is one of those things where I think it's just very difficult to forecast when you might reach the limit. If I'm giving advice to everyone about how to think about these things: I don't think we're yet at the point where you stop trying to scale, nor do I think we're at the point where you look at the successes of scaling and say, oh, we shouldn't do anything else, like looking for alternative approaches that try to find models with equivalent generalization power at smaller scales. So I think we just need to have a very broad aperture for how we look at the problem space right now. That's certainly how we're approaching the problem: yes, scale is doing interesting things, so we will continue to invest in scale, but it's not the only thing we're looking at. I don't know, Sam, what do you think?

I strongly agree. I think we are not at the final paradigm yet; there is more to discover. As much success as we're having with scale, I think we can still have orders of magnitude of efficiency gains through algorithmic research on our current approaches. But more importantly than that, I think there is still another Nobel Prize-level discovery out there about how AI is going to work that is going to be different from our current paradigm of training giant transformers, and to stop looking for that would be awful. So I think it's really important to push harder on research, because I think there's a lot to do. But then on scale: yeah, scale is really good. When we have built the Dyson sphere around the sun and gotten compute as efficient as we can get it, then I'm very willing to entertain the discussion that we should stop scaling. But short of that, I think there's no reason that I see right now to not keep pushing really hard on it.
That gives us a fairly far-future timeframe for when we'll stop getting something from scaling. But at the same time, does this limit who can play in this field? Does it limit the number of players if this work requires that level of resources, that level of scale? Is that fine, or is that a problem in terms of who can actually do this work? Is that a concern?
Look, the reason that we were so excited to partner with Microsoft, and it feels like a really long time ago now, but I guess it's only been a couple of years, is that they shared a very deeply held conviction of ours: that democratizing access to this technology is super important to the kind of world that we want to live in. Having advanced AI concentrated in the hands of one company, which is what some other large tech companies would like to see happen, is not a world that we were excited to see. So it is true that there are not many people in the world who can train a GPT-3-class model, but it is also true that the best language model in the world right now, as far as we know, is available to anyone who would like to use it, via Azure and our own API. I think that's pretty cool, and I'm delighted to have Microsoft's support and partnership in that.
Yeah, and that is why I keep poking on this notion of platform. If the models themselves weren't behaving as platforms, if they didn't have these platform characteristics, then it would be really challenging. But look at what we're seeing just inside of Microsoft. Before we had models that behave like platforms, we had a lot of the scale anyway, but the way the scale manifested itself was that you had a whole bunch of different machine learning teams doing high-ambition but very narrow sorts of machine learning engineering. You'd have a team doing question answering in search, and you'd have a team doing content moderation on the Xbox platform, and dozens and dozens of these teams building things that you can now build on top of platform models. Being able to package that stuff up in a way where you can offer it to the outside world is, I think, more beneficial than what we had before. It's not as though the advent of these large models has created a new set of circumstances that makes it harder for organizations or people to access the power of machine learning; we had that before. Now you have platforms that people can lay their hands on, and maybe attempt more things than were possible before, because they don't have to build ten different machine learning teams inside their company to do the ten different things they would like to use machine learning for.
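Concretely, the "platform" property Kevin describes means that tasks which once needed separate bespoke systems become prompts against one shared model. A minimal sketch, with invented prompts and the same assumed engine name as above:

```python
# Sketch: two formerly separate "ML team" tasks served by one base model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def run_task(task_prompt: str, user_input: str) -> str:
    """Run any task by prefixing the shared model with a task description."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # one shared platform model (assumed)
        prompt=f"{task_prompt}\n\nInput: {user_input}\nOutput:",
        max_tokens=100,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

# Question answering and content moderation as two prompts, not two systems.
print(run_task("Answer the question concisely.",
               "Who wrote Profiles of the Future?"))
print(run_task("Label this gamer chat message as OK or ABUSIVE.",
               "gg, well played!"))
```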
Kevin, how does Microsoft, through Azure, control access to GPT-3? Because democratization is great, access is great, but these are also very powerful models that could potentially be misused in the wrong hands. How do you make that judgment? How do you know that someone's not going to take this, and the same question for OpenAI, and potentially do something wrong with it?
Yeah, I think this is one of the things that I am proudest about in the work that we've been doing over the past few years on machine learning, and honestly in our collaboration with OpenAI: trying to think through how you can bring a product like Copilot to market and put it into the hands of tens of millions of developers who could benefit from it in a way where things are safe, where you're not propagating vulnerabilities that might exist in the training data, and where you're trying to prevent the models from baking bias into the code they generate. We at Microsoft have this thing called the Office of Responsible AI, which is a partnership between the legal team and my team, and we have a set of responsible AI guidelines that are part of how everybody at Microsoft approaches their work with machine learning. We have a sensitive-uses framework that defines which things are so sensitive that you should never use machine learning at all; things that are sensitive enough that you should always have human beings making the final decisions; things where, with the right level of automated supervision, you can use these systems safely; and things where it's just default okay to use the off-the-shelf machine learning platform. We're trying to be disciplined and rigorous about that.
With things like GPT-3, Sam's team has an API surface area for that, and there's an Azure API surface area for that, and we have a really robust process in place to review what the intended uses are for those APIs. We've got a bunch of monitoring and control in place so that if someone violates the terms of use, we can suspend their access to the API. It's one of the reasons why, and this is a little bit of a controversial decision, and Sam should talk to the OpenAI part of this, but we have big models that we built at Microsoft that aren't GPT where we have made the same decision that OpenAI has made: we're not going to release the model parameters, because it makes it very difficult to control for those sensitive uses when you just make the model open in the wild; all of a sudden you lose control of it from a safety and responsibility perspective. We've jointly taken some heat from the research community for our decisions there, but I'm really proud of them. There's this idea that every model should just be thrown over the fence, and once you push that button, you just accept whatever consequences come. Rather than that, we put out a model in a way where you can adapt it over time, watch how it's being misused, stop certain use cases, and improve it when you find bias or other problematic behavior in the model. I'm very proud of our joint actions there, even though not everybody agrees.
I don't have much to add to what Kevin said. I think we've just got to figure out the right policies and then figure out how to enforce them. Longer term, I do think it'll be important that we build models that themselves understand what acceptable uses are and enforce that, rather than having humans trying to look at everything or go through policies; it's just going to become too complex. I think technically we will be able to solve that alignment problem. We may need new techniques, and the current ones may not scale, but I think we'll be able to get models to follow human intention pretty well. I'm optimistic about that.
What I think is a harder question that we will have to answer societally is: to whose values, to what values, do we align the AI? How do we decide what we're going to want these models to do? I don't think it's OpenAI's or Microsoft's responsibility to make all those decisions, but I do think society is going to have to start that conversation sooner rather than later.

Talk a little bit more about that: getting these models to actually understand human intention in that kind of way. As you say, having humans oversee this is not going to scale over time; I think we're already seeing that with social media. Do you feel confident that that's doable in the near future, where you could at least instruct the model, all right, this is off-limits?

Yeah, with current models I think our existing alignment techniques work surprisingly well, and delightfully, they work well for both capability and for safety. If you look at some of the work OpenAI has done on instruction following, those models are much less likely to behave in an unaligned way and do something the user doesn't want, but they also just function much better: almost all users vastly prefer them to the standard model. That was a nice example where we were able to align models with human feedback, and it made them safer and also just work better for most tasks. I think the alignment techniques that we understand now will continue to work for a lot of things. There's a big debate in the field about whether, as we get closer and closer to true AGI, the existing alignment techniques we have will still work, or whether we need some very different approach, and we're just going to watch it and measure it.

The thing that I will add is that the way we have taken a handful of specific applications to market that are powered by these large models is that you either do what Sam said and get the models themselves aligned, or you put a layer over top of the model. It's almost like the editor-in-chief of a newspaper: it supervises some of the things that the model is doing to ensure that it's doing reasonable things. For instance, there is a layer in GitHub Copilot that tries to prevent the system from parroting verbatim code that is on the internet. Humans know you shouldn't do that, because it's a violation of copyright, so we have a little editorial assistant that helps the model make sure the things it's suggesting aren't exact parroting of other people's code. There's a bunch of stuff like that we can also do, as we understand which applications are useful, so that in very specific ways you can assure that the model is both serving the needs of the user and operating inside the norms that society expects of things doing that particular task.
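The "editor-in-chief" layer Kevin describes is unpublished, but the general shape is a post-hoc filter sitting between the model and the user. A toy sketch, assuming a simple n-gram overlap test against a corpus of known code; the threshold and n-gram size are arbitrary choices for illustration.

```python
# Toy "editor-in-chief" filter: suppress model suggestions that parrot
# known code verbatim. The real Copilot filter is unpublished; this n-gram
# overlap test is purely illustrative.
def ngrams(text: str, n: int = 8) -> set:
    """All runs of n consecutive whitespace-separated tokens."""
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def editor_in_chief(suggestion: str, known_corpus: list,
                    overlap_threshold: float = 0.5) -> bool:
    """Return True if the suggestion is safe to show (not a verbatim copy)."""
    sug = ngrams(suggestion)
    if not sug:  # too short to judge; let it through
        return True
    for source in known_corpus:
        overlap = len(sug & ngrams(source)) / len(sug)
        if overlap > overlap_threshold:
            return False  # too close to existing code; suppress it
    return True
```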
To follow up on what Sam said about the larger question here: who ultimately decides what is in alignment? Whose values are we talking about when we talk about trying to align these models? If it's not Microsoft and OpenAI doing it together, or other companies, who does it? Is it a political process, in your mind? And also, it's great that your companies are taking these steps, but other firms and other researchers will be working on these models around the world. What can be done to ensure that they're following those same rules as well?
Well, look, I think it is most assuredly not us. This technology is going to be so influential on the shape of the future that it has to be society at large participating in a very large conversation about what it is we expect the technology to do: what things should we encourage, and what things should we discourage? And I think you need to have both halves of the conversation, because there's an awful lot of good that these pieces of technology can do that will make everyone's life better. So what I would hope is that we can have a conversation, and it's government, it's academia, it's industry; we need everyone to get themselves slightly better educated about what the technology itself is capable of, so that your mom and mine, and whomever else wants to have a say in how their future unfolds, can participate in this conversation and make smart decisions about who they're choosing to represent their voice. I hope that can be a really rich conversation that balances both the positive and the negative that we need to be thinking about.
And Sam, for you: I'm also particularly interested in that question about people outside this immediate community. How do we ensure this goes well when we think of this as another potential existential risk? Biotech can be the same way. How does the field ensure that there's not a bad player somewhere releasing a model that doesn't have the safeguards that you two have really tried to put in place?
Yeah, I don't know how we're going to stop unaligned actors from being less careful than we would like about this. I agree with everything Kevin said about the need to get the world's input, and a principle that I hold dear is that the people who are going to be most impacted by a technology deserve the most voice in how it's used. In this case, everyone is going to be impacted, so it does need to be a real global conversation. But how we get everybody to listen to that voice, I don't know. I hope they do.
When you think back to when OpenAI started, do you feel more or less optimistic about AI alignment, about the idea that if we can bring AGI into the world, we'll be able to do so in a safe way? You feel more? Why?

Well, we've been able to make progress at small scale, and I think if you can have contact with reality and make any forward progress at all, and then find ways to continue to accelerate, which we've been able to do, you can ride that curve longer than you often think you might be able to. There were a lot of open questions when we started OpenAI: would we be able to make progress towards AGI at all, and if so, would we be able to see any indications that we can align it and make it safe? I'd say on all of that, it's been a pretty good first five years.
Just as a last question, because we're running out of time at this point: we talked at the start about what 2022 will look like for NLP. I'm curious, let's go 10 years in advance; any further is probably too hard. What will it be like to be interacting with these models in a decade's time, not just as a coder but as an ordinary knowledge worker? Will it be something all-purpose, almost like an all-purpose research assistant, or will it be something different? How do you imagine that in 2032?
We'll start with Sam, and then Kevin.

By 2032, I don't think you'll know you're talking to a model and not a human, or maybe you will, because it'll just be so much better than any human at helping you out with stuff. By 2032, and that's a long time at the rate this field is going, I think it will be remarkable. It'll feel like you're not only talking to your smartest friend, but to thousands of smart friends who are domain experts in everything you want, working at superhuman speed to do whatever you need.
And Kevin, I'm also curious how this looks for you. You wrote a book, Reprogramming the American Dream, about how this technology is not just for people on the coasts but for everyone. Thinking about the kind of people you wrote about there, and the potential benefits and pitfalls of this technology for them, how do you see that picture in 10 years' time?

Yeah, it's really hard to predict what 10 years is going to look like. There's a book that Sam put me onto: Arthur C. Clarke wrote this book called Profiles of the Future, which I think was initially a set of essays that got collected into a book, and in one of them he articulates his three laws. Everybody knows the third one: any sufficiently advanced technology is indistinguishable from magic.
But the salient point he was trying to make in this book is that you can sort of predict the shape of the future, but trying to predict the particulars is really challenging. You can prove to yourself that everybody is bad at this just by imagining yourself 10 years ago, honestly putting yourself into that state: could you have imagined 10 years ago what today looks like? But that said,
I totally agree with Sam. I think you will have these language-based technology agents, things that you can talk to and ask for help with very complicated tasks in a much more fluid way than you are able to today. And I think there will be a really robust platform to build these things on top of, so it's not just Microsoft's agent or OpenAI's agent; it's going to be what the entrepreneurs and the creators of the world imagine all of these agents ought to be doing. Hopefully there will be new business models, and the thing that I'm really bullish about is that I don't see why it's not accessible to creative people no matter where they are,
whether they're in rural central Virginia, where I grew up, or in remote parts of Uganda. If you have a great idea about a problem that you want to solve, this should be technology that you can pick up and use to go solve that problem for yourself, your family, your community, wherever they're at. That, to me, is exciting.

Well, that's a great place to end it. Thank you, Kevin; thank you, Sam; and thank you to Index Ventures. Thank you.