In this week's episode of Your History from The Times and Sunday Times, we look back on the extraordinary life of Dr Ruth Westheimer, who escaped Nazi Germany as a child and found fame as a pioneering sex therapist in the 1980s. We also celebrate Eddie Spence, renowned for his skills as a cake decorator for the British royal family. Join me and Atemkin for Your History. Find us on TheTimes.com and wherever you download your podcasts.
Yo. Technology. What is it all about? I think it's not a pipe dream. That's number one. An AI doctor in your pocket? That's today, right? You literally have GPT-4o in your pocket right now. It is a better diagnostician than 99% of physicians, based on that one study looking at New England Journal of Medicine cases. And that's not to say I'm saying we should replace physicians. But I do think it's a really interesting time in medicine, when the problem for all of time has always been about distribution of expertise, and now potentially we have an infinite distribution of expertise. That's quite a hard concept for us to get our heads around.
Hello and welcome to Danny in the Valley, a weekly dispatch from behind the scenes and inside the minds of the top people in tech. I'm your host, Danny Fortson, the West Coast correspondent for The Sunday Times. And this week we're talking about healthcare. But specifically, the problems in healthcare: doctor burnout, high cost, poor results, the general brokenness of the system. And how AI might just help solve a lot of those problems, but, and here's the twist, not in the way you think.
So depending on who you ask, doctors spend anywhere from three to five hours in front of a computer every day: writing up notes from patient visits, writing letters, doing general admin. The first time I heard these numbers, which sound kind of realistic, was earlier this month when I went to Stanford for an event there, an AI in medicine symposium. It was kind of a first-of-its-kind gathering of the top minds in healthcare and AI, and there's obviously lots of excitement about AI. The thing that emerged that was most interesting to me was this: there was some talk about the fantastical idea we've been hearing about for a while now, the AI doctor in your pocket, the ability of AI to train on every single medical case in history and thus become better than any human doctor. People are very excited about that idea; that's the promise, that is the hype. But there are lots of reasons why that future, if it does arrive, will arrive very slowly.
On the other hand, AI could break the paperwork logjam, like, now. Because the combination of real-time transcription plus large language models that can understand, summarize, and write notes means that doctors could be freed from hours of administrative work, which, in effect, would be like instantly hiring millions more doctors by liberating them from those three to five hours a day of busy work. More free time to, well, have more free time, and also to see more patients instead of typing. And that is the promise of this week's guest. Dom Pimenta is the co-founder of a company called Tortoise, a UK startup that recently raised a bunch of money from Khosla Ventures to do exactly what I just described. They've created a note-taking app that you can think of like a medical co-pilot: it listens in to consultations, summarizes them, autonomously writes notes, generates letters, and so on. Pimenta is a trained cardiologist, and he reckons it can wipe away hours of work, which, if you look at the NHS, is a very big deal. There are record levels of doctor burnout. There are strikes going on, with doctors arguing for higher pay and better conditions. And if you could effectively, with this magical tool, dramatically increase the quality of your job, or your ability to actually do the thing you trained to do, that feels like a pretty big deal.
In short, AI could revolutionize healthcare, but do it by abstracting away admin, not necessarily by this more sexy idea of, you know, these AI doctors in your pocket. So, I think it's just a really instructive example that kind of separates out the AI hype from what is still a very, potentially very compelling reality and the difference it might make. So, anyhow, I think you're going to really enjoy this conversation, so I will now hand you over to my chat with Dom Pimenta, co-founder and CEO of Tortoise. Enjoy.
We actually talked initially a week ago when I was writing a story on AI and the NHS and how AI may, or should, be able to make some serious dents in how healthcare works, and more importantly doesn't work, today. So I'd love to just understand: what is Tortoise? Because I imagine most of our listeners will not have heard of you. Tortoise is an AI company. We are a healthcare company based in the UK, co-founded by myself — I'm a practicing physician — and my co-founder Chris Tan, who's a machine learning engineer. And our thesis is building an AI interface between the doctor and the computer.
So we've really seen a massive explosion of AI capability in the last two years, but, very specifically and very interestingly to me, the ability of large language models to understand clinical contexts, like the actual language of medicine. Medicine fundamentally is language, and now we have technology that can interact with that. So then the question becomes: what does AI-clinician co-working actually look like? And that's the question we want to answer in the long term as a company.
In the short term, doctors spend 60% of their time on computers. We don't want to; we never wanted to. And those three tasks are summarization, documentation, and doing things like tasks and actions. So what we're building is essentially an AI agent — we call our agent Osler — that takes over that interaction piece, allows the physician to spend twice as much time with their patients, and still gets the same, in fact probably better quality, data into the system. And it closes a big productivity problem in the workforce of healthcare systems everywhere, which is that there just are not enough people.
So you have to be much cleverer about what those people are actually spending their time doing. So that's us, yeah. And it's interesting, because I kind of came upon the story initially because I was at Stanford last week at this AI in medicine symposium. It was the first one they were doing, because, as we all know, there's been all this hype around AI over the past 18 months in particular about how it's going to kind of revolutionize everything, right?
And one of the big examples people keep rolling out is this idea of an AI doctor in your pocket. That is something people have been getting very excited about and kind of marketing as: oh my goodness, you can have this thing that is trained on every documented medical case in the textbooks and, potentially, patient histories, all this stuff. And it will be able to administer health in a very cheap, accessible way to anybody who has a smartphone or whatever.
And so that's a really exciting vision. And then we go to where we are today with the healthcare system, which is pretty broken in some pretty fundamental ways, and we can talk about the NHS because, you know, there's industrial action. There are doctors striking for better pay. There is burnout. There is stress. And depending on who you talk to, anywhere from three to five hours a day is spent just putting notes into electronic health records, doing letters, all that kind of stuff. And so it's fair to say that what you guys are doing, initially at least, is targeting that piece.
Because if you reduce that five hours to one, then that's four more hours to actually see humans and deliver actual healthcare. Yeah, exactly. And I think we'll probably look back at this period in history, in the 200, 300 years of modern medicine, look at all the human-to-human interaction that basically constituted the majority of those interactions for the first two centuries, and then look at these weird 10 years where we just typed on computers instead of looking at our patients, and find it very weird. Like, why did you do that? Why did you have to learn to touch type to be a doctor?
That doesn't make sense. But it is actually an essential and requisite skill, at least in today's world. The intention with digitizing our workflows was never this. It was always about better record keeping, understanding patients, sharing data. Ironically, none of those benefits ever actually materialized — better record keeping maybe being the exception. But certainly on the sharing of information amongst healthcare systems in the NHS, your primary care doctor still doesn't know what your secondary care doctor is doing.
Your secondary care doctor still doesn't know what the other secondary care doctor did in the other hospital, even if that was two days ago and 100 yards down the road. But what we've also done, in a sort of boiling-frog approach, is constantly added more and more digital work to the clinicians, because who else is actually going to do that work? Who is going to file the codes? Who's going to do the problem lists? Doctors like me, we're actually rebellious. We weren't doing it anyway.
I deleted every button I could find on Epic so I could actually use it the way I wanted to use it, to do my actual job — until one day I deleted the labs button, and then I couldn't look up blood tests. And that was a bit of a disaster. So there is an extreme version of that. But also, the idea that we can go back to being on paper: it's nice in principle, and I experienced it. When I was a cardiologist, about six years ago now, I was working during the cyber attack on the NHS.
We'd had about two years of computing, and then for a weekend, every computer in the hospital was down. My boss called me on the Friday night and he was like, do you remember all the patients? And I was like, yeah, I've been with them all week. And he's like, did you write them down? I was like, well, no. He's like, well, write them down now, because tomorrow that's how we're going to do the ward round. So we did it basically off the top of my head. But I have to say, the care was better, the patients loved it.
We finished the ward round in half the time. And I think the realization that the computers are actually getting in the way of most of what we're trying to do has really escaped us, because we just kept adding more and more cognitive load. And that is a massive contributor to burnout. About 50% of physicians globally now are self-reporting symptoms of burnout. I myself have been burnt out at least twice in my career. But we talk about that very facetiously.
But what does that actually mean? What does it mean to be burnt out? It means that your doctor doesn't care. Now that is terrifying. And every time I say that out loud, it still gives me chills to think about that as a possibility. And we're seeing the corollaries of that right now in the healthcare system. So what is the solution? More workforce? Fine, that is a solution, but it's 10 years away. And actually, when you get to that point in 10 years, all of the pre-existing problems — increasing chronic disease, resource shortages, inefficient systems — have just got worse.
So training more people now is probably just standing still in 10 years' time. So if technology like ours is not the answer — and I genuinely am asking this question to any of your listeners — I don't know what is, if it's not going to be technology that helps us solve at least the short-term problem. And the power of these models is insanely impressive.
I remember using GPT-3 about two years ago and just being shocked that I could ask it, what is heart failure? And it would give me a really coherent answer, better than most medical students. Now GPT-4, and GPT-4o or whatever they want to call it, has just come out. The knowledge that's encoded in these models is incredible. But they are not designed, and never have been designed, for knowledge retrieval or for accurate knowledge.
They're transformer models. They're designed for translation tasks, from one sequence to another. The biggest problem we have right now is how do we clinically evaluate these models and bring them in safely? And that's kind of why we're called Tortoise: because we realized that we'll live in this very strange world for the next 20, 30, 40 years, probably, where technology moves exponentially fast and all our societal systems progress linearly, very slowly, because there's so much legacy and infrastructure that just isn't ready for AI yet.
There are hospitals in the UK still on paper, right? That's not going to work. Well, that's what I was going to ask. You were talking about this idea of looking back and going, what were we doing then? What is it about this transition, which as you say has happened over the past 10, 15 years or whatever it was, that has gone so wrong, or that is so clunky? Because for any normal person who uses computers and is old enough to have used a typewriter, the computer is obviously the way better experience.
But it feels like something about the implementation of — you mentioned Epic, which I think is the biggest electronic health record company in the world — what is it about that system that has made this so much worse than just writing paper notes down? Because it feels like it should actually be far more efficient. Yeah, it's a super good question. And to be fair to Epic, that's actually one of the better ones; there are lots of EHRs that are way worse. But I think it's the fundamental imposition of a very complex task on top of another very complex task, which is seeing a patient.
And there's this myth that human beings can multitask. It's not actually true. Human beings cannot actually multitask. They've tested this in many ways: even talking to someone on the phone while driving a car slows your reaction speed to about the same as if you were drunk. Our brains are not these fantastical multitasking beasts. We're very good at concentrating on one thing. So there's a cognitive workflow, and then there's also the imposition of the distraction of having to document.
So I'm typing, right? And I'm not looking at you, because I'm trying to capture what you're saying. The irony is I'm missing most of what you're saying, and I'm now interrupting you to type down what you're saying, so you lose your flow. Any healthcare system in the world fundamentally comes down to the consultation between a patient and a clinician at any given moment, at any given point. That breaks down into three steps: talking, so taking a history; doing an examination, a physical examination, which AI is going to struggle to do, at least for a little bit; and ordering some tests or some x-rays.
Now, ask anyone on the street what they think the most important part of those three buckets is, and they will always say it's the tests, right? It's got to be the tests; the tests have the objectivity. Yeah. It's wrong. It's completely wrong. The tests are almost completely useless in almost every circumstance. It's about 70, 75% the actual talking — the interaction, really eliciting the right symptoms, listening for the things that aren't said, checking that you actually asked about family history and social history and allergies. We don't spend enough time doing that bit already. The examination adds about 15%, and the tests themselves are about 10%.
So now take that knowledge — that that is the single most important diagnostic part of the interaction. And the diagnosis is basically what defines how efficiently you flow through any given healthcare system. Right diagnosis, right time: you get the right test, the right treatment, you're out the door. Wrong diagnosis: you go round and round the system. And now I'll say, okay, do all of that, but also type, look at screens with 50 to 60 buttons, don't miss anything, prescribe something but don't make a mistake, fill in the forms, otherwise it doesn't get done. If you don't fill in this form with 55 fields, that patient doesn't get their x-ray, and that patient might die.
Right. And that's the burnout, right? It's the stress of the meaningless work that actually is important — that loses its importance and then causes this really big friction point. And it's funny, I've been doing this for a few years now, talking to physicians all over the world. It's a universal problem. And I have not yet found a single physician that really thinks this was a good idea for clinical care. Systems love it, right? Better data, better billing, more audit capability. But they're also under the illusion that they're actually getting the data that they're asking for. I can assure you they're not. We're giving up on both sides and just trying to get through the day. And that's actually what's happening to most physicians right now.
But going back: the best job I ever had was when I had no login for the computer at all. I had a little locum position, a rolling contract every two weeks, paid by the hour. So to go and do IT training was half a day that I didn't get paid — and what was the point, I'm only here for two weeks? But every two weeks they would renew my contract, and I ended up working there for maybe four months. I couldn't log into the computer, I didn't have any passwords. So what I did was I took a junior doctor with me on the ward, saw 20 patients, took another junior doctor, saw another 25 patients around the hospital, every single day. And I didn't realize this, but as a cardiologist I was actually unblocking a lot of people's discharge plans — they could go home after being seen by cardiology. So I was seeing 45 patients a day, and that is a lot for a secondary care doctor, far more than I ever saw before or since.
But the irony is I loved it. It's just pure medicine: see a patient, make a diagnosis, make a plan, check the bloods, move on. Actually, doctors don't want to work less hard. We just want to do the bit that we actually enjoy, and trained for, and thought would give impact to the world — not the bit that seems to only really care whether you fill in the forms or hit the appointment time. So how does it work? What have you built? It's called Osler, and I think you mentioned there's a reason you called it Osler. What is it and how does it work? Yeah, so it's interesting. It's called Osler, named after Sir William Osler, who was the father of modern medicine. And he's very famously ascribed the quote, "Listen to the patient. They are telling you what is wrong," which is exactly the point I was making about listening to the history. But it also stands for an acronym, because I like acronyms: Operating System Leveraging Electronic Records. And it's all about the leverage — how do you do more with the people that you have by using AI to increase their capabilities? So what that looks like: it's a desktop app, it's installed, and we're live in a bunch of primary care and secondary care settings in the UK right now. It listens to the consultations, so it takes the audio of your ambient consultation. So you walk in — I'm a patient, right? Yeah. And you, Dr. Pimenta, walk in. And I'm like, do you have to say, okay, I'm turning this on, do you consent to being recorded? Is that a requirement?
Yeah, I mean, it's an interesting open question, and lots of our physicians approach it a bit differently. We do provide patient information and consent forms and things like that. Interestingly, patients don't seem to mind. In fact, they love it. I have one user who's used it, I think, 700 times in the last two months, and not a single patient has ever asked for the actual information and consent form about the technology. The doctor's there, we have all our security and cybersecurity badges, data protection, we don't store any data. But beyond that, they're getting a doctor who's looking them in the eye and talking to them the whole time with no typing. And they're like, what is this magical experience? Oh, that's medicine, actually. That's what it's supposed to be. So I'm the patient. You come in with your computer, you set it on the table next to us, and it's recording, or you hit record or whatever. And then it works in real time, presumably — because we have a lot of these tools as journalists, automatic real-time transcription, which, as I've mentioned on this podcast many times, has been the biggest productivity unlock in my professional career. Because transcription alone — for old people like me who used to have to do it by stopping the tape, reversing it, pressing play, reversing it, pressing play again, going, what did they say? — to get a real-time transcription that becomes a searchable document saves hours and hours and hours and actually produces a better product.
So you have that, and is that it, basically? No. And actually, we don't even use real-time, and I'll tell you why in a second. So it listens to the consultation, and also any dictation. It's a lot more like having a colleague in the room with you: they listen to the consultation, they have the audio of that, then they take any orders for tests and things like that. It concatenates that together, and then we turn it into a transcript first of all — we can run inference very fast now, an hour of audio in about five to ten seconds, which is completely nuts; it's a really crazy world that we now live in. Then we pass that to a large language model — we have a stack of them, we do lots of clinical evaluations for accuracy and performance, and we're constantly changing the models as things are moving so fast. That makes your medical note, in your style and your templates, your structure. That's super important to physicians. We realized it matters that the output of these models actually sounds and feels like theirs — not because they're trying to pretend, but because pattern recognition is actually how physicians understand that a note is accurate or complete. It's a pattern recognition instinct.
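For readers who want a concrete picture of that transcribe-then-draft flow, here is a minimal sketch of an ambient-scribe pipeline. It is purely illustrative: the OpenAI models, the note instructions and the function names are my assumptions, not Tortoise's actual stack, and none of the clinical evaluation, safety or compliance layers Pimenta describes are shown.

```python
# Minimal ambient-scribe sketch (illustrative only; not Tortoise's actual pipeline).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

NOTE_INSTRUCTIONS = (
    "You are a clinical scribe. From the consultation transcript, write a concise "
    "medical note with the sections: Presenting complaint, History, Examination, "
    "Impression, Plan. Only include facts stated in the transcript; never invent findings."
)

def transcribe(audio_path: str) -> str:
    """Turn the recorded consultation audio into a raw text transcript."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    return result.text

def draft_note(transcript: str) -> str:
    """Ask a large language model to rewrite the transcript as a structured note."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": NOTE_INSTRUCTIONS},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    note = draft_note(transcribe("consultation.wav"))
    print(note)  # The clinician still reviews, edits and signs off on the draft.
```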
Sorry, how do they learn the doctor's style? As a doctor, do you have to submit a bunch of notes or things you have written, so it gets your cadence and things like that? Yeah, exactly. And back in the day, this would have meant submitting 100, 1,000 notes. Now we can pretty much do it with one note, and we're building that system at the moment — it's still very much in beta, but it seems to work pretty well. So you give an example of what you're trying to achieve, it generates a template for you, and then it will apply that template to future notes. And I think over time we'll probably build in a bit more intelligence, where it will learn the little corrections that you make and the different voices for different contexts. Lots of doctors have different templates for different contexts; they have different voices for different situations. But again, it's one of those interesting problems that seems very complicated, but actually there's an acceptance gap where you'll go, okay, it's not 100% me, but it's 85% me, and that's good enough, because I didn't have to write this myself, right? And I recognize it. And I've had that moment, actually, when I used the system. I put my concise cardiology template into it and I was like, oh, that's me. That's how I write. It didn't really sound like me, but it looked like me, and it felt like me, and it had all the information that I cared about. And I think that's the important thing for physicians. It's a very personal game.
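To illustrate the one-note idea Pimenta describes, here is a hedged sketch of deriving a reusable template from a single example note and then reusing it for future consultations. The prompt, model name and function are hypothetical; how Osler actually learns a clinician's style has not been published.

```python
# Hypothetical one-shot template derivation (not the product's actual method).
from openai import OpenAI

client = OpenAI()

def derive_template(example_note: str) -> str:
    """Abstract a reusable template (section headings, ordering, tone) from one
    note the clinician wrote themselves, stripping any patient-specific detail."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "From the example clinical note below, extract a reusable template: "
                    "list the section headings in order and briefly describe the writing "
                    "style. Do not copy any patient-identifiable details."
                ),
            },
            {"role": "user", "content": example_note},
        ],
    )
    return response.choices[0].message.content

# The derived template would be cached per clinician and prepended to every future
# note-generation prompt, so new notes follow their structure and voice.
```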
So yeah, it creates notes, and then also letters. Patient letters — in the UK, the letter is the medico-legal document of record. But now we have the capability to parallelize these functions. What I mean by that is it produces a note and it produces a letter, but it can also produce a patient letter in the patient's language, or a translated note, or a referral, all simultaneously. So these aren't sequential tasks for a physician any more; all the documentation can be parallel. And that list could, in the future, be maybe 100 different outputs: audits and registries and clinical trial searches, and who knows what else. And coding now as well. Coding is an interesting AI task, because it's a classification task but it requires clinical reasoning, and we have a bunch of coding tools in the system as well. And the plan is to extend that to taking over the downstream tasks too: ordering the blood tests, not just documenting them, ordering the prescriptions and having you review and approve them, and also collecting information.
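Because every document is generated from the same transcript, the fan-out Pimenta describes can be expressed as a simple parallel map. A sketch, again assuming the OpenAI SDK and made-up prompts; the real system's document types, coding tools and review steps will differ.

```python
# Sketch of producing several documents from one transcript in parallel.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()

# Hypothetical document types; a real deployment might add referrals, codes, audits, etc.
DOCUMENT_PROMPTS = {
    "clinic_note": "Write a structured clinic note from this consultation transcript.",
    "patient_letter": "Write a letter to the patient summarising the consultation in plain language.",
    "gp_letter": "Write a letter to the patient's GP summarising the consultation.",
}

def generate(prompt: str, transcript: str) -> str:
    """Generate one document type from the shared transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def generate_all(transcript: str) -> dict[str, str]:
    """Produce every document type concurrently rather than as sequential tasks."""
    with ThreadPoolExecutor(max_workers=len(DOCUMENT_PROMPTS)) as pool:
        futures = {name: pool.submit(generate, prompt, transcript)
                   for name, prompt in DOCUMENT_PROMPTS.items()}
        return {name: future.result() for name, future in futures.items()}
```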
So before I even see you, I'd like to know a lot more about you. I spend a lot of time trying to find information in healthcare record systems. It's usually pretty badly kept — kept for medico-legal purposes, but not for usability. So trying to pull information out at speed, in the way that you want it, is the other big time sink. So basically, by the end of the year, we'll have built a system that essentially means that, as a physician, you don't touch the computer regularly at all — only for specific tasks, things we haven't added or thought about yet, or in an emergency or something. And it does feel much lighter. We built a prototype 18 months ago, and I ran a clinic at the accelerator where Chris and I met — a simulated clinic with simulated patients. It just felt like magic.
I really did feel like I was just doing my job, and that relief of the burden of documentation — it's no longer my responsibility, it's done for me — is kind of a magical feeling. And we're now definitely seeing that with our customers, who are going home early. Doctors in the UK have never come across that before: going home 15 minutes early from your shift as opposed to two hours late. That is a phenomenal change and really something worth digging much deeper into. Yeah. Your History is a new podcast brought to you from The Times, and it brings together the real-life stories from our obituaries desk, which have been published for over a century. In this brand new show, we build on this legacy and explore the endlessly fascinating lives that have enriched and informed our own. Join me and our sponsor, Ancestry, as we journey through your history.
You say you're a cardiologist, so could we go back: where are you from, how did you end up as a doctor, and then what made you decide to do this? Well, we've got to go right back to the beginning. Okay. So hi, I'm Dom. Actually, I do do this — we do this thing in the company, we call it Lifeline, and it really helps people understand where you've come from and where you're going. I won't give you the whole story, because you guys have your own podcast. But yeah, long story short: Catholic parents, both lapsed, but I grew up essentially with the knowledge that life is not about being happy. There was no corollary to that — it was like, not happiness, but then a bit of an open question: okay, what is important in life?
And if you're raised Catholic, you spend a lot of time thinking about death and the afterlife, right? So I decided that money doesn't carry over, so that's pointless. Therefore, what is not pointless? Helping people. And then you might see them in heaven, and therefore the logic was, that's a good career, because those are the riches you take with you. So that was my four-year-old decision to become a doctor. It doesn't sound like you're very lapsed — this all sounds very Catholic and not lapsed. Well, I mean, I'm still pretty religious in that sense — not Catholic at all, but I do believe in God. So I think I just didn't change that plan, which is an interesting sort of operating system problem that I have: I make a plan and I just tend to execute it until something else changes my mind. And nothing ever did.
So my four-year-old self decided to become a doctor, and I just did that, basically. Fast forward 28 years or something, and I'd been a cardiologist in the NHS for about 10 years, and planned to go off and do an ML research PhD in cardiovascular machine learning for the diagnosis of under-diagnosed conditions. So specifically looking at populations like women, for example, who tend to be under-diagnosed for heart attack when they go to hospital, because human beings aren't very good pattern recognisers. And, sorry, the thing with machine learning, right, is that it's really good at recognising patterns? Yes. That's the whole point — and it's consistent as well, and it can sift through massive amounts of data.
Yeah, exactly. And I think it's the consistency that I would argue is actually even more important. If you have a machine learning algorithm that's 75% accurate, but it's 75% accurate all the time, you can meaningfully make it better. But if you have a human being that's 90% accurate but has bad days, sometimes dropping to 40%, you can't actually iterate on human beings. And this is something that's very fundamental to healthcare, where repeatable motions, done boringly and done well, are actually fundamental to care.
So that's why I really like AI: because even if it's bad today, you can make it better, but you can't make humans better in the same way. So that was the idea. I didn't get anywhere, because all the funding dried up when COVID came along. This was early 2020, in the first wave, and I got redeployed to COVID ITU. I worked there for six months and at the same time founded a charity. So that was my first founding experience. We raised some money.
What was the charity? So it's now called the Healthcare Workers' Foundation. It's still ongoing. At the time we called it HEROES, which again was an acronym, and I can't remember for the life of me what it stood for — I think it was something like Healthcare Extraordinary Response Organization or something. But the point really was that a lot of people wanted to help, and there wasn't really an obvious channel. I was working in ITU, seeing that we didn't have PPE, and lots of colleagues were reaching out to me saying, how do we get more PPE, because we don't have any protection.
So we built reusable PPE — that was our thesis — and we did what we could to pitch in. And that was really exciting. And with the team I had at that moment, I made all the founder mistakes, to go to your point about founders. We had five co-founders: me, my wife, my sister, my wife's cousin and one of my good mates. And we fought like cats. Oh my God. Really uncoordinated. I didn't know what I was doing as a CEO. We had an amazing team, genuinely, the team that worked for that charity. The branding guy now represents massive brands — I think that company has actually just been acquired. Comms, I think she works for Procter & Gamble globally now. The social media person works for Co-Coder. Literally, the talent that was in that team was nuts. And I didn't recognize it at all. I was like, oh, these are just people who do some stuff. I didn't know anything about the world.
I'd only been a doctor at that point. And are all of those relationships intact? Because your wife, your sister, your wife's cousin — I mean, that seems like a high-risk endeavor. Yeah, I'm not going to lie, it was hairy. I've got some scars from it. But no, 100% all intact. And actually Rosh, my good friend, is the current chairwoman of the charity and has been running it for the last few years herself, doing a really, really good job. One of the really amazing things about it was — yeah, it was super tough. And to be fair, COVID-19, the first two months, was a very grim, very grim job. We were very fortunate where I worked, but in other places it was dark. It was a lot of death. A lot of death.
Yeah. Well, yes. And part of talking about burnout — that was probably a big contributor to my burnout at that stage. But what actually was a release for me was the charity, because there was a positive channel for the energy. There were a lot of good things we were doing: delivering meals, getting celebrities involved — Jamie Laing from Candy Kittens donated 80 grand and gave loads of Candy Kittens sweets to hospitals. It was just those mad things. But I think the really interesting thing was that the learning curve as a founder is infinite. You're constantly learning how to do things that you have no idea about.
Like, what's Trello? What's Scrum? How do I do charity governance? Why am I fighting with the BSI about the thickness of plastics in their certification for eyewear? There were some mad things to do. Right. But I genuinely loved it. And I think one of the things that happens to medics in general is that you train for years and then you plateau, because there really isn't much more to learn. I mean, you can become an academic, but actually the fundamentals are: okay, now you've got to see a bunch of patients for 50 years. And that actually does burn a lot of people out, unless they find some other thing to keep them occupied. Whereas here was this thing where you could just keep growing and learning and growing and learning.
And I thought, wow, this is probably what I actually want to do — I just didn't really have an avenue for it. Anyway, a bit burnt out, I quit after the first wave, when COVID was over. That was a good moment. Do you remember that? Do you remember when COVID was over? That was a joke. So I left because COVID was over. Sorry — so you quit the NHS. You quit your job as a cardiologist.
Yeah. It was a big thing at the time. And my mum, especially, was like, what are you doing? But I was just super burnt out after six months of really hard work — and, yeah, for sure, lots of reward, but also a lot of really thinking about what was happening, and just needing a break to put my head back together after this absolutely crazy period. And then my wife was like, okay, it's great that you quit your job, but you do have three kids.
Well, two kids at that time. So, what's the plan? I was like, okay, let me get another job. So I ended up becoming a pharmaceutical physician and I worked on clinical trials. And again, that was really interesting work — first-in-human trials. I was one of the few doctors who worked on the first CRISPR trial in human beings, using single guide RNA. I really loved that work. But, you know, black swan event: my wife got sick. She's fine now — she's at home with our third child — but at the time it meant I had to stay at home.
So I became an AI healthcare academic — I was really grateful to run the academic part of the company where I was working. And at that point I had to learn to code to run the studies, because we didn't really have a lot of engineering support. So I learned to code, and then I was like, oh my gosh, you can talk to computers — which anyone listening who's vaguely techy is going to think is the stupid thing they realized when they were, like, three.
And here I am at, like, 35 or whatever, going, oh, you don't need to buy software, you can build stuff in code. And that's really powerful — I really liked that. And again, I just got really into the autonomy of creation, of running studies and finding things out and discovering things. And I think at some point, and I can't really remember what happened, I realized two things. One, there's not a lot of impact in pharma for doctors at that stage, right? You're giving drugs that someone else invented 10 years ago.
If I didn't go to work, someone else just did my job. In fact, I consented the second ever human being for CRISPR, but the only reason I did that is because the person who was supposed to do it was sick, and I just turned up and did his job for him. So your leverage is like one, if not minus 0.5. But health tech allowed me to leverage my knowledge as a clinician 10, 100, 200 times over, by designing studies that make clinical sense and then using technology to deliver them. And that was phenomenally powerful. I was like, wow, this is really high impact, and really good work when it's done properly, which has always been the hard thing to do. And I think the other thing I realized was I'd just got too old to have a boss. I remember being in a meeting thinking, why am I in this meeting? I'm 35. I've got three kids. What are you talking about? So I sort of did a quiet quit and got into an accelerator called Entrepreneur First, which is pretty well known in London now. They've actually got an office out here in SF, which I might go visit later.
Was there a plan? Or were you just like, I'm just going to go in there and see what happens? Yeah, you sound like my wife: what, is there a plan? You know, I think the thesis was: AI is cool, it's super powerful, what can we do with it in healthcare where we can unlock a lot of value? And actually I had an early thesis about patient records, decentralizing them and having AI as a companion. I hope your pitch to your wife was better than "AI is cool." I mean, it wasn't far off, to be honest. But it's an accelerator, right? So actually it's a really interesting one. It's not like YC, where you come in with an established company.
They look for individuals who show really good founder potential but actually aren't wedded to any idea. So maybe that was where I lucked out, by not really having a strong thesis beyond "I hope this is cool." And I was building something else when I met Chris. Chris Tan is a machine learning engineer, he's an academic, and he'd always worked on AI-human co-working — very specifically, teaching AI to use the human environment of computers. The virtual environment, like your computer desktop, is designed for humans: you have to intuit a lot to actually utilize the system, and the mouse and keyboard are the actuators.
So if you give that over to an AI model, can you train it to do the same? We'd now call them large action models, but this was a couple of years ago, when we really didn't even have the words to describe what this was. And then he came up to me one day and he's like, don't doctors spend a lot of time on computers? And I was like, yes, we do, and it's awful. Let's build that. So it's Chris's idea — I'm just along for the ride, if I'm honest with you. And that's what we've been doing for the last 18 months: trying to build out these tool sets.
In February, I think, it was announced that Khosla Ventures — Vinod Khosla, who's extremely well known out here, billionaire investor, very, very smart, the first investor in OpenAI, various other things — had invested. It was announced in February, which probably means you raised it a while ago. But how did you end up getting Khosla to invest? That's a really good question. No one's asked me that question before. So yes, astutely spotted: we did raise it quite a bit before the announcement. We got some money from the accelerator and then very quickly realized it's a very competitive market, so we really did need to raise quite quickly.
And I think one of the things I've realized that EF is really good at is this: it's much easier to be incredibly ambitious than it is to be highly ambitious. So shoot your shot for the moonshots, because you only have to land one of those moonshots, right? So I remember I met Ross Harper, who's the CEO of Limbic, as part of Entrepreneur First. He'd been invested in by Khosla a few years prior to me. We talked about something else and didn't really get very far with it, but I did say, you know, can I get an intro to Khosla Ventures? Which is a completely random thing, because I'm London based, I don't know anything about the VC networks, let alone some West Coast VC. Anyway, he made an introduction, I met Adina Techley, who's our partner here at KV and a really, really incredible person, and we ended up having five meetings.
We were in the middle of a round, so it was competitive, but we went from first meeting to close very, very quickly. So you've built Osler. But inertia is a powerful thing, especially in the healthcare industry. How has it been trying to actually get it in — because, as you said, it's being used in doctors' offices and hospitals, I think. Was it hard to get in? What is it like trying to inject something new into a system that is very staid or stagnant? Yeah, it's a really good question. There are a few elements to that. Was it hard? Yes. It was, and probably remains, the hardest part. The technology that we're deploying today is more or less exactly what we built 18 months ago, but we spent a year building product, getting compliant, figuring out a lot of the compliance workflows. I think AI is a fascinating technology because it's new and old at the same time. What I mean by that is it's new, for sure, super powerful, but it's starting to replicate situations that are old — for example, having a human assistant, and now an AI assistant. Lots of the older doctors recognize that there's a cognitive workflow they already fit into, where they had a scribe, a human, sitting there doing that work. So emulating that was a very good way of getting into the system: using a chat interface, producing notes in your style, really closely matching what you'd expect a human being to be doing, and even having the interaction, which we haven't built out yet but are planning to.
And I think that's what a lot of people have realized. For example, I've seen some commentary about people interacting with GPT-4o through voice and talking to it for the first time. I think it's a really good example of how the interaction and the workflow can fundamentally change how you appreciate the technology. And it's been exactly the same mentality for us. But again, it's really interesting: some of the older doctors never used to type, so we're bringing them back to a world they used to live in. This is almost a nostalgic thing for some of them. Have you ever heard that expression — how do you get a donkey down from a minaret? No. You find the part of the donkey that really wants to get down. And I think that's a really good ethos for AI in healthcare. You just need to find the part of the system that's suffering the most, and that is workforce productivity, that is burnout. So you come with solutions to fix the biggest pain point — which, at the moment, is really the only pain point worth trying to fix, because everything else is secondary. And people are willing to at least give it a go.
Are you aiming to get into America? Because obviously healthcare is, whatever number people throw out, something like a sixth of the economy out here. It's a very different system from the centralized NHS, but presumably there's a lot more opportunity as well. Yeah, the answer is yes, again. It's obviously a massive opportunity, and we're looking for US design partners right now and are in early discussions. But I think there is also a huge opportunity at home — it's a 160 billion pound market — to really build a product end to end. That's the strategy at the moment: building with doctors, solving the same problems. Clinically, the systems are very similar at a clinician-patient level.
The functions are the same. In fact, the technology is the same: the same EHRs, Epic and so on, take up most of secondary care. In the US the ratios are slightly different, but it's exactly the same set-up. But I also think that the opportunity here is interesting, because you do have models that are similar to the NHS, like Kaiser, for example — value-based care systems that actually do care about getting patients out and beds empty, whereas lots of other systems here are incentivized the other way. And I don't know if I have the answer, but I do think, whether it's my company or another company, there is a real opportunity for a startup to fundamentally disrupt the US healthcare system entirely, with the combination of a new healthcare model, with services, with human clinicians, and AI — because the leverage you can then achieve actually does make that cost structure work. And that actually frees people from insurance and from prior auth and all the awful stuff that happens over here.
But I think incremental change is not going to work. It's probably just going to make the problems worse, because of some of the incentive systems. If I speed up your clinicians and I save you time — and we've had US clinicians tell us this — they're like, no, no, I want to bill for that time. Don't speed that up for me. I like that time. And as a UK clinician, that's been super alien to navigate. But healthcare systems are having exactly the same problems in terms of efficiency, capability, audit reports, errors, things like that. So lastly, just going back to where we started: if we future-scape a little bit, you're designing these tools with these very rapidly developing AI models, these large language models. And the vision of people like Vinod Khosla is an AI doctor in your pocket for everyone. That could be the future. Do you think that is realistic? Or do you think this is, again, more of this Silicon Valley hocus-pocus hype machine stuff, where, actually, that may be deliverable 20 years from now, but it's just going to be a lot harder than we think?
Yeah, I think I might be allergic to that word, realistic, so let's discount that for a second. But I think it's an interesting thesis to dissect. I think it's not a pipe dream — that's number one. AI doctor in your pocket? That's today, right? You literally have GPT-4o in your pocket right now. And there's a lot of statistical evidence — I mean, you quoted one of the papers in that newspaper article — showing that GPT-4 with vision, or whatever you want to call it (I just call it four because I can't remember what it's actually called — Omni? 4o? OK, why do they name it that?) is a better diagnostician than 99% of physicians, based on that one study looking at New England Journal of Medicine cases. And that's not to say I'm saying we should replace physicians. But I do think it's a really interesting time in medicine, when the problem for all of time has always been about the distribution of expertise, and now potentially we have an infinite distribution of expertise. That's quite a hard concept for us to get our heads around. From that perspective, if you wanted a diagnosis and you were worried about your symptoms — and I wouldn't recommend this, because it's not clinically validated, and I'll come on to that point in a second — you do have a much better capability to get some pretty sound advice than you ever had in the history of humankind.
So in many senses it's not even a dream, it's a reality. But the medico-legal system, the clinical evidence, the clinical validation — all of these systems that we've built to do things safely in medicine, which are why it's actually called medicine and not nutrition or wellness or something else — don't exist yet for AI. And that's the lag. That's the bottleneck. So if I could prove to you that you could ask one question to 100 human primary care physicians who talk to you, examine you and make a diagnosis, and I could give the same data to an AI model, and the AI model is better in a head-to-head clinical trial — I don't know about other physicians, but I would accept that. But then the medico-legal system doesn't have an answer for that. Who's liable if that AI is wrong, even in the one-in-ten-million case where, let's say, it's egregiously incorrect? So that's why we've always said that, at least for the next five to ten years, AI augmentation of existing human clinicians is going to be the paradigm. I really can't see another way, because of those limitations — not the technology at all.
But having said that, people might just get so used to using the technology, the way they got used to using Google, that these systems organically evolve to become part of the system, and then we retroactively have to find a way to make it work as a medical profession, backwards. I mean, that's also a reality, right? Having talked myself through this, I do think it will definitely happen, and it will probably happen sooner than we think, as with all these things. But it will take a lot more change than just the technology itself. And I think that's the bit that we're missing, and that people are kind of waking up to — with CHAI and a few of these other organizations, and companies like ours, we're basically building clinical frameworks, assessment systems, evaluation. That's the bit that we're pioneering: to actually evaluate these systems at scale, to automate that evaluation now as well, to create guardrails.
And then, actually, to run trials. So we've been running trials in the UK NHS now for eight months with one of our partner hospitals: phase one, phase two, phase three, increasing the number of patients, increasing the risk, but actually de-risking for accuracy. And that's just for the ambient scribing — this isn't diagnostic support, this isn't decision making. But I think that's the right way of doing it: setting up actual systems of evidence where you can reliably look at these models in a clinical way. Eventually it will converge into something like a pharma model, and we won't allow AI in healthcare that hasn't been through a rigorous process of testing, that doesn't have provenance. So I think the pipe dream for technologists is that technology alone is going to be the solution. Well, there's a hospital somewhere that I talked to recently that is still on paper. So what do they do, right? Where's their AI going to live when they don't have computers? And that's the reality of healthcare. That's the delta of societal change moving much slower than exponential technology.
At some point, I think we'll just be like, forget what the AI is saying, it's too complicated — we might just go back to talking to each other as boring humans. Do you know what I mean? Well, that might be the answer. It just spins off and it's like, the AI is over there, but it's all got too much now. Yeah, exactly — just off doing its own thing or something. Well, look, I really appreciate you taking the time. It's fascinating. And we'll have you back on as things develop and it percolates more into the system. I think, especially given all the challenges in the NHS, and how the NHS in particular is such a constant national source of angst and debate, it feels like things could move quite quickly there, because they kind of have to, right? Yeah, it has to, for lots of different reasons. So it'll be interesting to watch. But thank you for taking the time. I appreciate it. And good luck.
No, it's great. Well, thanks for having me. And that is all the time we have. I want to thank Dom for making the time on his West Coast swing to sit down and chat. I want to thank you all for listening, for the ratings, for the reviews, for telling your friends and neighbors about this fantastic program. I will be writing this week a little bit off piste, so do check that out. I won't spoil it, but it's a fun one — I think especially our UK readers are going to find it very, very interesting. So do check out thetimes.co.uk. You can find me on Twitter at Danny Fortson. That is it for me this week. Thank you so much, as always, for listening and rating and reviewing. And we'll talk to you very soon. Bye bye.
Your History is a new podcast brought to you from The Times, and it brings together the real-life stories from our obituaries desk, which have been published for over a century. In this brand new show, we build on this legacy and explore the endlessly fascinating lives that have enriched and informed our own. Join me and our sponsor, Ancestry, as we journey through your history.