Mustafa Suleyman – New Ways for Technology to Enhance Patient Care – King's Fund 5th July 2016

**Patient Engagement and Collaboration**

We are excited to announce that we will be hosting our first patient engagement meeting on September 7th, and anyone is welcome to attend. This meeting marks an important step towards empowering patients to take an active role in their own treatment and care. We believe that patients have valuable insights and ideas to share with us, and we want to hear from them.

We recognize the importance of designing directly for clinicians, who are at the forefront of patient care. Our clinician-facing tools will be led by doctors and nurses, with input from the clinical community to ensure that our solutions meet their needs. We have already had some success in this area, particularly through our work with Pearse Keane, who pitched us an incredible idea for collaboration. This partnership has generated significant momentum and allows us to focus on delivering solutions that are tailored to the specific needs of clinicians.

We also believe that involving patients and clinicians in the design process is essential for driving innovation and improvement in healthcare. Our open ecosystem approach is built on Open Standards and interoperability, which we have been working to implement for some time. The technical implementation can be challenging, but we have made significant progress in converging on standards such as FHIR (Fast Healthcare Interoperability Resources) for data exchange and in aggregating data in the backend.
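To illustrate what converging on FHIR offers developers, here is a minimal sketch (not DeepMind code, and independent of any particular server) that extracts creatinine readings from a FHIR searchset Bundle of the kind a standards-compliant backend returns for `GET /Observation?code=...`; the sample Bundle is hand-written for illustration:

```python
def extract_observations(bundle, loinc_code):
    """Pull (value, unit) pairs for a given LOINC code out of a
    FHIR searchset Bundle (a plain dict parsed from JSON)."""
    results = []
    for entry in bundle.get("entry", []):
        obs = entry.get("resource", {})
        if obs.get("resourceType") != "Observation":
            continue
        codings = obs.get("code", {}).get("coding", [])
        if any(c.get("code") == loinc_code for c in codings):
            quantity = obs.get("valueQuantity", {})
            results.append((quantity.get("value"), quantity.get("unit")))
    return results

# A minimal hand-written Bundle, standing in for a server response.
bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {
            "resourceType": "Observation",
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": "2160-0",
                                 "display": "Creatinine"}]},
            "valueQuantity": {"value": 142, "unit": "umol/L"},
        }},
    ],
}

print(extract_observations(bundle, "2160-0"))  # [(142, 'umol/L')]
```

Because the Bundle shape is fixed by the standard rather than by any one vendor, the same few lines work against any conformant backend, which is precisely the interoperability argument.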

One of the key barriers to smaller companies and startups getting involved is access to data. We recognize that this is a critical issue, particularly for experimental apps and innovations. To address this challenge, we need secure and controlled ways of managing the release of data to authorized parties. This is where our work on canonical records comes in – we believe that this will align the ecosystem around a single backend, enabling more competition, transparency, and innovation.
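One way to picture "secure and controlled release" is a grant registry consulted before any data leaves the backend. The sketch below is purely hypothetical: the app identifiers and grant table are invented, and a real deployment would rest on audited consent and access-control infrastructure rather than an in-memory dictionary.

```python
# Hypothetical grant registry: which FHIR resource types each
# authorized app may request. Invented for illustration only.
GRANTS = {
    "aki-alert-pilot": {"Observation", "Patient"},
}

def release_allowed(app_id, resource_type):
    """Return True only if app_id holds an explicit grant
    covering the requested resource type; default-deny."""
    return resource_type in GRANTS.get(app_id, set())

print(release_allowed("aki-alert-pilot", "Observation"))  # True
print(release_allowed("unknown-app", "Observation"))      # False
```

The design point is default-deny: an experimental app gets nothing until an explicit, reviewable grant exists, which is what makes opening the ecosystem to startups compatible with patient-data safeguards.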

We believe the choice between comprehensive top-down systems and piecemeal, clinician-led innovation is a false dichotomy. Both approaches have value, and we aim to deliver solutions that facilitate both. Our commitment to open, non-proprietary standards means that our work will be accessible and editable by the community.

**Governance and Accountability**

At DeepMind Health, governance is a top priority. Before launching DeepMind Health in February 2016, we announced an independent review panel of unpaid reviewers who meet four times a year and have a small budget for a secretariat to conduct audits. These reviewers are not bound by any NDA or contract and can speak their minds freely. We have appointed nine expert reviewers who will have the freedom to hold us accountable and provide guidance to the community.

These reviewers are part of our efforts to build trust with stakeholders outside our organization. They will be able to share their insights and observations about our work, providing valuable feedback and direction. This independent review panel is an essential component of our governance structure, ensuring that we remain accountable to those who matter most – patients, clinicians, and the broader healthcare community.

**Conclusion**

We are committed to creating an open ecosystem that drives innovation in healthcare. Our approach is built on Open Standards, interoperability, and collaboration with patients and clinicians. We believe that by empowering patients to take an active role in their own treatment and care, we can drive meaningful change and improvement in healthcare. Thank you for listening, and we look forward to continuing this conversation with the community.

"WEBVTTKind: captionsLanguage: enwhat a phenomenal start to the morning that was um super energizing I enjoyed that and hopefully I'll touch on a couple of the themes that Matthew brought up so um I'm going to talk a little bit about um some new ways that we think maybe technology might be able to enhance patient care but before I get into that let me sort of give you a bit of background on um deep mind it's a a company that I started six years ago a British artificial intelligence research company um and back in 2014 we were acquired by Google we became Google Deep Mind and we're responsible for all of Google's general purpose um AI efforts and fundamentally our mission is to create General artificial intelligence so these are algorithms that are capable of learning directly from the raw data and can perform well across a very wide range of tasks much like a human being learning from scratch to do inference and prediction in unstructured environments but the second part of our cor mission is to use the technologies that we develop to make the world a better place place and really try to tackle some of what I think are society's toughest social problems that we're facing at our heart we're a uh a research organization uh we're based in King's cross in London we have 300 or so staff coming from 40 different countries and 200 of the best uh machine learning and artificial intelligence research scientists in the world so 200 postdocs and phds the largest concentration of AI researchers anywhere in the world with 12 tenard professors full-time in the office um and we publish everything we're very committed to uh uh being open and Publishing 80 or so peer-reviewed research Publications since we founded the company but we have very much what we call a hybrid culture um we started the company as a response to what me and my co-founders saw as a kind of failure of conventional organizational structures and institutions to respond to the complexity of of the tough Social 
Challenges that we're facing so we like to think of ourselves as retaining the long-term focus of Academia um whilst dropping some of the somewhat more painful challenges in Academia where it's really difficult to scale things up with good infrastructure and engineering and at the same time sometimes getting trapped in the cycle of um of of Publications that aren't always critical to the long-term research effort that you're focused on but secondly combining that with the pace and the scale and the agility of a commercial environment where we really can afford to have a very large um research engineering support infrastructure around uh our our core researchers all the while underpin underpinned by um the social values and the mission of the third sector and I think in some sense that makes us reasonably unique in attempting to combine all of these three uh uh different ethoses in a single organization and so let me briefly tell you how we attempt to go about building general purpose learning algorithms everything starts with an agent you can think of an agent as a control system for uh a robotic arm or a self-driving car or a recommendation engine um and that agent has some goal that it's trying to optimize we handc code that goal it's the only thing that we give the agent we say these are the things that you should find rewarding in some environment and the environment can also be very general so it could be a simulator to train a self-driving car or it could be YouTube where you're already trying to recommend videos that people find entertaining and engaging and the the agent is able to take a set of actions in some environment so it's able to experimentally interact independently and autonomously um in the environment and that environment then provides back a set of observations about how the state has changed in that environment as a result of the agent interacting with that environment um and of course the the environment passes back a reward which the agent 
is able to learn from so it's really learning through through feedback or through the reinforcement learning process and we train in the Atari test bed so the these are the uh uh 100 or so games from the 70s and 80s where the environment is actually the Atari simulator and the raw pixels are provided to the agent in their rawest form so really just think about an RGB a red green blue um uh pixel that is coming in 30,000 frames 30,000 pixel inputs per frame so 24 frames per second and it's wide up to the action buttons but it's not told what any of this stuff what any of the actions might do and so really it has to learn its own representation of what is rewarding or useful or effective in some environment um and so there's zero Pro pre-programming we don't tell the agent anything about the structure of the environment only that its goal is to maximize score and so there's a single agent that learns to play all of these games and I think the kind of best intuition is Imagine um a robot that is able to control the joystick and the fire button at the arcade and is really just processing the raw pixels that are coming into its visual stream so here's a little demo video of of a classic game called breakout um where basically you're controlling the paddle at the bottom here and this is only after 100 games so the agent really has just been dropped into the world and it performs really badly it's sort of randomly exploring the space figuring out what happens if I move left or right when the ball seems to bounce down here after these 300 or so games the agent gets pretty good and this is pretty much Human Performance it's learned purely from the raw pixel inputs that anytime it bounces the ball up and hits one of the blocks it gets more score but then unexpectedly we left it training for about 500 or so games and the really interesting thing is it learned this cool strategy of tunneling up the sides and sending the ball up the back to get as as much score as possible with 
as minimum effort that is required and so this is an example of a very very crude early small scale um system in a toy-like environment that has learned something that we hadn't pre-programmed into it that turns out to be a very effective strategy and we were lucky enough to um have got a nature paper for this work back in 2015 and and we were given the front cover which as as you'll know uh research scientists were very very proud of um a few months back we went on to tackle one of the grand milestones in artificial intelligence the classic game of goats 3,000 years old there are 40 million players around the world and the really interesting thing about the game is its complexity so there are 10 to the power of 170 possible board positions on a 19 by1 19 board so that's a 10 with 70 Z next to it possible board configurations on a 19 by9 board and and the objective is to really Place black stones or white stones to surround your opponent and gain territory so it's a very unstructured game unlike chess for example where all of the pieces are worth different amounts and you can use the the value of those pieces to bias your uh training you can put lots of energy into trying to protect your queen for example and sacrifice your pawns but there's no such structure in go in go so it was widely recognized to be one of the really hard outstanding challenges in artificial intelligence and in fact just to put it in context it's been estimated that there are something on the order of 10 to the power of 70 atoms in the known universe so there are in every liquid solid and gas fewer atoms in the known universe than there are possible board positions in the game of goost so traditional techniques of writing rules or handcrafting features as it's known um just does doesn't work you have to train an algorithm to learn its own representations from scratch and we were really uh lucky to be able to beat the go world champion the first time a computer system's ever beaten a human and 
in fact we bet we beat the best human in the world which was really exciting in Korea uh 3 months back 280 million people watched it live um it's one of the the most popular games out there uh and that's more viewers than there were in the Super Bowl which is pretty cool 35,000 press articles and Incredibly 10 times more go boards sold in two weeks um so they sort of ran out of go boards which was which was pretty cool um and we were again lucky enough to get a nature paper for this and and we were given the front cover which the first time that a uh a computer science lab or an AI lab got two nature front covers um in 12 months so pretty cool but really what we're about is making these sorts of advances in a research and Engineering environment and then trying to apply our expertise to make the world a better place so why did we pick Health well about a year and a half ago um a bunch of my team got together and spent about three or four months trying to figure out where we should go next I mean we could pick all sorts of different applications application areas and there were three core principles guiding um what we were looking for clearly we wanted Rich complex data environments which uh are relatively unstructured that require us to make interesting predictions as you can see we're clearly motivated by social impact and that's something that was very important to us to find an opportunity to take our technology um into into the real world and make a meaningful difference and of course we wanted to build a sustainable business model around this but the kind of most remarkable thing about health compared to all of the other things that we looked at like synthetic energy developing sustainable food water water purification nanomaterials Material Science in general we looked at all these different areas the remarkable thing about health was that there's an incredible margin for improvement um in if if we're successful in being able to deploy Cutting Edge and modern 
technology systems I mean there really is no other sector that I can think of in the world that is so far behind The Cutting Edge in terms of technology and if we're successful um that represents a massive opportunity for us to have a beneficial impact and many of you will obviously be familiar with this being the kind of default reality so much of the valuable information that describes the current state of a patient's condition or the the current efficiency of operation of a particular Ward exists in in this kind of data environment and so when we look at the the current health it system some research that was done recently at Imperial talking mostly to Junior doctors and nurses people will repeatedly say things like it's clunky it's not intuitive everything takes too many steps um and stuff doesn't mean what we all think it means or it took an hour to teach a doctor how to prescribe paracetamol or I don't think I'm stupid but it couldn't be more complicated if I can take a computer out of a box and understand how to use it straight away why is it so useless that I need 6 hours of e-learning to be able to order a blood test and I think it's this really strange dissonance this sort of frustration that in your lunch break you're using you know Facebook and Uber and you're on messaging platforms and they're at the very Cutting Edge of the technology that's available to you in the world and then in the afternoon you go back and use a desktop based system that struggles to boot and you maybe using pages and fax machines to go about your daily work and so we launched Deep Mind health Health in this context back in February of 2016 front of 400 or so Clans at the raw Society of medicine and in many ways it was like an incredibly daunting Prospect I mean you know in some sense it's uh as many people have already pointed out is kind of a graveyard of failed technology efforts over the last 20 years and so I think in that context we really had to think about what are we 
going to bring that's going to be very different clearly we have machine learning and artificial intelligence but I think a lot of this is about the the approach that we take to developing software and how you put both patients and clinicians at the very Forefront of that and so the approach that we take is to frame everything as starting with an observation of what does a user do on a day-to-day basis we spend lots of time immersing ourselves um in Wards and in uh uh uh and with nurses in the mess rooms trying to observe what they do Define their challenge gather as many insights as we can and then immediately start to build something as fast as possible we want to show uh what a rough design might look like here are some wireframes and then develop that a little bit further test it and then as we start to develop a solution to try and measure build and learn and then just rinse and repeat try and do that in very very quick iterative cycles and So within three weeks or so of um meeting our first nurses signing our agreements with the RO free back in uh September and October we had a working uh prototype obviously not connected to any data but that that nurses and doctors could actually point to and say this button's in the wrong place this color is difficult to read this menu hierarchy is sort of in the wrong order and so we can instantly get feedback and deliver pretty much um what nurses and doctors tell us they want to see and so this is kind of our Mantra ABC always be clinician Le and every single project that we will work on and the and the projects that we've worked on so far have been brought To Us by um a nurse or a doctor who has some idea some insight into how they can change the behavior in their day-to-day operation and how a technology solution might actually work and here you can see this is a one of our whiteboards in the office always reminding ourselves who our users are and what they need and so how might patient care be better supported by um 
technology well I think obviously there's an enormous um opportunity for improvement one in 10 patients experience some kind of harm in hospitals and half of those um are completely preventable or avoidable harm and in many of those cases 50% of um of those cases detection of of the patient deterioration in question has actually been delayed and that's actually a communication and coordination issue and largely I think this is because of these current limitations most of the really valuable data sits on paper and on charts and isn't logged or tracked or recorded there's no auditable log that you can verify of the pag of messages that have been sent the reminders that have been sent those that have been missed I mean who knows maybe that person's not even at work at the moment maybe they're on holding and you don't know until they don't reply for two or three um pages so I think there's two core patient safety challenges that have framed everything that we do um in Deep Mind Health the first is how can we do a better job of identifying which patients are at risk of deterioration largely in real time and then the second is once we've once we've identified which patients are at risk how do we actually intervene I mean we don't want this to end up just as a as a report that's that advises on some reorganization of of facilities in on on a ward we actually want to deploy technology in real time that enables clinicians to do a better job of escalation and intervention and so on our patient safety challenge number one how do we do better detection we've looked at acute kidney injury over the last sort of 12 months or so and this is a remarkably important problem 25% of all admissions present with some kind of evidence of an Aki and there are 40,000 or so deaths in England due to Aki alone it's estimated that something like 20% of these are actually pre preventable and that the cost could be as much as a billion and a half pound and so um sort of a couple years ago in 2014 
NHS England issued a patient safety alert uh to to mandate that the acute kidney inj injury algorithm um be deployed in in hospitals and so once again the first thing that we did is try to observe users in um their day-to-day setting so we went into the RO free and we mapped out the pathway what is the experience from a patient perspective today and it turns out it's actually really complicated um there are lots of different stages to the path that a patient might go through and what we noticed is that there are a whole series of life-threatening and complicated stages in that pathway which actually seem to be where we're missing on the key bits of deterioration and so what we wanted to do is take a step back and see how could we intervene earlier to do better risk assessments more realtime prevention and monitoring and then hopefully try to redirect patients through the pathway um toward towards potentially a full recovery and then a discharge and there and once we've broken it down into these sorts of steps we have a a shared visualization between us and the clinicians on where all of the key intervention opportunities actually sit and so um in response to this we developed streams um our Aki alerting platform for for blood test results so I'm going to show you a little demo of what this looks like um we start with um our login screen um we're able to securely log in on on mobile well and as soon as you're you've logged in you're presented with a list of all of the patients um that that appear to be alerted on the acute kidney injury um algorithm and you can scrub down that list maybe tap on Robert Jones here can see all of the blood test results you can see that their creatinin has spiked over uh the last uh sort of few weeks of the Baseline and we can also compare that to any other blood test result and pinch and zoom scroll through the others like CRP or potassium and so that's the very simple intervention that we've built so far um keeping it really focused 
on one very specific condition um using the blood test results but in the future I think there's a real opportunity for us to go much much further and extend this um to a to a broader patient Centric collaboration platform so here's a view of what things might look like in the future um we might be able to search through um all of the patients Maybe by consultant by Ward or by specialty we can scrub down our list of patients and pick um or we can type in um a particular patient that we're interested in because we're on that Ward then we go through to Robert Jones we can see their date of birth their medical record number the ward they're on we can go through their overview see their current medication see their Vital Signs here we can see that the patient temperature is pretty high and their blood test results their blood pressure is high um maybe we want to look at their past diagnosis we can see that this patient has pneumonia can see their previous procedures and if we um we can also see their current medication that they're on and if we swipe uh right we can see that um we can go across to the timeline and see that this patient is uh uh being treated for sepsis they have an Aki 2 we can see when the last blood test was ordered that it's in the lab at the moment that it's ready for review we can also scroll down and say look at um the the SATs for example or we can see that this patient had been previously transferred can obviously go across and take another look at the blood test results and all of this information is in the palm of your hand able to uh be reviewed whenever you need it here we have um the X-ray on mobile you can we can see um that this patient has pneumonia and we can also see the report from the radiologist who might want to zoom in and um uh and then share it with a colleague and so if we if we think about what this does for us this actually essentially puts in the palm of our hand the ability to detect in real time patients that are are a 
risk of deterioration but that's only sort of one part of the challenge the next key thing that we need to be able to do is escalate and better intervene and this is where messaging and commenting becomes so important so take for example with the the the X-ray that we looked at just now um here we see that a registar is able to make a comment on the X-ray on that report um and then Plus in a respiratory consultant to get an expert View and that that that exchange can happen in an auditable uh way that allows us to verify um you know retrospectively if needed um what the uh senior clinician um has actually said and what action is subsequently taken here we see an example of the blood test results where um a uh an F1 is able to call on the specialist nurse to come and see the patient as soon as possible and potentially organize a renal ultrasound and so that gives you a bit of a sense of some of the things we're doing in clinical decision support and direct patient care how we're trying to build apps that um put this kind of information and this kind of communication and coordination in the in in the palm of a clinician's hand quite separately to this we're also embarking on a research program to see if our machine learning um and and and AI Technologies can actually help with some aspects of diagnosis so let me tell you a little bit about one of the problems we've been focusing on sight loss and diabetic retinopathy as well as AMD is something that actually affects uh 625,000 people in the UK and 100 million people worldwide and obviously diabetes is a very much growing problem and the remarkable thing is that if you do have diabetes you're 25 times more likely to suffer some kind of sight loss uh but most interestingly the very most severe types of sight loss uh due to diabetic retinopathy can actually be prevented um through earlier detection so one of the things that we've been thinking about is how could we potentially help with better realtime classification of 
those um Radiology exams coming through to enable a more sensible triage of which patients uh require more immediate responses and so the current reality is that in human performance there's a there's a great deal of backlog in reporting which means that the results aren't available potentially in clinic for for weeks and there's also a lack of consistency between different graders and sometimes the reporters will miss some of the sensitive changes in diabetic retinopathy and AMD with machine learning one of the things that we hope we might be able to do um is to do much faster and near instant results um but also more consistent and more standardized uh performance and I think this will also help us to understand to adjust for some of the normal variations that we see which will allow us to increase our specificity so recently we've been working with um The morfield Eye Hospital and so I want to show you a very short video so that you can hear directly from our friends at morefields um sang and also Piers Keane who we've been collaborating with on the project my name is Pierce Keane uh I'm a consultant opthalmologist at morefields ey hospital I'm Professor s Co I'm a consultant eye surgeon here at morfield Eye Hospital morfield is the oldest Eye Hospital in the world and we see over 600,000 patient visits a year which is more than twice any other Hospital in either United States or Europe it's wide ranging hospital where we see very common conditions and also very rare conditions every week we do many thousands of ooc scans ooc gives us very high resolution images of the eye in a very non-invasive manner some people are waiting long periods of time to see specialists in the eye clinics and therefore have the potential to lose sight in that waiting period This is where Deep Mind is able to help us understand these huge data sets and then put it together so it benefits towards making a good diagnosis and achieving the best possible treatment for our patients if you 
have an O scan done a machine learning algorithm will be able to tell you if it's something urgent versus something that's not so urgent I think that this has huge potential for patients it will allow us to get much earlier detection of these blinding diseases and as a result of that I think we'll get much earlier intervention and much earlier treatment for these patients so this is very much early work um but um we're committed to publishing all of the results of our work including our algorithms our methodologies and our Technical implementations and so hopefully when we're ready you'll hear more from us on the results of that research towards the back end of this year so I think it's really important that we um outline at least four of our key principles that Define everything that we do um with respect to deepmind health I think the onus is very much on us to find a new way of working with the NHS and we really do want to take this opportunity to invite everybody to participate with us in helping us to do things quite differently and I think there's at least four key things um first of all being patient Centric being clinician Le um committing to open and operable standards and the highest levels of transparency and accountability so let me touch a little bit on each one of those the first thing to say is that I think we really have to focus on improved patient safety outcomes and experience and I think in order to do that we actually need to be put the Patient First in everything that we do and it's an incredibly energizing speech for Matthew just now to hear him talk about the patient portal um talk about the patient owning the data and having access to their data and becoming an active participant um both in guiding their own treatment but also in ultimately holding accountable the system that is giving them Care at the moment I think the experience is to always be on the receiving end of um treatment and all these other things that are happening to you I 
personally have a partner who has a chronic condition and we've been in and out of hospital for four years or so and the number of times that we refill in the same form very patiently with our date of birth and address and those details the number of times that we're asked you know what are the drugs that you've just taken this morning which expert or which specialist did you see earlier can be very very draining and I think if there is a canonical record of this information then it'd be much easier for us to play an active part in um our own treatment and so um we're really pleased to announce that we're going to have our first um patient engagement meeting on the 7th of September so anybody is welcome to come and join that um please email us at patients at deepmind Health we're looking for feedback we're looking for people to help co-design with us we're looking for ideas um and in general any thoughts or suggestions that people have are most welcome um we also recognize that it's incredibly important and delivers enormous momentum for us to be designing directly for clinicians um first and foremost and so all of our clinician facing tools will be led by um doctors and nurses I actually met um Pier Keane because he cold emailed me on LinkedIn and a couple days later we got together for a coffee and he pitched me this incredible idea and I was like this is really awesome we have to collaborate on this the same story with our work with the RO free and so I think this um not only generates an enormous amount of momentum but allows us to really be led by the specific needs that nurses and doctors have and so once again um please email us if you have uh any ideas or you want to be involved in user testing or if you want to give us some feedback on something that we could be doing better um we can only be successful if we hear from people um who are at the front line of Service delivery and I I want to say something really important about how we go about creating this 
What a phenomenal start to the morning. That was super energizing, and hopefully I'll touch on a couple of the themes that Matthew brought up.

I'm going to talk a little bit about some new ways that we think technology might be able to enhance patient care. But before I get into that, let me give you a bit of background on DeepMind. It's a company that I started six years ago, a British artificial intelligence research company, and back in 2014 we were acquired by Google. We became Google DeepMind, and we're responsible for all of Google's general-purpose AI efforts. Fundamentally, our mission is to create general
artificial intelligence: algorithms that are capable of learning directly from raw data and can perform well across a very wide range of tasks, much like a human being learning from scratch to do inference and prediction in unstructured environments. The second part of our core mission is to use the technologies we develop to make the world a better place, and really try to tackle some of what I think are the toughest social problems we're facing.

At heart we're a research organization. We're based in King's Cross in London, we have 300 or so staff coming from 40 different countries, and 200 of the best machine learning and artificial intelligence research scientists in the world, 200 postdocs and PhDs, the largest concentration of AI researchers anywhere in the world, with 12 tenured professors full-time in the office. And we publish everything. We're very committed to being open, and we've published 80 or so peer-reviewed research publications since we founded the company.

But we have very much what we call a hybrid culture. We started the company as a response to what my co-founders and I saw as a failure of conventional organizational structures and institutions to respond to the complexity of the tough social challenges we're facing. So we like to think of ourselves as retaining the long-term focus of academia, whilst dropping some of its more painful aspects, where it's really difficult to scale things up with good infrastructure and engineering, and where you can sometimes get trapped in cycles of publication that aren't always critical to the long-term research effort you're focused on. We combine that with the pace, the scale, and the agility of a commercial environment, where we really can afford a very large research engineering support infrastructure around our core researchers, all the while underpinned by the social values and the mission of the third sector. I think that makes us reasonably unique in attempting to combine all three of these ethoses in a single organization.

So let me briefly tell you how we attempt to go about building general-purpose learning algorithms. Everything starts with an agent. You can think of an agent as a control system for a robotic arm, a self-driving car, or a recommendation engine. That agent has some goal that it's trying to optimize. We hand-code that goal; it's the only thing we give the agent. We say: these are the things you should find rewarding in some environment. The environment can also be very general: it could be a simulator used to train a self-driving car, or it could be YouTube, where you're trying to recommend videos that people find entertaining and engaging. The agent is able to take a set of actions in the environment, interacting experimentally, independently, and autonomously, and the environment then provides back a set of observations about how its state has changed as a result, along with a reward that the agent is able to learn from. So it's really learning through feedback, through the reinforcement learning process.

We train in the Atari test bed. These are the 100 or so games from the '70s and '80s, where the environment is the Atari simulator and the raw pixels are provided to the agent in their rawest form. Really just think of RGB pixel values, roughly 30,000 pixel inputs per frame at 24 frames per second. The agent is wired up to the action buttons, but it isn't told what any of the actions might do, so it has to learn its own representation of what is rewarding, useful, or effective in the environment. There is zero pre-programming: we don't tell the agent anything about the structure of the environment, only that its goal is to maximize score. A single agent learns to play all of these games, and I think the best intuition is to imagine a robot at the arcade that controls the joystick and the fire button and is really just processing the raw pixels coming into its visual stream.

Here's a little demo video of a classic game called Breakout, where basically you're controlling the paddle at the bottom. After only 100 games, the agent, which has really just been dropped into this world, performs really badly. It's randomly exploring the space, figuring out what happens if it moves left or right when the ball bounces down. After 300 or so games the agent gets pretty good, and this is pretty much human performance: it has learned, purely from the raw pixel inputs, that whenever it bounces the ball up and hits one of the blocks it gets more score. But then, unexpectedly, when we left it training for about 500 games, it learned this cool strategy of tunneling up the sides and sending the ball round the back, getting as much score as possible with the minimum effort required. So this is an example of a very crude, early, small-scale system in a toy-like environment that learned something we hadn't pre-programmed into it, and that turns out to be a very effective strategy. We were lucky enough to get a Nature paper for this work back in 2015, and we were given the front cover, which, as research scientists, we were very proud of.

A few months back we went on to tackle one of the grand milestones in artificial intelligence: the classic game of Go. It's 3,000 years old, there are 40 million players around the world, and the really interesting thing about the game is its complexity. There are 10 to the power of 170 possible board positions on a 19-by-19 board; that's a 1 with 170 zeros next to it. The objective is to place black stones or white stones to surround your opponent and gain territory. It's a very unstructured game, unlike chess, for example, where all the pieces are worth different amounts and you can use the value of those pieces to bias your training: you can put lots of energy into trying to protect your queen, for example, and sacrifice your pawns. There's no such structure in Go, so it was widely recognized as one of the really hard outstanding challenges in artificial intelligence. In fact, just to put it in context, it's been estimated that there are something on the order of 10 to the power of 80 atoms in the known universe. So there are fewer atoms in the known universe, in every liquid, solid, and gas, than there are possible board positions in the game of Go. Traditional techniques of writing rules or handcrafting features, as it's known, just don't work; you have to train an algorithm to learn its own representations from scratch. We were really lucky to be able to beat the Go world champion, the first time a computer system has ever beaten a top professional at the full game, and in fact we beat the best human player in the world, in Korea, three months back, which was really exciting. 280 million people watched it live, more viewers than the Super Bowl, which is pretty cool. There were 35,000 press articles, and, incredibly, ten times more Go boards were sold in two weeks; they sort of ran out of Go boards. We were again lucky enough to get a Nature paper, and we were given the front cover, the first time that a computer science lab or an AI lab has had two Nature front covers in 12 months. Pretty cool.
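The agent-environment loop just described, a hand-coded reward, actions, observations, and learning from feedback, can be sketched in a few lines of Python. This is an illustrative toy, not DeepMind's system: the corridor environment, the tabular Q-learning update, and the hyperparameters are all our own simplifications, standing in for the Atari simulator and the deep networks used in the real work.

```python
import random

random.seed(0)  # reproducible run

# A toy environment standing in for the Atari simulator: the agent walks a
# one-dimensional corridor and is rewarded only for reaching the far end.
# Same contract as in the talk: the environment takes an action and hands
# back an observation, a reward, and a done flag.
class Corridor:
    LENGTH = 10

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.LENGTH, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.LENGTH
        return self.pos, (1.0 if done else 0.0), done  # the hand-coded goal

# Tabular Q-learning: the agent is given nothing but the reward signal and
# learns the value of each action in each state by trial and error.
def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    env = Corridor()
    q = {(s, a): 0.0 for s in range(env.LENGTH + 1) for a in (0, 1)}
    for _ in range(episodes):
        state, done, steps = env.reset(), False, 0
        while not done and steps < 1000:  # safety cap per episode
            steps += 1
            if random.random() < epsilon:
                action = random.choice((0, 1))  # explore
            else:  # exploit, breaking ties randomly
                best = max(q[(state, 0)], q[(state, 1)])
                action = random.choice([a for a in (0, 1) if q[(state, a)] == best])
            nxt, reward, done = env.step(action)
            target = reward + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy: after training, "right" beats "left" in every
# non-terminal state, even though we never told the agent what the actions do.
policy = [1 if q[(s, 1)] > q[(s, 0)] else 0 for s in range(Corridor.LENGTH)]
```

The point of the sketch is the division of labour: the goal is hand-coded, but the behaviour, here, "always move right", is discovered through feedback alone, exactly the way the Breakout agent discovered tunneling.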
But really, what we're about is making these sorts of advances in a research and engineering environment and then trying to apply our expertise to make the world a better place.

So why did we pick health? About a year and a half ago a group of my team got together and spent three or four months trying to figure out where we should go next. We could have picked all sorts of different application areas, and there were three core principles guiding what we were looking for. Clearly, we wanted rich, complex data environments, relatively unstructured, that require us to make interesting predictions. We're clearly motivated by social impact, and it was very important to us to find an opportunity to take our technology into the real world and make a meaningful difference. And of course we wanted to build a sustainable business model around this. But the most remarkable thing about health, compared to all the other things we looked at, energy, sustainable food, water purification, nanomaterials, materials science in general, was that there's an incredible margin for improvement if we're successful in deploying cutting-edge, modern technology systems. There really is no other sector I can think of that is so far behind the cutting edge of technology, and if we're successful, that represents a massive opportunity for us to have a beneficial impact.

Many of you will be familiar with this being the default reality: so much of the valuable information that describes the current state of a patient's condition, or how efficiently a particular ward is operating, exists in this kind of paper-based environment. And when we look at the current health IT systems, in some research done recently at Imperial, talking mostly to junior doctors and nurses, people repeatedly say things like: "it's clunky, it's not intuitive, everything takes too many steps, and stuff doesn't mean what we all think it means"; or "it took an hour to teach a doctor how to prescribe paracetamol"; or "I don't think I'm stupid, but it couldn't be more complicated. If I can take a computer out of a box and understand how to use it straight away, why is it so useless that I need six hours of e-learning to be able to order a blood test?" There's this really strange dissonance, this frustration, that in your lunch break you're using Facebook and Uber and messaging platforms, the very cutting edge of the technology available to you in the world, and then in the afternoon you go back to a desktop-based system that struggles to boot, and you may be using pagers and fax machines to go about your daily work.

So we launched DeepMind Health in this context, back in February of 2016, in front of 400 or so clinicians at the Royal Society of Medicine. In many ways it was an incredibly daunting prospect. As many people have already pointed out, health is in some sense a graveyard of failed technology efforts over the last 20 years, and in that context we really had to think about what we were going to bring that would be very different. Clearly we have machine learning and artificial intelligence, but I think a lot of this is about the approach we take to developing software, and how you put both patients and clinicians at the very forefront of that.

The approach we take is to frame everything as starting with an observation of what a user does on a day-to-day basis. We spend lots of time immersing ourselves in wards and with nurses in the mess rooms, trying to observe what they do, define their challenge, and gather as many insights as we can, and then we immediately start to build something as fast as possible. We want to show what a rough design might look like, here are some wireframes, then develop it a little bit further and test it, and as we start to develop a solution, measure, build, and learn, and then rinse and repeat in very quick iterative cycles. And so within three weeks or so of meeting our first nurses and signing our agreements with the Royal Free, back in September and October, we had a working prototype. It obviously wasn't connected to any data, but nurses and doctors could actually point at it and say: this button's in the wrong place, this color is difficult to read, this menu hierarchy is in the wrong order. So we can instantly get feedback and deliver pretty much what nurses and doctors tell us they want to see.

And this is our mantra: ABC, Always Be Clinician-led. Every single project we will work on, and every project we've worked on so far, has been brought to us by a nurse or a doctor who has some insight into how they could change their day-to-day practice and how a technology solution might actually work. Here you can see one of the whiteboards in our office, always reminding ourselves who our users are and what they need.

So how might patient care be better supported by technology? I think there's obviously an enormous opportunity for improvement. One in ten patients experiences some kind of harm in hospital, and half of those harms are completely preventable or avoidable. In 50% of those cases, detection of the patient deterioration in question has actually been delayed, and that is largely a communication and coordination issue. I think this is because of the current limitations: most of the really valuable data sits on paper and on charts and isn't logged, tracked, or recorded. There's no auditable log you can verify of the pages and reminders that have been sent and those that have been missed. Who knows, maybe that person isn't even at work at the moment, maybe they're on holiday, and you don't know until they don't reply to two or three pages.

So there are two core patient safety challenges that have framed everything we do in DeepMind Health. The first is: how can we do a better job of identifying which patients are at risk of deterioration, largely in real time? And the second is: once we've identified which patients are at risk, how do we actually intervene? We don't want this to end up as a report that advises on some reorganization of facilities on a ward; we actually want to deploy technology in real time that enables clinicians to do a better job of escalation and intervention.

On patient safety challenge number one, better detection, we've looked at acute kidney injury (AKI) over the last twelve months or so, and this is a remarkably important problem. 25% of all admissions present with some kind of evidence of AKI, and there are 40,000 or so deaths in England due to AKI alone. It's estimated that something like 20% of these are preventable, and that the cost could be as much as one and a half billion pounds. So a couple of years ago, in 2014, NHS England issued a patient safety alert mandating that the acute kidney injury algorithm be deployed in hospitals. Once again, the first thing we did was try to observe users in their day-to-day setting. We went into the Royal Free and mapped out the pathway: what is the experience from a patient's perspective today? It turns out it's actually really complicated. There are lots of different stages a patient might go through, and what we noticed is that there's a whole series of life-threatening and complicated stages in that pathway, which seem to be where the key moments of deterioration get missed.
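The detection step can be made concrete with a deliberately simplified sketch. This is not the NHS England AKI algorithm itself, which also considers absolute creatinine rises within 48 hours and how the baseline is selected; it only illustrates the core ratio-to-baseline idea behind KDIGO-style staging, and the function names and worklist shape are our own.

```python
def aki_stage(current_creatinine, baseline_creatinine):
    """Simplified KDIGO-style staging by ratio of serum creatinine to
    baseline (both in umol/L). Illustrative only: the mandated NHS
    algorithm has additional rules this sketch deliberately omits."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0  # no alert

def patients_to_alert(latest_results, baselines):
    """Given each patient's latest creatinine and their baseline, return
    the (patient_id, stage) pairs that should trigger an alert, worst
    first: the shape of worklist a tool like Streams might surface."""
    alerts = []
    for pid, value in latest_results.items():
        stage = aki_stage(value, baselines[pid])
        if stage > 0:
            alerts.append((pid, stage))
    return sorted(alerts, key=lambda pair: -pair[1])
```

Run against a batch of incoming blood results, this turns raw lab values into a ranked list of patients needing review, which is the real-time detection half of the problem; escalation is the other half.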
What we wanted to do was take a step back and see how we could intervene earlier, to do better risk assessments and more real-time prevention and monitoring, and then hopefully redirect patients through the pathway towards a full recovery and a discharge. And once we'd broken it down into these sorts of steps, we had a shared visualization between us and the clinicians of where all the key intervention opportunities actually sit.

In response to this we developed Streams, our AKI alerting platform for blood test results, and I'm going to show you a little demo of what it looks like. We start at the login screen; we're able to securely log in on mobile, and as soon as you've logged in you're presented with a list of all the patients who appear to be flagged by the acute kidney injury algorithm. You can scrub down that list, maybe tap on Robert Jones, and see all of his blood test results. You can see that his creatinine has spiked over the last few weeks against the baseline, and we can compare that to any other blood test result, pinching, zooming, and scrolling through the others, like CRP or potassium. That's the very simple intervention we've built so far, kept really focused on one very specific condition, using the blood test results. But in the future I think there's a real opportunity for us to go much, much further and extend this to a broader patient-centric collaboration platform.

So here's a view of what things might look like in the future. We might be able to search through all the patients, maybe by consultant, by ward, or by specialty; we can scrub down our list of patients, or type in a particular patient we're interested in because we're on that ward. Then we go through to Robert Jones and can see his date of birth, his medical record number, and the ward he's on. We can go through his overview, see his current medication and his vital signs; here we can see that the patient's temperature is pretty high, and in his blood test results that his blood pressure is high. Maybe we want to look at his past diagnoses: we can see that this patient has pneumonia, see his previous procedures, and also the current medication he's on. And if we swipe right, we can go across to the timeline and see that this patient is being treated for sepsis, that he has an AKI stage 2, when the last blood test was ordered, that it's in the lab at the moment, and that it's ready for review. We can also scroll down and look at the sats, for example, or see that this patient had previously been transferred, and obviously go across and take another look at the blood test results. All of this information is in the palm of your hand, able to be reviewed whenever you need it. And here we have the X-ray on mobile: we can see that this patient has pneumonia, and we can also see the report from the radiologist, zoom in, and share it with a colleague.

So if we think about what this does for us, it essentially puts in the palm of our hand the ability to detect, in real time, patients who are at risk of deterioration. But that's only one part of the challenge; the next key thing we need to be able to do is escalate and intervene better, and this is where messaging and commenting become so important. Take, for example, the X-ray we looked at just now. Here we see that a registrar is able to make a comment on that report and then plus in a respiratory consultant to get an expert view, and that exchange can happen in an auditable way that allows us to verify retrospectively, if needed, what the senior clinician actually said and what action was subsequently taken. And here we see an example with the blood test results, where an F1 is able to call on the specialist nurse to come and see the patient as soon as possible and potentially organize a renal ultrasound. So that gives you a bit of a sense of some of the things we're doing in clinical decision support and direct patient care: how we're trying to build apps that put this kind of information, communication, and coordination in the palm of a clinician's hand.

Quite separately to this, we're also embarking on a research program to see if our machine learning and AI technologies can actually help with some aspects of diagnosis. So let me tell you a little bit about one of the problems we've been focusing on: sight loss. Diabetic retinopathy, as well as AMD, affects 625,000 people in the UK and 100 million people worldwide, and diabetes is obviously a very much growing problem. The remarkable thing is that if you have diabetes you're 25 times more likely to suffer some kind of sight loss, but, most interestingly, the very most severe types of sight loss due to diabetic retinopathy can actually be prevented through earlier detection. So one of the things we've been thinking about is how we could help with better real-time classification of the scans coming through, to enable a more sensible triage of which patients require a more immediate response. The current reality is that there's a great deal of backlog in reporting, which means the results may not be available in clinic for weeks; there's also a lack of consistency between different graders; and reporters will sometimes miss some of the subtle changes of diabetic retinopathy and AMD. With machine learning, one of the things we hope we might be able to deliver is much faster, near-instant results, as well as more consistent and more standardized performance.
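To make the triage idea concrete: suppose some classifier, of whatever kind, emits for each scan a probability distribution over urgency categories. Reordering the worklist is then a small, separate step. Everything here, the category names and the expected-urgency scoring, is a hypothetical sketch, not the system being built with Moorfields.

```python
# Hypothetical urgency categories, ordered least to most urgent.
URGENCY_LEVELS = ("routine", "semi_urgent", "urgent")

def urgency_score(probs):
    """Collapse a probability vector over URGENCY_LEVELS into a single
    sortable score: the expected urgency level under the model."""
    return sum(level * p for level, p in enumerate(probs))

def triage_worklist(predictions):
    """predictions: dict of scan_id -> probability vector from some
    classifier. Returns scan ids ordered most urgent first, so that
    likely-urgent scans reach a grader sooner instead of waiting in a
    first-come, first-served reporting queue."""
    return sorted(predictions, key=lambda sid: -urgency_score(predictions[sid]))
```

The design point is that the classifier does not replace the grader: it reorders the queue, attacking the backlog problem (urgent cases waiting weeks) rather than the grading problem itself.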
And I think this will also help us to understand and adjust for some of the normal variations that we see, which will allow us to increase our specificity.

So recently we've been working with Moorfields Eye Hospital, and I want to show you a very short video so that you can hear directly from our friends at Moorfields, including Pearse Keane, who we've been collaborating with on the project:

"My name is Pearse Keane, I'm a consultant ophthalmologist at Moorfields Eye Hospital." "I'm a professor and consultant eye surgeon here at Moorfields Eye Hospital. Moorfields is the oldest eye hospital in the world, and we see over 600,000 patient visits a year, which is more than twice any other hospital in either the United States or Europe. It's a wide-ranging hospital where we see very common conditions and also very rare conditions." "Every week we do many thousands of OCT scans. OCT gives us very high-resolution images of the eye in a very non-invasive manner." "Some people are waiting long periods of time to see specialists in the eye clinics, and therefore have the potential to lose sight in that waiting period. This is where DeepMind is able to help us understand these huge data sets and put them together, so that it benefits making a good diagnosis and achieving the best possible treatment for our patients." "If you have an OCT scan done, a machine learning algorithm will be able to tell you if it's something urgent versus something that's not so urgent. I think that this has huge potential for patients. It will allow us to get much earlier detection of these blinding diseases, and as a result of that I think we'll get much earlier intervention and much earlier treatment for these patients."

So this is very much early work, but we're committed to publishing all of the results, including our algorithms, our methodologies, and our technical implementations, and so hopefully, when we're ready, you'll hear more from us on the results of that research towards the back end of this year.

I think it's really important that I outline at least four of the key principles that define everything we do with respect to DeepMind Health. The onus is very much on us to find a new way of working with the NHS, and we really do want to take this opportunity to invite everybody to participate with us in helping us do things quite differently. There are at least four key things: being patient-centric, being clinician-led, committing to open and interoperable standards, and committing to the highest levels of transparency and accountability. So let me touch a little bit on each one of those.

The first thing to say is that I think we really have to focus on improved patient safety, outcomes, and experience, and in order to do that we actually need to put the patient first in everything we do. It was an incredibly energizing speech from Matthew just now, to hear him talk about the patient portal, about the patient owning their data and having access to it, and becoming an active participant both in guiding their own treatment and, ultimately, in holding accountable the system that is giving them care. At the moment, I think the experience is to always be on the receiving end of treatment and of everything else that is happening to you. I personally have a partner who has a chronic condition, and we've been in and out of hospital for four years or so. The number of times we've refilled in the same form, very patiently, with our date of birth and address and all those details, the number of times we're asked what drugs we took this morning, or which expert or specialist we saw earlier, can be very, very draining. I think that if there were a canonical record of this information, it would be much easier for us to play an active part in our own treatment.

And so we're really pleased to announce that we're going to have our first patient engagement
meeting on the 7th of September, and anybody is welcome to come and join. Please email us at patients at deepmind health. We're looking for feedback, we're looking for people to help co-design with us, we're looking for ideas, and in general any thoughts or suggestions that people have are most welcome.

We also recognize that it's incredibly important, and that it delivers enormous momentum for us, to be designing directly for clinicians first and foremost, and so all of our clinician-facing tools will be led by doctors and nurses. I actually met Pearse Keane because he cold-emailed me on LinkedIn; a couple of days later we got together for a coffee, and he pitched me this incredible idea, and I said, this is really awesome, we have to collaborate on this. It's the same story with our work with the Royal Free. I think this not only generates an enormous amount of momentum but allows us to really be led by the specific needs that nurses and doctors have. So once again, please email us if you have any ideas, if you want to be involved in user testing, or if you want to give us feedback on something we could be doing better. We can only be successful if we hear from the people who are at the front line of service delivery.

And I want to say something really important about how we go about creating an open ecosystem to drive innovation. People have been talking about open standards and interoperability for 20 years, and it's actually not that difficult. The technical implementation can be painful, but we're actually converging on a standard, the FHIR API standard, and aggregating the data in the back end, despite the fact that it's often spread across 100-plus databases with different schemas and different standards in many hospitals, is actually very tractable. This is not a technically challenging problem, and it's not a research problem, and we've actually had some success in starting to think about how we might do that with the Royal Free.
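To give a flavour of what converging on FHIR buys you: however a hospital's source systems store their results, a consuming app only has to understand one resource shape. Below is a minimal Python sketch of pulling a creatinine time series out of a FHIR-style Bundle of Observation resources. The bundle is hand-written sample data, and the values and timestamps are invented; only the resource structure and the LOINC code 2160-0 for serum creatinine follow the actual standard.

```python
import json

# A minimal FHIR-style Bundle, of the kind a server might return for a
# search over Observation resources. Values and dates are invented.
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org", "code": "2160-0"}]},
                  "effectiveDateTime": "2016-07-03T09:00:00Z",
                  "valueQuantity": {"value": 187, "unit": "umol/L"}}},
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org", "code": "2160-0"}]},
                  "effectiveDateTime": "2016-07-01T09:00:00Z",
                  "valueQuantity": {"value": 92, "unit": "umol/L"}}}
  ]
}
"""

def creatinine_series(bundle):
    """Extract (timestamp, value) pairs from a Bundle of Observations.
    Whatever schema the underlying database used, once the data is
    exposed as FHIR the consuming app only parses this one shape."""
    series = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        if resource.get("resourceType") != "Observation":
            continue
        series.append((resource["effectiveDateTime"],
                       resource["valueQuantity"]["value"]))
    return sorted(series)  # ISO 8601 timestamps sort lexicographically

series = creatinine_series(json.loads(bundle_json))
```

In practice the bundle would come from an HTTP search against a FHIR endpoint rather than a hard-coded string, but the parsing step, which is the part standardization actually simplifies, is the same.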
And I think, to come back to the question earlier this morning, one of the barriers for smaller companies, for startups, for the doctor-hackers that have been mentioned, to getting involved is that it's all well and good designing a really nice app, but if you can't get hold of the data, then you're never going to be able to test it and trial it. So there has to be a secure and controlled way of managing the release of that data to the appropriate experimental app, and I think that is what will align the ecosystem around a single back-end canonical record. That is what we really need to drive much more competition in this market; there needs to be much more transparency and competition, and so I think this is where patients and clinicians really need tools that actually encourage the innovation process. And I think it's a false dichotomy to say, on the one hand, that comprehensive top-down systems are all we can hope for, versus piecemeal, clinician-led innovation on the other. I actually think we need both, and we can have both, and that's something we'll be working towards. So we've made this commitment before, but we'll restate it again: all of our work will be built on open, non-proprietary standards that can be edited and updated by the community in the conventional way. There's actually a long history of doing this at Google.

Finally, I want to touch on the importance of governance. This is something that is very dear to me and that I feel very passionately about. Before we launched DeepMind Health in February, we announced that we would have an independent review panel of unpaid reviewers who meet four times a year, have a small budget for a secretariat to do audit, and can talk to any member of my team. They can interview any member of my DeepMind Health team one-on-one; they can look at any of our agreements and our product roadmap; and they can diligence our technical infrastructure and all of our information governance. They're not bound by any NDA or any contract, so they're free to speak their minds. We've appointed nine expert reviewers who I think will have the freedom to hold us accountable and to build trust in the community, so that they're able to say what they think of how we're doing, to give us guidance and direction, but also to have conversations with people outside the organization about what they see and about what we're achieving.

So with that, I want to say thank you very much for listening to me. Cheers.