The Importance of Diversity in AI Research: A Conversation with Lila Ibrahim, DeepMind COO
Diversity is still a significant problem in the tech sector, and it's crucial that we address this issue to create better technology. According to Lila Ibrahim, COO of DeepMind, "the numbers are flat, but there are all sorts of steps being taken to address it." She emphasizes the importance of diversifying candidate pools, removing unconscious bias from job descriptions, and investing in education programs to make AI research more accessible.
Ibrahim stresses that this isn't just tokenism: it's about making better technology. "Diversity and diverse perspectives will create AGI faster and safer, and with better results," she explains. She highlights open questions such as how to teach curiosity and how to minimize bias in the code we write, and the importance of ensuring that research is representative of society.
Ibrahim also stresses the need for ethics to be a keystone at every stage of the process, rather than an afterthought. "We have to be thinking about our responsibility for the technology we develop," she emphasizes. This includes being mindful of the impact on the people around us and involving both technologists and policymakers in the conversation.
In her experience, Ibrahim notes that there's something special about being based out of London versus Silicon Valley. In Silicon Valley, "you're surrounded by technologists," whereas in London "it's part of your daily life: you need to be thinking about the work that you're doing and how it is going to impact all the people around you." However, she also recognizes that AI research cannot just rest on the shoulders of its designers; the public and government must also have a hand in ensuring the technology is used responsibly.
For Ibrahim, understanding AI requires not just technical expertise but also awareness of ethics, diversity, and fairness. She believes that people want to understand what they're using, and want their representatives to be informed about these issues as well. By acknowledging this need for public engagement and education, we can create a more inclusive and responsible AI research community.
The Conversation Continues: Beyond DeepMind
Head over to the show notes to explore the world of AI research beyond DeepMind. You'll find resources, stories, and feedback from listeners who are eager to discuss the future of AI with experts like Lila Ibrahim.
In this ongoing conversation, we invite you to share your thoughts on ethics, diversity, and fairness in AI research. Whether it's a question, a resource, or a story, please let us know how you'd like to contribute to the discussion. You can message us on Twitter or email us at podcast@deepmind.com.
By engaging with each other and sharing our perspectives, we can work towards creating a more inclusive and responsible AI research community that benefits everyone. The future of AI is too important to be left solely in the hands of technologists; it's time for public policy, education, and community engagement to play a role in shaping this exciting and rapidly evolving field.
Calling it "the solution" is probably a bit grand, as host Hannah Fry notes, but Ibrahim is confident that with diversity and inclusion at the forefront of our work, we can create better technology that makes a positive impact on society. Join us as we continue the conversation about AI research, ethics, and diversity, and let's work together to build a brighter future for all.
"WEBVTTKind: captionsLanguage: enwelcome back to the sixth episode of deep mind the podcast my name is Hanna Frey I am a mathematician who's worked with data and algorithms for the last decade or so and I spent the last year at deep mind an organization that is trying to solve intelligence and then use it to solve some of society's problems there are an awful lot of people working in the field of artificial intelligence moving forward our understanding of the whole area and for many of them is a terrifically exciting place to be with breaking new frontiers of problem solving and seeing great leaps ahead but before any of that hits the outside world the first Inklings of me breakthroughs here at deep mind come in the form of regular poster sessions have a Marla can carry out the following simple tasks where I'm gonna give you a number and a secret symbol and what the sum between those two are and you have to infer from that what the values but it requires our agents to have some properties that we think are desirable like learning to learn having a memory and processing those memories we're studying analogical reasoning and logical reasoning is very important because it's key to scientific discovery also human reasoning the main question we ask is how can we design your own networks that are able to do our logic to be single my poster is about verification of neural networks in this day and age when we deploy in your networks into the real-world applications we want to make sure that these newer networks are safe for example if you have an image classifier we don't ever want to predict the cat to be like a car with something like that I always say deep mind is a bit like academia on steroids okay it's still like an emu but we have a lot of compute a lot of great people clustered together a lot of help to like manage ourselves so while there is obvious excitement about AI research this new era of artificial intelligence also comes with concerns there is unease about the way it might be implemented used and abused for the rest of this episode we are looking at the more human side of technology and the fight to find a future of AI that works for everyone in 2017 deepmind set up dedicated teams working on how AI impacts ethics and society with the aim of making sure that the algorithms designed in this building a positive force for good bang on I know you're thinking surely algorithms aren't ever good or bad in and of themselves it's how they used that matters after all GPS was invented to launch nuclear missiles and now helps to deliver pizzas and speakers playing pop music on repeat have been deployed as a torture device isn't the technology itself just neutral good question and I think something a lot of people say and believe and I can see why they say that this is Verity Harding co-lead of deep mind ethics and society I hit let's earth famous saying about as long as there's been fire there's been arson but you can use something that's for good you can use it for bad but I think actually as we're developing increasingly so stated technologies that have real impact on people's lives it's not really an acceptable thing to say you can't be building something that's going to have this kind of monumental impact or potentially transformative impact and not care about how it's going to be used it's that part of a concern then the technology that might have been built for one purpose ends up being used in a different way I think definitely that's some of it I think definitely that's some of it 
I think definitely that's some of it, because you could foresee a situation where you're building a facial recognition tool because you want to allow somebody to quickly find pictures of their husband or wife or mom or dad, and that's a great thing. But that facial recognition technology, once developed, could of course be used to target political dissidents and pick them out of a crowd, you know. And so I think that is definitely one of the concerns: that you might create something for one purpose and it be used for another.
On the topic of facial recognition, Brad Smith, the president of Microsoft, recently refused a request by a US police department to install their algorithm in cop cars and body cameras, and he's publicly called for more careful thought and societal dialogue about potentially regulating the use of the technology. And here at DeepMind, more generally, there is a strong sense that the people behind the science have a duty to investigate the wider and perhaps less predictable impacts of their work.
I don't think it's okay to build something, whether that be a product or a service, and put it out there and just hope that you make the world a better place. I think it's important that you are deliberate and intentional about why you're building this. Who are you building it for? What are you hoping to do? What is your intention with this technology? And if you start from that premise, then I think you're more likely to get to a better outcome, where you do the good that you hoped you were going to do.
The problem is that without these steps it is very easy for unintentional consequences to creep up on you. You only need to look at the news stories about social media from the past two years to see just how much algorithms have changed our society in unexpected ways. Lila Ibrahim is DeepMind's COO and has over 20 years' experience working in the tech sector. She has seen firsthand how hard a booming industry has found it to keep up with being responsible.
In 2006 I went into the middle of the Amazon and we built a computer lab and health care, so we put in internet and computers, etc. We knew we had a responsibility not to just leave it there, but to train folks to take care of it, to think about the sustainability. But I think that's kind of where things tend to end. Ethics means something very different now, and responsibility means something very different now, because technology is in everybody's hands. It's no longer limited to a few people for a specific application. It's a lot easier to get into the tech sector and to make technology that can have value to people, and at the same time with that comes a lot of responsibility that I don't think, in general, the tech sector has taken into account.
But the last few years have shown how hugely transformative and disruptive AI can be, and brought sharply into focus the very possible negative outcomes of ill-thought-through technology. But as Verity told me, the tide is slowly beginning to turn: much of the drive for a conversation about ethics is coming from within the technology community itself.
Three years ago, in 2016, some of the scientists from different labs at different companies met at a conference and were talking about how excited they were about the potential for AI to do a lot of good, but acknowledging that a technology that's so powerful that it has the potential to be transformative in a very good way must also have the potential to be transformative in a not-so-good way. And so they came together to say, well, what can we do about it? And so the Partnership on AI was born.
It includes members from Amnesty International, the Electronic Frontier Foundation, the BBC and Princeton University, amongst many, many others, and together they're hoping to come up with best practices in AI, making sure that society stays firmly at the forefront of engineers' minds.
So the Partnership on AI, interestingly, was founded by the biggest tech companies: it was founded by DeepMind, but also Google, Facebook, Amazon, IBM and Apple. One thing that's really interesting about the Partnership on AI is that the board membership is made up of independent board members and representatives of the companies, and so it's creating a space where those different groups aren't siloed from each other, having debates in different rooms and not listening, but somewhere where honest people with the best of intentions can come together and challenge each other and scrutinize each other and hold each other accountable, but also have frank, open, honest debate about issues where reasonable people can disagree. I really believe that the outcome of that will be better decision-making, both in companies but elsewhere as well.
Does it sometimes get quite heated in those conversations?
You know, my experience of it is that it doesn't get heated, but it's passionate. So people aren't angry with each other and there's not aggressive argument, but people are very honest and very open and very challenging, and that's been received really well in all cases.
How do you protect against rogue companies who are not a part of these groups just doing whatever they want?
If enough companies and enough groups sign up to something and it becomes the norm, it's then really obvious when people aren't doing it, and I do think people will be kind of called out for that. It'll no longer be tenable to not operate in the way that everybody else is operating.
It's not just theoretical concerns about runaway applications of AI that are prompting these conversations, but real examples of algorithms that have already been let loose on the world, with real question marks about whether their benefits outweigh their harm. A notorious example is the use of AI in the criminal justice system. Now, you may have heard of these algorithms already. When a defendant appears in court, the AI can assess the defendant's chances of going on to commit another crime in future, and that risk score is then used by a judge to help decide whether the defendant should be awarded bail and, in some cases, how long someone's sentence should be. There is good justification for something like this, because there is an enormous amount of luck involved in the human judicial system. Studies have shown that if you take the same case to a different judge, you will often get a different response. If you take the same case to the same judge on a different day, you'll often get a different response. Judges don't like giving the same response too many times in a row, and so if a series of successful bail hearings have gone before you, your chances of being successful fall. And there is even evidence to suggest that judges tend to be a lot stricter in towns where the local sports team has lost recently. Using AI to help make these decisions can help to eliminate a lot of that inconsistency, but you have to tread pretty carefully.
If you, without thought and care and due attention to the history of racial prejudice in the criminal justice system, build something that claims to be able to predict somebody's likelihood of reform and rehabilitation and reoffending, then it is likely, at least in my view, that that's going to fail.
If you build something with the intention of addressing those biases and you work to include the community in some way, could there potentially be a beneficial outcome? Maybe, but I haven't seen it yet.
And by fail, you're really talking about treating black defendants differently to white defendants?
Absolutely. And once you start to look at the algorithms and the data that they've been built on, oftentimes you can see whether they were built on data that was already biased, so of course this was the outcome.
The issue came to public attention in 2016, after a group of US investigative journalists from ProPublica published a damning report on one particular company's criminal risk scores. Their study showed that the algorithm was twice as likely to wrongly categorize black defendants as being likely to reoffend than white defendants. Now, I should just point out that DeepMind does not build these systems, but the whole industry, alongside the Partnership on AI, has been part of the conversation of how to address them. One of those people is William Isaac, a social scientist at DeepMind. He says that the 2016 ProPublica investigation made people realize that switching over to algorithms doesn't make decisions any more objective.
Even with AI and ML tools, you are getting into the social environment where you actually have the same norms, the same kind of, like, systemic biases; they're still all present. So it's really hard to say that somehow this will replace all of the kind of subjective, preconceived notions about certain groups, or historical biases against them, and that you can start all over again. And so I think that was the wake-up call: that it is not as objective as it seems, and that as a result we still have to grapple with those questions.
The problem is the data which gives the algorithm its predictive abilities: questions like, how many times were you arrested as a juvenile? But if you are, say, a young black man in America, it doesn't matter how law-abiding you are, the chances are that you will have had many more negative interactions with the police than someone exactly like you who happened to be white. And if you're using that data to dictate who deserves to be given bail or not, then you run a serious risk of perpetuating societal imbalances going forwards. This is Silvia Chiappa, a staff research scientist at DeepMind.
Researchers don't fully understand what this fairness is about. It also looks like a messy area, in the sense that it is not a purely technical problem, and it's very difficult to understand how to define fairness, and it's difficult to separate the technical part from the ethical one.
This is an important point, because defining exactly what you mean by fair is surprisingly tricky. Of course you'd want an algorithm that makes equally accurate predictions for black and white defendants. The algorithm should also be equally good at picking out the defendants who are likely to reoffend, whatever racial group they belong to. And, as ProPublica pointed out, the algorithm should make the same kinds of mistakes at the same rate for everyone, regardless of race. Ethically, you'd want all of those things to be true, but technically that's not always going to be possible: if your data set has bias in it, there are some kinds of fairness that are mathematically incompatible with others.
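To see why some of these criteria cannot all hold at once, here is a minimal numerical sketch. It is purely illustrative and not drawn from the episode: the group sizes, base rates and accuracy figures are invented. The point is that if two groups have different underlying rates of recorded reoffending, a risk score with identical precision and recall for both groups must produce different false positive rates.

```python
# Illustrative sketch only: invented numbers, plain arithmetic, no real data.
# Given a group's base rate and a classifier's recall and precision, derive the
# implied confusion matrix and report the false positive rate.

def false_positive_rate(n, base_rate, recall, precision):
    positives = n * base_rate                    # people who will actually reoffend
    negatives = n - positives                    # people who will not
    true_positives = recall * positives          # reoffenders correctly flagged as high risk
    flagged = true_positives / precision         # everyone flagged as high risk
    false_positives = flagged - true_positives   # non-reoffenders wrongly flagged
    return false_positives / negatives

# Two hypothetical groups with different underlying rates of recorded reoffending,
# scored by a model with the *same* precision and recall for each group.
for group, base_rate in [("Group A", 0.5), ("Group B", 0.2)]:
    fpr = false_positive_rate(n=1000, base_rate=base_rate, recall=0.8, precision=0.8)
    print(f"{group}: false positive rate = {fpr:.0%}")

# Prints 20% for Group A and 5% for Group B: equally "accurate" by precision and
# recall, yet one group is four times as likely to be wrongly flagged as high risk.
```

Which of these error rates you choose to equalize, and at what cost to the others, is exactly the kind of judgement the episode is describing, and it cannot be settled by the mathematics alone.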
And even if you could guarantee all of these things, there are still a number of ethical issues to contend with. How do you measure fairness? Who is excluded from your definition? How do you make those decisions transparent? And, ultimately, how do people contest the decisions made by those algorithms? See, I told you it was tricky. Coming at this from two very different perspectives, William and Silvia started looking into the bigger issue of fairness in algorithms.
Even though we had kind of different frameworks, me as a social scientist and Silvia as a machine learning researcher, the actual overlap between how we would approach this, and basically the assumptions that are embedded within it, were remarkably similar. And actually part of us was saying, like, oh, look at these papers in social science that are kind of making the same point; they just hadn't actually had a way to communicate that formally.
You're listening to a podcast from the people at DeepMind.
In April 2019, William and Silvia co-published a paper on fairness in AI entitled "A Causal Bayesian Networks Viewpoint on Fairness". In it, they show that no matter how fair algorithms might be, if the data they're learning from is biased, we still can't trust their results.
I don't think it is possible to find technical solutions that are completely satisfactory; at some point we need to take decisions about whether the kind of unfairness is acceptable or not. But we can advance a lot, and that's why we need more researchers involved, and not just machine learning researchers but researchers from different communities, to be raising awareness about this problem and to find solutions. But as we will never be able to find a completely satisfactory solution from a technical viewpoint, at that point we need to take decisions. It is important to talk about these issues, and that is something that is missing at the moment.
I do think this is fundamentally, like, a societal, ethical question and challenge, and it will require lots of stakeholders to address. If you have, let's say, a facial recognition tool that's designed to find missing children, what threshold do you set as a society where you say, okay, this is acceptable: we maybe are less successful at identifying children with darker faces, so what threshold do we say is acceptable? Because that's not a technical question; that's a social and political question, and a normative question. Even if you do have a classifier or facial recognition software that's fair, the application of it may be in unfair ways, and so that might present a second question that is separate from the actual, kind of, deciding on the threshold, right? If you're just using it in neighborhoods that are predominantly one group or one ethnicity, that presents a whole other set of challenges for whether or not that's an ethical use of a particular technology.
You can't assess whether these algorithms are good or bad in isolation; they don't exist on their own. You have to place them in the context of the world that they're being used in, like the criminal justice system or in health care. Here's Verity Harding again.
This is what I mean by it being a kind of much bigger discussion that potentially the use of algorithms is highlighting. My fear is that people will kind of get a checkmark that says we've tested and this algorithm isn't biased, and therefore you should feel free to use it, and that to me isn't going far enough. I think there needs to be a further discussion then about, well, is this making those decisions that were already bad worse, or more quickly, and therefore more of them? You know, that kind of thing.
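To make concrete why a checkmark saying "the model isn't biased" can still mislead, here is a small, self-contained simulation. It is purely illustrative, with invented numbers, and is not taken from the episode or from Chiappa and Isaac's paper: two groups behave identically, but one group's behaviour is recorded twice as often, so any model that faithfully learns the recorded rates will inherit that imbalance no matter how carefully the modelling itself is done.

```python
# Illustrative sketch only: invented rates, synthetic data, no real-world figures.
import random

random.seed(0)

TRUE_OFFENCE_RATE = 0.10                 # identical real behaviour in both groups
RECORDING_RATE = {"A": 0.3, "B": 0.6}    # group B's offences are recorded twice as often

def recorded_offence_rate(group, n=100_000):
    """Simulate n people and return the rate of *recorded* offences for the group."""
    recorded = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENCE_RATE
        if offended and random.random() < RECORDING_RATE[group]:
            recorded += 1
    return recorded / n

for group in ("A", "B"):
    print(f"group {group}: true offence rate = {TRUE_OFFENCE_RATE:.0%}, "
          f"recorded rate = {recorded_offence_rate(group):.1%}")

# Prints roughly 3% for group A and 6% for group B. A model trained to predict the
# recorded label would score group B as about twice as risky, even though the true
# behaviour is identical, and no audit of the model alone would reveal the problem.
```

That is the sense in which a biased data-collection process taints everything trained downstream of it.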
But things are changing. Here's William on what has happened since that ProPublica story broke.
They're going back and reconsidering what measures they collect, right, and going back and trying to create more robust data sets, thinking about who is collecting the actual data itself. Will it ever be perfect, where we have bias-free, purely pure data? No, I don't think that that's something that is ever going to happen. But I do think that people will be skeptical when they ask about what datasets are used and they don't get a satisfactory answer, right? I do think people will ask: is this data set representative? Does it have balance across different groups? So people will start asking the right questions and interrogating datasets and models more aggressively, and I think that will lead to better outcomes.
And crucially, more people are now being included as part of the conversation.
In the aftermath of some of my work, among many others, on predictive policing, many cities in California actually started implementing citizen boards, so when police departments wanted to acquire a new police technology that included uses of machine learning or artificial intelligence, they had to go in front of a citizen board and actually have the local community evaluate the tool for different metrics, including fairness and bias.
Getting different voices involved in the conversation is essential to making sure that we build a future that belongs to all of us, because what seems obvious to one person just wouldn't occur to another. Your perspective is hard-coded into the work that you create. There are clear examples of this everywhere outside of AI: able-bodied people designing buildings that disabled people can't use, or nude tights and plasters that only work if your skin is one particular color, presumably the same as the designers'. And the algorithms that we've created are really highlighting this issue, like the ones used to automatically screen CVs and predict which candidates will fit best in a company. Here's Verity Harding again.
If it's based on historically discriminatory hiring decisions by either intentionally or unintentionally biased humans, then it's going to kind of recreate those patterns.
Like, if you've got a company where white men have succeeded, and they're looking for candidates who'll succeed, it's going to pick out white male CVs?
Yes, exactly. And if the people building the technology are all white males as well, then the likelihood of paying attention to that potential bias and being aware of it (we all have our blind spots), then the likelihood increases.
We've seen driverless cars that don't spot pedestrians with darker skin tones, tumor screening algorithms that aren't as effective for patients with ethnicities other than white European, and lots and lots and lots of issues around gender. All of this is kind of inevitable unless you have a range of different viewpoints in your design process.
The most important thing, in my point of view, for ensuring that these things are, if not unbiased, then at least that you're being intentional about what you're building and aware of the potential bias, is that your team is a diverse team, is that you have a broad set of voices involved. And it's actually much simpler to do that than is suggested.
And the issue of gender diversity has been a particular focus of late.
I think there's plenty of young women and girls who are really excited by science and STEM subjects, and it's an easy get-out to say that there aren't enough women in STEM and that's why workforces aren't diverse.
But actually it's much more about making sure that it's a safe space for women and girls to work, that they're not discriminated against once they're there, that you're able to not just attract and hire them but that you're able to keep them and make sure that it's a place where they feel comfortable working. And so I think it's much more important that we look at how women are treated in science than to just dismiss it as something that girls aren't interested in at a young age.
Lila Ibrahim, DeepMind's COO, is very conscious that diversity is still a problem in the tech sector as a whole.
Talk about things that keep me up at night, right? So here I am, a professional of 25-plus years with an engineering background, a mom also raising nine-year-old twin daughters. I would have hoped by now we would have solved the problem, and yet we haven't. We're, like, at the same place: the numbers are flat. But there are all sorts of steps being taken to address it. There's the short-term stuff you can do, which are things like: you diversify your candidate pools; if you're doing university recruiting, you look at a broad range of universities, and ones that have a broader student representation and have support structures often in place to help the students through their academics and communities; you look at job descriptions and ensure that you don't have unconscious bias reflected in your job descriptions. So once you're in the recruiting pipeline, then you may need to make sure candidates have the right experience. We are being very deliberate about how we invest back in education. AI is something that will change future generations, so how do we make this a field that's more accessible? So, for example, whether it's funding diversity scholars at universities or funding AI chairs at universities to increase the pipeline, I think that helps fuel some of the academic aspects as well as support our, like, long-term recruiting.
This isn't just tokenism that we're talking about here; this is about making better technology.
Diversity and diverse perspectives will create AGI faster and safer, and with just a better result, because one of the things I worry about is: how do we avoid our own internal bias? A lot of the work around deep reinforcement learning started from specific pockets, and many people grew up in those labs or those universities, and, you know, they brought their former colleagues, and so we have a pretty strong network of people that have known each other for a long time, which is fantastic, and they can really advance certain aspects of our work. And yet there are other areas that are emerging. How do you teach curiosity? How do we ensure that we minimize bias in the code that we're writing? Who's to say what intelligence is and isn't, unless you have a better representation from society? And that's just on the research side. On the operations side too, you think about things like, okay, think about public policy: if you're asking governments to think about how they're going to treat artificial intelligence, then you want people that are representative of the constituents of the population. If we want to be focused on education, making sure that we're not just focused on specific schools but a broader range, so we're bringing more people into this space. I just think it's going to be imperative, for us to truly solve intelligence, that we're just going to need to have more diversity.
So she's positive that, well, "solution" is probably a bit grand a word to suggest, but it is part of the way forward: making ethics a kind of keystone at every stage of the process, rather than having it as an afterthought.
Oh, absolutely. We have to be thinking about our responsibility for the technology we develop and, candidly, to society as a whole, every step along the way. And I think that there's something quite special about being headquartered out of London versus being based out of Silicon Valley. I love Silicon Valley, it's where my career has really developed, and yet you're surrounded by technologists, you know, from the billboard signs to the marketing and promotion. Here in London it's so multicultural, and I feel like it's part of your daily life: you need to be thinking about the work that you're doing and how it is going to impact all the people around you.
But of course we can't just leave the solution solely in the hands of the people who are designing these things. It's our future too; the public and government should also have a hand in this.
My impression is that people want to understand what they're using, and want to understand what makes it work and how it works, but more importantly they want their representatives and the people tasked with keeping them safe and secure to understand it too, and I think that's why we've seen a bit of a breakdown in recent times.
If you want to know more about ethics, diversity and fairness, then head over to the show notes, where you can also explore the world of AI research beyond DeepMind. And we'd welcome your feedback or your questions on any aspects of artificial intelligence that we're covering in this series. So if you want in on the discussion, or want to point us to stories or resources that you think other listeners would find helpful, then please let us know: you can message us on Twitter, or you can email us at podcast@deepmind.com.