#68 The Future of Responsible AI (with Maria Luciana Axente)

**The Need for Change: Data Scientists as Advocates for Responsible AI**

Technology is advancing at an unprecedented rate, and Artificial Intelligence (AI) is no exception. As data scientists, we have a unique opportunity to shape the future of this technology and ensure that it is developed and used in ways that benefit humanity. However, this requires a fundamental shift in our mindset and approach to AI development.

We need to recognize that our job as data scientists is not just about writing code or processing data sets; it is about changing people's lives. We have the power to create technology that can either improve or harm society, and it is our responsibility to acknowledge that impact and build AI that accounts for its negative implications. Doing so requires literacy in AI ethics and an understanding of the value chain of AI development.

**The Importance of Grassroots Activism**

Until grassroots activism takes hold at the data scientist level, we won't be able to transform the mindset of the technology world as a whole. We need people inside the industry to acknowledge that their jobs are far more impactful than they may realize: it's not just about writing code or processing data sets; it's about creating technology that changes people's lives.

We need to start having conversations about the implications and impact of AI, rather than only learning how to build machine learning algorithms or voice assistants. From there, we can work out how best to build AI in a way that achieves positive outcomes and keeps negative consequences under control.
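To make this concrete, here is a minimal sketch of what one bottom-up habit might look like in practice: a simple audit of a model's positive-prediction rates across groups before deployment. The data, group labels, and 0.1 threshold below are hypothetical placeholders for illustration, not an established standard or a prescribed method.

```python
# A minimal, illustrative sketch of a pre-deployment bias check.
# All data, group labels, and the threshold are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = positive decision) and a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, groups)
print(f"Demographic parity gap: {gap:.2f}")

# An illustrative gate: flag the model for review rather than shipping silently.
# The 0.1 threshold is an assumption, not an agreed norm.
if gap > 0.1:
    print("Disparity exceeds threshold -- escalate for review.")
```

The specific metric matters less than the habit it represents: routinely asking "should we ship this?" rather than only "can we build this?" as part of everyday data science work.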

**The Role of Data Scientists in Shaping the Future of AI**

As data scientists, we occupy a unique position in this industry. We are the ones closest to building these tools, and we have the leverage to make a change. If we don't take responsibility for our work and advocate for responsible AI development from the bottom up, no framework, methodology, or policy handed down by our companies will get us very far.

We need to challenge everything we take at face value and inform ourselves about what is really going on with this technology: who is behind it, what interests it serves, and how we can make a change from where we stand. This requires a bottom-up approach, in which people like us understand the true potential of AI and work towards a different outcome.

**The Urgency of Action**

We are at a unique moment in history where we have the power to shape the future of technology. We can either get it right or get it wrong. The technology that's currently being developed has the potential to be both incredibly beneficial and incredibly harmful.

It's essential that we raise awareness about the potential risks and consequences of AI, not just for the sake of sounding alarmist, but because it's a necessary step towards creating positive change. We need to work together as a community to ensure that AI is developed and used in ways that benefit humanity.

**The Power of Individual Action**

As individuals, we have the power to make a difference. We can start by questioning our assumptions about AI and challenging everything we take at face value. We can inform ourselves about what is really going on with this technology and push for positive change from where we are.

We don't need to wait for companies or governments to provide us with frameworks or policies; we can start advocating for responsible AI development ourselves. This requires a willingness to be active participants in this conversation, rather than just saying it's someone else's job.

**The Future of AI Development**

As we move forward, it's essential that we combine a top-down approach with a bottom-up one. Companies need to develop the right frameworks for responsible AI development, while individuals like us need to understand the true potential of AI and work towards creating positive change.

We are still a small community, but I'm hopeful that the new generation coming up will be inspired to take on this vision and join us in pushing the boundaries of what's possible with AI. Together, we can create technology that benefits humanity and avoids the pitfalls of unregulated AI development.

**Conclusion**

In conclusion, as data scientists we have a unique responsibility to shape the future of AI development. We need to acknowledge the impact of our work and build technology that accounts for its negative implications. By starting grassroots activism at the data scientist level and acting together, we can create positive change and ensure that AI is developed and used in ways that benefit humanity.

"WEBVTTKind: captionsLanguage: enhello this is adel neme from datacamp and welcome to dataframed a podcast covering all things data and its impact on organizations across the world i think we can all agree that one of the first things the general public thinks about when they hear the term ai is the ethics of ai and its potential impact on humanity a lot of the time though the concept of risks associated with ai is driven by conceptions in the popular media and pop culture and high profile failures of ai in public these failures have prompted many organizations to think twice about the models they put out in production and to analyze the risks of ai in order to deploy them responsibly this is why i'm excited to have maria luciana xente on today's podcast maria is responsible ai and ai for good lead at pwc uk in her role maria leads the implementation of ethics in ai for the firm while partnering with industry academia governments ngos and civil society to harness the power of ai in an ethical and responsible manner while acknowledging its benefits and risks in many walks of life she has played a crucial part in the development and setup of pwc uk's ai center of excellence the firm's ai strategy and most recently the development of pwc's responsible ai toolkit the firm's methodology for embedding ethics in ai maria is a globally recognized ai ethics expert an advisory board member for the uk all party parliamentary group on ai a member of bsi iso and ieee ai standard groups a fellow of the rsa and an advocate for gender diversity children and youth rights in the age of ai throughout the episode maria and i talk about her background where responsible ai intersects and diverges from the ethics of ai the state of responsible ai in organizations today how ai responsibility is linked to organizational culture and values and most importantly what data scientists can do today to ensure that their work is used ethically and responsibly by organizations and how bottom-up activism can nudge organizations in the proper direction if you enjoyed today's conversation with maria and want to check out previous episodes of the podcasts and show notes make sure to go to www.datacamp.com community podcast maria it's great to have you on the show i'm really excited to talk to you about the state of responsible ai ai governance and accountability and how organizations can start with the responsible ai journey but before can you give us a brief introduction about your background and how you got into the data and ethics space of course hello everyone and thank you adele for inviting me it's a pleasure to be talking to you and let's start exploring what's what's the buzz what's the fuss about ethics of the air why would you talk about it then what can we do about it so a bit of about my background so i work with bwc uk which probably most of you know it's a professional services company and the question might be what is pwc doing in the aia space and hopefully throughout the conversation we'll be having today you get a bit of a more more flavor why companies like pwc need to have a role in shaping the story of um ethical and responsible ai so i joined the firm seven years ago my background was a business in digital transformation so i got to set up businesses transform businesses and then move into the technology and digital space and you know work on this transformation with the help of technology and at some point i had the opportunity to focus from the wide range of emerging technology which was the case before 
that into something a bit more specialized which is the ai and it was a match made in heaven because when we started exploring pwc with other lenses and technology we realized how important it is to understand the whole context where this novel technology is being developed and used and how important to understand the consideration that needs to be part of the design much beyond the traditional boundaries of experience design and back into how businesses or the context will change as a result of using um a tool that has its own agency on operates very differently from all previous technologies so the last four years i've been part of the ai center of excellence there's been a fascinating journey because i was there from the beginning so i helped set it up you know put the strategy together it was very much like a new venture that we develop and gradually based on my previous experience and and my education i started exploring the ethical layer and what are the key moral consideration we need to consider what does it mean from a business perspective and if we have this vision of a good life with ai then how do we make it happen what needs to be in place what needs to be changed and that's why we came up with this concept of responsible ai that allows us to be able to create a vision of a good life with ai but also create what it's needed to achieve that vision and be very practical because ultimately we we are a for-profit company that you know needs to demonstrate value added from what we create so we can't afford just to vision about you know emerging technology we need to be able to deliver it in a way that is sustainable that's great and i'm excited to unpack all of that with you so i want to first start off by asking how you would define responsible ai you know over the past decade a lot of the discussion on air risk has fallen under the umbrella term of ai ethics and over the past few years we've seen the gradual rise of responsible ai i'd love if you can define responsible ai specifically in how it intersects and doesn't with ai ethics i think that's an excellent question and i think i am i'm very grateful that you have asked it because i think we need to start framing those concepts and and understand how they overlap if there's ever overlap what's the relationship between them so that we have that clarity we know our call is if you don't frame a new concept well enough then you will struggle to make it happen you'll struggle to get it off the ground so this is quite i would say uh personal to us and to myself is is is a leading responsibility i work for pwc uk is how we would define that the two terms are in the following manner to say that the ethics of ai is a new apply ethic discipline that is in in the process of being developed some might say it's a branch of digital ethics or data ethics or is information philosophy in general but it's definitely a new branch of applied ethics that is concerned with studying the moral implication of the technology with label ai that that those technology that ultimately have some key unique characteristics that makes them quite different from all the the technology we've seen before mainly to the fact that they display or they own the possess agency that means on one hand that they will interact with an external environment they will adapt to this external environment to a certain extent shape that external environment and they have a certain degree of autonomy from human supervision and these types of new assets require new ways of thinking 
in terms of normative question is it right or easily wrong for us to be using those machine how should we treat those type of agents so that's the discipline of the ethics of ai which is a bit more abstract and also it's a bit more forward-looking because we're stepping into areas we haven't been before you know in the history of humanity we haven't had assets like this that will operate alongside us and push the boundaries of what's wrong with rights we have thought about it in fictional work but we haven't had them in real life so we have to be able to reason and discuss and debates what are those moral implications and in this process the ethics of ai will allow us to formulate a good life with ai so what does it mean if we have these potent tools that have both benefits and risks how do we make sure they use it in a way that is aligned with some human goals we are not just creating ai for the sake of doing it or because we have some sort of a god complex we aligning it with the purpose of humanity and those you know purposes that we have as a human race are something that are very interesting because we haven't had to come together to say or maybe we did i'll get to that in a second but to be that precise we want to help everything to flourish in this way in fact we have it's the human rights but the ai ethics is that discipline that allows us to not only understand the moral implication but also say this is where we're heading this is what's acceptable and what's acceptable is not just to do this for the sake of doing it is because we want to achieve a human related goal related to that is how do we achieve the division right because every vision has to have some sort of a modus operandi or or or an operating modalities that will allow us to get there so we need to have another body of discipline that that collects everything that we need to be able to achieve that vision and that's where where responsibility is being placed because responsibility is much more tactical it actually feeds into now that we understand what's morally permissible and also what are the risk area where things can go wrong we can take that vision and translate it into you know tactical plans and we have a set of approaches and tools and ways of thinking that are multidisciplinary in nature that are holistic in nature because we realize that the disruption ai brings it's gonna transform who we are so therefore we need to approach this much more holistically but responsible ai is the engine that allows us to get to that vision if you want and has you know different approaches it has the risk angle which means not only understanding what are the risks attached to ai the new risk that ai brings but also what current risk in each organization's and at the society level and personal level will be augmented by ai then you have a new operating model how do you govern how how you control over a self-learning artifact that operates in a different way that it's stochastic in nature and therefore we need to upgrade our linear processes into something that's much more real-time and dynamic and also if we agree that we have a moral vision what will be the values that need to be incorporated in each and every single use case and context that specific application will operate and bring it all together the risk side the governance side but also the the value of the values that need to be incorporated gives you this new discipline that requires input from a wide range of disciplines uh case by case example by examples that come 
together to actually say this is what we need to be doing to achieve that vision of a good ai in this specific context so correct me if i'm wrong in summary the ethics of ai is about how we should align our moral values with ai systems whereas responsible ai is more about how to operationalize these moral values alongside other disciplines in order for us to create value out of responsible ai is that correct it's something on those lines but i will say the ethics of ai is not the framework it's a vision it's where we want to go you know framework is the one that gets us there okay awesome so you work with a lot of data and business leaders trying to integrate ethical practices into their ai development process how do you view the current state of responsible ai adoption do you think that this is something the majority of companies are investing in and thinking about i think that's an interesting story and i think we need to separate a little bit the buzzes being made uh publicly and the marketings and common narrative that we see out there from the reality in the field i think and really to understand the state of responsible ai we we have two avenues there's a put put aside the noises being created because obviously the the last few years have brought us a lot of bad examples of ai of examples where ai has been misused under user overuser abuse and has been widely reported in the press and as a result we've seen a lot of public attention given to the negative implications and consequences of ai but when we look at the state of responsibility within enterprises there are reasons to be optimists i think it's all start with the fact that when it comes to a technology as powerful and yet unknown as ai we need to change completely the mindset it's not just about the set of benefits that that technology will deliver it's a set of potential risk potential arms harms that are attached with it and we need to bring the two imbalance and we need to have this mindset of not only can we build it because we have what it takes we have the data we have the processing power but should we do it and that's a start change for the can i do it which is the philosophy that underpins the computer science community right and if you start thinking which should i do it suddenly you see that there are both benefits and risk attached to a new endeavor and when you put together a business plan or to a certain extent any plan to use this technology you will compare the two and you always proceed with cautious because you always out wide the benefits versus risk and that's something that gives us reason to be optimist because the public narrative have helped the executives working in ai or that have some sort of oversight over ai to reconsider right so it's about benefits and risk and starting with that many organizations have actively started to to bring it all together so that if that's the case if we need to consider benefits as well then obviously let's go on the risk side and identify all the potential risk that could be triggered by ai and not only identify them but being able to define mitigating strategies and with that understand if our organization are capable of mitigating risk associated with the ai and the research that we've been doing last year so we've surveyed about a thousand executive and responsible ai from across the world they've told us that an increasing number of executives will have ai risk strategies and they will consider the risk side as part of the wider air strategy on the other hand you 
know if we go beyond the the risks of ai and acknowledging that ultimately the vast majority of ai risk are in fact ethical risks we have seen quite a lot of uptake when it comes to the broader outlook of ethics of ai in setting up initiatives that will enable companies to explore these moral consequences of ai and with that being able to have a more long-term view not only so reactive which is which is linked to the risk and saying we will be looking to create internal policies that will allow us to explore from the perspective of our companies what is the direction we should be traveling we're using ai and various technology attached to it in some context who should need who needs to be involved what is the decision process and what are ultimately the values that we should be embedding in coding in the processes that will lead to build and and and deployment and use of those technologies and make sure that we have we have a way of controlling it all and our sound sounds grand and sounds like a lot needs to be done we have seen a lot of the companies having uh code of conducts and principles you know quite a high number we've seen companies being interested in setting up ethical boards that will allow them to explore and debate and and and have a trans and transparent and constructive um process um in ethical decision making but also developing impact assessment and other types of tools that will allow them to to um weight what are the the the consequences of of the um uh the ai that they're being developed and this is that that's enough evidence from us to say that the things are going in the right direction a lot a lot more needs to be done but um at least we received those clear signal that where someone mentioned the world ai you know the next uh um the the very next thought that comes in mind is that we need to be thinking about ethics risk identification and accountability and we need to make sure that we have all this baked in whatever plan we have ahead of us so that not only we are um you know taking all the fruits and all the benefits that are being promised by this technology but we remain in control and we understand that where are the things that can go wrong and how how best to deal with it that's very encouraging and you mentioned here the presence of reactive organizations who are reacting to ethical risks and ones that are more proactive what do you think the main differentiator between organizations that take responsible ai is a serious level of value as opposed to others i think there are two groups of companies one it's where we'd see the ai adoption quite high and i'll put for a second the tech companies because they are in a different bucket altogether and i think the the challenges that the big tech companies will have are less to do with the ethics of the technology they produce and and use it's more to do with corporate and business ethics than anything and talking about the non-tech companies everyone else let's say i think from what we've seen from our clients is very much those those implications are linked with the maturity of ai adoption more maturity in understanding ai and also deploying air scale triggers those consideration because when you start seeing how ai operates within your business and more likely you they might have experienced some of those negative implications like especially around bias and discrimination it makes them be much more cautious in the way they approach um ai but also it's very much linked with the industry that operates so for 
example when we look at the various ethical principles that are or not the priority for different industries we have found not to our surprise that in fact reliability robustness and security is the most important concern for all of the companies so it's prioritizing making sure the solution is robust and stable but then when we look at others priorities they differ by sector right and worth mentioning that at the same level with reliability robustness and security we have data privacy that has been you know atop the ethical priority for forever also because we in some parts of the world um it's it's a mandatory requirement therefore embedding uh privacy in all the data-driven technologies it's done as part of a compliance process but when it comes for various industries for example in technology media and and telecom human agency is a top concern in um public services and health beneficial ai is a top concern right energy accountability is a priority for the executives in this field so on one hand besides the the maturity of adoption the other one is how the industries are shaped and what sort of a application they will be deploying and how those application will empower not the operations will will they be more closer to the customer the the will will it be um will it will it touch the personal data or would it be application that more back office type of ai and when you put this in balance you'll see that you know companies that are mature they will stop thinking not just start thinking about it they already have initiatives in place the ones that are starting this journey will consider those different implications but they will probably slow in adopting because it will be linked very much with the um the pace and adoption of ai so you mentioned here robustness security and reliability and i would like to maybe segue into discussing how to operationalize responsible ai within the organization you know one of the best resources i've seen on responsible ai is a framework your team developed on a responsible ai titled the responsible ai toolkit can you walk us through that framework and the different components that go into it yes thank you for your appreciation uh i think we're very proud of the work we've been doing because we put a lot of passion in it so when we created the tool three years ago now when we launched it two years ago there was a bunch of us from different territories that came together based on our previous experiences client work and internal experience to create a tool that is both flexible and and forward-looking but also holistic in nature because we understood that with the potential of ai to be that disruptive responsible ai needs to to be able to bring together under one umbrella approaches that will allow for this flexible yet holistic approach so we ended up creating a set of lego-like type of a toolkit uh both code-based and non-code based that are catered towards different needs of different problems to be solved so we have assets that will test for reliability and robustness and security so for explainability or fairness and and discrimination which are very much the plug and play type of things it's it's very much where the whole of the industry is in terms of creating solutions that will allow for a ad hoc testing of the performance of the algorithms but at the same time we also have non-code based assets there are more advisory consulting in nature that will allow for an assessment of where the organization is in terms of understanding what values they are 
standing standing by when it comes of developing ai how well they are able to translate those values into principles and then into design requirements but also how well in sync those values are with the context of the organization and with the regulation in various territories and lastly is how do you remain in control how you develop a governance models that look across the air life cycle that starts not just with the business requirements you know that's that's the model model design it's the application design but the reality is is that ai life cycle starts with the strategic overlook when you decide where what are the big um the prior strategic priorities where ai is going to be incorporated and how you approach it and who's going to be involved in it and um the governance of ai brings together you know all sorts of tools that allows you to operate various flavors of ai or various types of ai either you build it in the house or you acquire for third party you have to have a virtual operating model that allows you to be at least in control to be in control at least at least until you know this disruption or this change that ai triggers in terms of the linearity of business processes the way the structure of the jobs the the working culture will gradually adapt to the autonomous agents and having those different assets allow us to say to our clients uh you know if your main concern is about identifying risks we can help you identify those risks create the right controls but also update your operating model so that you actually have the ability to address those risks like bias or packness in the same way if companies will say i'm concerned if they're based out in denmark for example denmark's for example where there is a legal requirement for the companies to have a code of conduct on data ethics and say what should be my ethical principles how should i align it how should i translate it for uh you know uh in my internal policies and who should be involved in in bringing this policy to life we will be able to address this question but in the same time you know we always keep an eye on the longer vision the longer vision which is a good life with ai and while we do all those different individual pieces of work the reason of having such a holistic approach to responsible a is to say that there are more steps for you to take if you are you know serious if you are committed to deliver ethical ai which implies you regardless if you start the the journey from addressing the risk of ai you should address most all if possible all of those different elements because without it it will be difficult to achieve that ethical outcome with ai i'd love to unpack what you mentioned here um so one of the things you mentioned is helping organizations create an ethical code of contact and integrating these values into their ai systems can you walk us through what the process looks like and what organizations can do here you mentioned their the denmark use case here assuming all organizations are like denmark how can they go about operationalizing the ethical charter i think first of all it is to understand that you have to go back to the values of the organization you can't just you know pick up ethical principles that you want to apply for ai out of the skies or align it with other organization before understanding who you are as a as a group and this is where most of the the tech companies will have a problem because there seems to be a disconnect between the values that they have sign up as a collective 
and the organizational values that need to drive the vision and the ambition of all those organizations and how those values are actually translated into the way they operate including the tech they develop and use and that's probably the first and most important step acknowledge that you have those values and acknowledge that in the world of ai the translation of those values into design requirements requires much more honesty than before if before you wouldn't have that much visibility or a way of proving if you have your values aligned or embedded or not now it's the time you know i keep on saying to people a good organization produce good ai you know bad organization they will produce a different type of ai i'm not saying bad i'm saying less ethical organization so i think it's really important as a first step to understand that you have a set of values that ultimately need to be reflected in everything you do and say not just say everything you do the second level is if that's the case what are the key ethical moral consideration that are being triggered by ai this is where we spend quite a bit of time to understand based on the research of so many brilliant experts we had in the field that have have been thinking about it for a very long time you know looking at ai and assessing and iterating the various moral implications they cope up with those ethical principles based on that and we started with um you know groups as uh the asilomar one and then we had um the ieee initiative that probably is the largest to date because they spend three or four years and they engage more than 300 experts in the field with the view of collecting those ethical and moral issues and then being able to distill in distilling in some guidance that is easy to be used by people with less experience the engineers ones that need very clear guidance and and rules on how to work with those implications how to translate that into something much more recent is like the oecd and europe european commission first obviously the eu trustworthy rules of ai and oecd they're all interlinked because ultimately it started with a group of experts alongside some philosophers that have braced them together about those implications and the um various group doing this separately they then more and more other groups will iterate and further enhance uh this thinking and gave us close to 200 different documents um 155 different principles which when we put together we aggregate and distill into nine mathematical principles and those are data privacy robustness and security transparency slash explainability beneficial ai accountability safety lawful net compliance and human agency human agencies of course but that's our way in pwc of grouping it all together to say that if you look at all those 155 different principles but they're being drafted by all those groups um uh for-profit not-for-profit multinational or supranational organization they all have a lot of in common and it's just a matter of how you express some of the issues so when we aggregated we find those nine but if you then look at how oecd have done this or the european commission have done this it's very similar right being able to aggregate all and say those are all the moral considerations we have right now and while there will be others we need to be thinking about it more long term what happens right now it needs to have those rules um uh incorporated or guiding the design right and that's the second step is saying looking at all those different principles pick 
up the ones that are more relevant to one's organization trace it back to the values but also very importantly demonstrate how those ethical principles that are said to be translated into norms and design requirements are aligned with human rights because ultimately human rights is the value system that has been signed up by almost all 190 countries in the world is actually a law that it's biting so therefore we need to demonstrate how a specific uh principles is linked with various um uh human rights articles and how various applications either fulfill that principles or are in in danger danger of breaking and i'm not gonna spend too much time in going in that direction but to say that the third step in this process of operationalizing is to make sure when you build this charter you consult with everyone in your organization right it's not enough for a group of people or just you know um someone who owns this in one organization say okay i cherry pick the principles i draft them and here you go that's not gonna fly for long you need to go through the process and this is where everyone is actually not everyone but a lot of companies are leapfrogging or trying to over cut the corner here is to say i have the principles is enough you know i'm just going to push this policy this is the slowest and more painful journey you need to be bringing together different groups different stakeholders and being able to sign off those principles and being able to negotiate those principles with everyone who would influence how the values would be incorporated will be impacted by and then only by you do this and in some cases might take you know yes to formulate this policy oecd principles it took about two years to formulate a pager but the process behind that that extensive consultation with the stakeholder it's the secret source of ensuring the ethical principles will then be properly operationalized because in this process not only that you get the support and and consultation with everyone who needs to be involved but you also start the process of changing mindsets because in that process you start debating why those principle matters and being able to iterate based on current examples you have in your organization how you use data in ai or potential one what's going to happen and you start the process of operationalizing the principles by engaging everyone and by the time you start designing framework and tools you already are halfway there because people will have a higher degree of awareness and understanding of why this is important and why this needs to be done that's fantastic and you mentioned ai governance and accountability to be one dimension of the responsible ai toolkit ai governance can require the collaboration and accountability of data scientists business leaders experts process managers operation specialists and really a variety of different people and personas present within any organization and they may not all have the same you know quote-unquote data language or data skills or the same level of data literacy you know pointing it out obviously data literacy and ai literacy are important for organizations but how do you think we should expand our conception of data literacy to address ai ethics risk and responsibility uh within the workplace i think before we talk about air and digital and data literacy i think we need to start talking about digital literacy in general and also literacy to the extent of the implication of technology i think every single time we we aim to educate 
people on technology we avoid describing but what can go wrong you know and is there another alternative and as a result we go along thinking that ai technology in general is a panacea for every single problem humanity has and we need to step away from this type of attitude and reconsider all together but what we're trying to solve here and what are the consequences of of building a technology like ai and more and more we have scholars now that come out and say there are a lot of hidden costs of developing ai that we don't see and we take for granted the level of sophistication of ai which in fact there's a lot of hidden work and efforts that are not i are not being acknowledged and there's a beautiful book that um has just come out um it's going to be available in uk probably in in the next few days i think it's available in europe already it's called the atlas of ai by one of my favorite people in ai called kate crawford and what kate does they're absolutely brilliant is to describe ai um as a phenomenon that brings together uh not just data and algorithms which is the refer framing of everyone uh working in this field especially the engineers but all the other elements that come together to give us the data and the algorithms you know what are the natural resources that are being uh harvested from the surface of the planet and what are the ecological cost of doing this what are the environmental cost of training a model a language model for example and if we start replicating that and if we start having more of this type of models with trillion of of parameters um what does it mean for the environment but also very much stressing how much of a hidden labor goes into labeling uh the data and how much this labor is being kept out of the supply chain if you want of ai and gives the impression that ai is more intelligent than it is um to the extent that she concludes and besides you know the data that's not an oil it's not a natural resource that is there to be harvested but in fact is about people's life so we still have to find and agree a narrative of what data is for us before we go any further and kate's conclusion is that ultimately ai is neither artificial and neither intelligent and i don't want to ruin the surprise for our listener in giving too much away from the book but what i say is that the book is exactly the type of narrative we need to be having when it comes to ai understanding the full length of the impact of ai and where it is coming from and who owns it and what are the interests behind so that we collectively come together and challenge those entities and those who at the moment seem to be disproportionately own parts of what enables to build powerful ai and being able to say we need to have a different approach to it we need to consider it in a different way and while some might say we probably it's a bit too late i would say this is exactly the right time to reconsider it's exactly the right time for people who now join ai to rethink the whole phenomenon and say through the context of raising inequality which we know now that ai can make so so much worse without even knowing with that hidden automation that already exists in so many parts of the world in the public services and also the impact on the environment it's the right time to have this conversation is the right time to unveil the different the hidden parts of ai and stop thinking that it's just a data set and a model and see what's behind that and how do we get to that data set what's how we created this 
data set and who are the people how represented is the data set then if we are going into that direction how will we change that that the life of so many different people those are i know sounds existence of question but i think in order for us to avoid an absolute disaster using ai we need to start thinking in those terms and while this is done so brilliantly by people like kat crawford and and her brilliant crew at ai now and so many other activist groups around the world i think in our own little teams uh and organization i think what we can learn and inspire from those scholars and visionary is to say we need to be thinking beyond the immediate uh borders of our perception and vision into what else what i do as a data scientist will actually change and how will that change the level of responsibility and accountability should have and start acting as advocates for change and sometimes the boundaries of accountability needs to be pushed from the team uh higher up to the business unit than higher up to the company but also society and until we have this grassroots activism at the data scientist level we won't be able to to completely change or transform this mindset of the whole technology world because we need people inside the people who build this to acknowledge that their job is much more impactful that writing a piece of code or processing a data set it's actually changing people's life and while there's no law out there nothing forces you to think about it but i have confidence that that there are a lot of good people out there that work in this industry that will understand what's at stake and and they will learn how to to be the good agents for change and build ai in a way that accounts for this negative implication and that's the literacy i wanna i want us to start having in these places not so much learning about oh this is how you build a machine learning algorithm this is this is how to build a voice assistant no it's understanding the implication and an impact and then work from there how to best build that so that we achieve the positive outcome and being able to to keep the negative under control i completely agree with you on this vision of ai and data literacy that incorporates ai ethics and the ai value chain and what it looks like as we are ending on this inspiring note what is your final call to action for listeners on the show don't take things at face value challenge challenge everything we need you where you are to challenge to inform yourselves on what is the the real potential of this technology who's behind it where where it is and how can you individually make a change where you are and i wouldn't say this if i wouldn't have been exposed to the fantastic work of people like a crawford and so many like her that advocates tirelessly for a different approach on ai and i think only by us individually informing ourselves and and and trying to find ways where we are and change our mindset first before we ask our companies to provide us with frameworks with methodology with policies i think we have a lot of leverage ourselves being as the prime builders or the ones that are closest to building those tools to make a change and while things are going in the right direction and i'm hoping to see much more progress in the realm of a top-down approach companies that develop the right framework around responsible ai that's not going to get us too far if we don't have the bottom-up approach where people like yourselves understand that truly this is a unique moment in history 
where we have with our hands a technology that can either get us in a very good place as humanity or in a dark place and although i was never too much of a fan of what people like elon musk or stephen hawking have said i think there is a benefit of raising the bad or the alarm in that direction because it's almost like it says that's where you don't want to go so if you don't want to go there get yourself together and work towards a different outcome than the one we just show you that it's possible because it's possible to get there you know no matter how much you deny it technology that's very little unknown uh by the vast majority of people including politicians um can be easily politicized and no it's not going to be the ai that's going to take over the world it's going to be other people developing and using the ai in a way that will grab more power into their own hands so we need to be careful for that and the best way to do it is to to start be active participants in this and not just say it's just my job to code it's just my job to clean this data set it's much more than that guys and only only if we come together we can do it we're still a small community um but i'm hoping that the new generation is coming the ones are training to to step into the um the ai jobs of the future they will be inspired to to take of this vision and they will join us and together we will continue to push the boundaries of how is being created right now and how ai should be develop the news uh in the future maria thank you so much for coming on the podcast i really appreciate sharing your insights thank you very much for having me that's it for today's episode of data framed thanks for being with us i really enjoyed maria's impassioned call to action on how data scientists can assume more responsibility around their work and her insights on the state of responsible ai if you enjoyed this podcast make sure to leave a review on itunes our next episode will be with brent dykes on effective data storytelling for more impactful data science i hope it will be useful for you and we'll catch you next time on data framedhello this is adel neme from datacamp and welcome to dataframed a podcast covering all things data and its impact on organizations across the world i think we can all agree that one of the first things the general public thinks about when they hear the term ai is the ethics of ai and its potential impact on humanity a lot of the time though the concept of risks associated with ai is driven by conceptions in the popular media and pop culture and high profile failures of ai in public these failures have prompted many organizations to think twice about the models they put out in production and to analyze the risks of ai in order to deploy them responsibly this is why i'm excited to have maria luciana xente on today's podcast maria is responsible ai and ai for good lead at pwc uk in her role maria leads the implementation of ethics in ai for the firm while partnering with industry academia governments ngos and civil society to harness the power of ai in an ethical and responsible manner while acknowledging its benefits and risks in many walks of life she has played a crucial part in the development and setup of pwc uk's ai center of excellence the firm's ai strategy and most recently the development of pwc's responsible ai toolkit the firm's methodology for embedding ethics in ai maria is a globally recognized ai ethics expert an advisory board member for the uk all party parliamentary group on ai a 
member of bsi iso and ieee ai standard groups a fellow of the rsa and an advocate for gender diversity children and youth rights in the age of ai throughout the episode maria and i talk about her background where responsible ai intersects and diverges from the ethics of ai the state of responsible ai in organizations today how ai responsibility is linked to organizational culture and values and most importantly what data scientists can do today to ensure that their work is used ethically and responsibly by organizations and how bottom-up activism can nudge organizations in the proper direction if you enjoyed today's conversation with maria and want to check out previous episodes of the podcasts and show notes make sure to go to www.datacamp.com community podcast maria it's great to have you on the show i'm really excited to talk to you about the state of responsible ai ai governance and accountability and how organizations can start with the responsible ai journey but before can you give us a brief introduction about your background and how you got into the data and ethics space of course hello everyone and thank you adele for inviting me it's a pleasure to be talking to you and let's start exploring what's what's the buzz what's the fuss about ethics of the air why would you talk about it then what can we do about it so a bit of about my background so i work with bwc uk which probably most of you know it's a professional services company and the question might be what is pwc doing in the aia space and hopefully throughout the conversation we'll be having today you get a bit of a more more flavor why companies like pwc need to have a role in shaping the story of um ethical and responsible ai so i joined the firm seven years ago my background was a business in digital transformation so i got to set up businesses transform businesses and then move into the technology and digital space and you know work on this transformation with the help of technology and at some point i had the opportunity to focus from the wide range of emerging technology which was the case before that into something a bit more specialized which is the ai and it was a match made in heaven because when we started exploring pwc with other lenses and technology we realized how important it is to understand the whole context where this novel technology is being developed and used and how important to understand the consideration that needs to be part of the design much beyond the traditional boundaries of experience design and back into how businesses or the context will change as a result of using um a tool that has its own agency on operates very differently from all previous technologies so the last four years i've been part of the ai center of excellence there's been a fascinating journey because i was there from the beginning so i helped set it up you know put the strategy together it was very much like a new venture that we develop and gradually based on my previous experience and and my education i started exploring the ethical layer and what are the key moral consideration we need to consider what does it mean from a business perspective and if we have this vision of a good life with ai then how do we make it happen what needs to be in place what needs to be changed and that's why we came up with this concept of responsible ai that allows us to be able to create a vision of a good life with ai but also create what it's needed to achieve that vision and be very practical because ultimately we we are a for-profit 
company that you know needs to demonstrate value added from what we create so we can't afford just to vision about you know emerging technology we need to be able to deliver it in a way that is sustainable that's great and i'm excited to unpack all of that with you so i want to first start off by asking how you would define responsible ai you know over the past decade a lot of the discussion on air risk has fallen under the umbrella term of ai ethics and over the past few years we've seen the gradual rise of responsible ai i'd love if you can define responsible ai specifically in how it intersects and doesn't with ai ethics i think that's an excellent question and i think i am i'm very grateful that you have asked it because i think we need to start framing those concepts and and understand how they overlap if there's ever overlap what's the relationship between them so that we have that clarity we know our call is if you don't frame a new concept well enough then you will struggle to make it happen you'll struggle to get it off the ground so this is quite i would say uh personal to us and to myself is is is a leading responsibility i work for pwc uk is how we would define that the two terms are in the following manner to say that the ethics of ai is a new apply ethic discipline that is in in the process of being developed some might say it's a branch of digital ethics or data ethics or is information philosophy in general but it's definitely a new branch of applied ethics that is concerned with studying the moral implication of the technology with label ai that that those technology that ultimately have some key unique characteristics that makes them quite different from all the the technology we've seen before mainly to the fact that they display or they own the possess agency that means on one hand that they will interact with an external environment they will adapt to this external environment to a certain extent shape that external environment and they have a certain degree of autonomy from human supervision and these types of new assets require new ways of thinking in terms of normative question is it right or easily wrong for us to be using those machine how should we treat those type of agents so that's the discipline of the ethics of ai which is a bit more abstract and also it's a bit more forward-looking because we're stepping into areas we haven't been before you know in the history of humanity we haven't had assets like this that will operate alongside us and push the boundaries of what's wrong with rights we have thought about it in fictional work but we haven't had them in real life so we have to be able to reason and discuss and debates what are those moral implications and in this process the ethics of ai will allow us to formulate a good life with ai so what does it mean if we have these potent tools that have both benefits and risks how do we make sure they use it in a way that is aligned with some human goals we are not just creating ai for the sake of doing it or because we have some sort of a god complex we aligning it with the purpose of humanity and those you know purposes that we have as a human race are something that are very interesting because we haven't had to come together to say or maybe we did i'll get to that in a second but to be that precise we want to help everything to flourish in this way in fact we have it's the human rights but the ai ethics is that discipline that allows us to not only understand the moral implication but also say this is where 
we're heading this is what's acceptable and what's acceptable is not just to do this for the sake of doing it is because we want to achieve a human related goal related to that is how do we achieve the division right because every vision has to have some sort of a modus operandi or or or an operating modalities that will allow us to get there so we need to have another body of discipline that that collects everything that we need to be able to achieve that vision and that's where where responsibility is being placed because responsibility is much more tactical it actually feeds into now that we understand what's morally permissible and also what are the risk area where things can go wrong we can take that vision and translate it into you know tactical plans and we have a set of approaches and tools and ways of thinking that are multidisciplinary in nature that are holistic in nature because we realize that the disruption ai brings it's gonna transform who we are so therefore we need to approach this much more holistically but responsible ai is the engine that allows us to get to that vision if you want and has you know different approaches it has the risk angle which means not only understanding what are the risks attached to ai the new risk that ai brings but also what current risk in each organization's and at the society level and personal level will be augmented by ai then you have a new operating model how do you govern how how you control over a self-learning artifact that operates in a different way that it's stochastic in nature and therefore we need to upgrade our linear processes into something that's much more real-time and dynamic and also if we agree that we have a moral vision what will be the values that need to be incorporated in each and every single use case and context that specific application will operate and bring it all together the risk side the governance side but also the the value of the values that need to be incorporated gives you this new discipline that requires input from a wide range of disciplines uh case by case example by examples that come together to actually say this is what we need to be doing to achieve that vision of a good ai in this specific context so correct me if i'm wrong in summary the ethics of ai is about how we should align our moral values with ai systems whereas responsible ai is more about how to operationalize these moral values alongside other disciplines in order for us to create value out of responsible ai is that correct it's something on those lines but i will say the ethics of ai is not the framework it's a vision it's where we want to go you know framework is the one that gets us there okay awesome so you work with a lot of data and business leaders trying to integrate ethical practices into their ai development process how do you view the current state of responsible ai adoption do you think that this is something the majority of companies are investing in and thinking about i think that's an interesting story and i think we need to separate a little bit the buzzes being made uh publicly and the marketings and common narrative that we see out there from the reality in the field i think and really to understand the state of responsible ai we we have two avenues there's a put put aside the noises being created because obviously the the last few years have brought us a lot of bad examples of ai of examples where ai has been misused under user overuser abuse and has been widely reported in the press and as a result we've seen a 
But when we look at the state of responsible AI within enterprises, there are reasons to be optimistic. It all starts with the fact that, when it comes to a technology as powerful and yet as unknown as AI, we need to completely change our mindset. It's not just about the set of benefits the technology will deliver; there is a set of potential risks and harms attached to it, and we need to bring the two into balance. We need the mindset of not only "can we build it," because we have what it takes, we have the data and the processing power, but "should we build it." And that's a stark change from the "can I do it" philosophy that underpins the computer science community. If you start thinking "should I do it," suddenly you see there are both benefits and risks attached to a new endeavor, and when you put together a business plan, or to a certain extent any plan to use this technology, you will compare the two and always proceed with caution, because you weigh the benefits against the risks. That gives us reason to be optimistic, because the public narrative has helped executives working in AI, or with some sort of oversight over AI, to reconsider. It's about benefits and risks, and starting with that, many organizations have actively begun to bring it all together: if we need to consider risks as well as benefits, then let's go to the risk side, identify all the potential risks that could be triggered by AI, and not only identify them but define mitigation strategies, and with that understand whether our organization is capable of mitigating the risks associated with AI. In the research we did last year, we surveyed about a thousand executives on responsible AI from across the world, and they told us that an increasing number of executives have AI risk strategies and consider the risk side part of their wider AI strategy. On the other hand, if we go beyond the risks of AI, acknowledging that ultimately the vast majority of AI risks are in fact ethical risks, we have seen quite a lot of uptake on the broader outlook of the ethics of AI: setting up initiatives that enable companies to explore the moral consequences of AI and, with that, to take a longer-term view rather than only the reactive one linked to risk. Companies are looking to create internal policies that allow them to explore, from the perspective of the company, the direction they should be traveling in when using AI and the various technologies attached to it in a given context: who needs to be involved, what the decision process is, and ultimately what values should be embedded and encoded in the processes that lead to the building, deployment, and use of those technologies, while making sure there is a way of controlling it all. That sounds grand, and it sounds like a lot needs to be done, but we have seen quite a high number of companies adopting codes of conduct and principles; we've seen companies interested in setting up ethics boards that allow them to explore, debate, and run a transparent and constructive process of ethical decision-making; and we've seen them developing impact assessments and other tools that allow them to weigh the consequences of the AI they are developing.
That is enough evidence for us to say things are going in the right direction. A lot more needs to be done, but at least we've received clear signals that when someone mentions the word AI, the very next thought that comes to mind is that we need to be thinking about ethics, risk identification, and accountability, and we need to make sure all of this is baked into whatever plan we have ahead of us, so that we not only take all the fruits and benefits promised by this technology, but remain in control, understand where things can go wrong, and know how best to deal with it.

That's very encouraging. You mentioned the presence of reactive organizations, those reacting to ethical risks, and ones that are more proactive. What do you think is the main differentiator between organizations that treat responsible AI as a serious value and those that don't?

I think there are two groups of companies. One is where we see AI adoption being quite high, and I'll put the tech companies aside for a second, because they are in a different bucket altogether; the challenges the big tech companies have are less to do with the ethics of the technology they produce and use, and more to do with corporate and business ethics than anything else. Talking about the non-tech companies, everyone else, let's say: from what we've seen with our clients, these considerations are very much linked to the maturity of AI adoption. More maturity in understanding AI, and in deploying AI at scale, triggers these considerations, because when you start seeing how AI operates within your business, and more than likely have experienced some of the negative implications, especially around bias and discrimination, it makes you much more cautious in the way you approach AI. But it is also very much linked to the industry you operate in. For example, when we looked at which ethical principles are or are not a priority for different industries, we found, not to our surprise, that reliability, robustness, and security is the most important concern for all companies: the priority is making sure the solution is robust and stable. But the other priorities differ by sector. It's worth mentioning that at the same level as reliability, robustness, and security sits data privacy, which has been at the top of the ethical priorities forever, partly because in some parts of the world it's a mandatory requirement, so embedding privacy in all data-driven technologies is done as part of a compliance process. Across the various industries, though: in technology, media, and telecom, human agency is a top concern; in public services and health, beneficial AI is a top concern; in energy, accountability is the priority for executives in that field. So on one hand there is the maturity of adoption; on the other, how the industries are shaped and what sort of applications they deploy: will they power operations, will they be closer to the customer, will they touch personal data, or will they be more back-office types of AI?
When you put this in balance, you'll see that mature companies don't just start thinking about it, they already have initiatives in place, while the ones starting the journey will consider these implications but will probably be slower to act, because that will be very much linked to their pace of AI adoption.

You mentioned robustness, security, and reliability, and I'd like to segue into discussing how to operationalize responsible AI within the organization. One of the best resources I've seen on responsible AI is a framework your team developed, titled the Responsible AI Toolkit. Can you walk us through that framework and the different components that go into it?

Yes, and thank you for your appreciation. We're very proud of the work we've been doing, because we put a lot of passion into it. We created the toolkit three years ago now, and launched it two years ago; a bunch of us from different territories came together, based on our previous experience, client work, and internal experience, to create a tool that is flexible and forward-looking but also holistic in nature, because we understood that with AI having the potential to be that disruptive, responsible AI needs to bring together, under one umbrella, approaches that allow for this flexible yet holistic approach. So we ended up creating a set of Lego-like toolkits, both code-based and non-code-based, catered to the different needs of different problems to be solved. We have assets that test for reliability, robustness, and security, or for explainability, or for fairness and discrimination, which are very much the plug-and-play type of thing; that's where the whole industry is, in terms of creating solutions that allow for ad hoc testing of the performance of algorithms. At the same time, we also have non-code-based assets, more advisory and consulting in nature, that allow for an assessment of where the organization is in terms of understanding what values it stands by when developing AI, how well it is able to translate those values into principles and then into design requirements, and how well in sync those values are with the context of the organization and with the regulation in various territories. And lastly, how do you remain in control: how do you develop governance models that look across the AI lifecycle? That lifecycle starts not just with the business requirements, the model design, or the application design; the reality is that the AI lifecycle starts with the strategic overlook, when you decide what the big strategic priorities are, where AI is going to be incorporated, how you approach it, and who is going to be involved. The governance of AI brings together all sorts of tools that allow you to operate various flavors or types of AI, whether you build it in-house or acquire it from a third party. You have to have an operating model that keeps you in control, at least until the disruption AI triggers, in the linearity of business processes, the structure of jobs, and the working culture, gradually adapts to autonomous agents.
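To make the code-based, plug-and-play idea concrete, here is a minimal sketch in Python of the kind of ad hoc fairness check such an asset might run. The function names, the toy data, and the 0.8 threshold (the informal "four-fifths rule") are illustrative assumptions, not PwC's actual toolkit API.

```python
# A minimal, hypothetical fairness check of the plug-and-play kind described
# above. Names, data, and thresholds are illustrative, not PwC's toolkit.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lower to the higher positive-prediction rate.
    Values below ~0.8 are often flagged (the informal four-fifths rule)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy example: binary predictions for eight applicants, with a binary
# protected attribute (0 and 1) identifying the two groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))   # 0.5 (rates 0.75 vs 0.25)
print(disparate_impact_ratio(y_pred, group))   # ~0.33, below the 0.8 flag
```

In a governance process of the kind described above, a failing check like this would presumably feed the risk register and trigger the mitigation and sign-off steps rather than stand alone.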
Having those different assets allows us to say to our clients: if your main concern is identifying risks, we can help you identify those risks, create the right controls, and also update your operating model so that you actually have the ability to address risks like bias or opaqueness. In the same way, if a company says, as is the case if it's based in Denmark, for example, where there is a legal requirement for companies to have a code of conduct on data ethics: what should my ethical principles be, how should I align them, how should I translate them into my internal policies, and who should be involved in bringing this policy to life? We can address those questions too. At the same time, we always keep an eye on the longer vision, which is a good life with AI, and while we do all those different individual pieces of work, the reason for having such a holistic approach to responsible AI is to say that there are more steps to take if you are serious, if you are committed to delivering ethical AI. That implies that, regardless of whether you start the journey by addressing the risks of AI, you should address most, if possible all, of those different elements, because without that it will be difficult to achieve an ethical outcome with AI.

I'd love to unpack what you mentioned here. One of the things you mentioned is helping organizations create an ethical code of conduct and integrating these values into their AI systems. Can you walk us through what that process looks like and what organizations can do here? You mentioned the Denmark use case; assuming all organizations face requirements like Denmark's, how can they go about operationalizing the ethical charter?

First of all, you have to go back to the values of the organization. You can't just pick the ethical principles you want to apply to AI out of the sky, or copy them from another organization, before understanding who you are as a group. This is where most of the tech companies will have a problem, because there seems to be a disconnect between the values they have signed up to as a collective, the organizational values that are meant to drive the vision and ambition of those organizations, and how those values are actually translated into the way they operate, including the tech they develop and use. That's probably the first and most important step: acknowledge that you have those values, and acknowledge that in the world of AI, translating those values into design requirements requires much more honesty than before. If before you wouldn't have had much visibility, or a way of proving whether your values were aligned and embedded or not, now is the time. I keep saying to people: a good organization produces good AI, and a bad organization will produce a different type of AI. I'm not saying bad; I'm saying a less ethical organization. So it's really important, as a first step, to understand that you have a set of values that ultimately need to be reflected in everything you do and say, not just in what you say. The second step is: if that's the case, what are the key ethical and moral considerations triggered by AI? This is where we spent quite a bit of time, building on the research of so many brilliant experts in the field who have been thinking about this for a very long time, looking at AI, assessing and iterating on its various moral implications, and coming up with ethical principles based on that.
It started with groups such as the one behind the Asilomar principles, and then we had the IEEE initiative, probably the largest to date, because they spent three or four years and engaged more than 300 experts in the field with the aim of collecting those ethical and moral issues and distilling them into guidance that is easy to use by people with less experience, the engineers, who need very clear guidance and rules on how to work with those implications. Much more recent are the OECD principles and the European Commission's, obviously the EU's ethics guidelines for trustworthy AI; they're all interlinked, because ultimately it started with groups of experts, alongside some philosophers, working through those implications separately, with more and more other groups then iterating on and further enhancing this thinking. That gave us close to 200 different documents and 155 different principles which, when we put them together, we aggregated and distilled into nine ethical principles: data privacy; robustness and security; transparency and explainability; beneficial AI; accountability; safety; lawfulness and compliance; and human agency. That's our way, at PwC, of grouping it all together, to say that if you look at all those 155 different principles, drafted by all those groups, for-profit, not-for-profit, multinational or supranational organizations, they have a lot in common; it's just a matter of how some of the issues are expressed. When we aggregated them we found those nine, and if you look at how the OECD or the European Commission has done this, it's very similar. Aggregating them all lets us say: these are all the moral considerations we have right now, and while there will be others we need to think about longer term, what happens right now needs to have these rules incorporated into, or guiding, the design. So that's the second step: looking at all those different principles, pick the ones most relevant to your organization, trace them back to your values, and, very importantly, demonstrate how those ethical principles, once translated into norms and design requirements, are aligned with human rights. Ultimately, human rights is the value system that has been signed up to by almost all 190 countries in the world; it is actually law that is binding. So we need to demonstrate how a specific principle is linked to various human rights articles, and how various applications either fulfill that principle or are in danger of breaking it.
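As a rough illustration of that traceability, here is a hypothetical sketch of how a single charter principle could be recorded so that it traces back to an organizational value, is anchored to a human rights article, and translates down into design requirements. The schema, field names, and example entry are assumptions for illustration, not PwC's actual format.

```python
# Hypothetical, machine-readable charter entry linking values, a principle,
# a human rights anchor, and design requirements. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class CharterPrinciple:
    name: str
    organizational_value: str            # the value it traces back to
    human_rights_basis: str              # the article it aligns with
    design_requirements: list[str] = field(default_factory=list)

data_privacy = CharterPrinciple(
    name="Data privacy",
    organizational_value="Respect for the individual",
    human_rights_basis="UDHR Article 12 (protection of privacy)",
    design_requirements=[
        "Collect only the fields justified by the use case",
        "Document the lawful basis for each data source",
        "Run a privacy impact assessment before deployment",
    ],
)

for req in data_privacy.design_requirements:
    print(f"[{data_privacy.name}] {req}")
```

Recording principles in a structured form like this is one way to make the "demonstrate the link" step auditable rather than rhetorical.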
I'm not going to spend too much time going in that direction, but the third step in this process of operationalizing is to make sure that when you build this charter, you consult everyone in your organization. It's not enough for a group of people, or whoever owns this in the organization, to say: okay, I'll cherry-pick the principles, draft them, and here you go. That's not going to fly for long. You need to go through the process, and this is where a lot of companies are leapfrogging, trying to cut corners, saying: I have the principles, that's enough, I'm just going to push out this policy. This is the slower and more painful journey: you need to bring together different groups and stakeholders, get those principles signed off, and negotiate those principles with everyone who will influence how the values are incorporated or will be impacted by them. Only then do you do this, and in some cases it might take years to formulate the policy; the OECD principles took about two years to formulate, for a one-pager. But the process behind it, that extensive consultation with stakeholders, is the secret sauce that ensures the ethical principles will then be properly operationalized. In that process, not only do you get support and consultation from everyone who needs to be involved, but you also start the process of changing mindsets, because you start debating why those principles matter, iterating on current or potential examples of how your organization uses data and AI and what is going to happen. You start operationalizing the principles by engaging everyone, and by the time you start designing frameworks and tools you're already halfway there, because people will have a higher degree of awareness and understanding of why this is important and why it needs to be done.

That's fantastic. You mentioned AI governance and accountability as one dimension of the Responsible AI Toolkit. AI governance can require the collaboration and accountability of data scientists, business leaders, experts, process managers, operations specialists, a variety of different people and personas present within any organization, and they may not all have the same, quote-unquote, data language, data skills, or level of data literacy. Obviously, data literacy and AI literacy are important for organizations, but how do you think we should expand our conception of data literacy to address AI ethics, risk, and responsibility within the workplace?

I think before we talk about AI, digital, and data literacy, we need to start talking about digital literacy in general, and about literacy around the implications of technology. Every single time we aim to educate people on technology, we avoid describing what can go wrong and whether there is an alternative, and as a result we go along thinking that AI, and technology in general, is a panacea for every single problem humanity has. We need to step away from that attitude and reconsider altogether what we're trying to solve here and what the consequences are of building a technology like AI. More and more scholars are now coming out and saying there are a lot of hidden costs to developing AI that we don't see; we take the level of sophistication of AI for granted when in fact there is a lot of hidden work and effort that is not being acknowledged. There's a beautiful book that has just come out, available in the UK probably in the next few days and I think already available in Europe, called Atlas of AI, by one of my favorite people in AI, Kate Crawford. What Kate does, and it's absolutely brilliant, is describe AI as a phenomenon that brings together not just data and algorithms, which is the preferred framing of everyone working in this field, especially the engineers, but all the other elements that come together to give us the data and the algorithms: the natural resources harvested from the surface of the planet and the ecological costs of doing so, and the environmental costs of training a model, a language model for example. If we start replicating that, and start having more of these types of models with trillions of parameters, what does that mean for the environment?
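As a back-of-the-envelope illustration of how such environmental costs are typically estimated, the sketch below applies the common energy-times-carbon-intensity formula (hardware power draw, training hours, datacenter overhead, grid intensity). Every number in it is an assumption for illustration, not a measurement of any real model.

```python
# Rough training-emissions estimate: energy (kWh) = power x hours x PUE,
# then emissions = energy x grid carbon intensity. All inputs are assumed.
gpus = 512                   # accelerators used (assumed)
power_kw_per_gpu = 0.4       # average draw per accelerator, kW (assumed)
hours = 24 * 30              # one month of training (assumed)
pue = 1.1                    # datacenter power usage effectiveness (assumed)
grid_kgco2_per_kwh = 0.4     # carbon intensity of the local grid (assumed)

energy_kwh = gpus * power_kw_per_gpu * hours * pue        # ~162,202 kWh
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000  # ~65 t CO2e
print(f"{energy_kwh:,.0f} kWh ~ {emissions_tonnes:,.0f} t CO2e")
```

The point of the exercise is less the exact figure than the structure: every term in the formula is a cost that stays invisible if we frame AI as "just data and algorithms."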
She also very much stresses how much hidden labor goes into labeling the data, and how much of that labor is kept out of the supply chain of AI, if you want, giving the impression that AI is more intelligent than it is; to the extent that she concludes that data is not the new oil, it's not a natural resource that's there to be harvested, it is about people's lives. So we still have to find and agree on a narrative of what data is for us before we go any further. Kate's conclusion is that ultimately AI is neither artificial nor intelligent. I don't want to ruin the surprise for our listeners by giving too much away from the book, but what I will say is that the book is exactly the type of narrative we need to be having when it comes to AI: understanding the full extent of the impact of AI, where it comes from, who owns it, and what the interests behind it are, so that we collectively come together and challenge those entities, those who at the moment seem to disproportionately own the parts that enable building powerful AI, and are able to say: we need a different approach, we need to consider this in a different way. And while some might say it's probably a bit too late, I would say this is exactly the right time to reconsider. It's exactly the right time for people now joining AI to rethink the whole phenomenon, in the context of rising inequality, which we now know AI can make so much worse without us even knowing, given the hidden automation that already exists in so many parts of the world in public services, and in the context of the impact on the environment. It's the right time to have this conversation, to unveil the hidden parts of AI and stop thinking that it's just a dataset and a model: to see what's behind it, how we got to that dataset, how we created it, who the people are, how representative the dataset is, and, if we go in that direction, how we will change the lives of so many different people. I know these sound like existential questions, but in order for us to avoid an absolute disaster with AI, we need to start thinking in those terms. And while this is done so brilliantly by people like Kate Crawford and her brilliant crew at AI Now, and by so many other activist groups around the world, I think in our own little teams and organizations what we can learn and take as inspiration from those scholars and visionaries is the need to think beyond the immediate borders of our perception, into what else the work I do as a data scientist will actually change, how that changes the level of responsibility and accountability I should have, and how to start acting as an advocate for change. Sometimes the boundaries of accountability need to be pushed from the team up to the business unit, then up to the company, and then to society. Until we have this grassroots activism at the data scientist level, we won't be able to completely change or transform the mindset of the whole technology world, because we need the people inside, the people who build this, to acknowledge that their job is much more impactful than writing a piece of code or processing a dataset: it's actually changing people's lives.
And while there's no law out there, nothing that forces you to think about it, I have confidence that there are a lot of good people working in this industry who will understand what's at stake, who will learn how to be good agents for change and build AI in a way that accounts for these negative implications. That's the literacy I want us to start having in these places: not so much learning "this is how you build a machine learning algorithm, this is how you build a voice assistant," but understanding the implications and the impact, and then working from there on how best to build, so that we achieve the positive outcomes and keep the negative ones under control.

I completely agree with you on this vision of AI and data literacy that incorporates AI ethics and the AI value chain and what it looks like. As we end on this inspiring note, what is your final call to action for listeners of the show?

Don't take things at face value. Challenge, challenge everything. We need you, where you are, to challenge, to inform yourselves about the real potential of this technology, who's behind it, where it is, and how you can individually make a change from where you are. I wouldn't say this if I hadn't been exposed to the fantastic work of people like Kate Crawford and so many like her who advocate tirelessly for a different approach to AI. Only by individually informing ourselves, trying to find ways where we are, and changing our own mindset first, before we ask our companies to provide us with frameworks, methodologies, and policies, can we use the leverage we have as the prime builders, the ones closest to building these tools, to make a change. And while things are going in the right direction, and I'm hoping to see much more progress in the realm of the top-down approach, with companies developing the right frameworks around responsible AI, that's not going to get us very far if we don't have the bottom-up approach, where people like yourselves understand that this is truly a unique moment in history, where we hold in our hands a technology that can get us either to a very good place as humanity or to a dark place. And although I was never much of a fan of what people like Elon Musk or Stephen Hawking have said, I think there is a benefit to raising the flag, or the alarm, in that direction, because it almost says: that's where you don't want to go, so if you don't want to go there, get yourselves together and work towards a different outcome than the one we've just shown you is possible. Because it is possible to get there, no matter how much you deny it. A technology that is understood very little by the vast majority of people, including politicians, can easily be politicized, and no, it's not going to be the AI that takes over the world; it's going to be other people developing and using AI in a way that grabs more power into their own hands. We need to be careful about that, and the best way to do it is to start being active participants in this, and not just say "it's just my job to code, it's just my job to clean this dataset." It's much more than that, and only if we come together can we do it. We're still a small community, but I'm hoping that the new generation coming up, the ones training to step into the AI jobs of the future, will be inspired to take up this vision, and they will join us, and together we will continue to push the boundaries of how AI is being created right now and how it should be developed in the future.
Maria, thank you so much for coming on the podcast. I really appreciate you sharing your insights.

Thank you very much for having me.

That's it for today's episode of DataFramed. Thanks for being with us. I really enjoyed Maria's impassioned call to action on how data scientists can assume more responsibility around their work, and her insights on the state of responsible AI. If you enjoyed this podcast, make sure to leave a review on iTunes. Our next episode will be with Brent Dykes on effective data storytelling for more impactful data science. I hope it will be useful for you, and we'll catch you next time on DataFramed.