#210 Trust and Regulation in AI | Bruce Schneier, Internationally Renowned Security Technologist

The Power and Potential of AI: A Path Forward for Humanity

As a security person, I am often portrayed as pessimistic about the impact of Artificial Intelligence (AI) on society. However, I am mostly optimistic about its potential to revolutionize various aspects of our lives. The key difference between my outlook and that of some experts is not about the capabilities of AI itself, but rather how we choose to utilize them.

In the United States, for instance, healthcare is a prime example: AI has made significant strides in improving patient outcomes and streamlining medical processes. Unlike in much of Europe, however, where healthcare is provided as a universal public service, the US relies heavily on employment-based insurance. As a result, access to care is tied to job security, leaving many without basic medical services during periods of unemployment.

This precarious situation underscores the need for us as a society to reevaluate our approach to delivering essential human services. As AI continues to advance and automate various tasks, we must find innovative ways to decouple these services from employment status. This might involve exploring alternative models for healthcare delivery, such as universal access programs or community-based initiatives.

Fortunately, AI possesses tremendous potential in domains that can significantly enhance our lives. For instance, AI can act as an effective mediator and moderator in online discussions, helping to build consensus and facilitate constructive debate. Its ability to make sense of complex issues, explain intricate concepts, and provide a neutral perspective makes it an invaluable asset in education and civic engagement.

Moreover, AI is poised to revolutionize healthcare by augmenting the capabilities of medical professionals. In areas with significant shortages of doctors, AI-assisted nurses can provide care that in many cases rivals that of physicians. This synergy between humans and machines has the potential to bridge the gap in healthcare access globally.

The research applications of AI are also vast and promising. AI's exceptional pattern-matching abilities make it an ideal tool for analyzing large datasets, identifying patterns, and generating insights that can accelerate scientific discovery. In the realm of drug discovery, AI is already being utilized to simulate complex interactions between molecules and predict potential side effects, paving the way for more effective treatments.

One area where I believe AI will have a significant impact is in enhancing democracy and civic engagement. By serving as moderators and facilitators, AI can help create online spaces that foster respectful dialogue and encourage diverse perspectives. This can be particularly valuable in local government meetings, citizen assemblies, and other decision-making forums where open discussion is essential.

Alongside these potential benefits, it is essential to acknowledge the challenges posed by AI's growing presence in our lives. As the technology advances, some will seek to exploit its capabilities for nefarious purposes, such as spreading misinformation or manipulating public opinion. It is therefore crucial that we prioritize AI literacy and develop strategies to mitigate these risks.

In conclusion, I firmly believe that AI has the potential to become a powerful force for good in society. However, this requires us to approach its development with caution and responsibility. By embracing AI's capabilities while also acknowledging its limitations, we can harness its power to create a more equitable and just world. As a society, we must come together to demand change and push for policy initiatives that prioritize the development of responsible AI technologies.

Ultimately, it is up to us as individuals and as a collective to shape the future of AI in ways that align with our values and aspirations. By engaging in open discussions, writing letters to our representatives, and demanding action from policymakers, we can create an environment where AI is developed and deployed in ways that serve humanity's best interests.

One final thought: I recently heard an NPR segment in which a podcast host had an AI version of himself discuss various topics. The conversation was remarkably insightful, and it highlighted the potential for AI to engage with humans in meaningful ways. As we move forward, it will be essential to continue exploring these interactions and developing technologies that foster collaboration between humans and machines.

Actionable Steps

In order to unlock the full potential of AI, I urge everyone to start writing letters to their local representatives, urging them to prioritize the development of responsible AI technologies. By advocating for policy changes that promote accountability and transparency in AI development, we can create an environment where these technologies serve humanity's best interests.

Furthermore, it's essential that we prioritize education and awareness about AI, ensuring that everyone has access to accurate information about its capabilities and limitations. By doing so, we can foster a more informed public discourse around the role of AI in our lives.

As we embark on this journey with AI, let us remember that technology is merely a tool – it's how we choose to utilize it that will ultimately determine its impact on humanity.

Transcript

[Cold open]

Schneier: AI pretends to be a person. AI pretends to have a relationship with you. It doesn't, and it is social trust. In the same way you trust your phone, your search engine, your email provider: it is a tool, and like all of those things it is a pretty untrustworthy tool. Your phone, your email provider, your social networking platform, they all spy on you. They are all operating against your best interest. My worry is that AI is going to be the same, that AI is fundamentally controlled by large for-profit corporations.

Host: Hi Bruce, welcome to the show.

Schneier: Thanks for having me.

Host: I'd love to talk about trust in AI, but before we get to that, can you tell me what trust means to you?

Schneier: I wrote a book about trust; I don't see it on my bookshelf right now. It is an incredibly complex concept, and it has many meanings. It's an overloaded word, kind of like security. So asking people what trust means to them is, I think, bad, because it means many things in many contexts to everybody. There's a difference between trust and trustworthiness. In the security context, trust is often something you have to trust, not something that's trustworthy. I do write about the difference between a more intimate, personal trust and social trust, but there are a lot of scholars of trust, and they have very different definitions, and it really depends on what angle you're taking. So I tend to keep it open, because many people have different needs when they talk about the need for trust, and as security people we often have to provide for them all. But in terms of AI, I think we should differentiate between interpersonal trust (you might trust a friend) and a more social trust (you might trust a bank teller or an Uber driver).

Host: Can you elaborate on that? What's the difference between interpersonal trust and social trust, and when would you need one or the other?

Schneier: It's not really about need; it's about the relationship. Interpersonal trust is you trusting a friend. It's based on your knowledge of them; it's less about their behavior and more about their inner selves. I trust a friend, I trust a spouse, I trust a relative. We know what that means. I don't know what they're going to do, but I'm informed by who they are. When I say I trust an Uber driver, it's a very different form of trust. I don't know this person. I don't even know their last name; I just met them. But I trust that their behavior means I'm going to get taken to my destination safely. They could be a bank robber at night, I don't know, but it doesn't matter. For social trust, there are systems of society that enable that. The reason I don't just get in the car of random strangers, but I do get in the car of random strangers who are Uber drivers, is that there's an entire system, based on surveillance, on the competition of star rankings, on whatever background checks Uber does, on my history and on their history, that enables us to trust each other in that interaction. Same thing when I hand cash over to a bank teller. I don't know who they are. I'm giving them my money, but I know there's an entire system that the bank has to allow me to trust that person in that circumstance. If we walked outside the bank and down the block, I would never give that person my money, like, ever. But in the bank I would, and that's the difference. It's a really important difference, because social trust scales. Interpersonal trust is only based on who I know; it's not going to be more than a hundred people in society, and it'll probably be less than that. But with social trust, I could trust thousands, millions of people. I flew in an airplane yesterday. Think of all the people I trusted during that process, including all the passengers not to leap over and attack me. That's a little bit funny, but if we were chimpanzees, we couldn't do that. So social trust is a big deal. It is unique to humans, and it makes society work.

Host: So it seems like social trust is going to be incredibly important when it comes to AI. In the same way that, in the bank example, there are hundreds of different people involved in creating all these systems to make sure that social trust works, what's the equivalent that's needed for trusting AI?

Schneier: AI is interesting because AI pretends to be a person. AI pretends to have a relationship with you. It doesn't, and it is social trust. In the same way you trust your phone, your search engine, your email provider: it is a tool, and like all of those things it is a pretty untrustworthy tool. Your phone, your email provider, your social networking platform, they all spy on you. They are all operating against your best interest. My worry is that AI is going to be the same: that AI is fundamentally controlled by large for-profit corporations, that surveillance capitalism will be unavoidable as a business model, and that these systems will be untrustworthy. We want social trust with them, but we won't have it. And because they are relational, because they are conversational, they will fool us. We will be fooled into thinking of them as friends and not as services, whereas at best they're going to be services. So I worry a lot about people misplacing trust in AI.

Host: That certainly seems like it could be a very bad problem when you start giving away all your most personal details or intimate thoughts to some AI that isn't actually trustworthy.

Schneier: Yes and no. Already your search engine knows more about you than your spouse, than your best friends. We never lie to our search engines. They know about our hopes, our dreams, our fears, whatever we're thinking about. Similarly, our social networking platforms know a lot about us, and our phones: they know where we go, who we're with, what we're doing. So we are already giving a lot of personal data to untrustworthy services, and AI is going to be like that. When I think about AI, the digital assistant is one of the holy grails of personal AI: an AI that will be my travel agent and secretary and life coach and relationship counselor and concierge, all of those things. We're going to want it to know everything about us. We're going to want it to be intimate, because it'll do a better job. And now, how do we make it so that it's not also spying on us? That's going to be hard.

Host: In terms of knowing when it's appropriate to have those trustworthy, intimate interactions, is there any way to gauge how trustworthy some AI is, or do you just have to assume that it's not trustworthy?

Schneier: We have to assume that the AIs run by Meta and Google and Microsoft and Amazon are going to be no more trustworthy than all of their other services. It would be foolish to think that Google, whose business model is spying on you, would make an AI that doesn't spy on you. Maybe they won't, but that's not the way to bet. Surveillance capitalism is the business model of the internet; pretty much everything spies on us. Your car spies on you, your refrigerator spies on you, your drone, whatever it is. We see again and again how surveillance is pervasive in all of these systems, so we can't expect different here, and I think it'd be foolish if we did. If we want different, we're going to have to legislate. That's the only hope.

Host: I think you're right that, at the moment, all the most powerful AIs are created by these large technology companies. You mentioned the idea of legislation. Is there an alternative to this sort of corporate AI?

Schneier: I push for the notion of public AI. I think we should have at least one model out there, I don't need a lot, I just need one, that is not built by a for-profit corporation for its own benefit. It could be a university doing it, it could be a government, it could be a consortium or an NGO, as long as it is public. And by this I don't mean a corporate model that has been made public domain, so the Llama model doesn't count; that is still a corporate model, even though we have access to its details. It needs to be a model built from the ground up on nonprofit principles. This feels important: you get a different kind of AI. I don't need it to dominate. I don't need it to supplant all the corporate AIs. I need it in the mix. I need it to be an option that people can choose, and an option that we researchers can study in opposition to the corporate AI, to understand where the contours are. It's not a big ask. Models are expensive to build, but in the scheme of government expenditures they're cheap, and models are getting cheaper all the time. But this feels important: if we're to understand how much, and how, we can trust corporate AI, we're going to need non-corporate AI to compare it to. Otherwise we're not going to be able to make good decisions.

Host: This is a really interesting idea, having this sort of counterweight to corporate AI, just having a public model. There are different levels of openness. I know, for example, the Meta models are open-weights, but they don't give you enough details to reproduce everything. So talk me through how much of it needs to be open and publicly available.

Schneier: I think all of it: built from the ground up by the public, for the public, and in a way that requires political accountability, not just market accountability. So openness, transparency, responsiveness to public demands. We know the training data, we know how it's trained, we have the weights, we have the model, and that now becomes something that anybody can build on top of: universal access to the entire stack. Now this becomes a foundation for a free market in AI innovations. We're getting some of that with both the Hugging Face model out of France and the Llama model out of Meta, but they are still proprietarily built and then given to the public, which is not enough. You're not going to get a model that isn't responsive in the same way to corporate demands. Maybe that doesn't matter, but we don't know yet whether it matters, and I think we need to know. This is just too important to give solely to the near-term financial interests of a bunch of tech billionaires. We're going to need a better way to think about this. And the goal isn't to make AI into friends; you're never going to get interpersonal trust with AI. All I want is reliable service, in the way Uber is a reliable service, even though I don't know, or trust in the interpersonal way, anybody involved in that system. The way the mechanism works allows me to use Uber anywhere on the planet without even thinking about it, and that's the way trust works well. When I get on an airplane, I don't even think about whether I trust the pilot, the plane, the maintenance engineers. I know, because of this social trust, that Delta Air Lines puts well-trained and well-rested crews in cockpits on schedule. I don't have to think about it. When I go to lunch in about an hour, I'm not going to walk into the kitchen and check their sanitation; I know there are health codes here in Cambridge, Massachusetts, that will ensure I'm not going to die of food poisoning. Are these perfect? No. People do occasionally get sick in restaurants; airplanes occasionally fall out of the sky. But largely it's good enough. There are Uber drivers that commit crimes against passengers; it's largely good enough. Actually, I think taxi and Uber drivers are interesting. Taxi driver used to be one of the country's most dangerous professions. It was incredibly risky to be a taxi driver in a big city, and Uber changed that, through surveillance that enables this social trust.

Host: I hadn't realized that taxi driving was such a dangerous occupation. But it's interesting that these additional regulations, like the sanitation rules you mentioned for restaurants, making sure the food isn't going to poison people, are beneficial and help trust scale. I think a lot of people, when they hear "more regulation," will react with "regulation stops innovation." So what sort of regulation can you have for AI that is going to...

Schneier: Let's stop for a second. "Regulation stops innovation" is an argument given to you by people who don't want to be regulated. It is not true. Do we have problems with innovation in healthcare? Do we have problems with innovation in automobile design? We put regulation in place because unfettered innovation kills people, so we're okay with it taking a few years for a drug to hit the market, because if you get it wrong, people die. Same thing with airplanes. And what is the lack of innovation in restaurants in your city because of health codes? None. Zero. That is a fundamentally bad argument; do not fall for it. That's one. Two: if regulation inhibits innovation, maybe we're okay with that. If innovation means people die, we want slower innovation. You can only move fast and break things if the things you break are not consequential. Once the things you break are life and property, you can't move that fast, and if you don't like it, get into a different industry. I don't care. I have no sympathy for companies that don't want to be regulated. That is the price for operating in society. Sometimes we regulate people out of business. A hundred and fifty years ago we said to an entire industry: you can't send five-year-olds up chimneys to clean them, and if it hurts your business, too freaking bad. We no longer send five-year-olds up chimneys to clean them, because we are more moral than that. You can't sell pajamas that catch on fire; if that hurts your business model, get a new business model. So I'm okay with putting restrictions on corporations. They exist because they are a clever way to organize capital and markets, that's it. They have no moral imperative to exist. I want Facebook to stop spying on its users, and if it can't exist because of that, maybe it goes away and gets replaced by a company that can do social networking without spying on its users. There's no rule that Facebook has to exist. Sorry, I'm a little strident on this.

Host: No, I'm glad you're giving your opinions. So in that case, what sort of regulations do you think should apply to AI in general?

Schneier: My feeling is that we don't need a lot of new regulations for AI, because what we want to regulate is human behavior. If an AI at a university is racist in its admissions decisions, that's illegal; but if a human admissions official is racist, that's also illegal. I don't care whether it's an AI, an AI plus a human, a non-AI computer, or an entirely human system: it's the output that matters. The same is true for loan applications or policing or any other place you're going to see bias. If I am concerned about AIs creating fake videos in political campaigns, I'm equally concerned if those videos are made with actors on a sound stage; there's no difference. So in general, my feeling is that we want to regulate the behavior of the humans and not the tools they are using. Now, having said that, AI is a certain type of tool, and we'll need some specific rules, just as poisoning someone is illegal and we also make it harder for average people to buy certain poisons. We do both. So there will need to be some AI-specific rules, but in general: regulate the human behavior and not the AI, because the humans are the ones who are morally responsible for what's going on.

Host: That certainly seems to help people or managers become more accountable: okay, we're putting this AI tool into operation, but actually we are responsible for what it does.

Schneier: And it's the same if it's a non-AI tool. You put a tool in, and a tool is an extension of your power and responsibility, so if the tool does damage, it's your fault. And we have experience with this. A lot of talk is about what happens when AIs start killing people, but robots have been killing people for decades. There's a steady stream, not a lot, of industrial robot accidents where people die, in the US and Europe and Asia. This is not new. We have experience with robots and AI systems taking human life, and in general it's the company, it's the maintainers. Courts are good at figuring out who's at fault, and we as a society have experience with that. That is not going to be a new thing.

Host: So it seems like existing laws are going to largely cover the use cases. How do you feel about the new raft of AI regulations? We've got the EU AI Act, the most recent, but there are quite a few on the way.

Schneier: The EU AI Act is, I think, the only one that is real. In the US we have an executive order, but no one thinks Congress will be able to pass any regulation. We can't even regulate social media, and we've been trying for, what, a decade? So I'm not optimistic about regulating AI here. The EU AI Act is good; it's a good first attempt. You can tell it exhibits the problems with regulating a technology: the technology changes. The EU AI Act was being written before generative AI became a thing; then GPT hits the mainstream and they're frantically rewriting, so the law is kind of half and half. But what is the big thing next year, and will the law cover it? This is the problem of writing tech-specific laws: the tech changes. What doesn't change? Humans. In general, though, I like the EU AI Act. It's a really good attempt. I like the idea of breaking applications into four different threat levels and regulating them differently. I think we can do more. Banking is a good example: we regulate large banks more heavily than small banks, so banking regulation is keyed to how big you are, because we need regulation but we don't want to strangle little banks. In tech, the big companies like regulation, because it weeds out the competition. Facebook will be able to meet any regulation any government throws at it; it's Facebook's competition that won't. That kind of thinking is needed here. But the tech is moving so fast that we're going to need a regulatory environment that is flexible and agile, and as a society we're not good at that.

Host: So should it be the larger companies that get more regulation, or the larger AI models?

Schneier: That's probably at least worth thinking about. Me, I don't have answers here; I've just been poking at the questions. But yes, I think some kind of tiered approach would be interesting here.

Host: If we're just looking at regulations, that's going to put a lot of burden on governments to build trust in AI. Are there any things beyond regulation we can do to increase trust?

Schneier: I think not. Corporations are fundamentally untrustworthy. You cannot have an interpersonal trust relationship with them; you can have a social relationship with them. Corporations are precisely as immoral as the law will let them get away with. That's the way it works: the government establishes the rules, and the market plays on top of the rules. If we want to force more trust, we need regulation. No company, no industry, has improved safety or security without being mandated to by the government, like, ever. Planes, trains, automobiles, pharmaceuticals, workplaces, food and drugs, consumer goods: these are all industries that produced dangerous things until they were forced not to. The market doesn't reward safety and security in the same way, and the market rewards fake trust just as much as real trust, especially when you have individuals working on a near-term reward horizon. So you've got these market failures where you need non-market mechanisms.

Host: So if we need regulation, is it going to have to be every country making its own regulations, or do we need some kind of global synchronization?

Schneier: Global synchronization doesn't exist, so it doesn't matter what you need; you're not getting it. I'm fine with countries doing their own things. Again, companies complain there are lots of different regulations and it's hard to meet them all, and again my response is: too freaking bad, get used to it. You're a multinational corporation; if you can't handle it, don't be a multinational corporation. This isn't hard. So I'm okay with a patchwork. I'm okay with a patchwork of states; we have state privacy rules that are different in every state, and Meta is doing just fine. They complain a lot, but they're doing just fine. I don't think you're going to get international harmonization anytime soon, because we can't get it on even easier things, and this is just moving so fast.

Host: We don't have a planetary government, I suppose. There are probably pros and cons of having a planetary government, and maybe the patchwork is best, at least for now.

Schneier: I'm actually a fan of planetary government. I think in general we as a species have two sets of problems: global problems and local problems. We no longer have, say, France-sized problems. So I tend to like government that is very big and very small; the medium size was very important in an industrial age, but it seems less important today. That is a bigger conversation than this one, though.

Host: Yeah, I feel that's a whole separate podcast episode. All right, one thing that has come up a few times already is social media. Social media went from being the darling, "this is going to save democracy," with things like the Arab Spring ten or fifteen years ago, to "this is causing a lot of problems," and there's wide disillusionment now. Are there any lessons the AI industry can learn from social media?

Schneier: Yes. Actually, I wrote an essay that appeared in the MIT Technology Review last month, "Five Lessons AI Can Learn from Social Media," where I talk about our inability to regulate social media, the problems it's causing, and how we can learn from its mistakes. It's things like virality, surveillance, lock-in, monopolization. I think that last one is the big one: the biggest problem with social media is that they're monopolies, and that means they just don't have to be responsive to their users, or to their customers, which are actually the advertisers, but to their users most importantly. Anything we can do to break up the tech monopolies will be incredibly valuable in all of this, and that's the biggest lesson.

Host: So you think, for anyone thinking about regulation: these AI companies are going to be monopolies, and therefore you need to regulate them as monopolies? Is that the idea?

Schneier: Yeah. If you are a monopoly, you have a lot more power to shape the market and not respond to market demands. You can operate outside the basic tenets of a capitalist market system, and that breaks the system. So I need there to be competition. I need sellers to compete for buyers; that is how you get a vibrant market, and if sellers aren't competing with each other for buyers, you don't have that dynamic. That's what's really important.

Host: And do you think that monopoly market is going to be inevitable for AI, particularly for generative AI?

Schneier: Of course it's not inevitable. It's only enabled by the ability of corporations to ignore antitrust law. We have laws in place to try to prevent monopolies; we didn't enforce them for a few decades, which is why we have the big tech monopolies. They are starting to be enforced again now, but the power imbalance is great, so we'll see how it goes. The EU is doing better than the US; in a sense, the EU is the regulatory superpower of the planet. Me, I look to them more than to the US to keep these companies in check.

Host: It occurs to me that we've been talking about corporations as a single entity, but actually they consist of people.

Schneier: What's-his-name, Romney, said that, and it's true. But Charlie Stross, the science fiction writer, talks about corporations as slow AIs, and it's a really interesting parallel. Yes, they're people, but they are this sociotechnical organism, and the things they do are greater than the individual people. It's like saying that a car is metal and screws: yes and no. Think about it. Let's take Meta. Meta could decide tomorrow not to surveil its people, and if it did, the CEO would be fired and replaced with a less ethical CEO. The people can't operate with full autonomy, because of the system they're embedded in. So corporations are not just a bunch of people in a room; they are a highly structured multi-human entity, and you cannot reduce them to just people.

Host: I suppose that's true. There are certainly limits on things I could say, like I would never tell people... about data and AI, because they...

Schneier: They're immortal, in a way that the people aren't. They outlive the people in them; the people in them come and go. It's like your skin cells: the cells in your body come and go, but who you are is greater than a pile of cells. Maybe that's a better analogy.

Host: Skin cells as part of a human corporation, or something. All right, back to the human beings, though. Within the corporations there are people who are building these AI tools, who are working on these things, and we have a lot of them listening in the audience. Do you have any advice for the people who are building AI, or making use of AI at work? What do they need to do to create more trustworthy AI, and can they have some sort of effect?

Schneier: Really pay attention to applications; the application matters. If I have an AI that's a political-candidate chatbot and it says we should nuke New Zealand, that's a gaffe. If I have an AI chatbot that's helping someone fill out immigration paperwork and it makes a mistake, they get deported. So the use matters a lot. What is the cost of failure? What is the trust environment; is it adversarial or not? If I have an AI advising me in corporate negotiations, how much does it matter if the AI's parent company is on the other side of those negotiations? Pay attention to the use case, because that really determines whether it makes sense. AI assistants are doing a lot of legal work, and as long as there are human lawyers reviewing it, that's fantastic; it makes lawyers more effective. So think about how the humans and the AIs interact, think about the systems and the trust that needs to be in the system, and think about the cost of getting it wrong in addition to how often things go wrong. And notice that all of this will change all the time. This field is moving incredibly fast; what is true today won't be true in six months, so any decision you make needs to be constantly revisited.

Host: I think that's quite important: think about what goes wrong. When you're building something, you tend to think, can I make something that gives the right answer? But thinking about how it can be misused, and what happens when it fails, is equally important. And how about people who are just using AI? Is there anything they can do to make sure AI becomes more trustworthy over time?

Schneier: Advocate for better laws is the answer, but really, no. AI is already embedded in your mapping software; the AI is giving you directions. What could you do? You either use it or you don't. AI is controlling your feed on TikTok or Facebook; what are your options? There aren't any. For us as consumers, the AIs are handed to us embedded in systems, and that's pretty much like all tech: we either choose to use the systems or not. This is where the monopolies are a problem, because often we have no choice. I can tell you "don't get a cell phone," but that's dumb advice in the 21st century; it's not viable. So again, I need government to step in and ensure that you can use the cell phone without it being too bad. Most people believe they have more protections in their consumer goods than they do, with phones and the internet and cars. There's a story that broke a couple of weeks ago about GM spying on its drivers and selling information to insurance companies. People were surprised. I was surprised. Kashmir Hill, who writes about privacy for The New York Times, was surprised. Should we be surprised? No. But we believe we have more protections than we do.

Host: So it seems like, if we don't have those protections, then that's going to break down the social trust. Then, to go back to...
your original point.

Bruce: And it does. I think this is why you're seeing what we mentioned a little earlier, this backlash against social media. We thought it was all good; now we think it's all bad. The truth is in the middle, but we thought we could trust them.

Richie: It seems like there are some good possible futures and some bad possible futures. What's your ideal situation? What happens next that you think will make things go well?

Bruce: I think that AI, as a set of technologies, is phenomenal. A lot of what goes well is human plus AI, and a lot of what goes poorly is AI replacing the human, at least today; in a few years it might be different. The more we leverage these technologies to enhance humans, and really to enhance human flourishing, the better we're going to do. Now, that is not necessarily where the market is going to push. The market will push toward replacement, because that is cheaper, but that has a lot of downstream effects. Massive job loss is just not good for society, and we might want to think about ways to help people that aren't tied to their jobs. In the United States, all of your stuff is tied to your job, unlike in Europe. In Europe they got healthcare through the political process; in the US we got healthcare through collective bargaining. That didn't matter in the '50s, when both worked, but here we are in this decade, and your healthcare is tied to your job in a way it's not in Europe. That's not serving us well right now. So if we're going to see massive unemployment because of AI, we need to figure out some other way to deliver basic human services that aren't tied to your job. And that's not great, because we as a society have real trouble doing these things. I've meandered a bit here; get me back on track.

Richie: We started off going down the happy path, and then it turned into the disaster zone.

Bruce: I think there's enormous power in AI. I am mostly optimistic. I know I'm a security person and I say a lot of pessimistic things, but I am mostly optimistic here. I think this will be incredibly powerful for democracy and for society. That's a lot of what I'm writing about now, and I think it's true.

Richie: Maybe we'll try to finish on a happy note. What are you most optimistic about?

Bruce: I think there's enormous power in AI as a mediator and moderator and consensus builder. A lot of the things I think AI will do for us are things humans can do perfectly capably; we just don't have enough humans. Putting an AI moderator in every Reddit group, in local online government meetings, in citizen assemblies on different issues, would be incredibly powerful. AI doing adjudication. AI as sensemaker, explaining political issues to people. AI as an infinitely patient teacher, so that instead of reading a book you engage in a conversation; it makes you a better person, if we can get that working at scale. The enormous value of AI as a doctor: there are parts of this planet where people never see a doctor, because there aren't enough doctors, but an AI-assisted nurse will be just as good in almost all cases. There's phenomenal potential there. AI doing research, especially research that is big-data pattern matching; you're seeing articles about AI and drug discovery, and I think there's a lot of potential there. So really, I look for places where there aren't enough humans to do the work, and where AI can make the humans who are doing the work more effective. Lots of incredibly positive things.

Richie: I really like the idea of an AI moderator. That's something that hasn't cropped up in many discussions.

Bruce: Of course, I just said that you are replaceable in this podcast. There's a story: I was interviewed by a podcaster back last summer, when ChatGPT was first becoming a big thing, and they said, "I went to ChatGPT and asked it to come up with interview questions for you, and here they are." And they were fantastic interview questions. One of them was: if you were an action figure, what would your accessories be? I'd never gotten that question before.

Richie: That's pretty good. So I guess my days are numbered as a podcast host. Look out for AI Richie.

Bruce: Isn't there an NPR episode on AI where they had the AI come up with a podcast? It asked questions, and it came up with a little sketch on the topic. It was something like a three-parter; look it up. I think it was some NPR program, might have been All Things Considered.

Richie: Absolutely. Actually, I just recently saw Reid Hoffman interviewing an AI version of himself, and that was exciting. It's getting very close; yeah, Richie is replaceable. All right, so to finish up: do you have any advice? Is there some action you think people should take in order to get toward this happy path of AI being good?

Bruce: To me, this has to become a political issue. Nothing will change unless government forces it, and government will not force it unless we the people demand it. Me, I want these things discussed at presidential debates. I want them to be political issues that people campaign on, issues that matter in the same way that inflation matters and unemployment matters and US-China policy matters. It needs to matter to us; otherwise the tech monopolies are going to just roll over the government, because that's what they do. They have the money, they have the lobbying, and it's very hard to get policy that the money doesn't want. It's really hard.

Richie: A call to action there for everyone to start writing letters to their local representatives. Or to get their AIs to write letters to the local representatives.

Bruce: Oh yeah, have the AI write the letter. There we go, technology solving the problem again.

Richie: Nice. All right, thank you so much for coming on
the show, Bruce. That was brilliant. Good luck!

Bruce: Thank you.

Bruce: AI pretends to be a person. AI pretends to have a relationship with you. It doesn't, and it is social trust. In the same way you trust your phone, your search engine, your email provider: it is a tool, and like all of those things it is a pretty untrustworthy tool. Your phone, your email provider, your social networking platform all spy on you. They are all operating against your best interest. My worry is that AI is going to be the same, that AI is fundamentally controlled by large for-profit corporations.

Richie: Hi Bruce, welcome to the show.

Bruce: Thanks for having me.

Richie: I'd love to talk about trust in AI, but before we get to that, can you just tell me what trust means to you?

Bruce: I wrote a book about trust; I don't see it on my bookshelf right now. It is an incredibly complex concept, and it has many meanings. It's an overloaded word, kind of like "security," so asking people what trust means to them is a bad question, because it means many things in many contexts to everybody. There's a difference between trust and trustworthiness: in the security context, trust is often something you have to give, not something that's trustworthy. I do write about the difference between a more intimate, personal trust and social trust, but there are a lot of scholars of trust, and they have very different definitions; it really depends on what angle you're taking. So I tend to keep it open, because many people have different needs when they talk about the need for trust, and as security people we often have to provide for them all. But in terms of AI, I think we should differentiate between interpersonal trust, where you might trust a friend, and a more social trust, where you might trust a bank teller or an Uber driver.

Richie: Can you elaborate on that? What's the difference between interpersonal trust and social trust, and when would you need one or the other?

Bruce: It's not really about need; it's about the relationship. Interpersonal trust is you trusting a friend. It's based on your knowledge of them; it's less about their behavior and more about their inner selves. I trust a friend, I trust a spouse, I trust a relative; we know what that means. I don't know what they're going to do, but I'm informed by who they are. When I say I trust an Uber driver, it's a very different form of trust. I don't know this person; I don't even know their last name; I just met them. But I trust that their behavior means I'm going to get taken to my destination safely. They could be a bank robber at night; I don't know, but it doesn't matter. For social trust, there are systems in society that enable that. The reason I don't get into the cars of random strangers, but I do get into the cars of random strangers who are Uber drivers, is that there's an entire system, based on surveillance, on the competition of star rankings, on whatever background checks Uber does, on my history and on their history, that enables us to trust each other in that interaction. Same thing when I hand cash over to a bank teller: I don't know who they are, and I'm giving them my money, but I know there's an entire system at the bank that allows me to trust that person in that circumstance. If we walked outside the bank and down the block, I would never give that person my money, like, ever. But in the bank I would, and that's the difference. It's a really important difference, because social trust scales. Interpersonal trust is only based on who I know; it's not going to be more than 100 people in a society, and it'll probably be less than that. But with social trust, I can trust thousands, millions of people. I flew in an airplane yesterday; think of all the people I trusted during that process, including all the passengers not to leap over and attack me. It's a little bit funny, but if we were chimpanzees, we couldn't do that. So social trust is a big deal. It is unique to humans, and it makes society work.

Richie: So it seems like social trust is going to be incredibly important when it comes to AI. In the same way that, in the bank example, there are hundreds of different people involved in creating all these systems that make social trust work, what's the equivalent that's needed for trusting AI?

Bruce: AI is interesting, because AI pretends to be a person. AI pretends to have a relationship with you. It doesn't, and it is social trust. In the same way you trust your phone, your search engine, your email provider: it is a tool, and like all of those things it is a pretty untrustworthy tool. Your phone, your email provider, your social networking platform all spy on you; they are all operating against your best interest. My worry is that AI is going to be the same: that AI is fundamentally controlled by large for-profit corporations, that surveillance capitalism will be unavoidable as a business model, and that these systems will be untrustworthy. We want social trust with them, but we won't have it. And because they are relational, because they are conversational, they will fool us. We will be fooled into thinking of them as friends and not as services, whereas at best they're going to be services. So I worry a lot about people misplacing trust in AI.

Richie: That certainly seems like it could be a very big problem, when you start giving away your most personal details or intimate thoughts to an AI that isn't actually trustworthy.

Bruce: Yes and no. Already your search engine knows more about you than your spouse, than your best friends. We never lie to our search engines; they know our hopes, our dreams, our fears, whatever we're thinking about. Similarly, our social networking platforms know a lot about us. Our phones know where we go, who we're with, what we're doing. So we are already giving a lot
of personal data to untrustworthy services. AI is going to be like that. When I think about AI digital assistants, which I think are one of the holy grails of personal AI, an AI that will be my travel agent and secretary and life coach and relationship counselor and concierge and all of those things: we're going to want it to know everything about us, and we're going to want it to be intimate, because it'll do a better job. Now, how do we make it so that it's not also spying on us? That's going to be hard.

Richie: In terms of knowing when it's appropriate to have those trustworthy or intimate interactions, is there any way to gauge how trustworthy some AI is, or do you just have to assume that it's not trustworthy?

Bruce: We have to assume that the AIs run by Meta and Google and Microsoft and Amazon are going to be no more trustworthy than all of their other services. It would be foolish to think that Google, whose business model is spying on you, would make an AI that doesn't spy on you. Maybe it won't, but that's not the way to bet. Surveillance capitalism is the business model of the internet; pretty much everything spies on us. Your car spies on you, your refrigerator spies on you, your drones, whatever it is. We see again and again how surveillance is pervasive in all of these systems, so we can't expect different here, and I think it would be foolish if we did. If we want different, we're going to have to legislate it. That's the only hope.

Richie: I think you're right that, at the moment, all of the most powerful AIs are created by these large technology companies. You mentioned the idea of legislation; is there an alternative to this sort of corporate AI?

Bruce: I push for the notion of public AI. I think we should have at least one model out there, and I don't need a lot, I just need one, that is not built by a for-profit corporation for its own benefit. It could be a university doing it, it could be a government, it could be a consortium, an NGO. As long as it is public. And by this I don't mean a corporate model that has been made public domain, so the Llama model doesn't count; that is still a corporate model, even though we have access to its details. It needs to be a model built from the ground up on nonprofit principles. This feels important; you get a different kind of AI. I don't need it to dominate, and I don't need it to supplant all the corporate AIs. I need it in the mix: an option that people can choose, and an option that we researchers can study in opposition to the corporate AIs, to understand where the contours are. It's not a big ask. Models are expensive to build, but in the scheme of government expenditures they're cheap, and models are getting cheaper all the time. And this feels important, because if we're going to understand how much, and how, we can trust corporate AI, we're going to need non-corporate AI to compare it to. Otherwise we're not going to be able to make good decisions.

Richie: This is a really interesting idea, having this counterweight to corporate AI, just having a public model. There are different levels of openness. I know, for example, the Meta models are open weights, but they don't give you enough details to reproduce everything. Talk me through how much of it needs to be open and publicly available.

Bruce: All of it. Built from the ground up by the public, for the public, and in a way that requires political accountability, not just market accountability. So: openness, transparency, responsiveness to public demands. We know the training data, we know how it's trained, we have the weights, we have the model, and that now becomes something that anybody can build on top of. Universal access to the entire stack. This becomes a foundation for a free market in AI innovations. Now, we're getting some of that with both the Hugging Face model out of France and the Llama model out of Meta, but those are still proprietarily built and then given to the public, which is not enough. You're not going to get a model that isn't responsive in the same way to corporate demands. Maybe that doesn't matter, but we don't know whether it matters yet, and I think we need to know. This is just too important to give solely to the near-term financial interests of a bunch of tech billionaires. We're going to need a better way to think about this. And the goal isn't to make AI into friends; you're never going to get interpersonal trust with AI. All I want is reliable service, in the way Uber is a reliable service even though I don't know, or in the interpersonal sense trust, anybody involved in that system. The way the mechanism works allows me to use Uber anywhere on the planet without even thinking about it. That's the way trust works well. When I get on an airplane, I don't even think about whether I trust the pilot, the plane, the maintenance engineers. I know, because of this social trust, that Delta Air Lines puts well-trained and well-rested crews in cockpits on schedule; I don't have to think about it. When I go to lunch in about an hour, I'm not going to walk into the kitchen and check their sanitation levels. I know there are health codes here in Cambridge, Massachusetts that will ensure I'm not going to die of food poisoning. Are these perfect? No. People occasionally get sick in restaurants; airplanes occasionally fall out of the sky. But largely it's good enough. There are Uber drivers who commit crimes against passengers; it's largely good enough. And actually, I think taxi and Uber drivers are interesting. Taxi driver used to be one of the country's most dangerous professions. It was incredibly risky to be a taxi
driver in a big city, and Uber changed that, through surveillance that enables this social trust.

Richie: I hadn't realized that taxi driving was such a dangerous occupation. It's interesting that these additional regulations, like the restaurant regulations around sanitation that make sure the food isn't going to poison people, are beneficial and help things scale. But I think when a lot of people hear "more regulation," the gut reaction is going to be, "Oh well, regulation stops innovation." So what sort of regulation can you have for AI that...

Bruce: Let's stop for a second. "Regulation stops innovation" is an argument given to you by people who don't want to be regulated. It is not true. Do we have problems with innovation in healthcare? Do we have problems with innovation in automobile design? We put regulation in place because unfettered innovation kills people, so we're okay with it taking a few years for a drug to hit the market, because if you get it wrong, people die. Same thing with airplanes. And wander around: what is the lack of innovation in restaurants in your city because of health codes? None. Zero. That is a fundamentally dishonest argument; do not fall for it. That's one. Two: if regulation does inhibit innovation, maybe we're okay with that. If innovation means people die, we want slower innovation. You can only move fast and break things if the things you break are not consequential. Once the things you break are life and property, you can't move that fast, and if you don't like it, get into a different industry. I don't care. I have no sympathy for companies that don't want to be regulated; that is the price for operating in society. And sometimes we regulate people out of business. 150 years ago we said to an entire industry, you can't send five-year-olds up chimneys to clean them, and if that hurts your business, too freaking bad. We no longer send five-year-olds up chimneys to clean them, because we are more moral than that. You can't sell pajamas that catch on fire; if that hurts your business model, get a new business model. You can't do it. So I'm okay with putting restrictions on corporations. They exist because they're a clever way to organize capital and markets; that's it. They have no moral imperative to exist. I want Facebook to stop spying on its users, and if Facebook can't exist because of that, maybe it goes away and gets replaced by a company that can do social networking without spying on its users. There's no rule that Facebook has to exist. Sorry, I'm a little strident on this.

Richie: No, I'm glad you're giving your opinions. In that case, what sort of regulations do you think should apply to AI in general?

Bruce: My feeling is that we don't need a lot of new regulations for AI, because what we want to regulate is human behavior. If an AI at a university is racist in its admissions policies, that's illegal; but if a human admissions official is racist, that's also illegal. I don't care whether it's an AI, or an AI plus a human, or a non-AI computer, or an entirely human system; it's the output that matters. The same is true for loan applications, or policing, or any other place you're going to see bias. If I'm concerned about AIs creating fake videos in political campaigns, I'm equally concerned if those videos are made with actors on a sound stage; there's no difference. So in general, my feeling is that we want to regulate the behavior of the humans, not the tools they're using. Now, having said that, AI is a certain type of tool, and we'll need some specific rules, just as poisoning someone is illegal and we also make it harder for average people to buy certain poisons. We do both. So there will need to be some AI-specific rules, but in general: regulate the human behavior and not the AI, because the humans are the ones who are morally responsible for what's going on.

Richie: That certainly seems to help people or managers become more accountable. Okay, we're putting this AI tool into action, but actually we are responsible for what it does.

Bruce: And it's the same if it's a non-AI tool. You put a tool in, and the tool is an extension of your power and responsibility; if the tool does damage, it's your fault. And we have experience with this. A lot of the talk is about what happens when AIs start killing people, but robots have been killing people for decades. There's a steady stream, not a lot, of industrial robot accidents where people die, in the US and Europe and Asia. This is not new. We have experience with robots and AI systems taking human life, and in general it's the company, it's the maintainers; courts are good at figuring out who's at fault. We as a society have experience with that. It is not going to be a new thing.

Richie: So it seems like existing laws are going to largely cover the use cases. Okay, so how do you feel about the new raft of AI regulations? The EU AI Act is the most recent, but there are quite a few on the way.

Bruce: The EU AI Act is, I think, the only one that's real. In the US we have an executive order, but no one thinks Congress will be able to pass any regulation; we can't even regulate social media, and we've been trying for, what, a decade? So I'm not optimistic about regulating AI here. The EU AI Act is good. It's a good first attempt, and you can tell it exhibits the problems with regulating technology: the technology changes. The EU AI Act was being written before generative AI became a thing; then GPT hits the mainstream, and they're frantically rewriting, so the law is kind of half and half. But what is the big thing next
year, and will the law cover it? This is the problem with writing tech-specific laws: the tech changes. What doesn't change? Humans. But in general I like the EU AI Act; it's a really good attempt. I like the idea that they're breaking applications into four different threat levels and regulating them differently. I think we can do more. Banking is a good example: we regulate large banks more heavily than small banks, so banking regulation is keyed to how big you are, because we need regulation but we don't want to strangle the little banks. Now, in tech, the big companies like regulation, because it weeds out the competition. Facebook will be able to meet any regulation any government throws at it; it's Facebook's competition that won't. That kind of thinking is needed here. But the tech is moving so fast that we're going to need a regulatory environment that is flexible and agile, and we're not good at that as a society.

Richie: So should it be the larger companies that have more regulation, or the larger AI models?

Bruce: I think that's probably worth at least thinking about. I don't have answers here; I've kind of been poking at the questions. But yes, looking at some kind of tiered approach would be interesting.

Richie: If we're just looking at regulations, that's going to put a lot of the burden on governments to build trust in AI. Are there any things beyond regulation we can do to increase trust?

Bruce: I think not. Corporations are fundamentally untrustworthy; you cannot have an interpersonal trust relationship with them. You can have a social relationship with them, but corporations are precisely as immoral as the law will let them get away with. That's the way it works: government establishes the rules, and the market plays on top of the rules. If we want to force more trust, you need regulation. No company, no industry has improved safety or security without being mandated to by the government. Ever. Planes, trains, automobiles, pharmaceuticals, workplaces, food and drugs, consumer goods: these are all industries that produced dangerous things until they were forced not to. The market doesn't reward safety and security in the same way, and the market rewards fake trust just as much as real trust, especially when you have individuals working on a near-term reward horizon. So you've got these market failures, where you need non-market mechanisms.

Richie: So if we need regulation, is every country going to make its own regulations, or do we need some kind of global synchronization?

Bruce: Global synchronization doesn't exist, so it doesn't matter what you need; you're not getting it. I'm fine with countries doing their own things. Again, companies complain that there are lots of different regulations and it's hard to meet them all, and again my response is: too freaking bad, get used to it. You're a corporation on the internet; you're a multinational corporation. If you can't handle it, don't be a multinational corporation. This isn't hard. So I'm okay with a patchwork. I'm okay with a patchwork of states: we have state privacy rules that are different in every state, and Meta is doing just fine. They complain a lot, but they're doing just fine. So I'm good with a patchwork. I don't think you're going to get international harmonization anytime soon, because we can't get it on even easier things, and this is just moving so fast. Now, we don't have planetary government.

Richie: I suppose there are probably pros and cons to having a planetary government, and maybe the patchwork is best, at least for now.

Bruce: I'm actually a fan of planetary government. I think in general we as a species have two sets of problems: we have global problems and we have local problems. We no longer have, like, France-sized problems. So I tend to like government that is very big and very small. The medium size was very important in an industrial age, but it seems less important today. That is a bigger conversation than this one, though.

Richie: Yeah, I feel that's a whole separate podcast episode. All right, so one thing that has come up in conversation a few times already is social media. Social media went from being the darling, "this is going to save democracy," with things like the Arab Spring 10 or 15 years ago, to "this is causing a lot of problems"; there's widespread disillusionment anyway. Are there any lessons that the AI industry can learn from social media?

Bruce: Yes. Actually, I wrote an essay that appeared in MIT Technology Review last month about five lessons AI can learn from social media, where I talk about our inability to regulate social media, the problems that caused, and how we can learn from those mistakes. It's things like virality, surveillance, lock-in, monopolization. I think that's the big one: the biggest problem with social media is that they're monopolies, and that means they just don't have to be responsive to their users or their customers, which are actually the advertisers, but their users most importantly. Anything we can do to break up the tech monopolies will be incredibly valuable in all of this. That's the biggest lesson.

Richie: So you think, in this case, for anyone thinking about regulation: these AI companies are going to be monopolies, and therefore you need to regulate them as monopolies? Is that the idea?

Bruce: Yeah. If you are a monopoly, you have a lot more power to shape the market and not respond to market demands. You can operate outside the basic tenets of a capitalist market system, and that breaks the system. So I need there to be
competition I need I need sellers to compete for buyers right that is how I get a vibrant market and if sellers aren't competing with each other for buyers you don't have that Dynamic and that's what's really important um and do you think that sort of that Monopoly Market is going to be inevitable for uh for AI particularly for generative AI or do you think there will be course it's not inevitable it's it's it's it's only enabled by the ability of Corporations to ignore antitrust law you know we have laws in place to try to protect prevent monopolies we didn't enforce them for a few decades which is why we have the big Tech monopolies uh they are starting to be enforced again now but you know the the power imbalance is great so we'll see how uh how it goes but you know EU is doing than the US and in a sense the EU is the regulatory superpower of the planet me I look to them more than the US to uh you know to keep these companies in check it occurs to me that we've been talking about um corporations as being this sort of like single entity but actually they consist of people and in but that's that's kind of a I mean know what's his name said that Ronney said that I mean it it's it's true and it's they are uh Charlie stro science fiction writer talks about corporations of slow AI it's a really interesting parallel that yes they're people but they are this soot technical organism and they have things they do are are greater than the individual people it's like saying that a car is is like you know metal and screws I mean yes and no and you know if you think about it I let's take meta I mean meta could deci de tomorrow not to surveil its people and if they did the CEO would be fired and replaced with a less ethical CEO right the people can't operate like with full autonomy because of the system they're embedded in so corporations are not just a bunch of people in a room they are a highly structured multi-human entity and you cannot reduce them to just people I suppose 
that's true; there are certainly limits on the things I could say. And corporations are immortal, in a way that the people aren't. They outlive the people in them; the people in them come and go. It's like your skin cells: the cells in your body come and go, but who you are is greater than a pile of cells. Maybe that's a better analogy. Skin cells are part of the human; people are part of the corporation.

All right, moving from corporations back to human beings. Within the corporations there are people who are building these AI tools, and we have a lot of them listening in the audience. Do you have any advice for the people who are building AI, or making use of AI at work? What do they need to do to create more trustworthy AI, and can they have some sort of effect?

Really pay attention to applications; the application matters. If I have an AI political-candidate chatbot and it says we should nuke New Zealand, that's a gaffe. If I have an AI chatbot that's helping someone fill out immigration paperwork and it makes a mistake, they get deported. So the use matters a lot. What is the cost of failure? What is the trust environment? Is it adversarial or not? If I have an AI advising me in corporate negotiations, how much does it matter if the AI's parent company is on the other side of those negotiations? So pay attention to the use case; that really determines whether it makes sense. AI assistants are doing a lot of legal work, and as long as there are human lawyers reviewing it, that's fantastic: it makes lawyers more effective. So think about how the humans and AIs interact, think about the systems and the trust that needs to be in those systems, and think about the cost of getting it wrong in addition to how often things go wrong. And notice that that will
change all the time. This field is moving incredibly fast; what is true today won't be true in six months, so any decision you make needs to be constantly revisited.

I think that's quite important. When you're building something, you think, "can I make something that gives the right answer?" But thinking about how it can be misused, and about what happens when it fails, is equally important. And how about people who are just using AI? Is there anything they can do to make sure that AI becomes more trustworthy over time?

Demand better laws is the answer, but beyond that, really, no. AI is already embedded in your mapping software; the AI is giving you directions. What could you do? You either use it or you don't. AI is controlling your feed on TikTok or Facebook; what are your options? There aren't any. So really, for us as consumers, the AIs are handed to us embedded in systems, and that's pretty much like all tech: we either choose to use the systems or not. This is where the monopolies are a problem, because often we have no choice. I could tell you, "don't get a cell phone," but that's dumb advice in the 21st century; it's not viable. So again, I need government to step in and ensure that you can use the cell phone without it being too harmful. And most people believe they have more protections in their consumer goods than they actually do, with phones, with the internet, with cars. There's a story that broke a couple of weeks ago about GM spying on its drivers and selling the information to insurance companies. People were surprised. I was surprised. Kashmir Hill, who writes about privacy for the New York Times, was surprised. Should we be surprised? No. But we believe we have more protections than we do.

It seems like if we don't have those protections, then that's going to break down the
social trust, to go back to your original point.

It does, and I think this is why you're seeing what we mentioned a little earlier: the backlash against social media. We thought it was all good; now we think it's all bad. The truth is in the middle, but we thought we could trust them.

It seems like there are some good possible futures and some bad possible futures. What's your ideal scenario here? What happens next that makes things go well?

I think that AI as a suite of technologies is phenomenal. A lot of what goes well is human plus AI, and a lot of what goes poorly is AI replacing humans, at least today; in a few years it might be different. The more we leverage these technologies to enhance humans, and really to enhance human flourishing, the better we're going to do. Now, that is not necessarily where the market is going to push. The market will push towards replacement, because that is cheaper, but that has a lot of downstream effects. Massive job loss is just not good for society, and we might want to think about ways to help people that aren't tied to their jobs. In the United States, all of your stuff is tied to your job, unlike in Europe. In Europe, they got healthcare through the political process; in the US, we got healthcare through collective bargaining. That didn't matter in the '50s, when both worked, but here we are in this decade, and it means your healthcare is tied to your job in a way it isn't in Europe, and that's not serving us well right now. So if we're going to see massive unemployment because of AI, we need to figure out some other way to deliver basic human services that aren't tied to your job. And that's hard, because we as a society really have trouble doing these things. I've kind of meandered a bit here; get me back on track.

No, we started off going down the happy path and then it sort of turned into a disaster zone. I think
there is enormous power in AI, and I am mostly optimistic. I know I'm a security person and I say a lot of pessimistic things, but I am mostly optimistic here. I think this will be incredibly powerful for democracy and for society, and that's a lot of what I'm writing about now.

So maybe we can finish on a happy note. What is it that you're most optimistic about?

I think there's enormous power in AI as a mediator, a moderator, and a consensus builder. A lot of the things I think AI will do for us are things humans can do perfectly capably; we just don't have enough humans. Putting an AI moderator in every Reddit group, in local online government meetings, in citizen assemblies on different issues, would be incredibly powerful. AI doing adjudication. AI as a sensemaker, explaining political issues to people. AI as an infinitely patient teacher, so that instead of reading a book you engage in a conversation that makes you a better person; if we can get that working at scale, the value is enormous. AI as a doctor: there are parts of this planet where people never see a doctor, because there aren't enough doctors, but an AI-assisted nurse will be just as good in almost all cases. There's phenomenal potential there. AI doing research, especially research that is big-data pattern matching; you're seeing articles about AI and drug discovery, and I think there's a lot of potential there. So really, I look for places where there aren't enough humans to do the work, and where AI can make the humans who are doing the work more effective. Lots of incredibly positive things.

I really like the idea of an AI moderator; that's something that hasn't cropped up in many of these discussions.

Of course, I just said that you are replaceable in this podcast.

Yeah. There's a story: a few months ago, back last summer when ChatGPT was first becoming a big thing, I was interviewed by a podcaster, and
they said, "I went to ChatGPT and asked it to come up with interview questions for you, and here they are." And they were fantastic interview questions. One of them was: "If you were an action figure, what would your accessories be?" I'd never gotten that question before.

OK, so I guess my days are numbered as a podcast host. Maybe look out for an AI Richie.

Isn't there an NPR episode on AI where they had the AI come up with a podcast? It asked the questions and came up with a little sketch on the topic; I think it was a three-part thing. Look it up; it was some NPR program, might have been All Things Considered.

Actually, I just recently saw Reid Hoffman interviewing an AI version of himself, and that was exciting. Yeah, it's getting very close; Richie is replaceable.

All right, so just to finish up: do you have any advice? Is there some action you think people should take in order to get towards this happy path of AI being good?

To me, this has to become a political issue. Nothing will change unless government forces it, and government will not force it unless we the people demand it. Me, I want these things discussed at presidential debates. I want them to be political issues that people campaign on, issues that matter in the same way that inflation matters, unemployment matters, and US-China policy matters. It needs to matter to us; otherwise the tech monopolies are going to just roll over the government, because that's what they do. They have the money, they have the lobbying, and it's very hard to get policy that the money doesn't want. It's really hard.

A call to action there, for everyone to start writing letters to their local representative. Or to get their AIs to write the letters to their local representative.

Oh yeah, to write the letters. There we go: technology solving the problem again. Nice. All
right, thank you so much for coming on the show, Bruce. That was brilliant.

Good luck. Thank you.