Cognitive Biases in Data Science with Drew Conway - #39

The Intersection of Industrial AI and Human Expertise

Industrial AI is a rapidly evolving field that pairs cutting-edge software and technology with human expertise in specific industries, such as manufacturing and engineering. Drew Conway, founder and CEO of Alluvium, sees this pairing as essential to bridging the gap between humans and machines: on one side, software at the cutting edge of technology; on the other, someone who has spent a career working with a particular machine, such as a CNC machine or a boiler, and who knows from a certain noise or vibration that it needs care and feeding.

Conway emphasizes the importance of respecting that expertise and building tools that shift the cognitive responsibility for routine observational work away from the experts. He highlights that "people first" is one of his company's core values, which begins with the admission that the people building the software will never know what a veteran operator knows. Instead, Conway aims to build software that supports these individuals by surfacing the right information and insights at the right time, so they can get back to the work they are genuinely good at.

The Current State of Industrial AI

Conway acknowledges that the data deluge in industrial settings is rarely discussed outside of professional conferences, even though it is as steep there as anywhere else. Meanwhile, the workforce in these industries is fixed or shrinking, creating a problematic asymmetry between humans and machines: the systems are drowning their operators in information faster than anyone can absorb it. This is where Conway's team comes in, aiming to leverage human expertise to bridge that gap.

The Approach

Conway's approach centers on shifting cognitive responsibility for routine observational tasks to computers, so that imminent, high-value decisions can go to the experts. He frames this as increasing the cognitive margin of the people working alongside these machines: technology augments their capabilities rather than replacing them.
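In practice, the hand-off Conway describes can start with something as simple as automated screening of a sensor stream so that a person only sees the readings worth their attention. The Python sketch below is purely illustrative rather than a description of Alluvium's actual system; it assumes a hypothetical stream of boiler temperature readings and flags points whose rolling z-score exceeds a threshold for expert review.

from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window=50, threshold=3.0):
    """Yield (index, value, z_score) for readings that deviate sharply from
    the recent rolling window: the routine observational work a computer can
    absorb so the expert reviews only the flagged points."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= 10:  # wait for enough context before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs((value - mu) / sigma) > threshold:
                yield i, value, (value - mu) / sigma
        history.append(value)

# Hypothetical usage: a synthetic temperature stream with one injected fault.
temps = [210.0 + (i % 7) * 0.3 for i in range(200)]
temps[120] = 260.0
for idx, temp, z in flag_anomalies(temps):
    print(f"reading {idx}: {temp:.1f} C (z={z:.1f}) -> route to operator")

A production system would use far richer models, but the division of labor is the point: the computer watches every reading, and the human sees only the handful that demand judgment.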

The Asymmetry Problem

Conway highlights the problematic asymmetry between humans and machines in industrial settings. Human operators bring adaptability and hard-won judgment, but machine data now arrives faster than any person can process, and a fixed or shrinking workforce only widens the gap. Conway's team is working to address this imbalance by building tools that support human expertise rather than strain it.

The Solution

Conway's answer is software built around the expertise that already exists on the plant floor. Routine observation is handed to the computer, the expert is pulled in only when an imminent, high-value decision has to be made, and the two are stitched together so that each does what it does best. The goal, again, is to increase the cognitive margin of the people working with these machines.

The Role of Experts

Conway emphasizes leveraging that human expertise when building supporting tools. He notes that individual experts inside massive multinational companies carry institutional knowledge that is effectively invaluable to those organizations, and that the honest admission for a technologist is that they will never know what those experts know. Conway's team aims to tap into that expertise, creating software that works in tandem with human intuition and judgment rather than trying to substitute for it.
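One concrete way software can work in tandem with an expert, sketched below purely as an illustration and not as Alluvium's method, is to treat the operator's verdicts on flagged events as feedback that tunes the system over time. The class name, thresholds, and step size here are all hypothetical.

from dataclasses import dataclass

@dataclass
class ExpertFeedbackLoop:
    """Illustrative human-in-the-loop tuner: confirm/dismiss verdicts from the
    operator nudge the alerting threshold, so the software adapts to expert
    judgment instead of trying to replace it."""
    threshold: float = 3.0
    step: float = 0.1
    min_threshold: float = 1.5
    max_threshold: float = 6.0

    def record_verdict(self, confirmed: bool) -> float:
        if confirmed:
            # A confirmed alert was useful: lower the bar slightly so
            # similar events are not missed next time.
            self.threshold = max(self.min_threshold, self.threshold - self.step)
        else:
            # A dismissed alert was noise: raise the bar to stop crying wolf.
            self.threshold = min(self.max_threshold, self.threshold + self.step)
        return self.threshold

# Hypothetical usage: the operator dismisses two alerts and confirms one.
loop = ExpertFeedbackLoop()
for verdict in (False, False, True):
    print(f"updated threshold: {loop.record_verdict(verdict):.1f}")

In a real system the feedback might retrain a model rather than move a single threshold, but the loop is the same: the expert's judgment stays in the loop while the software does the bookkeeping.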

The Future of Industrial AI

Conway envisions a future where industrial AI is seamlessly integrated into industries such as manufacturing and engineering. By combining human expertise with technology that augments rather than replaces it, he believes experts can focus on high-value decisions while the software handles the routine. That combination has the potential to reshape these industries and open new opportunities for growth and innovation.

Engaging with Alluvium

For those interested in learning more about Alluvium and its work in industrial AI, Drew Conway suggests visiting the company's website at alluvium.io. He is also active on Twitter (@drewconway) and answers his email. Alluvium is hiring, particularly data scientists and engineers interested in high-volume streaming data and semi-supervised and unsupervised learning, and Conway encourages anyone interested to reach out.

Conclusion

Industrial AI holds great promise for industries such as manufacturing and engineering. Conway's core argument is that this promise is realized not by replacing experts but by pairing them with software that absorbs routine observation and hands them only the decisions that matter. Done well, that partnership can open new opportunities for growth and innovation.
iio alium the easiest way to engage with me you know I'm I'm on Twitter at Drew Conway c n w y I'm I'm pretty easy I'm a pretty easy Google so you know I I uh I I answer I answer my emails I like to I like to engage with folks so if you have any questions about the company we are um we are hiring as I think everybody is so you know for us the the nucleus of of our company is really around data science and engineering but I think with a specific bent on streaming data and you know semi-supervised and unsupervised learning so if that's the kind of stuff with you know High volumes of streaming data that get you excited we'd love to hear from you great all right well thanks so much thank you Sam it was a lot of fun all right everyone that's our show for today thanks so much for listening and for your continued support of this podcast for the notes for this episode to ask any questions or to let us know how you like the show leave a comment on the show notes page at twiml a.com talk 39 thanks again to our sponsor for the Wrangle conference series Cloud era to learn more about Cloud era and the company's data science workbench family of products visit them at cloud era.com and be sure to tweet to them at @cloud c u d r a to thank them for their support of this podcast if you're interested in joining our Meetup you can register for that at twiml ai.com Meetup and don't forget to sign up for the newsletter at twiml a.com newsletter thanks again for listening and catch you next timehello and welcome to another episode of twiml talk the podcast where I interview interesting people doing interesting things in machine learning and artificial intelligence I'm your host Sam charington a few announcements before we get to the show first I'd like to take a second to give a virtual high five to everyone who entered our latest giveaway in which two lucky listeners will get a chance to attend the recently rebranded AI conference in San Francisco compliments of this week in machine learning in Ai and our friends over at O'Reilly this has been our most active giveaway to date with over 100 of you submitting entries just wow we haven't picked the winners yet as of the time of this recording but by the time you hear this winners will be posted over at twim ai.com aisf and we'll give them a proper announcement on next week's show next up about a month ago during my conversation with Chelsea Finn I thought out loud about starting a virtual paper reading group after receiving lots of positive support for the idea we finally have a Meetup to call our own on August 16th we'll kick off the inaugural twiml online Meetup where our first presenter Community member Joshua manella will lead a discussion on Apple's Gans paper learn from simulated and unsupervised images through adversarial training which is one of the best paper Award winners from this year cvpr conference if you've already signed up great we look forward to seeing you there if not head over to twiml ai.com Meetup to get registered finally if you've been paying attention you know that after almost a year of procrastinating I finally launched my email newsletter I've been having a blast with it and you definitely want to subscribe we've got some fun stuff in store exclusively for newsletter subscribers so make sure you bounce on over to twiml ai.com newsletter to sign up all right as you all might know a few weeks ago I was in San Francisco for Wrangle a great conference brought to you by our friends over at Cloud era Wrangle is a super fun event each year 
it brings an interesting and diverse community of data scientists to an intimate and informal setting this year a music venue in sf's Mission District for great talks on real data science projects and issues not to mention cowboy hats and barbecue while I was there I had a chance to sit down with a few of the event great speakers including Drew Conway founder and CEO of ovium and a former data scientist with the CIA sherith Ral a listener and fan of this podcast and an engineering manager over at instacart and Aaron Shellman a statistician and data science manager with zyren a company using robots and machine learning to engineer better microbes this show features my interview with Drew whose Wrangle keynote could have been called Confessions of a CIA data scientist the focus of our interview and the focus of Drew's presentation is an interesting set of observations he makes about the role of cognitive biases in data science if your work involves making decisions or influencing Behavior based on datadriven analysis IES and it probably does or will you'll want to hear what he has to say a quick note before we dive in as is the case with my other field recordings there's a bit of unavoidable background noise in this interview sorry about that and now on to the show all right everyone I am here at the Wrangle conference the guest of Claud who's sponsoring our series here and I am with Drew Conway who is the founder and CEO of alium it's going to be with you Sam it's great to have you on the show uh so you just did a really interesting presentation that I tweeted a little bit about why don't we take a minute to have you introduce yourself to the audience sure so I am the founder and CEO of alivium we're a new york-based company that builds what we call human- centered AI for the industrial industry and and what that means for us is we build uh software products that exist at the intersection of complex machine data so think streaming data from an oil refinery uh data from automated robotic systems and uh human knowledge one of the things that I can even talk a littleit about in the context of what I just presented here at Wrangle is you know me my career has really been one that has kind of moved me through a path of working alongside and building software tools for people who need to make decisions from data and and alium in many ways is a kind a culmination of a lot of thinking I've had over my career as to how best to extract maximum value from these complex streams of data and present a human being you know a man or woman who's working inside an industrial operation with the right information at the right time to make the best decision so the company's just about 2 years old um we work primarily in what we call processed manufacturing so sort of distinct from discret manufacturing and the way to the way that I like describe as like scret manufacturing is screwing in bolts and and and rivets and process manufacturing is typically you know pipes and boilers things like that um and the reason that we focus on on that second half is is our approach to learning is really one where this continuous nature of information is more well suited for for what we're doing mhm interesting particularly interesting because I've been spending a lot of time researching industrial AI or industrial applications of AI and and we're in the midst of a series of podcasts on Industrial AI although we're connecting here in a totally different context and so uh why don't we jump into that tell us about your 
presentation here at Wrangle sure so yes so the early part of my career when I first started a career was actually as a what was called then a computational social scientist inside the US intelligence community and so uh this was a few years before data science was a profession sort of well-known profession and and I think probably the people who were doing the work that I was doing then now are probably called data scientist but what my job primarily was was to think about how to build statistical programming and Cal software tools to support decision- making inside the intelligence Loop and basically that that meant my work was split primarily into two big halves one half was what I would call principally Big basic research so things that are more academic in nature I spent a lot of time thinking about graph theoretic models of uh Network change and network um networks moving over time the other half of my work was sort of much more technical in nature so it was really building custom software and even kind of documentation systems for taking in a very wide breath of different kinds of data so all the way from space-based assets and satellite IM iny to signals intelligence to ground sensing to an unstructured written report from some PFC in the field and being able to kind of distill all that information down in a reasonably timely way to help you know a sergeant major who's in the field and needs to know whether that they go you know knock on this door is the person they're looking for going to be there if they go inspect this this Shack will they find the weapons that they're looking for yeah so it was anyway it was it was a fascinating um place to kind of start my career and what I spoke about this morning was really a lot of the lessons that I learned from that with respect to how data science as a sort of profession and as industry can kind of get wrapped around the axle on bias and how to overcome that and how to help yourself as a professional data scien is really how to help the folks around you better leverage and use the tools that you have because everybody brings bias to their work and and I think my experience in the inel community was one where I was sort of acutely aware of that because the stakes were relatively high right right interesting so yeah one of the one of the silly questions that I had for you was did you have to get your slides approved by the uh no no no no so yeah you know all things in in in my slides were uh you know sufficiently generalized that there's no need to do that but you know one thing that I did mention in the talk which is is sort of an interesting context for being a data science inside the Intel Community is sort of your access to tools you know so we you know we as a sort of collective community of people doing this work we think about just this ready access to the latest and greatest open source tools and then you know as soon as Google or Facebook or whomever kind of Open Source this great new thing well let's figure out a way to play with it and going to get it into our workflow that is absolutely not the case when the equipment that you're working you know the literally the computers that you're doing your work on are classified pieces of equipment and so one the stories that I told in my talk was one where I was trying to overcome this motivated reasoning as an example of how that problem of motivated reasoning can actually be addressed through a kind of deliberate technical approach to analyzing data and in this case we were we were 
looking at satellite imagery the context here was the sort of NeverEnding search for weapons of mass destruction in Iraq you know the timing here is sort of mid 2000s 2005 2006 and as I mentioned in my talk there I had the opportunity to work with these two what we call IMT or image intelligence analysts who had been working for I mean by the time I got to know them I think they've been past their 30-year Mark and so these were these were people with a tremendous amount of expertise and they were tasked with analyzing image images satellite images taken of Iraq and try to identify any places where you might see we of mass destruction and one of the projects that we worked on which they themselves asked for was their intuition was that there were no they were never going to find stuff and unfortunately because of this issue of motivated reasoning they were continuing to be asked to just look and look and look at no end and so we basically came up with this novel idea is like well why don't we try to automate this process so that you can in aggregate show by automatically analyzing images and using very simple classification to try to identify where oh if and there are any indications of suspicious activity if that actually exists because you know two men doing this on their own they could be at this for the rest of their careers and so the reason I bring this up because it ultimately became a really interesting exercise and trying to get new tools in the building so uh you know one of my greatest achievements I think as a in this job was actually not technical but bureaucratic in that I was able to actually get you know early versions of Scientific Python and open CV installed on a classified machine so that we could write a simple classifier to try to help these guys out and ultimately the the success was that a we were able to do that but B we were able to show and aggregate that there was really nothing there and ultimately were able to use that as a way to dis dislodge these guys from having to continue to pursue something that ultimately they knew knew they were not going to find and you think that's changed at all uh the ability to get new technologies open source into these environments over you know since you left that I think it's improved quite a bit you know and it's funny to think I mean 10 years is is a long time in the sort of tech uh timeline and there you know large bureaucratic institutions like the military and Department defense are always trailing indicators but one of the things that I've been very impressed by with the folks that I know that continue to work in this space is their sort of progression to being able to bring new Tools in you know there there's part of this which is there I think there are better tools available commercially and so that's always been an easy way to acquire new stuff but having a not only an appetite but a a sort of Avenue for bringing in open source tools I mean I know even even um AWS works with the Intel Community now and creates you know distributed compute for them to actually use some of these tools and it's I think it's been obviously a real Boon for their work but I think more importantly for recruiting and retention for you know very talented people yeah so you mentioned motivated reasoning in the context of the story of pulling technology into uh one of the agencies tell us you know what that means and and uh spend some time talking about the the other biases that you talked about your cuz that was the bulkier presentation was 
these biases so motivated reason is the easy one right we see it all the time it's basically I I ask you a question because I I want you to give me the answer that I already know right and motivated reasoning is a particularly Sticky Wicket when it comes to intelligence gathering because policy makers will oftentimes have predetermined policy biases right and so your job and as I mentioned in my talk you know particularly if you're working in the civilian intelligence you know principally the CIA you have one customer your customer is the president and and by extension the White House yeah and so you know motivated reasoning can be very problematic if you have a customer with high levels of motivated reasoning and there are a number of examples of that throughout history you know I spoke about the sort of search for weapons of mass destruction in the early part of the 2000s but there are lots of other biases right and I think the next stop and and the one that I mentioned uh during the talk is his idea of confirmation bias which in many ways I think is sort of the you know the the cousin or the the direct relative of motivated reasoning which is maybe I don't already have a predetermined policy outcome that I'd like to see but I sure have a lot of bias in accepting you know analysis that confirms a thing that they already think is true right and and that is again this incredibly problematic lens through which to observe information that in and of itself is very hard to collect you I think in the in the context of data science people people often talk about bias in the data generating process and it's true it's everywhere you know if you work at a big social media company you have access to a tremendous amount of data but some engineer along the way chose to collect that clickstream you know that wasn't that was done by a product manager who decided that they wanted to track a specific kind of action right and that's bias right when you move into a kind of Intel context where there's policy decisions that need to be made off of it you have an extremely limited set of kinds of information that you can get and often times that information is is brought to you in sort of opportunistic way you know you know one of the this is well after my time in the Intel Community but of course after you know after the raid at Osama bin Len's base camp a lot of the the most valuable stuff that came out of that were all the laptop computers and information that they really glean there you know obviously that data is highly biased and that was a that was a collection of opportunity there was no there was no like AB test and no experiment to try to identify which would be the best Pathway to do that yeah and so confirmation bias becomes extremely dangerous if you have sudden access to a new data set that you didn't already have and without taking a deliberate approach to analyzing it may reinforce that bias in a very dangerous way yeah the last one that I mentioned during the talk and this is sort of where I left the talk and I think is something that we as a community really need to think about in the context of our work today now is sort of Flatout denial that we we as data scientists and we really as sort of technicians because I wouldn't bucket this only for people who you know write statistical code right all all folks who build tools with software have this attachment to both information that is biased and imperfect and tools that are you know sort of uh approximations of of good ways of estimating what 
we think is occurring in the real world but the real world is really hard to measure and so this kind of Trifecta of imperfection means that everything that we do should be you know fraught with caveats and considerations for all of that stuff but what that means is that we are we accept the fact that we open the door to people who will try to poke holes and deny that that's true even if we can make a very confident assessment of something and we can show it to be true but because of good hygiene around doing data science it's very easy to P holes in that and I think there's I think part of what we need to think about as a community is well how do we prepare ourselves for that how do we get better at communicating this kind of you know these foibles in our in our work that we cannot pull away but also how do we how do we get the consumers out of that information to be more willing to accept and more educated to a certain extent about that reality I mean one of the things that I mentioned during my talk was you know this the election in 2016 US election in 2016 I think was a great example where people just had this expectation that you know you get Whoever has the higher percentage of winning is the winner right because we have this kind of horse race mentality but anybody who's ever been to a casino would know that if you if there was a game at the casino that gave you you know 25% out of winning you would never leave that game yeah you know and that's essentially the game that we ended up playing and we just got the one in four chance that we didn't expect to see right and I think part of that is this you know people just need to get a better understanding of and it's it part of it I think is sort of becoming more numerate but I think part of it is just we we're responsible for that the data commun is responsible for kind of conditioning folks to understand that better I think folks like Nate silver do a great job and there's I think there's a lot of success in the world with that but we need more of it yeah yeah it's it's it strikes me that it's a bit of a fine line between you know denial and maybe it's not a fine maybe it's a golf but you know you know there's Denial on one hand and then there's you know challenging the results and I think you know we need to be as a community be open to having our results being challenged because the flip side of that is this idea that you know data and quote unquote AI is matching right and whatever it says that's what we got do right and that's a whole different you know that a whole different set of challenges I totally agree and I think you know we are we are right at the beginning of what I think will be a very interesting you know future immediate future for you know sort of General consumer because it seems like the technology trend is to move in the direction of effectively using black boxes or Solutions and that means that we accept the fact that it will be very difficult to understand why a system makes a choice MH and I think we are opening ourselves up to a really difficult set of circumstances that will very likely come sooner or later where you know int you know intelligent software systems whether you want to call it AI or sort of a general class of just not decision support systems but decision- making systems that we don't understand and you know I think the danger now is that in some sense we've move so quickly in the direction of having access to tools like this that the education component is just not kept up and I think the flip 
Yeah, it strikes me that it's a bit of a fine line, or maybe not a fine line, maybe it's a gulf, between denial on one hand and challenging the results on the other. As a community we need to be open to having our results challenged, because the flip side of that is the idea that data and, quote unquote, AI is magic, and whatever it says is what we've got to do. That's a whole different set of challenges.

I totally agree, and I think we're right at the beginning of what will be a very interesting immediate future for the general consumer, because the technology trend seems to be moving in the direction of effectively using black boxes as solutions. That means we accept that it will be very difficult to understand why a system makes a choice, and I think we're opening ourselves up to a really difficult set of circumstances that will very likely come sooner or later: intelligent software systems, whether you want to call them AI or just a general class of not decision-support but decision-making systems, that we don't understand. The danger now is that in some sense we've moved so quickly in the direction of having access to tools like this that the education component just hasn't kept up.

The flip side of this, which in many ways is the complement to it, is that I also think there's an unnecessary amount of fear. Now we have the pendulum swinging back the other direction, where folks are creating a kind of anxiousness around the arrival of tools and systems like this that doesn't really match the reality of where that technology trend actually is. Consumers are being conditioned to not trust ATMs and to not trust subway doors that open and close on their own, and that friction is really starting to heat up. Again, we're the folks who build these systems, and because of that it's partly our responsibility to go out, try to talk to folks, and say, this is how this works. I think we lack, in some sense, good champions for this stuff. We have lots of really good entrepreneurs and technologists who can build it; we need to find the champions who can go out there and help folks understand how it is going to change their lives, for good, and potentially in ways we don't even understand yet.

Yeah. I've got to ask: the ATM and the subway door, are those specific examples?

No, no, I'm just thinking of the fear, uncertainty, and doubt you hear around this stuff. Actually, those are more examples from my personal life, where I'm asked by family members when I'm going to show up to the ATM and it's going to be hacked, or when autonomous vehicles are going to turn on their owners. People have seen the Fast and Furious movies and they ask, is this how things are going to work? I don't know, but I also know that future is a little bit further away than Hollywood is presenting, and there's a long way to go before we actually have to start thinking about it. Because there is time, part of our job as a community is to try to fill that information gap a little bit.

The speaker after you, in running through a similar set of issues applied more societally, gave the audience the advice that as you're thinking about the systems you're building, you should think about the degree to which they resemble an episode of Black Mirror.

Yeah, if there's a useful rubric, that's probably a really good one. If you start to bleed into Black Mirror territory, you've probably made some choice you might want to reconsider.

And for those who don't know Black Mirror, probably the best analogy is that it's like a British version of The Twilight Zone.

Yeah, it's a British version of The Twilight Zone, but modern. The Twilight Zone dealt with lots of different kinds of social issues; Black Mirror tends to focus on technology as the consistent thread. I'm a big fan.

Same here, same here. So what are the top three things you want folks to take away from your presentation? And not just take away, but go do.
I think the first one is: you walk out of my talk, and maybe tomorrow or this afternoon you go back to your desk at the office, sit down, and think about how confirmation bias, motivated reasoning, and opportunities for denial exist in the current decision-making workflow in your company or organization, and how your own work may be contributing to that or helping to solve it. Not to be overly negative, but everybody can think of examples of this. I run a company and I can think of examples: I'm heavily motivated to sell my products, and that will impact my decision-making. I've told my team that I have high expectations of them to step in and say, hold on, Drew, why do you think that's a good choice to make? That applies specifically in product building, even in model selection for some of the algorithms we may choose. This stuff is not abstract; these are real examples.

The second one, which I think is maybe more fun, is: go out and see it firsthand. Go try to meet folks and see systems where this intersection of biased data and decision-making exists in your community, in local government, in local community organizing. This stuff is everywhere, and one of the things that's nice about professionally applying statistical methods and computing in this context is that you can bring your own expertise to a potentially much smaller-scale problem that impacts many more people's lives, including the lives of your neighbors. I've had a great opportunity in my life to work with the government in New York City and with local communities there, and I was a co-founder of an organization called DataKind, which really grew out of the question of how you bridge the gap between talented data scientists, engineers, and product managers and a social sector that has great data and really interesting problems but doesn't have access to that talent. DataKind exists to do that, and you can go sign up on DataKind's website; that's one way to do it. But I think an easier way is just to show up to a community meeting and see what the Board of Ed is talking about. They're making decisions about what data to collect on students, and you can help them make better choices.

The final one, which was the closing call to action of my talk, is that if you're in a position to hire people, which I'm sure many of your listeners are, you ought to think hard about hiring veterans. I had an opportunity, as I said, to work alongside active-service folks in the military, and they're some of the most brilliant, dedicated, technically competent people I've ever had a chance to work with. There are many underrepresented groups in technology, and veterans are a group that people don't talk a lot about. The transition from a career as, say, a signals analyst on a submarine to working as a data scientist is actually a lot narrower than folks hiring technologists tend to think, and part of the issue is that a lot of people who come out of the service and try to transition into a professional job just don't even know that these jobs exist.
I've worked with an organization in New York called the Iraq and Afghanistan Veterans of America to try to build access and bring these two communities together. That's something I do in New York; here in San Francisco all of these organizations exist too, and I would encourage folks in all of their communities to try to do that kind of work.

Yeah. How do the biases that you talked about express themselves in your customers when they're trying to apply industrial AI and data-driven decision-making? How do you see them show up?

That's a great question. We see a lot of the bias coming from customers at a point where there's so much buzz around AI and machine learning in the context of their work. I know you're doing this series, and Industry 4.0, digital transformation, all of these terms of art basically mean: we're not a software company, but we believe we need to transform part of our business to rely more heavily on data and software, because that's what we think we need to do. But those are just words. When you get to the practical implications, you start talking to folks whose job is to make sure the oil refinery never has a safety issue again, and the question is how to do that. Because there's a potentially very large white space around what AI and machine learning could actually do to solve that problem, it's very difficult for that site coordinator or plant manager to think creatively about how to solve it.

So the bias we saw, particularly in the beginning, in the first year of Alluvium, is that when you try to rely on the customer's creativity, they come back and say, I don't understand this; I don't see how it could possibly help my work, because there's no tangible connection between all of this fancy math and compute you want to throw at it and my workforce getting home safely every day and my facility producing at maximum yield. The journey we've gone through as a company has been figuring out how to go more narrowly after a set of metrics and values that can answer that question, because the denial bias says: I'm going to poke holes in this, because it's easier for me to say no than to do the work to understand it. It's hard to fault a customer for that. Our customers are folks who have worked 15, 20, 25, even 30 years on an oil rig, on a manufacturing floor, in a power station. Software is not their job; software is my job. So I should be able to help them with that, and minimizing that gap is a way of minimizing that bias: helping them connect the dots between the work they do every day, the service we can provide, and how that makes their work better.

Interesting comments around the role of software. We throw around ideas like "software is eating the world," and you have big industrial companies like GE and Ford saying they've committed themselves to becoming software companies. On the one hand, and maybe I'm a cheerleader for the industry here, everyone at these companies needs to start thinking at least a little bit more like a software person in some way.
But you're right. One of the things that has come up repeatedly in talking about industrial AI in particular is how you are fundamentally trying to pair something that's at the cutting edge of software and technology with someone who has been working with a particular machine, a CNC machine or a boiler or something like that, and who knows it. They're the CNC whisperer: that thing makes a certain noise, they know it's going to need some care and feeding, and they know how to do it. I think it's an interesting responsibility for us as technologists to figure out how to bridge those worlds.

Yeah. At Alluvium we really think about this as a first-class part of what we're trying to do. We have a set of company values, and one that we hold dear is the idea that we want to put people first. That's easy to say; in the context of our work, what it means is that I have no idea what a CNC whisperer knows, and I never will. I could start today and work through the rest of my career and never be as good as that. We've met people who are those folks: single individuals working in massive multinational companies who themselves hold institutional knowledge that is probably invaluable to those organizations. So to build software to support that, the admission you have to make as a technologist is that you will never know what they know. How, then, can you build tools that shift the cognitive responsibility of that individual away from constantly checking that CNC machine and pulling data off of it to catch exactly the series of vibrations, or heat, or spin that may indicate imminent failure, and toward providing them with information, in a timely manner, that draws them to the opportunity to make a choice, to make a decision? Then they can get back to the work they're really good at.

In some sense it's just a matter of increasing cognitive margin. There's this upward slope of data, constantly, everywhere, and in the industrial space, which is largely not talked about outside those kinds of professional conferences, it's just as steep or steeper. But the labor dynamics in those industries are fixed or shrinking, so you now have this very problematic asymmetry between humans having to understand data and make decisions, and systems that are just drowning them in information. For us, the simplest way to bridge that gap is to leverage the expertise they have, so that the shifting of cognitive responsibility for the routine, boring, observational stuff can go to the computer, and the imminent, high-value, need-to-know-now decisions can go to those experts, and really stitch that together.
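As a purely illustrative sketch of the kind of shift being described here (this is not Alluvium's product or method; the sensor, window size, and threshold are invented), a simple rolling z-score over a stream of vibration readings lets the computer do the routine watching and only pulls the expert in when a reading drifts far from recent behavior.

```python
from collections import deque
from statistics import mean, stdev

# Illustrative only: flag vibration readings that deviate sharply from the
# recent baseline, so a person is alerted instead of watching constantly.
# The window size, threshold, and sensor values are all made up.

WINDOW = 50          # number of recent readings that define "normal"
THRESHOLD = 4.0      # how many standard deviations counts as anomalous

def monitor(readings):
    """Yield (index, value, z_score) for readings that look anomalous."""
    window = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        if len(window) == WINDOW:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
                yield i, value, (value - mu) / sigma
        window.append(value)

if __name__ == "__main__":
    import random
    random.seed(0)
    # Simulated vibration sensor: steady noise with one injected spike.
    stream = [random.gauss(1.0, 0.05) for _ in range(500)]
    stream[400] = 2.5  # hypothetical pre-failure vibration spike
    for i, value, z in monitor(stream):
        print(f"reading {i}: value={value:.2f}, z={z:+.1f} -> notify the expert")
```

The detector is deliberately crude; the point is the division of labor, with the machine handling routine observation and the person handling the judgment call when an alert fires.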
Right, right. Awesome. What's the best way for folks to check out what you're doing, track you down, and engage with you?

The website is alluvium.io, that's a-l-l-u-v-i-u-m dot io. The easiest way to engage with me is Twitter; I'm @drewconway, and I'm a pretty easy Google. I answer my emails and I like to engage with folks, so if you have any questions about the company, reach out. We are hiring, as I think everybody is. For us, the nucleus of the company is really data science and engineering, with a specific bent on streaming data and semi-supervised and unsupervised learning. So if high volumes of streaming data are the kind of thing that gets you excited, we'd love to hear from you.

Great. All right, well, thanks so much.

Thank you, Sam, it was a lot of fun.

All right, everyone, that's our show for today. Thanks so much for listening and for your continued support of this podcast. For the notes for this episode, to ask any questions, or to let us know how you like the show, leave a comment on the show notes page at twimlai.com/talk/39. Thanks again to our sponsor for the Wrangle conference series, Cloudera. To learn more about Cloudera and the company's Data Science Workbench family of products, visit them at cloudera.com, and be sure to tweet them at @cloudera to thank them for their support of this podcast. If you're interested in joining our meetup, you can register at twimlai.com/meetup, and don't forget to sign up for the newsletter at twimlai.com/newsletter. Thanks again for listening, and catch you next time.