#148 Why AI is Eating the World with Daniel Jeffries, Managing Director, AI Infrastructure Alliance

**The Future of Automation: A Technological Revolution**

The concept of automation has been discussed for many years, and it's finally becoming a reality. With advances in technology, we're seeing a proliferation of agents that can perform tasks on their own, making our lives easier and more efficient. What's exciting, however, is not just the existence of these agents, but the potential they hold for transforming industries and aspects of our daily lives.

**Agents: The Next Generation of Automation**

Agents are essentially programs designed to perform specific tasks, often autonomously. They can learn from new information and adapt to changing situations, making them incredibly powerful tools. For example, at the AI Infrastructure Alliance, we built a small agent that was fed 2,000 companies' websites and summarized each one based on its suitability for joining a particular program. The results were striking: the summaries were roughly 95% accurate, and of the 50 companies we then contacted, 10 responded and two eventually joined.
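As a rough illustration of that kind of pipeline, here is a minimal sketch of a "summarize and rank" agent. All names here (`CompanyProfile`, `score_fit`, `shortlist`) are hypothetical, and the real LLM summarization call is replaced by a simple keyword heuristic so the example stays self-contained; the Alliance's actual system is not described in this detail.

```python
from dataclasses import dataclass

@dataclass
class CompanyProfile:
    name: str
    homepage_text: str

def score_fit(profile: CompanyProfile, keywords: set) -> float:
    """Toy stand-in for the LLM call: the fraction of program keywords
    that appear as words in the company's homepage text."""
    tokens = set(profile.homepage_text.lower().split())
    hits = sum(1 for kw in keywords if kw in tokens)
    return hits / len(keywords)

def shortlist(profiles, keywords, threshold=0.5):
    """Summarize-and-rank step: keep companies whose fit score clears
    the threshold, best first. These are the candidates to contact."""
    scored = [(score_fit(p, keywords), p) for p in profiles]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p.name for score, p in scored if score >= threshold]

profiles = [
    CompanyProfile("Acme AI", "We build AI infrastructure tooling"),
    CompanyProfile("Bakery Co", "Fresh bread baked daily"),
]
candidates = shortlist(profiles, {"ai", "infrastructure"})
```

In a real agent, `score_fit` would prompt a model with the fetched page text and the program's criteria; the ranking-and-threshold step around it would stay essentially the same.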

**The Democratization of Automation**

This is not a phenomenon limited to tech-savvy individuals or large corporations; automation is becoming democratized and accessible to everyone. Traditional RPA (Robotic Process Automation) required significant investment in software and infrastructure. With LLMs (Large Language Models) and other AI-powered tools, however, we can now create agents that read text, understand its meaning, and perform tasks with remarkable accuracy.
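The contrast can be sketched in a few lines. Classic RPA scripts match an exact layout and silently break when the document changes, whereas an LLM-backed extractor tolerates variation. The function names are hypothetical, and the "understanding" step is stood in for by a forgiving regex purely for illustration, since a runnable example can't call a real model here.

```python
import re
from typing import Optional

def rpa_extract_invoice_total(line: str) -> Optional[float]:
    """Classic RPA-style rule: expects exactly 'Total: $<amount>'
    and fails on any other layout."""
    m = re.fullmatch(r"Total: \$(\d+\.\d{2})", line.strip())
    return float(m.group(1)) if m else None

def tolerant_extract_total(text: str) -> Optional[float]:
    """Stand-in for the LLM step: finds an amount near the word
    'total' regardless of layout. A real agent would prompt a model."""
    m = re.search(r"total[^0-9$]*\$?\s*([\d,]+\.\d{2})", text, re.IGNORECASE)
    return float(m.group(1).replace(",", "")) if m else None
```

The brittle rule handles only the format it was written for; the tolerant version survives rewording, which is the practical difference LLM-based automation brings to RPA-style work.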

**The Power of Continual Learning**

One crucial aspect of agent technology is continual learning. A breakthrough in this area could revolutionize how these programs learn and adapt. Imagine an agent that can continuously update its knowledge base and apply it to new situations; the implications are staggering. Currently, we're developing systems that can compress and download new skills into existing agents, allowing them to stay up to date without significant retraining.
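One plausible mechanism for "downloading" a skill, assuming a LoRA-style adapter approach rather than anything specific to the system described above, is a low-rank weight update: the skill ships as two small matrices A and B, and merging computes W' = W + alpha * (A @ B) without retraining the base weights. The sketch below uses plain Python lists to keep it dependency-free; `apply_skill_pack` is an illustrative name.

```python
def apply_skill_pack(weights, delta_a, delta_b, alpha=1.0):
    """Merge a low-rank 'skill pack' into existing weights without
    retraining: W' = W + alpha * (A @ B).
    Shapes: W is n x m, A is n x r, B is r x m, with r << min(n, m),
    so the skill pack is far smaller than the base weights."""
    n, m = len(weights), len(weights[0])
    r = len(delta_b)
    merged = [row[:] for row in weights]  # copy; base weights untouched
    for i in range(n):
        for j in range(m):
            update = sum(delta_a[i][k] * delta_b[k][j] for k in range(r))
            merged[i][j] += alpha * update
    return merged

base = [[1.0, 0.0], [0.0, 1.0]]      # toy 2x2 base weight matrix
skill_a = [[1.0], [2.0]]             # rank-1 skill pack, left factor
skill_b = [[0.5, 0.5]]               # rank-1 skill pack, right factor
upgraded = apply_skill_pack(base, skill_a, skill_b)
```

Because only A and B need to be distributed, a "skill pack upgrade" is cheap to ship and can be unmerged or swapped, which is the appeal of adapter-style approaches for keeping agents current.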

**A Future of Automation**

As we move forward, we can expect to see a huge proliferation of agents in various industries, automating tasks that were previously considered tedious or boring. This will free up human workers to focus on more creative and high-value tasks, leading to increased productivity and efficiency. Infrastructure development will become even more crucial in supporting these systems, ensuring they remain stable and secure.

**Middle Ground: Balancing Progress with Caution**

While it's easy to get caught up in the excitement of technological advancements, we must also acknowledge potential risks and challenges. As with any significant change, there will be those who predict the end of the world as we know it (and sometimes they're right). However, for the vast majority of use cases, these agents are going to make life easier, safer, and more efficient.

**The Role of Human Creativity**

Don't worry - artists and creatives are not obsolete just yet! Agents will become another tool in your toolbox, allowing you to focus on what matters most: creating something new and innovative. Think of it like using Photoshop or a paintbrush - these tools have changed the game for artists, and agents will do the same.

**A Time of Great Change**

In conclusion, we're living in an exciting time, with technology advancing at an unprecedented pace. The concept of automation is no longer just a theory; it's becoming a reality that will transform various industries and aspects of our daily lives. By embracing this revolution and adapting to its implications, we can unlock new possibilities for human creativity, innovation, and progress.

**A Final Word**

Thank you so much for joining me on this journey into the world of automation. It was an absolute pleasure discussing these topics with you. I hope you've enjoyed our conversation as much as I have. Let's continue to explore the possibilities that agent technology has to offer - it's going to be an incredible ride!

**Daniel Jeffries:** A lot of people are worried the old companies and the old tech companies are going to dominate again. This is one of those cycles where the new companies that become the Fortune 500s twenty years from now are spawned, and that's because they're going to be more agile and they're going to look at AI, and how you communicate with things, much differently.

**Host:** So Dan Jeffries, it's great to have you on DataFramed.

**Daniel Jeffries:** Thanks so much for having me. I really appreciate it.

**Host:** Likewise. You are the managing director of the AI Infrastructure Alliance and former CIO at Stability AI. You also have a pretty awesome Substack that I recommend everyone read, called Future History. You're such a prolific thinker and writer about AI that there are so many directions I could take our conversation in, but what I want to deep dive with you on is how AI will become the interface for a lot of the work we do, and really for how we interact with software as we know it. So walk us through what you mean by AI being the interface for the world, or for work, and how you think that will play out in practice over the coming years.

**Daniel Jeffries:** I want to give credit where credit is due: I stole the "AI will be the interface to the world" idea from the brilliant François Chollet, the author of Keras and a brilliant thinker in his own right. I had the privilege to meet him one time early in my AI career, and I thought he was a brilliant fellow. I loved that idea, and it made perfect sense to me: the idea that basically we're going to talk to this thing. It's kind of like Snoop Dogg said in that conversation: "Man, I can understand this thing. I can talk to it, and it can talk back to me. Am I in a movie right now, or what?" And that's exactly right. Snoop Dogg is brilliant too, and he kind of nailed it.

Right now you have the big companies being very conservative with their chatbots. They've got to make sure you still go to that recipe site or whatever. But who the heck wants to go to that recipe site when it's become six pop-up ads and an ad every other paragraph? It's super annoying. As soon as a company comes along and says, "We're going to make an interface where you chat, tell it what kind of food you're in the mood for and what your dietary restrictions are, and boom, here are three things you can eat," without you ever going there, that involves a new business model. The old business model gets taken down a little; it's not going to be totally destroyed, that's crazy, but a new business model that supports this will evolve over time. We don't know what it will be, but as soon as it arrives, it starts to displace the old way of thinking, and the incumbents have the innovator's dilemma: they get stuck. Just like Kodak: "Well, we've been working on this film for a hundred years. This digital thing looks cool, but it kind of messes with the original business, so let's not go too far." And then somebody who doesn't care about that comes along and replaces them.

So I think it's about being able to naturally converse with things. Somebody recently said there won't be any programmers. I totally disagree; I agree with David that there are going to be more programmers. I'm a crappy web designer. I can use Photoshop and some other things, but I couldn't write the XML. Give me a drag-and-drop editor, though, and all of a sudden I can put together some pretty cool websites. I think we're going to have more people programming like that, more people able to talk to their applications, with the application understanding them and becoming kind of a friend, in a way. I think that's how most software is going to function. Whether it's always talking or typing, who knows, but we're increasingly going to be able to just describe what we need and get better and better output that we can iterate on. Think about an artist, maybe a guitar player playing a song, iterating: "Okay, give me 30 continuations of that." Listen, listen, listen. Oh, number seven is cool, let me try that. "Okay, I just changed this. Now give me 15 variations on that." That kind of co-collaborative relationship with AI is going to be a very exciting thing for everybody.

**Host:** That's really exciting, and the co-collaborative experience you're talking about rests in a lot of ways on great user experience and user interface design. You've mentioned the neutron bomb of ChatGPT. One of the reasons ChatGPT was so widely used is not just that the model is very performant and the time to value on a high-quality output is really low, but also that the interface, the user experience, the iteration time, the feedback loop you get when you're speaking or chatting with it is pretty great, and you get a lot of aha moments. That's one of the reasons it took off so quickly. In your opinion, what constitutes the ideal interface and experience for interacting with an AI model?

**Daniel Jeffries:** I don't know that anybody knows the answer to that question right now, because the creative process is an iterative process. I was just having this conversation: I was talking with someone about a programmer who was working on an idea of mine, and my friend said, "Well, it's different, though, from what you originally created." And I said, that's the creative process. When I start writing a novel, it doesn't end up exactly the way I originally planned it. There's this co-creative thing that happens, and I think that's what's going to happen with the UI/UX as we go along. We're going to iterate, and at some point we'll say, "Wait a minute, this is a new way to do things," and maybe the best way.

The example I always think about is my friend Chris Dixon, who I knew when we were very young and who is now a famous investor. He was a programmer at the time, when the stylus had just come out on those non-internet-connected pad thingies we had. Most people were designing video games with the click-and-type interfaces that were dominant at the time, and he made a little game where you had to circle the attacking aliens with the stylus. He said: look, you've got to utilize the new capabilities of the interface. I think that's the creative process. The more people play with these things, the more we'll understand what the ideal interface is over time, and when it happens, you'll be like, "Oh, well, of course." That's the thing about any invention: it always looks so obvious in retrospect. That's how you'll know we've gotten there. But I don't know precisely what it's going to look like just yet.

**Host:** What's very exciting here is that we take a lot of the software and tools we use right now for granted. We've kind of reached consensus on what makes a great application on a phone in terms of user interface and user experience, but we had to learn that as the iPhone was released, as the App Store evolved, and as apps became more and more ubiquitous. We're going through that same process with AI right now. We're seeing more and more AI embedded in every software stack. It's becoming truly transformational, with scary-good applications of AI in tools like Word and Excel, and I think that has a lot of potential to change how we work in general. How do you see that transformation happening? What will our relationship with work look like once ambient AI becomes more ubiquitous?

**Daniel Jeffries:** The change will happen gradually, and then all at once. Look at the diffusion-of-innovation curve, which famously came out in the late sixties and looked at how ideas and technology disseminate; it's been repurposed for every business presentation on the planet. You have the pioneers, the early adopters, the early majority, the late majority, and the laggards, and at each point in time people go from "I don't know, that doesn't make any sense" to "that's kind of interesting, I started to use it" to "I'm using it every day" to "I can't imagine my life without it." That's the progression.

As you were talking just now, I was thinking back to that scene in Blade Runner, which was totally science fiction at the time, where he has the photo of the girl and puts it in the machine: "Okay, pan... enhance... no, go back." The software moves around and searches through the image. There were two sci-fi things in there that were impossible. One was enhancing a photo, which is in every stupid crime drama of all time: "We took this low-resolution VHS footage and now it's HD, and we noticed there's a cat in the reflection of a car window." But now, with generative models, there's a possibility they could actually fill in some of those details. The second was him talking to it, giving it specific commands: "Okay, do this, do this. No, wait, go back," which is a very human command. I think we'll be able to talk to software like that, gesture to it, move things around. If you look at the interface in Her, where there's no controller and he's just walking along talking to the character, and the two of them are having an argument, I think that's how it starts. So again, this stuff happens slowly, and then it kind of happens all at once.

Sometimes it feels like the cycle has sped up a little these days, that we're so used to technology there's an acceleration point and we adapt faster. I'm of the Gen X generation: I lived without all the super-technology, then it gradually came into my life, and now it's a total part of life, so I'm on this weird edge. Whereas I see a lot of younger folks who will pick up a new platform and abandon the old one overnight: "we've all switched over to this." They can learn it as if it were always there, like a tree or anything else; they just know how to interact with it. So there's an even faster acceleration in how we adopt this stuff, which has compressed the timeline. That's exciting.

**Host:** Indeed. If you look at ChatGPT's adoption and how we talk and think about it, it's crazy that it has only been available since November 2022, or something along those lines. It's been less than a year, but it's become so ubiquitous and so widely used. I think this marks a great segue to discuss how you think the AI ecosystem will evolve in the next few years. We've talked about applications of AI in tools and the software stack. It's interesting because we're seeing technology incumbents move quickly yet conservatively, as you mentioned, to adopt AI in their products, and there's a lot of competition right now among foundation models and AI infrastructure providers. I'd love to hear your thoughts on the different players in the industry today and how you see things playing out in the next few years.

**Daniel Jeffries:** I see the research moving tremendously quickly on a lot of different things, primarily because you have traditional researchers empowered with models they can pick up and tune, versus having to have all the money to train from scratch, where the failure rate can be really high. That's accelerating research, which is exciting, and I think open-source foundation models will pick that up further; we're already seeing a lot of cool research there. You're also starting to see a ton of traditional developers and coders get into it. They bring a different perspective, and they're able to do things you might not see in traditional data science, like merging 26 models together, with data scientists going, "That's crazy, it's going to collapse the model," and then it doesn't; it makes a better model. These kinds of engineering moves mark a different phase in any technological development: one person comes up with a way to extract nitrates from the air, as happened in Germany, but then it's engineers who make it a scalable platform, something you can sell repeatedly and crank out in a continuous way. We're seeing that same thing now, with developers and engineers learning about this stuff and bringing their own perspective, which is super exciting.

I see the foundation models as a really capital-intensive business, costly in terms of people, time, and compute. There has to be a breakthrough in making them smaller, or maybe even in learning by example. I always follow the DARPA stuff, because they're always about ten years ahead of the curve, and they're funding research into how AI learns in novel situations and adapts to completely new ones. They describe it like this: you learn chess, but then the rules of chess completely change underneath you. How do you deal with that? Today's AI can't. I was watching a liquid AI network, based on the C. elegans brain, that could be thrown into a novel situation in a drone and find its way, whereas a Transformer was like, "This is a forest. I was trained on the city. I don't know what to do." So I think we're going to see new breakthrough research developments alongside refinement of the old stuff.

What I'm really seeing a lack of is infrastructure. As fast as the research and all these ideas are coming, the infrastructure for AI is really, really primitive. I spent a lot of my life in infrastructure: I had an IT consulting company, I was at Red Hat for a decade, so I've seen these transitions, from bare metal to virtualization to containers, with all the monitoring, management, and security tools that came along. We have none of that in AI yet. I even looked the other day: we wanted to try out a bunch of the open-source models, and it was, "Cool, spin up a single instance on an A100, or a two-way A100, charged by the hour," coming to something like $38,000 a year to run a single instance. That's crazy. Why isn't anybody spinning up a bunch of these models, parallelizing access to them, and charging on a per-token basis? They're not even there yet.

And then there are all kinds of other things missing. I think we need a Red Hat of AI. In fact, I was thinking strongly about starting it as a business, but I just don't want to get up and do it every day, so I'm not going to. I'm giving this away freely, so listen closely. In traditional code and the open-source stack, we could rapidly fix bugs and add skills or upgrades. AI is going to need a similar thing: bug fixes and skill-pack upgrades. Meaning: okay, we added medical knowledge to this Transformer, and now the model is advising people to commit suicide. Great, how do we fine-tune the heck out of that rapidly? For example: take the original model, ask it the same question about why you should never commit suicide, pair that answer with the original question, generate a hundred versions of each question and answer, surface 2% of them to a person, take that one out, take that one out, fine-tune rapidly, and output a bug fix. And that has to be an order of magnitude faster than with traditional code, because these things are so open-ended. In the past you'd say, "This is an SSL library; it can only fail in these ways." These can fail in a lot of ways. So it's not just that people are going to run inference on these things; people are going to support them, and a community will build around rapid fine-tuning, rapid iteration, rapid fixes.

As a user or an enterprise, when I get that model, am I going to have 200 LoRAs, or a thousand LoRAs? I don't even know the upper limit on how many LoRAs you can add without degrading it. Or adapters in general: I say LoRA because it was the first adapter I came across, but I really mean adapters. I don't know whether adapters are the answer, whether they become hot-swappable, whether it becomes a mixture of experts, whether it's fine-tuning. But I know we have a long way to go on infrastructure and support.

Then there's the middleware. I talk to a lot of companies right now building middleware: how do you parallelize or cache these things? How do you deal with prompt injections? I see that almost becoming a new antivirus business, with heuristics or neural nets essentially saying, "Okay, this is a prompt injection, stop it," on both sides of the pipeline. All this middleware and support, monitoring and management: none of it exists, and to some degree we're starting from scratch, because this stuff is non-deterministic. It's not enough to just take your monitoring software and dump it on an LLM. You're going to need a new kind of monitoring that can detect logic flaws. For instance: I asked it to go get a present for my sister, and it's off buying, I don't know, a baseball bat, or it's talking to someone else, or it's just outputting garbage text. We're going to need all kinds of new middleware, monitoring, and management infrastructure. It's going to be a whole new industry, and it's going to take time, just like it took time to figure out how to scale web-scale applications. In the beginning you'd throw a single database at it, and when that wasn't good enough, along came sharded databases, distributed applications, and load balancers. It's going to be the same kind of progression for dealing with these non-deterministic systems. It's going to take some time, and I don't know how fast it comes together.

**Host:** There's definitely a lot to unpack here. Given all the challenges and limitations you point out in the current AI stack, what do you think is the most pressing aspect of AI infrastructure that needs to be fixed within the next six to twelve months to scale the adoption of AI and large language models in general?

**Daniel Jeffries:** You need the basic middleware in place to deal with these models, and you need an upgrade process that's much more mature. When I see stuff like OpenAI saying, "Well, we deprecated the old model, you've got two weeks, good luck," that is totally unacceptable. It's not going to work. You're going to have to have longer-lived models, because if you just swap out the model, it can rapidly degrade your application. Developers need to be able to say, "Great, here are the last six versions of GPT or Claude or whatever." Yes, if one of them needs a security upgrade, that has to happen. But in general you can't just have, "Oh, we swapped out the old one," and now your application that used to summarize text perfectly is falling off by 75% overnight, and good luck. That's absolutely not going to work. That basic-level stuff is totally missing and has to be fixed in the next six to twelve months for this to become viable.

**Host:** What you're mentioning here, specifically in that OpenAI example, is one of the limitations of closed-source API providers, and I think it segues pretty well into my next question, on the trade-offs between open-source and closed-source models. There's a lot of discussion in the industry right now: should we work with an OpenAI, should we fine-tune, should we build a large language model from scratch? How do you imagine these trade-offs will evolve over the next few months, and where do you advise companies looking to leverage large language models to start?

**Daniel Jeffries:** Look, I think OpenAI is not going to be the only game in town anywhere. You're going to have Pi, you're going to have Claude. We already have Claude 2, and my programmer was already telling me it looks much better at coding. You've got the open-source models: Llama will probably come out with a commercially viable version, you've got Gorilla, you've got these other open-source models. There's going to be a proliferation of viable models, which is tremendously exciting. I don't want to see one group dominate all of this.

Open source is tremendously important to me, because it empowers the developers and researchers who aren't the multi-million-dollar researchers everyone's competing for. Not everybody who is super smart or has a great idea is at that level in their career yet. There are a ton of regular folks out there, everyday researchers, who might have a breakthrough because they have access to the weights and can now try out an idea that was maybe too far out to get funding for at a traditional, very expensive foundation-model company. There's a perfect example in the Stable Diffusion community, which we'll talk about more later. The LoRA paper was written for large language models, and the community adopted it for diffusion models, to the point that I saw a Reddit thread in the Stable Diffusion community where the author of the LoRA paper showed up saying, "Hey, I never thought to do this. I want to do a new version that takes into account a larger stack of models, and I want to talk to the community to understand what works, what sucks, what's better." That kind of fast feedback from the open-source community is super important, and it's why open source eventually ends up eating everyone's lunch in the long term.

**Host:** So you think open source ultimately will, in the long run, eat the lunch of the large language model providers of today?

**Daniel Jeffries:** I think so. There are a couple of ways this future can play out. I like to do a kind of Monte Carlo analysis of the future, with these hard branches. There's a lot of weird lawsuit stuff happening right now, where people are trying to redefine copyright. I didn't fully expect that; I saw a lot of challenges and protests coming, but not these kinds of legal moves. I've been an artist for a long time and I have no problem with artificial intelligence, but a number of artists and copyright holders are suddenly up in arms about it. My general sense is that the artificial intelligence industry is way too important to the world economy of the future for the law to fall on the side of disallowing public-domain scraping; I think it naturally falls that way over the long term. But that's going to play out, and it's going to affect the trajectory: whether open source is held back for a decade, or whether AI is held back for a decade. It could change how the research works and force us down the path of something like a liquid neural net, a train-by-example approach, the same way I take a kid out back, throw a ball to him, and after a couple of weeks he knows how to throw the ball. Maybe he's not going to the major leagues, but he knows how to throw a ball. So I'm watching that closely.

The other thing is that right now it's really, really expensive, and I'd say in the short term the proprietary players have the big advantage: they can hire the best people, buy the supercomputer, get all the data, be quiet about it, and license a bunch of data. So they have a big advantage over the open-source providers. My general sense, though, is that over a long enough timeline, open source generally wins out. Open source is this weird, ugly, gnarly thing, and I love it. It's messy. Look at the early days of Linux, which I spent a lot of time with. You'd go, "How the hell is this ever going to work? It's crap. I've got to compile my own kernel. It barely works." The old greybeards at the time said, "This will never succeed, you fools, you young whippersnappers. Get out of here with this crap; you need an enterprise support contract." But over time the swarm of open-source intelligence starts to compound: you get tens of thousands, then millions, of developers working on the concept, and it becomes harder and harder for any proprietary company to compete. You could never have taken all of Microsoft's, Oracle's, and Adobe's money at the time, pulled it together, and built Linux. You couldn't. So over time that openness ends up eating the world. Look at Linux today: it runs everything. It runs supercomputers, it runs the entire cloud, even Microsoft, which, if it had been successful in the early days at killing or crippling Linux, would have shortsightedly crippled its own business today, which essentially runs on it. So I think open source wins on a long enough timeline. I don't know whether that's 5 years, 10 years, 20 years, or 30 years. The proprietary companies have a big advantage right now, and that's typical in an
ecosystem too but I think longterm uh open source has a massive you know has a massive chance to disrupt we may see even a third timeline where they kind of exist peacefully and Co you know and there are huge parts of the stack that are open source and some of it that's just these very proprietary intelligences that are incredibly useful and hard to replicate I think both of those can that all of those are a possibility as well and it's kind of it's one of these things that's sort of a bit up in the ear at the moment okay that's really fascinating Insight uh maybe touching upon the community aspect here you know your previous company cility AI I think is a great example of an open source AI startup that has put itself on the map as an AI leader you mentioned the community aspect of Laura for example U you know from large language models to diffusion models Maybe walk us through the community aspect why it's so important for the progress of AI how you saw play out stability in a bit more detail and why it's so crucial for the future success of AI products again I think it's because you get uh Minds involved in the project who have not passed The Gatekeepers of the kind of current state-of-the-art um what's nice about it if you think about something like for instance when the Kindle came out and allowed direct publishing there was at that time I remember as an art you know as a writer in writing novels with my my group there was a debate about whether you're a real writer if you publish directly or whether you publish with one of the big six Gatekeepers right and uh you know that argument looks ridiculously stupid now and and open you know the Kindle allowed you to keep 70% of your profits and then there was a high there's hybrid Publishers now at the time The Gatekeepers were taking 90% of the profit and giving you 10 okay that'se it's totally insane nowadays it's been much more democratized because of the openness of it yeah you got more too right okay because you 
open the floodgates. The gatekeepers did do a good job of saying, wait, I think this is great, but they still miss things. That's the thing about a gatekeeper: it's a limited chokepoint. It's the same with open source. The community becomes so important because people who might not traditionally be part of machine learning get to contribute their ideas. I saw a lot of ideas in the community, like I mentioned earlier, where they would mash up 20 or 30 models and get a better model, and a lot of researchers said, that's crazy, it's not going to work. Then it did. And then you see that kind of idea filter back into traditional machine learning: they took PaLM and jammed a vision transformer together with it, and all of a sudden the robot could find its way around unfamiliar environments, and they didn't do anything else; they didn't retrain it. That was awesome. So that kind of feedback loop happens. There's the LoRA paper, which I mentioned earlier, and the feedback there. Somebody did an analysis when I was still at Stability of how fast the community was integrating ideas from research papers, and it was something like 18 days from paper to implemented, integrated code; if it was something ready to go, like a plugin, they were integrating it in a day or a day and a half. Or look at something awesome like AUTOMATIC1111 or ComfyUI, two totally different interfaces for how you interact with these things: ComfyUI is the flowing node-graph concept where you link together the different pieces and swap them out, and AUTOMATIC1111 is the kitchen-sink approach where you just throw everything in there. But it's this rapid iteration of ideas and new concepts, bringing in people who can't pass through the gatekeeping but have an idea outside the box that contributes.

And that's interesting. I think I saw Andrej Karpathy, if I remember right, speaking at an agents conference, and he said something like: every time we see a paper inside OpenAI about some new technique, there's somebody inside going, oh, we tried that two years ago, here's why it doesn't scale or didn't work. But every time we see an agents paper, we read it like George R. R. Martin finally put out The Winds of Winter, because it's all new to us; it's totally new stuff people are doing with these agents. That comes from outsiders having access. And it gets even more important when the model is open and you can have the weights. Now there are dozens of adapters that are way more efficient, because people went, well, maybe if we just tweak these weights, or just this layer, or insert it here, or make it smaller. These kinds of things cannot happen when you don't have access to the full model. So open source to me is just tremendously important, and I'm happy to see so many companies in that space: there's RedPajama, Run AI was acquired, Databricks is pivoting toward open data sets, and there are community collectives like LAION and the OpenChat group from that awesome podcaster whose name I forget. That stuff is super cool, and it comes out of more access, more access, more access. Access is so important, and we have this weird idea today that we've got to make sure only trusted people have access to this stuff. Screw that. That system never works.
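The adapter tinkering described above, tweaking a small set of added weights instead of the full model, is easy to sketch. What follows is an illustrative LoRA-style low-rank update in plain NumPy, a minimal sketch rather than the paper's training code or any library's API; the dimensions, scaling factor, and variable names are all assumptions made for the example:

```python
import numpy as np

# Illustrative LoRA-style adapter: the pretrained weight matrix W stays
# frozen, and a low-rank update B @ A is added on top of it. In real
# fine-tuning, only A and B would receive gradients. Sizes are arbitrary.
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 64, 64, 4, 8.0

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weights
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Frozen path plus the scaled low-rank path.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B at zero, the adapter starts as a no-op: the model is unchanged
# until training moves A and B away from initialization.
assert np.allclose(adapted_forward(x), W @ x)

# The adapter is a tiny fraction of a full fine-tune of W.
full_params = W.size
lora_params = A.size + B.size
print(f"{lora_params} adapter params vs {full_params} full fine-tune params")
```

This is also why sharing community adapters is so cheap: you ship only A and B (here 512 numbers instead of 4,096), and anyone with access to the open weights can apply them.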
We have three providers who are trusted with your credit card data; one of them lost the data of half the United States, and they're still a trusted provider. That's how gatekeeping works: you can't rip them out of the system. So gatekeeping to me is garbage. I hate it. This idea that only the trusted people can have access carries no water with me. An organization is made up of human beings who might be trustworthy at the time, and those people can change over time and make that formerly trusted institution totally untrustworthy. So I don't buy this at all. I think open source is critical: the more minds you have working on it, the better you get at alignment, the better you get at things that are beneficial for all. Yeah, it's going to do some bad things, but just because Photoshop can put a head on a naked body does not mean we need to restrict Photoshop. That's stupid. Linux is used for malware and hacking; it also runs every supercomputer, the cloud, and nuclear subs. So I don't buy this whole concept of, well, unless you can guarantee the kitchen knife will never stab anybody, you can't put it out. Wait a minute: 99.999% of people are going to cut vegetables, and we have laws to arrest criminals. I don't understand this concept. People have got to get out of this mindset that only the trusted few can have access to this stuff.

An impassioned defense of the open source AI ecosystem, and I think this marks a great segue to discuss AI risk in general: the doomer discourse we've been seeing a lot of in the past few months around potential AI risk and existential risk. We've seen a lot of high-profile individuals call for the slowdown of AI development; we've talked
about the existential risk AI poses to humanity. I think you have quite the opposite view here, from your impassioned defense of open source, but also in your writing, where you argue for speeding up work in AI as a means of reaching better AI safety. Maybe walk us through your line of thinking; we'd love to understand it.

Look, me and the doomer crowd just don't agree on this, right? I think this stuff is way too beneficial and too important, and every technology from the sundial to bicycles to children's teddy bears has been shouted down as the end of the world. It hasn't happened yet, and I don't buy that it's going to happen with this. I really do not buy the far doom. And look, I'm a sci-fi writer; I love the singularity and all that crazy stuff. I don't know if it's actually going to happen; it might just be a cool literary construct. But this whole concept, especially from Yudkowsky and that crowd... I can always tell when someone's a member of the cult of Yudkowsky, because they go, have you heard of the orthogonality thesis? You mean the idea that intelligence and niceness don't line up? How long did it take you to think of that, ten seconds? That's not a theory. That is a flat statement of something so obvious as to be pointless. And what I don't see there is any alignment research. When I see what they call research, it's not research; it's philosophy. I'm a writer about artificial intelligence; you don't get to call me a researcher. I've published no papers, no mathematical theories; I didn't invent reinforcement learning. That's what I call research. So when I look at the people at Anthropic, the company founded when people left OpenAI saying, you guys aren't thinking about safety and alignment enough, we're going to start our own thing: those are engineers working on a problem. I do not believe you can solve a problem that does not yet exist. The way problems get solved is that the problem starts to happen, and then you, as an engineer, look at it and solve it in real time. Think of the early days of refrigerators, when the gas would occasionally leak and blow up. You don't know that's going to happen ahead of time, and you can't solve it until you can look at it and go: we need to make the seal stronger, we need a different gas in there. Slowly, over time, you do that. We're starting to have real engineers look at the engineering.

To me, every technology exists on a sliding scale from good to evil. A lamp might be closer to the side of good: I can light my house with it, but I can still pick it up and hit you over the head with it. A gun might be closer to the side of evil: it kills people in wars and crimes and all kinds of horrible things, but I can still hunt, feed my family, defend myself. AI is right in the middle. It can do really terrible things, and all this superintelligent doom stuff distracts from the fact that it can be used for facial recognition against dissidents, or in lethal weapons technology, right now. In some cases you might even say there are examples that live in a gray area rather than black and white. Wars are going to happen no matter how much we hate them and don't want them to. Is it better to have a bomb that hits a building and blows up everybody in it when you're trying to find one target, or to have a little drone that zooms in, finds the target, and kills that one person? You could make the case that that's an advancement, or you could make the case that it's a horrific thing we should never allow. But all this doomer stuff detracts from these kinds of basic problems, and they're not solved by any of this philosophical nonsense; I don't think it carries any weight whatsoever. It's not research; it's a bunch of people talking. I'm going to go with the engineers, the people who are actually going to figure out how to make these things work.

I just watched a talk arguing that these things could develop in ways that are not like humans at all, except, apparently, in the desire to kill everyone. It evolves from us, yet supposedly has none of our emotions or ideas and shares none of our values, which is absurd. You're already seeing things like constitutional AI that adjust models toward our values, plus it's trained on human data and by human example, so naturally we push it in that direction. But somehow it retains none of those values except the desire to dominate all things and kill everything? There have even been papers claiming the dominant life form tends to eradicate other species. What are you even talking about? The wolves don't evolve to eat all the bunnies, or they'd be dead, and the bunnies don't proliferate so much that the wolves can't eat them. There are even examples of a crab and a nearly blind shrimp living in the same hole, where one defends the other and the other keeps the hole clean. You see these reciprocal relationships throughout nature.

So a lot of this is just weird speculation. In the past there were gatekeepers on opinion, and these fringe ideas basically didn't get amplified; now the exact opposite is true. The most apocalyptic, the most black-and-white, kill-or-be-killed, divisive, insane take is the one that gets amplified in the media as "this is the way it is." Now, that doesn't mean there are no highly intelligent people on that side. Geoffrey Hinton, whom I greatly respect, Yoshua Bengio, people like that, and I think they look at it in a more nuanced way. Hinton said recently something like: I used to think these models were worse than human brains, but maybe they're better in some ways, meaning they can download the ability to learn a new skill. I've seen that continual-learning possibility for a long time: you're a robot, and boom, you download the ability to do the dishes; boom, you download the ability to walk the dog. That's kind of cool. We can't trade ideas like that, so in some ways they are better, and we do have to be careful about how we use these things. But I worry far more about a dumb intelligence controlled by a sadistic, nasty human being than about an intelligent super-machine rising up, having its own desires, and escaping. Escaping to where? Another eight-way H100 cluster with 68 gigs of RAM and a vector database? People think of it as some liquid, flowing thing out in the ether that can just flow somewhere else. That's utterly ridiculous; it means you don't understand anything about infrastructure. It makes no sense whatsoever.

So when I look at these things, I see real potential problems, and then I see all this other nonsense that people take seriously. I can't take the paperclip maximizer seriously. I think that thought experiment from Bostrom is ridiculous, and I don't know why anybody would use it as an example, because that's not superintelligence; that's super-psychosis. If I'm a superintelligent robot, I'm not going to become so obsessed that I turn the whole universe into paperclips; I'm going to delegate that to a dumb subprocess that maximizes paperclip efficiency in the factory and be done with it. None of this makes sense to me; it's so much wasted time. And I don't know why it gets so much press, other than clicks. People like to be afraid. We are big, fear-based creatures; we have no art, no skyscrapers, no tools, no wars, no kings and queens, nothing, without fear. This is just the latest thing to be afraid of. In 10 or 15 years, or next month, they'll move on to something else, some other technology that's going to kill us all.

I love that you mentioned the fridge example, because in your writing you draw a lot of historical examples of how technology we now take for granted was once looked at as an existential risk, or as highly disruptive to the way we live. Would you mention some of these past examples where there was a lot of moral panic around a technology, and how that parallels the current AI-risk discourse?

Almost every technology has had a moral panic around
it. Teddy bears were going to destroy young women's desire to have babies and be nurturing mothers, because they were going to waste all their time on the teddy bear. Social media has supposedly been destroying us; it's every politician's favorite. I would argue there's a difference between a totalitarian system, where millions of people desperately try to control social media, and regular social media, which has some exploits but is mostly useful to us: we get all kinds of voices in there. I think Sacha Baron Cohen was saying, oh, if Hitler were around today he'd be running 30-second ads on the Final Solution. Well, guess what: Hitler and Stalin didn't actually need social media to whip up a ton of people into some of the most horrific genocides in history. So that doesn't track for me, just because you could use a technology that way.

Cold technologies, the example we used earlier, were incredibly disruptive to the jobs of the time, and potentially really dangerous: explosions, fires, those kinds of things in the early days. But cold has absolutely changed the way we live and the shape of civilization. We can live in environments we never could have lived in, and we have a steady food supply. Back in the '60s, people thought the population bomb was going to destroy us all, and then all of a sudden we had the Green Revolution. The fact that you can keep vegetables cold gives us a much steadier food supply in the event of a bad crop. This is amazing. We are the luckiest 1% of the 1% of people ever to be alive, and people don't understand that this is because of technology. Our child mortality rate is 4%; it used to be half. In the 1800s, if you had two children, you expected one of them to die. Our life expectancy has gone from 30 to 70 because of the medical technologies we have now: antibiotics, chlorination of the water. John Leal went on trial for putting chlorine in the water, while cholera was killing millions of people every year, because people said, you can't do this, it's going to destroy everything, it's destroying the water. Luckily he was acquitted, and chlorination went on to, I believe a Harvard study said, drop child mortality by 74%. 74%, okay?

Every single one of these technologies, when you look at them historically... refrigeration wiped out the ice industry, which was a huge business of people chopping ice out of frozen rivers and shipping it. I am sorry those folks don't have a gig anymore, but I do think having ubiquitous cold technology was useful. When we invented the electric light, some jobs went away, that's true, but new jobs were created by the possibilities those things opened up. I am sorry the whale hunters are gone, and that we no longer hunt giant leviathans and dig the white gunk out of their heads to light candles, but I don't know that anyone's clamoring for a return to whaling. And there were huge debates over the danger of electricity. If you read Empires of Light, the fears were sometimes perpetrated by the players themselves in the AC-versus-DC war: Edison had invented DC, AC could travel much, much further, and so Edison ran a smear campaign about how deadly AC was. They even got to the point of publicly electrocuting a dog to show how dangerous it was. It's crazy, and of course AC is now the ubiquitous technology that lights the entire world and makes possible things we never could have done.

So almost every single technology in history has been punctuated by some sort of moral panic. Think of the Luddites, that famous example. We still have rug makers who can make a custom, beautiful rug and charge tons of money for it, and we have machines that now make a rug from IKEA for everybody else, so what you get is a distribution. And the last argument people sometimes use is, well, this time is different. Guess what: that argument has been used every single time. Every time is "different." I don't think this one is. The example of the horses giving way to the car, where the horse population radically declined: I just don't see that happening to us. If there are negative uses of AI, there are other uses of AI to counter them. If AI speeds up malware creation, it's also going to speed up the ability to automatically quarantine your infected software and respond with a new update, written on the fly, to kill that thing off. When I look at these superintelligent, crazy-AI scenarios, I see a picture of AI evolving in a vacuum, like the old Jules Verne version of technology where one guy has the submarine. That's not how technology develops. The sci-fi writer in me will tell you it's not interesting if one person has a cell phone; it's interesting when everybody does, and we're going to have lots of AIs working with and against each other.

So look, all these technologies have had some moral panic around them, and in the long run I think technology is almost always beneficial. That doesn't mean there are never dark sides. Sometimes people hear me say this and go, oh, you just worship progress. Well, yeah, I do. I worship going from 50% child mortality to 4%. I worship medicine that actually works and gets us to 70 or 80 years old. I worship the Green Revolution and our not losing the two billion people The Population Bomb predicted we would in the 1960s. I think that's awesome. Does that mean progress never has downsides we should ever consider? No, that's ridiculous; of course everything does. Our best move is to iterate, come up with answers when problems happen, and mitigate the harm of these things as they develop. That's the beauty of technology and the beauty of progress.

I also agree it's a good thing we're not killing whales anymore, as you mentioned, and I definitely agree about all the technological advances we've seen, while also acknowledging the downsides of technology. One thing you mentioned is that a lot of the arguments we see from AI doomers today tend to be grounded in philosophical arguments about superintelligence, things that have not yet happened, and that tends to distract from current, actual problems that can potentially arise from the highly intelligent
systems we see today. Unpacking that a bit more, what are some of the real risks of AI tools that you're worried about today, and how do you think we should approach risk management when deploying AI models at scale?

I think they're open to all kinds of new security vulnerabilities, like prompt injection, which we're just starting to deal with, and logical flaws that could be exploited to make a system give up secrets, in a company setting or something like that. We mentioned a couple of these earlier. AI being used in lethal systems is something we should be really, really cautious about. It's going to happen anyway; even if we ban it, it will happen on black budgets, so it's something we have to be aware of. AI being used for population control and dissident-watching in totalitarian countries is absolutely horrific; those things are terrible. And these systems are still subject to basic mistakes of logic and reasoning at this point; they're just not perfect at that kind of thing.

I had a long argument with people recently. I was against the protests in San Francisco where people were putting cones on the self-driving cars to stop them. Look, these cars are responsible for zero deaths, while human drivers kill 1.3 million people a year and injure 50 million. And they say, well, a dog was killed. Okay, that sucks; I love my dog too, and I don't want anyone to lose theirs. But at the same time, how many dogs get hit by cars driven by humans? Statistically, they are safe. Again, there is some merit to the folks who said, hey, maybe we pulled the safety drivers out too fast, or maybe they're not quite safe enough. But I'm concerned that even if they were 10x safer, people would still be down on self-driving, whereas if you cut that 1.3 million statistic in half, or down to a quarter, that's a million people still walking around on the street, playing with their kids and their dogs. So I think we should push forward with those technologies, but with a higher level of accountability. I don't fully agree with the EU AI Act; I think it's overreaching, there's a lot of politics involved, and it creates a lot of bureaucracy: suddenly social media algorithms are classified as high-risk based on an amendment and some politicking, which I think is nonsense. But I do agree that if you're going to put these into heavy machines driving around the physical world that could kill things, there needs to be a higher level of accountability and understanding of what's happening: keeping the logs, version control, knowing where the data came from, and giving investigative tools to lawmakers and enforcers so they can understand these systems. If they're going to be controlling the settings on a nuclear reactor or whatever, this is territory we should enter very carefully, with high bars. So yes, there are real risks, and we are wasting our time talking about sci-fi-level nonsense when we could be focusing on what's really important.

Couldn't agree more. Then, as we're closing out our episode, I'd be remiss not to ask where you think the space is headed, and maybe get some predictions from you. Given the rapid pace of AI development, I think it's safe to say things will be quite different in 12 months. Where do you think we'll be in 12 months when it comes to AI, the proliferation of AI, and maybe the risks coming from AI?

Look, I think it depends on the kind of breakthroughs we see, or whether the technology remains static. In other words, if it's basically Transformers, and they learn once and are then stuck at that level of learning, and you can maybe augment them with external knowledge databases and so on, that's one level the technology gets us to, and it's very useful. Even at that level we'll see a proliferation of agents and a democratization of things like RPA. RPA has traditionally been this big, heavy, ugly lift of filling in forms and the like, and it kind of didn't really work. But now we have these LLMs that can go out, read text, understand that text, and do research. We built a little agent at the Infrastructure Alliance: we fed it a table of 2,000 companies, it went out and read all the websites, summarized them, and scored them on whether they'd be a good fit for joining the Alliance. It was 95% accurate. We reached out to 50 of them, 10 got back to us, and two joined. Now we're expanding it to 200. Those kinds of tools are going to be super ubiquitous. I think they're fantastic, and everybody should be out there working on agents. And you don't need a fully autonomous agent; I consider a semi-autonomous agent with a human in the loop, once you've done the task decomposition, to be incredibly important. These are going to be everywhere. What could really change the trajectory would be a couple of continual-learning-style breakthroughs: a post-Transformer breakthrough that lets the neural net continually learn on the fly from new information. That could be something like progress-and-compress, where you compress new skills and jam them into the old model, or something legitimately learning on the fly, like a liquid neural net.
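Stepping back to the agent example for a second, the triage loop just described can be sketched in a few lines: score each company's site text and keep only the plausible fits for a human in the loop to follow up on. This is a toy sketch, not the Alliance's actual tool; the fetch and LLM-summarization steps are stubbed out with a keyword heuristic, and every name and criterion here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    site_text: str  # stand-in for fetched website content

# Hypothetical fit criteria; a real agent would use an LLM scoring prompt.
FIT_KEYWORDS = {"infrastructure", "mlops", "gpu", "inference"}

def score_fit(text: str) -> float:
    # Stand-in for the LLM call: fraction of criteria the site mentions.
    words = set(text.lower().split())
    return len(FIT_KEYWORDS & words) / len(FIT_KEYWORDS)

def triage(companies, threshold=0.5):
    # Rank companies and keep only the ones worth human follow-up.
    scored = [(c.name, score_fit(c.site_text)) for c in companies]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

companies = [
    Company("AcmeML", "managed gpu inference and mlops infrastructure"),
    Company("PetShop", "we sell dog food and toys"),
]
print(triage(companies))  # → [('AcmeML', 1.0)]
```

The point of the design is the human in the loop: the agent does the tedious reading and ranking across thousands of sites, and a person only looks at the short list before any outreach happens.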
takes us into a whole different world, because now you have something that's always learning and able to adapt in real time, and to me that takes us to a bonkers level of awesome. But who knows whether that comes around the corner. In the short term I expect a huge proliferation of agents, a huge proliferation of automation of the very tedious, boring details of our day-to-day lives, and I think that's just going to be tremendous. At the same time, the infrastructure is going to develop to make this stuff ubiquitous, and you're going to see a ton of middleware to keep these things on the rails. What you can definitely count on is more and more crazy nonsense from people shouting about the end of the world. Don't listen. For the vast majority of use cases, these things are going to make drugs easier to discover, make transportation safer, and improve our day-to-day lives. Instead of sorting résumés, I can have the agent do that, just talk to the people I want to hire, and save two hours a week. When I think about the research assistant that saved us like two weeks of reading 2,000 websites just to go, oh God, more marketing copy, that's super cool. This is just an exciting time to be alive. If you're a programmer, don't listen to the doomsayers; get in there, we still need brilliant programmers. The hardest part is not writing code, it's coming up with how to think about a program and then break it down, so get into it even more, level up the skill, adapt to it. If you're an artist, don't be worried about this stuff, just embrace it as
part of your toolset. Artists are not going anywhere; we're always going to have artists creating real things. This idea that they're all going to disappear, I'm sorry, but it's just not true. Get in there and embrace it as another toolset. It's going to be like using Photoshop or a paintbrush, and it's going to be amazing across every industry. This is one of the greatest times to be alive, and you should just embrace it with relish and love. I think this is a great way to end today's discussion. Thank you so much, Dan, for coming on DataFramed; it was a really wonderful discussion. Thanks so much for having me, I really enjoyed it: just a wonderful host, really fantastic conversation.

A lot of people are worried the old companies, the old tech companies, are going to dominate again. This is one of those cycles where the new companies that become the Fortune 500s 20 years from now are spawned, and that's because they're going to be more agile, and they're going to look at AI, and how you communicate with things, much differently. So Dan Jeffries, it's great to have you on DataFramed. Thanks so much for having me, I really appreciate it. Likewise. So you are the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI. You also have a pretty awesome Substack that I recommend everyone read, called Future History. You're such a prolific thinker and writer about AI that there are so many directions I could take our conversation in, but what I want to deep dive with you on is exactly what you mean when you say AI will become the interface for a lot of the work that we do, and really for how we interact with software as we know it. So maybe, to deep dive a bit more, walk us through what you mean by AI being the interface for the world, or the world of work, and how you think that will play out in practice over the coming years. So I want
to give credit where credit is due: I stole the "AI will be the interface to the world" idea from the brilliant François Chollet, the author of Keras and a brilliant thinker in his own right, whom I had the privilege to meet one time early in my AI career. I just thought he was a brilliant fellow, and I loved that idea; it made perfect sense to me. The idea is that basically we're going to talk to this thing. It's kind of like Snoop Dogg up there riffing on it: man, I can understand this thing, I can talk to it, and it can talk back to me; am I in a movie right now or what? To me, that's exactly it; Snoop Dogg is brilliant too, and he kind of nailed it. Right now you have the big companies being very conservative with their chatbots. They've got to make sure you still go to that recipe site or whatever, but who the heck wants to go to that recipe site when it's become six pop-up ads and an ad every other paragraph? It's super annoying. As soon as a company comes along and says, we're going to make this interface where you chat and tell it what kind of food you're in the mood for and what your dietary restrictions are, and boom, here are three things you can eat, without you ever going there, that involves a new business model. In other words, the old business model will sort of be taken down
a little bit. It's not going to be totally destroyed, that's crazy, but a new business model that supports this, and we don't know what that will be, will evolve over time. As soon as that comes, it starts to displace the old way of thinking, and the incumbents have the innovator's dilemma: they get stuck, just like Kodak going, well, we've been working on this film for 100 years, this digital thing looks cool, but it kind of messes with the original business, so let's not go too far. And then somebody else who doesn't care about that comes along and replaces them. So I think it's about being able to naturally converse with things. Somebody recently said there won't be any programmers; I totally disagree. I agree with David that maybe there are going to be more programmers. I'm a crappy web designer: I can use Photoshop and some other tools, but I couldn't write the XML. Give me a drag-and-drop editor, though, and all of a sudden I can put together some pretty cool websites. I think we're going to have more people programming like that. We're going to have more people able to talk to their applications, and the application understands them and becomes kind of a friend, in a way. I think that's how most software is going to function. Whether it's always talking or typing, who knows, but we're going to be able to increasingly just describe what we need and get better and better output that we can iterate with and work on. That, to me, is exciting. Think about an artist, maybe a guitar player playing a song, iterating: okay, give me 30 continuations of that; it plays them; oh, number seven is cool, let me try that; okay, you know what, I just changed this bit, now give me 15 variations on that. That kind of
stuff, that kind of co-collaborative relationship with AI, is going to be a very exciting thing for everybody. I think that's really exciting, and the co-collaborative experience you're talking about rests in a lot of ways on great user experience and user interface design. You mentioned the neutron bomb of ChatGPT: one of the reasons ChatGPT was so widely used is not just that the model is very performant and the time to value, getting a high-quality output, is really low, but also the interface of the chat, the user experience, the iteration time, the feedback loop you get when you're speaking or chatting with it. It's pretty great, you get a lot of aha moments, and that's one of the reasons it took off so quickly. So, in your opinion, what constitutes the ideal interface and experience for an AI model as we interact with it? I don't know that anybody knows the answer to this question right now, because UI/UX, like any creative process, is iterative. I was just having this conversation, talking with someone about a programmer who was working on an idea of mine. My friend said, well, he's kind of developing it differently; I said, well, he's working on my idea; and she said, it's different, though, than what you originally created. I'm like, that's the creative process. When I start out writing a novel, it doesn't end up exactly the way I originally planned it. There's this co-creative thing that happens, and I think that's going to happen with the UI/UX as we go along. We're going to iterate, and we're going to say, wait a minute, this is a new way to do things, and maybe that's the best way. The example I always think about is my friend Chris Dixon, who I knew when I was
very young, and who is now a famous investor. He was a programmer at the time, and the stylus had just come out on these non-internet-connected pad thingies we had. Most people were designing video games with the click-and-type interfaces that were dominant at the time, and he made a little Space Invaders-style game where you had to circle the attacking aliens with the stylus. He said, look, you've got to utilize the new capabilities of the interface. I think that's the creative process; that's what happens. The more people play with these things, the more we're going to understand what the ideal interface will be over time, and when it happens you'll go, oh, well, of course. That's the thing about any invention: it always looks so obvious in retrospect, and that's how you'll know we've gotten there. But I don't know precisely what it's going to look like just yet. Yeah, and what's very exciting here is that we take a lot of the software and tools we use right now for granted. Apps are designed, we've kind of reached a consensus on what makes a great application on a phone in terms of user interface and user experience, but we had to learn that as the iPhone was released, as the App Store evolved, and as apps became more and more ubiquitous. We're going through that same process with AI right now. We're seeing more and more AI embedded in every software stack, it's becoming truly transformational, and we're seeing scary-good applications of AI in tools like Word and Excel. I think that has a lot of potential to change how we work in general. So how do you see that transformation happening? What do you think our relationship with work will look like once ambient AI
becomes more ubiquitous? I mean, the change will happen gradually and then all at once, right? Look at the diffusion-of-innovation curve, which famously came out in the 1960s and looked at how ideas and technology disseminate; it's been repurposed for every business presentation on the planet. But I think most people miss the point: you have the pioneers, the early adopters, the early majority, the late majority, and the laggards, and at each point in time you go from, well, that doesn't make any sense, to, that's kind of interesting, I started to use it, to, I'm using it every day, to, I can't imagine my life without it. That's the progression. And I've kind of changed my mind in some respects. As you were talking just now, I was thinking back to that scene in Blade Runner, which was totally science fiction, where he has the photo of the gal and he puts it in the machine and says, okay, pan 23 to 6, okay, go, no, go back, and the software is moving around and searching through, and he's like, okay, enhance, boom, boom, boom. There were two sci-fi things in there that were impossible. One was enhancing a photo, which is in every stupid crime drama of all time: oh, we took this low-resolution VHS footage and got HD out of it; we noticed there's a cat in the reflection of a car's little window; yeah, right. But now, with the generative models, there's a possibility they could fill in some of those gaps. And the other was that idea of him talking to it, giving it these specific commands: okay, do this, do this, okay, no,
wait, go back, which is a very human command. I think we'll be able to talk to it like that, have those kinds of capabilities available, gesture to it, move things around. Look at the interface in Her, by the way, where there's no controller; he's just walking along, talking to it, and the character and him are having an argument. I think that's how it starts. So I kind of backtracked from your question to the previous one, but that was my thinking: this stuff happens slowly and then it happens all at once. Sometimes it feels like the cycle has sped up a little these days, that we're so used to technology that there's an acceleration point, and so we adapt faster. I'm of the Gen X generation; I lived without all this super-technology, then it gradually came into my life, and now it's a total part of life, so I'm on this weird edge. Whereas I see a lot of younger folks who will pick up a new platform and abandon the old one overnight: well, we all switched over to this. They can learn it as if it were just always there, like a tree or anything else; they just know how to interact with it. So there's an even faster acceleration in how we adopt this stuff, which has compressed the time, and that's exciting. Yeah, indeed, and if you look at ChatGPT's adoption and how we talk and think about it, it's crazy to think it has only been available since November 2022, or something along those lines. It's been less than a year, but it's become so ubiquitous and so widely
used. I think this marks a great segue to discuss how you think the AI ecosystem will evolve in the next few years. We've talked about applications of AI in tools and the software stack, and in a lot of ways it's interesting because we're seeing technology incumbents move quickly yet conservatively, as you mentioned, to adopt AI in their products. There's a lot of competition right now among foundation models and AI infrastructure providers. I'd love to hear how you think about the different players in the industry today and how you see things playing out in the next few years. I see the research moving tremendously quickly on a lot of different fronts, primarily because you have traditional researchers empowered with models they can pick up and tune, versus having to have all the money to train one from scratch, where the failure rate can be really high. That's accelerating research, which is exciting, and I think the open source foundation models will pick that up; we're already seeing a lot of that cool research. You're also starting to see a ton of traditional developers and coders get into it. They bring a different perspective, which is exciting, and they're able to do things you might not see in traditional data science, like merging 26 models together, with the data scientists going, that's crazy, it's going to collapse the model, and then it doesn't, it makes a better model. These kinds of engineering tricks mark a different phase in any technological development: one person comes up with a way to extract nitrates from the air, like the nitrogen-fixation work in Germany, but then it's engineers who make it a scalable platform, something you can sell repeatedly and crank out in a continuous way. So I think we're
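The model-merging experiments mentioned here come in many flavors; the simplest one the community plays with is uniform weight averaging of checkpoints that share an architecture (sometimes called a "model soup"). A hypothetical sketch, with plain dicts of numpy arrays standing in for real state dicts:

```python
# Simplest form of model merging: uniform weight averaging of checkpoints
# that share an architecture. Checkpoints are shown as plain dicts of
# numpy arrays standing in for real framework state dicts.
import numpy as np

def merge_checkpoints(checkpoints):
    """Average parameter tensors across checkpoints, key by key."""
    keys = checkpoints[0].keys()
    assert all(ckpt.keys() == keys for ckpt in checkpoints)
    return {k: np.mean([ckpt[k] for ckpt in checkpoints], axis=0) for k in keys}

a = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
b = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
merged = merge_checkpoints([a, b])
```

Whether the averaged model is actually better is an empirical question, which is exactly the surprise he describes: naive intuition says it should collapse, and in practice it often doesn't.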
seeing that same thing now, developers and engineers learning about this stuff and bringing their own perspective, which is super exciting. I see the foundation models as being a really capital-intensive business: costly in terms of people, time, and compute. There has to be a breakthrough in making them smaller, or maybe even in learning by example. I always follow the DARPA stuff because they're always ten years ahead of the curve, and they're funding things like: how does AI learn in novel situations and adapt to a completely new situation? They describe it as, you learn chess, but then the rules of chess completely change underneath you; how do you deal with it? Today's AI can't do that. And I was watching a liquid neural network, based on the C. elegans brain, that could be thrown into a novel situation in a drone and find its way, whereas a Transformer was like, this is a forest, I was trained on the city, I don't know what to do. So I think we're going to see new breakthrough research developments alongside refinement of the old stuff. What I'm really seeing a lack of is infrastructure: as fast as the research and all these ideas are coming, the infrastructure for AI is really, really primitive. I spent a lot of my life in infrastructure; I had an IT consulting company, I was at Red Hat for a decade, and I've seen the moves from bare metal to virtualization to containers, all the monitoring and management and security tools. We have none of that in AI yet. I even looked the other day, like, cool, we want to try out a bunch of the open source models, and it was: spin up a single instance on an A100 or a two-way A100,
charged by the hour at $2.50 an hour, $38,000 a year, to run a single instance. I'm going, this is crazy; why hasn't anybody spun up a bunch of these models, parallelized access to them, and charged on a per-token basis? We're not even there yet. And there are all kinds of other things I see missing. I think we need a Red Hat of AI. In fact, I was thinking strongly about starting this as a business; I just don't want to get up and do it every day, so I'm not going to. I'm giving this away freely, so listen closely. We had all this stuff in the traditional open source code stack where you could rapidly fix bugs and add skills or upgrades. I think AI is going to need a similar thing: bug fixes and skill-pack upgrades. Meaning, okay, we added medical knowledge to this Transformer, and now the model is advising people to commit suicide. Great, how do we fine-tune the heck out of that rapidly? Or example it out: take the original model, ask it the same question about why you should never commit suicide, pair that answer with the original question, generate a hundred versions of each question and answer, surface 2% to a person; okay, take that one out, take that out, take that out; great, fine-tune rapidly and output a bug fix. That has to be an order of magnitude faster than traditional code, because these things are so open-ended. In the past you'd say, well, this is an SSL library, it can only fail in these ways; these things can fail in a lot of ways. So we need not just people running inference on these models, but people supporting them, and a community building around that rapid fine-tuning and rapid iteration, so that as a user or an enterprise adopter, I'm able to get that model with 200 LoRAs or a thousand LoRAs. I
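The rapid "bug fix" loop he sketches (generate many paired Q&A variants, surface a small sample to a human, feed the rest into fine-tuning) can be outlined in code. This is a hypothetical sketch with the generation step stubbed out; a real pipeline would call a model for the paraphrases and a fine-tuning API at the end:

```python
# Hypothetical sketch of a rapid "bug fix" fine-tuning loop: generate many
# variants of a known-good Q&A pair, surface ~2% for human review, and
# emit the rest as fine-tuning data.
import random

def make_variants(question, answer, n=100):
    """Stub: a real pipeline would ask a model to paraphrase the pair."""
    return [(f"{question} (variant {i})", answer) for i in range(n)]

def build_bugfix_dataset(question, answer, n=100, review_fraction=0.02, seed=0):
    variants = make_variants(question, answer, n)
    rng = random.Random(seed)
    k = max(1, int(len(variants) * review_fraction))
    for_review = rng.sample(variants, k)   # surface a small sample to a person
    dataset = [v for v in variants if v not in for_review]
    return dataset, for_review             # dataset feeds the fine-tuning job

data, review = build_bugfix_dataset("Why should you never do X?", "Because it is harmful.", n=100)
```

The human step stays in the loop only for the sampled fraction, which is what makes the turnaround an order of magnitude faster than reviewing every example.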
don't even know the upper limit of how many LoRAs you can add without degrading it, or adapters in general; I say LoRA because that was the first adapter I ever came across, but I really mean adapters. I don't know whether adapters are the answer, whether they become hot-swappable, whether it becomes a mixture of experts, or whether it's fine-tuning, but I know we have a long way to go in terms of the infrastructure and support. And then there's the middleware. I talk to a lot of companies right now building middleware: how do you parallelize or cache these kinds of things, how do you deal with prompt injections? I see that almost becoming like a new antivirus business, with heuristics and neural nets essentially saying, okay, this is a prompt injection, stop it, on both sides of the pipeline. I see all this middleware, support, monitoring, and management, and none of it exists yet. To some degree we're really starting from scratch, because this stuff is non-deterministic, so it's not enough to just take your monitoring software and dump it on an LLM. You're going to need a new kind of monitoring that can detect logic flaws: I asked it to go get a present for my sister, and it's off buying, I don't know, a baseball bat, or it's talking to someone else, or it's just outputting garbage text. So we're going to need all kinds of new middleware, monitoring, and management infrastructure. It's going to be a whole new industry, and it's going to take a bit of time, just like it took a bit of time for us to figure out how to scale web-scale applications. In the beginning it was, well, throw a single database at it, and that's not good enough, so all of a sudden you get sharded
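A toy version of the "antivirus for prompts" middleware he imagines is a heuristic pre-filter sitting in front of the model. The phrase list below is illustrative, not exhaustive; real products would pair patterns like these with a trained classifier on both the input and output sides:

```python
# Toy heuristic prompt-injection filter, the "antivirus" style middleware
# described above. The pattern list is illustrative only; a real system
# would combine it with a trained classifier.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now in developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def guarded_call(user_input, model_fn):
    """Refuse obviously hostile inputs before they ever reach the model."""
    if looks_like_injection(user_input):
        return "[blocked: possible prompt injection]"
    return model_fn(user_input)
```

The same check can be run on model outputs before they reach a downstream tool, which is the "both sides of the pipeline" idea.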
databases, distributed applications, and load balancers. It's going to be the same kind of progression for dealing with these non-deterministic systems, and it's going to take some time; I don't know how fast it comes together. There's definitely a lot to unpack here, and given all the different challenges and limitations we currently have in the AI stack, what do you think is the most pressing aspect of the AI infrastructure stack that needs to be fixed within the next 12 months to be able to scale the adoption of AI and large language models in general? I think you need the basic middleware in place to deal with these models, and you need an upgrade process that's much more mature. When I see stuff like OpenAI saying, well, we deprecated the old model, you've got two weeks, good luck, that is totally unacceptable; it's not going to work. You're going to have to have longer-lived models, because if you just swap the model out from under an application, it can rapidly degrade it. Developers need the chance to say, great, here are the last six versions of GPT or Claude or whatever. Yes, if one of them needs a security upgrade, that has to happen, but in general you can't just say, oh, we swapped out the backbone, and now your application that used to summarize text perfectly is falling off by 75% overnight, good luck. That's absolutely not going to work. That basic-level stuff is totally missing and has got to be fixed in the next six to 12 months for this to become viable. And what you're mentioning here, specifically with that OpenAI example, is one of the limitations of closed-source API providers, and I think this segues into my next question
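The version-pinning discipline he's asking for is something API users can partly enforce on their own side today: request a dated model snapshot rather than a floating alias, and fail loudly if a config drifts. A minimal sketch; the model names are illustrative placeholders, not a statement about any provider's actual catalog:

```python
# Minimal guard against silent model upgrades: only allow explicitly
# pinned, dated snapshots in application config, never floating aliases.
# Model names below are illustrative placeholders.

PINNED_ALLOWLIST = {"gpt-x-2023-06-13", "claude-x-2023-07-11"}
FLOATING_ALIASES = {"gpt-x-latest", "claude-x"}

def resolve_model(name: str) -> str:
    """Return the model name if pinned; refuse floating aliases outright."""
    if name in FLOATING_ALIASES:
        raise ValueError(f"unpinned alias {name!r}: pin a dated snapshot")
    if name not in PINNED_ALLOWLIST:
        raise ValueError(f"unknown model {name!r}: add it to the allowlist")
    return name
```

This doesn't stop a provider from retiring a snapshot, but it does turn a silent behavior change into a loud deployment-time error.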
pretty well, which is about the trade-offs between open source and closed-source models. There's a lot of discussion in the industry right now about, hey, should we work with an OpenAI, should we fine-tune, should we build a large language model from scratch? How do you imagine these trade-offs will evolve over the next few months, and where do you advise companies looking to leverage large language models to start? Well, look, I think OpenAI is not going to be the only game in town anywhere. You're going to have Pi, you're going to have Claude; we already have Claude 2, and my programmer was already telling me it looks much better at coding. You've got the open source models: Llama is probably going to come out with a commercially viable version, you've got Gorilla, you've got these other open source models. So there's going to be a proliferation of viable models, and I think that's tremendously exciting. I don't want to see one group dominate all of this. We'll talk more about open source later, but to me open source is tremendously important because it lets in the developers and researchers who aren't the superstar researchers everyone's competing for, the ones making $2 million or $20 million. Not everybody who is super smart or has a great idea is already at that level in their career. There are a ton of regular folks out there, everyday researchers, who might have a breakthrough because they have access to the weights and can now try out an idea that maybe was too far out and wasn't going to get funding at a traditional, very expensive foundation model company. And now they
have the opportunity to do that. There's a perfect example in the Stable Diffusion community, and we'll talk about that more later too. One thing I thought was really interesting: the LoRA paper was written for large language models, and the community adopted it for diffusion models, to the point that I saw a Reddit thread in the Stable Diffusion community where the author of the LoRA paper was there going, hey, I never thought to do this; I want to do a new version that takes into account a larger stack of models, and I want to talk to the community to understand what works, what sucks, what's better. That kind of fast feedback from the open source community is super important, and it's why open source eventually ends up eating everyone's lunch in the long term. So do you think that open source ultimately, in the long run, will eat the lunch of the large language model providers of today? I think so. There are a couple of ways this future can play out; I like to do a kind of Monte Carlo analysis of the future with these hard branches. There's a lot of weird lawsuit activity happening right now where people are trying to redefine copyright. I didn't fully expect that; I foresaw challenges and protests, but not these kinds of legal pushes. I've been an artist for a long time and I have no problem with artificial intelligence, but a number of artists and copyright holders are suddenly up in arms about it. Depending on how that plays out, my general sense is that the artificial intelligence industry is way too important to the world economy of the future for the rulings to fall on the side of not allowing public scraping. I think it just falls
on that side naturally over the long term, but it's going to play out, and it's going to affect the trajectory of whether open source is held back for a decade, or whether AI is held back for a decade. It could change how the research works and force us down the path of a liquid-neural-net, train-by-example kind of thing, in the same way I can take a kid out back and throw a ball to him, and after a couple of weeks he knows how to throw the ball; maybe he's not going to the major leagues, but he knows how to throw a ball. So that could change the trajectory, and I'm watching it closely. The other thing is that right now it's really, really expensive, and I would say in the short term the proprietary players have a big advantage: they can hire the best people, they can buy the supercomputer, they can get all the data, they can be quiet about it, and they can license a bunch of data. So they have a big advantage over the open source providers. My general sense, though, is that over a long enough timeline open source generally wins out. Open source is this weird, ugly, gnarly kind of thing, and I love it. It's messy. Look at the early days of Linux, which I spent a lot of time with: you'd go, how the hell is this ever going to go anywhere? It's crap; I've got to compile my own kernel; it barely works. The old graybeards at the time were like, this will never displace Unix, you fools, you young whippersnappers, get out of here with this crap, you need an enterprise support contract. But over time the swarm of open source intelligence starts to compound: you get tens of thousands, then millions, of developers working on this, and it just becomes harder and harder for any proprietary company to keep up. You would never have been
able to take all of Microsoft's, Oracle's, Adobe's, everyone else's money at the time, pull it together, and build the Linux kernel. You couldn't. So over time that openness, I think, ends up eating the world. If you look at Linux today, it runs everything: it runs supercomputers, it runs the entire cloud, even Microsoft, which, if it had succeeded in crippling Linux in the early days, would have shortsightedly crippled its own business today, a business that essentially runs on Linux. So I think open source on a long enough timeline wins; I don't know whether that's 5, 10, 20, or 30 years. The proprietary companies have a big advantage right now, and that's typical in an ecosystem too, but long term, open source has a massive chance to disrupt. We may even see a third timeline where they coexist peacefully, where huge parts of the stack are open source and some parts are very proprietary intelligences that are incredibly useful and hard to replicate. All of those are possibilities as well; it's one of those things that's a bit up in the air at the moment. Okay, that's really fascinating insight. Maybe touching upon the community aspect here: your previous company, Stability AI, I think is a great example of an open source AI startup that has put itself on the map as an AI leader. You mentioned the community aspect of LoRA, for example, going from large language models to diffusion models. Maybe walk us through the community aspect, why it's so important for the progress of AI, how you saw it play out at Stability in a bit more detail, and why it's so crucial for the future success of AI products. Again, I think it's because you get minds involved in the project who have not passed the gatekeepers of the current state of the art. What's nice
about it: if you think about, for instance, when the Kindle came out and allowed direct publishing, there was at that time, I remember, as a writer writing novels with my group, a debate about whether you're a real writer if you publish directly or whether you publish with one of the big six gatekeepers, right? And, you know, that argument looks ridiculously stupid now. The Kindle allowed you to keep 70% of your profits, and there are hybrid publishers now; at the time the gatekeepers were taking 90% of the profit and giving you 10. Okay, that's totally insane. Nowadays it's been much more democratized because of the openness of it. Yeah, you got more, too, right, because you opened the floodgates. And the gatekeepers did do a good job of, like, you know, being able to say, wait, I think this is great, but they still missed things. That's the thing about a gatekeeper: it is a limited choke point. And I think that's the same thing with open source. Why the community becomes so important is that people who might not traditionally be a part of machine learning or whatever get to contribute their ideas. And so I saw a lot of ideas in the community: like, again, I mentioned earlier, they would mash up, like, 20, 30 models and get a better model, and a lot of researchers were like, that's crazy, it's not going to work, and then it did. And then you see that kind of idea filter back into traditional machine learning, where it's like, you know, they took PaLM and they jammed a vision transformer together with it, and all of a sudden the robot could find its way around unfamiliar environments, and they didn't do anything else, they didn't retrain it, right? That was awesome. So I think that kind of feedback loop happens. The LoRA paper, which I mentioned earlier, and the kind of feedback there: I think somebody did an analysis, when I was still at Stability, of how fast the community was integrating
ideas from research papers, and it was something like 18 days from paper to implementing the code and integrating it into the thing. And if it was, like, a proprietary piece of software or a new idea that was ready to go, like a plugin, they were integrating it in, like, a day, a day and a half, right? And so, you know, you look at something awesome like Automatic1111 or ComfyUI, which are two totally different interfaces for how you interact with these things: one is that kind of, you know, flowing node concept where you link together the different pieces and swap them out, in Comfy, and Automatic is like the kitchen-sink approach, right, where you just throw everything in there. But, like, it's this kind of rapid iteration of trying ideas and new concepts, and bringing in people who can't pass through the gatekeeping but have an idea outside of the box that contributes to those things. Now, that's interesting: I think I saw Andrej Karpathy speaking at the agents conference, and he was like, every time we see a paper inside OpenAI about some new technique or whatever, there's somebody inside who's like, oh, well, we tried that two years ago, here's why it doesn't scale or didn't work, blah blah. He's like, but every time we see an agent paper, we're reading it like, I don't know, like George R.R. Martin just finally put out The Winds of Winter, right? And he's like, because it's all new to us; it's totally new stuff that people are doing with these kinds of agents. And that comes from outsiders having that stuff, and then it gets even more important when it's open and you can have the weights. Now there are, like, dozens of adapters, right, that are way more efficient, as people are like, well, maybe if we just tweak these weights, or just this layer, or this thing, or we insert it here, make it smaller, you know, these kinds of
things cannot happen when you don't have access to the full model. So open source, to me, is just tremendously, tremendously important, and I'm happy to see so many companies there: there's, you know, RedPajama, and I guess there was, you know, Run AI that was acquired, and there's Databricks kind of pivoting towards that and doing some open datasets; community collectives, there's LAION, and there's the OpenChat, you know, group from that awesome podcaster, I forget his name. That stuff is super cool, and it comes out of just more access, more access, more access; it's so important. And we have this weird idea today of, like, well, we've got to make sure that only these trusted people have access to stuff. Screw that, you know? That system never works, right? We have three providers who, you know, are trusted with your credit card data; one of them lost all the data for half of the United States, and they're still a trusted provider. That's how gatekeeping works: you can't rip them out of the system. And so gatekeeping, to me, is garbage. I hate it. It carries no water with me, this idea that only the trusted people can have this, you know. No: an organization is made up of human beings who might be trustworthy at the time, and those people can change over time and make that formerly trusted institution now totally untrustworthy. Okay, so I don't buy this crap at all. I think open source is critical. The more minds you have working on it, the better you get to alignment, the better you get to things that are beneficial for all. Yeah, it's going to do some bad things, but just because Photoshop can put a head on a naked body does not mean we need to restrict Photoshop. It's stupid, right? It's like, Linux is used for malware and hacking; it also runs every supercomputer, and the cloud, and nuclear subs. So I don't buy this whole concept of, well, unless you can guarantee the kitchen knife
will never stab anybody, you can't put it out. And I'm like, wait a minute: like, 99.999% of people are going to cut vegetables, and we have laws to arrest criminals. I don't understand this concept. So, open source: people have got to get out of this mindset that, you know, only the trusted people can have access to this stuff. An impassioned defense of the open source AI ecosystem here, and I think this marks a great segue to discuss, you know, AI risk in general, the doomer discourse we've been seeing a lot in the past few months when it comes to potential AI risk and existential risk, right? We've seen a lot of high-profile individuals call for the slowdown of AI development; we've talked about the existential risk AI poses to humanity. I think you have quite an opposite view here, you know, from your impassioned defense of open source, but also in your writing you argue for speeding up work in AI as a means to reaching better AI safety. Maybe walk us through your line of thinking here; I'd love to understand. Look, me and Andreessen, you know, agree on this, right? Like, I think this stuff is just way too beneficial and too important, and every technology, from the sundial to, like, bicycles to children's teddy bears, has been shouted down as the end of the world. You know, it hasn't happened yet. I don't buy that it's going to happen with this. I really, really do not buy the far doom. And look, I'm a sci-fi writer; I love, you know, the singularity and all this kind of crazy stuff. I don't know if it's actually going to happen; it might just be a cool literary construct. But this whole concept, you know, especially from the Yudkowsky crowd and all this kind of stuff: I can always tell when someone's kind of a member of the cult of Yudkowsky when they're like, have you heard of the orthogonality thesis? I'm like, you mean the theory that, like,
intelligence and niceness don't line up? How long did it take you to think of that, 10 seconds? Like, that's not a theory, okay? That is nothing. That is a flat statement of something so obvious as to be pointless, right? And what I don't see is any alignment research. When I see that called research: that is not research, that's philosophy, okay? I'm a writer about artificial intelligence; you don't get to call me a researcher. I've published no papers, no mathematical theories; I didn't invent reinforcement learning, okay? That's what I call research. So when I see, like, you know, the president of Anthropic looking at this: that's the company where the people left OpenAI saying, you guys aren't, you know, thinking about safety and alignment enough, we're going to go start our own thing, right? Those are engineers working on a problem. I do not believe that you can solve a problem that does not exist yet, in the future. The way that problems are solved is: the problem starts to happen, and then you, as an engineer, look at it and solve it in real time. Like the early days of refrigerators, where the gas would occasionally leak and blow up: you don't know that's going to happen ahead of time, and you can't solve it until you're able to look at it and go, well, we need to make the seals stronger, we need a different gas in there, we need to do these things, and slowly, over time, you do that. We're starting to have real engineers look at the engineering, right? And to me, look, every technology exists on a sliding scale, okay, from good to evil. A lamp might be closer to the side of good, but I can light my house with it, and I can still pick it up and hit you over the head with it. A gun might be closer to the side of evil, right? It kills people in wars and all these kinds of horrible things and crimes, but I can still hunt, feed my
family, defend myself, you know, these kinds of things, right? So AI is right in the middle. It can do really terrible things, and all this superintelligent doom stuff, you know, detracts from the fact that it can be used for, like, facial recognition against dissidents, or that it could be used, you know, in lethal weapons technology, right now, right? In some cases you might even be able to say that's an example of something that exists in a gray area, not black and white: wars are going to happen no matter how much we hate them and don't want them to happen, so is it better to have a bomb that hits a building and blows up everybody in there when you're trying to find one target, or is it better to have a little drone that zooms in, finds the thing, and, you know, kills that one person? Again, you could make the case that that might even be an advancement, or you could make the case that this is just a horrific thing that we should never allow. But all this kind of doomer stuff really detracts from these kinds of basic problems, and they're not solved by any of this philosophical nonsense. I don't think that it has any weight whatsoever. It's not research; it is a bunch of people talking about stuff. I am going to go with the engineers; I'm going to go with the people who are going to actually figure out how to make these things work. And, you know, I just watched a TED talk, and you'd ask, like, well, you know, these things could develop in ways we don't expect; it's not like humans at all, except in the desire to freaking kill everyone, right? Like, what? So it evolves from us, but supposedly it has none of our capabilities, you know, has none of our emotions or ideas, shares none of our values, which is absurd. You're already seeing things like constitutional AI and things that kind of adjust it to our values; plus it's trained on human things and trained
by human examples, so naturally we're going to push it in that direction. But supposedly it doesn't have any of those values, except the desire to dominate all things and kill everything, right? There have even been papers about that kind of stuff, where they're like, oh, the dominant life form will eradicate other species. I'm like, what are you even talking about? Like, the wolves don't evolve to eat all the bunnies, or they'd be dead, right? The bunnies don't proliferate so much, right, that the wolves can't eat them. I don't see this. There are even examples of, like, you know, a crab and a blind sort of, you know, shrimp or whatever, working together in the same hole: one of them defends the other, the other one keeps it clean. You see all these kinds of reciprocal relationships in nature. So a lot of this is just based on weird speculation. And in the past you had people who were sort of gatekeepers on opinion, so basically you didn't get these kinds of fringe ideas, and now the exact opposite is true: the most apocalyptic, the most kind of black-and-white thinking, the more black and white you can make it, us versus them, kill or be killed, or whatever, the more divisive you can be, the more insane you can be, the more you're going to get amplified in the media saying, this is the way it is now. It doesn't mean that there are no highly intelligent people; Geoffrey Hinton, great respect, right, you know, Yoshua Bengio and such. These people, I think, fortunately, look at it in a more nuanced way, right? Like, I think Geoffrey Hinton said recently, you know, I used to think these models were worse than human brains, but maybe they're better in some ways, meaning, like, they can download the ability to learn a new skill or whatever. And I've seen that as a continual learning possibility for a long time, right? You're a robot, and it's like, boom, download the ability to do the
dishes; boom, download the ability to walk the dog, right? That's kind of cool. We can't trade ideas like that. And so in some ways they are better, and we do have to be careful. We have to be careful about how we use these things. But I worry more about a dumb intelligence, okay, controlled by a sadistic, nasty human being, than I worry about, you know, an intelligent supermachine rising up, having its own desires, and escaping. Where is it going to escape to? Another eight-way H100 cluster with, you know, 640 gigs of RAM, a vector database, and a backend? Like, where? People think of it as, like, amorphous, a liquid, flowing thing out in the ether that can just flow somewhere else. This is utterly ridiculous. It means you don't understand anything about infrastructure. It makes no sense whatsoever, right? So when I look at these things, I see there are real potential problems, and then there's all this other freaking nonsense that people take seriously. I can't take the paperclip maximizer seriously. I can't take it seriously. I think that post from Bostrom is ridiculous. I don't know why anybody would use that as an example, because, you know, that's not superintelligence, that's super-psychotic. And if I'm a superintelligent robot, I'm not even going to go, you know what, I'm going to become so obsessed that I'm going to turn the whole universe into paperclips; I'm just going to delegate that to a dumb subprocess that maximizes paperclip efficiency in the factory and then be done with it, right? So none of this makes sense to me; it's so much wasted time. And I don't know why they get so much press, other than, like, it's just click, click, click. People like to be afraid. They do. We are big fear-based creatures: we have no art, no skyscrapers, no tools, no war, no kings and queens, nothing, without fear. And so this is just the latest thing to be afraid of. Don't worry about it; in 10 or 15 years
they're going to move on to something else to be afraid of, or next month, right, they're going to move on to some other technology that's going to kill us all. So, a lot to unpack there as well. And I love that you mentioned the fridge example, because in your writing you do draw a lot of historical examples of how technology that we now take for granted was looked at as a potentially existential risk, or as highly disruptive to the way we live, right? Maybe, would you want to mention some of these examples that we've had in the past, where there was quite a lot of moral panic around these technologies, and how does that parallel with the current-day AI risk discourse? Almost every technology has had a moral panic around it. So, like, teddy bears were going to destroy young women's desires to have babies and to be nurturing mothers, because they were going to waste all their time on the teddy bear. Social media has been, you know, destroying us, and it's every politician's favorite thing to attack for its ability to be destructive. I would argue that in a totalitarian system, right, like, we see millions of people trying to desperately control social media, versus regular social media, which has some exploits, right, but social media is perfectly useful to us: we get all kinds of voices in there. And people go, well, you know, I think Sacha Baron Cohen was just on there saying, oh, you know, if Hitler was around today he'd be running 30-second ads on the Final Solution. I'm like, yeah, well, guess what: Hitler and Stalin didn't actually need social media to whip up a ton of people into one of the most horrific genocides in history. So that doesn't track for me, okay, just because you could use a technology that way. Cold technologies are a great example that we used earlier of something that was incredibly disruptive to the jobs at the time, right, and that was potentially really dangerous, right, in other words, explosions, fires,
these kinds of things in the early days. Cold has absolutely changed the way that we live in civilization: you know, we can live in environments we never could have lived in; we have a steady food supply. Back in the 60s they thought, you know, the population bomb was going to destroy us all, and all of a sudden we have the Green Revolution, right? But the fact that you can keep vegetables and meat cold allows us to have a much steadier food supply in the event that you have a bad crop. This is amazing; this is incredible. We are the luckiest 1% of the 1% of people ever to be alive today, and people don't understand that this is because of technology. Our child mortality rate is 4%; it used to be half. In the 1800s, if you had two children, you expected one of them to die, right? You know, our life expectancy has gone from 30 to 70, right, because of these medical technologies and the things that we have now: antibiotics, chlorination in the water. John Leal went on trial for putting chlorine in the water, right? Meanwhile, cholera was killing millions of people every year, and he went on trial because we were like, you can't do this, it's going to destroy everything, it's destroying all the water, right? And of course, luckily, he was acquitted, and chlorination, there was a Harvard study that said it dropped child mortality by 74%. 74%, okay? So, every single one of these technologies, when you look at them historically: refrigeration wiped out the ice industry, which was a huge business of people chopping ice out of rivers and frozen places and shipping it around. And I am sorry that those folks don't have a gig anymore, but I do think that having ubiquitous cold technology was useful, right? And, like, when we invented the electric light, some jobs did go away, that's true, right? But new jobs
are created by the possibility of these things. And I am sorry that the whale hunters are gone, you know, and that we don't have to hunt giant leviathans, kill them, and dig the white gunk out of their heads to light candles, but I don't know that anyone's clamoring for a return to whaling, okay? And there were huge debates over the danger of electricity, actually, if you read Empires of Light, sometimes perpetrated by the inventors themselves, in AC versus DC, you know, where Edison had invented DC, and AC was able to travel much, much further, so he tried this smear campaign about how deadly AC was, and they even got to the point where they were, you know, electrocuting a dog or whatever in public to show how dangerous it was. It's crazy, right? And of course AC, you know, is this ubiquitous technology that lights the entire world and makes it possible for us to do things we never would have been able to do. So almost every single technology in the history of man has been punctuated by some sort of, you know, moral panic. Think of the Luddites, right, that famous example. You know, we still have rug makers who are able to make a custom, beautiful rug and charge tons of money for that thing, and we have machines that now make a rug, you know, from Ikea, for everybody else, and so what you get is this distribution of things. And then the last argument that people sometimes use is, well, this time is different. Guess what: that argument has been used every single time. Every time is different. I don't think that it's different. The example of, like, maybe this time we're the horses, you know, compared to the car, and the horse population radically declines: I just don't see this happening. I see it as, like, I see that if, you know, if
there are negative uses of AI, there are other uses of AI to counter them. You know, if AI speeds up malware creation, then it's also going to speed up the ability to, like, automatically quarantine, you know, your web software when it's infected, and respond with a new update, written on the fly, to kill that thing off, right? So when I look at these kinds of things, these superintelligent, crazy AI scenarios, I go, that's sort of the idea that AI is going to evolve in a vacuum, like the old Jules Verne version of technology where one guy has the submarine, right? When I look at it, that's not how technology develops. And it's not interesting, and you see the sci-fi writer in me here: it's not interesting if one person has a cell phone; it's interesting when everybody does. And we're going to have lots of AIs kind of working, one countering the other. So, you know, look, all these technologies have had some moral panic, and in the long run I think technology is almost always, almost always, beneficial. That doesn't mean there are never any dark sides to technology. Sometimes people hear me say this and go, oh, you know, you just worship progress or whatever. Well, yeah, I do. I worship, like, you know, going from 50% mortality of children to 4%. I worship medicine that actually works and gets us to 70 or 80 years old. I worship, you know, the Green Revolution and our ability to not have to kill two billion people, like The Population Bomb was recommending in the 1960s. You know, yeah, I think that's awesome. And so if that means I think progress is awesome, I think that it's true. Does that mean it never has any downsides that we should consider? No, that's ridiculous; of course everything does. And our best approach is to iterate, move, mitigate, come up with those answers when they happen, mitigate those kinds of things, right, to
mitigate the harm of these types of things as they develop. And I think that's the beauty of technology and the beauty of progress. I also agree that it's a good thing that we're not killing whales anymore, right, as you mentioned, and I definitely agree with you that all of the technological advances that we've seen are a good thing, while also acknowledging the downsides of technology. One thing that you mentioned is, you know, a lot of the arguments that we see from AI doomers today tend to be grounded in philosophical arguments around superintelligence, things that have not yet happened, and they tend to distract from current, actual problems that can potentially arise from the highly intelligent systems we see today. Maybe unpack that a bit more: what are some of the real risks of AI tools that you're worried about today, right, and how do you think we should be approaching risk management when deploying AI models at scale? I think they're open to all kinds of new security vulnerabilities, like, you know, prompt injection, which we're just starting to deal with; logical flaws that could be exploited to give up secrets, you know, in terms of, like, a company or something like that. I think we mentioned a couple of these earlier: I think AI being used in lethal systems is something we should be really, really, really cautious about. It's going to happen anyway; even if we ban it, it's going to happen with black budgets, and it's something we have to be aware of. I think AI being used for population control and dissident watching in totalitarian countries is absolutely horrific; I think those types of things are terrible. You know, I think it's subject to sort of basic mistakes of logic and reasoning at this point; it's just not perfect at that kind of stuff. You know, I had a long argument with people recently: I was, like, sort of against the
protests in San Francisco where they were putting cones on the cars to stop them. I'm like, look, these are responsible for zero deaths, and people kill 1.3 million people a year with cars, and 50 million people are injured. And they're like, well, you know, a dog was killed, and I'm like, okay, well, that sucks. I love my dog too, and I don't want anyone to lose their dog, but at the same time, how many dogs were hit by cars driven by humans? So statistically they are safe. But again, you know, there is some merit to some of the folks who said, hey, maybe we pulled the safety drivers out too fast, or maybe they're not quite safe enough. I'm concerned that even if they were 10x safer, people would still be, you know, down on the self-driving cars, whereas I'm like, look, if you cut that statistic I cited by half, or cut it down to a quarter, that's a million people still walking around on the street, playing with their kids and their dogs. And so I think that we should push forward with those kinds of technologies, but I do think we need a higher level of accountability. I don't fully agree with the EU AI Act; I think it's overreaching, I think there's a lot of politics involved, I think it creates a lot of bureaucracy: suddenly social media algorithms are classified as high-risk based on an amendment and some politicking. I think that's kind of nonsense. But I do agree that if you're going to put these into cars, driving heavy machines in the physical world that could kill things, there needs to be a higher level of accountability, and a higher level of understanding of what's happening: a higher level of keeping the logs, version control, knowing where the data came in, like, understanding and giving the investigative tools, you know, to lawmakers and enforcers of the law to understand these things, if they're going to be controlling, you know, the settings on the nuclear reactor or whatever, right? This is stuff we should
go very carefully into, okay, and there should be high bars, you know, for these kinds of things. So yeah, there are real risks, and we are wasting our time talking about sci-fi-level nonsense when we could be focusing on stuff that's really important. I couldn't agree more. Then, as we're closing out our episode, I'd be remiss not to talk to you about where you think the space is headed, and maybe get some predictions for us. You know, given the rapid pace of development of AI, I think it's safe to say that things are going to be quite different in 12 months. Where do you think we'll be in 12 months when it comes to, you know, AI, the proliferation of AI, maybe risk coming from AI? I'd love to hear your thoughts here. Look, I think it depends on the kind of breakthroughs we see, or whether the technology remains static. In other words, if it's basically Transformers, and they kind of learn once and then they're sort of stuck at that level of learning, and then you can maybe augment them with external knowledge databases and all that, that's one level that the technology gets us to, and that's very useful. I think even with that level of technology we see a proliferation of agents, and we see sort of a democratization of things that we might call RPA, right? RPA has traditionally been this big, heavy, ugly lift of, you know, filling in forms or things like that, and it kind of didn't really work. But now we have these LLMs that can go out, read text, understand that text, and do research. Like, we built a little agent at the Infrastructure Alliance: we fed it 2,000 companies in an Airtable, it went out and read all the websites, it summarized them, it scored them based on whether they'd be a good fit for joining the AIIA, and it was 95% accurate. We reached out to 50 of them, 10 of them got back to us, and two folks joined. Now we're expanding out to 200. Those kinds of tools are going to be
super ubiquitous. I think these are fantastic; everybody should be out there working on agents. And you don't need a fully autonomous agent: I consider a semi-autonomous agent, with a human in the loop having done the decomposition, to be incredibly important. I think these are going to be ubiquitous. What could really change the trajectory would be a couple of continual-learning-style breakthroughs, like a post-Transformer breakthrough that lets the neural net continually learn on the fly from new information. To me, those kinds of things, even if it's something like progress and compress, where you can download new skills and jam them into the old model, or if it's legitimately learning on the fly, like the liquid neural net, that takes us into a whole different category, because now you have something that's always learning and able to adapt in real time. That, to me, takes us to a bonkers level of awesome. But who knows whether that comes around the corner. In the short term, I expect a huge proliferation of agents, a huge proliferation of automation of very tedious, boring, you know, details in our day-to-day life, and I think that is just going to be tremendous. At the same time, the infrastructure is going to start to develop more, to make it more, kind of, you know, stable and ubiquitous; you're going to see a ton of middleware to kind of keep these things on the rails. And what you can definitely count on is more and more, you know, crazy nonsense of people shouting about, like, the end of the world, but don't listen. I think, for the vast majority of use cases, these things are going to make drugs easier to discover, they're going to make transportation, you know, safer, they're going to make our day-to-day life better: where we're, like, sorting resumes, instead I can have the agent do that,
and I can just talk to the people I want to hire and save, you know, two hours a week. When I think about the research assistant that saved us, like, two weeks of reading 2,000 websites, just to be like, oh God, reading all that marketing copy, you know, that's super cool. And I just think, you know, this is just an exciting time to be alive. If you're not working in this stuff yet, don't listen to the people saying it's going to replace programmers: get in there. We still need brilliant programmers. The hardest part is not writing code; it's coming up with how to think about a program and then break it down. So get into it even more, just level up the skill, adapt to it. If you're an artist, you know, don't be worried about this stuff; just embrace it as part of your tool set. Artists are not going anywhere; we're always going to have artists creating real things. This concept that they're all going to disappear, I'm sorry, but it's just not true. Get in there and embrace it as another tool set; it's just going to be like using Photoshop or a paintbrush. It's going to be amazing, and I think it's going to be amazing across every industry. This is one of the greatest times to be alive, and you should just embrace it, embrace it with relish and love. I think this is a great way to end today's discussion. Thank you so much, Daniel, for coming on DataFramed; it was a really wonderful discussion. Thanks so much for having me, I really enjoyed it, just a wonderful host, really, really fantastic conversation.
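The semi-autonomous research agent described in the episode (feed a list of companies, read each website, summarize it, score it for fit, and hand a human a shortlist to review) could be sketched roughly like this. All names here (`triage`, `Candidate`, `summarize_and_score`, the threshold) are hypothetical illustrations, not the Alliance's actual code, and the LLM call is injected as a function so the pipeline itself is a testable skeleton:

```python
# A minimal sketch of a human-in-the-loop triage agent, assuming you already
# have (company_name, website_text) pairs and some LLM-backed function that
# returns a summary plus a 0..1 fit score for each site.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Candidate:
    name: str
    summary: str
    fit_score: float  # 0.0 (poor fit) .. 1.0 (great fit)

def triage(
    sites: Iterable[tuple[str, str]],  # (company_name, website_text) pairs
    summarize_and_score: Callable[[str], tuple[str, float]],  # e.g. an LLM call
    threshold: float = 0.7,
) -> list[Candidate]:
    """Summarize and score every site; keep only likely fits, best first.

    The output is a shortlist for a human to review before any outreach,
    matching the semi-autonomous, human-in-the-loop pattern described above.
    """
    shortlist = []
    for name, text in sites:
        summary, score = summarize_and_score(text)
        if score >= threshold:
            shortlist.append(Candidate(name, summary, score))
    return sorted(shortlist, key=lambda c: c.fit_score, reverse=True)
```

In practice `summarize_and_score` would wrap a prompt to a model and `sites` would come from scraping the URLs in the spreadsheet; keeping the model call behind a plain function makes the filtering and ranking logic easy to test with a stub.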