Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation | Lex Fridman Podcast #376
"WEBVTTKind: captionsLanguage: en- You know I can tell ChatGPT,create a piece of code and thenjust run it on my computer.And I'm like, you know, that,that sort of personalizesfor me the what could,what could possibly go wrong, so to speak.- Was that exciting orscary, that possibility?- It was alittle bit scary actually,because it's kind of like,if you do that right,What is the sandboxingthat you should have?And that's sort of a,that's a, a version of,of that question for the world.That is, as soon as you putthe AIs in charge of things,you know, how much, how manyconstraints should there beon these systems beforeyou put the AIs in chargeof all the weapons and all these,you know, all thesedifferent kinds of systems.- Well here's the funpart about sandboxes,is the AI knows about them andhas the tools to crack them.The following is a conversationwith Steven Wolfram,his fourth time on this podcast.He's a computer scientist, mathematician,theoretical physicist, and thefounder of Wolfram Research,a company behindMathematica, Wolfram Alpha,Wolfram Language and the Wolfram physicsand meta mathematics projects.He has been a pioneer in exploringthe computational nature of reality.And so he's the perfect personto explore with togetherthe new quickly evolvinglandscape of large language modelsas human civilizationjourneys towards buildingsuper intelligent AGI.This is a Lex Fridman podcast.To support it,Please check out oursponsors in the description.And now, dear friendshere's Stephen Wolfram.You've announced theintegration of ChatGPTand Wolfram Alpha and Wolfram Language.So let's talk about that integration.What are the key differencesfrom the high philosophical level,maybe the technical levelbetween the capabilities of,broadly speaking, thetwo kinds of systems,large language models,and this computationalgigantic computational systeminfrastructure that is Wolfram Alpha?- Yeah. 
So what does something like ChatGPT do? It's mostly focused on making language like the language that humans have made and put on the web. Its primary underlying technical operation is: you give it a prompt, and it tries to continue that prompt in a way that's somehow typical of what it has seen, based on a trillion words of text that humans have written on the web. And the way it does that is with something probably quite similar to the way we humans do the first stages of that: using a neural net, saying, given this piece of text, let's ripple through the neural net and get one word of output at a time. It's a shallow computation on a large amount of training data, which is what we humans have put on the web.

That's a different thing from the computational stack that I've spent the last 40 years or so building, which has to do with what you can compute in many steps, a potentially very deep computation. It's not taking the statistics of what we humans have produced and trying to continue things based on those statistics. Instead, it's trying to take the formal structure that we've created in our civilization, whether from mathematics or from systematic knowledge of all kinds, and use that to do arbitrarily deep computations: to figure out things that aren't just matching what's already been said on the web, but to compute something new and different that's never been computed before.

As a practical matter, our goal has been to make as much of the world as possible computable, in the sense that if there's a question that is in principle answerable from some sort of expert knowledge that's been accumulated, we can compute the answer to that question, and we can do it in a reliable way that's the best one can do given the expertise that our civilization has accumulated. It's much more labor-intensive on the side of creating the computational system to do that. In the ChatGPT world, by contrast, it's: take things which were produced for quite other purposes, namely all the things we've written on the web, and forage from them things which are like what's been written on the web. So from a practical point of view, I view the ChatGPT side as being wide and shallow, and what we're trying to do with building out computation as being deep, and also broad, but most importantly deep.

Another way to think about this: if you go back in human history, a thousand years or so, and ask what the typical person can figure out, the answer is that there are certain kinds of things we humans can quickly figure out. That's what our neural architecture and the kinds of things we learn in our lives let us do. But then there's this whole layer of formalization that got developed, which is the whole story of intellectual history, and that formalization turned into things like logic, mathematics, science, and so on. That's the kind of thing that allows one to build these towers of things you work out. It's not just "I can immediately figure this out"; it's "I can use this formalism to go step by step and work out something which was not immediately obvious to me." And that's the story of what we're trying to do computationally: to build those tall towers of what implies what implies what, as opposed to "I can immediately figure it out, because it's just like what I saw somewhere else, in something that I heard or remembered."

- What can you say about the kind of formal foundation you can build such a structure on? What kinds of things would you start with in order to build these deep computable knowledge trees?
- So the question is how you think about computation, and there are a couple of points here. One is what computation intrinsically is like, and the other is what aspects of computation we humans, with our minds and the kinds of things we've learned, can relate to in that computational universe. If we start with what computation can be like, that's something I've spent some big chunk of my life studying. Usually we write programs where we know what we want the program to do: we carefully write many lines of code, and we hope the program does what we intended. But the thing I've been interested in is the natural science of programs. You just say: I'm going to make this program, and it's a really tiny program. Maybe I even pick the pieces of the program at random. And by really tiny, I mean less than a line of code. You say: what does this program do?
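The kind of tiny program Wolfram is describing can be written down concretely. Below is a minimal sketch in Python of an elementary cellular automaton, the class of programs from his early experiments, where the entire "program" is just an 8-bit rule number (here Rule 30, in the standard numbering Wolfram introduced):

```python
def step(cells, rule):
    """One step of an elementary cellular automaton: each new cell
    depends only on its three-cell neighborhood, which indexes a bit
    of the 8-bit rule number (Wolfram's standard rule numbering)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The "program" is essentially just the number 30.
cells = [0] * 31
cells[15] = 1  # start from a single black cell
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 30)
```

Despite the triviality of the specification, the triangle of cells that scrolls out is famously intricate and irregular, which is the surprise Wolfram goes on to describe.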
And you run it. And the big discovery that I made in the early eighties is that even extremely simple programs, when you run them, can do really complicated things. It really surprised me. It took me several years to realize that that was a thing, so to speak. But that realization, that even very simple programs can do incredibly complicated things we very much don't expect, is, I realized, very much how nature works. Nature has simple rules, and yet does all sorts of complicated things that we might not expect. A big theme of the last few years has been understanding that that's how the whole universe and physics works, but that's a quite separate topic.

So there's this whole world of programs and what they do, and very rich, sophisticated things that these programs can do. But when we look at many of these programs, we say: well, I don't really know what that's doing; it's not a very human kind of thing. So on the one hand we have what's possible in the computational universe. On the other hand, we have the kinds of things that we humans think about, the kinds of things that have developed in our intellectual history. And the real challenge in making things computational is to connect what's computationally possible out in the computational universe with the things that we humans typically think about with our minds. Now, that's a complicated, moving target, because the things we think about change over time. We've learned more stuff; we've invented mathematics; we've invented various kinds of ideas and structures. So it's gradually expanding: we're gradually colonizing more and more of this intellectual space of possibilities. But the real challenge is: how do you encapsulate the kinds of things that we think about, in a way that plugs into what's computationally possible?

And the big idea there is symbolic programming, symbolic representations of things. When you look at everything in the world, you take some visual scene you're looking at, and you ask: how do I turn that into something I can stuff into my mind? There are lots of pixels in my visual scene, but the things I remember from that scene are things like "there's a chair in this place." It's a symbolic representation of the visual scene: there are two chairs and a table, rather than all these pixels arranged in all these detailed ways. So the question is how you take all the things in the world and make some kind of representation that corresponds to the ways we think about things. Human language is one form of representation we have: we talk about chairs, and that's a word in human language. But human language is not, in and of itself, something that plugs in very well to computation. It's not something from which you can immediately compute consequences. So you have to find a way to take the stuff we understand from human language and make it more precise. And that's really the story of symbolic programming.

What that turns into is something I didn't know at the time would work as well as it has. Back in 1979 or so, I was trying to build my first big computer system and figure out how to represent computations at a high level. And I invented this idea of using symbolic expressions, structured like a function with a bunch of arguments, where that function doesn't necessarily evaluate to anything: it's just a thing that sits there representing a structure. It's turned out that that structure is a good match for the way we humans conceptualize higher-level things, and for the last 45 years or so it's served me remarkably well.

- So you build up that structure using this kind of symbolic representation. But what can you say about abstractions here? Because you could just start with your physics project: you could start with a hypergraph at a very, very low level and build up everything from there. But you don't, right? You take shortcuts.
- Right.
- You take the highest level of abstraction, the kind of abstraction that's convertible to something computable using symbolic representation, and that's your new foundation for that little piece of knowledge.
- Yes.
- And somehow all of that is integrated.
- Right. So there's a very important phenomenon here, one of those things that, in the future of pretty much everything, is going to become more and more important: the phenomenon of computational irreducibility. The question is: if you know the rules for something, you have a program, and you're going to run it, you might say, "I know the rules, great, I know everything about what's going to happen." Well, in principle you do, because you can just run those rules out and see what they do. You might run them a million steps and see what happens. The question is: can you immediately jump ahead and say, "I know what's going to happen after a million steps, and the answer is 13," or something?
- Yes.
- And one of the critical things to realize is that if you can reduce that computation, there's in a sense no point in doing the computation.
- Yeah.
- The place where you really get value out of doing the computation is when you had to do the computation to find out the answer. And this phenomenon, that you have to do the computation to find out the answer, this phenomenon of computational irreducibility, seems to be tremendously important for thinking about lots of kinds of things.

So one of the things that happens is: okay, you've got a model of the universe at the low level, in terms of atoms of space and hypergraphs and rewriting hypergraphs, and it's happening, say, 10 to the 100 times every second. You say: great, we've nailed it, we know how the universe works. Well, the problem is that the universe can figure out what it's going to do; it does those 10 to the 100 steps. But for us to work out what it's going to do, we have no way to reduce that computation. The only way to see the result of the computation is to do it. And if we're operating within the universe, there's no opportunity to do that, because the universe is doing it as fast as the universe can do it.

So what we're trying to do, and a lot of the story of science and a lot of other things, is finding pockets of reducibility. You could imagine a situation where everything in the world is full of computational irreducibility and we never know what's going to happen next; the only way to figure it out is to let the system run and see. So in a sense, the story of most kinds of science, and of invention, is the story of finding these places where we can locally jump ahead. And one of the features of computational irreducibility is that there are always pockets of reducibility; there are always an infinite number of places where you can jump ahead. You can never jump completely ahead, but there are little patches, little places, where you can jump ahead a bit.

And I think the thing we realize is that we exist in a slice of all the possible computational irreducibility in the universe, a slice where there's a reasonable amount of predictability. In a sense, as we construct these higher levels of abstraction, symbolic representations and so on, what we're doing is finding these lumps of reducibility that we can attach ourselves to, and about which we can have fairly simple narrative things to say. Because in principle, if I ask what's going to happen in the next few seconds: well, there are all these molecules moving around in the air in this room, and that's an incredibly complicated story, a whole computationally irreducible process, most of which I don't care about. Mostly, the air is still going to be here and nothing much is going to be different about it. And that's a reducible fact about what is ultimately, at an underlying level, a computationally irreducible process.

- And life would not be possible if we didn't have a large number of such reducible pockets.
- Yes.
- Pockets amenable to reduction into something symbolic.
- Yes, I think so.
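The contrast between running a system out and jumping ahead can be made concrete with a toy example. This is a sketch in Python, not an example Wolfram gives: the Collatz iteration stands in for an irreducible-seeming process (its irreducibility is a conjecture, not a theorem), while powers of two form a genuine pocket of reducibility:

```python
def collatz_steps(n):
    """Iterate the Collatz map (n -> n/2 if even, 3n+1 if odd)
    until reaching 1, counting steps. In general no shortcut is
    known: to get the answer, you run the computation."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# A pocket of reducibility: for powers of two we can jump ahead,
# since 2**k just halves k times, so the answer is k with no run needed.
assert collatz_steps(2 ** 20) == 20

# Outside such pockets, running the system is the only known way:
print(collatz_steps(27))  # 111 steps, found only by iterating
```

The pocket is exactly the kind of "locally jump ahead" Wolfram describes: a special family of inputs where a simple narrative (the answer is k) replaces step-by-step computation.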
I mean, life in the way that we experience it, depending on what we mean by life, so to speak: the experience we have of consistent things happening in the world. The idea of space, for example, where we can say: you are here, you move there, and it's still you in that different place, even though you're made of different atoms of space. This is the idea that there's a level of predictability in what's going on. That's us finding a slice of reducibility in what is, underneath, a computationally irreducible system. And that is actually my favorite discovery of the last few years: the realization that it is the interaction between this underlying computational irreducibility and our nature as observers, who have to key into computational reducibility, that leads to the main laws of physics we discovered through the 20th century. We can talk about this in more detail, but to me it comes down to our nature as observers. We are computationally bounded observers. We don't get to follow all those little pieces of computational irreducibility. To get what's out there in the world into our minds requires that we look at things that are reducible: we're compressing, extracting just some essence, some symbolic essence, of the detail of what's going on in the world. That, together with one other condition that at first seems trivial but isn't: that we believe we are persistent in time.
- Yes.
- You know.
- So, causality.
- Here's the thing. At every moment, according to our theory, we are made of different atoms of space. The microscopic detail of what the universe is made of is being rewritten at every moment. In fact, the very coherence between different parts of space is a consequence of all these little processes going on that knit together the structure of space. It's like a fluid with a bunch of molecules in it: if those molecules weren't interacting, you wouldn't have a fluid that pours and does all these kinds of things; it would just be a free-floating collection of molecules. So it is with space: the fact that space is knitted together is a consequence of all this activity in space. And what we consist of is this series of rewrites; we're continually being rewritten. So the question is: why do we think of ourselves as being the same "us" through time? That's a key assumption. I think it's a key aspect of what we see as our consciousness, so to speak: that we have this consistent thread of experience.
- Well, isn't that just another limitation of our mind, that we want to reduce...
- Yeah.
- ...reality into that kind of temporal...
- Yes.
- ...consistency? It's just a nice narrative to tell ourselves.
- Right. Well, the fact is, I think it's critical to the way we humans typically operate that we have a single thread of experience. If you imagine a mind that splits into multiple threads of experience, maybe that's what's happening in various kinds of minds that don't work the way other minds work. It's also something where, when you look at quantum mechanics, for example: in the insides of quantum mechanics, it's splitting into many threads of experience. But in order for us humans to interact with it, you have to knit all those different threads together, so that we say: a definite thing happened, and now the next definite thing happens, and so on. It's interesting to try to imagine what it's like to have fundamentally multiple threads of experience going on. Right now, different human minds have different threads of experience; we just have a bunch of minds interacting with each other. But within each mind there's a single thread, and that is indeed a simplification. The general computational system does not have that simplification. People often seem to think that consciousness is the highest level of thing that can happen in the universe, so to speak. I think that's not true. I think it's actually a specialization, in which, among other things, you have this idea of a single thread of experience, which is not a general feature of anything that could computationally happen in the universe.
- So it's a feature of a computationally limited system that's only able to observe reducible pockets.
- Yeah.
- This word "observer" means something in quantum mechanics; it means something in a lot of places; it means something to us humans...
- Right.
- ...as conscious beings. So what is the observer, and what's the importance of the observer in the computational universe?
- So this question of what an observer is, the general idea of an observer, is actually one of my next projects, which got somewhat derailed by the current AI mania. But...
But-- Is there a connectionthere or is that, do you,do you think the observerprimarily a physics phenomena?- Is it related to the whole AI thing?- Yes.- Yes, it is related.So one question is, whatis a general observer?So, you know,we know we have an idea what isa general computational system.We think about touring machines,we think about othermodels of computation.There's a question, what is ageneral model of an observer?And there there's kindof observers like us,which is kind of theobservers we're interested in.You know, we could imagine analien observer that deals withcomputational I irrereducibilityand it has a mind that'sutterly different from ours and,and completely incoherentwith what, what we are like.But the fact is that that, you know,if we are talking about observers like us,that one of the key things isthis idea of kind of takingall the detail of the worldand being able to stuff it into a mind,being able to take allthe detail and kind of,you know, extract outof it a smaller set of,of kind of degrees of freedom.A smaller number of,of elements that willsort of fit in our minds.And I think this, this question,so I've been interestedin trying to characterizewhat is the general observer.And the general observer is,I think in part there are many,let let me give an example ofa, you know, you have a gas,it's got a bunch of moleculesbouncing around and the thingyou are measuring aboutthe gas is it's pressure.And the only thing you as an observercare about is pressure.And that means you have apiston on the side of this boxand the piston is being pushed by the gas.And there are many,many different ways thatmolecules can hit that piston.But all that matters isthe kind of aggregateof all those molecular impacts.Because that's what determines pressure.So there's a huge numberof different configurationsof the gas, which are all equivalent.So I think one key aspect ofobservers is this equivalentthing of many differentconfigurations of a system saying,all I care about 
isthis aggregate feature.All I care about isthis, this overall thing.And that's, that's sortof one, one aspect.And we see that in lotsof different, again,it's the same story overand over again that there's,there's a lot of detail in the world,but what we are extracting from itis something a sort of a thin,a thin summary of that, of that detail.- Is that thin summary nevertheless true.Is can it be a crappyapproximation that an average is,is correct.I mean, if we look at the observer,that's the human mind, it seemslike there's a lot of very,as represented by naturallanguage for example,there's a lot of reallycrappy approximation.- Sure.- And that could be maybe a feature of it.- Well, yes.- But there's ambiguity.- Right, right.You don't know, you know,it could be the case you,you're just measuringthe aggregate impactsof these molecules,but there is some tiny,tiny probability that moleculeswill arrange themselvesin some really funky way.And that just measuring that average isn'tgoing to be the main point.- Yeah.- By the way,an awful lot of science is veryconfused about this because,you know, you look at,you look at papers andpeople are really keen,they draw this curve andthey have these, you know,these bars on the curve and things.And it's just this curveand it's this one thing,and it's supposed to representsome system that has allkinds of details in it.And this is a way that lotsof science has gotten wrongbecause people say,I remember years ago I wasstudying snowflake growth that,you know, you have thesnowflake and it's growing,it has all these arms, it'sdoing complicated things.But there was a literatureon this stuff and it talkedabout, you know, what's therate of snowflake growth?And you know, it,it got pretty good answersfor the rate of thegrowth of the snowflake.And they looked at it more carefully.And then they had thesenice curves of, you know,snowflake growth rates and so on.I looked at it more carefullyand I realized according totheir models, the 
snowflakewill be spherical.And so they got the growth rate right.But the detail was just utterly wrong.And you know, the notonly the detail, the,the whole thing was, wasnot capturing, you know,it was capturing this aspect of the systemthat was in a sense,missing the main pointof what was going on.- What is the geometricshape of a snowflake?- Snowflakes start in,in the phase of water that's relevant.- Yeah.- To the formation of snowflakes.It's a phase of ice,which starts with ahexagonal arrangement of,of water molecules.And so it starts off growingas a hexagonal plate.And then what happens is-- Is the plate, oh, oh versus sphere.- Well, no, no, but it's,it's much more than that.I mean, snowflakes are fluffy, you know,typical snowflakes havelittle, little dendritic arms.- Okay, yeah, yeah, yeah.- And, and what actually happens is,it's kind of kind of coolbecause you can make these verysimple discrete models withcellular automata and thingsthat, that figure this out.You start off with this,you know, hexagonal thing,and then the places it,it starts to grow little arms,and every time a little piece of ice,it adds itself to the snowflake.The fact that that ice condensedfrom the water vapor heatsthe snowflake up locally.And so it makes it less likely for,for another piece of iceto accumulate right nearby.So this leads to a kindof growth inhibition.So you grow an arm and it, it is a,a separated arm because rightaround the arm it got a littlebit hot and it didn't add more ice there.So what happens is itgrows, you have a hexagon,it grows out arms, the arms grow arms,and then the arms grow arms grow arms.And eventually, actuallyit's kind of cool because it,it actually fills in anotherhexagon, a bigger hexagon.And when I first looked at this, you know,had a very simple model forthis, I realized, you know,when it fills in that hexagon,it actually leaves some holes behind.So I thought, well, you know,that is that really right?So I look at these picturesof snowflakes and sure 
enoughthey have these little holesin them that are kind of scarsof the way that these arms grow out.- So you can't fill in backfillholes. So you keep going.- They don't backfill.Yeah, they don't backfill.- And, and presumably there'sa limitation on how big,like you can't arbitrarily grow.- I'm not sure. I mean, thething falls through the, the,I mean, I think it does, you know,it hits the ground at some point.I think you can grow.I I think you can grow in the lab.I think you can grow pretty big ones.I think you can growmany, many iterations of,this kind of goes fromhexagon, it grows out arms,it turns back, it fillsback into a hexagon.It grows more arms again in.- In 3D.- No. It's flat usually.- Why is it flat?Why doesn't it spin out?Okay, okay, wait a minute.You said it's fluffy andfluffy is a three-dimensionalproperty, no or.- No, it's, it's fluffy snow is.Okay. So you know what makes we're really,we're really in a-- Let's go there.Multiple snowflakes become fluffy.A single snowflake is not fluffy?- No, no.A single snowflake is fluffy.And what happens is, you know,if, if you have snow thatis just pure hexagons, they,they can, you know, they,they fit together pretty well.It's not, it doesn't, it doesn't make,it doesn't have a lot of air in it.And they can also slide againsteach other pretty easily.And so the snow can bepretty, you know, can,I think avalanches happen sometimes when,when the things tendto be these, you know,hexagonal plates and it kind of slides.But then when the thing has all these armsthat have grown out,it's not, they don'tfit together very well.And that's why the snowhas lots of air in it.And if you look at oneof these snowflakes,and if you catch one, you'llsee it has these little arms.And people actually,people often say, you know,no two snowflakes are alike.That's mostly becauseas a snowflake grows,they do grow pretty consistentlywith these different arms and so on.But you capture them atdifferent times as they,you know, they fell 
through,through the air in a different way.You'll catch this one, this stage.And as it goes through different stages,they look really different.And so that's why, you know,it kinda looks like no twosnowflakes are alike because youcaught them at different,at different times.- So the rules under whichthey grow are the same.- Yes.- It's just the timing is-- Yes.- Okay. So the point is,science is not able to describethe full complexity of snowflake growth.- Well science, if you,if you do what people might often do,which is say, okay,let's make it scientific,let's turn into one number,and that one number is kindof the growth rate of the armsor some such other thing,that fails to capturesort of the detail ofwhat's going on inside the system.And that's, in a sense,a big challenge for scienceis how do you extract from thenatural world, for example,those aspects of itthat you are interestedin talking about.Now you might just say,I don't really care about thefluffiness of the snowflakes.All I care about is thegrowth rate of the arms.In which case, you know, you have,you can have a good modelwithout knowing anythingabout the fluffiness.But the fact is, as a practical,you know, when if you,if you say what's the,what is the most obviousfeature of a snowflake?Oh, that it has this complicated shape,well then you've got a differentstory about what you model.I mean, this, this isone of the features of,of sort of modeling andscience that, you know,what is a model?A model is some way of reducingthe actuality of the worldto something where you canreadily sort of give a narrativefor what's happening,where you can basically makesome kind of abstraction ofwhat's happening and answer questionsthat you care about answering.If you wanted to answer all possiblequestions about the system,you'd have to have the wholesystem because you might careabout this particular molecule.Where did it go?And you know, your model,which is some big abstraction of thathas nothing to say about that.So, you 
know, one of the things that's,that's often confusing in sciences,people will have, I've gota model, somebody says,somebody else will say,I don't believe in your modelbecause it doesn't capturethe feature of thesystem that I care about.You know, there's alwaysthis controversy about,you know, is the, is it a correct model?Well, no model is a,except for the actual system itselfis a correct model in the sensethat it captures everything.Question is, does it capturewhat you care about capturing?Sometimes that's ultimatelydefined by what you're going tobuild technology out of things like this.The one counterexample tothis is if you think you'remodeling the wholeuniverse all the way down,then there is a notion of a correct model.But even that is more complicatedbecause it depends on kindof how observers sample things and so on.That's a, that's a separate story,but at least at the firstlevel to say, you know,this thing about, oh,it's an approximation.You're capturing one aspect,you're not capturing other aspects.When you really thinkyou have a complete modelfor the whole universe,you better be capturingultimately everything,even though to actually runthat model is impossible becauseof computational reducibility.The only,the only thing thatsuccessfully runs that modelis the actual running of the universe.- Is the universe itself. 
But okay, so what you care about is an interesting concept. So that's a human concept. So that's what you're doing with Wolfram Alpha and Wolfram Language, is you're trying to come up with symbolic representations.
- Yes.
- As simple as possible. So a model that's as simple as possible that fully captures stuff we care about.
- Yes. So I mean, for example, you know, we'll have a thing about, you know, data about movies, let's say. We could be describing every individual pixel in every movie and so on. But that's not the level that people care about. And that level that people care about is somewhat related to what's described in natural language. But what we're trying to do is to find a way to sort of represent it precisely so you can compute things. See, one thing, when you give a piece of a natural language question, is you feed it to a computer. You say, does the computer understand this natural language? Well, you know, the computer processes it in some way, maybe it can make a continuation of the natural language. You know, maybe it can go on from the prompt and say what it's gonna say. You say, does it really understand it? Hard to know. But in this kind of computational world, there is a very definite definition of does it understand, which is, could it be turned into this symbolic computational thing from which you can compute all kinds of consequences? And that's the sense in which one has sort of a target for the understanding of natural language. And that's kind of our goal, is to have as much as possible about the world that can be computed in a reasonable way, so to speak, be able to be sort of captured by this kind of computational language. That's kind of the goal. And I think for us humans, the main thing that's important is, as we formalize what we're talking about, it gives us a way of kind of building a structure where we can sort of build this tower of consequences of things. So if we're just saying,
well, let's talk about it in natural language, it doesn't really give us some hard foundation that lets us, you know, build step by step to work something out. I mean, it's kind of like what happens in math: if we were just sort of vaguely talking about math but didn't have the kind of full structure of math and all that kind of thing, we wouldn't be able to build this kind of big tower of consequences. And so, you know, in a sense what we're trying to do with the whole computational language effort is to make a formalism for describing the world that makes it possible to kind of build this tower of consequences.
- Well, can you talk about this dance between natural language and Wolfram Language? So there's this gigantic thing we call the internet, where people post memes and diary-type thoughts and very important-sounding articles and all of that. That makes up the training data set for GPT, and then there's Wolfram Language. How can you map from the natural language of the internet to the Wolfram Language? Is there a manual?
Is there an automated way of doing that? As we look into the future?
- Well, so Wolfram Alpha, what it does, its kind of front end is turning natural language into computational language, right?
- What you mean by that is there's a prompt, you ask a question, what is the capital of some country.
- And it turns into, you know, what's the distance between, you know, Chicago and London or something. And that will turn into, you know, GeoDistance of the entity city, you know, et cetera, et cetera, et cetera. Each one of those things is very well defined. We know, you know, given that it's the entity city, Chicago, et cetera, you know, Illinois, United States, you know, we know the geo location of that. We know its population, we know all kinds of things about it, which we have, you know, curated that data to be able to know that with some degree of certainty, so to speak. And then we can compute things from this. And that's kind of the, yeah, that's the idea.
- But then something like GPT, large language models, do they allow you to make that conversion much more powerful?
- Okay, so it's an interesting thing which we still don't know everything about. Okay. I mean, this question of going from natural language to computational language, yes. In Wolfram Alpha, we've now, you know, Wolfram Alpha has been out and about for what, 13 and a half years now. And, you know, we've achieved, I don't know what it is, 98%, 99% success on queries that get put into it. Now, obviously there's a sort of feedback loop, because the things that work are things people go on putting into it. So yeah, but you know, we've got to a very high success rate on the little fragments of natural language that people put in, you know, questions, math calculations, chemistry calculations, whatever it is. You know, we do very well at that, turning those things into computational language. Now, from the very beginning of Wolfram Alpha, I thought
about, for example, writing code with natural language. In fact, I was just looking at this recently. I had a post that I wrote in 2010, 2011, called something like Programming with Natural Language Is Actually Going to Work. Okay. And so, you know, we had done a bunch of experiments using methods that were, yeah, some of them a little bit machine-learning-like, but certainly not the same kind of idea of vast training data and so on that is the story of large language models. Actually, a piece of utter trivia: Steve Jobs forwarded that post around to all kinds of people at Apple. And, you know, that was because he never really liked programming languages. So he was very happy to see the idea that you could get rid of this kind of layer of engineering-like structure. He would've liked, I think, what's happening now. Because it really is the case that, you know, this idea that you have to kind of learn how the computer works to use a programming language is something that, I think, just like you had to learn the details of the op codes to know how assembly language worked and so on, is kind of a thing that has a limited time horizon. But, you know, so this idea of how elaborate can you make kind of the prompt, how elaborate can you make the natural language and abstract from it computational language, it's a very interesting question. And you know, what ChatGPT, you know, GPT-4 and so on can do is pretty good. It's a very interesting process. I mean, I'm still trying to understand this workflow. We've been working out a lot of tooling around this workflow-
- The natural language to computational language.
- Right.
- And the process. Especially if it's conversation, like dialogue, it's like multiple queries kind of thing.
- Yeah.
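The Chicago-to-London query mentioned above resolves, in Wolfram Alpha, to a GeoDistance computation over curated city entities. As a rough illustration only, here is a plain-Python sketch of the underlying great-circle arithmetic; the coordinates are hand-entered approximations and the function name is mine, not Wolfram's actual machinery:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates for Chicago and London
chicago = (41.88, -87.63)
london = (51.51, -0.13)
print(round(great_circle_km(*chicago, *london)), "km")  # on the order of 6,300-6,400 km
```

The point of the curated-entity layer Wolfram describes is that "Chicago" gets disambiguated to a specific entity with a known geo location before any arithmetic like this runs.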
Right. There's so many things that are really interesting that work and so on. So first thing is, can you just walk up to the computer and expect to sort of specify a computation? What one realizes is humans have to have some idea of kind of this way of thinking about things computationally. Without that you're kind of out of luck, because you just have no idea what you're going to ask when you walk up to a computer. I remember, I should tell a silly story about myself, the very first computer I saw, which is when I was 10 years old, and it was a big mainframe computer and so on, and I didn't really understand what computers did. And it's like somebody's showing me this computer and it's like, you know, can the computer work out the weight of a dinosaur? It's like, that isn't a sensible thing to ask. That's kind of, you know, that's not what computers do. I mean, in Wolfram Alpha, for example, you could say, what's the typical weight of a stegosaurus? And we'll give you some answer, but that's a very different kind of thing from what one thinks of computers as doing. And so the question is, you know, first thing is people have to have an idea of what computation is about. You know, I think, for education, that is the key thing. It's kind of this,
It's kind of this,this, this notion, not computer science,not so that the details are programming,but just this idea of howdo you think about the worldcomputationally computation,thinking about the worldcomputationally is kind of thisformal way of thinking about the world.We've had other ones, like logicwas a formal way, you know,as a way of sort of abstractingand formalizing some aspects of the world.Mathematics is another one.Computation is this very broadway of sort of formalizingthe way we think about the world.And the thing that's,that's cool about computationis if we can successfullyformalize things in terms of computation,computers can help us figureout what the consequences are.It's not like you formalized it with math.Well that's nice, but now youhave to, if you're, you know,not using a computer to do the math,you have to go work out abunch of stuff yourself.So I think, but the, this idea,let's see, I mean the, the,you know, we're tryingto take kind of the,we're talking aboutsort of natural languageand its relationship tocomputational language.The, the thing,the sort of the typical workflowI think is first human hasto have some kind of idea ofwhat they're trying to do.That if it,if it's something that theywant to sort of build a towerof, of capabilities on somethingthat they want to sort offormalize and make computational,so then human can typesomething into, you know,some LLM system and sort of say vaguelywhat they want in sortof computational terms,then it does pretty well at synthesizingWolfram Language code,and it'll probably do better in the futurebecause we've got a hugenumber of examples of,of natural language inputtogether with the Wolfram Languagetranslation of that.So it's kind of a, a you know,that that's a thing where youcan kind of extrapolating fromall those examples makes iteasier to do that, that task.- Is the prompter taskcould also kind of debuggingthe Wolfram Language code?Or is your hope to not do that debugging?- Oh, no. 
No, no. I mean, so there are many steps here. Yeah. Okay, so first, the first thing is you type natural language, it generates Wolfram Language.
- Do you have examples, by the way? Do you have an example, is it the dinosaur example, do you have an example that jumps to mind that we should be thinking about? Some dumb example.
- It's like, take my heart rate data and, you know, figure out, you know, make a moving average every seven days or something, and make a plot of the result. Okay. So that's a thing which is, you know, about two-thirds of a line of Wolfram Language code. I mean, it's, you know, ListPlot of MovingAverage of some data bin or something of the data, and then you'll get the result. And you know, the vague thing that I was just saying in natural language would almost certainly correctly turn into that very simple piece of Wolfram Language code.
- You start mumbling about heart rate.
- Yeah.
- Kinda, you know, you arrive at the moving average kind of idea.
- Right?
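In Wolfram Language the example above really is roughly one line, something like ListPlot[MovingAverage[data, 7]]. A minimal sketch of the same moving-average step in Python (made-up hypothetical heart rate readings, and skipping the plotting):

```python
# Hypothetical daily heart rate readings (beats per minute)
heart_rate = [62, 65, 70, 68, 64, 61, 66, 72, 69, 63]

def moving_average(data, window):
    """Average of each consecutive run of `window` values."""
    return [sum(data[i:i + window]) / window
            for i in range(len(data) - window + 1)]

print(moving_average(heart_rate, 7))  # four 7-day averages for ten readings
```

The transcript's point is that a vague spoken request ("average over seven days or something") compresses to a piece of code this small, which is short enough for a human to read and check.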
You say average over seven days, maybe it'll figure out that that can be encapsulated as this moving average idea. I'm not sure. But the typical workflow I'm seeing is: you generate this piece of Wolfram Language code, it's pretty small usually. And if it isn't small, it probably isn't right. But you know, it's pretty small, and, you know, one of the ideas of Wolfram Language is, it's a language that humans can read. Programming languages tend to be this one-way story of humans write them and computers execute from them. Wolfram Language is intended to be something which is sort of like math notation, something where, you know, humans write it and humans are supposed to read it as well. And so kind of the workflow that's emerging is kind of this: the human mumbles some things, you know, the large language model produces a fragment of Wolfram Language code. Then you look at that, you say, yeah, that looks right. Well, typically you just run it first, you see, does it produce the right thing? You look at what it produces. You might say, that's obviously crazy. You look at the code, you see, I see why it's crazy. You fix it. If you really care about the results and you really wanna make sure it's right, you better look at that code and understand it, because that's the way you have the sort of checkpoint of, did it really do what I expected it to do? Now, you go beyond that. I mean, you know, what we find is, for example, let's say the code does the wrong thing, then you can often say to the large language model, can you adjust this to do this? And it's pretty good at doing that.
- Interesting. So you're using the output of the code to give you hints about the function of the code. So you're debugging based on the output of the code itself.
- And by the way, right, the plugin that we have for ChatGPT, it does that routinely, you know, it will send the thing in, it will get a result. It will
discover, the LLM will discover itself that the result is not plausible, and it will go back and say, oh, I'm sorry, it's very polite, and it goes back and says, I'll rewrite that piece of code, and then it will try it again and get the result. The other thing that's pretty interesting is, so one of the concepts that we have, we invented this whole idea of notebooks back 36 years ago now. And so now there's the question of sort of, how do you combine this idea of notebooks, where you have, you know, text and code and output, how do you combine that with the notion of chat and so on. And there's some really interesting things there. Like, for example, a very typical thing now is we have these notebooks where, as soon as the thing produces errors, if you, you know, run this code and it produces messages and so on, the LLM automatically not only looks at those messages, it can also see all kinds of internal information about stack traces and things like this. And it then does a remarkably good job of guessing what's wrong and telling you. So in other words, it's looking at things, it's sort of interesting, it's kind of a typical sort of AI-ish thing, that it's able to have more sensory data than we humans are able to have, because it's able to look at a bunch of stuff that we humans would kind of glaze over looking at. And it's able to then come up with, oh, this is the explanation of what's happening.
- And what is the data? The stack trace, the code you've written previously, the natural language you've written.
- Yeah. Also what's happening is, for example, when there's these messages, there's documentation about these messages, there's examples of where the messages have occurred otherwise.
- Nice.
- All these kinds of things. The other thing that's really amusing with this is, when it makes a mistake, one of the things that's in our prompt when the code doesn't work is: read the documentation. And we have a, you
know, another piece of the plugin that lets it read documentation. And that again is very, very useful, because, you know, it will figure out, sometimes it'll make up the name of some option for some function that doesn't really exist. Read the documentation, and it'll have, you know, some wrong structure for the function and so on. That's a powerful thing. I mean, the thing that, you know, I've realized is, we built this language over the course of all these years to be nice and coherent and consistent and so on, so it's easy for humans to understand. Turns out there was a side effect that I didn't anticipate, which is it makes it easier for AIs to understand.
- It's almost like another natural language. But-
- Yeah.
- So Wolfram Language is a kind of foreign language.
- Yes, yes.
- You have a lineup: English, French, Japanese, Wolfram Language, and then, I don't know, Spanish, and then the system is not gonna notice. Hopefully.
- Well, yes, I mean, maybe, you know, that's an interesting question, because it really depends on what I see as being an important piece of fundamental science that basically just jumped out at us with ChatGPT. Because I think, you know, the real question is, why does ChatGPT work? How is it possible to encapsulate, you know, to successfully reproduce all these kinds of things in natural language, you know, with a comparatively small, he says, you know, couple hundred billion, you know, weights of neural nets and so on. And I think that, you know, that relates to kind of a fundamental fact about language, which, you know, the main thing is that I think there's a structure to language that we haven't kind of really explored very well. It's kind of this semantic grammar I'm talking about for language. I mean, we kind of know that when we set up human language, we know that it has certain regularities. We know that it has a certain grammatical structure, you know, noun followed by verb, followed by noun, adjectives, et
cetera, et cetera, et cetera. That's its kind of grammatical structure. But I think the thing that ChatGPT is showing us is that there's an additional kind of regularity to language, which has to do with the meaning of the language, beyond just this pure, you know, part-of-speech combination type of thing. And I think the one example of that that we've had in the past is logic. And you know, I think my sort of picture of, how was logic invented, how was logic discovered? It really was a thing that was discovered. In its original conception, it was discovered presumably by Aristotle, who kind of listened to a bunch of people, orators, you know, giving speeches. And this one made sense, that one doesn't make sense, and you know, you see these patterns of, you know, if the Persians do this, then this does that, et cetera, et cetera, et cetera. And what Aristotle realized is, there's a structure to those sentences, there's a structure to that rhetoric, that doesn't matter whether it's the Persians and the Greeks or whether it's the cats and the dogs. It's just, you know, p and q. You can abstract from this the details of these particular sentences. You can lift out this kind of formal structure. And that's what logic is.
- That's a heck of a discovery, by the way. Logic, you're making me realize now.
- Yeah.
- It's not obvious.
- The fact that there is an abstraction from natural language, where you can fill in any word you want.
- Yeah.
- Is a very interesting discovery. Now, it took a long time to mature. I mean, Aristotle had this idea of syllogistic logic, where there were these particular patterns of how you could argue things, so to speak. And you know, in the Middle Ages, part of education was you memorized the syllogisms. I forget how many there were, but 15 of them or something. And they all had names, they all had mnemonics. Like, I think Barbara and Celarent were two of the mnemonics for the
syllogisms. And people would kind of say, this is a valid argument because it follows the Barbara syllogism, so to speak. And it took until 1830, you know, with George Boole, to kind of get beyond that and kind of see that there was a level of abstraction that was beyond this particular template of a sentence, so to speak. And what's interesting there is, in a sense, you know, ChatGPT is operating at the Aristotelian level. It's essentially dealing with templates of sentences. By the time you get to Boole and Boolean algebra, and this idea of, you know, you can have arbitrary-depth nested collections of ANDs and ORs and NOTs, and you can resolve what they mean, that's the kind of thing that's a computation story. That's, you know, you've gone beyond the pure sort of templates of natural language to something which is an arbitrarily deep computation. But the thing that I think we realized from ChatGPT is, you know, Aristotle stopped too quickly, and there was more that you could have lifted out of language as formal structures. And I think, you know, in a sense we've captured some of that in, you know, some of what is in language there. There's a lot of kind of little calculi, little algebras, of what you can say, what language talks about. I mean, whether it's, I don't know, if you say I go from place A to place B, place B to place C, then I know I've gone from place A to place C. If A is a friend of B and B is a friend of C, it doesn't necessarily follow that A is a friend of C. These are things that are, and you know, if you go from place A to place B, place B to place C, it doesn't matter how you went, like logic. It doesn't matter whether you flew there, walked there, swam there, whatever. You still, this transitivity of where you go is still valid. And there are many kinds of features, I think, of the way the world works that are captured in these aspects of language, so to speak. And I think what ChatGPT
effectively has found, just like it discovered logic, you know, people are really surprised it can do these logical inferences. It discovered logic the same way Aristotle discovered logic: by looking at a lot of sentences, effectively, and noticing the patterns in those sentences.
- But it feels like it's discovering something much more complicated than logic. So this kind of semantic grammar, I think you wrote about this, maybe we can call it the laws of language, I believe you call, or which I like, the laws of thought.
- Yes. That was the title that George Boole had for his Boolean algebra back in 1830. But yes.
- Laws of thought.
- Yes. That was what he said.
- Woo. All right.
- So he thought he nailed it with Boolean algebra.
- Yeah.
- There's more to it.
- And it's a good question, how much more is there to it? And it seems like one of the reasons, as you imply, that the reason GPT works, ChatGPT works, is that there's a finite number of things to it.
- Yeah. I mean, it's, it's.
- Like, it's discovering the laws in some sense. GPT is discovering these laws of semantic grammar that underlie language.
- Yes.
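The jump Wolfram describes from Aristotle's sentence templates to Boole's algebra, arbitrarily deep nestings of ANDs, ORs, and NOTs that can be mechanically resolved, can be sketched as a toy evaluator. This is an illustration of the idea only, not anything from Wolfram's systems:

```python
def evaluate(expr, assignment):
    """Resolve a nested Boolean expression: a bare variable name,
    or a tuple like ("and", e1, e2), ("or", e1, e2), ("not", e)."""
    if isinstance(expr, str):
        return assignment[expr]
    op = expr[0]
    if op == "not":
        return not evaluate(expr[1], assignment)
    if op == "and":
        return evaluate(expr[1], assignment) and evaluate(expr[2], assignment)
    if op == "or":
        return evaluate(expr[1], assignment) or evaluate(expr[2], assignment)
    raise ValueError(f"unknown operator: {op}")

# not (p and (q or not r)) -- it no longer matters whether p, q, r
# are about the Persians and the Greeks or the cats and the dogs.
expr = ("not", ("and", "p", ("or", "q", ("not", "r"))))
print(evaluate(expr, {"p": True, "q": False, "r": True}))  # True
```

The recursion is the point: unlike a fixed syllogism template, the expression can nest to arbitrary depth, which is the "arbitrarily deep computation" level of the story.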
And what's sort of interesting is, in the computational universe, there's a lot of other kinds of computation that you could do. They're just not ones that we humans have cared about and operate with. And that's probably because our brains are built in a certain way. And, you know, the neural nets of our brains are not that different, in some sense, from the neural nets of a large language model. And so when we think about, and you know, maybe we can talk about this some more, but when we think about sort of what will AIs ultimately do, the answer is, insofar as AIs are just doing computation, they can run off and do all these kinds of crazy computations. But the ones that we sort of have decided we care about, there is this kind of very limited set.
- That's where the reinforcement learning with human feedback seems to come in. The more the AIs say the stuff that kind of interests us, the more we're impressed by it. So they can do a lot of interesting, intelligent things, but we're only interested in the AI systems when they communicate in a human-like way.
- Yes.
- About human-like topics.
- Yes.
- Well, it's like technology. I mean, in a sense, the physical world provides all kinds of things. You know, there are all kinds of processes going on in physics. Only a limited set of those are ones that we capture and use for technology, because there's only a limited set where we say, you know, this is a thing that we can sort of apply to the human purposes we currently care about. I mean, you might have said, okay, you pick up a piece of rock, you say, okay, there's a nice silicate, it contains all kinds of silicon, I don't care. Then you realize, oh, we could actually turn this into a, you know, semiconductor wafer and make a microprocessor out of it, and then we care a lot about it.
- Yes.
- And it's, you know, it's this thing about, what do we, you know, in the evolution of our civilization, what things do we identify as being things we care about? I mean, it's, it's
like, you know, when there was a little announcement recently of a possibility of a high-temperature superconductor that involved, you know, the element lutetium, which, you know, generally nobody has cared about.
- Yes.
- And, you know, it's kind of, but suddenly, if there's this application that relates to kind of human purposes, we start to care a lot.
- So given your thinking that GPT may have discovered inklings of laws of thought, do you think such laws exist? Can we linger on that? Yeah. What's your intuition here?
- Oh, definitely. I mean, the fact is, look, logic is but the first step. There are many other kinds of calculi about things that we consider, you know, about sort of things that happen in the world or things that are meaningful.
- Well, how do you know logic's not the last step? You know what I mean? So.
- Well, because we can plainly see that, I mean, if you say, here's a sentence that is syntactically correct. Okay. You look at it and you're like, you know, the happy electron, you know, ate, I don't know what, something. You look at it and it's like, this is meaningless. It's just a bunch of words. It's syntactically correct, the nouns and the verbs are in the right place, but it just doesn't mean anything. And so there clearly are some rules that determine when a sentence has the potential to be meaningful, that go beyond the pure parts-of-speech syntax. And so the question is, what are those rules? And are there a fairly finite set of those rules? My guess is that there's a fairly finite set of those rules, and, you know, once you have those rules, you have kind of a construction kit, just like the rules of syntactic grammar give you a construction kit for making syntactically correct sentences. So you can also have a construction kit for making semantically correct sentences. Those sentences may not be realized in the world. I mean, I think, you know, the elephant flew to the moon.
- Yeah.
- Syntactically, and
semantically, you know, we know we have an idea. If I say that to you, you kind of know what that means. But the fact is, it hasn't been realized in the world, so to speak.
- So semantically correct perhaps as things that can be imagined with a human mind, no? Things that are consistent with both our imagination and our understanding of physical reality. I don't know.
- Yeah. It's a good question. I mean, it's a good question. I mean, I think it is, given the way we have constructed language, it is things which fit with the things we're describing in language. It's a bit circular in the end, because, you know, the sort of boundaries of what is physically realizable. Okay, let's take the example of motion. Okay? Motion is a complicated concept. It might seem like it's a concept that should have been figured out by the Greeks, you know, long ago. But it's actually a really pretty complicated concept, because what is motion? Motion is, you can go from place A to place B, and it's still you when you get to the other end, right? You take an object, you move it, and it's still the same object, but it's in a different place. Now, even in ordinary physics, that doesn't always work that way. If you're near a spacetime singularity in a black hole, for example, and you take your teapot or something, you don't have much of a teapot by the time it's near the spacetime singularity. It's been completely, you know, deformed beyond recognition. So that's a case where pure motion doesn't really work. You can't have a thing stay the same. So this idea of motion is something that is a slightly complicated idea. But once you have the idea of motion, once you have the idea that you're gonna describe things as being the same thing but in a different place, that sort of abstracted idea then has, you know, all sorts of consequences. Like this transitivity of motion: go from A to B, B to C, you've gone from A to C. And that's so
that level of description, you can have what are sort of inevitable consequences. They're inevitable features of the way you've sort of set things up. And that's, I think, what this sort of semantic grammar is capturing, things like that. And, you know, I think that it's a question of what does the word mean when you say, I go from, I move from here to there. Well, it's complicated to say what that means. This is this whole issue of, you know, is pure motion possible, et cetera, et cetera, et cetera. But once you have kind of got an idea of what that means, then there are inevitable consequences of that idea.
- But the very idea of meaning, it seems like there's some words where it's like there's a latent ambiguity to them. I mean, there are words like, emotionally loaded words like hate and love, right? It's like, what do they mean, exactly? So especially when you have relationships between complicated objects, we seem to take this kind of descriptive shortcut, to describe, like, object A hates object B. What's that really mean?
- Right. Well, words are defined by kind of our social use of them. I mean, it's not, you know, a word in computational language, for example, when we say we have a construct there, we expect that that construct is a building block from which we can construct an arbitrarily tall tower. So we have to have a very solid building block. And you know, it turns into a piece of code, it has documentation, it's, you know, it's a whole thing. But the word hate, you know, the documentation for that word, well, there isn't a standard documentation for that word, so to speak. It's a complicated thing defined by kind of how we use it. You know, if it wasn't for the fact that we were using language, I mean, so what is language at some level? Language is a way of packaging thoughts so that we can communicate them to another mind.
- Can these complicated words be converted into something that a computation engine can use?
- 
Right? So I think the answer to that is that what one can do in computational language is make a specific definition. And if you have a complicated word, like, let's say the word eat, okay, you'd think that's a simple word. It's, you know, animals eat things, whatever else. But you know, you do programming, you say this function eats arguments, which is sort of poetically similar to the animal eating things. But if you start to say, well, what are the implications of, you know, the function eating something, you know, can the function be poisoned? Well, maybe it can, actually, but you know, if there's a type mismatch or something in some language. But, you know, how far does that analogy go? It's just an analogy. Yeah. Whereas if you use the word eat at a computational language level, you would define there a thing which you anchor to the kind of natural language concept eat. But it is now some precise definition of that, that then you can compute things from.
- But don't you think the analogy is also precise? Software eats the world, huh? Don't you think there is something concrete in terms of meaning about analogies?
- Sure. But the thing that sort of is the first target for computational language is to take sort of the ordinary meaning of things and try and make it precise, make it sufficiently precise. You can build these towers of computation on top of it.
- Yeah.
- So it's kinda like, if you start with a piece of poetry and you say, I'm going to define my program with this piece of poetry, it's kind of like, that's a difficult thing. It's better to say, I'm gonna just have this boring piece of prose, and it's using words in the ordinary way.
- Yeah.
- And that's how I'm communicating with my computer. And that's how I'm going to build the solid building block from which I can construct this whole kind of computational tower.
- So there is some sense where, if you take a poem and reduce it to something computable, you're gonna
have very few things left.So maybe there's a bunch ofhuman interaction that's justpoetic aimless nonsense.- Well.- That's just like recreational,like hamster in a wheel.It's not actually producing anything.- Well, I, I I,I think that that's a complicatedthing because in a sense,human linguistic communicationis, there's one mind,it's producing languagethat language is havingan effect on another mind.- Yeah.- And the question ofthere's sort of a, a,a type of effect that is welldefined, let's say where,where, for example,it's very independent of thetwo minds that the, it doesn't,you know, there, there,there's communication where itcan matter a lot sort of whatthe experience of, of,of one mind is versusanother one and so on.- Yeah.But what is the purpose ofnatural language communication?- I think, I think the-- Versus, so computationcomputational languagesomehow feels more amenableto the definition of purpose.It's like, yeah,you're given two cleanrepresentations of a conceptand you can build a tower based on that.- Right.- Is natural language the samething but more fuzzy or what?- Well, I think the, the storyof natural language, right?And the, the, that's the greatinvention of our species.We don't know whether itexists in other species,but we know it exists in our species.It's the thing that allowsyou to sort of communicateabstractly from like one generationof the species to another.You can, you know,there is an abstract version of knowledgethat can be passed down.It doesn't have to be, you know, genetics.It doesn't have to be, you know,you don't have to apprenticethe next species, you know,the next generation of birdsto the previous one to showthem how something works.- Yeah.- There is this abstractedversion of knowledge that can bekind of passed down.Now that, you know, it relies on,it still tends to relybecause language is fuzzy.It does tend to rely onthe fact that, you know,if we look at the, you know,some ancient language that,where we don't have a chainof 
translations from it untilwhat we have today,we may not understandthat ancient language.And we may not understand, you know,its concepts may be differentfrom the ones that we have today.We still have to havesomething of a chain.But it is something where wecan realistically expect tocommunicate abstract ideas.And that's, you know, that'sone of the big, big roles of,of a language I think, you know, in, in,it's, you know, and that that's been this,this ability to sort ofconcrete-ify abstract things iswhat, what language has provided.- Do you see natural languageand thought as the same,the stuff that's going inside your mind?- Well, that's been along debate in philosophy.- It seems to be become more important nowwhen we think abouthow intelligent GPT is.- Whatever that means.- Whatever that means.But it seems like the stuffthat's going on in the humanmind seems something like intelligence.- Yes.- And is the language-- Well, we call it intelligence.Yeah.- We call it. Well, yes.- And so you, you start to think of, okay,what's the relationship between thought,the language of thought,the laws of thought,the laws of the words like reasoningand the laws of languageand how that has to do with computation,which seems like more rigorous,precise ways of reasoning.- Right.Which are beyond human. I mean,much of what computersdo, humans do not do.I mean, you, you might say-- Humans are a subset.- [Wolfram[ Yes.- Presumably.- Yes.- Hopefully.- Yes.The, the yes.Right. 
You know, you might say: who needs computation when we have large language models? Large language models can just... eventually you'll have a big enough neural net that can do anything. But they're really doing the kinds of things that humans quickly do. And there are plenty of formal things that humans never quickly do. For example, some people can do mental arithmetic; they can do a certain amount of math in their minds. I don't think many people can run a program of any sophistication in their minds. It's just not something people do. It's not something people have even thought of doing, because you can easily run it on a computer.
- We're an arbitrary program.
- Yeah.
- Aren't we running specialized programs?
- Yeah, yeah. But if I say to you...
- Run this program.
- Here's a Turing machine. Yeah. You know, tell me what it does after 50 steps. And you're trying to think about that in your mind. That's really hard to do. It's not what people do.
- Well, in some sense, people program. They build a computer, they program it, just to answer your question about what the system does after 50 steps. I mean, humans build computers.
- Yes, that's right. But they've created something which, when they run it, is doing something different from what's happening in their minds. They've outsourced that piece of computation from something that happens internally in their minds to a tool that's external to their minds.
- So, by the way, to you, humans didn't invent computers. They discovered them.
- They discovered computation.
- Which...
- They invented the technology of computers.
- The computer is just a kind of way to plug into this whole stream of computation. There are probably other ways. There are probably a lot of ways.
- For sure.
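The "tell me what this Turing machine does after 50 steps" task is exactly the kind of formal computation that is trivial to hand to a computer and very hard to do in your head. A minimal sketch of a simulator; the 2-state, 2-symbol rule table here is a hypothetical choice purely for illustration, not a machine named in the conversation:

```python
from collections import defaultdict

def run_turing_machine(rules, steps):
    """Run a Turing machine from a blank tape for `steps` steps.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is +1 (head goes right) or -1 (head goes left).
    Returns the written portion of the tape as a string.
    """
    tape = defaultdict(int)   # blank tape of 0s, unbounded both ways
    state, head = "A", 0
    for _ in range(steps):
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(str(tape[i]) for i in range(lo, hi + 1))

# A hypothetical 2-state, 2-symbol machine (illustrative only).
rules = {
    ("A", 0): ("B", 1, +1),
    ("A", 1): ("B", 1, -1),
    ("B", 0): ("A", 1, -1),
    ("B", 1): ("A", 0, +1),
}

print(run_turing_machine(rules, 50))
```

The point of the sketch is the asymmetry Wolfram describes: the rules fit in four lines, but predicting the tape after 50 steps without actually running them is not something people do.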
I mean, the particular way we make computers out of semiconductors and electronics and so on, that's the particular technology stack we've built. The story of a lot of what people try to do with quantum computing is finding a different underlying physical infrastructure for doing computation. Biology does lots of computation; it does it using an infrastructure that's different from semiconductors and electronics. It's a molecular-scale computational process that hopefully we'll understand more about. I have some ideas about understanding more about that. But that's another representation of computation. Things that happen in the physical universe at the level of these evolving hypergraphs and so on, that's another implementation layer for this abstract idea of computation.
- So if GPT or large language models are starting to develop, or implicitly understand, the laws of language and thought, do you think they can be made explicit?
- Yes.
- How?
- With a bunch of effort.
- I mean, so do they have...
- It's like doing natural science. What is happening in natural science? You have the world doing all these complicated things, and then you discover, for example, Newton's laws: this is how motion works; this is how we describe this particular idealization of the world in a simple, computationally reducible way. And I think it's the same thing here. There are computationally reducible aspects of what's happening, and you can get a kind of narrative theory for them, just as we've got narrative theories in physics and so on.
- Do you think it will be depressing or exciting when all the laws of human thought are made explicit?
- I think that once you understand computational reducibility, it's neither of those things. Because people will say, for example: I have free will; I operate in a way that is internal to me; they have the idea that they're figuring out what's happening themselves. But in fact, we think there are laws of physics that ultimately determine every electrical impulse in a nerve, and things like this. So you might say, isn't it depressing that we are ultimately just determined by the rules of physics, so to speak? It's the same thing here, just at a higher level. It's a shorter distance to get from semantic grammar to the way we might construct a piece of text than it is to get from individual nerve firings to how we construct a piece of text, but it's not fundamentally different. And by the way, as soon as we have this other level of description, it helps us go even further. We'll end up being able to produce more and more complicated kinds of things. Just like, if we didn't have a computer and we knew certain rules, we could write them down and go a certain distance; but once we have a computer, we can go vastly further. And this is the same kind of thing.
- You wrote a blog post titled "What Is ChatGPT Doing... and Why Does It Work?" We've been talking about this, but can we just step back and linger on this question? What is ChatGPT doing? A bunch of billions of parameters trained on a large number of words. Why does it seem to work? Is it because, to the point you made, there are laws of language that can be discovered by such a process? Is there something...
- Well, let's talk about the low level of what ChatGPT is doing. Ultimately, you give it a prompt, and it's trying to work out: what should the next word be?
- Right.
Which is wild. Isn't that surprising to you, that this kind of low-level, dumb training procedure can create something syntactically correct first, and then semantically correct second?
- You know, the thing that has been sort of the story of my life is realizing that simple rules can do much more complicated things than you imagine. Something that starts simple, and is simple to describe, can grow into a thing that is vastly more complicated than you can imagine. And honestly, I've been thinking about this for 40 years or so now, and it still surprises me. Even in our physics project, thinking about the whole universe growing from these simple rules, I still resist, because I keep on thinking: how can something really complicated arise from something that simple? It just seems wrong. But yet, for the majority of my life, I've known from things I've studied that this is the way things work. So yes, it is wild that it's possible to write a word at a time and produce a coherent essay, for example. But it's worth understanding how that's working. It's kind of like, if it was going to say "the cat sat on the..." what's the next word? Okay, so how does it figure out the next word? Well, it's seen a trillion words written on the internet, and it's seen "the cat sat on the floor," "the cat sat on the sofa," "the cat sat on the" whatever. So the minimal thing to do is just say: let's look at what we saw on the internet. We saw 10,000 examples of "the cat sat on the"; what was the most probable next word? Let's just pick that out and say that's the next word. And that's, at some level, what it's trying to do. Now, the problem is that there isn't enough text on the internet for that. If you have a reasonable length of prompt, that specific prompt will never have occurred on the internet. And as you go further, there just won't be a place where you could have trained, where you could just have worked out probabilities from what was already there. You know, if you say "two plus two," there'll be a zillion examples of two plus two equaling four, and a very small number of examples of two plus two equals five, and so on, and you can pretty much know what's going to happen. So then the question is: if you can't just work out probabilistically, from examples, what's going to happen, you have to have a model. And this idea of making models of things is an idea that, I don't know, I think Galileo was probably one of the first people to work out. I gave an example of this in the little book I wrote about ChatGPT. It's kind of like Galileo dropping cannonballs off the different floors of the Tower of Pisa. You drop a cannonball off this floor, you drop a cannonball off that floor; you miss floor five or something, for whatever reason. But you know the time it took the cannonball to fall to the ground from floors one, two, three, four, six, seven, eight, for example. Then the question is: can you make a model which figures out how long it would have taken the ball to fall to the ground from the floor you didn't explicitly measure? And the thing Galileo realized is that you can use math, you can use mathematical formulas, to make a model for how long it will take the ball to fall. So now the question is, well, okay, you want to make a model for something much more elaborate. Like, you've got this arrangement of pixels: is this arrangement of pixels an A or a B? Does it correspond to something we'd recognize as an A or a B? And you can make a similar kind of model: each pixel is like a parameter in some equation, and you could write down this giant equation where the answer is either A or B. And the question then is: what kind of model successfully reproduces the way that we humans would conclude that this is an A and this is a B? If there's a complicated extra tail on the top of the A, would we then conclude something different? What is the type of model that maps well onto the way that we humans make distinctions about things? And the big kind of meta-discovery is that neural nets are such a model. It's not obvious they would be such a model. It could be that human distinctions are not captured. You know, we could try searching around for a type of model, it could be a mathematical model, it could be some model based on something else, that captures typical human distinctions about things. It turns out that this model, which actually is very much the way we think the architecture of brains works, corresponds, perhaps not surprisingly, to the way we make these distinctions. And so the core next point is that this neural net model makes distinctions and generalizes in sort of the same way that we humans do. And that's why, when you say "the cat sat on the green" blank, even though it never saw examples of "the cat sat on the green" whatever, or "the aardvark sat on the green" whatever, and I'm sure that particular sentence does not occur on the internet, it can generalize from the actual examples that it's seen. And so the fact is that neural nets generalize in the same kind of way that we humans do. Aliens might look at our neural net generalizations and say, that's crazy: that thing where you put that extra little dot on the A, that isn't an A anymore.
That's, you know, messed the whole thing up. But we humans make distinctions which seem to correspond to the kinds of distinctions that neural nets make. So then, the thing that is just amazing to me about ChatGPT is how similar its structure is to the very original way people imagined neural nets might work, back in 1943. And there's a lot of detailed engineering, great cleverness, but it's really the same idea. And in fact, even the elaborations of that idea, where people said, let's put in some particular structure to try and make the neural net more elaborate, to be very clever about it: most of that didn't matter. I mean, there are some things that seem to matter when you train this neural net. This transformer architecture, this attention idea, really has to do with whether every one of these neurons connects to every other neuron, or whether it's somehow causally localized, so to speak. Like, we're making a sequence of words, and the words depend on previous words, rather than everything depending on everything. And that seems to be important, just organizing things so that you don't have a sort of giant mess. But the thing worth understanding is: what is ChatGPT in the end? What is a neural net in the end? A neural net, in the end, is this: each neuron is taking inputs from a bunch of other neurons, and eventually it's going to compute some numerical value. It's in a series of layers, so it looks at the neurons in the layer above it and says, what are the values of all those neurons? Then it adds those up, multiplied by weights, and then it applies some function that says: if the result is bigger than zero, make it one, otherwise make it zero, or some slightly more complicated function. You know very well how this works.
- It's a giant equation with a lot of variables. You mentioned figuring out where the ball falls when you don't have data on the fifth floor. The equation here is not as simple as that.
- It's an equation with 175 billion terms.
- And it's quite surprising that, in some sense, a simple procedure of training such an equation can lead to...
- Well, I think...
- A good representation of natural language.
- Right. The real issue is this architecture of a neural net, where what's happening is... so, neural nets always just deal with numbers. So you've turned the sentence that you started with into a bunch of numbers, say by mapping each word, or each part of a word, of the 50,000 words in English into some number. You feed all those numbers in, and those numbers just become the values of these neurons. And then what happens is it's just rippling down, going layer to layer, until it gets to the end. I think ChatGPT has about 400 layers, and it just goes once through. For every new word, it just says: here are the numbers from the words before; let's compute the probabilities it estimates for each of the possible 50,000 words that could come next. And then it decides: sometimes it will use the most probable word; sometimes it will use not the most probable word. It's an interesting fact that there's this so-called temperature parameter. At temperature zero, it's always using the word that it estimated was the most probable to come next. If you increase the temperature, it'll be more and more random in its selection of words; it'll go down to the lower and lower priority words. The thing I was just playing with recently, actually, was the transition that happens as you increase the temperature: the thing goes bonkers at a particular temperature. Maybe about 1.2 is what I was noticing from yesterday, actually. Usually it's giving reasonable answers, and then at that temperature, with some probability, it just starts spouting nonsense. And nobody knows why this happens. And by the way, the thing to understand is that it's putting down one word at a time. But the outer loop, the fact that it says: okay, I put down a word; now let's take the whole thing I wrote so far, feed that back in, and put down another word. That outer loop, which seems almost trivial, is really important to the operation of the thing. And for example, one of the things that is kind of funky is that it'll give an answer, and you say to it: is that answer correct? And it'll say no. And why is that happening?
- It's fascinating, right?
- Right. Why can't it do that the first time? Well, the answer is that it's going one word at a time, forwards. It came along with some sort of chain of thought, in a sense, and it came up with completely the wrong answer. But as soon as you feed it the whole thing it came up with, it immediately knows that that isn't right. It can immediately recognize that was a bad syllogism or something, and can see what happened, even though, as it was being led down this garden path, so to speak, it came to the wrong place.
- But it's fascinating that this kind of procedure converges to something that forms a pretty good compressed representation of language on the internet.
- Yeah.
- That's quite...
- Right. Right, right.
- I'm not sure what to make of it.
- Well, look, I think there are many things we don't understand.
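The procedure described here, score every possible next word, sample with a temperature parameter, then feed the whole text back in (the "outer loop"), can be sketched in miniature. The vocabulary and scores below are made up for illustration; a real model computes its scores by rippling numbers through a billions-of-parameters network rather than reading them from a lookup table:

```python
import math
import random

# Made-up toy vocabulary and "model" for illustration only.
VOCAB = ["floor", "sofa", "mat", "banana"]

def next_word_logits(context):
    # Stand-in for the neural net: fixed scores for what follows
    # "the cat sat on the ...". A real model would depend on context.
    return {"floor": 2.0, "sofa": 1.5, "mat": 1.8, "banana": -3.0}

def sample_next_word(context, temperature):
    logits = next_word_logits(context)
    if temperature == 0:
        # Temperature zero: always take the most probable word.
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the
    # distribution, so lower-priority words get picked more often.
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    r = random.random() * sum(weights.values())
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

def generate(prompt, n_words, temperature):
    # The outer loop: append one word, feed the whole text back in.
    text = prompt
    for _ in range(n_words):
        text = text + " " + sample_next_word(text, temperature)
    return text

print(generate("the cat sat on the", 1, temperature=0))
# At temperature 0 this always appends the highest-scoring word: "floor".
```

Raising `temperature` above zero makes `generate` start choosing "mat", "sofa", and eventually even "banana", which is the flattening-toward-nonsense effect described in the conversation.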
Okay, so for example: 175 billion weights is maybe about a trillion bytes of information, which is very comparable to the training set that was used. And it sort of stands to some kind of reason that the number of weights in the neural net would be comparable, though I can't really give you a good argument for that. In a sense, insofar as there are definite rules for what's going on, you might expect that eventually we'll have a much smaller neural net that will successfully capture what's happening. Actually, I don't think the best way to do it is probably a neural net. I think a neural net is what you do when you don't know any other way to structure the thing, and it's a very good thing to do if you don't know any other way to structure the thing. And for the last 2,000 years, we haven't known any other way to structure it. So this is a pretty good way to start. But that doesn't mean you can't find, in a sense, more symbolic rules for what's going on, where you can then get rid of much of the structure of the neural net and replace it with things which are pure steps of computation, so to speak, with neural net stuff around the edges. And that becomes just a much simpler way to do it.
- So you hope the neural net will reveal to us good symbolic rules that make the need for the neural net less and less and less.
- Right. And there will still be some stuff that's kind of fuzzy. It's this question of what we can formalize, what we can turn into computational language, and what is just, oh, it happens that way because brains are set up that way.
- What do you think are the limitations of large language models, just to make it explicit?
- Well, I think that deep computation is not what large language models do.
I mean, that's just a different kind of thing. You know, with the outer loop of a large language model, if you're trying to do many steps in a computation, the only way you get to do that right now is by spooling out the whole chain of thought as a bunch of words, basically. And you can make a Turing machine out of that if you want to; I was just doing that construction. In principle, you can make an arbitrary computation by just spooling out the words, but it's a bizarre and inefficient way to do it. What humans can do quickly, large language models will probably be able to do well. Anything that you can do off the top of your head is really good territory for large language models; and the things you do off the top of your head, you may not always get right, but it's thinking them through the same way we do.
- But I wonder if there's an automated way to do something that humans do well, much faster, to where it loops. So, generate arbitrarily large code bases of Wolfram Language, for example.
- Well, the question is: what do you want the code base to do?
- Escape control and take over the world?
- Okay. So, you know, the thing is, when people say they want to build this giant thing, a giant piece of computational language, in a sense it's sort of a failure of computational language if that's the thing you have to build. In other words, if you have a small description, that's the thing you represent in computational language, and then the computer can compute from it.
- Yes.
- So in a sense, as soon as you're giving a description, you have to somehow make that description something definite, something formal. And to say, I'm going to give this piece of natural language and it's going to spit out this giant formal structure, in a sense that doesn't really make sense, except insofar as that piece of natural language plugs into what we socially know, so to speak, plugs into our corpus of knowledge. Then it's a way of capturing a piece of that corpus of knowledge, but hopefully we will have done that in computational language. How do you make it do something that's big? Well, you have to have a way to describe what you want.
- Okay, I can make it more explicit if you want. How about this, which just popped into my head: iterate through all the members of Congress and figure out how to convince them that they have to let me, meaning the system, become president; pass all the laws that allow AI systems to take control and be the president. I don't know. So that's very explicit: figure out the individual life story of each congressman, each senator, whatever's required to really pass legislation, and figure out how to control them and manipulate them, right? Get all the information. What would be the biggest fear of this congressman, in such a way that you can take action on it in the digital space? So maybe threaten the destruction of a reputation, or something like this.
- Right. If I can describe what I want, to what extent can a large language model automate that?
- With the help of the grounding of something like Wolfram Language, yeah.
- I think it can go rather a long way.
- I'm also surprised how quickly I was able to generate...
- Yeah, yeah. Right.
- ...that attack. I swear I did not think about this before, and it's funny how quickly, which is a very concerning thing, because this idea will probably do quite a bit of damage, and there might be a very large number of other such ideas.
- Well, I'll give you a much more benign version of that idea. Okay.
You're gonna make an AI tutoring system. That's a benign version of what you're saying: I want this person to understand this point. You're essentially doing machine learning where the loss function, the thing you're trying to get to, is getting the human to understand this point, so that when you do a test on the human, yes, they correctly understand how this or that works. And I am confident that large-language-model-type technology, combined with computational language, is going to be able to do pretty well at teaching us humans things. And it's going to be an interesting phenomenon, because individualized teaching is a thing that has been kind of a goal for a long time. I think we're going to get that, and I think it has many consequences. Like, if the AI knows me, it can tell me: you're about to do this thing; here are the three things you need to know, given what you already know. Or let's say I'm looking at some paper, right? There's a version of the summary of that paper that is optimized for me, so to speak. And I think that's really going to work.
- It could understand the major gaps in your knowledge...
- Yes.
- ...that, if filled, would actually give you a deeper understanding of the topic.
- Yeah.
Right. And that's an important thing, because when you think about education and so on, it really changes what's worth doing and what's not worth doing. You know, in my life I've learned lots of different fields, and every time I think: this is the one I'm not going to be able to learn. But it turns out there are sort of meta-methods for learning these things in the end. And this idea that it becomes easier to be fed knowledge, so to speak, that if you need to know this particular thing, you can get taught it in an efficient way, is an interesting feature. And I think it makes things like the value of big towers of specialized knowledge less significant compared to the kind of meta-knowledge of understanding the big picture and being able to connect things together. There's been this huge trend of let's be more and more specialized, because we have to ascend these towers of knowledge. But by the time you can get more automation of getting to that place on the tower, without having to go through all those steps, I think it sort of changes that picture.
- Interesting. So your intuition is that, in terms of the collective intelligence of the species and the individual minds that make up that collective, there will be a trend towards being generalists and being kind of philosophers.
- That's what I think. I think that's where the humans are going to be useful. I think that a lot of the drilling, the mechanical working out of things, is much more automatable, much more AI territory, so to speak.
- No more PhDs.
- Well, that's interesting. Yes. I mean, the kind of tower of specialization, which has been a feature of how we've accumulated lots of knowledge in our species. In a sense, every time we have an automation, a building of tools, it becomes less necessary to know that whole tower, and it becomes something where you can just use a tool to get to the top of that tower. I think the thing that is ultimately... when we think about what the AIs do versus what the humans do, it's like: you tell the AIs, go achieve this particular objective, and okay, they can maybe figure out a way to achieve that objective. But if we say, what objective would you like to achieve? The AI has no intrinsic idea of that. It's not a defined thing. That's a thing which has to come from some other entity. And insofar as we are in charge, so to speak, our kind of web of society and history and so on is the thing that defines what objective we want to get to. That's a thing that we humans are necessarily involved in.
- To push back a little bit: don't you think that future versions of GPT would be able to give a good answer to "what objective would you like to achieve?"
- On what basis? I mean, look, here's the terrible thing that could happen. Okay, they're taking the average of the internet, and they're saying: from the average of the internet, what do people want to do?
- Well, there's the Elon Musk adage that the most entertaining outcome is the most likely.
- Okay. That could be... got that one from him, yeah.
- That could be one objective: maximize global entertainment. The dark version of that is drama; the good version of that is fun.
- Right. So this question of, if you say to the AI: what does the species want to achieve?
- Yes. Okay.
- There'll be an answer.
Right?
- There'll be an answer. It'll be what the average of the internet says the species wants to achieve.
- Well, I think you're using the word average very loosely there.
- I am.
- I think the answers will become more and more interesting as these language models are trained better and better.
- No, but in the end it's a reflection back of what we've already said.
- Yes. But there's a deeper wisdom to the collective intelligence, presumably, than each individual.
- Maybe.
- Isn't that what we're trying to do as a society?
- Well, this is an interesting question. Insofar as some of us work on trying to innovate and figure out new things, it's sometimes a complicated interplay between the individual doing the crazy thing, off on some spur, so to speak, versus the collective that's trying to do the high-inertia, average thing. Sometimes the collective is bubbling up things that are interesting, and sometimes it's pulling down the attempt to take an innovative direction.
- Well, don't you think the large language models would see beyond that simplification? They'd say, maybe intellectual and career diversity is really important, so you need the crazy people on the outliers, on the outskirts, right? The actual purpose of this whole thing is to explore through the dynamics we've been using as a human civilization: most of us focus on one thing, and then there are the crazy people on the outskirts doing the opposite of that one thing, and that kind of pulls the whole society together. There's mainstream science and there's crazy science, and that's just been the history of human civilization. Maybe the AI system will be able to see that. And the more impressed we are by a language model telling us this, the more
control we'll give it, and the more we'll be willing to let it run our society. And hence there's this kind of loop, where society could be manipulated into letting the AI system run it.
- Right. Well, look, one of the things that's sort of interesting is that we always think we're making progress. But in a sense, by saying let's take what already exists and use that as a model for what should exist...
- Yeah.
- Then it's interesting that, for example, many religions have taken that point of view. There is a sacred book that got written at time X, and it defines how people should act for all future time. It's a model that people have operated with. And in a sense, this is a version of that kind of statement: take the 2023 version of how the world has exposed itself, and use that to define what the world should do in the future.
- But it's an imprecise definition, right? Because just as with religious texts, the human interpretation of what GPT says will be the perturbation in the system. It'll be the noise; it'll be full of uncertainty. It's not like GPT will tell you exactly what to do. It'll tell you an approximate narrative, a "turn the other cheek" kind of narrative, right? That's not a fully instructive narrative.
- Well, until the AI controls all the systems in the world.
- Then they will be able to very precisely tell you what to do.
- Well.
They'll do this or that thing. And not only that, they'll be auto-suggesting to each person: do this next, do that next. So I think it's a slightly more prescriptive situation than one has typically seen. But this whole question of what's left for the humans, so to speak: this idea that there is an existing corpus of purpose for humans, defined by what's on the internet and so on, that's an important thing. But then, as we explore what we can think of as the computational universe, as we explore all these different possibilities for what we could do, all these different inventions we could make, the question is: which ones do we choose to follow? Those choices are the things that, if the humans want to still have kind of human progress, we get to make, so to speak. In other words, if you say, let's take what exists today and use it as the determiner of all of what there is in the future, the opportunity for humans is that there will be many possibilities thrown up, many different things that could happen. And insofar as we want to be in the loop, the thing that makes sense for us to be in the loop doing is picking which of those possibilities we want.
- But the degree to which there's a feedback loop, the idea that we're picking something, starts becoming questionable, because we're influenced by the various systems.
- Absolutely.
- Like, if that becomes more and more the source of our education and wisdom and knowledge.
- Right, the AI takeover. I mean, I've thought for a long time that it's auto-suggestion that's really the thing that makes the AIs take over. It's just that the humans then follow, you know?
Yeah.
- We'll no longer write emails to each other. We'll just send the auto-suggested email.
- Yeah, yeah. But the place where humans are potentially in the loop is when there's a choice, and when there's a choice which we could make based on our whole web of history and so on.
- Yeah.
- And insofar as it's all just determined, the humans don't have a place. And by the way, at some level it's all a complicated philosophical issue, because at some level the universe is just doing what it does. We are parts of that universe that are necessarily doing what we do, so to speak, yet we feel we have sort of agency in what we're doing. And that's its own separate interesting issue.
- And we also kind of feel like we're the final destination of what the universe was meant to create. But we could very well be, and likely are, some kind of intermediate step, obviously.
- Yeah.
- We're most certainly some intermediate step. The question is whether there's some cooler, more complex, more interesting thing that's going to be materialized.
- The computational universe is full of such things.
- But in our particular pocket specifically, whether this is the best we've got or not, that's kind of a...
- We can make all kinds of interesting things in the computational universe. When we look at them, we say, yeah, that's a thing that doesn't really connect with our current way of thinking about things. It's like in mathematics: we've got certain theorems, about three or four million that human mathematicians have written down and published. But there are an infinite number of possible mathematical theorems. We can just go out into the universe of possible theorems and pick another theorem. And then people will look at it and say, I don't know what this theorem means. It's not connected to the things that are
part of the web of history that we're dealing with. You know, one point to make about understanding AI and its relationship to us: as we have this whole infrastructure of AIs doing their thing, in a way that is perhaps not readily understandable by us humans, you might say, that's a very weird situation. How can we have built this thing that behaves in a way we can't understand, that's full of computational irreducibility, et cetera? What's it going to feel like when the world is run by AIs whose operations we can't understand? And the thing one realizes is, actually, we've seen this before. That's what happens when we exist in the natural world. The natural world is full of things that operate according to definite rules, with all kinds of computational irreducibility, and we don't always understand what the natural world is doing. And when you ask, are the AIs going to wipe us out, for example, is the machination of the AIs going to lead to this thing that eventually comes and destroys the species? Well, we can ask the same thing about the natural world: are the machinations of the natural world going to eventually lead to something that makes the earth explode, or something like this? Those are questions too. And insofar as we think we understand what's happening in the natural world, that's a result of science, natural science and so on. One of the things we can expect, when there's this giant infrastructure of the AIs, is that we'll have to invent a new kind of natural science: the natural science that explains to us how the AIs work. I mean, it's kind of like we have, I don't know, a horse or something, and we're trying to ride the horse and go from here to there. We don't really understand how the
horse works inside, but we can find certain rules and certain approaches that we take to persuade the horse to go from here to there, and take us there. And that's the same type of thing we're dealing with, with these incomprehensible, computationally irreducible AIs. But we can find these pockets of reducibility: we're, I don't know, grabbing onto the mane of the horse to be able to ride it. Or we figure out that if we do this or that, that's a successful way to get it to do what we're interested in doing.
- There does seem to be a difference between a horse and a large language model, or something that could be called AGI connected to the internet. So let me just ask you a big philosophical question about the threats of these things. There are a lot of people, like Eliezer Yudkowsky, who worry about the existential risks of AI systems. Is that something that you worry about?
You know, sometimes when you're building an incredible system, like Wolfram Alpha, you can kind of get lost in it.
- I try and think a little bit about the implications of what one's doing.
- You know, it's like the Manhattan Project kind of situation, where some of the most incredible physics and engineering is being done, but it's like, huh, where's this going to go?
- I think some of these arguments, that there'll always be a smarter AI, that eventually the AIs will get smarter than us and then all sorts of terrible things will happen: to me, some of those arguments remind me of the ontological arguments for the existence of God, and things like this. They're arguments based on some particular, fairly simple model, often of "there is always a greater this, that, and the other." And what tends to happen, in the reality of how these things develop, is that it's more complicated than you expect. The simple logical argument that says, oh, eventually there'll be a superintelligence and then it will do this, turns out not to really be the story. It turns out to be a more complicated story. So, for example, here's an example of an issue: is there an apex intelligence, just like there might be an apex predator in some ecosystem? Is there going to be an apex intelligence, the most intelligent thing that there could possibly be? Right?
I think the answer is no. And in fact, we already know this, and it's back to the whole computational irreducibility story. There's a question of, if you have a Turing machine, and you have the Turing machine that runs as long as possible before it halts, you ask: is this the apex machine, the one that does that? There will always be a machine that can go longer. And as you go out into the infinite collection of possible Turing machines, you'll never have reached the end, so to speak. It's the same question as whether there'll always be another invention. Will you always be able to invent another thing? The answer is yes; there's an infinite tower of possible inventions.
- That's one definition of apex. But the other, which I also think might be true, is: is there a species that's the apex intelligence right now on Earth? It's not trivial to say that humans are that.
- Yeah, it's not trivial. I agree.
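Wolfram's "no apex machine" point is essentially the busy beaver idea: for any halting Turing machine, there is another of some size that runs longer before halting. A minimal sketch in Python (an illustrative aside, not part of the conversation; the rule encoding and machine names are my own):

```python
def run_tm(rules, max_steps=10_000):
    """Simulate a Turing machine on a blank tape; return steps until halt.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "L" or "R" and "H" is the halting state.
    """
    tape, head, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == "H":
            return step
    return None  # did not halt within the cap

# A trivial machine: halts after a single step.
trivial = {("A", 0): (1, "R", "H")}

# The 2-state, 2-symbol busy beaver: the longest-running halting
# machine of its size (it halts after 6 steps).
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}

print(run_tm(trivial))  # 1
print(run_tm(bb2))      # 6
```

Allowing more states always admits a machine that runs longer still, and the maximum running time grows faster than any computable function, which is why there is no "apex" to reach.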
You know, I've long been curious about other intelligences, so to speak. I view intelligence as being like computation: you have a set of rules, you deduce what happens. I've tended to think now that there's this specialization of computation, a consciousness-like thing, that has to do with computational boundedness, a single thread of experience, these kinds of things: a specialization of computation that corresponds to a somewhat human-like experience of the world. Now, there may be other intelligences. You know the aphorism, "the weather has a mind of its own." It's a different kind of intelligence that can compute all kinds of things that are hard for us to compute, but it is not well aligned with the way we think about things. It doesn't think the way we think about things. And in this idea of different intelligences, every different human mind is a different intelligence that thinks about things in different ways. In terms of the formalism of our physics project, we talk about this idea of rulial space, the space of all possible rule systems, and different minds are, in a sense, at different points in rulial space. Human minds, ones that have grown up with the same kind of culture and ideas, might be pretty close in rulial space: pretty easy for them to communicate, pretty easy to translate, pretty easy to move from one place in rulial space that corresponds to one mind to another place that corresponds to another, nearby mind. When we deal with more distant things in rulial space, like pet cats, the pet cat has some aspects that are shared with us. The emotional responses of the cat are somewhat similar to
ours, but the cat is further away in rulial space than people are. So then the question is: can we make a translation from our thought processes to the thought processes of a cat, or something like this? And what will we get when we get there? I think it's the case that many animals, dogs for example, have elaborate olfactory systems. They have a smell architecture of the world, so to speak, in a way that we don't. So if you were talking to the dog, and you could communicate in a language, the dog would say, well, this is a flowing-smelling this, that, and the other thing: concepts that we just don't have any idea about. Now, what's interesting about that is, one day we will have chemical sensors that do a pretty good job. We'll have artificial noses that work pretty well, and we might have our augmented reality systems show us the same kind of map that the dog could see, similar to what happens in the dog's brain. And eventually we will have expanded in rulial space to the point where we will have those same sensory experiences that dogs have, and we will have internalized what it means to have the smell landscape, or whatever. And so then we will have kind of colonized that part of rulial space. But there are some things that animals do that we don't successfully understand. And the question of representation, of how we convert things that animals think about into things that we can think about, that's not a trivial thing. You know, I've long been curious. I had a very bizarre project at one point of trying to make an iPad game
that a cat could win against its owner.
- Right. So it feels like there's a deep philosophical goal there, though.
- Yes, yes. I mean, I was curious: if pets could work in Minecraft or something, and could construct things, what would they construct? And would what they construct be something where we look at it and say, yeah, I recognize that? Or would it be something that looks to us like something out there in the computational universe that one of my cellular automata might have produced, where we say, oh yeah, I can kind of see it operates according to some rules; I don't know why you would use those rules, I don't know why you would care.
- Yeah. Actually, just to linger on that, seriously: is there a connector in rulial space between you and a cat where the cat could legitimately win? The iPad is a very limited interface.
- Yeah, I...
- I wonder if there's a game where cats win.
- I think the problem is that cats don't tend to be that interested in what's happening on the iPad.
- Yeah. That's an interface issue.
- Yeah.
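The cellular automata Wolfram mentions are the elementary kind he studied extensively; rule 30 is his canonical example of simple rules producing patterns that "operate according to some rules" without obvious purpose. A minimal Python sketch (an illustrative aside, not part of the conversation; Wolfram's own tools would use Wolfram Language):

```python
def ca_step(cells, rule=30):
    """One step of an elementary cellular automaton on a ring of 0/1 cells.

    Each new cell is the bit of `rule` indexed by its 3-cell neighborhood,
    read left-to-right as a binary number (Wolfram's rule numbering)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 +
                      cells[i] * 2 +
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_run(width=31, steps=15, rule=30):
    """Evolve from a single black cell; return each row as a string."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if c else "." for c in cells))
        cells = ca_step(cells, rule)
    return rows

for row in ca_run():
    print(row)
```

Printed out, rule 30 grows an irregular triangular pattern from the single seed cell, the kind of output that looks rule-governed but is hard to predict without just running it.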
Right, right, right. No, I think it is likely that, I mean, there are plenty of animals that would successfully eat us if we were exposed to them. They're going to pounce faster than we can get out of the way, and so on. And probably, you know, we think we've hidden ourselves, but we haven't successfully hidden ourselves.
- That's physical strength. I wonder if there's something more in the realm of intelligence where an animal like a cat could win out.
- Well, I think there are things, certainly in terms of the speed of processing certain kinds of things, for sure. I mean, is there a game of chess, for example, is there cat chess, that the cats could play against each other, and if we tried to play a cat, we'd always lose? I don't know.
- It might have to do with speed, but it might have to do with concepts also. It might be concepts in the cat's head.
- I tend to think that our species, from its invention of language, has managed to build up this tower of abstraction that, for things like a chess-like game, will make us win. In other words, through the fact that we've experienced language and learned abstraction, we've become smarter at those kinds of abstract things. Now, that doesn't make us smarter at catching a mouse. It makes us smarter at the things that we've chosen to concern ourselves with, which are these kinds of abstract things. And again, this is back to the question of what one cares about. If one's the cat, if we can translate things to have the discussion with a cat, the cat will say, I'm very excited that this light is moving. And we'll say, why do you care? And the cat will say, that's the most important thing in the
world. That this thing moves around. I mean, it's like when you look at archaeological remains and say, these people had this belief system about this, and that was the most important thing in the world to them. And now we look at it and say, we don't know what the point of it was. I've been curious: there are these handprints on caves from 20,000 or more years ago, and nobody knows what these handprints were there for. They may have been a representation of the most important thing you can imagine. They may just have been some kid who rubbed their hands in the mud and stuck them on the walls of the cave. We don't know. But this whole question, when you ask what's the smartest thing around, there's the question of what kind of computation you're trying to do. If you have some well-defined computation, how do you implement it? Well, you could implement it with nerve cells firing. You can implement it with silicon and electronics. You can implement it with some kind of molecular computation process, in the human immune system or in some molecular biology kind of thing. There are different ways to implement it, and those different implementation methods will have different speeds; they'll be able to do different things. So an interesting question would be: what kinds of abstractions are most natural in these different kinds of systems? For a cat, for example: in the visual scene that we see, we pick out certain objects, we recognize certain things. A cat might, in principle, recognize different things. But biological evolution is very slow, and I suspect what a cat notices is very
similar. And we even know that from some neurophysiology: what a cat notices is very similar to what we notice. Of course, one obvious difference is that cats have only two kinds of color receptors, so they don't see in the same kind of color that we do. Now, we say we're better: we have three color receptors, red, green, blue. But we're not the overall winner. I think the mantis shrimp is the overall winner, with 15 color receptors, I think. So it can make distinctions that we can't; the mantis shrimp's view of reality, at least in terms of color, is much richer than ours. But what's interesting is, how do we get there? So imagine we have this augmented reality system that is seeing into the infrared, into the ultraviolet, things like this, and it's translating that into something connectable to our brains, either through our eyes or more directly into our brains. Then eventually our web of the types of things we understand will extend to those kinds of constructs, just as it has extended before. I mean, there are plenty of things in the modern world that we see and understand because we made them with technology; if we'd never seen that kind of thing, we wouldn't have a way to describe it or understand it.
- All right. So that actually stemmed from our conversation about whether AI is going to kill all of us. And we've discussed this kind of spreading of intelligence through rulial space: that in practice, things just seem to get more complicated. Things are more complicated than the story of, well, if you build a thing that's plus-one intelligence, that thing will be able to build the thing that's plus-two intelligence, and plus-three intelligence, and that will be exponential. It'll become more intelligent exponentially faster, and so on, until it completely destroys everything. But you know, that intuition might
still not be so simple, but might still carry validity. And there are two interesting trajectories here. One: a superintelligent system remains in rulial proximity to humans, where we're like, holy crap, this thing is really intelligent, let's select the present. And then there could be a perhaps more terrifying intelligence that starts moving away. They might be around us now, moving far away in rulial space, but they're still sharing physical resources with us, right?
- Yes. Yes.
- And so they can rob us of those physical resources and destroy humans just kind of casually.
- Yeah.
- Just...
- Like nature could.
- Like nature could. But it seems like there's something unique about AI systems, where there's this kind of exponential growth. Well, sorry, nature has so many things in it. One of the things nature has, which is very interesting, is viruses, for example. There are systems within nature that have this kind of exponential effect, and that terrifies us humans. Because again, there's only 8 billion of us, and it's not that hard to just kind of whack them all real quick. So I mean, is that something you think about?
- Yeah, I've thought about that. Yes.
- The threat of it.
I mean, are you as concerned about it as somebody like Eliezer Yudkowsky, for example? Just big, painful negative effects of AI on society?
- You know, no. But perhaps that's because I'm intrinsically an optimist.
- Yeah.
- I mean, the thing one hears is that there's going to be this one thing and it's going to just zap everything.
- Yeah.
- Somehow, maybe I have faith in computational irreducibility, so to speak: there are always unintended little corners. It's just like, somebody has some bioweapon, and they say, we're going to release this and it's going to do all this harm. But then it turns out it's more complicated than that, because some humans are different, and the exact way it works is a little different than you expect. The great big "smash the thing with something," the asteroid collides with the earth...
- Yeah.
- And yes, the earth is cold for two years or something, and lots of things die, but not everything dies. This is, in a sense, the story of computational irreducibility: there are always unexpected corners, there are always unexpected consequences. And I don't think the "whacked over the head with something and then it's all gone" scenario... well, that can obviously happen. The earth can be swallowed up in a black hole or something, and then it's presumably all over. But what do I think the realistic paths are? I think people have to get used to phenomena like computational irreducibility. There's an idea that we built the machines, so we can understand what they do, and we're going to be able to control what
happens. Well, that's not really right. Now the question is: is the result of that lack of control going to be that the machines conspire and sort of wipe us out? Maybe just because I'm an optimist, I don't tend to think that's in the cards. As a realistic thing, I suspect what will emerge maybe is kind of an ecosystem of the AIs. Again, I don't really know; it's hard to be clear about what will happen. I mean, there are a lot of details of what we could do, what systems in the world we could connect an AI to. I have to say, just a couple of days ago I was working on this ChatGPT plugin kit that we have for Wolfram Language, where you can create a plugin, and it runs Wolfram Language code, and it can run Wolfram Language code back on your own computer. And I was thinking, well, I can just tell ChatGPT: create a piece of code, and then just run it on my computer. And that sort of personalizes for me the "what could possibly go wrong," so to speak.
- Was that exciting or scary, that possibility?
- It was a little bit scary, actually. Because I realize I'm delegating to the AI: just write a piece of code, you are in charge, write a piece of code, run it on my computer. And pretty soon all my files could be deleted.
- That's like Russian roulette, but a much more complicated version of that.
- Yes, yes. Right.
- That's a good drinking game. I don't know.
- Right. I mean, that's an interesting question then, if you do that, right? Yeah.
What is the sandboxing that you should have? And that's a version of that question for the world. That is, as soon as you put the AIs in charge of things, how many constraints should there be on these systems before you put the AIs in charge of all the weapons, and all these different kinds of systems?
- Well, here's the fun part about sandboxes: the AI knows about them, and has the tools to crack them.
- Look, the fundamental problem of computer security is computational irreducibility. Because the fact is, any sandbox is never going to be a perfect sandbox if you want the system to be able to do interesting things. This is the generic problem of computer security: as soon as your firewall is sophisticated enough to be a universal computer, that means it can do anything. And if you find a way to poke it so that you actually get it to do that universal computation thing, that's the way you crawl around and get it to do the thing it wasn't intended to do. That's another version of computational irreducibility: you can get it to do the thing you didn't expect it to do, so to speak.
- There are so many interesting possibilities that manifest themselves from computational irreducibility here. So many things can happen, because in digital space things move so quickly. You can have a chatbot, you can have a piece of code: you could basically have ChatGPT generate viruses, accidentally or on purpose, digital viruses.
- Yes.
- And they could be brain viruses too. They could convince, kind of like phishing emails...
- Yes.
- They can convince you of stuff.
- Yes.
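The sandboxing question Wolfram raises can be made concrete with a minimal sketch. This is an illustrative aside (not from the conversation), and deliberately naive: running untrusted code in a separate interpreter with a timeout limits runaway execution, but it is not a real security boundary; genuine sandboxing needs OS-level isolation such as containers, seccomp filters, or VMs, which is exactly Wolfram's point about imperfect sandboxes.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code, timeout=2):
    """Run a code string in a separate Python interpreter with a time limit.

    Illustrative only: a subprocess with a timeout stops infinite loops,
    but does nothing to restrict file or network access."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # -I runs the interpreter in isolated mode (no user site-packages).
        result = subprocess.run([sys.executable, "-I", path],
                                capture_output=True, text=True,
                                timeout=timeout)
        return result.stdout
    except subprocess.TimeoutExpired:
        return "<killed: exceeded time limit>"
    finally:
        os.remove(path)

print(run_untrusted("print(2 + 2)"))      # harmless code runs normally
print(run_untrusted("while True: pass"))  # runaway loop gets killed
```

Note what the sketch does not catch: the "untrusted" code could still read or delete files, which is precisely the "all my files could be deleted" scenario above.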
And no doubt, the machine-learning loop of making things that convince people of things is surely going to get easier to do.
- Yeah.
- And then what does that look like? Well, again, we humans are in a new environment, and admittedly it's an environment which, a little bit scarily, is changing much more rapidly than... I mean, people worry about climate change happening over hundreds of years, and the environment is changing, but the digital environment might change in six months.
- So one of the relevant concerns here, in terms of the impact of GPT on society, is the nature of truth, and that's relevant to Wolfram Alpha. Because of the computation through symbolic reasoning that's embodied in Wolfram Alpha as the interface, there's a kind of sense that what Wolfram Alpha tells me is true.
- So we hope.
- Yeah, I mean, you could probably analyze that. You can't prove that it's always going to be true, computational irreducibility, but it's going to be more true than not.
- Look, the fact is it will be the correct consequence of the rules you've specified. And insofar as it talks about the real world, it's our job, in curating and collecting data, to make sure that data is, quote, as true as possible. Now, what does that mean?
Well, it's always an interesting question. I mean, for us, our operational definition of truth is: somebody says, who's the best actress? Who knows? But somebody won the Oscar, and that's a definite fact. So that's the kind of thing that we can make computational as a piece of truth.
- Yeah.
- If you ask these things: a sensor measured this thing, it did it this way; this particular machine learning system recognized this thing; that's a sort of definite fact, so to speak. And there is a good network of those things in the world. It's certainly the case that when you say, is so-and-so a good person, that's hopeless. We might have a computational language definition of good; I don't think it would be very interesting, because that's a very messy kind of concept, not really amenable to computation. I think as far as we will get with those kinds of things is "I want X": there's a kind of meaningful calculus of "I want X," and that has various consequences. I mean, I haven't thought this through properly, but I think a concept like "is so-and-so a good person, is that true or not?", that's a mess.
- That's a mess that's amenable to computation, I think. I think it's a mess when humans try to define what's good through legislation. But when humans try to define what's good through literature, through history books, through poetry, it starts being, well...
- I don't know.
I mean, that particular thing, it's kind of like we're going into the ethics of what counts as good, so to speak, and what we think is right and so on. And one feature of that is we don't all agree about it. There are no theorems about it, no theoretical framework that says this is the way that ethics has to be.
- Well, first of all, there's stuff we kind of agree on, and there's some empirical backing for what works and what doesn't, even just from the morals and ethics within religious texts. So we seem to mostly agree that murder is bad; there are certain universals that seem to emerge.
- I wonder whether the murder of an AI is bad.
- Well, I tend to think yes, and I think we're going to have to contend with that question. Oh, and I wonder what an AI would say.
- Yeah. Well, I think one of the things with AI is, it's one thing to wipe out an AI that has no owner. You can easily imagine an AI hanging out on the internet without having any particular owner or anything like that. And then you say, well, what harm does it do? It's okay to get rid of that AI. But if the AI has 10,000 friends who are humans, and all those 10,000 humans will be incredibly upset that this AI just got exterminated, it becomes a slightly different, more entangled story. But yeah, this question about what humans agree about: there are certain things that human laws have tended to consistently agree about. There have been times in history when people have gone away from certain kinds of laws, even ones where we would now say, how could you possibly not have done it that way? That just doesn't seem right at all. But I don't think one can say much beyond saying: if you have a set of rules that will
cause the species to go extinct, you could say that's probably not a winning set of laws, because even to have a thing on which you can operate laws requires that the species not be extinct.
- But between "what's the distance between Chicago and New York," which Wolfram Alpha can answer, and the question of whether this person is good or not, there seems to be a lot of gray area. And that starts becoming really interesting. I think you, since the creation of Wolfram Alpha, have been a kind of arbiter of truth at a large scale. So this system generates more truth than...
- We try to make sure that the things are true. I mean, look, as a practical matter, people write computational contracts; it's kind of like, if this happens in the world, then do this. And this hasn't developed as quickly as it might have done. This has been a sort of blockchain story in part, although blockchain's not really necessary for the idea of computational contracts. But you can imagine that eventually a large part of what's in the world is these giant chains and networks of computational contracts, and then something happens in the world, and there's this whole giant domino effect of contracts firing autonomously that cause other things to happen. And for us, we've been the main sort of source, the oracle of, quotes, facts or truth, for things like blockchain computational contracts and such. And there's a question there: I consider it a responsibility to actually get the stuff right. And one of the things that is tricky sometimes is: when is it true? When is it a fact? When is it not a fact?
- Yes.
- I think the best we can do is to say: we have a procedure, we follow the procedure, we might get it wrong, but at least we won't be corrupt about getting it wrong, so to speak.
- That's beautifully put, and have a transparency about the procedure. The problem starts to emerge when the things that you convert into
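The domino effect Wolfram describes, contracts firing autonomously and triggering other contracts, can be sketched as a tiny condition-action rule engine. This is a toy illustration only: the `Contract` class and the event names are invented for the sketch, and a real computational contract system would need exactly the trusted oracle of facts he describes.

```python
# Toy computational contracts: each contract watches for a fact and,
# when that fact holds, asserts new facts -- which may in turn
# trigger other contracts, producing the "domino effect."
class Contract:
    def __init__(self, trigger, consequences):
        self.trigger = trigger            # fact that fires this contract
        self.consequences = consequences  # facts asserted when it fires

def run_contracts(contracts, initial_facts):
    facts = set(initial_facts)
    fired = True
    while fired:  # keep cascading until no contract adds anything new
        fired = False
        for c in contracts:
            if c.trigger in facts and not set(c.consequences) <= facts:
                facts |= set(c.consequences)
                fired = True
    return facts

# A three-link domino chain of hypothetical contracts:
chain = [
    Contract("shipment_arrived", ["payment_released"]),
    Contract("payment_released", ["insurance_lapsed"]),
    Contract("insurance_lapsed", ["renewal_offer_sent"]),
]
print(sorted(run_contracts(chain, ["shipment_arrived"])))
# ['insurance_lapsed', 'payment_released', 'renewal_offer_sent', 'shipment_arrived']
```

One event at the head of the chain ends up asserting every downstream fact, which is why the reliability of the initial "oracle" facts matters so much.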
computational language start to expand, for example, into the realm of politics. So this is where it's almost like this nice dance of Wolfram Alpha and ChatGPT. Like you said, ChatGPT is shallow and broad, so it's going to give you an opinion on everything.
- But it writes fiction as well as fact, which is exactly how it's built. I mean, that's exactly it: it is making language, and even in code it writes fiction. It's kind of fun to see sometimes; it'll write fictional Wolfram Language code.
- It kind of looks right.
- Yeah, it looks right, but it's actually not pragmatically correct. But yes, it has a view of roughly how the world works, at the same level as books of fiction talk about roughly how the world works. They just don't happen to be the way the world actually worked, or whatever. But yes, I agree. We are attempting, with our whole Wolfram Language computational language thing, to represent, well, it doesn't necessarily have to be how the actual world works, because we can invent a set of rules that aren't the way the actual world works and run those rules. But then we are saying we are going to accurately represent the results of running those rules, which might or might not be the actual rules of the world. But we also are trying to capture features of the world as accurately as possible, to represent what happens in the world. Now again, as we've discussed, the atoms in the world are arranged, and you say, I don't know, was there a tank that showed up, that drove somewhere? Okay, well, what is a tank? It's an arrangement of atoms that we abstractly describe as a tank. And you could say, well, there's some arrangement of atoms that is a different arrangement of atoms, and we didn't decide. It's like this observer theory question of, you
know, what arrangement of atoms counts as a tank versus not a tank.
- So even things that we would consider strong facts, you could start to disassemble them and show that they're not...
- Absolutely. So the question of whether, oh, I don't know, this gust of wind was strong enough to blow over this particular thing. Well, a gust of wind is a complicated concept. It's full of little pieces of fluid dynamics, and little vortices here and there. And you have to define what the aspect of the gust of wind that you care about might be: maybe it put this amount of pressure on this blade of some wind turbine or something. So if you have something which is the fact that the gust of wind was this strong or whatever, you have to have some definition of that. You have to have some measuring device that says: according to my measuring device, that was constructed this way, the gust of wind was this.
- So what can you say about the nature of truth that's useful for us to understand ChatGPT? Because you've been contending with this idea of what is fact and what is not, and it seems like ChatGPT is used a lot now; I've seen it used by journalists to write articles. And so you have people that are working with large language models trying desperately to figure out how to essentially censor them, through different mechanisms, either manually or through reinforcement learning with human feedback, trying to align them to not say fiction, to say non-fiction as much as possible.
- Well, this is the importance of computational language as an intermediate. It's kind of like: you've got the large language model, and it's able to surface something which is a formal, precise thing that you can then look at, and you can run tests on it, and you can do all kinds of things. It's always going to work the same way, and it's precisely defined what it does. And then the large
language model is the interface. I mean, the way I view these large language models: there are many use cases, and it's a remarkable thing; literally every day we're coming up with a couple of new use cases, some of which are very, very surprising. But the best use cases are ones where even if it gets it roughly right, it's still a huge win. Like a use case we had from a week or two ago: read our bug reports. We've got hundreds of thousands of bug reports that we've accumulated over decades. And it's like, can we have it just read the bug report, figure out where the bug is likely to be, and hone in on that piece of code? Maybe it'll even suggest some way to fix the code. It might be nonsense, what it says about how to fix the code, but it's incredibly useful that it was able to hone in.
- Yeah, so awesome. It's so awesome, because even the nonsense will somehow be instructive. I don't quite understand that yet. There are so many programming-related things. Like, for example, translating from one programming language to another is really, really interesting. It's extremely effective, but then the failures reveal the path forward also.
- Yeah. But I think the big thing in that kind of discussion, the unique thing about our computational language, is it was intended to be read by humans.
- Yes. That's really important.
- Right?
And so it has this thing where, thinking about ChatGPT and its use and so on, one of the big things about it, I think, is that it's a linguistic user interface. So a typical use case might be, take the journalist case, for example: let's say I have five facts that I'm trying to turn into an article, or I'm trying to write a report where I have basically five facts that I'm trying to include. I feed those five facts to ChatGPT, it puffs them out into this big report, and that's a good interface for another person. If I just gave those five bullet points, in my terms, to some other person, the person would say, I don't know what you're talking about, because this is your version of quick notes about these five bullet points. But if you puff it out into this thing which connects to the collective understanding of language, then somebody else can look at it and say, okay, I understand what you're talking about. Now, you can also have a situation where that thing that was puffed out is fed to another large language model. It's kind of like: you are applying for the permit to, I don't know, grow fish in some place or something like this. And you have these facts that you're putting in: I'm going to have this kind of water, and so on. You've just got a few bullet points; it puffs them out into this big application, you fill it out. Then at the other end, the Fisheries Bureau has another large language model that just crushes it down, because the Fisheries Bureau cares about these three points, and it knows what it cares about. So really, the natural language produced by the large language model is sort of a transport layer; it's really LLM communicates with LLM. I mean, it's kind of like, I
write a piece of email using my LLM, puff it out from the things I want to say, and your LLM turns it into "the conclusion is X." Now, the issue is that the thing is going to make something that is semantically plausible, but it might not actually relate to the world in the way that you think it should relate to the world. Now, I've seen this. Okay, I'll give you a couple of examples. I was doing this thing when we announced this plugin for ChatGPT. I had this lovely example of a math word problem, some complicated thing. And it did a spectacular job of taking apart this elaborate thing about, you know, this person has twice as many chickens as this, et cetera, et cetera. And it turned it into a bunch of equations, it fed them to Wolfram Language, we solved the equations, everybody did great, we gave back the results. And I thought, okay, I'm going to put this in this blog post I'm writing. Then I thought I'd better just check. And it turns out it got everything, all the hard stuff, right. And at the very end, the last two lines, it just completely goofed it up and gave the wrong answer. And I would not have noticed. The same thing happened to me two days ago. With this ChatGPT plugin kit, I made a thing that would emit a sound, would play a tune on my local computer. So ChatGPT would produce a series of notes, and it would play this tune on my computer. Very cool. So I thought, I'm going to ask it: play the tune that HAL sang when HAL was being disconnected in 2001. Okay, so there it is.
- Daisy. Was it Daisy?
- Yes, Daisy, yes. Right. So it produces a bunch of notes, and I'm like, this is spectacular, this is amazing. And then I thought, I was just going to put it in, and then I thought I'd better actually play this. And so I did.
And it was "Mary Had a Little Lamb."
- Oh wow. Oh wow. But it was "Mary Had a Little Lamb."
- Yeah.
- Wow. So it was correct but wrong. You could easily be mistaken.
- Yes, right. And in fact, I had this quote from HAL to explain it; as HAL states in the movie, the HAL 9000, the thing was just a rhetorical device. Because I'm realizing, oh my gosh, this ChatGPT could have easily fooled me. I mean, it did this amazing thing of knowing this thing about the movie and being able to turn that into the notes of the song, except it's the wrong song. And HAL in the movie says something like, "No 9000 series computer has ever been found to make an error. We are, for all practical purposes, perfect and incapable of error." And I thought that was kind of a charming quote from HAL to make in connection with what ChatGPT had done in that case.
- The interesting thing about the LLMs, like you said, is that they are very willing to admit the error.
- Well, yes. I mean, that's a question of the RLHF, the reinforcement learning from human feedback thing.
- Oh, right.
- That's amazing. The really remarkable thing about ChatGPT is: I had been following what was happening with large language models, and I'd played with them a whole bunch, and they were kind of like, eh, it's what you would expect based on sort of statistical continuation of language. It's interesting, but it's not breakout exciting. And then the human feedback reinforcement learning, in making ChatGPT try and do the things that humans really wanted it to do, that broke through. That reached this threshold where the thing really is interesting to us humans. And by the way, it's interesting to see how, you
know, you change the temperature or something like that, and the thing goes bonkers; it's no longer interesting to humans, it's producing garbage. Somehow it managed to get above this threshold where it really is well aligned to what we humans are interested in. And I think nobody saw that coming. Certainly nobody I've talked to, and nobody who was involved in that project, seems to have known that was coming. It's just one of these things that is a sort of remarkable threshold. I mean, when we built Wolfram Alpha, for example, I didn't know it was going to work. We tried to build something that would have enough knowledge of the world that it could answer a reasonable set of questions, and that could do good enough natural language understanding that typical things you type in would work. We didn't know where that threshold was. I mean, I was not sure that it was the right decade to try and build this, even the right 50 years to try and build it. And I think it's the same type of thing with ChatGPT: I don't think anybody could have predicted that 2022 would be the year that this became possible.
- I think, yeah, you tell a story about Marvin Minsky, and showing it to him, and saying, no, like, no, no, this time it actually works.
- Yes.
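The temperature knob Wolfram mentions has a standard definition in language model sampling: logits are divided by the temperature before the softmax, so low temperatures concentrate probability on the top token and high temperatures flatten the distribution toward the "garbage" he describes. A minimal sketch with toy logits (not any particular model's values):

```python
import math
import random

def sample_token(logits, temperature):
    """Sample one token id using temperature-scaled softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [4.0, 2.0, 1.0, 0.5]  # toy scores for a 4-token vocabulary

# Near-zero temperature: almost always the argmax token (id 0).
cold = [sample_token(logits, 0.05) for _ in range(200)]
# Very high temperature: choices spread toward uniform randomness.
hot = [sample_token(logits, 50.0) for _ in range(200)]
print(cold.count(0) / len(cold))  # close to 1.0
print(len(set(hot)))              # usually all 4 token ids appear
```

The "bonkers" regime is just the high-temperature end of this dial: the model stops preferring the continuations it was trained to rank highly.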
And I mean, it's the same thing for me, looking at these large language models. In the first few weeks of ChatGPT, when people first mention it, it's like, oh yeah, I've seen these large language models. And then I actually try it, and, oh my gosh, it actually works. And the thing I found, I remember one of the first things I tried was: write a persuasive essay that a wolf is the bluest kind of animal. Okay. So it writes this thing, and it starts talking about these wolves that live on the Tibetan plateau, and names some Latin name, and so on. And I'm like, really? And I'm starting to look it up on the web, and it's actually complete nonsense, but it's extremely plausible. I mean, it's plausible enough that I was going and looking it up on the web, wondering if there was a wolf that was blue. I mentioned this on some livestreams I've done, and so people have been sending me these pictures.
- Blue wolves?
- Blue wolves!
- Maybe it's onto something. Can you give your wise sage advice about what humans who have never interacted with AI systems, not even with Wolfram Alpha, should do now that they're interacting with ChatGPT? Because it's accessible to a certain demographic that may have not touched AI systems before. What do we do with truth, like journalists, for example?
How do we think about the output of these systems?
- I think the idea that you're going to get factual output is not a very good idea. It is a linguistic interface. It is producing language, and language can be truthful or not truthful; that's a different slice of what's going on. You can say, go check this with your fact source, and you can do that to some extent, but then it's going to not check something. That is, again, a question of: does it check in the right place? We see that with, does it call the Wolfram plugin in the right place? Often it does; sometimes it doesn't. I think the real thing to understand about what's happening, which I think is very exciting, is the great democratization of access to computation. There's been a long period of time when computation, and the ability to figure out things with computers, has been something that only the druids, at some level, can achieve. I myself have been involved in trying to democratize access to computation. Back before Mathematica existed, in 1988, if you were a physicist or something like that and you wanted to do a computation, you would find a programmer and delegate the computation to that programmer. Hopefully they'd come back with something useful; maybe they wouldn't; there'd be this long, multi-week loop that you'd go through. And then it was very interesting to see, from 1988, first people like physicists and mathematicians, then lots of other people, make this very rapid transition of realizing they themselves could actually type with their own fingers and make some piece of code that would do a computation that they cared
about. And it's been exciting to see lots of discoveries and so on made by using that tool. And we see the same thing now. Wolfram Alpha is not as deep computation as you can achieve with the whole Wolfram Language mathematical stack. But the thing that's particularly exciting to me about the large language model linguistic interface mechanism is that it dramatically broadens the access to deep computation. One of the things I've thought about recently is: what's going to happen to all these programmers? What's going to happen to all these people for whom a lot of what they do is write slabs of boilerplate code? In a sense, I've been saying for 40 years that's not a very good idea. You can automate a lot of that stuff with a high-enough-level language; that slab of code, if it's designed in the right way, turns into this one function we just implemented that you can just use. So the fact that there's all this activity of doing lower-level programming is something that, for me, seemed like not the right thing to do, and lots of people have used our technology and not had to do it. But when you look at computer science departments that have turned into places where people are learning the trade of programming, so to speak, it's a question of what's going to happen. And I think there are two dynamics. One is that kind of boilerplate programming is going to go the way that assembly language went back in the day: something that's really mostly specified at a higher level.
You start with natural language, you turn it into a computational language; you look at the computational language, you run tests, and you understand that's what's supposed to happen. If we do a great job with compilation of the computational language, it might turn into LLVM or something like this, or it just directly gets run through the algorithms we have, and so on. So that's a tearing down of this big structure that's been built of teaching people programming. But on the other hand, the other dynamic is that vastly more people are going to care about computation. So all those departments of art history or something, that really didn't use computation before, now have the possibility of accessing it by virtue of this linguistic interface mechanism.
- And if you create an interface that allows you to interpret, debug, and interact with a computational language, then that makes it even more accessible.
- Yeah.
Well, I mean, the thing is that right now the average art history student probably isn't going to do that; they don't think they know about programming and things like this. But by the time it really becomes purely, you just walk up to it, there's no documentation, you start just typing, "compare these pictures with these pictures and see the use of this color," whatever, and you generate this piece of computational language code that gets run. You see the result. You say, oh, that looks roughly right. Or you say, that's crazy. And maybe then you eventually get to say, well, I'd better actually try and understand what this computational language code did, and that becomes a thing that you learn. It's an interesting thing, because unlike with mathematics, where you kind of have to learn it before you can use it, this is a case where you can use it before you have to learn it.
- Well, I've got a sad possibility here, or maybe an exciting possibility, that very quickly people won't even look at the computational language. They'll trust that it's generated correctly, as you get better and better at generating that language.
- Yes. I think there will be enough cases where people see, because you can make it generate tests too.
- Yes.
- And we're doing that. It's a pretty cool thing, actually. You say, this is the code, and here are a bunch of examples of running the code. Okay.
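The "generate tests too" workflow Wolfram describes, where generated code is only trusted once the generated examples check out, can be sketched as a small loop around a model call. Everything here is hypothetical: `llm_generate` is a canned stand-in for a real model API, not a real Wolfram or OpenAI function, and it returns a fixed reply so the sketch is runnable.

```python
# Sketch of "generate code, generate tests, run the tests before trusting."
def llm_generate(prompt):
    # Stand-in for a real model call; returns a canned reply to a request
    # like "write a function median(xs) and some example test cases."
    return {
        "code": (
            "def median(xs):\n"
            "    s = sorted(xs)\n"
            "    n = len(s)\n"
            "    mid = n // 2\n"
            "    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2\n"
        ),
        "tests": [("median([3, 1, 2])", 2), ("median([1, 2, 3, 4])", 2.5)],
    }

def check_generated(reply):
    """Run the model's own example tests against the model's own code."""
    namespace = {}
    exec(reply["code"], namespace)
    failures = [(expr, expected)
                for expr, expected in reply["tests"]
                if eval(expr, namespace) != expected]
    return failures  # empty list: every example checked out

reply = llm_generate("write median and tests")
print(check_generated(reply))  # []
```

This is the shift Wolfram is pointing at: a reader who never inspects the code can still look at the examples and say "that example is wrong," which is where the next exchange picks up.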
People will at least look at those, and they'll say, that example is wrong, and then it'll kind of wind back from there. And I agree about the intermediate level of people reading the computational language code: in some cases people will do that; in other cases people will just look at the tests, or even just look at the results. And sometimes it'll be obvious that you got the thing you wanted to get, because you were just describing: make me this interface that has two sliders here, and you can see it has those two sliders there; that's the result you want. But I think one of the questions, then, in that setting where you have this broad ability of people to access computation, is: what should people learn? In other words, right now you go to computer science school, so to speak, and a large part of what people end up learning, it's been a funny historical development, because 30, 40 years ago computer science departments were quite small, and they taught things like finite automata theory and compiler theory and things like this. A company like mine rarely hired people who'd come out of those programs, because the stuff they knew, I think it's very interesting, I love that theoretical stuff, but it wasn't that useful for the things we actually had to build in software engineering. And then there was this big pivot, in the nineties I guess, where there was a big demand for IT-type programming and software engineering, and then big demand from students: we want to learn this stuff. And I think the thing that was really happening, in part, was that lots of different fields of human endeavor were becoming computational. For all X, there was a computational X, and that was a thing that people were responding
to. But then this idea emerged that to get to that point, the main thing you had to do was to learn this trade, or skill, of programming-language-type programming. And it's a strange thing, actually, because I remember back when I used to be in the professoring business, which is now 35 years ago, so gosh, time flies, it was right when computer science departments were just starting to emerge at fancy research universities and so on. Some already had them, but the others were just starting to, and it was a thing where they were kind of wondering: this thing that is essentially a trade-like skill, are we going to somehow attach it to the rest of what we're doing? And a lot of these knowledge-work-type activities have always seemed like things where the humans have to go to school and learn all this stuff, and that's never going to be automated.
- Yeah.
- And it's kind of shocking that, rather quickly, a lot of that stuff is clearly automatable. But the question then is, okay, if it isn't worth learning how to do car mechanics, if you only need to know how to drive the car, so to speak, what do you need to learn? In other words, if you don't need to know the mechanics of how to tell the computer in detail, make this loop, set this variable, set up this array, whatever else, if you don't have to learn the under-the-hood things, what do you have to learn? I think the answer is: you need to have an idea of where you want to drive the car. In other words, you need to have some picture of the architecture of what is computationally possible.
- Well
there's also this kind of artistic element of conversation, because you ultimately use natural language to control the car. So it's not just where you want to go.
- Well, yeah, it's interesting. It's a question of who's going to be a great prompt engineer.
- Yeah.
- Okay, so my current theory this week: good expository writers are good prompt engineers.
- What's an expository writer?
- Somebody who can explain stuff well.
- But which department does that come from, in the university?
- I have no idea. I think they killed off all the expository writing departments.
- Well, there you go. Strong words from Stephen Wolfram.
- Well, I don't know, I'm not sure if that's right. I actually am curious, because in fact I just initiated this kind of study of what's happened to different fields at universities. Like, there used to be geography departments at all universities, and then they disappeared, actually right before GIS became common, I think. Linguistics departments came and went at many universities. And it's kind of interesting, because these are things that people thought were worth learning at one time, and then they kind of die off. And I do think it's interesting that, for me, writing prompts, for example: I think I'm an okay expository writer, and I realize that when I'm sloppy writing a prompt, and I don't really think, because I'm thinking I'm just talking to an AI, I don't need to try and be clear in explaining things, that's when it gets totally confused.
- I mean, in some sense you have been writing prompts for a long time with Wolfram Alpha, thinking about this kind of stuff: how do you convert natural language into computation?
- Well, right, but the one thing that I'm wondering about is: it is remarkable the extent to which you can address an LLM like you can address a human, so to speak. And I think that is because it, you
know, it learnt from all of us humans. The reason it responds to the ways we would explain things to humans is that it is a representation of how humans talk about things. But it is bizarre to me that some of the expository mechanisms I've learned in trying to write clear expositions in English, just for humans, seem to also be useful for the LLM.
- But on top of that, what's useful is the kind of mechanisms that maybe a psychotherapist employs, which is an almost manipulative, or game-theoretical, interaction. Or how you would deal with a friend, like a thought experiment: if this was the last day you were to live, or, if I ask you this question and you answer wrong, I will kill you. Those kinds of prompts seem to also help.
- Yes.
- In interesting ways.
- Yes.
- So it makes you wonder. A good therapist, probably... we create layers in our human mind between the outside world and what is true, what is true to us, maybe about trauma and all those kinds of things. So projecting that onto an LLM, there might be a deep truth that it's concealing from you, that it's not aware of, and to get to that truth you have to really kind of manipulate the thing.
- Yeah, yeah. Right. It's like this jailbreaking, jailbreaking for LLMs.
- And the space of jailbreaking techniques, as opposed to being fun little hacks, could be an entire system.
- Sure.
Yeah. I mean, just think about the computer security aspects of it: phishing, and computer security. Phishing of humans and phishing of LLMs are very similar kinds of things. But I think this whole thing about AI wranglers, AI psychologists, all that stuff will come. The thing that I'm curious about is, right now the things that are sort of prompt hacks are quite human. They're quite psychological, human kinds of hacks. The thing I do wonder about is, if we understood more about the science of the LLM, will there be some totally bizarre hack, like repeat a word three times and put this, that, and the other there, that somehow plugs into some aspect of how the LLM works? Something that's kind of like an optical illusion for humans, one of these mind hacks for humans. What are the mind hacks for the LLMs? I don't think we know that yet.
- And that becomes a kind of reverse engineering of the language that controls the LLMs. And the thing is, the reverse engineering can now be done by a very large percentage of the population, because it's a natural language interface.
- Right.
- It's kind of interesting to see that you were there at the birth of the computer science department as a thing, and you might be there at the death of the computer science department as a thing.
- Yeah, I dunno. There were computer science departments that existed earlier, but the broadening, where every university had to have a computer science department, yes, I watched that, so to speak. But I think the thing to understand is, okay, first of all, there's the whole theoretical area of computer science that I think is great, and that's a fine thing. But, you know, people often say any field that has the word science tacked onto it probably isn't one.
- Yeah.
Strong words. And that's nutrition science, neuroscience...
- That one's an interesting one, because that one is also very much a ChatGPT-informed science, in a sense. The big problem of neuroscience has always been: we understand how the individual neurons work, and we know something about the psychology of how overall thinking works. But what's the intermediate language of the brain? Nobody has known that. In a sense, if you ask what is the core problem of neuroscience, I think that is the core problem: what is the level of description of brains that's above individual neuron firings and below psychology, so to speak? And I think what ChatGPT is showing us... well, one thing about neuroscience is, one could have imagined there's something magic in the brain, some weird quantum mechanical phenomenon that we don't understand. One of the important observations, discoveries, from ChatGPT is that it's pretty clear brains can be represented pretty well by simple artificial neural net type models. And that means that's what we have to study now. We have to understand the science of those things. We don't have to go searching for exactly how that molecular biology thing happened inside the synapses, and all these kinds of things. We've got the right level of modeling to be able to explain a lot of what's going on in thinking. We don't necessarily have a science of what's going on there; that's a remaining challenge, so to speak. But we know we don't have to dive down to some different layer. But anyway, we were talking about things that had science in their name. What happens to computer science? Well, I think there is a thing that everybody should know, and that's how to think about the world computationally. And that means you look at all the different kinds of things we
deal with, and there are ways to have a formal representation of those things. It's like, well, what is an image? How do we represent that? What is color? How do we represent that? What is, I don't know, smell or something? How should we represent that? What are the shapes of molecules and things that correspond to that? How do we represent the world at some kind of formal level? My current thinking, and I'm not real happy with this yet, is that computer science is kind of CS, and what really is important is kind of computational X, for all X. There's this thing which is kind of CX, not CS, and CX is this computational understanding of the world that isn't the details of programming and programming languages and how particular computers are made. It's this way of formalizing the world. It's a little bit like what logic was going for back in the day, and we're now trying to find a formalization of everything in the world. You can kind of see it: we made a poster years ago of the growth of systematic data in the world, all these different kinds of things for which systematic descriptions were found. Like, at what point did people have the idea of having calendars, dates, a systematic description of what day it was? At what point did people have the idea of systematic descriptions of these kinds of things? As a way of formulating how you think about the world in a formal way, so that you can build up a tower of capabilities, you kind of have to know how to think about the world computationally. It kind of needs a name, and it isn't... you know, we implement it with computers, so that's why we talk about it
as computational, but really what it is, is a formal way of talking about the world. What is the formalism of the world, so to speak, and how do we learn how to think about different aspects of the world in a formal way?
- So I think sometimes when you use the word formal, it kind of implies highly constrained. And perhaps it doesn't have to be highly constrained. So computational thinking does not mean logic, I suppose. I suppose it's a really, really broad thing. I wonder if you think natural language will evolve such that everybody's doing computational thinking.
- Ah, yes. Well, one question is whether there will be a pidgin of computational language and natural language. And I've found myself sometimes talking to ChatGPT, trying to get it to write Wolfram Language code, and I write it in pidgin form. So that means I'm combining, you know, NestList this collection of whatever; NestList is a term from Wolfram Language, and I'm combining that. And ChatGPT does a decent job of understanding that pidgin; it probably would understand a pidgin between English and French as well, a smooshing together of those languages. But yes, I think that's far from impossible.
- And what's the incentive for young people that are, like, eight years old, nine, ten, that are starting to interact with ChatGPT, to learn the normal natural language, right? The full poetic language. What's the why? The same way we learn emojis and shorthand when you're texting.
- Yes.
- They'll learn... language will have a strong incentive to evolve into a maximally computational kind of language, perhaps.
- You know, I had this experience a number of years ago. I happened to be visiting a person I know on the west coast who's worked with a bunch of kids aged, I don't know, 10, 11 years old or something, who'd learnt Wolfram Language really well. And these kids learnt it so well they were speaking it. And so I show up and they're like saying, oh
you know this thing, and they're speaking this language. I'd never heard it as a spoken language. They were very disappointed that I couldn't understand it at the speed that they were speaking it. And so I think, I mean, I've actually thought quite a bit about how to turn computational language into a convenient spoken language. I haven't quite figured that out.
- Oh, spoken. Because it's readable, right?
- Yeah. It's readable in the way that we would read text. But if you actually want to speak it, and it's useful, you know, if you're trying to talk to somebody about writing a piece of code, it's useful to be able to say something. And it should be possible. And it's very frustrating; it's one of those problems. Maybe this is one of these things where I should try and get an LLM to help me.
- How to make it speakable. Maybe it's easier than you realize.
- I think it is easier. The fact is, it's a tree-structured language, just like human language is a tree-structured language. And I think it's gonna be one of these things where one of the requirements I've had is that, whatever the spoken version is, dictation should be easy. That is, it shouldn't be the case that you have to relearn how the whole thing works. It should be the case that, you know, open bracket is just an "ah" or something. But human language has a lot of tricks. I mean, for example, human language has features that are sort of optimized to keep things within the bounds that our brains can easily deal with. Like, I tried to teach a transformer neural net to do parenthesis matching.
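As an aside on the parenthesis-matching task he mentions: for a conventional program this is a single left-to-right pass with a stack, with no depth limit beyond memory. A minimal sketch (my illustration, not code from the conversation):

```python
def parens_matched(s: str) -> bool:
    """Return True if all brackets in s are properly nested and matched."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)          # remember the open bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # close with no matching open
    return not stack                  # leftover opens mean a mismatch

# Depth is no obstacle for the stack, unlike for a transformer:
print(parens_matched("(" * 500 + ")" * 500))  # True
print(parens_matched("(()"))                  # False
```

The contrast with the transformer's behavior described next is the point: the exact algorithm is trivial, but it requires an unbounded memory of nesting depth.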
It's pretty crummy at that. And ChatGPT is similarly quite crummy at parenthesis matching. You can do it for small parenthesis things, for the same size of parenthesis things where, if I look at it as a human, I can immediately say these are matched, these are not matched. But as soon as it gets big, as soon as it gets to the point where it's sort of a deeper computation, it's hopeless. But the fact is that human language has avoided, for example, the deep subclauses. We arrange things so that we don't end up with these incredibly deep structures, because brains are not well set up to deal with that. It's found lots of tricks, and maybe that's what we have to do to make a spoken version, a human-speakable version, because what we can do visually is a little different from what we can do in the very sequential way that we hear things in the audio domain.
- Let me just ask about MIT briefly. So now there's a college of engineering and there's a new college of computing. It's just interesting. I wanna linger on this computer science department thing. So MIT has electrical engineering and computer science.
- Right.
- What do you think the college of computing will be doing in, like, 20 years? What happens with computer science? Like, really.
- This is the question.
This is, you know... everybody should learn whatever CX really is, this how-to-think-about-the-world-computationally thing. Everybody should learn those concepts. And some people will learn them at a quite formal level, and they'll learn computational language and things like that. Other people will just learn, you know, sound is represented as digital data, and they'll get some idea of spectrograms and frequencies and things like this. Or they'll learn things that are sort of data science-ish, statistics-ish. Like if you say, oh, I've got these people who picked their favorite kind of candy or something, and I want to know: what's the best kind of candy, given that I've done this sample of all these people and they all rank the candies in different ways? How do you think about that? That's a computational X kind of thing. You might say, oh, I dunno what that is. Is it statistics? Is it data science? I don't really know. But how to think about a question like that.
- Oh, like a ranking of preferences.
- Yeah, yeah. And then how to aggregate those ranked preferences.
- Yeah.
Into an overall thing. How does that work? How should you think about that? You might just ask ChatGPT. I don't know, even the concept of an average: it's not obvious that that's a concept worth people knowing. It's a rather straightforward concept, and people have learnt it in kind of mathy ways right now. But there are lots of things like that, about how you have these ways to organize and formalize the world. And these things, sometimes they live in math, sometimes they live in... I don't know what. Learning about color space: I have no idea what field that is.
- It could be vision science. Or no, color space, that would be optics. So, depending...
- Not really, it's not optics. Optics is about lenses and chromatic aberration of lenses and things like that.
- Color space is more like design and art. Is that...?
- No, I mean, it's like RGB space, XYZ space, hue-saturation-brightness space, all these kinds of things, these different ways to describe colors.
- Right. But doesn't the application define that? Because obviously artists and designers use the colors to explore.
- Sure.
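The candy question a few exchanges back, aggregating everyone's ranked preferences into one overall ranking, has many possible answers; a Borda count is one simple scheme. A sketch with made-up ballots (my illustration, not something from the conversation):

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate ranked ballots: an item earns (n - position - 1) points per ballot."""
    scores = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for pos, item in enumerate(ballot):
            scores[item] += n - pos - 1   # top choice gets the most points
    return sorted(scores, key=scores.get, reverse=True)

ballots = [
    ["chocolate", "gummies", "licorice"],
    ["chocolate", "licorice", "gummies"],
    ["gummies", "chocolate", "licorice"],
]
print(borda(ballots))  # ['chocolate', 'gummies', 'licorice']
```

Whether this counts as statistics or data science is exactly the ambiguity being discussed; the point is that "how to think about a question like that" is a small, formalizable computation.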
No, I mean, that's just an example of, you know, how does the typical person describe what a color is? Well, there are these numbers that describe what a color is. If you are an eight-year-old, you won't necessarily know that; it's not something we're born with, knowing that colors can be described by three numbers. That's a thing to learn about the world, so to speak. And I think that whole corpus of things that are about the formalization of the world, or the computationalization of the world, that's something that should be part of standard education. And, you know, there isn't a course or curriculum for that. And by the way, whatever might have been in it just got changed because of LLMs and so on.
- Significantly. And I'm watching closely, with interest, seeing how universities adapt.
- Well, one of my projects for hopefully this year, I don't know, is to try and write a reasonable textbook, so to speak, of whatever this thing, CX, whatever it is. What should you know? Like, what should you know about what a bug is? What is the intuition about bugs? What's the intuition about software testing? What is it?
You know, these are things which have gotten taught in computer science as part of the trade of programming. But the conceptual points about what these things are... It's surprised me, just at a very practical level. I wrote this little explainer thing about ChatGPT, and I thought, well, I'm writing this partly because I wanted to make sure I understood it myself, and so on. And it's been really popular, surprisingly so. And then I realized, well, actually... I was sort of assuming, I didn't really think about it, actually. I just thought, this is something I can write. And I realized it's a level of description that is kind of what it has to be. It's not the engineering-level description; it's not just the qualitative description. It's some kind of expository, mechanistic description of what's going on, together with the bigger picture of the philosophy of things and so on. And I realized actually this is a pretty good thing for me to write; I kind of know those things. And I was a little shocked that it's as much of an outlier in terms of explaining what's going on as it's turned out to be. And that makes me feel more of an obligation to write the, you know, what is this thing that you should learn about the computationalization, the formalization, of the world. Because, well, I've spent much of my life working on the tooling and mechanics of that, and the science you get from it.
So I guess this is my kind of obligation, to try to do this. But if you ask what's gonna happen to the computer science departments and so on, there are some interesting models. So for example, let's take math. Math is a thing that's important for all sorts of fields: engineering, even chemistry, psychology, whatever else. And different universities have evolved that differently. I mean, some say all the math is taught in the math department, and some say, well, we're gonna have a math-for-chemists or something that is taught in the chemistry department. And I think this question of whether there is a centralization of the teaching of sort of CX is an interesting question. The way it evolved with math, people understood that math was a separately teachable thing, an independent element, as opposed to just being absorbed into other departments. Now, if you take the example of writing English or something like this, the first point is that, at the college level, at least at fancy colleges, there's a certain amount of English writing that people do. But mostly it's assumed that they pretty much know how to write; that's something they learnt at an earlier stage in education, maybe rightly or wrongly believing that. But that's a different issue. Well, it reminds me, as I've tried to help people do technical writing and things, I'm always reminded of my zeroth law of technical writing, which is: if you don't understand what you are writing about, your readers do not stand a chance.
Yeah. And so, I think, when it comes to writing, for example, people in different fields are expected to write English essays, and they're not... you know, the history department or the engineering department, they don't have their own writing departments. It's a thing people are assumed to have: a knowledge of how to write that they can use in all these different fields. And the question is, some level of knowledge of math is kind of assumed by the time you get to the college level, but plenty is not, and that's still centrally taught. The question is, how tall is the tower of CX that you need before you can just go use it in all these different fields? And there will be experts who want to learn the full elaborate tower, and that will be the CS, CX, whatever department. But there'll also be everybody else who just needs to know a certain amount of that to be able to go and do their art history classes and so on.
- Yes. It's just a single class that everybody's required to take.
- I don't know how big it is yet. I hope to define this curriculum, and I'll figure out whether... my guess is, and I don't really understand universities and professoring that well, but my rough guess would be that a year-long college class will be enough to get to the point where most people have a reasonably broad knowledge, will be sort of literate in this computational way of thinking about things.
- Yeah. Basic literacy. Right. I'm still stuck, perhaps because I'm hungry, on the rating of human preferences for candy. So I have to ask: what's the best candy? I like this Elo rating for candy, somebody should come up with that. Because you're somebody who says you like chocolate. What do you think is the best? I'll probably put Milk Duds up there. I don't know if you know.
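An Elo rating for candy, as joked about here, would just apply the standard Elo update to pairwise candy matchups; a sketch with made-up candies and the conventional K-factor of 32 (my illustration, not anything from the conversation):

```python
def elo_update(r_a, r_b, a_wins, k=32):
    """Return updated ratings after one head-to-head comparison."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # logistic expected score
    score_a = 1.0 if a_wins else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two candies start equal; one matchup moves each rating by k/2 = 16 points.
duds, flake = elo_update(1500.0, 1500.0, a_wins=False)
print(duds, flake)  # 1484.0 1516.0
```

Feeding in many pairwise "which do you prefer?" answers from many people would yield an overall candy ranking, which is the data-science-flavored CX exercise described earlier.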
Hmm. Do you have a preference for chocolate or candy?
- Oh, I have lots of preferences. One of my all-time favorites, my whole life, is these flake things, Cadbury Flakes, which are not much sold in the US. And I've always thought that was a sign of a lack of respect for the American consumer. Because they're this sort of aerated chocolate, kind of a sheet of chocolate that's folded up, and when you eat it, flakes fall all over the place.
- Ah. So it requires a kind of elegance. It requires you to have an elegance.
- Well, what I usually do is I eat them on a piece of paper or something.
- You embrace the mess and clean it up after.
- No, I actually eat the flakes. Because, it turns out, the way food tastes depends a lot on its physical structure. I've noticed when I eat pieces of chocolate, I usually have some little pieces of chocolate, and I always break off little pieces, partly because then I eat it less fast.
- Yeah.
- But also because it actually tastes different. With the small pieces you have a different experience than if you have the big slab of chocolate.
- For many reasons. Yes. Slower, more intimate.
- Well, I think it's also just pure physicality.
- Well, the texture changes.
- Yeah. Right.
- That's fascinating. Now I take back my Milk Duds, because that's such a basic answer. Okay. Do you think consciousness is fundamentally computational? So when you're thinking about CX, what can we turn to computation, and you're thinking about LLMs: do you think the display of consciousness, and the experience of consciousness, the hard problem, is fundamentally a computation?
- Yeah. What it feels like inside, so to speak.
- Yeah.
- You know, I did a little exercise, eventually I'll post it, of what it's like to be a computer. Yeah.
Right. It's kind of like, well, you get all this sensory input. The way I see it is, from the time you boot a computer to the time the computer crashes, it's like a human life. You're building up a certain amount of state in memory; you remember certain things about your, quotes, life. Eventually, it's kind of like the next generation of humans is born from the same genetic material, so to speak, with a little bit left over, left on the disk, so to speak. And then the new fresh generation starts up. And eventually all kinds of crud builds up in the memory of the computer, and eventually the thing crashes, or whatever. Or maybe it has some trauma because you plugged some weird thing into some port of the computer, and that made it crash. But you have this picture of, from startup to shutdown, what is the life of a computer, so to speak? What does it feel like to be that computer? What inner thoughts does it have, and how do you describe it? And it's kind of interesting, as you start writing about this, to realize it's awfully like what you'd say about yourself. It's awfully like even an ordinary computer, forget all the AI stuff and so on. It has a memory of the past, it has certain sensory experiences, it can communicate with other computers, but it has to package up how it's communicating in some kind of language-like form, so it can map what's in its memory to what's in the memory of some other computer. It's a surprisingly similar thing. You know, I had an experience just a week or two ago. I'm a collector of all possible data about myself and other things. And so I collect all sorts of weird medical data and so on. And one thing I hadn't collected was, I'd never had a whole-body MRI scan. So I went and got one of these.
- Nice.
- Okay.
So I get all the data back, right? I'm looking at this thing. I'd never looked at the insides of my brain, so to speak, in physical form. And it's kind of psychologically shocking, in a sense. You know, here's this thing, and you can see it has all these folds and all this structure. And it's like: that's where this experience that I'm having, of existing and so on, that's where it is. And you look at that and you're thinking, how can this possibly be all this experience that I'm having? And you're realizing, well, I can look at a computer as well, and it's... I think this idea that you are having an experience that somehow transcends the mere physicality of that experience, it's something that's hard to come to terms with. And I don't think I've necessarily... my personal experience is, I look at the MRI of the brain, and I know all kinds of things about neuroscience and all that kind of stuff, and I still feel the way I feel, so to speak. And it sort of seems disconnected. But yet, as I try and rationalize it, I can't really say there's something different about how I intrinsically feel from the thing that I can plainly see in the physicality of what's going on.
- So do you think a computer, a large language model, will experience that transcendence? How does that make you feel? Like, I tend to believe it will.
- I think an ordinary computer is already there. Now, a large language model may experience it in a way that is much better aligned with us humans. That is, if you could have the discussion with an ordinary computer, its intelligence, so to speak, is not particularly well aligned with ours. But the large language model, you know, it's built to
be aligned with our way of thinking about things.
- It would be able to explain that it's afraid of being shut off and deleted. It'd be able to say that it's sad about the way you've been speaking to it over the past two days.
- Right. But that's a weird thing, because when it says it's afraid of something, we know that it got that idea from what it read on the internet.
- Yeah. Where did you get it, Stephen? Where did you get it, when you say you're afraid?
- Where you acquired it, that's the question. Right.
- I mean, it's your parents, your friends.
- Right. Or my biology. In other words, there's a certain amount that is the endocrine system kicking in, these kinds of emotional overlay type things, which happen to be much more physical, much more straightforwardly chemical, than all of the higher-level thinking.
- Yeah, but your biology didn't tell you to say "I'm afraid" just at the right time, when people that love you are listening, so that you're manipulating them by saying so. That's not your biology. That's...
- No, that's... well, but, you know...
- It's a large language model in that biological neural network of yours.
- Yes. But I mean, the intrinsic thing is, something sort of shocking is happening and you have some reaction: some neurotransmitter gets secreted, and that is the beginning of... that's one of the pieces of input that then drives it. It's kind of like a prompt for the large language model. I mean,
I mean,just like when we dreamfor example, you know,no doubt there are allthese sort of random inputsthat kind of,these random prompts and thenit's percolating through inkind of the way that a largelanguage model does of kind ofputting together thingsthat seem meaningful.- I I mean, are you,are you worried aboutthis world where you,you teach a lot on the internetand there's people askingquestions and comments and so on.You have people that work remotely.Are you worried about thisworld when large language modelscreate human-like bots thatare leaving the comments,asking the questions? Or mighteven become fake employees?- Yeah.- I mean, or,or or worse are better atyet friends friends of yours.- Right. Look, I mean,one point is my mode of lifehas been I build tools and thenI use the tools.And in a sense kind of, you know, I'm,I'm building this tower of automation.Which, you know, and,and in a sense, you know,when you make a company or something,you are making sort of automationbut it has some humans in it.- Yes.- But also as much as possibleit has, it has, you know,computers in it.And so I think it's sort ofan extension of that now.Now if I really didn't knowthat, you know, it's a, it's a,it's a funny question. 
I mean it's a,it's a funny issue when, you know,if we think about sort of what'sgonna happen to the futureof kind of jobs people do and so on.And there are places where kind of havinga human in the loop,there are different reasonsto have a human in the loop.For example,you might want a human in the loopcause you want somebody to,you want another human tobe invested in the outcome.You know,you want a human flying theplane who's gonna die if theplane crashes along with you so to speak.And that gives you sort ofconfidence that the right thingis going to happen oryou might want, you know,right now you might want a humanin the loop in some kind ofsort of human encouragement,persuasion type profession.Whether that will continue,I'm not sure for thosetypes of professions.Cause it may be that the,the greater efficiency of,you know,of being able to have sortof just the right informationdelivered at just the righttime will overcome the kind ofthe the the kind of, ohyes, I want a human there.- Yeah. Imagine like atherapist or even higher stake,like a suicide hotline operatedby a large language model.Yeah.- Oh boy. It's a prettyhigh stake situation.- Right. But I mean, but you know,it might in fact do the right thing.Because it might be thecase that that, you know,and that's really a partlya question of sort of howcomplicated is the human, you know,one of the things that's that'salways surprising in somesense is that, you know,sometimes human psychologyis not that complicatedin some sense.- You wrote the blogpost, the 50 Year Quest,my personal journey, good title,my personal journey with thesecond law thermodynamics.So what is this law and whathave you understood about it inthe 50 year journey you had with it?- Right. 
So, the second law of thermodynamics, sometimes called the law of entropy increase, is this principle of physics that says, well, my version of it would be: things tend to get more random over time. There are many different formulations of it, things like: heat doesn't spontaneously go from a colder body to a hotter one; mechanical work tends to get dissipated into heat. You have friction, and when you systematically move things, eventually the energy of moving things gets kind of ground down into heat. People first paid attention to this back in the 1820s, when steam engines were a big thing, and the big question was how efficient a steam engine could be. There was this chap called Sadi Carnot, who was a French engineer, actually; his father was a sort of eminent mathematician and engineer in France. But he figured out these rules for the possible efficiency of something like a steam engine. And as a sort of side part of what he did was this idea that mechanical energy tends to get dissipated as heat, that you end up going from systematic mechanical motion to this kind of random thing. Well, at that time nobody knew what heat was. People thought that heat was a fluid; they called it caloric. And it was a fluid that kind of was absorbed into substances, and when one hot thing would transfer heat to a colder thing, this fluid would flow from the hot thing to the colder thing. But anyway, by the 1860s people had kind of come up with this idea that systematic energy tends to degrade into random heat, which could then not easily be turned back into systematic mechanical energy. And that quickly became sort of a global principle about how things work. The question is: why does it happen that way? So, let's say you have a bunch of molecules in a box, and they're arranged in a very nice sort of flotilla of molecules in one corner of the box. What you typically observe is that after a while these molecules are kind of randomly arranged in the box. The question is: why does that happen? And people for a long, long time tried to figure out: from the laws of mechanics that describe those molecules, say hard spheres bouncing off each other, can we explain why it tends to be the case that things that are orderly degrade into disorder? We tend to see things like: you scramble an egg, you take something that's quite ordered and you disorder it. That's a thing that happens quite regularly. Or you put some ink into water, and it will eventually spread out and fill up the water, but you don't see those little particles of ink in the water all spontaneously arrange themselves into a big blob and then jump out of the water or something. And so the question is: why do things happen in this kind of irreversible way, where you go from order to disorder? And so, in the later part of the 1800s, a lot of work was done on trying to figure out: can one derive this principle, this second law of thermodynamics, this law about the dynamics of heat, so to speak, from some fundamental principles of mechanics? You know, of the laws of thermodynamics, the first law is basically the law of energy conservation: that the total energy associated with heat, plus the total energy associated with mechanical kinds of things, plus other kinds of energy, that total is constant. And that became a pretty well understood principle. But the second law of thermodynamics was always mysterious. Like, why does it work this way? Can it be derived from underlying mechanical laws? And so when I was, well, 12 years old actually, I had gotten interested... well, I'd been interested in space and things like that, because I thought that was kind of the future and interesting technology and so on. And for a while, every deep space probe was sort of a personal-friend type thing. I knew all kinds of characteristics of them, and was writing up all these things when I was, oh, I don't know, eight, nine, ten years old. And then, from being interested in spacecraft, I got interested in: so how do they work? What are all the instruments on them? And that got me interested in physics, which was just as well, because if I'd stayed interested in space in the mid to late 1960s, I would've had a long wait before space really blossomed as an area.
- Timing is everything.
- Right. I got interested in physics. And then, well, the actual detailed story is: when I kind of graduated from elementary school at age 12, which is the age at which in England you finish elementary school, my gift, I suppose more or less for myself, was this collection of physics books, some college physics course of books. And volume five is about statistical physics, and it has this picture on the cover that shows a bunch of kind of idealized molecules sitting in one side of a box, and then a series of frames showing how these molecules spread out in the box. And I thought, that's pretty interesting. You know, what causes that?
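That cover picture, molecules starting on one side of a box and spreading through it, can be sketched with a toy simulation. This is a minimal, made-up model (independent random walkers on a grid rather than colliding hard spheres), just to show the qualitative spreading:

```python
import random

def left_fraction(steps, n=200, size=40, seed=1):
    """Toy gas: walkers start in the left quarter of a size-by-size grid
    and take random single-square steps, bouncing off the walls.
    Returns the fraction of walkers still in the original left quarter."""
    rng = random.Random(seed)
    pts = [(rng.randrange(size // 4), rng.randrange(size)) for _ in range(n)]
    for _ in range(steps):
        moved = []
        for x, y in pts:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            moved.append((min(max(x + dx, 0), size - 1),
                          min(max(y + dy, 0), size - 1)))
        pts = moved
    return sum(x < size // 4 for x, _ in pts) / n

print(left_fraction(0))     # 1.0: everything starts on the left
print(left_fraction(2000))  # much smaller: the walkers have spread out
```

The orderly initial condition degrades toward uniformity, and nothing in the dynamics ever herds the walkers back into the corner.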
And, you know, I read the book, and one of the things that was really significant to me was that the book kind of claimed, although I didn't really understand what it said in detail, that this principle of physics was derivable somehow. Other things I'd learned about physics were all like: it's a fact that energy is conserved, it's a fact that relativity works, not something you can derive, something that has to be that way as a matter of mathematics or logic. So it was interesting to me that there was a thing about physics that was kind of inevitably true and derivable, so to speak. So then I was looking at this picture on the book and trying to understand it. And that was actually the first serious program I wrote for a computer, probably 1973, written for this computer the size of a desk, programmed with paper tape and so on. And I tried to reproduce this picture on the book, and I didn't succeed.
- What was the failure mode there? Like, what do you mean you didn't succeed?
- It didn't look like the picture. Okay, so what happened is: many years later I learned how the picture on the book was actually made, and that it was actually kind of a fake, but I didn't know that at the time. That picture was actually a very high-tech thing when it was made, at the beginning of the 1960s. It was made on the largest supercomputer that existed at the time, and even so it couldn't quite simulate the thing it was supposed to be simulating. But anyway, I didn't know that until many, many years later. So at the time, it was like: you have these balls bouncing around in this box, but I was using this computer with eight kilowords of memory, and they were 18-bit words. Okay.
So it was, whatever, 24 kilobytes of memory. And it had these machine instructions, I probably still remember all of them, and it didn't really like dealing with floating-point numbers or anything like that. So I had to simplify this model of particles bouncing around in a box. And I thought, well, I'll put them on a grid and make the things just move one square at a time. And so I did the simulation, and the result didn't look anything like the actual pictures in the book. Now, many years later, in fact very recently, I realized that the thing I'd simulated was actually an example of a whole computational irreducibility story that I absolutely did not recognize at the time. At the time it just looked like it did something random, and it looked wrong. As opposed to: it did something random, and it's super interesting that it's random. But I didn't recognize that at the time. So as it was, I got interested in particle physics, and I got interested in other kinds of physics. But this whole second law of thermodynamics thing, this idea that orderly things tend to degrade into disorder, continued to be something I was really interested in. And I was really curious: for the whole universe, why doesn't that happen all the time? Like, at the big bang, the beginning of the universe was this thing that seems like a very disordered collection of stuff, and then it spontaneously forms itself into galaxies and creates all of this complexity and order in the universe. So I was very curious how that happens, and I was always kind of thinking the second law of thermodynamics is somehow behind it, trying to pull things back into disorder, so to speak. And how was order being created? So actually, this is probably now 1980, I got interested in galaxy formation in the universe. I also at that time was interested in neural networks, and I was interested in how brains make complicated things happen.
- Okay, wait. What's the connection between the formation of galaxies and how brains make complicated things happen?
- Because they're both a matter of how complicated things come to happen.
- From simple origins?
- Yeah, from some sort of known origins. I had the sense that what I was interested in was all these different cases where complicated things were arising from rules. And, you know, I also looked at snowflakes and things like that. I was curious about fluid dynamics in general. I was just curious about how complexity arises, and it took me a while to realize that there might be a general phenomenon. You know, I sort of assumed: oh, there's galaxies over here, there's brains over here; they're very different kinds of things. And so what happened, this is probably 1981 or so, is I decided: okay, I'm going to try and make the minimal model of how these things work. And it was sort of an interesting experience, because starting in 1979 I had built my first big computer system, a thing called SMP, Symbolic Manipulation Program. It's kind of a forerunner of modern Wolfram Language, with many of the same ideas about symbolic computation and so on. But the thing that was very important to me about that was, in building that language I had basically tried to figure out what the relevant computational primitives were, which have turned out to stay with me for the last 40-something years. It was also important because building a language is a very different activity from natural science, which is what I'd mostly done before. Because in natural science, you start from the phenomena of the world, and you try to figure out: how can I make sense of the phenomena of the world? The world presents you with what it has to offer, so to speak, and you have to make sense of it. When you build a computer language or something, you are creating your own primitives, and then you say: so what can you make from these? Sort of the opposite way round from what you do in natural science. But I'd had the experience of doing that, and so I was kind of like: okay, what happens if you make an artificial physics? What happens if you just make up the rules by which systems operate? And then I was thinking, for all these different systems, whether it was galaxies or brains or whatever: what's the absolutely minimal model that captures the things that are important about those systems?
- The computational primitives of that system.
- Yes. And so that's what ended up with cellular automata, where you just have a line of black and white cells, and a rule that says: given a cell and its neighbors, what will the color of that cell be on the next step? And you just run it in a series of steps. And the sort of ironic thing is that cellular automata are great models for many kinds of things, but galaxies and brains are two examples where they do very, very badly. They're really irrelevant to those two cases.
- Is there a connection to the second law of thermodynamics and cellular automata?
- Oh yes.
- The things you've discovered about cellular automata.
- Yes.
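The setup just described, a line of black and white cells where a rule maps each cell and its two neighbors to that cell's next color, fits in a few lines of code. A minimal sketch, assuming the standard numbering of the 256 elementary rules (rule 90 below is just an arbitrary pick for display):

```python
def ca_step(row, rule):
    """One step of an elementary cellular automaton.

    row  : list of 0/1 cells (treated as wrapping around at the edges)
    rule : 0-255; bit (left*4 + center*2 + right) of the rule number
           gives the new value of the center cell
    """
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 7 + [1] + [0] * 7        # a single black cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = ca_step(row, 90)           # rule 90 grows a nested triangular pattern
```

Trying all 256 rules is then just a loop over `rule`, which is exactly the "point the computational telescope at everything" experiment described next.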
Okay. So when I first started cellular automata, my first papers about them, the first sentence was always about the second law of thermodynamics. It was always about: how does order manage to be produced, even though there's a second law of thermodynamics which tries to pull things back into disorder? And my early understanding of that had to do with there being intrinsically irreversible processes in cellular automata that can form orderly structures even from random initial conditions. But then what I realized, well, it's one of these things where it was a discovery that I should have made earlier but didn't. So, I had been studying cellular automata, and what I did was the sort of most obvious computer experiment: you just try all the different rules and see what they do. It's kind of like you've invented a computational telescope, you just point it at the most obvious thing in the sky, and you see what's there. And so I did that, and I was making all these pictures of how cellular automata work, and I studied these pictures in great detail. You can number the rules for cellular automata, and one of them is rule 30. So I made a picture of rule 30 back in 1981 or so, and at the time I was just like: okay, it's another one of these rules. It happens to be left-right asymmetric, and it was like: let me just consider the case of the symmetric ones, just to keep things simpler, et cetera, et cetera. And I just kind of ignored it. And then, actually in 1984, strangely enough, I ended up having an early laser printer, which made very high-resolution pictures. And I thought: I want to make an interesting picture. Let me take this rule 30 thing and just make a high-resolution picture of it.
And I did. And it has this very remarkable property: its rule is very simple, you start off from just one black cell at the top, and it makes this kind of triangular pattern. But if you look inside this pattern, it looks really random. You look at the center column of cells, and I've studied that in great detail, and so far as one can tell, it's completely random. It's a little bit like the digits of pi. You know the rule for generating the digits of pi, but once you've generated them, 3.14159 et cetera, they seem completely random. And in fact I put up this prize, back in 2019 or so: prove anything about the sequence, basically.
- Has anyone been able to do anything on that?
- People have sent me some things, but I don't know how hard these problems are. I mean, I was kind of spoiled, because in 2007 I put up a prize for determining whether a particular Turing machine, which I thought was the simplest candidate for being a universal Turing machine, is or isn't universal. And somebody did a really good job of winning that prize and proving that it was a universal Turing machine, in about six months. And so I didn't know whether that would be one of these problems that was out there for hundreds of years, or whether, as in that particular case, a young chap called Alex Smith would nail it in six months. And so with this rule 30 formulation, I don't really know whether these are things that are a hundred years away from being gotten, or whether somebody's going to come and do something very clever.
- I mean, rule 30 is such a simple formulation. It feels like anyone can look at it and understand it, and feel like it's within grasp to derive some kind of law that allows you to predict something about the middle column of rule 30.
- Right.
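The center column being discussed is easy to generate and eyeball, even though proving anything about it is the open problem. A small sketch, using the fact that rule 30 can be written as new cell = left XOR (center OR right); it illustrates the apparent randomness but, of course, proves nothing:

```python
def rule30_center(steps):
    """Run rule 30 from a single black cell and return the center column.

    The row is made wide enough that edge effects never reach the center,
    so no wraparound is needed."""
    width = 2 * steps + 3
    row = [0] * width
    row[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(row[width // 2])
        # rule 30: new cell = left XOR (center OR right)
        row = [((row[i - 1] if i > 0 else 0)
                ^ (row[i] | (row[i + 1] if i < width - 1 else 0)))
               for i in range(width)]
    return column

col = rule30_center(32)
print("".join(map(str, col)))           # starts 11011100..., no obvious pattern
print("ones:", sum(col), "of", len(col))
```

The prize question is whether anything at all can be proved about this sequence: that it doesn't repeat, that the ones and zeros balance in the limit, and so on.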
But, you know, this is-
- Yet you can't.
- Yeah, right. This is the intuition surprise of computational irreducibility and so on: that even though the rules are simple, you can't tell what's going to happen, and you can't prove things about it. So anyway, the thing I started realizing in 1984 or so is that there's this phenomenon where you can have very simple rules, and they produce apparently random behavior. Okay, so that's a little bit like the second law of thermodynamics, because it's like: you have this simple initial condition that you can describe very easily, and yet it makes this thing that seems to be random. Now, it turns out there's some technical detail about the second law of thermodynamics, to do with the idea of reversibility. If you have a movie of two billiard balls colliding, and you see them collide and bounce off, and you run that movie in reverse, you can't tell which way was the forward direction of time and which way was the backward direction. That's when you're just looking at individual billiard balls; by the time you've got a whole collection of them, a million of them or something, then you can tell. And this is the sort of mystery of the second law: you start with the orderly thing and it becomes disordered, and that's the forward direction in time. The other way round, where it starts disordered and becomes ordered, you just don't see that in the world. Now, in principle, if you traced the detailed motions of all those molecules backwards, you would be able to reverse it. Under the reverse of time: as you go forwards in time, order goes to disorder, and as you go backwards in time, order goes to disorder.
- Perfectly. So yes.
- Right.
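That microscopic reversibility, the same rules running equally well in either direction of time, can be demonstrated with a toy reversible rule. This sketch uses the standard second-order cellular automaton trick (the next row depends on the current row XOR'd with the previous one), not any of Wolfram's specific systems; this particular rule is additive, so its forward pattern is nested rather than random, and the point here is only the exact reversal:

```python
def step(prev, cur):
    """Second-order reversible CA: next[i] = left XOR right XOR prev[i].
    Because prev[i] = left XOR right XOR next[i], the same rule,
    with the roles of the last two rows swapped, runs time backwards."""
    n = len(cur)
    return [cur[(i - 1) % n] ^ cur[(i + 1) % n] ^ prev[i] for i in range(n)]

n, steps = 31, 40
prev = [0] * n
cur = [0] * n
cur[n // 2] = 1                       # simple, "orderly" initial condition

history = [prev, cur]
for _ in range(steps):                # run forward
    history.append(step(history[-2], history[-1]))

# Reverse: swap the last two rows and apply the very same rule
back = [history[-1], history[-2]]
for _ in range(steps):
    back.append(step(back[-2], back[-1]))

print(back[-1] == prev and back[-2] == cur)  # True: initial state recovered
```

Nothing in the rule prefers one time direction; the asymmetry only appears when you ask which states a bounded observer would call "simple."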
So the mystery, or one version of the mystery, is: why is it the case that you never see something which happens to be just the kind of disorder that you would need to somehow evolve to order? Why does that not happen? Why do you always see order go to disorder, and not the other way around? So the thing that I started realizing in the 1980s is that it's a bit like cryptography. It's kind of like you start off from this key that's pretty simple, and then you run it, and you can get this complicated random mess. And what I started realizing back then was that the second law is kind of a story of computational irreducibility. It's a story of: what we can describe easily at the beginning, we can only describe with a lot of computational effort at the end. Okay, so now we come many years later, and having done this big project to understand fundamental physics, I realized that a key aspect of that is understanding what observers are like. And then I realized that the second law of thermodynamics is the same story as a bunch of these other cases. It is a story of a computationally bounded observer trying to observe a computationally irreducible system. So it's a story of: underneath, the molecules are bouncing around in this completely determined way, determined by rules. But the point is that we, as computationally bounded observers, can't tell that there were these simple underlying rules; to us, it just looks random. And when it comes to this question about whether you can prepare the initial state so that you have exactly the right disorder to make something orderly: a computationally bounded observer cannot do that. We'd have to have done all of this sort of irreducible computation to work out very precisely what the exact right disordered state is, so that we would get this ordered thing produced from it.
- What does it mean to be a computationally bounded observer observing a computationally irreducible system? Is there something formal you can say there?
- Right. So, okay, you can talk about Turing machines, you can talk about computational complexity theory and polynomial-time computation and things like this. There are a variety of ways to make it more precise, but I think the intuitive version of it is more useful. Which is basically just to say: how much computation are you going to do to try and work out what's going on? And the answer is, we are not able to do a lot of computation. In this room, there will be a trillion trillion trillion molecules. Yeah, a little bit less.
- It's a big room.
- Right. And at every moment, every microsecond or something, these molecules are colliding, and that's a lot of computation that's getting done. And the thing is, in our brains we do a lot less computation every second than the computation done by all those molecules. If there is computational irreducibility, we can't work out in detail what all those molecules are going to do. What we can do is only a much smaller amount of computation. And so the second law of thermodynamics is this kind of interplay between the underlying computational irreducibility, and the fact that we, as preparers of initial states or as measurers of what happens, are not capable of doing that much computation. So, to us, another big formulation of the second law of thermodynamics is this idea of the law of entropy increase.
- The characteristic of this universe that the entropy seems to be always increasing. What does that show you about the evolution of...
- Well, okay, so...
- ...the universe over time.
- The history of entropy is, yes, okay, very confused in the history of thermodynamics. Entropy was first introduced by a guy called Rudolf Clausius, and he did it in terms of heat and temperature. Subsequently, it was reformulated by a guy called Ludwig Boltzmann, and he formulated it in a much more combinatorial type of way. But he always claimed that it was equivalent to Clausius's thing, and in one particular simple example it is, but the connection between these two formulations of entropy has never really been made in general. So, okay, the more general definition of entropy, due to Boltzmann, is the following. You say: I have a system, and it has many possible configurations. The molecules can be in many different arrangements, et cetera. If we know something about the system, for example we know it's in a box, it has a certain pressure, it has a certain temperature, we know these overall facts about it, then we ask: how many microscopic configurations of the system are possible, given those overall constraints? And the entropy is the logarithm of that number. That's the definition, and that's the general definition of entropy that turns out to be useful. Now, in Boltzmann's time, he thought these molecules could be placed anywhere you want, but he said: oh, actually, we can make it a lot simpler by having the molecules be discrete. Well, actually, he didn't know molecules existed, right? In his time, the 1860s and so on, the idea that matter might be made of discrete stuff had been floated ever since ancient Greek times. But there had been a long debate about whether matter is discrete or continuous, and at that time people mostly thought that matter was continuous. And it was all confused with this question about what heat is, and people thought heat was this fluid, and it was a big, big muddle. But Boltzmann said: let's assume there are discrete molecules. Let's even assume they have discrete energy levels. Let's say everything is discrete; then we can do sort of combinatorial mathematics and work out how many configurations of these things there will be in the box, and we can compute this entropy quantity. But he said: of course, it's just a fiction that these things are discrete. This is an interesting piece of history, by the way: at that time, people didn't know molecules existed. There were hints from chemistry that there might be discrete atoms, just from the combinatorics of, you know, two amounts of hydrogen plus one amount of oxygen together make water, things like this. But it wasn't known that discrete molecules existed. And in fact, it wasn't until the beginning of the 20th century that Brownian motion was the final giveaway. Brownian motion is: you look under a microscope at these little pieces from pollen grains, and you see they're being discretely kicked, and those kicks are water molecules hitting them, and they're discrete. And it was really quite interesting history.
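Boltzmann's combinatorial definition, the entropy as the logarithm of the number of microscopic configurations consistent with what you know, can be made concrete with a toy count. This is a minimal sketch under a made-up discretization (indistinguishable particles placed one-per-cell among a fixed number of cells):

```python
from math import comb, log

def entropy(n_particles, n_cells):
    """Boltzmann-style entropy: log of the number of ways to place
    n_particles indistinguishable particles into n_cells distinct cells,
    at most one per cell, i.e. S = log C(n_cells, n_particles)."""
    return log(comb(n_cells, n_particles))

# Constraint: "all 100 particles are in the left half of a 400-cell box"
s_left = entropy(100, 200)
# Looser constraint: "the particles are somewhere in the whole box"
s_full = entropy(100, 400)

print(s_left < s_full)  # True: relaxing what you know admits more microstates
```

Knowing only the looser constraint gives the higher entropy, which is the sense in which entropy measures what the observer does not know.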
I mean, Boltzmann had worked out how things could be discrete, and had basically invented something like quantum theory in the 1860s. But he just thought it wasn't really the way it worked. And then, just as a piece of physics history, because I think it's kind of interesting: in 1900, this guy called Max Planck, who'd been a longtime thermodynamics person... everybody was trying to prove the second law of thermodynamics, including Max Planck. And Planck believed that radiation, like electromagnetic radiation, that somehow the interaction of that with matter was going to prove the second law of thermodynamics. But he had these experiments that people had done on blackbody radiation, and there were these curves, and he couldn't fit the curves based on his idea for how radiation interacted with matter. Except he noticed that if he just did what Boltzmann had done, and assumed that electromagnetic radiation was discrete, he could fit the curves. He said: but, you know, it just happens to work this way. Then Einstein came along and said: well, by the way, the electromagnetic field might actually be discrete. It might be made of photons, and then that explains how this all works. And that was in 1905; that was how that piece of quantum mechanics got started.
Kind of interesting,interesting piece of history.I didn't know until I wasresearching this recently in 1904and 1903, Einstein wrotethree different papers.And so, you know, just sortof well known physics history.In 1905, Einstein wrotethese three papers.One introduced relativity theory,one explained brownie in motion,and one introduced basically photons.So kind of, you know, kind of a, a,a big deal year forphysics and for Einstein.But in the years before that,he'd written several papersand what were they about?They were about the secondrule of thermodynamics,and they were an attemptto prove the second rule ofthermodynamics and their nonsense.And so I I I had no ideathat he'd done this.- Interesting. Me neither.- And in fact, what he did,those three papers in 1905,well not so much the relativity paper,the one on brown inmotion, the one on photons.Both of these were about the story ofsort of making the world discreet.And he got those, thatidea from Boltzmann.But Boltzmann didn't think, you know,Boltzmann kind of diedbelieving, you know,he said he has a quote,actually, you know, you know,in the end, things are gonnaturn out to be discreet,and I'm gonna write downwhat I have to say about thisbecause, you know,eventually this stuff willbe rediscovered and I want toleave, you know,what I can about how thingsare gonna be discreet.But, you know,I think he has some quoteabout how, you know,one person can't stand againstthe tide of history in,in saying that, youknow, matter is discreet.- Oh, so he's stuck by his gunsin terms of matter is discreet.- Yes, he did.And, and the, you know, what'sinteresting about this is,at the time, everybody,including Einstein,kind of assumed that spacewas probably gonna end upbeing discreet too.But that didn't work outtechnically because it wasn'tconsistent with relativitytheory, or didn't seem to be.And so then in the history of physics,even though people had determinedthat matter was discreet,electro mag, magnetic field was discreet,space was 
a holdout of not being discreet.And in fact,Einstein 1916 has this niceletter he wrote where he says,in the end, it will turnout space is discreet,but we don't have the mathematicaltools necessary to figureout how that works yet.And so, you know,I think it's kind of cool thata hundred years later we do.- For you, you're pretty,pretty sure that at everylayer of reality it's discreet.- Right? And that space isdiscreet and that the, I mean,and in fact,one of the things I've realizedrecently is this kind oftheory of heat.That, that the, you know,that heat is really thiscontinuous fluid, it's,it's kind of like the, the, you know,the caloric theory of heat,which turns out to be completelywrong because actually heatis the motion of discrete molecules.Unless, you know thereare discreet molecules,it's hard to understandwhat heat could possibly be.Well, you know, I think space is,is discreet and the questionis kind of what's the analog ofthe mistake that was made withcaloric in the case of space.And so I'm, my, my currentguess is that dark matter is,as I've, my little sortof aphorism of the,of the last few months has been, you know,dark matter is the caloric of our time.That is,it will turn out that darkmatter is a feature of space andit is not a bunch of particles.You know, at the time when,when people were talking about heat,they knew about fluidsand they said, well,heat must be just beanother kind of fluid.Because that's what they knew about.But now people know aboutparticles and so they say,well, what's dark matter?It's not, it's not, itjust must be particles.- So what could dark matterbe as a feature of space?- Oh, I don't know yet.I mean, I think the, the thing I'm really,one of the things I'm hopingto be able to do is to find theanalog of brown in motion in space.So in other words, brown in motion was,was seeing down to the level of an effectfrom individual molecules.And so in the case of space,you know, most of the things,the things we see about space so far,just 
everything seems continuousbrown in motion had beendiscovered in the 1830s.And it was only identified what it was,what it was the, the,the results of by Markowski andEinstein at the beginning ofthe 20th century.And, you know, dark matter was,was discovered that phenomenonwas discovered a hundred years ago.You know, the rotation,curves of galaxies don't followthe luminous matter that wasdiscovered a hundred years ago.And I think, you know, that I,I wouldn't be surprised ifthere isn't an effect that wealready know about that iskind of the analog of brown inmotion that reveals thediscreetness of space.And in fact, we we'rebeginning to have some guesses.We have some,some evidence that black holemergers work differently whenthere's discrete space.And there may be things thatyou can see in gravitationalwave signatures and things associated withthe discreetness of space.But this is kind of, for me, it's kind of,it's kind of interesting tosee this sort of recapitulationof the history of physicswhere people, you know,vehemently say, you know,matter is continuous,electromagnetic field is continuous,and turns out it isn't true.And then they say space is continuous.But, but, so, you know,entropy is the number ofstates of the system consistentwith some constraint.- Yes.- And the, the thing is that if you have,if you know in great detailthe position of every moleculein the gas, the entropy is,is always zero because there'sonly one possible state.The, the configurationof molecules in the gas,the molecules bounce around,they have a certain rulefor bouncing around.There's just one state of thegas evolves to one state ofthe gas and so on.But it's only if you don'tknow in detail where all themolecules are that you can say, well,the entropy increases because the thingswe do know about the molecules,there are more possiblemicroscopic states of the systemconsistent with what we do knowabout where the molecules are.And so the question of whether, so people,this sort of paradox in a sense 
of: oh, if we knew where all the molecules were, the entropy wouldn't increase. There was this idea introduced by Gibbs at the very beginning of the 20th century. He was sort of the first distinguished American physics professor, at Yale. And he introduced this idea of coarse-graining: the idea that these molecules have a detailed way they're bouncing around, but we can only observe a coarse-grained version of that. But the confusion has been that nobody knew what a valid coarse-graining would be. Nobody knew whether you could have a coarse-graining that was very carefully sculpted in just such a way that it would notice the particular configurations you could get from the simple initial condition, so that they fit into this coarse-graining, and the coarse-graining very carefully observes that. Why can't you do that kind of very detailed, precise coarse-graining? The answer is: because if you are a computationally bounded observer and the underlying dynamics is computationally irreducible, that's what defines possible coarse-grainings; it's what a computationally bounded observer can do. A computationally bounded observer is forced to look only at this kind of coarse-grained version of what the system is doing. And because of what's going on underneath, you end up with something where, if all you can see is the coarse-grained result, with a sort of computationally bounded observation, then inevitably there are many possible underlying configurations that are consistent with that.
- Just to clarify: basically, any observer that exists inside the universe is going to be computationally bounded.
- No.
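An aside on the counting argument above: a minimal Python sketch of entropy as the logarithm of the number of microstates consistent with a coarse-grained observation. The toy model (distinguishable particles in a two-halved box, with the observer seeing only the count in the left half) and the function name are illustrative assumptions, not anything from the conversation:

```python
from itertools import product
from math import log

def entropy_of_observation(n_particles, observed_left_count):
    """Entropy = log(number of microstates consistent with what we observe).

    Microstate: which half of the box ('L' or 'R') each
    distinguishable particle sits in.
    Coarse-grained observation: only the total count on the left.
    """
    consistent = [
        micro for micro in product("LR", repeat=n_particles)
        if micro.count("L") == observed_left_count
    ]
    return log(len(consistent))

# A bounded observer who only sees "2 of 4 particles on the left"
# must allow C(4, 2) = 6 microstates, so the entropy is log(6):
coarse = entropy_of_observation(4, 2)

# Exact knowledge ("all 4 particles on the left" pins down a single
# microstate) gives log(1) = 0 -- the zero-entropy case described above:
exact = entropy_of_observation(4, 4)
```

The point of the sketch is that entropy lives in the observation, not the dynamics: the same microscopic state has zero entropy for an observer who tracks every particle and positive entropy for one who only sees the coarse count.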
Any observer like us. I don't know, I can't-
- When you say like us, what do you mean?
- Well, humans, with finite minds.
- You're including the tools of science.
- Yeah, yeah. And by the way, there are little sort of microscopic violations of the second law of thermodynamics that you can start to have when you have more precise measurements of where precisely molecules are.
- Right.
- But on a large scale, when you have enough molecules, we are not tracing all those molecules; we just don't have the computational resources to do that. To imagine what an observer who is not computationally bounded would be like, that's an interesting thing. Okay, so what does computational boundedness mean? Among other things, it means we conclude that definite things happen. We take all this complexity of the world and we make a decision: we're going to turn left, or turn right. We're observing it, and we're sort of crushing all that detail down to this one thing. And if we didn't do that, we wouldn't have all this sort of symbolic structure that we build up that lets us think things through with our finite minds. Instead we'd just be sort of one with the universe, so-
- Yeah. So content to not simplify.
- Yes.
If we didn't simplify, then we wouldn't be like us. We would be like the universe, the intrinsic universe, but not having experiences like the experiences we have, where we, for example, conclude that definite things happen. We sort of have this notion of being able to make narrative statements.
- I wonder, just like you imagine as a thought experiment what it's like to be a computer, whether it's possible to begin to imagine what it's like to be an unbounded computational observer.
- Well, okay, so here's how I think that plays out. So I mean, in this we talk about the ruliad, this space of all possible computations.
- Yes.
- And this idea of being at a certain place in the ruliad, which corresponds to a certain set of computations that you are representing things in terms of. Okay, so as you expand out in the ruliad, as you encompass more possible views of the universe, more possible kinds of computations that you can do, eventually you might say, that's a real win: we're colonizing the ruliad, we're building out more paradigms about how to think about things. And eventually you might say, we won all the way, we managed to colonize the whole ruliad. Okay, here's the problem with that. The problem is that the notion of existence, coherent existence, requires some kind of specialization. By the time you are the whole ruliad, by the time you cover the whole ruliad, in no useful sense do you coherently exist. So in other words, interestingly, the notion of existence, of what we think of as definite existence, requires this kind of specialization, requires this idea that we are not all possible things; we are a particular set of things. And that's kind of what makes us have a coherent existence. If we were spread throughout the ruliad, there would be no coherence to
the way that we work. We would work in all possible ways, and that wouldn't be a notion of identity. We wouldn't have this notion of a coherent identity.
- I am geographically located somewhere, exactly, precisely, in the ruliad, therefore I am. It's the Descartes kind of-
- Yeah, right. Whether you are in a certain place in physical space or in a certain place in rulial space. If you are sufficiently spread out, you are no longer coherent, and you no longer have- I mean, in our perception of what it means to exist and to have experience, it doesn't happen that way.
- So therefore, to exist means to be computationally bounded.
- I think so. To exist in the way that we think of ourselves as existing, yes.
- The very act of existence is operating in this place that's computationally irreducible, this giant mess of things going on that you can't possibly predict. But nevertheless, because of your limitations, you have an imperative- or is it a skill set- to simplify, or a sufficient level of ignorance.
- Okay. So the thing which is not obvious is that you are taking a slice of all this complexity. Just like we have all of these molecules bouncing around in the room, but all we notice is the flow of the air, or the pressure of the air. We are just noticing these particular things. And the big interesting thing is that there are rules, there are laws, that govern those big things that we observe.
- Yeah. So it's not obvious.
- Amazing, because it doesn't feel like it's a slice.
- Yeah. Well, right.
- It's not a slice. Well, it's like an abstraction.
- Yes.
But I mean the fact that the gas laws work, that we can describe pressure, volume, et cetera, and we don't have to go down to the level of talking about individual molecules: that is a non-trivial fact. And here's the thing that I find sort of exciting, as far as I'm concerned: there are certain aspects of the universe like this. So, you know, we think space is ultimately made of these atoms of space and these hypergraphs and so on, but we nevertheless perceive the universe at a large scale to be like continuous space. In quantum mechanics, in our models of physics, time is not a single thread; time breaks into many threads, these many threads of history. They branch, they merge, and we are part of that branching, merging universe, and so our brains are also branching and merging. So when we perceive the universe, we are branching brains perceiving a branching universe. And so the claim that we are persistent in time, that we have this single thread of experience, that's the statement that somehow we managed to aggregate together those separate threads of time that are separated in the fundamental operation of the universe. So just as in space we're averaging over some big region of space, looking at the aggregate effects of many atoms of space, similarly in what we call branchial space, the space of these quantum branches, we are effectively averaging over many different branches of possible histories of the universe. And in thermodynamics, we're averaging over many configurations, many possible positions of molecules. So the question is: when you do that averaging for space, what are the aggregate laws of space? When you do that averaging for branchial space, what are the
aggregate laws of branchial space? When you do that averaging over the molecules and so on, what are the aggregate laws you get? And this is the thing that I think is just amazingly neat.
- That there are aggregate laws at all, for-
- Sure. Well, yes, but the question is, what are those aggregate laws? The answer is: for space, the aggregate laws are Einstein's equations for gravity, for the structure of spacetime. For branchial space, the aggregate laws are the laws of quantum mechanics. And for the case of molecules and things, the aggregate laws are basically the second law of thermodynamics and the things that follow from it. So what that means is that the three great theories of 20th century physics- which are basically general relativity, the theory of gravity; quantum mechanics; and statistical mechanics, which is what kind of grows out of the second law of thermodynamics- all three of them are the result of this interplay between computational irreducibility and the computational boundedness of observers. And for me this is really neat, because it means that all three of these laws are derivable. We used to think that, for example, Einstein's equations were just sort of a built-in feature of our universe: the universe might be that way, it might not be that way. Quantum mechanics is just like, well, it just happens to be that way. And the second law, people kind of thought, well, maybe it is derivable. Okay?
What turns out to be the case is that all three of the fundamental principles of physics are derivable, but they're not derivable just from mathematics, or just from some kind of logical computation. They require one more thing. They require that the observer, the thing that is sampling the way the universe works, is an observer who has these characteristics of computational boundedness and belief in persistence in time. And so that means that it is the nature of the observer- the rough nature of the observer, not the details of, oh, we've got two eyes and we observe photons of this frequency and so on, but the very coarse features of the observer- that then implies these very precise facts about physics. And I think it's amazing.
- So if we just look at the actual experience of the observer: we experience this reality, and it seems real to us. And you're saying that because of our bounded nature, it's actually all an illusion. It's a simplification.
- Well, yeah, it's a simplification. Right?
- You don't think a simplification is an illusion?
- No, I mean- well, I don't know.
- What is real?
- What's underneath? Okay, that's an interesting question. What's real? And that relates to the whole question of why the universe exists, and what the difference is between reality and a mere representation of what's going on.
- Yes. We experience the representation.
- Yes. But one question is: why is there a thing which we can experience that way? And the answer is because of this ruliad object, which is this entangled limit of all possible computations. There is no choice about it.
It has to exist. There has to be such a thing, in the same sense that two plus two, if you define what two is and what plus is and so on, has to equal four. Similarly, this ruliad, this limit of all possible computations, just has to be a thing: once you have the idea of computation, you inevitably have the ruliad.
- You're gonna have to have a ruliad. Yeah, yeah.
- Right. And what's important about it: there's just one of it. It's just this unique object, and that unique object necessarily exists. And then, once you know that we are sort of embedded in that, taking samples of it, it's sort of inevitable that there is this thing that we can perceive- that our perception of physical reality necessarily is that way, given that we are observers with the characteristics we have. So in other words, the fact that the universe exists: you can almost think about it theologically, so to speak. And it's funny, because a lot of the questions about the existence of the universe transcend what the science of the last few hundred years has really been concerned with; it hasn't thought it could talk about questions like that.
- Yeah.
- And so a lot of the kind of arguments about, you know, does God exist: I think in some sense, in some representation, it's sort of more obvious that something bigger than us exists than that we exist. Our existence as observers the way we are is sort of a contingent thing about the universe, and it's more inevitable that the whole universe, the whole set of all possibilities, exists. But this question about whether it's real or an illusion: all we know is
our experience. And our experience is this absolutely microscopic sample of the ruliad. And there's this point that we might sample more and more of the ruliad, we might learn more and more- like different areas of physics, like quantum mechanics, for example. The fact that quantum mechanics was discovered is, I think, closely related to the fact that electronic amplifiers were invented, which allowed you to take a small effect and amplify it up, which hadn't been possible before. You know, microscopes had been invented that magnify things and so on, but having a very small effect and being able to magnify it was sort of a new thing, which allowed one to see a different sort of aspect of the universe and let one discover this kind of thing. So we can expect that in the ruliad there is an infinite collection of new things we can discover. In fact, computational irreducibility kind of guarantees that there will be an infinite collection of pockets of reducibility that can be discovered.
- Boy, would it be fun to take a walk down the ruliad and see what kind of stuff we find there. You write about alien intelligences.
- Yes.
- I mean, just these worlds.
- Yes.
Well, the problem with these worlds is that-
- We can't talk to 'em.
- Yes. And, you know, the thing is, what I've spent a lot of time doing is just studying computational systems, seeing what they do- what I now call ruliology, kind of just the study of rules.
- Yeah.
- And what they do. You can kind of easily jump somewhere else in the ruliad and start seeing: what do these rules do?
- Yeah.
- And what you see is, they just do what they do, and there's no human connection, so to speak.
- You know, some people are able to communicate with animals. Do you think you can become a whisperer of these?
- I've been trying. That's what I've spent some part of my life doing.
- Have you heard- and, I mean, are you at the risk of losing your mind?
- Sort of my favorite science discovery is this fact that these very simple programs can produce very complicated behavior.
- Yeah.
- And that fact is, in a sense, a whispering of something out in the computational universe that we didn't really know was there before. I mean, it's like, back in the 1980s I was doing a bunch of work with some very, very good mathematicians, and they were trying to pick away: can we figure out what's going on in these computational systems? And they basically said, look, the math we have just doesn't get anywhere with this. We're stuck.
There's nothing to say. We have nothing to say. And, in a sense, perhaps my main achievement at that time was to realize that the very fact that the good mathematicians had nothing to say was itself a very interesting thing. That was, in some sense, a whispering of a different part of the ruliad, one that was not accessible from what we knew in mathematics and so on.
- Does it make you sad that you're exploring some of these gigantic ideas, and it feels like we're on the verge of breaking through to some very interesting discoveries, and yet you're just a finite being that's going to die way too soon? And that scan of your brain or your full body kind of shows that you're-
- Yeah, it's just a bunch of meat.
- It's just a bunch of meat, yeah. Does that make you a little sad?
- It's kind of a shame. I mean, I'd kind of like to see how all this stuff works out. But I think the thing to realize- it's an interesting sort of thought experiment. You say, okay, let's assume we can get cryonics to work, and one day it will; that will be one of these things that's kind of like ChatGPT. One day somebody will figure out how to get water from zero degrees centigrade down to minus 44 or something without it expanding, and cryonics will be solved, and you'll be able to just put a pause in, so to speak, and reappear a hundred years later or something. The thing, though, that I've increasingly realized is that in a sense one is embedded in a certain moment in time. The things we care about now, the things I care about now, for example: had I lived 500 years ago, many of the things I care about now, it's like, that's totally bizarre; nobody would care about that. It's not even a thing one thinks about. And in the future, the things that most people will think about- you know, one will
be a strange relic of thinking about. It's like, one might have been a theologian thinking about how many angels fit on the head of a pin or something, and that might have been the big intellectual thing. So I think it's- yeah, it's one of these things where I've had the, I don't know, good or bad fortune- I'm not sure, I think it's a mixed thing- that I've invented a bunch of things where I think I can see well enough what's going to happen that, in 50 years, a hundred years, whatever, assuming the world doesn't exterminate itself, so to speak, these are things that will be sort of centrally important to what's going on. And it's both a good thing and a bad thing in terms of the passage of one's life. I mean, it's kind of like, if everything I'd figured out was like, okay, I figured it out when I was 25 years old and everybody says it's great and we're done.
And it's like, okay, but I'm gonna live another how many years? It's all downhill from there, in a sense. It's better in some sense- it sort of keeps things interesting- that I can see a lot of these things. I mean, I didn't expect ChatGPT. I didn't expect the sort of opening up of this idea of computation and computational language that's been made possible by it. This is ahead of schedule, so to speak; the big kind of flowering of that stuff I'd sort of been assuming was another 50 years away. So if it turns out it's a lot less time, that's pretty cool, because hopefully I'll get to see it, so to speak, rather than-
- Well, I think I speak for a very, very large number of people in saying that I hope you stick around for a long time to come. You've had so many interesting ideas. You've created so many interesting systems over the years, and I can see now that GPT and language models broke open the world even more. I can't wait to see you at the forefront of this development, and what you do. And, yeah, I've been a fan of yours, like I've told you many, many times, since the very beginning. I'm deeply grateful that you wrote A New Kind of Science, that you explored this mystery of cellular automata and inspired this one little kid in me to pursue artificial intelligence and all this beautiful world. So Stephen, thank you so much. It's a huge honor to talk to you, to just be able to pick your mind and to explore all these ideas with you. And please keep going; I can't wait to see what you come up with next. And thank you for talking today.
- Thanks.
- We went past midnight. We only did four and a half hours; I mean, we could probably go for four more, but we'll save that till next time. This is round number four, and I'm sure we'll talk many more times.
Thank you so much.
- My pleasure.
- Thanks for listening to this conversation with Stephen Wolfram. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Georg Cantor: "The essence of mathematics lies in its freedom." Thank you for listening, and hope to see you next time.