**A Conversation with Matt Taylor**
In this episode of the TWiML Talk Strange Loop 2017 series, Sam Charrington is joined by Matt Taylor, open-source community manager at Numenta and a well-known figure in the hierarchical temporal memory (HTM) community. As he explains, Numenta's mission is to understand how intelligence works in the neocortex and to implement those principles outside of biological systems, and the HTM community has grown up around that work.
Taylor emphasizes that the HTM community comprises highly intelligent, curious individuals who have been working on a wide range of projects in recent years. He notes that Numenta's papers are open access and published alongside runnable code, allowing anyone to reproduce the research and contribute to it. This approach lets the community share its findings and collaborate, fostering a spirit of transparency.
One of Numenta's current areas of focus is the study of location signals in the brain. Taylor explains that researchers have made significant progress in understanding how the brain generates these signals, particularly in relation to grid cells and place cells. He cites a classic neuroscience experiment in which a mouse runs around a box while its neurons are monitored: specific neurons fire whenever the animal is in specific places, and taken together their firing fields form a hexagonal grid.
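That hexagonal firing pattern can be illustrated with a standard idealized model from the neuroscience literature: approximate a grid cell's firing rate as the sum of three cosine gratings whose orientations differ by 60 degrees. Below is a minimal sketch; the spacing, phase, and threshold values are arbitrary choices for illustration, not parameters from the episode.

```python
import numpy as np

def grid_cell_activity(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y).

    Sums three cosine gratings whose wave vectors are 60 degrees
    apart, producing the hexagonal firing lattice recorded in
    entorhinal cortex experiments.
    """
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the chosen peak spacing
    total = 0.0
    for angle in (0.0, np.pi / 3, 2 * np.pi / 3):  # three gratings, 60 degrees apart
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        total += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return max(0.0, total / 3.0)  # rectify: firing rates are non-negative

# Sample the "mouse in a box": the '#' peaks tile the box hexagonally.
for yy in np.linspace(0, 1, 8):
    print("".join("#" if grid_cell_activity(xx, yy) > 0.5 else "."
                  for xx in np.linspace(0, 1, 32)))
```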
Taylor highlights the connection between these neural activity patterns and the brain's representation of objects in space. He suggests that the allocentric representation of an object — its description independent of where the observer is — is built from the sensory input we receive about it, and that this may rely on the same grid-cell machinery that fires as we move through space. The idea has implications for how we understand perception and cognition.
**Further Exploration**
Taylor's conversation with Sam touches on various topics, including the potential for new forms of machine intelligence. He mentions the HTM systems that members of the community have written in their own favorite languages and environments, building on Numenta's open-source NuPIC platform and its theory of how the cortex learns.
One of the most intriguing examples presented is a thesis project from a researcher in Turkey who created a 3D game environment to simulate sensorimotor interactions. The researcher used HTM theory to explore how the brain represents locations in space, extending it with behaviors and motivations for the simulated agent. The work illustrates how community projects can push HTM beyond the cortex-focused models Numenta itself is building.
Taylor concludes his conversation by emphasizing the importance of continued research and exploration within the HTM community. He notes that there is still much to be learned about how the brain works, particularly in relation to location signals and object representation, and that the community's commitment to open-access papers and collaborative research has driven significant progress on these complex topics.
**Final Thoughts**
As Taylor wraps up his conversation with Sam, he reiterates the significance of the HTM community and its commitment to advancing our understanding of biological and machine intelligence. He encourages listeners to explore the community's work further, including through earlier TWiML Talk episodes such as the conversation with Francisco Webber of Cortical.io.
For those interested in learning more about Taylor's work or exploring related topics, the show notes point to Numenta's open-source code, tutorials, and YouTube channel, and the TWiML website (twimlai.com) offers additional resources. The conversation also highlights the value of supporting and engaging with communities that push the boundaries of machine intelligence.
**Supporting Organizations**
At the top and close of the episode, Sam thanks Nexosis, the sponsor of the Strange Loop 2017 series. Nexosis provides a machine learning API that meets developers where they are, letting them build predictive applications regardless of their mastery of data science.
Listeners can get a free Nexosis API key at nexosis.com/twiml and explore Numenta's open-source tools for hands-on experience with the technologies discussed.
**Stay Connected**
To stay up to date with the HTM community's work and engage with other enthusiasts, Taylor points listeners to Numenta's forums, tutorials, and YouTube channel. Feedback and questions for the show can be sent on Twitter to @twimlai or @samcharrington, or left as a comment on the episode's show notes page.
"WEBVTTKind: captionsLanguage: enhello and welcome to another episode of we'll talk the podcast where I interview interesting people doing interesting things in machine learning and artificial intelligence I'm your host Sam Charrington a big thanks to everyone who participated in last week's twill online meet up and to Kevin tea from Sig up for presenting you can find the slides for his presentation in the meetups like channel as well as in this week's show notes our final Meetup of the year will be held on Wednesday December 13th make sure to bring your thoughts on the top machine learning nai stories for 2017 for our discussion segment for the main presentation prior to mole talk guest Bruno Gonzales will be discussing the paper understanding deep learning requires rethinking generalization by Shi Huang Shang from MIT and Google brain and others you can find more details in register at twill Malaya comm slash Mita if you receive my newsletter you already know this but twill m'l is growing and we're looking for an energetic and passionate community manager to help expand our programs this position can be remote but if you happen to be in st. Louis all the better if you're interested please reach out to me for additional details I should mention that if you don't already get my newsletter you are really missing out and should visit to Malaya comm slash newsletter to sign up now the show you're about to hear is part of our strange loop 2017 series brought to you by our friends at neck SOSUS neck SOSUS is a company focused on making machine learning more easily accessible to enterprise developers their machine learning api meets developers where they're at regardless of their mastery of data science so they can start coding up predictive applications immediately and in their preferred programming language it's as simple as loading your data and selecting the type of problem you want to solve their automated platform trains and selects the best model fit for your data and then outputs predictions to learn more about neck SOSUS be sure to check out the first episode in this series emolia calm / talks last 69 where I speak with co-founders Ryan Seavey and Jason Montgomery be sure to also get your free neck SOSUS API key and discover how to start leveraging machine learning in your next project at neck SOSUS comm / twimble in this episode i speak with matthew taylor open-source manager at Numenta you might remember hearing a bit about Numenta from an interview I did with Francisco Webber of cortical do on twill talk number 10 a show which remains the most popular show of the podcast to date Numenta is basically trying to reverse-engineer the neocortex and use what they learn to develop a neocortical theory for biological and machine intelligence that they call hierarchical temporal memory Met join me at the conference to discuss his talk the biological path towards strong AI in our conversation we discussed the basics of HTM it's biological inspiration and how it differs from traditional neural networks including deep learning this is a nerd alert show and after you listen I would also encourage you to check out the conversation with Francisco which will link to in the show notes and now on to the show hey everyone I am here at the Strange Loop conference in st. 
Louis and I am joined by Matt Taylor who is an open-source community manager at Numenta and I am super excited to have you here with me Matt you just delivered a talk here at the conference I did and I'm looking forward to us diving into that but before we go anywhere else welcome thank you pleasure to have you on the show so why don't we get started by having you tell us a little bit about your background and how you got into machine learning and AI yeah I don't know how far back to go but I mean in computers I got interested in computers when I was enlisted in the Air Force I was an intelligence analyst in the Air Force and then that turned into a Department of Defense job in the same place and I was doing a lot of simulation like air defense simulations okay in like Fortran and nice and shells it was it was kind of archaic but it was a very powerful simulation so that's sort of what got me into programming I didn't really think much about artificial intelligence until I read on intelligence which is a book that our founder Jeff Hawkins wrote I think in 2005 and I was working in the software industry at that point I got away from the old defense industry and moved here to st. Louis to work in software after I got my software degree oh wow and so I would just consult it around st. Louis looked at a bunch of different places okay and did a bunch of different jobs and I read that book I remember reading it on intelligence and another book called the singularity is near by Ray Kurzweil sure and reading those two books at the same time really like flip the script for me and I don't like like it maybe you start wondering all these big questions like what is consciousness what is intelligence how do we even define these things this is it really possible that we could build intelligent systems out of non-biological materials but at the time you know I was just working here doing mundane software programming and stuff but I have a I don't have a math degree I don't I didn't have any experience and you know deep learning or artificial neural networks at that point deep learning wasn't even a big deal yet you know so I just gave it up as a pipe dream and but at some point I got a job at Yahoo as a front-end engineer which is odd because I've never done for an engineer before but I got a job at Yahoo and moved out to the Bay Area worked there for a couple years and out of the blue I got a call from a recruiter for a front-end position at Numenta and I was like ah what okay sure Wow so I jumped on board at that point and just and started doing web stuff eventually moved up to do web services it was the manager web services and when my boss Jeff decided he wanted to take all of their algorithms open-source I was like I want to help with that yeah I like open source I've always been an advocate of open source and been a part of different communities and I was like sign me up that's all we did that's fantastic so maybe for folks that aren't familiar with Numenta you can kind of walk us through the the company and its its position in the machine learning space cuz I think the company has a kind of a unique approach to machine learning and folks that have been around with the podcast for a while and listen to Francisco Weber's podcast might recall Numenta and Jeff Hawkins work coming up in that context because the work that cortical is doing is related to what the mint is doing yeah that podcast so I think was a great primer for us you know for Numenta of course was a partner of ours that happens and 
**Matt:** That podcast was a great primer. Cortical.io is a partner of ours, and Francisco is a brilliant guy who knows exactly what he's talking about. Our mission at Numenta is different from most companies', and it's been the same ever since I joined. It's two things: first, understand how intelligence works in the neocortex; and second, implement those principles outside of biological systems — basically, reverse-engineer the neocortex. Hopefully we'll make money at some point, but honestly we're very R&D-focused right now: a very small company, very focused on the research.

**Sam:** Is it primarily funded by Jeff?

**Matt:** It's privately funded by Jeff and Donna and a group of contributors who have been their longtime associates — they built Palm and Handspring — so there's a crew of board members who I think help with the funding, but I don't know the details.

**Sam:** Is the implication of that mission that the company isn't under traditional venture commercialization pressures? Is it better to think of Numenta like an OpenAI than like a machine learning company X?

**Matt:** I've never thought of it in comparison to OpenAI, but I guess it would be similar in that we're not building products and we're not selling services. What we're doing is trying to make discoveries, and all of our discoveries are based on neuroscience research. Our research engineers are always reading the most recent neuroscience papers and interacting with neuroscientists in the community, trying to answer questions relevant to how intelligence works in the cortex. As we make these discoveries, we test them out and prototype them in software, and when a theory seems to work in software the way we thought, we create patents around those discoveries — specific things we've found about how the brain works and how we've implemented them in software, though they could be implemented in hardware too. The monetization strategy is in the value of the IP itself. We don't want to be distracted by consulting, providing services, or creating applications at this point; we really want to focus on the brain and figuring out how it works, and we think good things will come from that.

**Sam:** One of the big things in your talk, at least from what I got out of the abstract, is that you premised it on the observation that there's a lot of excitement about neural nets and deep learning, but these are all based on a rather dated model of a neuron, and I presume you then walked through some of the things we've learned since. Walk us through your talk and the ideas you wanted to share.

**Matt:** First off, I don't have anything bad to say about artificial neural networks or deep learning; that's necessary technology that we needed to build. But one of my main points is that it's not going to naturally evolve into what people call strong AI. The first thing I say in my talk is that weak AI is not intelligent and won't become intelligent; there isn't going to be some exponential growth and then, suddenly, sentience. There are core things about the ANN point-neuron model specifically that don't have the capacity for intelligence as we understand it.

**Sam:** So what are some of those core things? Dive right in.

**Matt:** There are two main ones. The first is that the neuron needs to have three states, where current artificial neurons have two: active or not. We add the idea of a predictive state. A neuron goes into a predictive state to indicate that it thinks, based on the context of its input, that it's going to become active soon, and that prediction is core to everything in our theory. We take that from how the brain works: your brain is constantly making predictions about what it's going to see next and feel next, and you can see that by investigating depolarized pyramidal neurons. In neuroscience these cells are called depolarized, which means they're primed to fire, and we're missing that in the ANN neuron model; there's no concept of it. The second thing is that pyramidal neurons have different integration zones; they don't just have one group of connections to other neurons. They have apical dendrites, which provide feedback from layers above or from other parts of the cortex, and a distal zone, which is lateral — it can get connections from another layer or from within the layer itself — and that provides context. Both of these provide context for the proximal input, which is really the driver input, typically coming from the direction of the senses. That's the sensory input we need to process, and pyramidal neurons process it in the context of those other zones, the distal and apical input. Those are the two things I think are really missing from the point-neuron model.
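To make those two properties concrete, here is a toy three-state cell. This is a minimal sketch of the idea only — not Numenta's implementation — and the thresholds and synapse sets are invented for illustration: the cell fires when proximal (driver) overlap crosses a threshold, and becomes predictive (depolarized) when distal context is recognized.

```python
from dataclasses import dataclass
from enum import Enum

class CellState(Enum):
    INACTIVE = 0
    PREDICTIVE = 1  # depolarized: primed to fire, not yet firing
    ACTIVE = 2

@dataclass
class PyramidalCell:
    """Toy HTM-style neuron with proximal (driver) and distal (context) zones."""
    proximal_threshold: int = 3
    distal_threshold: int = 2
    state: CellState = CellState.INACTIVE

    def step(self, proximal, distal, connected_proximal, connected_distal):
        # Proximal overlap drives activation; distal overlap drives prediction.
        drive = len(proximal & connected_proximal)
        context = len(distal & connected_distal)
        if drive >= self.proximal_threshold:
            self.state = CellState.ACTIVE
        elif context >= self.distal_threshold:
            self.state = CellState.PREDICTIVE
        else:
            self.state = CellState.INACTIVE
        return self.state

cell = PyramidalCell()
print(cell.step(proximal={1, 2, 3}, distal=set(),
                connected_proximal={1, 2, 3, 4}, connected_distal={7, 8}))  # ACTIVE
print(cell.step(proximal={9}, distal={7, 8},
                connected_proximal={1, 2, 3, 4}, connected_distal={7, 8}))  # PREDICTIVE
```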
**Sam:** I get that neuroscience research has identified these things in human biology, but it's not clear to me how we've demonstrated that they're required for intelligence, or even that they can't be approximated with artificial neural networks as we currently know them. The different zones made me think, well, we just have different inputs with different weights, right? And as far as predictions are concerned, if we're able to predict at a network level, who's to say the neuron itself has to have a predictive state in order to create intelligence?

**Matt:** It's true that current artificial neural networks and deep learning could potentially put together models that replicate the parts of the neuron we're saying are required for intelligence, and we use them for prediction all the time. But it doesn't feel natural to me. And think about this: recently there's been a big discussion in the deep learning community about backpropagation, because Geoff Hinton has said, let's give up on backpropagation, go back to the drawing board, and figure out what's really going on. We did that twelve years ago — we never tried backpropagation, because we don't see backpropagation happening in the brain. For the longest time Hinton and Bengio were insisting that backpropagation is happening in the brain, but we just don't see it. So that's a move in our direction, and even the DeepMind crew recently had a blog post about the importance of neuroscience contributing to artificial intelligence. It feels to me like the community is starting to move our way. Maybe they'll be able to hack the properties we're saying the neuron model needs into deep learning systems; that could happen, but I don't think it will happen without incorporating those ideas. And Bengio just this week published a paper — I forget the exact title, something about consciousness.

**Sam:** I did not see that. "Controversial" might be too strong, but it raised a lot of questions, because he proposed that somehow we need to take some notion of consciousness into account in our models, and the paper didn't present any experimental results; it was just a prod to the community.

**Matt:** Anything about consciousness is going to be controversial, because what is consciousness, Sam? What is intelligence? That's where I started my talk: I asked people in the audience, who believes humans are intelligent? They all raised their hands. Who believes chimpanzees are intelligent? And you just go down the evolutionary ladder and watch the hands go down. By the end I'm asking who thinks plants are intelligent, and there are still one or two people who do — and they may be right, we don't know. There's a lot of disagreement, but everybody believes humans are intelligent, so at least we can start there, and I think by association we can include most primates, because they have the same neocortical structure that we have. So we focus the neuroscience on the thing we all agree is intelligent: the neocortex of the mammalian brain.

**Sam:** So you started off by level-setting on intelligence and how open-ended that is, and then talked about the neuron. How do you get from there to systems?

**Matt:** Think about the pyramidal neuron, with its integration zones. It's hard to visualize without a picture — my talk will be online at some point, and it has a bunch of drawings — but a pyramidal neuron has these zones: distal, which is lateral, to the side; proximal, which comes from below; and apical, which comes from on top. Now, the cortex has a homogeneous structure. If you took your neocortex, unwrinkled it, unfolded it, and flattened it out, it would be a sheet of cells about the size of a dinner napkin and about the thickness of one, and it's homogeneous throughout — the same structure everywhere. There's a sort of computational unit in the cortex called a cortical column, and that's a more recent neuroscience discovery. We've known for about a hundred years that the cortex has layers — distinct layers in the sheet whose structures were different enough that we thought they must be doing different things, without knowing exactly what. Now we know they're not just layers; there are also columns, and we can treat each column as an individual computational unit that can share the output of its computation with its neighbors. A column has layers within it, and every layer is full of pyramidal neurons. So imagine a cylindrical column cut up into layers, where each layer's neurons all have these integration zones and are oriented in exactly the same way: apical up, proximal from below, distal to the side. Because of that, each layer has the same integration-zone properties as an individual neuron, and you can treat the layer itself as a computational unit. A layer gets a bunch of proximal input piped into its neurons in different ways from some space that's generally representing spatial sensory features changing over time, and depending on where it gets its proximal input and where it gets its distal input, it does different things. There are a number of layers in the cortex — somewhere between six and ten, depending on which neuroscientist you talk to — and each is structured a little differently, with minor deviations in the organization of the pyramidal neurons that give them somewhat different computational properties.

**Sam:** Organization in what sense?

**Matt:** For example, we have algorithms that we're saying are happening in these layers. One is called the spatial pooling algorithm: it takes some input and spreads and normalizes it while retaining the semantics of the input, and it creates these mini-column structures of neurons; some layers have this.
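The spatial pooling step can be caricatured as a fixed-sparsity, k-winners-take-all mapping: columns compete on how well their potential connections overlap the input, and similar inputs activate overlapping sets of columns. A minimal sketch, leaving out the learning and boosting that the real spatial pooler performs:

```python
import numpy as np

rng = np.random.default_rng(42)

N_INPUT, N_COLUMNS, N_ACTIVE = 128, 64, 8  # ~12% column sparsity

# Each column watches a random, fixed subset of the input space.
potential = rng.random((N_COLUMNS, N_INPUT)) < 0.3

def spatial_pool(input_bits: np.ndarray) -> np.ndarray:
    """Return indices of the k columns that best overlap the input."""
    overlaps = potential @ input_bits                  # per-column overlap score
    return np.sort(np.argsort(overlaps)[-N_ACTIVE:])   # k-winners-take-all

x = np.zeros(N_INPUT, dtype=int); x[10:30] = 1
y = np.zeros(N_INPUT, dtype=int); y[15:35] = 1  # overlapping input...
print(spatial_pool(x))
print(spatial_pool(y))  # ...yields overlapping active columns (shared semantics)
```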
**Matt (continuing):** Typically, the distal connections from each of those neurons, as it's receiving proximal input, start connecting to each other over time. When you take the distal input to a layer and say, we're not going to get distal input from somewhere else — we're going to have all the pyramidal neurons within the layer give each other distal input — what you're doing is naturally creating a temporal context. When your only context for some input is the state you've been in in the past, that's temporal context. If you're getting that input from somewhere else, who knows what the context means; but if you're giving yourself context, you're looking at your own past, and the layer has the context of its own history when you loop it back on itself. That's one of the core things we discovered; we call it the temporal memory algorithm. It relies on those mini-column structures: it takes the bits of input coming in from some sensory organ, or perhaps from another part of the cortex, normalizes them into column activations, and then activates cells within each column based on the distal context each cell is getting. What you get is that it starts to tie sequences together. When you see a pattern repeating over and over, the distal connections get reinforced: cells grow connections to the active cells that represented the previous spatial input, so when another input arrives there may be a prediction — "I saw this last time, I'm going to be active next" — and if it's right, and the next input activates a column that the predictive cell is in, the cell becomes active, and it was a correct prediction.

**Sam:** The context you're creating for me is how I felt when Francisco was explaining some of this: whoa, I need visuals.

**Matt:** That's why I created a bunch of videos on our YouTube channel, to try to explain it all visually.
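Stripped of the mini-columns and synapses, the behavior described here — learn which pattern follows which, predict, and flag unpredicted inputs — reduces to something like the following first-order toy. Real temporal memory is high-order (different cells in a column represent the same input in different contexts), so treat this only as a sketch of the prediction-and-anomaly loop:

```python
from collections import defaultdict

class ToySequenceMemory:
    """First-order stand-in for HTM temporal memory: remembers which
    pattern followed which, and reports an anomaly score of 1.0 when
    the current input was not predicted from the previous one."""

    def __init__(self):
        self.transitions = defaultdict(set)  # pattern -> patterns seen next
        self.previous = None

    def step(self, pattern: frozenset) -> float:
        predicted = self.transitions[self.previous] if self.previous is not None else set()
        anomaly = 0.0 if pattern in predicted else 1.0
        if self.previous is not None:
            self.transitions[self.previous].add(pattern)  # learn the transition
        self.previous = pattern
        return anomaly

tm = ToySequenceMemory()
a, b, c = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
for p in [a, b, c, a, b, c, a, c]:   # the final a -> c jump is novel
    print(tm.step(p), end=" ")       # anomaly falls to 0.0, then spikes at a -> c
```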
**Sam:** So you explained the micro-structure, then the macro-structure. What's next? Strange Loop is a developer conference — how do you get from there to "okay, how do I build something"?

**Matt:** There are two questions there. Strange Loop is a developer conference, but it's also a weird, very eclectic conference: if you have something that's on the fringe but very interesting, you can get in and talk, and I think that's why I got this talk in.

**Sam:** But still, in addition to developing IP, as I understand it Numenta offers tools that let people actually use this stuff. Is that correct?

**Matt:** Right, all of our code is open source, and anybody can use it if they want to. I've created a lot of tutorials and code samples, and I try to make it as approachable as possible for our community. We've got a very active forum with lots of discussions about the theory and the code.

**Sam:** As a user of those open-source tools, do I need to think about columns and dendrites and all of that, or am I thinking in terms of other representations?

**Matt:** It can go either way, depending on what you're trying to do. We have a pretty diverse, eclectic community — typically people who are really interested in how the brain works. They're always very smart, inquisitive, and curious, and it amazes me what people try to do with our stuff. I always encourage it: give it a try, who knows. The software we open-sourced is called NuPIC, the Numenta Platform for Intelligent Computing, and we released 1.0 a few months ago; it includes everything up to what I just talked about, the temporal memory part. A few years back, after we went through the research cycle and made the temporal memory discovery — a big one for us, seeing how sequences are memorized in the cortex — we dumped it all open source and brainstormed sample applications people might want to use. One was rogue behavior detection: something you can install on a computer that monitors the machine's metrics over days and weeks and can indicate whether a user is behaving oddly, based on the time of day and what they're doing. We also had an IT analytics product that hooked up to AWS; we licensed that to another company called Grok, and they're actively selling it to IT companies that run a lot of servers on Amazon. It connects through CloudWatch to all the different metrics coming off your servers, creates models for all of them, and just starts streaming data in — they're essentially pre-configured — and it gives you anomaly indications over time. After it's seen the server data for a while, it gets an idea of what's normal and what's not, and then it notifies you that something abnormal is happening, with this server or that one or a combination of them. It doesn't know what's wrong, but it can tell you something abnormal is happening. Anywhere there's streaming analytics that needs anomaly detection, I think there's a potential application for what we have right now with NuPIC 1.0.

There's also something I think is still a big opportunity for people who want to build something novel: we figured out a way to encode geospatial location into an SDR. Remember when you and Francisco talked, it was all about SDRs — sparse distributed representations; he calls them semantic fingerprints. We found a way to encode location information — latitude, longitude, altitude — into an SDR, so we can take something that moves through time and space and give the intelligence algorithms a way to understand the patterns in its movement. The example I always use: I walk my dogs on the same route every day. If I take a tracker with me and run the points back through the algorithm, the first time it sees the walk everything is anomalous, because it's brand new. The second time it's less so; by the third time it's no big deal, this is normal. But as soon as I deviate from the path — even if I just walk on the other side of the street, or my dogs decide to stop at a different tree — I get anomaly indications along my path. I think this has big applications in fields like logistics, air traffic control, human tracking, and pet tracking, where things follow normal routes and you don't want to specify to a T that "if they're not at this point at this time, something's wrong"; you just want a sense of whether the general movement is strange, whether it's been seen before or not.
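One rough way to picture a geospatial encoder: snap the coordinate to a grid of cells, hash each nearby cell to a few bit positions, and take the union, so nearby locations share many active bits and distant ones share almost none. This is an illustrative simplification written for this writeup — not Numenta's actual coordinate encoder, which among other things scales its radius with speed — and all parameters are invented:

```python
import hashlib
import itertools

SDR_SIZE, BITS_PER_CELL, RADIUS = 1024, 3, 1  # illustrative parameters

def cell_bits(cx: int, cy: int) -> set:
    """Deterministically map a grid cell to a few SDR bit positions."""
    digest = hashlib.sha256(f"{cx},{cy}".encode()).digest()
    return {int.from_bytes(digest[i:i + 4], "big") % SDR_SIZE
            for i in range(0, 4 * BITS_PER_CELL, 4)}

def encode_location(lat: float, lon: float, resolution: float = 0.001) -> set:
    """Encode a coordinate as the union of bits from nearby grid cells."""
    cx, cy = int(lat / resolution), int(lon / resolution)
    bits = set()
    for dx, dy in itertools.product(range(-RADIUS, RADIUS + 1), repeat=2):
        bits |= cell_bits(cx + dx, cy + dy)
    return bits

home = encode_location(38.6270, -90.1994)    # St. Louis
nearby = encode_location(38.6280, -90.1994)  # one grid cell north: big overlap
far = encode_location(40.7128, -74.0060)     # New York: almost no overlap
print(len(home & nearby), len(home & far))
```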
**Sam:** For the network and server anomaly detection, and the example you gave before that, those are things you can do with a variety of techniques. Are there cases you've found where, because of the approach you take, you're just best in class — where if you need to do X, this is the best way to do it, whether in terms of the complexity of building the solution, computational cost, or some other metric?

**Matt:** We wondered the same thing, but the problem we had several years ago is that there were no standard benchmarks for streaming temporal anomaly detection. Most benchmarks are on spatial data, and most machine learning techniques work on spatial data only. We couldn't find anything to compare our work against what, say, LSTMs do — they have some ability to do temporal analysis, in batches that move along. So we created a benchmark we call the Numenta Anomaly Benchmark, or NAB. We set up ours as one entry in the running, we set up an LSTM entry, and there's one from Twitter and one from Etsy — they have open-source projects that do streaming anomaly detection. We assembled input datasets — things like the number of taxi rides in New York City over a period of time, where you can look at the data, see that something weird definitely happened, and go look it up: oh, there was a big game in town. We found datasets like that, with a good amount of data and obvious anomalies that were labeled, ran all of these algorithms against them, and scored them on how well they detected the anomalies. I believe we weighted the scoring pretty heavily against false positives — I can't remember exactly, but it's open source, on GitHub at numenta/NAB. And of course we're the winner. When we set this contest up we thought nobody could beat us; then somebody came along and beat us, so we said, okay, we're going to fix that — and we fixed it, and we're winning again. There's always some tweaking you can do to get those last few percent.
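The scoring intuition — credit detections that land inside labeled anomaly windows, and penalize false positives more heavily than misses — can be sketched in a few lines. NAB's real scoring uses sigmoid-weighted positions within windows and multiple application profiles; this toy keeps only the flavor, and the penalty weights are invented:

```python
def score(detections, anomaly_windows, fp_penalty=2.0, fn_penalty=1.0):
    """Toy anomaly-benchmark score: +1 per window detected,
    -fp_penalty per detection outside every window,
    -fn_penalty per window never detected."""
    hit_windows = set()
    false_positives = 0
    for t in detections:
        hits = [i for i, (lo, hi) in enumerate(anomaly_windows) if lo <= t <= hi]
        if hits:
            hit_windows.update(hits)
        else:
            false_positives += 1
    misses = len(anomaly_windows) - len(hit_windows)
    return len(hit_windows) - fp_penalty * false_positives - fn_penalty * misses

windows = [(100, 120), (300, 320)]
print(score([105, 500], windows))  # one hit, one false positive, one miss -> -2.0
```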
**Sam:** That leaves me with the impression that this is a set of tools you strongly feel closely models the inner workings of the brain as we understand it, and that over time — I'm assuming you're banking on order-of-magnitude capabilities over current approaches — what you can do with Numenta's tools and what you can do with other tools will diverge. But today it doesn't sound like there's a bang-on-the-table claim that if you need to do X, these tools will get you there a hundred times faster and a hundred times cheaper, or even ten. It's an interesting approach, worthwhile to learn and understand the thinking behind, but there's no killer app yet. Is that what I should take away?

**Matt:** There's no killer app, but we're patient. There's a lot about the brain we still don't understand, and what we currently have in NuPIC 1.0 is just the temporal memory work. All the other work we're doing lives in research repositories that attach on top of that. We're taking those core algorithms — which don't change, because the core algorithms in your brain don't change — and discovering that you can do lots of different things with them. We're building structures now because we think we understand how sensorimotor integration happens — how sensory input combines with movement. It's the integration of two layers in one of those columns I told you about. We can have one layer running the same temporal memory algorithm I described earlier, with the mini-columns and everything, but we don't feed it its own distal input; we don't give it a temporal context. Instead we pipe in distal context from somewhere else — from a different layer. If we assume the output of that other layer provides location information associated with the sensory input arriving proximally from below, then the driver signal is the sensory input, and the distal signal represents the object being touched and the location on the object where that sensory feature was felt. That layer can then represent every object we've ever touched and what sensory input we felt at each location on it, and it provides that information to another layer we call the output layer. The output layer has a somewhat different structure — it doesn't have the mini-columns like the one underneath — but over time it represents a library of every object we've ever learned. We can train this thing: okay, this is a coffee cup, touch it all over; here's a banana, touch it all over; and build a library of objects that the top layer represents. The bottom layer basically represents all the sensory input you've felt at every location on every object you've touched. It's the temporal memory circuitry, but it's not doing temporal memory anymore; it's doing sensory-feature-and-location association, just because we've changed where the distal input comes from. It's no longer giving itself distal input; it's getting it from somewhere else, and it does something entirely different.

**Sam:** It sounds like the idea is — if you think about deep learning object recognition, our best guess at how the layers work is that some layers pick out edges and some pick out colors, so when the input is a banana you get the curvy features and the yellow firing. What you're describing sounds like the internals capture a richer representation.

**Matt:** The big difference is that our model incorporates movement. Can you name anything that is intelligent that cannot move?

**Sam:** ...Nothing comes to mind.

**Matt:** Nobody ever can, because there's nothing intelligent that can't move. We believe that's a core feature of intelligence: the ability to interact with your environment has to be baked into the architecture of the intelligent system. It's not something you can just add — you can't bolt behavior onto a system after the fact; it has to be baked into the flow of information. When you move your finger to touch an object, you know where your finger is going to move, because you just commanded it to move there, so that information is available to your brain. That loop has to be baked in, so that every time you touch something you know where your finger is going and what you expect to feel — and if you don't feel it, something's wrong.
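The two-layer arrangement can be caricatured as follows: the input layer pairs each sensed feature with a location signal, and the output layer keeps a library of learned objects, narrowing the candidates to those consistent with every (location, feature) pair felt so far. A toy sketch of that reading — mine, not the implementation from Numenta's papers — with invented object names and features:

```python
class ToyObjectLayer:
    """Output layer as a library of objects, each learned as a set of
    (location, feature) pairs; inference narrows candidates as the
    'finger' moves and senses new features."""

    def __init__(self):
        self.library = {}  # object name -> {(location, feature), ...}

    def learn(self, name: str, pairs: set):
        self.library[name] = pairs

    def infer(self, touches):
        candidates = set(self.library)
        for location, feature in touches:  # each movement + sensation
            candidates = {o for o in candidates
                          if (location, feature) in self.library[o]}
        return candidates

layer = ToyObjectLayer()
layer.learn("coffee cup", {("side", "smooth"), ("top", "rim"), ("side", "handle")})
layer.learn("banana", {("side", "smooth"), ("end", "stem")})

# One touch is ambiguous; a second movement and sensation disambiguates.
print(layer.infer([("side", "smooth")]))                  # {'coffee cup', 'banana'}
print(layer.infer([("side", "smooth"), ("top", "rim")]))  # {'coffee cup'}
```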
**Sam:** And Matt has been demonstrating all of this with a glass of water — we've been experimenting with a video camera setup here, so we may be able to show the visual aids.

**Matt:** It helps to have the visual aids, but like I said, if you want visuals, go to Numenta's YouTube channel; I've got lots of stuff there.

**Sam:** Nice. Anything else you covered in your talk, or any final thoughts you want to leave us with?

**Matt:** I just want to emphasize that we have a really nice community. I feel like I'm the community manager, so of course I'm going to say that, but honestly there are some really bright people — some of whom have shown up just in the past year — doing really interesting things with HTM. All of our papers are open access: everything we theorize about, we write papers on and put out there, and we do it with code. Here's a paper, here's a simulation, here's the code; you can run it yourself, so if you don't believe us, you can try it yourself. There are lots of people in our community who have decided to write their own HTM system in their own favorite language and environment, so there are a lot of people doing new and interesting things and creating their own visualizations. The most recent was a thesis from a guy in Turkey who did an amazing sensorimotor simulation in a 3D game environment, with a player trying to find a point. He wrote his whole thesis on it: he used our theory and then attached ideas on top — he theorized further, asking "what if I add this and this?" — trying to build a more complete model of the brain, not just the cortex, because we're really only working on the cortex right now. He was trying to incorporate other things, like real behaviors, real drivers: what is the motivation for the agent running the intelligence? We're not quite there ourselves; we're focusing our research right now on location — that location signal I told you about. We've got a really good idea of how that location signal is generated, and it's super interesting: the way your brain tracks the location of things is amazing. I probably don't have the knowledge to explain it fully, but it's about grid cells and place cells, and if anybody wants to go research it, there are some really interesting neuroscience papers coming out about grid cells. Here's a quick example. If you put a mouse in a box, let it run around, and monitor its neurons, you'll see that as it runs around, certain neurons fire when it's in certain places; you can identify the cells that fire whenever it's in a given place.

**Sam:** Specific neurons fire for specific places? Wow.

**Matt:** Yes. And if you look at it, it forms a hexagonal grid: a hexagonal pattern of neurons firing as you move through space, representing where you are in the space you're occupying. We think that interplay of neurons — that idea of neurons representing locations in space — plays out at a bigger level to represent objects in space, too. You have an allocentric representation of any object you can imagine — allocentric meaning not related to where you are, not egocentric. Imagine a cup; if you used, say, its center of gravity as its center, you could define it entirely based on all the sensory input you've ever received about it, everything you've felt or seen. We think that has something to do with grid cells: how those objects are stored — how they're defined in 3D space — is linked to the sensory input we receive about them and to which cells are firing in space as we imagine what we're touching.
**Sam:** Wow, super interesting stuff. I'll definitely make a note for folks to listen to the conversation with Francisco — it came before this one, or maybe this one should be the prerequisite for that one, I don't know.

**Matt:** Hopefully it stands alone.

**Sam:** Awesome. Thanks so much, Matt.

**Matt:** You're welcome; I appreciate the opportunity.

**Sam:** All right everyone, that's our show for today. Thanks so much for listening and for your continued feedback and support. For more information on Matt or any of the topics covered in this episode, head on over to twimlai.com/talk/71. To follow along with our Strange Loop 2017 series, visit twimlai.com/stloop. Of course, you can send along your feedback or questions via Twitter to @twimlai or @samcharrington, or leave a comment right on the show notes page. Thanks again to Nexosis for their sponsorship of the show; check out twimlai.com/talk/69 to hear my interview with the company's founders, and visit nexosis.com/twiml for more information and to try their API for free. Thanks again for listening, and catch you next time.
Louis all the better if you're interested please reach out to me for additional details I should mention that if you don't already get my newsletter you are really missing out and should visit to Malaya comm slash newsletter to sign up now the show you're about to hear is part of our strange loop 2017 series brought to you by our friends at neck SOSUS neck SOSUS is a company focused on making machine learning more easily accessible to enterprise developers their machine learning api meets developers where they're at regardless of their mastery of data science so they can start coding up predictive applications immediately and in their preferred programming language it's as simple as loading your data and selecting the type of problem you want to solve their automated platform trains and selects the best model fit for your data and then outputs predictions to learn more about neck SOSUS be sure to check out the first episode in this series emolia calm / talks last 69 where I speak with co-founders Ryan Seavey and Jason Montgomery be sure to also get your free neck SOSUS API key and discover how to start leveraging machine learning in your next project at neck SOSUS comm / twimble in this episode i speak with matthew taylor open-source manager at Numenta you might remember hearing a bit about Numenta from an interview I did with Francisco Webber of cortical do on twill talk number 10 a show which remains the most popular show of the podcast to date Numenta is basically trying to reverse-engineer the neocortex and use what they learn to develop a neocortical theory for biological and machine intelligence that they call hierarchical temporal memory Met join me at the conference to discuss his talk the biological path towards strong AI in our conversation we discussed the basics of HTM it's biological inspiration and how it differs from traditional neural networks including deep learning this is a nerd alert show and after you listen I would also encourage you to check out the conversation with Francisco which will link to in the show notes and now on to the show hey everyone I am here at the Strange Loop conference in st. Louis and I am joined by Matt Taylor who is an open-source community manager at Numenta and I am super excited to have you here with me Matt you just delivered a talk here at the conference I did and I'm looking forward to us diving into that but before we go anywhere else welcome thank you pleasure to have you on the show so why don't we get started by having you tell us a little bit about your background and how you got into machine learning and AI yeah I don't know how far back to go but I mean in computers I got interested in computers when I was enlisted in the Air Force I was an intelligence analyst in the Air Force and then that turned into a Department of Defense job in the same place and I was doing a lot of simulation like air defense simulations okay in like Fortran and nice and shells it was it was kind of archaic but it was a very powerful simulation so that's sort of what got me into programming I didn't really think much about artificial intelligence until I read on intelligence which is a book that our founder Jeff Hawkins wrote I think in 2005 and I was working in the software industry at that point I got away from the old defense industry and moved here to st. Louis to work in software after I got my software degree oh wow and so I would just consult it around st. 
Louis looked at a bunch of different places okay and did a bunch of different jobs and I read that book I remember reading it on intelligence and another book called the singularity is near by Ray Kurzweil sure and reading those two books at the same time really like flip the script for me and I don't like like it maybe you start wondering all these big questions like what is consciousness what is intelligence how do we even define these things this is it really possible that we could build intelligent systems out of non-biological materials but at the time you know I was just working here doing mundane software programming and stuff but I have a I don't have a math degree I don't I didn't have any experience and you know deep learning or artificial neural networks at that point deep learning wasn't even a big deal yet you know so I just gave it up as a pipe dream and but at some point I got a job at Yahoo as a front-end engineer which is odd because I've never done for an engineer before but I got a job at Yahoo and moved out to the Bay Area worked there for a couple years and out of the blue I got a call from a recruiter for a front-end position at Numenta and I was like ah what okay sure Wow so I jumped on board at that point and just and started doing web stuff eventually moved up to do web services it was the manager web services and when my boss Jeff decided he wanted to take all of their algorithms open-source I was like I want to help with that yeah I like open source I've always been an advocate of open source and been a part of different communities and I was like sign me up that's all we did that's fantastic so maybe for folks that aren't familiar with Numenta you can kind of walk us through the the company and its its position in the machine learning space cuz I think the company has a kind of a unique approach to machine learning and folks that have been around with the podcast for a while and listen to Francisco Weber's podcast might recall Numenta and Jeff Hawkins work coming up in that context because the work that cortical is doing is related to what the mint is doing yeah that podcast so I think was a great primer for us you know for Numenta of course was a partner of ours that happens and Francisco is a brilliant guy who knows exactly absolutely what he's talking about so our mission that Numenta is different and I think most companies it is and it's always been this ever since that I've been at the company it's two things I understand how intelligence works in the neocortex yep and the second thing is implement those things outside of biological systems like try and build it you know it's basically reverse engineer the neocortex is our mission and hopefully you know we'll make money up at some point but honestly we're really really kind of R&D focused right now yeah a very small company very focused on the research yeah and is it primarily primarily just funded by Jeff yeah it's probably it's privately funded Jeff Donna yeah yeah a group of you know contributors that have been longtime associates of Jeff and Donna you know they built palm and handspring and so that there's a crew of board members that I think helped with the funding but I don't know the details about them and is the implication of that mission though that the company is not under your traditional and adventure commercialization pressures is it yeah is it better to think of namenda is like an open ai then like a you know machine learning company acts I guess I've never thought of it in comparison to open 
AI but I guess it would be similar in that we're not building products we're not selling services ah what we're doing is we're trying to make discoveries so all of our discoveries are based on neuroscience research you know our research engineers are are always reading the most recent neuroscience papers that come out there they're interacting with different neuroscientists in the community trying to answer questions that are relevant to how we understand intelligence and Crytek's and what we do is as we make these discoveries and we test them out and we'll prototype them and software and think oh this is how it works it actually does our theories seems to work and software the way we thought then we all create patents around around those discoveries so they're you know specific ones about things that we've discovered about how the brain is working and how how we've implemented it and apparently in software but it could be implemented in hardware - okay the idea being you know the monetization strategy is in the value of the IT itself so we don't want to be distracted by consulting by providing services or by creating applications at this point we really want to focus on on the discovery on the brain trying to figure out how it works and and we think that good things will come with that okay so for your talk at one of the one of the big things that I think you talked about at least from the perspective from what I got out of the abstract was you kind of promised it on you know hey we're there's a lot of excitement out there about neural nets and deep learning and things like that but these are all based on a model of a neuron that is you know rather dated right and I presume you then walk through some of the new things that we've learned since then you know kind of walk us through your your talk and sure the ideas that you wanted to share with folks sure so I'd like to say first off that I don't have anything bad to say about artificial neural networks or deep learning I sure that that's necessary technology that we that we needed to build them but one of my main points is that it's not going to naturally evolve into what people call strong AI and the first thing I say in my talk is weak AI is not intelligent and won't become intelligent there's not going to be some some this like exponential growth and suddenly you know sentience there's some core things about the in end point neuron models specifically that don't have the capacity for intelligence as we understand though so what are some of those core things yeah dive right in there's there's there's two main things one is that the neuron needs to have three states and current neurons have two states active or not and we add the idea of a predictive state so the neuron goes into a predictive state to indicate that it thinks based upon the context of its input that it's going to be active soon and that prediction is core to everything about our theory that and and we you know we take that from understanding how the brain works your brain is constantly making predictions about what it's going to see next one's going to feel next all the time and you can see that by investigating these depolarized pyramidal neurons and the neuroscience they call these cells a depolarized which means they're primed to fire and we're missing that in the an and neuron model there's no there's no concept of that so there's that and the other thing is pyramidal neurons have different integration zones it's not they don't just have one group of connections to 
other neurons they've got apical dendrites that kind of provide feedback from layers that are either above it or different parts of the cortex there is distal a distal zone that's that's lateral so that's getting connections from it could be from another layer it could be from within the layer itself but that provides context and these both provide context for the proximal input the proximal input is is really the driver input that's typically coming from the direction of the senses and so that's that's like the sensory input that we need to understand we need to process and the pyramidal neurons do that in the context of these other zones in the context of distal input and a typical input so those are the two things I think we're really missing from that point neuronal so I get that the neuroscience research has identified these things and in human biology but it's not clear to me how we've demonstrated that those are required for intelligence or even that those things can't be approximated with artificial neural networks as we currently know that like the last thing the different zones you know made me think of well you know we just have different inputs in different weights right and then as far as predictions are concerned if we're able to predict at a network level you know who's to say that the neuron itself has to have that predictive state in order to create intelligence well it's it's through that current artificial neural networks of deep learning could potentially put together models that replicate the the parts of the things about the neuron that we're saying they're required for intelligence I think we use them for prediction all the time yeah but I don't know that that's it doesn't feel natural to me and and think about this recently there's been this big discussion in the in the declarant community about back propagation because Geoff Hinton has recently said let's give up on back propagation go back to the drawing board and try and figure out what's really going on we did that 12 years ago so we never tried back propagation we've always tried to do this but we don't see back propagation happening in the brain and for the for the longest time you know Hinton and Ben geo were insisting that back propagation is happening in the brain we just don't see it so I think so that's kind of a move in our direction and even from like the deep mind crew they recently had this blog post about important neurosciences to contributing to artificial intelligence so it feels to me like the community's starting to move in our direction and maybe they will be able to hack these properties that we're saying we need in the neuron model into deep learning systems that could happen but I don't think that it will happen without them doing something to incorporate those ideas and been geo just this week published a paper that talked about I forget the exact title something about consciousness I don't know that I saw that I did not see that that's and it was controversial might be strong but it raised a lot of questions because he you know proposed that somehow we need to take into account some notion of consciousness in our models but the paper didn't present any experimental results of whatever it was just like a prod to the community anything about consciousness is going to be controversial because what is consciousness Sam alright what is intelligence yeah that's where I started my talk off with was asking people in the audience who believes humans are intelligent and they all raised their hands 
Who believes chimpanzees are intelligent? And you just go down the evolutionary ladder and watch hands going down. By the end of it I'm asking who thinks plants are intelligent, and there are still one or two people who think plants are intelligent, and they may be right; we don't know. So there's a lot of disagreement. The thing is, everybody believes humans are intelligent, so at least we can start with that. We have human beings, and I think by association we can include most primates too, because they have the same neocortical structure that we have. So we focus the neuroscience on what we all know is intelligent, and that's the neocortex of the mammalian brain.

So you started off by level-setting on intelligence, and just how open-ended that is, and then talked about kind of the evolution of the neuron. How do you get from there to systems?

Okay, so think about the pyramidal neuron, like I said, with its integration zones. It's hard to visualize without a picture, and Francisco said the same thing, but my talk will be online at some point, and in it I present a bunch of drawings and stuff.

If you've got a link to it, shoot us a link.

Okay. If you look at a pyramidal neuron, it has these integration zones: distal, which is lateral, to the side; proximal, which comes from below; and apical, which comes from on top. The cortex has this homogeneous structure. If you took your neocortex and unwrinkled it and unfolded it and flattened it all out, it's a sheet of cells about the size of a dinner napkin and about the thickness of a dinner napkin, and it's homogeneous throughout; it has the same structure. And there's sort of a computational unit in the cortex called a cortical column. This is a more recent neuroscience discovery. We've known for like a hundred years that the cortex has layers: there are these distinct little layers in the sheet, and their structure was different enough that we thought, well, they're doing different things, but we weren't exactly sure what. Now we know they're not just layers; there are also columns, and we can take each column and say, okay, each one of these is an individual computational unit, and maybe they can share the output of their computations with their neighbors. So a column has layers within it, and every layer is full of these pyramidal neurons. Imagine a column, sort of a cylindrical column, cut up into layers, where each of those layers is full of pyramidal neurons that have these integration zones: apical up to the north, sort of, proximal to the south, and distal to the side. Each layer itself has the same integration-zone properties as an individual neuron, because the neurons are all oriented in exactly the same way, so you can treat the layer as a computational unit. A layer gets a bunch of proximal input that all gets piped into its neurons in different ways from some space that's generally representing spatial sensory features changing over time, or something like that. So you can think of the layer itself as a computational unit: depending on where it gets its proximal input and where it gets its distal input, it does different things. And there are a bunch of different layers in the cortex:
somewhere between six and ten, depending on which neuroscientist you talk to, and each of those layers is structured a little bit differently, too. There's some minor deviation in the organization of the pyramidal neurons within the layers that also gives them slightly different computational properties.

Organization in what sense?

For example, we have these algorithms that we're saying are happening in these layers. One is called a spatial pooling algorithm; it takes some input and spreads it out, normalizes it, while retaining the semantics of the input, and this creates these mini-column structures of neurons. Some layers have this, and typically, as the neurons receive proximal input, their distal connections start connecting to each other over time. When you take the distal input to a layer and say, okay, we're not going to get that distal input from somewhere else, we're going to have all the pyramidal neurons within the layer give each other distal input, what you're doing is naturally creating a temporal context, because when your only context for some input is the states you've been in in the past, that's a temporal context. If you're getting that input from somewhere else, who knows; that context could mean any number of things. But if you're giving yourself context, you're looking at your own past: when you loop a layer back onto itself, it has the context of its own history. That's one of the core things we discovered. We call it the temporal memory algorithm, and it relies on these little mini-column structures. It takes the bits of input coming in from some sensory organ, or perhaps from another part of the cortex, normalizes them into column activations, and then activates cells within each column based upon the distal context it's getting. What you get is that it starts to tie sequences together. When you see a pattern repeating over and over, the distal connections get reinforced: a cell sees the pattern and grows distal connections to the active cells it just saw, the ones that represented the previous spatial input. Then when the next input arrives, there may be a prediction: "I saw that last time, so I'm going to be active next." It makes a prediction, and if it's right, if the next input activates a column that cell is in, then the cell becomes active. It was a correct prediction.

The context you're creating for me is how I felt when Francisco was explaining some of this stuff to me: whoa, I need visuals.

And hence, that's why I created a bunch of videos on our YouTube channel, to try and explain it all visually.
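As a rough illustration of the loop Matt describes, with mini-columns activated by a spatial pooler and cells within them chosen by distal context, here is a toy temporal memory step in Python. The dimensions, data structures, and function are hypothetical simplifications, not NuPIC's implementation, and learning (reinforcing the connections behind correct predictions) is omitted for brevity:

```python
CELLS_PER_COLUMN = 8  # pyramidal cells stacked in each mini-column

def temporal_memory_step(active_columns, predictive_cells, distal_connections):
    """One timestep of a toy temporal memory.

    active_columns    : column indices selected by the spatial pooler
    predictive_cells  : (column, cell) pairs depolarized on the last step
    distal_connections: maps an active (column, cell) to the cells it
                        depolarizes via lateral (distal) synapses
    """
    active_cells = set()
    for col in active_columns:
        predicted = {(col, c) for c in range(CELLS_PER_COLUMN)
                     if (col, c) in predictive_cells}
        if predicted:
            # Correct prediction: only the depolarized cells fire, so the
            # same input is represented differently in different contexts.
            active_cells |= predicted
        else:
            # Unanticipated input: every cell in the column "bursts".
            active_cells |= {(col, c) for c in range(CELLS_PER_COLUMN)}

    # Cells laterally connected to the newly active cells become
    # predictive for the next timestep -- the layer's temporal context.
    next_predictive = set()
    for cell in active_cells:
        next_predictive |= distal_connections.get(cell, set())
    return active_cells, next_predictive
```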
So you explained kind of the micro structure, then the macro structure. What's next? Strange Loop is a developer conference; how do you get from there to, okay, how do I build something?

Well, there are two questions there, I guess. Strange Loop is a developer conference, but it's also kind of a weird conference, you know?

Granted, yes, it's very eclectic.

So you can get in if you have something that's on the fringe but very interesting; I think that's why I got this talk in.

But still, in addition to developing IP and all of that, as I understand it, Numenta as a company offers tools that allow people to actually use this stuff. Is that correct? Open-source tools?

Right, all of our code is open source, and anybody can find it and use it if they want to. I've created a lot of tutorials and code samples, and I try to make it as approachable as possible for our community. We've got a very active forum with lots of discussions about the theory, the code, people's projects, that sort of thing.

So as a user of these open-source tools, do I need to think about columns and dendrites and all of that, or am I thinking about other representations?

It could go either way; it depends on what you're trying to do. We have a pretty diverse and eclectic community of people who are interested in this, typically people who are really interested in how the brain works. Some might say they can be a little off-beat, but they're always very smart, inquisitive, and curious, and it amazes me the types of things people try to do with our stuff. I always encourage it: yeah, give it a try, who knows, we don't know what's going to work. Our software that we open-source is called NuPIC, the Numenta Platform for Intelligent Computing, and we just released 1.0 of it a few months ago. That includes up to what I just talked about, the temporal memory part of it. A few years back, after we went through this research cycle and made the temporal memory discovery (that was a big discovery for us, to see how sequences are memorized in the brain and the cortex), we dumped it all into open source and started building potential sample applications. We brainstormed about what we could make with this that people might want to use, and we made all these sample applications. There's one that was like rogue-behavior detection: something you can install on a computer that monitors the different metrics coming out of the computer over days and weeks and can give an indication about a user's behavior. Are they behaving oddly or differently, based on the time of day, the things they're doing, and the metrics coming out of the computer? That's the sort of thing you can do. We also had an IT analytics product that hooked up to AWS, and we actually licensed that to another company called Grok, and they are actively selling it to IT companies that have a bunch of servers on Amazon. It will automatically connect, through CloudWatch, to all the different metrics coming out of your servers, create models for all of them, and just start streaming the data into them. You don't really have to do anything; they're all pre-configured, and then it gives you anomaly indications over time. After it's seen that server data for a while, it gets an idea of what's normal and what's not, and then it notifies you that something is wrong with a server. It doesn't know what's wrong with the server, but it can tell you that something abnormal is happening, even across this server and that server in combination. So anywhere there's streaming analytics that needs anomaly detection, I think there's a potential application for what we have right now with NuPIC 1.0.
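For that streaming anomaly detection use case, driving NuPIC 1.0 looks roughly like the sketch below. It is patterned on the "hotgym" anomaly example that ships with NuPIC; the `MODEL_PARAMS` dict (a large parameter set with a "TemporalAnomaly" inference type, assumed here to live in a local `model_params.py`) and the `cpu_percent` field name are placeholders you would adapt to your own metric:

```python
from datetime import datetime

from nupic.frameworks.opf.model_factory import ModelFactory

# MODEL_PARAMS: a NuPIC parameter dict, e.g. adapted from the hotgym
# anomaly example; assumed to be defined in a local model_params.py.
from model_params import MODEL_PARAMS

model = ModelFactory.create(MODEL_PARAMS)
model.enableInference({"predictedField": "cpu_percent"})

def score_reading(timestamp, cpu_percent):
    """Stream one metric reading through the model. Scores near 0.0 mean
    the sequence looks familiar; scores near 1.0 mean it looks anomalous."""
    result = model.run({"timestamp": timestamp, "cpu_percent": cpu_percent})
    return result.inferences["anomalyScore"]

print(score_reading(datetime(2017, 10, 2, 12, 0), 42.5))
```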
Okay. There's also this really interesting thing that I think is still a big opportunity for people who want to try to build something novel with this. We figured out a way to encode geospatial location into this format. Remember when you and Francisco talked, you talked a lot about SDRs, sparse distributed representations? Cortical.io was all about those; they call them semantic fingerprints.

Right, right.

We found a way to encode location information, like latitude, longitude, and altitude, into an SDR. So we can take something that moves through time and space and give the intelligence algorithms a way to understand the patterns in the time and the movement of that object. The example I always give is that I walk my dogs on the same dog-walking route every day. If I take a tracker with me and then run all my points back through the algorithm, the first time it sees the walk it's all anomalous; none of it is familiar, because it's brand new. The second time I do it, it's a little less anomalous. The third time I do it, it's like, no big deal, this is normal. But as soon as I deviate from the path I've taken, even if I just walk on the other side of the street, or my dogs decide they don't want to stop at that tree, they want to stop at some other tree, I get anomaly indications coming from my path. So I think this has big applications in fields like logistics, air traffic control, human tracking, and pet tracking, anywhere you've got normal routes of things that normally happen and you don't necessarily want to say, to a T, "if they deviate right now" or "if they're not at this point at this time, something's wrong." You just want an idea of their general movement, whether it's strange or not, whether it's been seen before or not, and it can do that sort of thing.
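NuPIC ships a geospatial flavor of its coordinate encoder for this; the toy sketch below only illustrates the underlying idea, that nearby points share many active bits and distant points share few, using hypothetical sizes and hashing rather than Numenta's actual encoder:

```python
import hashlib
import itertools

def _rank(obj):
    # Deterministic pseudo-random rank for a grid coordinate (or bit seed).
    return int(hashlib.md5(repr(obj).encode()).hexdigest(), 16)

def encode_location(lat, lon, n=1024, w=21, scale=10000.0, radius=3):
    """Encode a lat/lon pair as a sparse set of w active bits out of n.

    The point is snapped to an integer grid; the w neighborhood
    coordinates with the highest deterministic rank win, and each winner
    is hashed to an output bit. Overlap between encodings falls off with
    distance, which is what lets a temporal memory learn a route and
    flag deviations from it.
    """
    gx, gy = int(lat * scale), int(lon * scale)
    neighborhood = [(gx + dx, gy + dy) for dx, dy in
                    itertools.product(range(-radius, radius + 1), repeat=2)]
    winners = sorted(neighborhood, key=_rank, reverse=True)[:w]
    return sorted({_rank(("bit", c)) % n for c in winners})

# Two nearby points on a dog walk share most of their active bits.
print(len(set(encode_location(37.77490, -122.41940)) &
          set(encode_location(37.77491, -122.41941))))
```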
Certainly for the network and server anomaly detection, and the example you gave before that, these are things you can do with a variety of different techniques. Are there things you've found where, because of the approach you take, it's just best-in-class, where if you need to do X, this is the best way to do it, whether measured by the complexity of creating the solution, computational cost, or some other metric?

We wondered the same thing, but the problem we had several years ago was that there were no standard benchmarks for streaming temporal anomaly detection. Most of the benchmarks are on spatial data, and most machine learning techniques work on spatial data only. We didn't find anything we could compare our work against. LSTM, for example, has some ability to do temporal analysis, albeit in batches that move along. So we created a benchmark we called the Numenta Anomaly Benchmark (NAB). We set ours up as one of the entries in the running, and we set up an LSTM one, and there's one from Twitter and one from Etsy; they've got open-source projects that do streaming anomaly detection. And we created these input datasets, things like how many taxi calls there were in New York City over a period of time, where you can look at the data and say, oh, something weird happened there for sure, and you go look it up and find out there was a big game in town, stuff like that. We'd find datasets like that, with a good amount of data and obvious anomalies that were labeled and marked, and we'd run all of these algorithms against them and score them based on how well they detected the anomalies, weighting it, I think pretty heavily, against producing false positives. I can't remember exactly, but it's open source; it's on GitHub at numenta/NAB, for Numenta Anomaly Benchmark. So we have that, at least. And of course we're the winner. We ran this contest like, can anybody beat us at this? And somebody came and beat us at it, and we were like, okay, we're going to fix it. So we fixed it, and we're winning again. There's always some tweaking you can do to try to get those last few percent.

It kind of leaves me with the impression that this is a set of tools that you have a strong feeling closely models the inner workings of the brain as we understand it, and that over time (I'm assuming you're banking on order-of-magnitude capabilities over current approaches) the things you can do using Numenta's tools and the things you can do using other tools will diverge. But today it doesn't sound like there's a bang-on-the-table "if you need to do X, these tools will get you there a hundred times faster and a hundred times cheaper," or even ten. It sounds like it's an interesting approach, something that's worthwhile for people to learn and take a look at and understand the thinking around, but there's no killer app. I guess that's what I'm getting at.

Yeah, there's no killer app, but we're patient, too. There are a lot of things about the brain that we still don't understand, and what we have currently in NuPIC 1.0 is just the temporal memory stuff. All the other work we're doing is in research repositories that attach on top of that. We're taking those core algorithms, which aren't going to change, because the core algorithms in your brain don't change, and we're building new and different things with them, because we're discovering that the brain can do lots of different things with those same core algorithms. We're building structures now because we think we understand how sensorimotor integration happens, the integration of sensory input and movement, through the interaction of two layers in one of those columns I told you about, the layers with the integration zones. We can have one layer running the same temporal memory algorithm I described earlier, with the mini-columns and everything, but we don't feed it its own distal input; we don't give it a temporal context. Instead we pipe in context from somewhere else: the distal connection comes from a different layer down below. If we assume that the output of that layer provides location information associated with the sensory input coming up proximally from the bottom, then the driver signal is the sensory input, and the distal signal represents the object being touched and the location on the object where that sensory feature was sensed. Then we can have a layer that represents every object we've ever touched and what sensory input we've felt where on it. That layer provides its information to another layer, which we call an output layer. The output layer has a slightly different structure, because it doesn't have the mini-columns like the one underneath it, but over time it represents a library of every object we've ever learned. So we can train this thing and say:
okay, this is a coffee cup, touch it all over the place. Okay, here's a banana, touch it all over the place. Voilà: we can build a library of objects that the top layer represents. The bottom layer is basically just going to represent all the sensory input you've felt at every location on every object you've touched. And this is the temporal memory circuitry, but it's not doing temporal memory anymore; it's doing sensory-feature-and-location association, just because we've changed the distal input. It's no longer giving itself distal input; it's getting it from somewhere else, and it does something entirely different.
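A toy sketch of that two-layer circuit, with SDRs stood in for by frozen sets of active bits, purely as an illustration of the idea rather than Numenta's research code:

```python
def learn_touch(feature_sdr, location_sdr, object_name, input_memory, library):
    """Input layer: associate a sensory feature (proximal driver) with a
    location signal (distal context from another layer), and record the
    pair in the output layer's library entry for the object."""
    pair = (frozenset(feature_sdr), frozenset(location_sdr))
    input_memory.add(pair)
    library.setdefault(object_name, set()).add(pair)

def recognize(feature_sdr, location_sdr, library):
    """Output layer: which learned objects contain this feature at this
    location? Repeated touches narrow the candidate set down."""
    pair = (frozenset(feature_sdr), frozenset(location_sdr))
    return {name for name, pairs in library.items() if pair in pairs}

input_memory, library = set(), {}
# "Touch it all over the place": (feature bits, location bits) per touch.
for feature, location in [({1, 2}, {10}), ({3, 4}, {11})]:
    learn_touch(feature, location, "coffee cup", input_memory, library)
for feature, location in [({1, 2}, {12}), ({5, 6}, {13})]:
    learn_touch(feature, location, "banana", input_memory, library)

print(recognize({3, 4}, {11}, library))  # {'coffee cup'}
```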
So it sounds like the idea there is: if you think about deep learning object recognition, our best guess at the way the different layers work now is that you've got layers that figure out edges and layers that figure out colors, so when the input is the banana, you get the curvy layer firing and the yellow one. What you're describing sounds like the internals are maybe capturing a richer representation of these various features?

The big difference is that our model incorporates movement. That's the big difference. Can you name anything that is intelligent that cannot move?

Nothing comes to mind.

Nobody ever can, because there's nothing intelligent that can't move. So we believe that's a core feature of intelligence: the ability to interact with your environment has to be baked into the architecture of the intelligent system. It's not something you can just add; you can't just add behavior to a system you're building. It has to be baked into the flow of information. Like I said, when you move your finger to touch an object, you know where your finger is going to move, because you just commanded it to move there. That information is available to your brain. That loop has to be baked in, so that every time you touch something, you know where your finger is going and you know what you expect to feel, and if you don't feel that, something's wrong.

And Matt is demonstrating all of this with a glass of water. We've been experimenting with a video camera setup here, so we may be able to show the visuals.

It helps to have the visual aids, but like I said, if you want visuals, go to the Numenta YouTube channel. I've got lots of stuff there.

Nice, nice. Awesome. Well, anything else you covered in your talk, or any final thoughts you want to leave us with?

I guess I just want to emphasize that we have a really nice community. I feel like I'm the community manager, so of course I'm going to say that, but honestly there are some really bright people, some who have shown up just in the past year, doing some really interesting things with HTM. All of our papers are open access, so all the theory, everything we theorize about, we write papers about and put out there, and we do it with code. We're like: here's a paper, here's a simulation, here's the code, you can run it yourself. So if you don't believe us, you can try it yourself. And there are lots of people in our community who have decided they're going to write their own HTM system in their own favorite language, in their own environment, so there are a lot of people doing new and interesting things, creating their own visualizations. The latest one was a thesis from a guy in Turkey who did this amazing sensorimotor simulation in a 3D game environment, where he's got an agent trying to find a goal point. He wrote his whole thesis around it; he used our theory and then attached some stuff on top. He theorized further, like, what if I've got this and this and this, trying to create a more complete idea of the brain, not just the cortex, because we're really only working on the cortex right now. He's trying to incorporate some other things, like real behaviors, real drivers: what is the motivation for the agent that is running the intelligence? We're not quite there, but we're focusing our research right now on location, that location signal I told you about. We've got a really good idea of how that location signal is generated, and it's super interesting. The way your brain tracks the location of things is amazing. I probably don't have the knowledge to explain it fully, but it's about grid cells and place cells and things like that; if anybody wants to go research it, there are some really interesting neuroscience papers coming out about grid cells. I'll give you a little example. If you put a mouse in a box and let it run around, and you're monitoring its neurons, you'll see, as it runs around the box and you trace where it goes, that certain neurons fire when it's in certain places. You can identify the cells that fire whenever it's in a particular place.

So specific neurons fire for specific places?

Yes.

Wow.

And if you look at it, it forms this hexagonal grid. There's this hexagonal pattern of neurons firing as you move through space, representing where you're at in the space you're occupying. And we think that interplay of neurons, that idea of neurons representing locations in space, plays out at a bigger level, to even represent objects in space. You have an allocentric representation of any object you can imagine, allocentric meaning not relative to where you are, not egocentric. Imagine a cup: that's an object, and if you used its center of gravity, say, as its center, you could define it entirely based upon all the sensory input you've ever received about it, everything you've felt or seen. We think that has something to do with grid cells: how those objects are stored, how they're defined in 3D space, is linked to the sensory input we receive about them and to which cells are firing in space as we're imagining or touching them.
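For readers who want to see where the hexagonal pattern comes from, a standard idealized grid-cell model (textbook material, not Numenta-specific code) sums three cosine gratings oriented 60 degrees apart; the parameters below are arbitrary:

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized firing rate of one grid cell at position (x, y).

    Summing three plane-wave gratings 60 degrees apart produces firing
    fields on a hexagonal lattice, like those recorded from a rodent
    running around a box. Output is normalized to [0, 1].
    """
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # spatial frequency of each grating
    rate = sum(
        np.cos(k * (np.cos(theta) * (x - phase[0]) +
                    np.sin(theta) * (y - phase[1])))
        for theta in (orientation + i * np.pi / 3 for i in range(3))
    )
    return (rate + 1.5) / 4.5  # the raw sum ranges over [-1.5, 3.0]

# Sample the rate map over a 1 m x 1 m "box"; the peaks form a hex lattice.
xs = np.linspace(0.0, 1.0, 6)
print(np.round([[grid_cell_rate(x, y) for x in xs] for y in xs], 2))
```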
Wow, super, super interesting stuff. I will definitely make a note for folks to listen to the conversation with Francisco, which came a few episodes before this one. Or maybe this one should be the prerequisite for that one, I don't know.

Hopefully it stands on its own.

Awesome. Thanks so much, Matt.

You're welcome. I appreciate the opportunity.

Absolutely. All right, everyone, that's our show for today. Thanks so much for listening and for your continued feedback and support. For more information on Matt or any of the topics covered in this episode, head on over to twilio.com/talks/71. To follow along with our Strange Loop 2017 series, visit twilio.com/stloop. Of course, you can send along your feedback or questions via Twitter, or leave a comment right on the show notes page. Thanks again to Nexosis for their sponsorship of the show; check out twilio.com/talks/69 to hear my interview with the company founders, and visit nexosis.com for more information and to try their API for free. Thanks again for listening, and catch you next time.