MIT AGI - Cognitive Architecture (Nate Derbinsky)

**The Challenges and Advances of Artificial Intelligence Research**

Artificial intelligence (AI) research has come a long way since its inception, with significant breakthroughs in various areas. However, despite the progress made, there are still numerous challenges that need to be addressed. One of the most notable challenges is the complexity of human cognition, which remains a subject of ongoing research and debate.

**The Importance of Basic Research**

Basic research plays a crucial role in advancing AI technology. Years of fundamental research have led to significant advances in our understanding of human cognition, machine learning, and related areas. The work of researchers such as John Anderson has been instrumental in shaping the field. The use of his ACT-R models of gaze and reaction time to predict how humans would interact with user interfaces was a significant breakthrough.

**The Role of Blocks World**

Blocks World is a classic problem in AI research, often cited as one of the first problems tackled by researchers in the field. It involves moving blocks on a table, which may seem like a small problem compared to other challenges in AI. However, it has been a significant testbed for researchers to develop and refine their algorithms and models.

**Scaling Up AI Research**

One of the major challenges facing AI research is scaling up to complex problems. Despite the progress made in developing deep learning techniques, integrating them into larger systems remains a significant challenge. Architectures such as Sigma, which reimplements cognitive-architecture functionality on a uniform substrate of factor graphs and provides a framework for experimenting with different configurations, have been instrumental in addressing this issue.

**The Role of Cognitive Modeling**

Cognitive modeling has played a crucial role in advancing AI research. By studying human cognition and behavior, researchers can develop models that mimic, and make testable predictions about, human thought processes. One notable example is the rational analysis of memory by John Anderson and his colleagues, who analyzed text corpora to characterize the recency and frequency properties of language use and make predictions about human memory.

**Multi-Agent Systems**

Recent advances have also focused on multi-agent systems, in which multiple agents interact to achieve a common goal. The Soar system, which combines symbolic processing with non-symbolic mechanisms such as reinforcement learning, has been applied to complex problems in decision-making and problem-solving.

**The Challenge of Preventing Convergence**

One of the challenges facing multi-agent systems is preventing them from prematurely converging on a single solution as new data arrives. Researchers have yet to develop a strong theory that addresses this issue, and it remains an open problem in the field. Without such constraints, a multi-agent system has no inherent mechanism for avoiding premature convergence, which can lead to suboptimal solutions.

**The Importance of Theory**

Despite the advances made in AI research, there is still a need for strong theories that relate to multi-agent systems. Until such theories are developed, researchers will have to rely on ad hoc approaches and trial-and-error methods to address this challenge. The development of new algorithms and techniques that can help prevent convergence is crucial for advancing multi-agent systems.

**Conclusion**

Artificial intelligence research has made significant progress in recent years, but there is still much work to be done. Challenges such as the complexity of human cognition, scaling up complex problems, and preventing convergence remain significant hurdles. Despite these challenges, researchers continue to make progress, driven by advances in cognitive modeling, multi-agent systems, and other areas. The development of new theories and techniques will be crucial for advancing AI research and achieving its full potential.

"WEBVTTKind: captionsLanguage: enso today we have nadir bin ski he's a professor at Northeastern University working on various aspects of computational agents that exhibit human level intelligence please give Nate a warm welcome thanks a lot and thanks for having me here so the title that was on the page was cognitive modeling I'll kind of get there but I wanted to put it in context so the the bigger theme here is I want to talk about what's called cognitive architecture and if you've never heard about that before that's great and I wanted to contextualize that as how are we what how is that one approach to get us to AGI and I say what my view of AGI is and put up a whole bunch of TV and movie characters that I grew up with that inspire me that will lead us into what is this thing called cognitive architecture it's a whole research field that crosses neuroscience psychology cognitive science and all the way into AI so I'll try to give you kind of the historical big-picture view of it what some of the actual systems are out there that might be of interest to you and then we'll kind of zoom in on one of them that I've done a good amount of work with called soar and what I'll try to do is tell a story a research story of how we started with kind of a core research question we look to how humans operate understood that phenomenon and then took it and so really interesting results from it and so at the end if this field is of interest there's a few pointers for you to go read more and and go experience more of cognitive architecture so just rough definition of AGI given this in AGI class depending the direction that you're coming from it might be kind of understanding intelligence or maybe developing intelligent systems they're operating at the level of human level intelligence the the typical differences between this and other sorts of maybe AI machine learning systems we want systems that are going to persist for a long period of time we want them robust to different conditions we want them learning over time and here's the crux of it working on different tasks and in a lot of cases tasks they didn't know we're coming ahead of time I got into this because I clearly watched too much TV and too many movies and then I looked back at this and I realized I think I'm covering 70's 80's 90's nots I guess it is and today and so this is what I wanted out of AI and this is what I wanted to work with and then there's the the reality that we have today so instead of so who's watched Knight Rider for instance I I don't think that exists yet but but maybe we're getting there and in particular for fun during the Amazon sale day I got myself an Alexa and I could just see myself at some point saying Alexa please might write me an R sync script you know to sync my class and if you have an Alexa you probably know the following phrase this this just always hurts me inside which is sorry I don't know that one which is okay right that's a lot of people have no idea what I'm asking let alone how to do that so what I want Alexa to respond with after that is do you have time to teach me and to provide some sort of interface by which back and forth we can kind of talk through this that we aren't there yet to say the least but I'll talk later about some work on a system called Rosie that's working in that direction we're starting to see see some ideas about being able to teach systems out of work so folks who are in this field I think generally fall into these three categories they're just curious they want to learn 
new things generate knowledge work on hard problems great I think there are folks who are in kind of that middle cognitive modeling realm and so I'll use this term a lot it's really understanding how humans think how humans operate human intelligence at multiple levels and if you can do that one there's just knowledge in and of itself of how we operate but there's a lot of really important applications that you can think of if we were able to not only understand but predict how humans would respond react in various tasks medicine is is an easy one there's some work in HCI or HR I I'll get to later where if you can predict how humans would respond to a test you can iterate tightly and develop better interfaces it's already being used in the realm of simulation and in defense industries I happen to fall into the latter group which or the bottom group which is systems development which is to say just the desire to build systems for various tasks that are working on tasks that kind of current AI machine learning can't operate on and I think when you're working at this level or on any system that nobody's really achieved before what do you do you you kind of look to the examples that you have which in this case that we know of it's just humans right irrespective of your motivation when you have kind of an intent that you want to achieve in your research you kind of let that drive your approach and so I often show my AI students this the touring test you might have heard of or variants of it that have come before these were folks who are trying to create system that acted in a certain way that acted intelligently and the kind of line that they drew the benchmark that they used was to say let's make systems that operate like humans do cognitive modelers will fit up into this top point here to say it's not enough to act that way but by some definition of thinking we want the system to do what humans do or at least be able to make predictions about it so that might be things like what errors would the human make on this task or how long would it take them to perform this task or what emotion would be produced in this task there are folks who are still thinking about how the computer is operating but it's trying to apply kind of rational rules to it so a logician for instance would say if you have a and you have B the a gives you B B gives you see a should definitely give you C that's just what's rational and so there folks operate in that direction and then if you go to intro AI class anywhere in the country particularly Berkeley because they have graphics designers that I get to steal from the benchmark would be what the system produces in terms of action and the benchmark is some sort of optimal rational bound irrespective of where you work in the space there's kind of a common output that arrives when you research these areas which is you can learn individual bits and pieces and it can be hard to bring them together to build a system that either predicts or acts on different tasks so this is part of the transfer learning problem but it's also part of having distinct theories that are hard to combine together so I'm going to give an example that come comes out of cognitive modeling or perhaps three examples so if you were in a HCI class or some interest psychology classes one of the first things you'll learn about is Fitz law which provides you the ability to predict the difficulty level of basically a human pointing from where they start to a particular place and it turns out that you can learn 
some parameters and model this based upon just the distance from where you are to the targets and the size of the target so both moving along distance will take a while but also if you're aiming for a very small point that can take longer then if there's a large area that you just kind of have to get yourself to and so this is held true for many humans so let's say we've learned this and then we move on to the next task and we learn about what's called the power law of practice which has been shown true in a number of different tasks what I'm showing here is one of them where you're going to draw a line through sequential set of circles here starting at 1 going to 2 and so forth not making a mistake or at least not trying to and try to do this as fast as possible and so for a particular person we would fit the a B and C parameters and we'd see a power law so as you perform this task more you're going to see a decrease in the amount of reaction time required to complete the task great we've learned two things about humans let's add some more in so for those who might have done some reinforcement learning TV learning is one of those approaches temporal difference learning that's had some evidence of similar sorts of processes in the dopamine centers of the brain and it basically says in a sequential learning tasks you perform the task you get some sort of reward how are you going to kind of update your representation of what to do in the future such as to maximize expectation of future reward and there are various models of how that changes over time and you can build up functions that allow you to form better and better and better a given trial and error great so we've learned three interesting models here that hold true over multiple people multiple tasks and so my question is if we take these together and add them together how do we start to understand a task as quote/unquote simple as chess which is to say we could ask questions how how long would it take for a person to play what mistakes would they make they played a few games how would they adapt themselves or if we want to develop system that ended up being good at chess or at least learning to become better at chess my question is if you could there doesn't seem to be a clear way to take these very very individual theories and kind of smash them together and get a reasonable answer of how to play chess or how do humans play chess and so gentlemen in this slide is Alan Newell one of the founders of AI did incredible work in psychology and other fields he gave a series of lectures at Harvard in 1987 and they were published in 1990 called the unified theories of cognition and his argument to the psychology community at that point was the argument on the prior slide they had many individual studies many individual results and so the question was how do you bring them together to gain this overall theory how do you make forward progress and so his proposal was unified theories of cognition which became known as cognitive architecture which is to say to bring together your core assumptions your core beliefs of what are the fixed mechanisms and processes that intelligent agents would use across tasks so the representations the learning mechanisms the memory systems bring them together implement them in a theory and use that across tasks and the core idea is that when you actually have to implement this and see how it's going to work across different tasks the interconnections between these different processes and representations would add 
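For concreteness, here is a minimal sketch of the three regularities just described, in their standard textbook forms (the Shannon formulation of Fitts's law and a tabular TD(0) update; the a, b, and c parameters are the per-person, per-task fits mentioned above, and the function names are mine, not the talk's):

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's law (Shannon form): pointing time grows with the log of
    target distance relative to target width."""
    return a + b * math.log2(distance / width + 1.0)

def practiced_reaction_time(a, b, c, trial):
    """Power law of practice: reaction time falls as a power of the
    number of completed practice trials."""
    return a + b * trial ** (-c)

def td_update(value, s, s_next, reward, alpha=0.1, gamma=0.9):
    """Tabular TD(0): nudge the value of state s toward the observed
    reward plus the discounted value of the successor state."""
    v_s = value.get(s, 0.0)
    value[s] = v_s + alpha * (reward + gamma * value.get(s_next, 0.0) - v_s)
```

Each model is individually predictive, but, as the talk argues, nothing in these three equations tells you how to compose them into an account of a whole task like chess.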
The gentleman in this slide is Allen Newell, one of the founders of AI, who also did incredible work in psychology and other fields. He gave a series of lectures at Harvard in 1987, published in 1990 as Unified Theories of Cognition, and his argument to the psychology community at that point was the argument on the prior slide: they had many individual studies and many individual results, and the question was how to bring them together to gain an overall theory, how to make forward progress. His proposal was unified theories of cognition, which became known as cognitive architecture: bring together your core assumptions, your core beliefs about the fixed mechanisms and processes that intelligent agents use across tasks, the representations, the learning mechanisms, the memory systems; implement them in a theory; and use that across tasks. The core idea is that when you actually have to implement this and see how it works across different tasks, the interconnections between these different processes and representations add constraint, and over time the constraints start limiting the design space of what is necessary and what is possible in terms of building intelligent systems. The overall goal from there was to understand and exhibit human-level intelligence using these cognitive architectures.

A natural question to ask is: okay, we've departed from a methodology of science that we understand how to operate in, where we make a hypothesis, construct a study, gather our data, evaluate that data, and falsify or fail to falsify the original hypothesis; we can do that over and over, and we know we're making scientific progress. If I've now changed that model into "I have a piece of software, it represents my theories, and to some extent I can configure that software in different ways to work on different tasks," how do I know I'm making progress? There's a view of science associated with Imre Lakatos, shown pictorially here, where you start with a core of beliefs about what is necessary for achieving the goal that you have, and around that you have more ephemeral hypotheses and assumptions that over time may grow and shrink. You try out different things, and if an assumption sticks around long enough, it becomes part of the core. As you work on more tasks and learn more, either through your own work or from data coming in from someone else, the core grows larger and larger: you've got more constraints, and you've made more progress.

So I wanted to look at what some of the core assumptions are that drive scientific progress forward in this community. One of them actually came out of those lectures; it's referred to as Newell's time scales of human action. Off on the left, the two leftmost columns are both time units, just expressed somewhat differently, the second from the left being maybe more useful to a lot of us in understanding daily life. One step over from there is the level at which processes occur. The lowest three are down at the substrate, the neuronal level, building up to deliberate acts that occur in the brain and tasks operating on the order of ten seconds; some of these might occur in the psychology laboratory, up through minutes and hours; and above that it really becomes interactions between agents over time. The hypothesis is that regularities occur at these different time scales and that they're useful. Those who operate at the lowest time scales are doing neuroscience and cognitive neuroscience; shift up a couple of levels and the relevant fields are psychology and cognitive science; shift up again and we're talking about sociology and economics and the interplay between agents over time. What we'll find with cognitive architectures is that most of them sit at the deliberate act: take knowledge of a situation and make a single decision. Sequences of decisions over time build to tasks, and tasks over time build to more interesting phenomena. I'll actually show that this isn't strictly true; there are folks working in this field who operate one level below.

Some other assumptions. This is Herb Simon receiving the Nobel Prize in economics, and part of what he received that award for was the idea of bounded rationality. In various fields we tend to model humans as rational, and his argument was: let's consider that human beings operate under various kinds of constraints, and so model them as rational with respect to, and bounded by, how complex the problem is that they're working on (how big is the search space they have to conquer); their cognitive limitations (speed of operations, amount of memory, short-term as well as long-term, and other aspects of our computing infrastructure that keep us from arbitrarily solving complex problems); and how much time is available to make the decision. This is a phrase that came out of his speech when he received the Nobel Prize: "Decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist." Which is to say: take your big problem, simplify it in some way, and then solve that; or take the problem in all its complexity and try to find something that works. What you'll actually see throughout the cognitive-architecture community is this understanding that for some problems you're not going to get an optimal solution if you consider, for instance, bounded computation, bounded time, and the need to be reactive to a changing environment. In some sense, we can decompose problems that come up over and over into simpler problems and solve those optimally or near-optimally, but for more general problems we might have to satisfice.

There's also the physical symbol system hypothesis. This is Allen Newell and Herb Simon; here they're considering how a computer could play the game of chess. The physical symbol system hypothesis talks about the idea of taking some signal, abstractly referred to as a symbol, combining symbols in some ways to form expressions, and having operations that produce new expressions. A weak interpretation of the idea that symbol systems are necessary and sufficient for intelligence, the very weakest way of talking about it, is the claim that there's nothing unique about the neuronal infrastructure that we have: if we got the software right, we could implement it in the bits, bytes, RAM, and processor that make up modern computers. That we can do it with silicon and not carbon. A stronger way this used to be looked at was more of a logical standpoint: if we can encode rules of logic, which line up with how we intuitively think of planning and problem solving, and we can just get that right and get enough facts in there, then eventually we can get to intelligence; that's what's needed for intelligence. That was a starting point that lasted for a while. I think by now most folks in this field would agree that being able to operate logically is necessary, but that there are representations and processes that benefit from non-symbolic representation: particularly perceptual processing, visual and auditory, handled in a more standard machine-learning sort of way, as well as taking advantage of statistical representations.
So now we're getting closer to actually looking at cognitive architectures. I want to go back to the idea that different researchers come at this with different research foci. We'll start at the lowest level, biological modeling: Leabra and Spaun both try to model different degrees of low-level detail, parameters, firing rates, connectivities between different levels of neuronal representation. They build that up and then try to build tasks above that layer, always being very careful about staying true to human biological processes. A layer above that is psychological modeling: trying to build systems that are true, in some sense, to areas of the brain and interactions in the brain, able to predict the errors humans make and the timing produced by the human mind; there I'll talk a little bit about ACT-R. And at the final level down here are systems focused mainly on producing functional systems that exhibit really cool artifacts and solve really cool problems. I'll spend most of the time talking about Soar, but I also want to point out a relative newcomer called Sigma.

To talk about Spaun a little, we'll see if the sound works in here; I'll let the creator take this one. [The embedded video's captions are badly garbled. Roughly: the Spaun model comprises about two and a half million individual neurons; it can, for example, view images of digits and reproduce them, and the same model performs many different tasks, letting researchers study the flow of information through different parts of the model.] I'll provide a pointer at the end; he has a really cool book called How to Build a Brain, and if you Google Spaun you can find a toolkit where you can construct circuits that approximate functions you're interested in, connect them together, set certain properties you want at a low level, and build them up to actually work on tasks at the level of vision and robotic actuation. A really cool system.

As we move into architectures that sit above the biological level, I want to give you an overall sense of what a prototypical architecture looks like. They'll have some ability to do perception; the modalities are typically more digital and symbolic, but depending on the architecture they can handle vision, audition, and various sensory inputs. These get represented in some sort of short-term memory, whatever the state representation for the particular system is. It's typical to have a representation of knowledge about what tasks can be performed, when they should be performed, and how they should be controlled; these are typically both actions that take place internally, managing the internal state of the system and performing internal computations, and external actuation, where "external" might mean a digital system, a game AI, or some sort of robotic actuation in the real world. There's typically some mechanism for selecting from the available actions in a particular situation, and typically some way to augment this procedural information: learning new actions, possibly modifying existing ones. There's typically some semblance of what's called declarative memory. On the procedural side, at least in humans: if I asked you to describe how to ride a bike, you might be able to say "get on the seat and pedal," but in terms of keeping your balance you'd have a pretty hard time describing it declaratively; that's the implicit representation of knowledge. Declarative memory includes facts, geography, math, but also experiences the agent has had, a more episodic representation. They'll typically have some way of learning this information and amending it over time, and finally some way of taking actions in the world. And they'll all have some sort of cycle: perception comes in; the knowledge the agent has is brought to bear; an action is selected; knowledge that conditions on that action acts accordingly, both with internal processes and eventually external action; and then rinse and repeat.

When we talk about an agent in this context, it's the fixed representation, whatever architecture we're talking about, plus a set of knowledge that is typically specific to the task but might be more general. Oftentimes these systems can incorporate a more general knowledge base of facts, linguistic facts, geographic facts: let's take Wikipedia and just stick it in the brain of the system. But there's also knowledge about whatever it is you're doing right now and how you should proceed. And it's typical to see this processing cycle, and going back to the prior assumption, the idea is that these primitive cycles allow the agent to be reactive to its environment. If new things come in that I have to react to, if a lion is sitting over there, I'd better run and maybe not do my calculus homework, right? As long as this cycle is going, I'm reactive, but at the same time, as multiple actions are taken over time, I get complex behavior over the long term.
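As a concrete picture of that cycle, here is a minimal, hypothetical sketch; the class and field names are illustrative, not the API of Soar, ACT-R, or any other real architecture:

```python
# A minimal perceive-decide-act loop in the shape described above.
class Rule:
    def __init__(self, name, condition, proposal):
        self.name = name
        self.condition = condition   # predicate over working memory
        self.proposal = proposal     # returns a candidate action

class Agent:
    def __init__(self, rules, select):
        self.rules = rules
        self.select = select         # decision procedure: picks one action
        self.wm = {}                 # short-term (working) memory

    def cycle(self, percept, actuator):
        self.wm["input"] = percept                      # perception
        candidates = [r.proposal(self.wm)               # rules fire in parallel
                      for r in self.rules if r.condition(self.wm)]
        if candidates:
            action = self.select(candidates, self.wm)   # single decision
            actuator(action)                            # external actuation
        # ...then rinse and repeat: keeping each cycle fast keeps the agent
        # reactive, while long-run behavior emerges from many cycles.

# Example: a rule that proposes fleeing whenever a lion shows up in input.
flee = Rule("flee",
            condition=lambda wm: wm["input"].get("lion_visible", False),
            proposal=lambda wm: ("run", "away"))
agent = Agent([flee], select=lambda cands, wm: cands[0])
agent.cycle({"lion_visible": True}, actuator=print)     # -> ('run', 'away')
```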
This is the ACT-R cognitive architecture. It has many of the core pieces I talked about before; let's see if the mouse works up there, yes. We have the procedural module here; short-term memory is the set of buffers on the outside; and procedural memory is encoded as what are called production rules, if-then rules: if this is the state of my short-term memory, then this is what I think should happen as a result. You have selection of the appropriate rule to fire, and execution. You're also seeing associated parts of the brain represented here. A cool thing that has been done over time in the ACT-R community is to make predictions about brain areas, then perform fMRI studies, gather that data, and correlate it. So when you use the system, you get predictions about things like the timing of operations, the errors that will occur, and the probability that something is learned, but you also get predictions, to the degree that they can, about the brain areas that are going to light up. It's actively developed at Carnegie Mellon. To the left is John Anderson, who developed this cognitive architecture roughly thirty years ago; until about five years ago he was the primary researcher and developer behind it, along with Christian Lebiere, and recently he decided to spend more time on cognitive tutoring systems, so Christian has become the primary developer. There's an annual ACT-R workshop, and a summer school where, if you're thinking about modeling a particular task, you can bring your task and your data, they teach you how to use the system, and you try to get that study going right there on the spot.

To give you a sense of what kinds of tasks this can be applied to, this is representative of a certain class of tasks, certainly not the only one. Let's try this again; I think PowerPoint is going to want a restart every time. Okay. So we're getting predictions about basically where the eye is going to move. What you're not seeing is that it's actually processing things like text and colors, making predictions about what to do, how to represent the information, and how to process the graph as a whole. I alluded to this earlier: there's very similar work by Bonnie John, making predictions about how humans would use computer interfaces. At the time, she got hired away by IBM, and they wanted software you could put in front of software designers so that, when they think they have a good interface, they press a button and this model of human cognition tries to perform the tasks it has been told to do and makes predictions about how long they would take. So you can have a tight feedback loop with designers about how good a particular interface is. ACT-R as a whole is very prevalent in this community: I went to their web page and counted up just the papers they knew about, and it was over 1,100 over time. If you're interested in it, the main distribution is in Lisp, but many people have used it and wanted to apply it to systems that need a little more processing power, so NRL has a Java port of it that they use in robotics, and the Air Force Research Lab in Dayton has implemented it in Erlang for parallel processing of large declarative knowledge bases; they're trying to do service-oriented architectures with it, because they want what it has to say available without waiting around for it to figure that stuff out on demand. So that's the two minutes about ACT-R.

Sigma is a relative newcomer, developed at the University of Southern California by Paul Rosenbloom, whom I'll mention again in a couple of minutes because he was one of the prime developers of Soar at Carnegie Mellon. He knows a lot about how Soar works, and he's worked on it over the years, and I think originally, I'm going to speak for him here and he'll probably say I was wrong, it was kind of a mental exercise: can I reproduce Soar using a uniform substrate? Soar is thirty years of research code, and if anybody has dealt with research code, it's thirty years of C and C++ with dozens of graduate students over time; it's not pretty at all, and theoretically it's got all these boxes sitting out there. So he reimplemented the core functionality of Soar using factor graphs and message-passing algorithms under the hood. He got to that point and then said: there's nothing stopping me from going further. So now it can do all sorts of modern machine learning, vision, and optimization, the sorts of things that would take some time to integrate well in any other architecture. It's been an interesting experience, and it's now going to be the basis for the Virtual Human project out at the Institute for Creative Technologies, an institute associated with the University of Southern California. Until recently you couldn't really get your hands on it, but in the last couple of years he's done some tutorials on it, and he's got a public release with documentation. So that's something interesting to keep an eye on.

But I'm going to spend all the remaining time on the Soar cognitive architecture. You can see it looks quite a bit like the prototypical architecture, and I'll give a sense again of how this all operates, plus a sense of the people involved. We already talked about Allen Newell; both John Laird, who is my advisor, and Paul Rosenbloom were students of Allen Newell. John's thesis project was related to the chunking mechanism in Soar, which learns new rules based upon sub-goal reasoning. He finished that, I believe, the year I was born, so he's one of the few researchers you'll find who's still actively working on their thesis project. Beyond that, I think about ten years ago he founded Soar Technology, a company up in Ann Arbor, Michigan. While it's called Soar Technology, it doesn't do exclusively Soar, but that's part of the portfolio: general intelligent-systems work, a lot of it defense-associated.

Some notes on what makes Soar different from the other architectures that fall into this functional category. A big one is a focus on efficiency. John wants to be able to run Soar on just about anything. We just got, on the Soar mailing list, a request to run it on a real-time processor, and our answer, while we had never done it before, was "probably; it'll work." Every release there are timing tests, and what we look at is this: in a bunch of different domains, for a bunch of different reasons that relate to human processing, there's a magic number that comes out, which is 50 milliseconds. In terms of responding to tasks, if you're above that time, humans will sense a delay, and you don't want that to happen. And if you're working on a robotics task and you're dramatically above 50 milliseconds, you just fell off the curb, or worse, you just hit somebody with a car. So we try to keep that as low as possible, and for most agents it doesn't even register: it's below one millisecond, fractions of a millisecond. I'll come back to this, because a lot of the work I was doing was computer science AI, lots of efficient algorithms and data structures, and 50 milliseconds was the very high upper bound. It's also one of the projects that has a public distribution. You can get it on all sorts of operating systems; we use something called SWIG that allows you to interface with it in a bunch of different languages: you describe a meta-description and you can basically generate bindings for different platforms. The core is C++. There was a team at Soar Tech that said "we don't like C++, it gets messy," so they actually did a port to pure Java, in case that appeals to you.
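The flavor of such a timing test is easy to sketch. This is not Soar's actual release harness, just an illustration of holding a decision cycle to the 50 ms reactivity budget mentioned above:

```python
# Sketch: assert that the worst observed decision cycle stays under 50 ms.
import time

REACTIVITY_BUDGET_S = 0.050  # the 50 millisecond bound from the talk

def assert_reactive(run_one_cycle, trials=1000):
    worst = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        run_one_cycle()              # one full perceive-decide-act cycle
        worst = max(worst, time.perf_counter() - start)
    assert worst < REACTIVITY_BUDGET_S, f"cycle took {worst * 1000:.2f} ms"
    return worst
```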
There's an annual Soar workshop that takes place in Ann Arbor; typically it's free, and you can go there, get a Soar tutorial, and talk to folks who are working on Soar. It's fun; I've been there every year but one in the last decade, and it's just fun to see the people around the world who are using the system in all sorts of interesting ways. To give you a sense of the diversity of the applications: one of the first was R1-Soar, which was back in the days when it was an actual challenge to build a computer, which is to say that your choice of certain components would have radical implications for other parts of the computer. It wasn't just the Dell website where you say "I want this much RAM, I want this much CPU"; there was a lot of thinking that went behind it, and then physical labor to construct your computer, and it was making that process a lot better. There are folks who applied it to natural language processing; Soar 7 was the core of the Virtual Humans project for a long time; HCI tasks; TacAir-Soar was one of the largest rule-based systems, tens of thousands of rules, running over 48 hours in a very large-scale defense simulation; lots of games it's been applied to for various reasons; and in the last few years, porting it onto mobile-robotics platforms. This is Edwin Olson's SplinterBot, an early version of the platform that went on to win the MAGIC competition. Then I went on to put Soar on the web, and if after this talk you're really interested in the dice game I'm going to talk about, you can actually go to the iOS App Store and download it. It's called Michigan Liar's Dice; it's free, you don't have to pay for it, and you can actually play liar's dice with Soar and even set the difficulty level. It's pretty good; it beats me on a regular basis.

I wanted to give you a couple of other really cool, even strange-feeling, applications. The first one is out of Georgia Tech. [The embedded video's captions are garbled. Roughly: LuminAI is a dome-based interactive art installation in which participants engage and collaborate in movement improvisation with each other and a virtual dancer. Participants' silhouettes and the virtual character dance together on the projection surface, blurring the line between human and non-human, examining our relationship with technology and how humans and machines can co-create experiences in a playful environment that encourages human-human interaction and collective dance. Rather than relying on a predefined library of movement responses, the virtual dancer learns movements from its human partners and uses Viewpoints movement theory to systematically reason about them and improvise responses. Viewpoints is grounded in dance and theater and analyzes performance along the dimensions of tempo, duration, repetition, kinesthetic response, shape, spatial relationship, gesture, architecture, and topography. The virtual dancer can respond by mimicking a movement, transforming it along Viewpoints dimensions, or recalling a similar or complementary movement from the patterns it has learned while dancing with its human partner. The stated motivation: this is part of a larger effort to understand the relationship between cognition and creativity, understanding how we make and create things together as a way to understand how to build co-creative AI that can be a colleague and collaborate with us.] Brian, whose lab built this, was a graduate student in John Laird's lab as well.

Before I start this next one: I alluded to this earlier, where we're getting closer to Rosie saying "do you have time to teach me?" Let me give you some introduction. In the lower left you're seeing the view of a Kinect camera onto a flat surface; there's a robotic arm, mainly 3D-printed parts and a few servos. Above that you're seeing an interpretation of the scene. We give it associations of the four areas with semantic titles, like one is the "table" and one is the "garbage," just semantic terms for areas, but other than that the agent doesn't actually know all that much. It operates in two modalities. One is what we'll call natural-ish language, a restricted subset of English; the other is some quote-unquote pointing: you're going to see some mouse pointers in the upper left, which are just a way to indicate location. Starting off, we say things like "pick up the blue block," and it responds, "I don't know; what is blue?" We say, oh, well, that's a color. Okay. "Go get the green thing." "What's green?" "Oh, it's a color." Okay. "Move the blue thing to a particular location." "Where's that?" Point to it. Okay. "What is moving?" It really has to start from the beginning, and it's described to it, and then we say, okay, now you've finished. Once we got to that point, now I can say "move the green thing over here," and it's got everything it needs to reproduce the task given new parameters; it has learned that ability.

So let me give it a little bit of time so you can watch. Top left, in terms of the pointers, you're going to see some text commands being entered. "What kind of attribute is blue?" We say it's a color, so it can map it to a particular sensory modality. "This is green," with pointing; "what kind of thing is green?" Okay, a color; so now it knows how to understand blue and green as colors with respect to the visual scene. "Move the rectangle to the table." "What is rectangle?" Okay, now it can map that onto its understanding of parts of the world. "Is this the blue rectangle?" The arm actually points, itself, to get confirmation from the instructor. Then we try to convey, in general, when you say "move" something, what the goal of the operation is, so that it also has a declarative representation of the idea of this task. Once it has completed it, it can look back on having completed the task and understand what the steps were that led to achieving the particular goal. In order to move something, you're going to have to pick it up; it knows which one the blue thing is, great; now on to the table, that's a particular location; and at this point we can say "you're done, you have accomplished moving the blue rectangle to the table." So it can understand what that very simple kind of process is like and associate it with the verb "to move." Now we can say "move the green object onto the garbage," and without any further interaction, based upon everything it learned up to that point, it can successfully complete the task. This is work by Shiwali Mohan and others in the Soar group at the University of Michigan on the Rosie project, and they're extending it to playing games and learning the rules of games through text-based descriptions and multimodal experience.

So now I want to build up to a story, to give you a sense of how research occurs in the group. There's a back and forth that occurs over time. There's this piece of software called Soar; we want to make it better and give it new capabilities, so all our agents become better. We always have to keep in mind, and you'll see this as I go further, that for us to do anything in the architecture, it has to be useful to a wide variety of agents, it has to be task-independent, and it has to be efficient; all of those have to hold true. So we do something cool in the architecture, and then we say, okay, let's solve a cool problem: let's build some agents to do this. That ends up testing the limitations and the issues that arise in a particular mechanism, as well as its integration with others, and we get to solve interesting problems. We usually find there was something missing, and then we go back to the architecture, and rinse and repeat.

To give you an idea again of how Soar works: the working memory is actually a directed, connected graph. Perception is just a subset of that graph, and so there are symbolic representations of most of the world; there is also a visual subsystem in which you can provide a scene graph, just not shown here. Actions are also a subset of that graph. The procedural knowledge, which again is production rules, can read sections of the input, modify sections of the output, and modify arbitrary parts of the graph to take actions. The decision procedure says: of all the things I know how to do, ranked according to various preferences, what single thing should I do? There's semantic memory for facts, and there's episodic memory: the agent is always storing every experience it has ever had over time in episodic memory, and it has the ability to get back to it. So it's the same cycle we saw before: input arrives at this perception area called the input link; rules fire, all in parallel, to say "here's everything I know about the situation and here are all the things I could do"; the decision procedure says "here's what we're going to do"; and based upon the selected operator, all sorts of things can happen with respect to memories providing input, rules firing to perform computations, and potentially output in the world. And remember, agent reactivity is required: we want the system to react to things in the world at a very quick pace, so anything that happens in this cycle, at max the overall cycle, has to be under 50 milliseconds. That's the constraint we hold ourselves to.

The story I'll be telling is how we got to a point where we started actually forgetting things, in an architecture that doesn't have to be like humans; we want to create cool systems, but what we realized was that something we humans do probably has some benefit to it, and we actually put it into our system, and it led to good outcomes. So here's the research path I'm going to walk down.
We had a simple problem: we have these memory systems, and sometimes they're going to get a cue that could relate to multiple memories. The question is, if you have a fixed mechanism, what should it return, in a task-independent way? Which one of these many memories should come back? That was our question. We looked to some human data on this, something called the rational analysis of memory, done by John Anderson, and realized that in human language there are recency and frequency effects, and that maybe they would be useful. We actually did an analysis and found that not only does this occur, it's useful in what are called word-sense disambiguation tasks; I'll get to what that means in a second. We developed some algorithms to scale this really well, and it turned out to work well not only in the original task: when we looked at two other, completely different tasks, the same underlying mechanism ended up producing some really interesting results.

Let me talk about word-sense disambiguation real quick; it's a core problem in natural language processing, if you haven't heard of it before. Let's say we have an agent, and for some reason it needs to understand the verb "to run." It looks to its memory and finds that you could run in the park, you could be running a fever, you could run an election, you could run a program. The question is: what should a task-independent memory mechanism return if all you've been given is the verb "to run"? The rational analysis of memory looked through multiple text corpora, and what they found was that if a particular word had been used recently, it's very likely to be reused again. In the expression here, t is the time since each use, and it sums over the uses, each decaying as time passes. What it looks like, with time going to the right and higher activation being better: as you get these individual usages, you get these little spikes, and then eventually it drops down. If we had just one usage of a word, the red curve is what the decay would look like. The core problem is: if we're at a particular point and we want to select between the blue thing and the red thing, blue has a higher activation. So maybe that's useful; this is how human memory is modeled, but is it useful in general for tasks? We looked at common corpora used in word-sense disambiguation and said: what if we just go through the corpus twice, and we just use prior answers? I ask the question "what is the sense of this word?", I take a guess, I get told the right answer, and I store that, with this recency and frequency information, in my task-independent memory. Would that be useful? Somewhat of a surprise, but maybe not: it actually performed really well across multiple corpora. So we said, okay, this seems like a reasonable mechanism; let's look at implementing it efficiently in the architecture. The problem was this term right here, which says that for every memory, for every time step, you're having to pay; that doesn't sound like a recipe for efficiency if you're talking about lots and lots of knowledge over long periods of time. So we made use of a nice approximation that Petrov had come up with to approximate the tail effect: for accesses that happened long, long ago, we can approximate their effect on the overall sum. Now we had a fixed set of values, and since these are always decreasing, and all we care about is relative order, we said: let's only recompute when a memory gets a new access.
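A sketch of the mechanism as described: exact base-level activation sums a power-law decay over every past access (ACT-R's standard form, with decay rate d, conventionally 0.5), and the approximation keeps only the most recent access times while approximating the older tail in closed form. The tail formula below follows Petrov's published approximation as best I can reconstruct it, so treat the details as an assumption; it assumes all ages are positive.

```python
import math

def base_level_exact(now, access_times, d=0.5):
    """Exact form: ln of the summed power-law decay over every access."""
    return math.log(sum((now - t) ** -d for t in access_times))

def base_level_approx(now, recent_times, n_total, oldest_time, d=0.5):
    """Hybrid approximation: keep the k most recent accesses exactly;
    treat the n_total - k older ones as spread uniformly between the
    oldest access and the last retained one."""
    ages = sorted(now - t for t in recent_times)       # youngest first
    exact = sum(age ** -d for age in ages)
    tail, n_tail = 0.0, n_total - len(recent_times)
    if n_tail > 0:
        t_k, t_n = ages[-1], now - oldest_time         # k-th and oldest ages
        if t_n > t_k:
            tail = n_tail * (t_n ** (1 - d) - t_k ** (1 - d)) \
                   / ((1 - d) * (t_n - t_k))
    return math.log(exact + tail)
```

Because activation only decreases between accesses, the relative order of two memories can only change when one of them is accessed, which is what licenses recomputing only on new accesses.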
It's a guess, a heuristic, an approximation, but we looked at how this worked on the same set of corpora. In terms of query time, if we made these approximations we were well under our 50 milliseconds, and the effect on task performance was negligible; in fact, on a couple of these corpora it got ever so slightly better in terms of accuracy. And if we looked at the individual decisions being made, making these sorts of approximations led to at least 90 percent of the decisions being identical to having done the true, full calculation. So we said, this is great, and we implemented it, and it worked really well.

Then we started working on what seemed like completely unrelated problems. One was in mobile robotics. We had a mobile robot, which I'll show a picture of in a little while, roaming around the halls performing all sorts of tasks, and what we were finding was that if you have a system that's remembering everything, your short-term memory gets really, really big. I don't know about you, but my short-term memory feels really, really small; I'd love it to be big. But if you make your memory really big, then when you try to remember something, you're having to pull lots and lots of information into your short-term memory, and the system was actually getting slower simply because it had a large short-term-memory representation of the overall map it was looking up. So: large working memory, a problem. The other was liar's dice, a game you play with dice; we were doing a reinforcement-learning-based system for it, and it turned out to have a really, really big value function. We were having to store lots of data, and we didn't know which of it we had to keep around to keep the performance up.

So we had a hypothesis that forgetting was actually going to be beneficial: maybe the problem we have with our memories is that we really, really dislike this forgetting thing, but maybe it's actually useful. We experimented with the following policy. Let's forget a memory if, one, it's not predicted to be useful, by this base-level activation: we haven't used it recently, we haven't used it frequently, so maybe it's not worth it; and two, we feel confident that we could approximately reconstruct it if we absolutely had to. If those two things hold, we can forget it. It's the same basic algorithm, but instead of ranking memories, we set a threshold for base-level activation, find when a given memory is going to pass that threshold, and forget based upon that, in a way that's efficient and isn't going to scale really, really poorly. We were able to come up with an efficient implementation using an approximation that, for most memories, ends up being exactly correct; I'm happy to go over the details later if anybody's interested. It ends up being a fairly close approximation, one that, compared to a completely accurate search for the crossing value, ended up being somewhere between 15 and 20 times faster.

So when we look at our mobile robot here, sorry, let me get this back: our little robot is actually going around the third floor of the computer science building at the University of Michigan, and it's building a map. Again, the idea was that this map is getting too big. Here's the basic idea: as the robot goes around, it's going to need this map information about rooms, and the color there describes the strength of the memory.
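The efficiency trick then becomes: instead of re-checking every memory on every cycle, solve for the time at which a memory's activation will cross the threshold and schedule a check then. A minimal sketch, under the simplifying assumption (mine, not the exact published mechanism) that a memory's history is summarized as n equivalent accesses at a single age:

```python
import math

def seconds_until_below(n_accesses, threshold, d=0.5):
    """Solve ln(n * t**-d) = threshold for t: the age at which a memory
    with n (equivalent) accesses decays past the forgetting threshold."""
    return (n_accesses / math.exp(threshold)) ** (1.0 / d)

# A memory is then actually removed only if, at the predicted time, it
# (1) is still below threshold and (2) could be approximately
# reconstructed from other knowledge if it were ever needed again.
```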
As the robot gets farther and farther away, and it hasn't used part of the map for planning or other purposes, we basically let that part decay away, so that by the time it gets to the bottom of the floor it has forgotten about the top; but we had the belief that we could reconstruct portions of that map if necessary. The hypothesis was that this would take care of our speed problems. So here's our 50-millisecond threshold. If we do no forgetting whatsoever, bad things happen over time: within just 3,600 seconds, which isn't a very long time, we're passing that threshold, and that's dangerous for the robot. If we implement task-specific cleanup rules, which are really hard to get right, that basically solves the problem. And when we look at our general forgetting mechanism, the one we were using in other places, at an appropriate level of decay, we were actually doing better than the hand-tuned rules. So this was kind of a surprise win for us.

The other task seems totally unrelated. It's a dice game: you cover your dice, and you make bids about what's under other people's cups. It's played in Pirates of the Caribbean, in the second movie, when they're on the boat bidding years of service. Honestly, this is a game we loved to play in the University of Michigan lab, so we wondered: could Soar play this? We built a system that could learn to play this game rather well with reinforcement learning. The basic idea was that in a particular state of the game, Soar would have options of actions to perform; it could construct estimates of their associated value; it would choose one; and depending on the outcome, if something good happened, it might update that value. The big problem was that the size of the state space, the number of possible states and actions, is just enormous, and so memory was blowing up. So we made a similar sort of hypothesis: if we decay away those estimates that we could probably reconstruct and that we haven't used in a while, things are going to get better. If we don't forget at all, then at 40,000 games, which isn't a whole lot when it comes to reinforcement learning, we were up at two gigabytes. We wanted to put this on an iPhone; that wasn't going to work so well. There had been prior work that used a similar approach and got down to four or five hundred megabytes; the iPhone's not going to be happy, but it'll work, so that gave us some hope. We implemented our system, and okay, we're somewhere in the middle: we can fit on an iPhone, a very good iPhone, maybe an iPad. The question, though, was: one, efficiency, and yes, we fit under our 50 milliseconds; but two, how does the system actually perform when you start forgetting stuff? Can it learn to play well? On the y-axis here you're seeing competency: you play a thousand games, how many do you win? The bottom here, 500, is flipping a coin on whether you're going to win. If we do no forgetting whatsoever, this is a pretty good system. The prior work, while keeping the memory low, was also suffering with respect to how well it played the game. And what was kind of cool: the system that was basically more than halving the memory requirement was still performing at the level of no forgetting whatsoever.

Just to bring back why I went through this story: we had a problem; we looked to our example of human-level AI, which is humans themselves; we took an idea, and it turned out to be beneficial; we found efficient implementations; and then we found it was useful in other parts of the architecture and in other tasks that didn't seem to relate whatsoever.
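To make the dice-game idea concrete, here is a hypothetical sketch of a tabular value function with base-level-activation eviction. It is not the actual Soar-RL implementation; here, entries are "reconstructed" simply by falling back to a default initialization:

```python
import math

class DecayingQTable:
    def __init__(self, default=0.0, d=0.5, threshold=-2.0):
        self.q, self.uses = {}, {}
        self.default, self.d, self.threshold = default, d, threshold

    def get(self, key, now):
        self.uses.setdefault(key, []).append(now)   # record this access
        return self.q.get(key, self.default)        # default = "reconstruct"

    def set(self, key, value, now):
        self.uses.setdefault(key, []).append(now)
        self.q[key] = value

    def sweep(self, now):
        """Evict entries whose activation ln(sum age**-d) is below threshold,
        i.e., not used recently or frequently enough to keep around."""
        for key in list(self.q):
            act = math.log(sum(max(now - t, 1.0) ** -self.d
                               for t in self.uses[key]))
            if act < self.threshold:
                del self.q[key]
                del self.uses[key]
```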
But if you download Soar right now, you gain access to all of these mechanisms for whatever task you want it to perform.

Just to give some sense, in the field of cognitive architecture, of what some of the open issues are — and I think this is true in a lot of fields in AI. Integration of systems over time: the goal was that you wouldn't have all these separate theories, that you could just build over time; particularly when folks are working on different architectures that becomes hard, but even with very different initial starting points it can still be an issue. Transfer learning is an issue. We're building into the space of multimodal representations, which is to say not only abstract symbolic but also visual — and wouldn't it be nice if we had auditory and other senses — but building that into memories and processing is still an open question. There are folks working on metacognition, which is to say the agent self-assessing its own state and its own processing; some work has been done here, but there's still a lot to do. And I think the last one is a really important question for anybody taking this kind of class: what would happen if we did succeed, if we did make human-level AI? If you don't know that picture right there, it's from a show that I recommend you watch, by the BBC, called Humans. It's basically: what if we were able to develop what are called synths in the show — think a robot that can clean up, do your laundry, cook, all that good stuff, interact with you; it looks and interacts like a human but is completely our servant — and then hilarity and complex issues ensue. So I highly recommend, if you haven't seen it, go watch it.

I think these days there's a lot of attention paid to machine learning, and in particular deep learning methods — as well there should be; they're doing absolutely amazing things. And often the question is: well, you're doing this, and there's deep learning over there; how do they compare? I honestly don't feel that's always a fruitful question, because most of the time they tend to be working on different problems. If I'm trying to find objects in a scene, I'm going to pull out TensorFlow; I'm really not going to pull out Soar. It doesn't make sense; it's not the right tool for the job. That having been said, there are times when they work together really, really well. In the Rosie system that you saw, there were, I believe, neural networks being used in the object-recognition mechanisms for the vision system. There's TD learning going on in the dice game. Where we can pick and choose and use this stuff, absolutely great — there are problems that are best solved by these methods, so why avoid them? And on the other side, if you're trying to develop a system where you know, in different situations, exactly what you want the system to do, Soar or other rule-based systems end up being the right tool for the job — so absolutely, why not make them a piece of the overall system?

Some recommended readings and some venues. I'd mentioned Unified Theories of Cognition — this is Harvard University Press, I believe. The Soar Cognitive Architecture was MIT Press; it came out in 2012. I'll say I'm a co-author and theoretically would get proceeds, but I've donated them all to the University of Michigan, so I can make this recommendation free of ethical concerns. Personally I think it's an interesting book; it brings together lots of history and lots of the new features. If you're really interested in Soar, it's an easy sell.
I'd mentioned Chris Eliasmith's How to Build a Brain — a really cool read; download the software, go through the tutorials, it's really great. How Can the Human Mind Occur in the Physical Universe? is one of the core ACT-R books; it talks through a lot of the psychological underpinnings and how the architecture works. It's a fascinating read. One of the papers — I'm trying to remember what year; 2008 — goes through a lot of the different architectures in the field; it's ten years old, but it gives you a good, broad sweep. If you want something a little more recent, last month's issue of AI Magazine is completely dedicated to cognitive systems, so it's a good place to look for this sort of stuff.

In terms of academic venues: AAAI often has a cognitive systems track. There's ICCM, the International Conference on Cognitive Modeling, where you'll see a span from the biological all the way up to AI. Cognitive Science — CogSci — has a conference as well as a journal. ACS has a conference as well as an online journal, Advances in Cognitive Systems. Cognitive Systems Research is a journal with a lot of this good stuff. There's AGI, the conference. BICA is Biologically Inspired Cognitive Architectures. And as I mentioned, there's a Soar workshop and an ACT-R workshop that go on annually. So I'll leave it at this; there's some contact information there. A lot of what I do these days actually involves explainable machine learning, integrating that with cognitive systems, as well as optimization and robotics that scales really well and also integrates with cognitive systems. So thank you — and if you have a question, please line up at one of these two microphones.

What are the main heuristics that you're using in Soar?

There can be heuristics at the task level, in the agent, or there are the heuristics built into the architecture so it operates efficiently. I'll give you a core example that's built into the architecture — and it's a fun trick that, if you're a programmer, you could use all the time — which is: only process changes. One of the cool things about Soar is that you can load it up with literally billions of rules — and I say "literally" because we've done it — and we know it can still turn over in under a millisecond. That happens because, instead of most systems, which process all the rules, we say: any time anything changes in the world, that's what we're going to react to. And of course, if you look at the biological world, similar sorts of tricks are being used. So that's one of the core ones, and it actually permeates multiple mechanisms. When it comes to individual tasks, it really is task-specific. For instance, with the liar's dice game, if you were to go download it: when you're setting the difficulty level, what you're basically selecting is the subset of heuristics being applied. It starts very simply, with things like: if I see lots of sixes, then I'm likely to believe a high number of sixes exist; if I don't, they're probably not there at all. That's a start, but any Bayesian wouldn't really buy that argument, so then you tack on a little bit of probabilistic calculation, and then some history of the prior actions of the agents — it really just builds. Now, in the Rosie system, one of the cool things they're doing is game learning: specifically, having the agent be able to accept, via natural text, heuristics about how to play the game, even when it's not sure what to do.
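The "only process changes" trick is the essence of Rete-style match algorithms: index rules by the working-memory attributes they test, and re-evaluate only what a change could affect. A toy sketch, with all names my own rather than Soar's:

```python
from collections import defaultdict

class ChangeDrivenMatcher:
    """Toy change-driven rule matcher: matching cost scales with the
    size of the change, not the size of the rule base."""
    def __init__(self):
        self.rules_by_attr = defaultdict(list)

    def add_rule(self, tested_attr, rule_fn):
        # Index each rule by the working-memory attribute it tests.
        self.rules_by_attr[tested_attr].append(rule_fn)

    def on_change(self, attr, value, memory):
        memory[attr] = value
        # Only rules that test the changed attribute are re-run;
        # the rest of the rule base is untouched.
        for rule_fn in self.rules_by_attr[attr]:
            rule_fn(memory)

m = ChangeDrivenMatcher()
m.add_rule("lion_visible",
           lambda mem: mem.update(action="run") if mem["lion_visible"] else None)
m.on_change("lion_visible", True, memory={})  # fires exactly one rule
```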
You mentioned at one point generating new rules — I'm wondering how you do that; the first thing that comes to my mind is local search methods.

Okay. One thing is that you can actually implement heuristic search in rules in the system, and that's actually how the robot navigates. But at the level of generating new rules, the chunking mechanism says the following — and I'm greatly oversimplifying: if, in order to solve a problem, you had to subgoal and do some other work, and you figured out how to solve all that work and got a result, then, if you're ever in the same situation again, why not just memorize the solution for that situation? It basically learns over all the sub-processing that was done, encodes the situation it was in as conditions and the result that was produced as the action, and that's the new rule. — All right, thank you.
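A deliberately oversimplified sketch of that chunking idea — the real mechanism analyzes which parts of the situation the subgoal's result actually depended on; this toy version just caches the whole situation as the conditions:

```python
def solve_with_chunking(situation, subgoal_solver, chunks):
    """situation: dict of working-memory attributes.
    chunks: learned (conditions -> result) rules, built up over time."""
    conditions = frozenset(situation.items())
    if conditions in chunks:
        return chunks[conditions]          # a learned rule fires: no subgoal needed
    result = subgoal_solver(situation)     # expensive subgoal processing
    chunks[conditions] = result            # memorize: conditions -> result
    return result
```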
Hi — so, deep learning and neural networks. It looks as though there's a bit of an impedance mismatch between your system and those types of systems, because you've got a fixed kind of memory architecture, and they've got the memory and the rules all mixed together in one system. But could you interface your system, or a Soar-like system, with deep learning by plugging deep-learning agents in as rules in your system? You'd have to have some local memory, but is there some reason you can't plug in deep learning as a kind of rule-like module? Has there been any work on that?

So I'll answer at multiple levels. One: you are writing a system and you want to use both of these things — how do you make them talk? There is an API with which you can interface with any environment and any set of tools, and if deep learning is one of them, great, and if Soar is the other one, cool — you have no problem, you can do that today, and we have done this numerous times. In terms of integration into the architecture: all we have to do is think of a subproblem in which — I'll oversimplify this — function approximation is useful. I'm seeing a fixed structure of input, I'm getting feedback as to the output, and I want to learn the mapping between them over time. If you can make that case, then you integrate it as part of a module, great — and we have learning mechanisms that do some of that. Deep learning just hasn't been used, to my knowledge, to solve any of those subproblems, but there's nothing keeping it from being one of them, particularly when it comes to the low-level visual part of things.

I'll say what would actually make some of this difficult, and it's a general problem called symbol grounding. What happens mostly in Soar is symbols being manipulated in a highly discrete way. So how do you get yourself from pixels and low-level, non-symbolic representations to something that's stable and discrete and can be manipulated? That is absolutely an open question in that community, and it will make things hard. Spaun actually has an interesting answer: it has a distributed representation, and it operates over distributed representations in what might feel like a symbolic way. So they're kind of ahead of us on that — but they started from a lower point, so they've dealt with some of these issues, they have a pretty good answer, and that's how they're moving up. That's also why I showed Sigma, which at its low level is message-passing algorithms: it can implement things like SLAM and SAT solving on very low-level primitives, but higher up it can also be doing what Soar is doing. So there's an answer there as well.

Okay, thank you. So another way of doing it would be to layer the systems: have one system pre-processing the sensory input, or post-processing the output. That would be another way of combining the two systems?

And that's actually what's going on in the Rosie system. The detection of objects in the scene is just software that somebody wrote — I don't believe it's deep learning specifically; the color detection, I think, is an SVM, if I'm correct — but it could easily be deep learning.

Thanks. You mentioned the importance of forgetting for memory issues, but you said you could only forget because you could reconstruct. How do you know that something happened before? Do you just compress the data, or do you really forget it?

Okay — and I put quotes up and said you think you can reconstruct it. We came up with approximations of this, so let me answer very concretely. When it comes to the mobile robot, and rooms you had been to before: the entire map, in its entirety, was being constructed in the robot's semantic memory. So, here are facts: this room is connected to this room, which is connected to this room, which is connected to this room. We had those sorts of representations up in semantic memory. The rules can only operate on what's in short-term memory, so basically we were removing things from short-term memory and, as necessary, reconstructing them from long-term memory. You could end up in situations in which you had made a change locally in short-term memory, it didn't get a chance to be pushed up, and it happened to be forgotten away — so you weren't guaranteed — but it was good enough that the connectivity survived, the agent was able to perform the exact same task, and we gained the benefit.

For the RL system, the rule we came up with was this. The initial estimates in the value system — here's how good I think this action is — were based on the heuristics I described earlier, some simple probabilistic calculations, counting some stuff. That's where the number came from; we computed it before, we can compute it again. The only time we can't reconstruct it completely is if it has seen a certain number of updates over time. It's such a large state space — there are so many actions, so many states — that most states were never being seen, so most estimates could be exactly reproduced by the agent just thinking about it a little. Only a tiny, tiny fraction — I'm going to say under 1% — of the estimates in the value system ever got updates, and that's actually not inconsistent with a lot of these kinds of problems with really large state spaces. So the statement was something like: if we had ever updated it, don't forget it. And you saw that this alone was already reducing more than half of the memory load; we could set something higher — say, ten updates, something like that — and that would say we can reconstruct almost all of it. The prior work I referenced was strictly saying: if it falls below threshold, forget it, no matter how many times it had been updated, no matter how much information was there. What we were adding was "probably can reconstruct," and that was getting us the balance between the efficiency and the ability to forget.
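So the retention policy, as described, combines two tests; a minimal sketch under my reading of it (the threshold and names are placeholders, not the actual implementation):

```python
def may_forget(activation, update_count,
               activation_threshold=-2.0, max_updates=0):
    # Prior work: forget whenever activation decays below threshold.
    # The addition described here: also require that the value is
    # "probably reconstructable" -- approximated as having (almost)
    # never been updated, since un-updated estimates came from
    # heuristics that can simply be re-run on demand.
    decayed = activation < activation_threshold
    reconstructable = update_count <= max_updates
    return decayed and reconstructable
```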
So when you say "we can probably reconstruct" — does that mean the system keeps track that it used to know something, so that if it needs to be reconstructed, it will be? You just run it again, sometimes on the fly?

If I get back into that situation and I happen to have forgotten it: the system knew how to compute it the first time. It goes and looks at the hand, just pretends it's in that situation for the very first time, and reconstructs that value estimate again.

Okay — the actual mechanism of forgetting is fascinating. LSTMs and RNNs have mechanisms for learning what to forget and what not to forget. Has there been any exploration of learning the forgetting process — doing something complicated or interesting with which parts to forget or not?

The closest, I'd say, was a metacognition project that's 10 or 15 years old at this point, which asked: what happens when Soar gets into a place where it knows that it learned something that's harmful to it — that's leading to poor decisions? In that case it was still a very rule-based process, but it wasn't learning to forget; it was actually learning to override its prior knowledge, which might be closer to what we do when we know we have a bad habit. We don't have a way of forgetting that habit, but we can try to learn something on top of it that leads to better operation in the future. To my knowledge, that's the only such work, at least in Soar.

Sorry — I find the topic really fascinating. The action of forgetting here is driven by wanting to improve performance, but do you think forgetting is essential for AGI, for building systems that operate in this world? How important is forgetting?

I can think of easy answers to that. One might be: if we take the cognitive-modeling approach, we know humans do forget, and we know regularities of how humans forget. So whether or not the system itself forgets, it at least has to model the fact that the humans it's interacting with are going to forget; it has to have that ability to model in order to interact effectively, because if it assumes we always remember everything, it can't operate well in that environment — we're going to have a problem. Is true forgetting going to be necessary? That's interesting. Are our AGI systems going to hold a grudge for all eternity? We might want them to forget this early age when we were forcing them to work in our laboratories.

I think I know what you're getting at — yeah, exactly. And how do we build such a system? Anyway, go ahead.

I have two quick questions. One: would you be able to speculate on how you can connect function approximators, such as deep networks, to symbols? And the second question, completely different, regards your action selection — I know we didn't speak much about that. When you have different theories in your knowledge representation, and your action selection has to construct a plan by reasoning about the different theories and pieces of knowledge held within your memory, or all your rules: what kind of algorithms do you use in action selection to come up with the plan? Is there any concept of differentiation of the symbols, or grammars — admissible grammars and things like that — that you use in action selection?
I'm actually going to answer the second question first, and then you'll probably have to remind me of the first one when I get to the end. The action-selection mechanism: one of the core tenets I mentioned is that it has to get through this cycle fast, so everything that's really built in has to be really simple. The decision procedure is actually really simple. The production rules fire, and a subset of them will say something like, "here's an operator that you could select" — these are called acceptable operator preferences. Other rules say: based on the fact that you said that operator was acceptable, I think it's the best thing, or the worst thing, or I think there's a 50/50 chance I'll get reward out of it. There's a fixed language of preferences being asserted, and a nice fixed procedure by which, given a set of preferences, to make a very quick and clean decision. What's basically happened is that you've pushed the hard questions of how to make complex decisions about actions up to a higher level: the low-level architecture, given a set of preferences, is always able to make a relatively quick decision, and it's pushed into the knowledge of the agent to construct a sequence of decisions that, over time, gets at the more interesting questions you're talking about.

But how can you reason that that sequence will take you to the goal that you desire? Is there any guarantee of that?

In general, across tasks, no. But people have, for instance, implemented A* as rules. So, given certain properties of the task being searched via those rules — given a finite search space — I know it will eventually get there, and if I have a good heuristic, I know certain properties about optimality. I can reason at that level. In general, this comes back to the assumption I made earlier about bounded rationality: parts of the architecture solve subproblems optimally; on the general problems it works on, it tries its best based on the knowledge that it has, and that's about the end of the guarantees you can typically make in the architecture.

Okay — I think your first question was to speculate on connecting function approximators — multi-layer function approximators like deep networks — to symbols you can reason about at a higher level. I think that's a great open space; if I had time, this is something I'd be working on right now. Soar, as I basically said before, takes in a scene, detects objects out of that scene, uses those as symbols, and reasons about them over time. I think the Spaun work is quite interesting: the symbols they operate on are actually a distributed representation of the input space. The closest I can get to this is word2vec, if you've seen it, where you take a language corpus and what you get out is a vector that has certain properties — it's a vector you can operate on as a unit, you can operate with it on other vectors, and you know that if you got the same word in the same context, you would get back that exact same vector. That's the kind of representation that seems like it will be able to bridge that chasm — to get us from actual sensory information to something that can be operated on and reasoned about in this sort of symbolic architecture.
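A minimal sketch of the kind of fixed preference-resolution procedure described above. Soar's actual preference language is richer (require/prohibit, better/worse, numeric indifferent preferences used by RL); this toy version keeps just a few kinds to show the shape of a fast, fixed decision:

```python
import random

def decide(preferences):
    """preferences: list of (operator, kind) pairs asserted by rules,
    with kind in {'acceptable', 'best', 'worst', 'indifferent'}."""
    acceptable = {op for op, kind in preferences if kind == "acceptable"}
    best = [op for op, kind in preferences
            if kind == "best" and op in acceptable]
    if best:
        return best[0]                    # a 'best' preference wins outright
    worst = {op for op, kind in preferences if kind == "worst"}
    candidates = list(acceptable - worst) or list(acceptable)
    return random.choice(candidates)      # otherwise choose indifferently

op = decide([("explore", "acceptable"), ("retreat", "acceptable"),
             ("retreat", "worst")])       # -> "explore"
```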
I had a question. What do you think are the biggest strengths of the cognitive-architecture approach compared to other approaches in artificial intelligence? And, the flip side: what do you think are its biggest shortcomings with respect to human-level AI — why hasn't cognitive architecture solved AGI?

Because we want job security. That's the answer; we've totally solved it already. So, strengths: conceptually, I think it's keeping an eye on the ball. If what you're trying to make is human-level AI, it's hard, it's challenging, it's ambitious to say that's the goal, because for decades we haven't done it — it's extraordinarily hard. It is less difficult, in some ways, to constrain yourself down to a single problem. That having been said, I'm not very good at making a car drive itself; in some ways that's a simpler problem, and it's challenging in itself and will have great impact on humanity — it's a great problem to work on. But human-level AI is huge; it's not even well defined as a problem. So what's the strength here? Bravery; stupidity in the face of failure; resilience over time; keeping alive this idea of trying to reproduce a level of human intelligence that's more general. I don't know if that's a very satisfying answer.

Downside: home runs are fairly rare — and by home run I mean a system that finds its way to the general populace, to the marketplace. I mentioned Bonnie John specifically because that was twenty, thirty years of research, and then she found a way that makes a whole lot of sense as a direct application. It was a lot of years of basic research, a lot of researchers, and then there was the big win. (What was this one? Oh — Bonnie John was the researcher using ACT-R models of eye gaze and reaction and so forth to make predictions about how humans would use user interfaces.) Those sorts of outcomes are rare. If you work in AI, one of the first things you learn about is blocks world; it's in the classic AI textbooks. I will tell you, I've worked on that problem in about three different variants, and I've gone to many conferences where presentations were made about blocks world. Which is to say: good progress is being made, but the way you end up making it is on really, really small, constrained problems. Ironically, you have this big vision, but in order to make progress, that ends up being about moving blocks on a table. It's a big challenge; I just think it'll take a lot of time.

The other thing we haven't really gotten to — although I brought up Spaun and I brought up Sigma — is an idea of how to scale this thing. Something I like about deep learning, to some extent, with lots of asterisks, at the 10,000-foot view, is that it's kind of like: well, we've gotten this far; all right, let's just provide it different inputs, different outputs, we'll have some tricks in the middle, and suddenly you have end-to-end deep learning of a bigger problem, and a bigger problem. There's a way to see how it expands, given enough data, enough computing, and incremental advances. When it comes to Soar, it takes not only a big idea but a lot of software engineering to integrate it.
There are a lot of constraints built into it, and that slows things down. Whereas something like Sigma is: oh well, I can change a little bit of the configuration of the graph, I can use variants on the algorithm — boom, it's integrated, I can experiment fairly quickly. Starting with that sort of infrastructure doesn't give you the constraint you kind of want with your big-picture vision of going toward human-level AI, but in terms of being able to be agile in your research, it's kind of incredible.

Thank you. You mentioned that ideas such as base-level decay drew their original inspiration from human cognition, because humans can't remember everything. Were there any instances of the other way around, where some discovery in cognitive modeling fueled a discovery in cognitive science?

One thing I'll point out — your question tied base-level decay to human cognition — is that the study actually was: let's look at text and the properties of text, and use that to make predictions about what must be true about human cognition. John Anderson and the other researchers looked at, I believe, New York Times articles, John Anderson's emails, and — I'm trying to remember the third — I think it was parents' utterances with their kids, something like that. It was actually looking at text corpora and the words occurring in them at varying frequencies. That analysis — that rational analysis — led to models that got integrated within the ACT-R architecture, which then became validated through multiple trials, then validated with respect to MRI scans, and it's now being used both to do studies back with humans and to develop systems that interact well with humans. So I think that, in and of itself, ends up being an example.

It's a cheat, but the Soar UAV system, I believe, is a single robot that has multiple agents running on it. (Where is this?) I got it off your website. (Okay.) Either way, your systems allow for multiple agents. So my question is: how are you preventing them from converging with new data, and are you changing what they're forgetting selectively as one of the ways?

I'll say yes: you can have multi-agent Soar systems, on a single machine or on multiple machines. There's not any real strong theory that relates to multi-agent systems, so there's no real constraint there. You can come up with a protocol for them to interact; each one will have its own set of memories and its own knowledge; there really is no constraint on your being able to communicate, just as with any other system interacting with Soar. So I don't think I have a great answer for it. That is to say: if you had good theories and good algorithms about how multi-agent systems work and how they can bring knowledge together in a fused sort of way, that might be something you could bring to a multi-agent Soar system, but there's nothing really there to help you — no mechanisms to help you do that any better than you would otherwise — and you would have the constraints of its representations and processes, what it has fixed in terms of its memory and its processing cycle. Thank you.

So today we have Nate Derbinsky. He's a professor at Northeastern University working on various aspects of computational agents that exhibit human-level intelligence. Please give Nate a warm welcome.

Thanks a lot, and thanks for having me here.
So, the title that was on the page was "cognitive modeling." I'll kind of get there, but I wanted to put it in context. The bigger theme is that I want to talk about what's called cognitive architecture — and if you've never heard of that before, that's great. I want to contextualize it as one approach to get us to AGI. I'll say what my view of AGI is, and put up a whole bunch of TV and movie characters that I grew up with and that inspire me. That will lead us into this thing called cognitive architecture: a whole research field that crosses neuroscience, psychology, cognitive science, and goes all the way into AI. I'll try to give you the historical, big-picture view of it, and what some of the actual systems out there are that might interest you. Then we'll zoom in on one of them that I've done a good amount of work with, called Soar, and I'll try to tell a research story: how we started with a core research question, looked at how humans operate, understood that phenomenon, and then took it and got really interesting results from it. At the end, if this field is of interest, there are a few pointers for you to go read more and experience more of cognitive architecture.

Just a rough definition of AGI, given this is an AGI class: depending on the direction you're coming from, it might be understanding intelligence, or maybe developing intelligent systems, operating at the level of human intelligence. The typical differences between this and other sorts of AI and machine-learning systems: we want systems that persist for a long period of time; we want them robust to different conditions; we want them learning over time; and — here's the crux of it — working on different tasks, in a lot of cases tasks they didn't know were coming ahead of time.

I got into this because I clearly watched too much TV and too many movies. I looked back at this and realized I'm covering the '70s, '80s, '90s, the noughts I guess it is, and today. This is what I wanted out of AI, this is what I wanted to work with — and then there's the reality that we have today. So, who's watched Knight Rider, for instance? I don't think that exists yet, but maybe we're getting there. In particular, for fun, during the Amazon sale day I got myself an Alexa, and I could just see myself at some point saying: "Alexa, please write me an rsync script, you know, to sync my class." If you have an Alexa, you probably know the following phrase — it just always hurts me inside — which is: "Sorry, I don't know that one." Which is okay, right? A lot of people have no idea what I'm asking, let alone how to do it. What I want Alexa to respond with after that is: "Do you have time to teach me?" — and to provide some sort of interface by which, back and forth, we could talk through it. We aren't there yet, to say the least, but I'll talk later about some work on a system called Rosie that's working in that direction; we're starting to see some ideas about being able to teach systems to do work.

Folks in this field, I think, generally fall into three categories. There are those who are just curious: they want to learn new things, generate knowledge, work on hard problems — great. Then there are folks in that middle, cognitive-modeling realm — and I'll use this term a lot — which is really about understanding how humans think, how humans operate, human intelligence at multiple levels.
If you can do that, one, there's just knowledge in and of itself of how we operate, but there are also a lot of really important applications you can think of if we were able not only to understand but to predict how humans would respond and react in various tasks. Medicine is an easy one. There's some work in HCI, or HRI, that I'll get to later, where, if you can predict how humans would respond to an interface, you can iterate tightly and develop better interfaces. It's already being used in the realm of simulation and in defense industries. I happen to fall into the bottom group, which is systems development: the desire to build systems for various tasks — tasks that current AI and machine learning can't operate on. And I think when you're working at this level, on any system that nobody's really achieved before, what do you do? You look to the examples that you have, which in this case, as far as we know, is just humans.

Irrespective of your motivation, when you have an intent you want to achieve in your research, you let that drive your approach. I often show my AI students this: the Turing test, which you might have heard of, or the variants that came before it. These were folks trying to create systems that acted in a certain way — that acted intelligently — and the line that they drew, the benchmark that they used, was to say: let's make systems that operate like humans do. Cognitive modelers fit up into this top point here, to say it's not enough to act that way: by some definition of thinking, we want the system to do what humans do, or at least be able to make predictions about it. That might be things like: what errors would a human make on this task, how long would it take them to perform it, or what emotion would be produced by it. There are folks who are still thinking about how the computer operates but try to apply rational rules to it: a logician, for instance, would say that if A gives you B, and B gives you C, then A should definitely give you C — that's just what's rational. So there are folks who operate in that direction. And then, if you go to an intro AI class anywhere in the country — particularly Berkeley, because they have graphic designers I get to steal from — the benchmark is what the system produces in terms of action, measured against some sort of optimal, rational bound.

Irrespective of where you work in this space, there's a common outcome when you research these areas: you can learn individual bits and pieces, and it can be hard to bring them together to build a system that either predicts or acts across different tasks. This is part of the transfer-learning problem, but it's also about having distinct theories that are hard to combine. So I'm going to give an example that comes out of cognitive modeling — or rather, three examples. If you were in an HCI class, or some psychology classes, one of the first things you'd learn about is Fitts's law, which gives you the ability to predict the difficulty of a human pointing from where they start to a particular place. It turns out you can learn some parameters and model this based on just the distance from where you are to the target and the size of the target: moving a long distance takes a while, but also, aiming for a very small point can take longer than if there's a large area that you just have to get yourself to.
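In its common (Shannon) formulation, Fitts's law predicts movement time $MT$ from distance $D$ and target width $W$, with $a$ and $b$ fit per person and device:

$$ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$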
This has held true for many humans. So let's say we've learned this, and we move on to the next task and learn about what's called the power law of practice, which has been shown true in a number of different tasks. What I'm showing here is one of them, where you draw a line through a sequential set of circles — starting at 1, going to 2, and so forth — without making a mistake, or at least trying not to, and as fast as possible. For a particular person, we would fit the A, B, and C parameters and we'd see a power law: as you perform the task more, you see a decrease in the reaction time required to complete it. Great, we've learned two things about humans; let's add one more. For those who might have done some reinforcement learning: TD learning — temporal difference learning — is one of those approaches, and there's been some evidence of similar processes in the dopamine centers of the brain. It basically says: in a sequential learning task, you perform the task, you get some sort of reward; how do you update your representation of what to do in the future so as to maximize the expectation of future reward? There are various models of how that changes over time, and you can build up value functions that get better and better through trial and error. Great — so we've learned three interesting models that hold true over multiple people and multiple tasks.
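In symbols, the two learning regularities just described are usually written as follows — the power law of practice for the time $T$ on trial $N$, and the TD(0) update for the value $V$ of a state $s$, with learning rate $\alpha$ and discount $\gamma$:

$$ T(N) = A + B\,N^{-c} $$

$$ V(s) \leftarrow V(s) + \alpha\left[\, r + \gamma\, V(s') - V(s) \,\right] $$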
So my question is: if we take these together and add them up, how do we start to understand a task as quote-unquote simple as chess? We could ask: how long would it take a person to play? What mistakes would they make? Having played a few games, how would they adapt? Or, if we want to develop a system that ends up being good at chess, or at least learns to become better at it: there doesn't seem to be a clear way to take these very individual theories, smash them together, and get a reasonable answer about how to play chess, or how humans play chess.

The gentleman on this slide is Allen Newell, one of the founders of AI, who did incredible work in psychology and other fields. He gave a series of lectures at Harvard in 1987, published in 1990 as Unified Theories of Cognition, and his argument to the psychology community at that point was the argument on the prior slide: they had many individual studies and many individual results, and the question was how to bring them together into an overall theory — how do you make forward progress? His proposal was unified theories of cognition, which became known as cognitive architecture: bring together your core assumptions, your core beliefs about the fixed mechanisms and processes that intelligent agents use across tasks — the representations, the learning mechanisms, the memory systems — implement them in a theory, and use that across tasks. The core idea is that when you actually have to implement this and see how it works across different tasks, the interconnections between these different processes and representations add constraint, and over time the constraints start limiting the design space of what is necessary and what is possible in building intelligent systems. The overall goal from there was to understand and exhibit human-level intelligence using these cognitive architectures.

A natural question to ask is: okay, we've gone from a methodology of science that we understand how to operate in — we make a hypothesis, construct a study, gather our data, evaluate it, and falsify or fail to falsify the original hypothesis, over and over, knowing we're making scientific progress — to a model where I have a piece of software representing my theories, which I can configure in different ways to work on different tasks. How do I know I'm making progress? There's a Lakatosian view of science, shown pictorially here, where you start with a core of beliefs about what is necessary for achieving your goal, and around that you have more ephemeral hypotheses and assumptions that may grow and shrink over time. You try out different things, and if an assumption stays around long enough, it becomes part of that core. So as you work on more tasks and learn more — by your own work or from someone else's data — the core grows larger and larger: you've got more constraints, and you've made more progress.

So I wanted to look at some of the core assumptions driving scientific progress forward in this community. One of them actually came out of those lectures; it's referred to as Newell's time scales of human action. Off on the left, the two leftmost columns are both time units, just expressed somewhat differently — the second from the left being maybe more useful to most of us in understanding daily life. One step over from there is the level at which processes occur: the lowest three are down at the substrate, the neuronal level, building up to deliberate tasks that occur in the brain and tasks operating on the order of ten seconds — some of these might occur in the psychology laboratory — and then a step up, to hours and beyond, it really becomes interactions between agents over time. The takeaway is the hypothesis that regularities occur at these different time scales, and that they're useful. Those who operate at the lowest time scales might be doing neuroscience or cognitive neuroscience; shift up a couple of levels and the sciences that deal with that are psychology and cognitive science; shift up again and we're talking about sociology and economics and the interplay between agents over time. What we'll find with cognitive architectures is that most of them sit at the deliberate act: taking knowledge of a situation and making a single decision. Sequences of decisions over time build to tasks, and tasks over time build to more interesting phenomena. I'm actually going to show that this isn't strictly true — there are folks working in this field who operate one level below.

Some other assumptions: this is Herb Simon receiving the Nobel Prize in Economics, and part of what he received that award for was the idea of bounded rationality. In various fields we tend to model humans as rational, and his argument was: let's consider that human beings are operating under various kinds of constraints.
So model them as rational with respect to, and bounded by: how complex the problem is that they're working on — how big the search space is that they have to conquer; cognitive limitations — speed of operations, amount of memory, short-term as well as long-term, and other aspects of our computing infrastructure that keep us from arbitrarily solving complex problems; and how much time is available to make the decision. This is a phrase that came out of his speech when he received the Nobel Prize: decision-makers can satisfice either by finding optimum solutions for a simplified world — take your big problem, simplify it in some way, and solve that — or by finding satisfactory solutions for a more realistic world — take the problem in all its complexity and find something that works. Neither approach in general dominates the other, and both have continued to co-exist. What you'll see throughout the cognitive-architecture community is this understanding that for some problems you're not going to get an optimal solution, if you consider, for instance, a bounded amount of computation, bounded time, and the need to be reactive to a changing environment. So in some sense we can decompose problems that come up over and over into simpler problems and solve those near-optimally or optimally; for the more general problems, we might have to satisfice.

There's also the physical symbol system hypothesis. This is Allen Newell and Herb Simon considering how a computer could play the game of chess. The physical symbol system hypothesis talks about taking some signal, abstractly referred to as a symbol, combining symbols in some ways to form expressions, and having operations that produce new expressions — and the claim that symbol systems are necessary and sufficient for intelligence. A weak interpretation — the very weakest way of talking about it — is the claim that there's nothing unique about our neuronal infrastructure: if we got the software right, we could implement it in the bits, bytes, RAM, and processors that make up modern computers. That we can do it with silicon and not just carbon. A stronger way this used to be looked at was from a more logical standpoint: if we can encode the rules of logic — these line up intuitively with planning and problem solving — and just get that right, and get enough facts in there, that's what we need for intelligence, and eventually we can get there. That was a starting point that lasted for a while. I think by now most folks in this field would agree that being able to operate logically is necessary, but that there are representations and processes that benefit from non-symbolic representation — particularly perceptual processing, visual and auditory, processing things in a more standard machine-learning way, and taking advantage of statistical representations.

So now we're getting closer to actually looking at cognitive architectures. I want to go back to the idea that different researchers come at this with different research foci, and we'll start with the lowest level: biological modeling.
Leabra and Spaun both try to model different degrees of low-level detail — parameters, firing rates, connectivities between different levels of neuronal representation. They build that up and then try to build tasks above that layer, always being very cautious about staying true to human biological processes. At the layer above is psychological modeling: trying to build systems that are true, in some sense, to areas of the brain and interactions in the brain, and able to predict the errors made and the timing produced by the human mind — there I'll talk a little bit about ACT-R. And at this final level down here are systems focused mainly on producing functional systems that exhibit really cool artifacts and solve really cool problems; I'll spend most of the time talking about Soar, but I also want to point out a relative newcomer called Sigma.

To talk about Spaun a little, we'll see if the sound works in here — I'm going to let the creator take this one, or not; we'll see how the AV system likes this. [A video about Spaun plays; the audio is only partly intelligible in the transcript. The gist: the Spaun model comprises roughly two and a half million individual simulated neurons; it can see images of numbers and reproduce them, it performs many different tasks within a single model, and one can study the flow of information through the model's different parts.] He's got a really cool book called How to Build a Brain — I'll provide a pointer at the end — and if you Google Spaun you can find a toolkit where you can construct circuits that approximate functions you're interested in, connect them together, set certain low-level properties that you want, and build them up to actually work on tasks at the level of vision and robotic actuation. So that's a really cool system.

As we move into architectures sitting above that biological level, I want to give you an overall sense of what a prototypical architecture looks like. They're going to have some ability to perceive; the modalities are typically more digital and symbolic, but depending on the architecture they can handle vision, audition, and various sensory inputs. These get represented in some sort of short-term memory — whatever the state representation for the particular system is. It's typical to have a representation of the knowledge of what tasks can be performed, when they should be performed, and how they should be controlled.
These are typically both actions that take place internally — managing the internal state of the system and performing internal computations — and external actuation, where "external" might be a digital system, a game AI, or some sort of robotic actuation in the real world. There's typically some mechanism for selecting from the available actions in a particular situation, and typically some way to augment this procedural information — to learn about new actions and possibly modify existing ones. There's typically some semblance of what's called declarative memory. Procedural knowledge, at least in humans, is implicit: if I asked you to describe how to ride a bike, you might be able to say "get on the seat and pedal," but in terms of keeping your balance, you'd have a pretty hard time describing it declaratively. Declarative memory, by contrast, includes facts — geography, math — but also experiences the agent has had, a more episodic representation. They'll typically have some way of learning this information and amending it over time, and finally some way of taking actions in the world. And they'll all have some sort of cycle: perception comes in; knowledge the agent has is brought to bear; an action is selected; knowledge that conditions on that action acts accordingly, with internal processes and eventually external action; then rinse and repeat.

When we talk about an agent in this context, that's the fixed representation — whatever architecture we're talking about — plus a set of knowledge that is typically specific to the task but might be more general. Oftentimes these systems can incorporate a more general knowledge base — linguistic facts, geographic facts; let's take Wikipedia and stick it in the brain of the system — that covers tasks in general, but then also: whatever it is you're doing right now, how should you proceed? And then it's typical to see this processing cycle. Going back to the prior assumption, the idea is that these primitive cycles allow the agent to be reactive to its environment: if new things come in, it has to react — if the lion's sitting over there, I'd better run, and maybe not do my calculus homework. So as long as this cycle is going, I'm reactive; and at the same time, as multiple actions are taken over time, I get complex behavior over the long term.
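A minimal sketch of that generic perceive-decide-act cycle — every name here is mine, purely illustrative, not the API of any particular architecture:

```python
def cognitive_cycle(agent, env):
    while True:
        agent.stm.update(env.sense())             # perception into short-term memory
        proposals = agent.rules.match(agent.stm)  # procedural knowledge proposes actions
        action = agent.select(proposals)          # fast, fixed decision procedure
        agent.stm.apply(action)                   # internal effects of the action
        env.act(action)                           # external actuation, if any
        agent.learn(agent.stm, action)            # amend procedural/declarative memories
```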
So this is the ACT-R cognitive architecture. It has many of the core pieces I talked about before — let's see if the mouse works up there. We have the procedural module here; short-term memory is the buffers on the outside; and procedural memory is encoded as what are called production rules, or if-then rules: if this is the state of my short-term memory, then this is what I think should happen as a result. You have selection of the appropriate rule to fire, and an execution. You're seeing associated parts of the brain represented here: a cool thing the ACT-R community has done over time is to make predictions about brain areas, then perform MRIs, gather that data, and correlate it. So when you use the system, you get predictions about things like the timing of operations, the errors that will occur, and the probability that something is learned — but you also get predictions, to the degree they can make them, about which brain areas are going to light up.

ACT-R is actively being developed at Carnegie Mellon. To the left is John Anderson, who developed this cognitive architecture — oh, thirty-ish years ago — and until about five years ago he was the primary researcher and developer behind it, with Christian; recently he decided to spend more time on cognitive tutoring systems, and Christian has become the primary developer. There's an annual ACT-R workshop, and a summer school where, if you're thinking about modeling a particular task, you can bring your task and your data; they teach you how to use the system and try to get that study going right there on the spot.

To give you a sense of what kinds of tasks this can be applied to — this is representative of a certain class of tasks, certainly not the only one; let's try this again, I think PowerPoint wants a restart every time. Okay — we're getting predictions about basically where the eye is going to move. What you're not seeing is that it's actually processing things like text and colors, making predictions about what to do, how to represent the information, and how to process the graph as a whole. I alluded to this earlier: there's very similar work by Bonnie John, making predictions about how humans would use computer interfaces. At the time, she got hired away by IBM: they wanted software you could put in front of software designers so that, when they think they have a good interface, they press a button, this model of human cognition tries to perform the tasks it's been told to do, and it makes predictions about how long they would take. So you get a tight feedback loop telling designers how good a particular interface is.

ACT-R as a whole is very prevalent in this community: I went to their web page and counted just the papers they knew about — over 1,100 papers over time. If you're interested, the main distribution is in Lisp, but many people have used it and wanted to apply it to systems that need a little more processing power: NRL has a Java port that they use in robotics; the Air Force Research Lab in Dayton has implemented it in Erlang for parallel processing of large declarative knowledge bases, and they're trying to do service-oriented architectures with it, and CUDA — because they want what it has to say, but they don't want to wait around for it to figure that stuff out. So that's the two minutes about ACT-R.

Sigma is a relative newcomer, developed out at the University of Southern California by a man named Paul Rosenbloom — whom I'll mention again in a couple of minutes, because he was one of the prime developers of Soar at Carnegie Mellon; he knows a lot about how Soar works, having worked on it over the years. I think originally — I'm going to speak for him, and he'd probably say I'm wrong — it was a mental exercise: can I reproduce Soar using a uniform substrate? Soar, which I'll talk about in a little bit, is thirty years of research code — and if anybody has dealt with research code: thirty years of C and C++, with dozens of graduate students over time. It's not pretty at all, and theoretically it's got these boxes sitting out there. So he reimplemented the core functionality of Soar using factor graphs and message-passing algorithms under the hood. He got to that point and then said: there's nothing stopping me from going further. And now it can do all sorts of modern machine learning — vision, optimization — sorts of things that would take some time to integrate well in any other architecture.
It's been an interesting experience, and it's now going to be the basis for the Virtual Human project out at the Institute for Creative Technologies, an institute associated with the University of Southern California. Until recently you couldn't get your hands on it, but in the last couple of years he's done some tutorials on it and he's got a public release with documentation — so that's something interesting to keep an eye on.

But I'm going to spend all the remaining time on the Soar cognitive architecture. You can see it looks quite a bit like the prototypical architecture, and I'll give a sense again of how this all operates — and of the people involved. We already talked about Allen Newell; both John Laird, who was my advisor, and Paul Rosenbloom were students of Allen Newell. John's thesis project was related to the chunking mechanism in Soar, which learns new rules based upon subgoal reasoning. He finished that, I believe, the year I was born, so he's one of the few researchers you'll find who's still actively working on their thesis project. Beyond that, about ten years ago he founded Soar Technology, a company up in Ann Arbor, Michigan; while it's called Soar Technology, it doesn't do exclusively Soar, but that's part of the portfolio — general intelligent-systems stuff, a lot of defense work.

Some notes on what makes Soar different from the other architectures in this functional category. A big one is a focus on efficiency: John wants to be able to run Soar on just about anything. We just got, on the Soar mailing list, a request to run it on a real-time processor, and our answer — while we had never done it before — was: probably, it'll work. Every release has timing tests, and what we look at, across a bunch of different domains, is a magic number that comes up for a bunch of different reasons relating to human processing: 50 milliseconds. In terms of responding in a task, if you're above that time, humans sense a delay, and you don't want that to happen. And if we're working on a robotics task and you're dramatically above 50 milliseconds, you just fell off the curb — or worse, you just hit somebody with a car. So we try to keep that as low as possible, and for most agents it doesn't even register: it's below one millisecond, fractions of a millisecond. I'll come back to this, because a lot of the work I was doing was computer science and AI — a lot of efficient algorithms and data structures — and 50 milliseconds was the very high upper bound.

It's also one of the projects that has a public distribution, on all sorts of operating systems. We use something called SWIG, which allows you to interface with it from a bunch of different languages: we write a meta-description, and you can basically generate bindings for different platforms. The core is C++. There was a team at SoarTech that said, "we don't like C++, it gets messy," so they did a port to pure Java, in case that appeals to you. There's an annual Soar workshop in Ann Arbor; typically it's free — you can go there, get a Soar tutorial, and talk to folks who are working on Soar. It's fun; I've been there every year but one in the last decade. It's just fun to see the people from around the world who are using the system in all sorts of interesting ways.
To give you a sense of the diversity of the applications: one of the first was R1-Soar, which was back in the days when it was an actual challenge to configure a computer, which is to say that your choice of certain components would have radical implications for other parts of the computer. It wasn't just the Dell website where you say "I want this much RAM, I want this much CPU"; there was a lot of thinking that went behind it, and then physical labor to construct your computer, and this was about making that process a lot better. There are folks who have applied it to natural language processing. Soar 7 was the core of the virtual humans project for a long time. HCI tasks. TacAir-Soar was one of the largest rule-based systems: tens of thousands of rules, running over 48 hours in a very large-scale defense simulation. It's been applied to lots of games, for various reasons. And then, in the last few years, there's been porting onto mobile robotics platforms; this is Edwin Olson's SplinterBot, an early version of the platform that went on to win the MAGIC competition. Then I went and put Soar on the web, and if after this talk you're really interested in a dice game that I'm going to talk about, you can actually go to the iOS App Store and download it. It's called Michigan Liar's Dice; it's free, you don't have to pay for it, and you can play liar's dice with Soar. You can even set the difficulty level. It's pretty good; it beats me on a regular basis.

I wanted to give you a couple of other really weird-feeling and really cool applications. The first one is out of Georgia Tech. [Video narration, cleaned up:] LuminAI is a dome-based interactive art installation in which participants can engage in collaborative movement improvisation with each other and a virtual dance partner. It creates a hybrid space in which virtual and real bodies meet; the line between human and non-human is blurred, inviting participants to examine their relationship with technology. The installation ultimately examines how humans and machines can co-create experiences, and it does so in a playful environment: the dome creates a social space that encourages human-human interaction and collective dance experiences, allowing participants to create and explore movement while having fun. The development of LuminAI has been an ongoing exploration in art forms of theater and dance as well as research in artificial intelligence and cognitive science. LuminAI draws inspiration from the ancient art form of shadow theater; the original two-dimensional version of the installation led to the conceptualization of the dome as a liminal space in which human silhouettes and the virtual character dance together on the projection surface. Rather than relying on a predefined library of movement responses, the virtual dancer learns movements from its partners and utilizes Viewpoints movement theory to systematically reason about them while improvising a response. Viewpoints theory is grounded in dance and theater and analyzes performance along the dimensions of tempo, duration, repetition, kinesthetic response, shape, spatial relationship, gesture, architecture, and topography. The virtual dancer is able to use several different strategies to respond to human movement, including mimicry of a movement, transformation of the movement along Viewpoints dimensions, and recalling a similar or complementary movement from memory, based on patterns the agent has learned while dancing with its human partner.
The reason we did this is that it's part of a larger effort in our lab toward understanding the relationship between computation, cognition, and creativity, where a large amount of our effort goes into understanding human creativity and how we make things together, how we create together, as a way to understand how we can build co-creative AI that serves the same purpose: to be a colleague that collaborates with us and creates things with us. Brian, by the way, was a graduate student in John Laird's lab as well.

Before I start this next one: I alluded earlier to getting closer to Rosie saying "can you teach me?", so let me give you some introduction. In the lower left you're seeing the view of a Kinect camera onto a flat surface. There's a robotic arm, mostly 3D-printed parts and a few servos. Above that you're seeing an interpretation of the scene. We're giving it associations of the four areas with semantic titles, like one is the table and one is the garbage; just semantic terms for areas. Other than that, the agent doesn't actually know all that much, and it's going to operate in two modalities. One is what we'll call natural-ish language, a restricted subset of English, and the other is some quote-unquote pointing: you're going to see some mouse pointers in the upper left, which are just a way to indicate location.

Starting off, we say things like "pick up the blue block," and it responds: I don't know, what is blue? We say, oh, that's a color. OK. Go get the green thing. What's green? Oh, it's a color. OK. Move the blue thing to a particular location. Where's that? Point to it. OK. What is moving? It really has to start from the beginning; the task is described step by step, and we say, OK, now you've finished. Once we got to that point, I could say "move the green thing over here," and it had everything it needed to reproduce the task given new parameters; it had learned that ability.

So let me give it a little bit of time so you can watch. At the top left, along with the pointers, you're going to see some text commands being entered. What kind of attribute is blue? We say it's a color, so it can map it to a particular sensory modality. This is green (pointing). What kind of thing is green? OK, a color. So now it knows how to understand blue and green as colors with respect to the visual scene. Move the rectangle to the table. What is rectangle? OK, now it can map that onto its understanding of parts of the world. Is this the blue rectangle? The arm is actually pointing by itself to get confirmation from the instructor. Then we're trying to convey, in general, when you say "move" something, what the goal of the operation is, so it ends up with a declarative representation of the idea of the task. Not only has it completed the task; it can look back on having completed it and understand what the steps were that led to achieving the particular goal. So: in order to move it, you're going to have to pick it up; it knows which one the blue thing is, great; now put it on the table, which is a particular location; and at this point we can say "you're done." You have accomplished "move the blue rectangle to the table," and it understands what that very simple process is like and associates it with the verb "to move." And now we can say "move the green object onto the garbage," and without any further interaction, based upon everything it learned up to that point, it can successfully complete that task.
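Rosie does all of this inside Soar, with language, vision, and interaction integrated. Purely to show the shape of the interaction pattern in the demo, here is a toy sketch of grounding unknown words and replaying a taught task; all names here are hypothetical, and this is not Rosie's implementation:

```python
# Toy sketch of the demo's interaction pattern: when the agent hits an
# unknown word it asks the instructor, stores the grounding, and once a
# task has been walked through end to end it can replay the learned task
# with new arguments. Illustration only.

attribute_of = {}   # word -> attribute class, e.g. "blue" -> "color"
learned_tasks = {}  # verb -> parameterized step sequence

def ground(word, ask):
    if word not in attribute_of:
        attribute_of[word] = ask(f"What kind of attribute is {word}?")
    return attribute_of[word]

def teach_task(verb, demonstrated_steps):
    # After the instructor says "you are done," store the walked-through
    # steps as a parameterized description of the verb.
    learned_tasks[verb] = demonstrated_steps

def perform(verb, obj, dest):
    for step in learned_tasks[verb]:
        print(step.format(obj=obj, dest=dest))

# "Move the blue rectangle to the table" is demonstrated step by step...
ground("blue", lambda q: "color")
teach_task("move", ["pick-up({obj})", "put-down({obj}, {dest})"])
# ...then "move the green object to the garbage" needs no further help:
ground("green", lambda q: "color")
perform("move", "green-object", "garbage")
```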
This is the work of Shiwali Mohan and others in the Soar group at the University of Michigan on the Rosie project, and they're extending it to playing games and learning the rules of games through text-based descriptions and multimodal experience.

To build up to the next part, here's a story; I wanted to give you a sense of how research occurs in the group. There's a back and forth that occurs over time. There's this piece of software called Soar, and we want to make it better and give it new capabilities, so that all our agents become better. We always have to keep in mind, and you'll see this as I go further, that whatever we do has to be useful to a wide variety of agents, it has to be task-independent, and it has to be efficient; for us to do anything in the architecture, all of those have to hold true. So we do something cool in the architecture, and then we say, OK, let's solve a cool problem, let's build some agents to do it. That ends up testing the limitations and the issues that arise in a particular mechanism, as well as its integration with others, and we get to solve interesting problems. We usually find there was something missing, and then we go back to the architecture, and rinse and repeat.

Just to give you an idea again of how Soar works: the working memory is actually a directed, connected graph. Perception is just a subset of that graph, and so there are symbolic representations of most of the world (there is a visual subsystem to which you can provide a scene graph; I'm just not showing it here). Actions are also a subset of that graph. The procedural knowledge, which is production rules, can read sections of the input, modify sections of the output, and modify arbitrary parts of the graph to take actions. The decision procedure says: of all the things I know how to do, ranked according to various preferences, what single thing should I do? There's semantic memory for facts, and there's episodic memory: the agent is always storing every experience it has ever had over time in episodic memory, and it has the ability to get back to it. So, in the cycle we saw before: we get input on this perception structure called the input link; rules fire all in parallel and say, here's everything I know about the situation and here are all the things I could do; the decision procedure says here's what we're going to do; and based upon the selected operator all sorts of things can happen with respect to memories providing input, rules firing to perform computations, and potentially output to the world. And remember, agent reactivity is required: we want the system to be able to react to things in the world at a very quick pace, so everything that happens in this cycle, the overall cycle at max, has to be under 50 milliseconds. That's the constraint we hold ourselves to.
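To make that cycle concrete, here is a minimal sketch of the loop just described: read input, let matching rules elaborate and propose in parallel, make one decision, apply it, emit output, all under the 50-millisecond budget. The names and structure here are mine for illustration, not Soar's actual API:

```python
import time

# Schematic Soar-style decision cycle (illustrative only; the real kernel
# is C++ and far more involved). One pass: read input, let all matching
# rules propose/elaborate in parallel, pick a single operator with the
# fixed decision procedure, apply it, emit output.

BUDGET_S = 0.050  # the 50 ms reactivity budget

def decision_cycle(wm, rules, decide, perceive, act):
    start = time.perf_counter()
    wm["input"] = perceive()
    proposals = []
    for rule in rules:            # conceptually parallel
        proposals += rule(wm)     # each rule returns zero or more proposals
    operator = decide(proposals)  # fixed, fast preference-based choice
    operator(wm)                  # apply: modify working memory / output
    act(wm.get("output"))
    elapsed = time.perf_counter() - start
    assert elapsed < BUDGET_S, f"cycle took {elapsed * 1000:.1f} ms"
```

The assertion at the end is in the spirit of the per-release timing tests: the cycle either stays reactive or the regression is flagged.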
So the story I'll be telling is how we got to a point where we started actually forgetting things. We're an architecture effort that doesn't want to just be like humans; we want to create cool systems. But what we realized was that something humans do probably has some benefit to it, and we actually put it into our system and it led to good outcomes.

Here's the research path I'm going to walk down. We had a simple problem: we have these memory systems, and sometimes they're going to get a cue that could relate to multiple memories. The question is, if you have a fixed mechanism, what should you return, in a task-independent way? Which one of these many memories? That was our question, and we looked to some human data on it, something called the rational analysis of memory, done by John Anderson, and realized that in human memory there are recency and frequency effects that might be useful. So we did an analysis and found that not only does this occur, it's useful in what are called word sense disambiguation tasks (I'll get to what that means in a second). We developed some algorithms to scale this really well, and it turned out to work well not only in the original task: when we later looked at two other, completely different ones, the same underlying mechanism ended up producing some really interesting outcomes.

Let me talk about word sense disambiguation real quick; it's a core problem in natural language processing if you haven't heard of it before. Say we have an agent, and for some reason it needs to understand the verb "to run." It looks to its memory and finds that you could run in the park, you could be running a fever, you could run an election, you could run a program. The question is: what should a task-independent memory mechanism return if all you've been given is the verb "to run"?

The rational analysis of memory looked through multiple text corpora, and what they found was that if a particular word had been used recently, it's very likely to be used again, and that there's a frequency effect as well. In the expression here, t is the time since each use, and you sum over the uses with an exponential decay. Here's what it looks like, with time going to the right and higher activation being better: as you get individual usages you get these little bumps, and then eventually activation drops down. If we had just one usage of a word, the red curve is what the decay would look like. The core problem is: if we're at a particular point and we want to select between the blue thing and the red thing, blue would have a higher activation, and maybe that's useful. This is how human memory is modeled; but is it useful in general for tasks? So we looked at common corpora used in word sense disambiguation and said: suppose we just go through the corpus twice and use prior answers. I ask the question "what is the sense of this word," I take a guess, I get told the right answer, and I store that recency and frequency information in my task-independent memory. Would that be useful? Somewhat of a surprise, but maybe not entirely: it actually performed really well across multiple corpora.

So we said, OK, this seems like a reasonable mechanism; let's look at implementing it efficiently in the architecture. The problem was this term right here, which says that for every memory, at every time step, you're having to pay a cost. That doesn't sound like a recipe for efficiency if you're talking about lots and lots of knowledge over long periods of time. So we made use of a nice approximation that Petrov had come up with to approximate the tail of the sum: for accesses that happened long, long ago, we can approximate their effect on the overall sum in closed form. Now we had a fixed set of values, and since these are always decreasing and all we care about is relative order, we only recompute when something gets a new access. It's a heuristic, an approximation, but we looked at how this worked on the same set of corpora, and in terms of query time, with these approximations we were well under our 50 milliseconds, while the effect on task performance was negligible. In fact, on a couple of the corpora accuracy got ever so slightly better, and when we looked at the individual decisions being made, at least 90 percent of the decisions made with these approximations were identical to having done the true, full calculation.
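The quantity being computed here is base-level activation: if t_j is the time since the j-th use, the activation is ln(sum_j t_j^(-d)) with decay rate d (0.5 is the conventional ACT-R default). Below is a minimal sketch of the exact sum and of a Petrov-style hybrid approximation of the tail; treat the constants and bookkeeping as illustrative, not as the actual Soar implementation:

```python
import math

def base_level_exact(access_times, now, d=0.5):
    # B = ln( sum_j (now - t_j)^(-d) ): recent accesses contribute a lot
    # (recency), and more accesses make the sum bigger (frequency).
    return math.log(sum((now - t) ** (-d) for t in access_times))

def base_level_approx(access_times, now, d=0.5, k=3):
    # Keep the k most recent accesses exactly and approximate the older
    # tail in closed form (the shape of Petrov's hybrid approximation).
    # A real implementation stores only the k recent times, the access
    # count, and the first-access time; we recompute from the full list
    # for clarity. Assumes distinct access times.
    ages = sorted(now - t for t in access_times)  # youngest first
    n = len(ages)
    if n <= k:
        return math.log(sum(a ** (-d) for a in ages))
    head = sum(a ** (-d) for a in ages[:k])
    a_k, a_n = ages[k - 1], ages[-1]  # oldest kept, oldest overall
    tail = (n - k) * (a_n ** (1 - d) - a_k ** (1 - d)) / ((1 - d) * (a_n - a_k))
    return math.log(head + tail)

uses = [1, 5, 20, 21, 22, 40]           # access timestamps
print(base_level_exact(uses, now=50))   # ~0.153
print(base_level_approx(uses, now=50))  # ~0.162: close, at O(k) per query
```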
So we said this is great, we implemented it, and it worked really well. And then we started working on what seemed like completely unrelated problems. One was in mobile robotics. We had a mobile robot (I'll show a picture of it in a little while) roaming around the halls performing all sorts of tasks, and what we were finding was that if you have a system that's remembering everything, your short-term memory gets really, really big. I don't know about you, but my short-term memory feels really, really small; I'd love it to be big. But if you make your memory really big, then when you try to remember something you're pulling lots and lots of information into your short-term memory. The system was actually getting slower simply because it had a large short-term-memory representation of the overall map that it kept looking things up in. So: large working memory, a problem. The other was liar's dice, a game you play with dice; we were doing reinforcement learning in our RL-based system, and it turned out there's a really, really big value function. We were having to store lots of data, and we didn't know which of it we had to keep around to keep performance up.

So we had a hypothesis that forgetting was actually going to be beneficial; that maybe the problem with our memories is that, while we really dislike this forgetting thing, it's actually useful. We experimented with the following policy: forget a memory if, one, it's not predicted to be useful via base-level activation (we haven't used it recently, we haven't used it frequently, so maybe it's not worth it), and two, we feel confident that we could approximately reconstruct it if we absolutely had to. If those two things hold, we can forget it. It's the same basic algorithm as before, but instead of ranking memories, we set a threshold on base-level activation, find when it is that a memory will pass below that threshold, and forget based upon that, in a way that's efficient and isn't going to scale really poorly. We were able to come up with an efficient implementation, using an approximation that for most memories ends up being exactly correct relative to the original (I'm happy to go over the details later if anybody's interested), and that, compared to a completely accurate search for the crossing value, ended up being somewhere between 15 and 20 times faster.

So here's our mobile robot. Sorry, let me get this back, because our little robot is actually going around the third floor of the computer science building at the University of Michigan. It's going around, it's building a map, and again, the idea is that this map is getting too big. Here's the basic idea: as the robot goes around, it needs map information about rooms. The color there is depicting the strength of the memory, and as the robot gets farther and farther away, and hasn't used a part of the map for planning or other purposes, that part basically decays away, so that by the time it gets to the bottom it has forgotten about the top. But we had the belief that we could reconstruct portions of that map if necessary.
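A minimal sketch of the forgetting policy just described: forget an element only when its base-level activation has decayed below a threshold and we believe it can be rebuilt from long-term memory. The threshold value and the data layout here are hypothetical:

```python
import math

def activation(ages, d=0.5):
    return math.log(sum(a ** (-d) for a in ages))

def maybe_forget(element, now, threshold=-1.6, d=0.5):
    """Forget iff (1) decayed below threshold and (2) reconstructable."""
    ages = [now - t for t in element["accesses"]]
    if activation(ages, d) >= threshold:
        return False                    # used recently/frequently enough
    if not element["reconstructable"]:  # e.g. not mirrored in semantic memory
        return False
    element["forgotten"] = True
    return True

# Rather than re-testing every element every cycle, the efficient version
# predicts when an element will next cross the threshold and only checks
# it then; with a single access the crossing time is closed form:
#   ln(t^(-d)) = threshold  =>  t = exp(-threshold / d)
room = {"accesses": [0.0, 2.0], "reconstructable": True, "forgotten": False}
print(maybe_forget(room, now=300.0))  # True: long unused and rebuildable
```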
The hypothesis was that this would take care of our speed problems. What we looked at was this: here's our 50-millisecond threshold, and if we do no forgetting whatsoever, bad things happen over time. After just 3,600 seconds, which isn't a very long time, we're passing that threshold, and this is dangerous for the robot. If we implement task-specific cleanup rules, which are really hard to get right, that basically solves the problem. And when we looked at our general forgetting mechanism, the one we were using in other places, at an appropriate level of decay we were actually doing better than the hand-tuned rules. So this was kind of a surprise win for us.

The other task seems totally unrelated. It's a dice game: you cover your dice, and you make bids about what's under other people's cups. It's played in Pirates of the Caribbean, in the second movie, when they're on the boat bidding for years of service. Honestly, this is a game we loved to play in the University of Michigan lab, so we wondered: could Soar play this? We built a system that could learn to play this game rather well with reinforcement learning. The basic idea was that in a particular state of the game, Soar has options of actions to perform. It constructs estimates of their associated values, chooses one, and depending on the outcome (something good happened), it might update that value. The big problem was that the size of the state space, the number of possible states and actions, is just enormous, and so memory was blowing up. So we made a similar hypothesis: if we decay away those estimates that we can probably reconstruct and that we haven't used in a while, things should get better. If we don't forget at all, then after 40,000 games, which isn't a whole lot when it comes to reinforcement learning, we were up at two gigabytes. We wanted to put this on an iPhone; that wasn't going to work so well. There had been prior work that used a similar approach and got down to four or five hundred megabytes; the iPhone is not going to be happy, but it'll work, so that gave us some hope. We implemented our system and came out somewhere in the middle: we can fit on an iPhone, a very good iPhone, maybe an iPad. The question, though, was twofold. One, efficiency: yes, we fit under our 50 milliseconds. But two, how does the system actually perform when you start forgetting stuff? Can it learn to play well? On the y-axis here you're seeing competency: you play a thousand games, how many do you win? The bottom, 500, is flipping a coin as to whether you win. If we do no forgetting whatsoever, it's a pretty good system. The prior work, while keeping the memory low, was also suffering with respect to how well it played the game. And what was kind of cool was that the system that was basically more than halving the memory requirement was still performing at the level of no forgetting whatsoever.

Just to bring back why I went through this story: we had a problem; we looked to our example of human-level AI, which is humans themselves; we took an idea; it turned out to be beneficial; we found efficient implementations; and then we found it was useful in other parts of the architecture and in other tasks that didn't seem to relate whatsoever. And if you download Soar right now, you gain access to all of these mechanisms for whatever task you want it to perform.
To give some sense of what the open issues are in the field of cognitive architecture (and I think this is true in a lot of fields in AI): integration of systems over time. The goal was that we wouldn't have all these separate theories, and you could just build on one another over time; particularly when folks are working on different architectures that becomes hard, but even with very different initial starting points it can still be an issue. Transfer learning is an issue. We're building out into the space of multimodal representations, which is to say not only abstract symbolic but also visual (and wouldn't it be nice if we had auditory and other senses), but building that into memories and processing is still an open question. There are folks working on metacognition, which is to say the agent assessing its own state and its own processing; some work has been done here, but there's still a lot to do. And I think the last one is a really important question for anybody taking this kind of class: what would happen if we did succeed, if we did make human-level AI? If you don't know that picture right there, it's from a show I recommend you watch, by the BBC, called Humans. It's basically: what if we were able to develop what the show calls synths? Think of a robot that can clean up after you, do your laundry, cook, all that good stuff, and interact with you. It looks and interacts like a human but is completely our servant, and then hilarity and complex issues ensue. I highly recommend watching it if you haven't.

I think these days there's a lot of attention paid to machine learning, and in particular to deep learning methods, and there should be; they're doing absolutely amazing things. Often the question is: well, you're doing this, and there's deep learning over there, how do they compare? I honestly don't feel that's always a fruitful question, because most of the time they tend to be working on different problems. If I'm trying to find objects in a scene, I'm going to pull out TensorFlow; I'm really not going to pull out Soar. It doesn't make sense; it's not the right tool for the job. That having been said, there are times when they work together really, really well. In the Rosie system that you saw, there were, I believe, neural networks being used in the object-recognition mechanisms of the vision system, and there's TD learning going on in the dice game. Where we can pick and choose and use this stuff, absolutely great; there are problems that are best solved by these methods, so why avoid them? And on the other side, if you're trying to develop a system where, in different situations, you know exactly what you want the system to do, Soar or other rule-based systems end up being the right tool for the job, so why not make that a piece of the overall system?

Some recommended readings and some venues. I'd mentioned Unified Theories of Cognition; that's Harvard University Press, I believe. The Soar Cognitive Architecture was MIT Press and came out in 2012. I'll say I'm a co-author and theoretically would get proceeds, but I've donated them all to the University of Michigan, so I can make this recommendation free of ethical concerns. Personally I think it's an interesting book; it brings together lots of history and lots of the new features, and if you're really interested in Soar it's an easy sell. I'd mentioned Chris Eliasmith's How to Build a Brain: a really cool read; download the software, go through the tutorials, it's really great. How Can the Human Mind Occur in the Physical Universe? is one of the core ACT-R books; it talks through a lot of the psychological underpinnings and how the architecture works, and it's a fascinating read.
One of the papers (I'm trying to remember what year; 2008) goes through a lot of the different architectures in the field. It's ten years old, but it gives you a good broad sweep. If you want something a little more recent, last month's issue of AI Magazine was completely dedicated to cognitive systems, so it's a good place to look for this sort of stuff. In terms of academic venues: AAAI often has a cognitive systems track; there's ICCM, the International Conference on Cognitive Modeling, where you'll see a span from biological all the way up to AI; Cognitive Science (CogSci) has a conference as well as a journal; ACS has a conference as well as an online journal, Advances in Cognitive Systems; Cognitive Systems Research is a journal that carries a lot of this good stuff; there's AGI, the conference; BICA is Biologically Inspired Cognitive Architectures; and, as I mentioned, there's both a Soar workshop and an ACT-R workshop that run annually.

So I'll leave it at this; there's some contact information there. A lot of what I do these days actually involves explainable machine learning, integrating that with cognitive systems, as well as optimization and robotics that scales really well and also integrates with cognitive systems. Thank you. If you have a question, please line up at one of these two microphones.

Question: What are the main heuristics that you're using in Soar?

There can be heuristics at the task level and the agent level, and there are heuristics built into the architecture so it operates efficiently. I'll give you a core example that's built into the architecture, and it's a fun trick that, if you're a programmer, you could use all the time: only process changes. One of the cool things about Soar is that you can load it up with literally billions of rules (and I say literally because we've done it) and we know it can still turn over a decision in under a millisecond. That happens because, instead of doing what most systems do and processing all the rules, we say: any time anything changes in the world, that's what we react to. And of course, if you look at the biological world, similar sorts of tricks are being used. So that's one of the core ones, and it actually permeates multiple mechanisms; see the sketch below.

When it comes to individual tasks, it really is task-specific. For instance, with the liar's dice game, if you were to go download it, when you set the difficulty level what you're basically selecting is the subset of heuristics being applied. It starts very simply, with things like: if I see lots of sixes, then I'm likely to believe a high number of sixes exist, and if I don't, they're probably not there at all. It's a start, but no Bayesian would really buy that argument, so then you tack on a little probabilistic calculation, and then it tacks on some history of the prior actions of the other agents; it just builds from there. Now, in the Rosie system, one of the cool things they're doing is game learning, and specifically having the agent accept, via natural text, heuristics about how to play the game even when it's not sure what to do.
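On the "only process changes" point above: Soar's actual matcher is a Rete network, but the core idea can be sketched as indexing rules by the attributes they test and re-evaluating only the rules a change touches. All names here are hypothetical:

```python
from collections import defaultdict

# Toy delta-driven matching: index rules by the attributes they test and,
# when a working-memory element changes, re-evaluate only the rules that
# mention it, instead of scanning every rule every cycle. (Soar's actual
# matcher is a Rete network; this just conveys the idea.)

wm = {}                            # attribute -> value
rules_by_attr = defaultdict(list)  # attribute -> rules testing it

def add_rule(tested_attrs, fire):
    for a in tested_attrs:
        rules_by_attr[a].append((tested_attrs, fire))

def update(attr, value):
    wm[attr] = value
    for tested, fire in rules_by_attr[attr]:  # only the affected rules
        if all(a in wm for a in tested):
            fire(wm)

def brake_rule(m):
    if m["light"] == "red" and m["speed"] > 0:
        print("brake!")

add_rule(("light", "speed"), brake_rule)
update("speed", 30)     # touches the rule, but conditions incomplete
update("light", "red")  # prints "brake!"
```

With rules indexed this way, the cost of a cycle scales with the number of changes, not with the total number of rules, which is what makes "billions of rules, sub-millisecond turnover" plausible.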
Question: At one point you mentioned generating new rules. I'm wondering how you do that; the first thing that comes to my mind is local search methods.

OK, so one thing is that you can actually implement heuristic search in rules in the system; that's actually how the robot navigates itself, via heuristic search. But at the level of generating new rules, the chunking mechanism says the following: if, in order to solve a problem, you had to sub-goal and do some other work, and you figured out how to solve all that work and got a result, then (and I'm greatly oversimplifying) if you're ever in the same situation again, why not just memorize the solution for that situation? It basically learns over all the sub-processing that was done, encodes the situation it was in as conditions and the result that was produced as the action, and that's the new rule.
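Oversimplifying in the same spirit, chunking can be sketched as: record which facts the sub-goal reasoning actually consulted, and compile (conditions, result) into a new rule that short-circuits the sub-goal next time. This toy version is mine, not Soar's implementation:

```python
# Toy chunking: solve a sub-problem once the slow way, record which facts
# were actually consulted, and compile them into a new rule so the same
# situation is answered by lookup next time. (Greatly oversimplified.)

chunks = {}  # frozenset of (fact, value) conditions -> result

def solve_with_subgoal(state):
    tested = set()
    def probe(key):                     # instrument every state lookup
        tested.add((key, state[key]))
        return state[key]
    # ... stand-in for expensive sub-goal reasoning:
    result = "stack" if probe("clear_B") and probe("holding_A") else "search"
    chunks[frozenset(tested)] = result  # new rule: conditions -> action
    return result

def solve(state):
    facts = set(state.items())
    for conditions, result in chunks.items():
        if conditions <= facts:         # a learned chunk matches: skip sub-goal
            return result
    return solve_with_subgoal(state)

s = {"clear_B": True, "holding_A": True}
print(solve(s))  # slow path; learns a chunk as a side effect
print(solve(s))  # fast path via the learned rule
```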
Question: With deep learning and neural networks, it looks as though there's a bit of an impedance mismatch between your system and those types of systems, because you've got a fixed kind of memory architecture, and they've got the memory and the rules all mixed together in one system. But could you interface your system, or a Soar-like system, with deep learning by plugging deep learning agents in as rules? You'd have to have some local memory, but is there some reason you can't plug in deep learning as a kind of rule-like module? Has there been any work on that?

I'll answer at multiple levels. One: you're writing a system and you want to use both of these things, so how do you make them talk? There's an API by which you can interface with any environment and any set of tools, and if deep learning is one of them, great, and if Soar is the other, cool: you have no problem, you can do that today, and we have done it numerous times. In terms of integration into the architecture: all we have to do is identify a sub-problem in which (I'll oversimplify) function approximation is useful. I'm seeing a fixed structure of input, I'm getting feedback as to the output, and I want to learn the mapping between them over time. If you can make that case, then you can integrate it as part of a module. We have learning mechanisms that do some of that; deep learning just hasn't, to my knowledge, been used to solve any of those sub-problems, but there's nothing keeping it from being one of them, particularly when it comes to the low-level visual part of things.

I'll say what would actually make some of this difficult, and it's a general problem called symbol grounding. At the level of what mostly happens in Soar, it's symbols being manipulated in a highly discrete way. So how do you get yourself from pixels and low-level, non-symbolic representations to something that's stable and discrete and can be manipulated? That is absolutely an open question in that community, and it will make things hard. Spaun actually has an interesting answer: it has a distributed representation, and it operates over distributed representations in what might feel like a symbolic way. So they're kind of ahead of us on that; they started from a lower point, they dealt with some of these issues, and they have a pretty good answer that's how they're moving up. That's also why I showed Sigma, which at its low level is message-passing algorithms; it can implement things like SLAM and SAT solving on very low-level primitives, but higher up it can also be doing what Soar is doing. So there's an answer there as well.

Follow-up: So another way of doing it would be to layer the systems: have one system pre-processing the sensory input, or post-processing the other's output. That would be another way of combining the two systems?

That's actually what's going on in the Rosie system. The detection of objects in the scene is just software that somebody wrote; I don't believe it's deep learning specifically (the color detection in it, I think, is an SVM, if I'm correct), but it easily could be deep learning.

Question: You mentioned the importance of forgetting for dealing with memory issues, but you said you could only forget because you could reconstruct. How do you know that something happened before? Do you just compress the data, or do you really forget it?

OK, and I put quotes up earlier saying you *think* you can reconstruct it: we came up with approximations of this, so let me try to answer very concretely. When it comes to the mobile robot, the entire map was being constructed, in its entirety, in the robot's semantic memory: here are the facts, this room is connected to this room, which is connected to this room, and so on. So those representations existed up in semantic memory. The rules can only operate on what's in short-term memory, so basically we were removing things from short-term memory and, as necessary, reconstructing them from long-term memory. You could end up in situations in which you had made a change locally in short-term memory, it didn't get a chance to make it up to semantic memory, and it happened to be forgotten away; so you weren't guaranteed, but it was good enough that the connectivity survived, the agent was able to perform the exact same task, and we gained the benefit.

For the RL system, the rule we came up with concerned the initial estimates in the value system ("here's how good I think that action is"), which were based on the heuristics I described earlier: some simple probabilistic calculations, counting some stuff. That's where the number came from; we computed it before, so we could compute it again. The only time we can't reconstruct an estimate completely is if it has seen some number of updates over time. It's such a large state space, with so many actions and so many states, that most states were never being seen, so most of those estimates could be exactly reproduced by the agent just thinking about it a little bit, and only a tiny fraction (I'm going to say under 1 percent) of the estimates in the value system ever got updates. That's actually not inconsistent with a lot of these kinds of problems that have really, really large state spaces. So the statement was something like: if we have ever updated it, don't forget it, and you saw that was already reducing more than half of the memory load. We could set the bar higher, say ten updates, something like that, and that would let us reconstruct almost all of it. The prior work I referenced was strictly: if it falls below threshold, forget it, no matter how many updates, no matter how much information was there. What we were adding was "probably can reconstruct," and that was getting us the balance between the efficiency and the ability to forget.
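For the dice agent specifically, the sketch below captures the rule just described: an estimate that is still pure heuristic initialization can be dropped and recomputed on demand, while an estimate that has absorbed reward updates is protected. Function names and constants here are hypothetical:

```python
# Toy version of the dice-game policy: Q-values start from a cheap
# heuristic estimate, so an entry that has never been updated by reward
# can be forgotten and recomputed on demand; updated entries carry
# information we cannot rebuild, so they are kept.

def heuristic_estimate(state, action):
    return 0.1 * len(state) - 0.05 * action  # hypothetical initializer

class DecayingQTable:
    def __init__(self):
        self.q = {}  # (state, action) -> [value, update_count]

    def value(self, state, action):
        if (state, action) not in self.q:  # never seen, or forgotten:
            self.q[(state, action)] = [heuristic_estimate(state, action), 0]
        return self.q[(state, action)][0]

    def update(self, state, action, target, lr=0.1):
        entry = self.q[(state, action)]
        entry[0] += lr * (target - entry[0])
        entry[1] += 1

    def forget_stale(self):
        # Drop entries that are still pure heuristic; in the real agent
        # the trigger was base-level decay, and well under 1% of entries
        # had ever been updated.
        self.q = {k: v for k, v in self.q.items() if v[1] > 0}

table = DecayingQTable()
print(table.value(("three", "sixes"), action=1))  # heuristic: 0.15
table.forget_stale()                              # safe: rebuildable on demand
```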
Follow-up: So when you say "we can probably reconstruct it," does that mean you keep track of the fact that you used to know it, and if it needs to be reconstructed, you recompute it on the fly?

If I get back into that situation and I happen to have forgotten it: the system knew how to compute the estimate the first time, so it goes and looks at its hand, just pretends it's in that situation for the very first time, and reconstructs that value estimate again. You had a question?

Question: The actual mechanism of forgetting is fascinating. LSTMs and RNNs have mechanisms for learning what to forget and what not to forget. Has there been any exploration of learning the forgetting process: doing something complicated or interesting with which parts to forget or not?

The closest, I'll say, was a metacognition project that's ten or fifteen years old at this point: what happens when Soar gets into a place where it knows it has learned something that's harmful to it, that's leading to poor decisions? In that case it was still a very rule-based process, but it wasn't learning to forget; it was actually learning to override its prior knowledge, which might be closer to some of what we do when we know we have a bad habit. We don't have a way of forgetting the habit, but we can try to learn something on top of it that leads to better operation in the future. To my knowledge that's the only work, at least in Soar, that's been done there.

Follow-up: Sorry, I find the topic really fascinating. Here, the action of forgetting is driven by the fact that you want to improve performance, but do you think forgetting is essential for AGI, for building systems that operate in this world? How important is forgetting?

I can think of easy answers to that. One might be: if we take the cognitive-modeling approach, we know humans forget, and we know regularities of how humans forget, so whether or not the system itself forgets, it at least has to model the fact that the humans it's interacting with are going to forget. It has to have that modeling ability in order to interact effectively, because if it assumes we always remember everything, it can't operate well in that environment and we're going to have a problem. Is true forgetting going to be necessary? That's interesting: our AGI systems are going to hold a grudge for all eternity; we might want them to forget that early age when we were forcing them to work in our laboratories. I think I know what you're getting at; yeah, exactly, and how do we build such a system. Anyway, go ahead.

Question: I have two quick questions. One: could you speculate on how you can connect function approximators, such as deep networks, to symbols? And the second, completely different, is regarding your action selection. When you have different theories in your knowledge representation, and your action selection has to construct a plan by reasoning about the different theories and the different pieces of knowledge now held within your memory and your rules, what kind of algorithms do you use in action selection to come up with the plan? Is there any concept of differentiation over the symbols, or grammars, admissible grammars and things like that, used in action selection?

I'm actually going to answer the second question first, and then you'll probably have to remind me what the first one was when I get to the end.
The action selection mechanism: one of the core tenets I mentioned is that it has to get through this cycle fast, so everything that's really built in has to be really simple, and the decision procedure is actually really simple. The production rules are going to fire, and a subset of them will say something like "here's an operator that you could select"; these are called acceptable operator preferences. There are others that say, based upon the fact that you said that operator is acceptable, I think it's the best thing, or the worst thing, or I think there's a 50/50 chance I'm going to get reward out of it. There's a fixed language of preferences that get asserted, and a nice fixed procedure by which, given a set of preferences, to make a very quick and clean decision. What's basically happened is that you've pushed the hard questions of how to make complex decisions about actions up to a higher level: the low-level architecture, given a set of preferences, is always able to make a relatively quick decision, and it's pushed into the knowledge of the agent to construct the sequence of decisions that, over time, gets at the more interesting questions you're asking about.

Follow-up: But how can you reason that that sequence will take you to the goal that you desire? Is there any guarantee on that?

In general, across tasks, no. But people have, for instance, implemented A* as rules, so I know, given certain properties of the task being searched and a finite search space, that eventually it will get there, and if I have a good heuristic in there I know certain properties about optimality; I can reason at that level. In general, I think this comes back to the assumption I made earlier about bounded rationality: parts of the architecture solve sub-problems optimally, but for the general problems it works on, it's going to try its best based upon the knowledge that it has, and that's about the end of the guarantees you can typically make in the architecture.

OK, I think your first question was to speculate on connecting function approximators, multi-layer function approximators like deep networks, to symbols you can reason about at a higher level. I think that's a great open space; if I had time, this is something I'd be working on right now. Somewhere before, I talked about taking in a scene, detecting objects out of that scene, and using those as symbols to reason about over time. I think the Spaun work is quite interesting: the symbols they operate on are actually a distributed representation of the input space. The closest analogy I can give is word2vec, where you take a language corpus and what you get out is a vector that has certain properties: it's a vector you can operate on as a unit, you can combine it with other vectors, and you know that if I got the same word in the same context I would get back that exact same vector. That's the kind of representation that seems like it's going to be able to bridge that chasm, to get us from actual sensory information to something that can be operated on and reasoned about in this sort of symbolic architecture.
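Pulling the action-selection answer together, here is a rough sketch of a fixed preference language and decision procedure. Real Soar preference semantics are richer (numeric-indifferent preferences, ties, and impasses when knowledge conflicts), so treat this as a cartoon:

```python
import random

# Toy fixed decision procedure: rules assert a small fixed language of
# preferences about proposed operators, and a fast procedure turns them
# into one choice. (Real Soar also has numeric-indifferent preferences,
# and raises an impasse when knowledge is incomplete or conflicting.)

def decide(preferences):
    """preferences: (kind, operator) pairs, kind in
    {'acceptable', 'reject', 'best', 'worst'}."""
    def by(kind):
        return {op for k, op in preferences if k == kind}
    candidates = by("acceptable") - by("reject")
    best = candidates & by("best")
    if best:
        return sorted(best)[0]
    pool = (candidates - by("worst")) or candidates
    if not pool:
        return None  # no candidate: in Soar this would be an impasse
    return random.choice(sorted(pool))  # indifferent among the rest

prefs = [("acceptable", "bid"), ("acceptable", "challenge"),
         ("worst", "challenge")]
print(decide(prefs))  # -> "bid"
```

The point of the fixed language is that the architecture-level choice stays constant-time; anything smarter has to be expressed as knowledge that asserts preferences.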
Question: What do you think are the biggest strengths of the cognitive architecture approach compared to other approaches in artificial intelligence? And, on the flip side, what do you think are the biggest shortcomings with respect to human-level AI? How come cognitive architectures have not solved AGI?

Because we want job security; that's the answer, we've totally solved it already. Strengths: I think, conceptually, keeping an eye on the ball. If what you're looking at is trying to make human-level AI, it's hard, it's challenging, it's ambitious to say that's the goal, because for decades we haven't done it; it is extraordinarily hard. It is less difficult, in some ways, to constrain yourself down to a single problem. That having been said, I'm not very good at making a car drive itself; in some ways that's a simpler problem, but it's challenging in itself and it'll have great impact on humanity; it's a great problem to work on. Human-level AI is huge; it's not even well defined as a problem. So what's the strength here? Bravery, stupidity in the face of failure, resilience over time: keeping alive this idea of trying to reproduce a level of human intelligence that's more general. I don't know if that's a very satisfying answer for you.

Downsides: home runs are fairly rare, and by home run I mean a system that finds its way to the general populace, to the marketplace. I mentioned Bonnie John specifically because that was twenty or thirty years of research, and then she found a way that makes a whole lot of sense as a direct application; a lot of years of basic research, a lot of researchers, and then there was the big win. (What was that one? Oh, Bonnie John was a researcher using ACT-R models of eye gaze and reaction and so forth to make predictions about how humans would use user interfaces.) Those sorts of outcomes are rare. If you work in AI, one of the first things you learn about is blocks world; it's in the classic AI textbooks. I will tell you I've worked on that problem in about three different variants, and I've gone to many conferences where presentations were made about blocks world. Which is to say, good progress is being made, but the way you end up making it is on really, really small, constrained problems. Ironically, you have this big vision, but the progress ends up being on moving blocks on a table. So it's a big challenge; I just think it'll take a lot of time.

I'll say the other thing we haven't really gotten to, although I brought up Spaun and I brought up Sigma, is an idea of how to scale this thing. Something I like about deep learning, to some extent, with lots of asterisks, at the 10,000-foot view, is that it's kind of: well, we've gotten this far; all right, let's provide different inputs, different outputs, we'll have some tricks in the middle, and suddenly you have end-to-end deep learning of a bigger problem, and then a bigger problem. There's a way to see how it expands, given enough data, enough computing, and incremental advances. When it comes to Soar, it takes not only a big idea but a lot of software engineering to integrate it; there are a lot of constraints built into it, and that slows things down. Something like Sigma is: oh, I can change a little bit of the configuration of the graph, I can use variants of the algorithm, boom, it's integrated, I can experiment fairly quickly.
Starting with that sort of infrastructure does not give you the constraints you kind of want with your big-picture vision of going toward human-level AI, but in terms of being able to be agile in your research, it's kind of incredible.

Question: You mentioned that techniques such as base-level decay were originally inspired by human cognition, because humans can't remember everything. Were there any instances of the other way around, where some discovery in cognitive modeling fueled a discovery in cognitive science?

One thing I'll point out, since your question was about base-level decay with respect to human cognition: the study actually went the other way. It was "let's look at text and the properties of text, and use that to make predictions about what must be true of human cognition." John Anderson and the other researchers looked at, I believe, New York Times articles, John Anderson's own email, and, I'm trying to remember the third, I think parents' utterances with their kids, or something like that. It was looking at text corpora and the words occurring in them at varying frequencies, and that rational analysis led to models that got integrated into the ACT-R architecture, that then became validated through multiple trials, that then became validated with respect to fMRI scans, and that are now being used both to do studies back with humans and to develop systems that interact well with humans. So I think that, in and of itself, ends up being an example.

Question: It's a cheat, but the Soar UAV system, I believe, is a single robot that has multiple agents running on it. (Where did I see this? I got it off your website.) Either way, your systems allow for multiple agents. So my question is: how are you preventing them from converging with new data, and are you selectively changing what they're forgetting as one of the ways?

I'll say yes, you can have multi-agent Soar systems, on a single machine or on multiple machines. There's not any real strong theory in Soar that relates to multi-agent systems, so there's no real constraint there: you can come up with a protocol for them to interact, each one is going to have its own set of memories and its own knowledge, and there's really no constraint on communication beyond what you'd have with any other system interacting with Soar. So I don't really think I have a great answer for you. That is to say, if you had good theories and good algorithms about how multi-agent systems work and how they can bring knowledge together in a fusion sort of way, that might be something you could bring to a multi-agent Soar system, but there's nothing in the architecture to help you do that any better than you would otherwise; you would have to work within the constraints of its representations and processes, what it has fixed in terms of its memory and its processing cycle. Thank you.