The Aquarius Cognitive Engine forms appropriate decisions by integrating flexible goal settings and learned, abstracted concepts through a modular network
Just as has been observed in animal models in the lab, and as has been implemented in reinforcement learning (RL) models, the Cognitive Engine can associate value levels with specific objects. When Binny sees an object that he has eaten and from which he has experienced a reward numerous times, a part of his "brain" will light up, signaling to Binny a potentially rewarding event.
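To make that concrete, here is a minimal sketch of the general idea: a running-average value estimate nudged by reward prediction error, which is the standard RL picture. The object names, learning rate, and class structure are purely illustrative and are not the Engine's actual code.

```python
# Minimal sketch: learning a value signal for objects from experienced rewards.
# The update rule is a standard RL-style running average; the object names and
# learning rate are illustrative assumptions, not Aquarius internals.

class ObjectValueMemory:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.values = {}          # object label -> learned value estimate

    def observe_reward(self, obj, reward):
        """Nudge the stored value toward the reward actually experienced."""
        old = self.values.get(obj, 0.0)
        self.values[obj] = old + self.learning_rate * (reward - old)

    def anticipated_value(self, obj):
        """The 'light up' signal: how rewarding this object has proven to be."""
        return self.values.get(obj, 0.0)


memory = ObjectValueMemory()
for _ in range(20):                          # many repeated rewarding encounters
    memory.observe_reward("flower", reward=1.0)
print(memory.anticipated_value("flower"))    # close to 1.0 -> potential reward signal
```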
When Binny isn't stressing about finding food, water, or someone to connect with socially, he will engage in exploratory "play". During play, Binny will approach, pick up, and manipulate various objects, including putting them in his mouth (just as babies do). Over time and after many repetitions, the Cognitive Engine allows Binny to form specific sequence chunks that can be called up when he wants to bring about a certain outcome (e.g., walking over to, picking up, and eating a flower when hungry).
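One way to picture that chunking process is a store that promotes an action sequence to a reusable chunk once it has produced the same outcome often enough. The sketch below assumes exactly that; the promotion threshold and action names are invented for the example.

```python
# Sketch: promoting repeated action sequences to reusable "chunks".
# The promotion threshold and the action names are illustrative assumptions.

from collections import defaultdict

class ChunkLibrary:
    def __init__(self, promotion_threshold=10):
        self.counts = defaultdict(int)       # (outcome, sequence) -> times observed
        self.chunks = {}                     # outcome -> cached sequence
        self.promotion_threshold = promotion_threshold

    def record_episode(self, sequence, outcome):
        """Count how often a sequence of actions produced a given outcome."""
        key = (outcome, tuple(sequence))
        self.counts[key] += 1
        if self.counts[key] >= self.promotion_threshold:
            self.chunks[outcome] = tuple(sequence)   # promote to a callable chunk

    def recall(self, desired_outcome):
        """Call up a chunk when the agent wants a certain outcome."""
        return self.chunks.get(desired_outcome)


library = ChunkLibrary()
for _ in range(12):
    library.record_episode(["walk_to_flower", "pick_up", "eat"], outcome="sated")
print(library.recall("sated"))   # ('walk_to_flower', 'pick_up', 'eat')
```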
Certain modules of the Cognitive Engine are designed to respond when expected sequences go awry. This lets the system emphasize the cues surrounding an unexpected event, so that heightened learning can happen later on, during sleep or a default mode in which motor sequences can be reconfigured.
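That description reads like surprise-gated learning: a large prediction error tags the surrounding cues for extra offline replay. Here is a rough sketch under that assumption; the threshold and priority scheme are mine, not the Engine's.

```python
# Sketch: tagging context cues with a surprise signal so they get extra weight
# during later offline replay. The threshold and weighting are assumptions.

class SurpriseTaggedReplay:
    def __init__(self, surprise_threshold=0.5):
        self.buffer = []                     # (context_cues, priority)
        self.surprise_threshold = surprise_threshold

    def on_outcome(self, expected, actual, context_cues):
        surprise = abs(actual - expected)    # prediction error as "surprise"
        if surprise > self.surprise_threshold:
            # Unexpected event: store surrounding cues with high replay priority.
            self.buffer.append((context_cues, surprise))

    def replay_order(self):
        """During 'sleep'/default mode, revisit the most surprising episodes first."""
        return sorted(self.buffer, key=lambda item: item[1], reverse=True)


replay = SurpriseTaggedReplay()
replay.on_outcome(expected=1.0, actual=0.0, context_cues=["frowning_face", "red_toy"])
replay.on_outcome(expected=0.2, actual=0.3, context_cues=["blue_wall"])  # not surprising
print(replay.replay_order())     # only the surprising episode is queued for replay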
Equipped with a drive to explore through "play", the Cognitive Engine encourages its agent to learn many different sequences for achieving similar outcomes. In possibly the highlight of all its faculties, the Cognitive Engine employs nearly all of its modules (frontal and posterior cortices, as well as left and right hemispheres) during a moment of failure. If the settings are right to maintain rumination on a desired goal, and the agent has experienced enough strategies, the engine can put together a novel approach to overcoming an obstacle! The Aquarius Cognitive Engine might be the first system to implement functional creativity in an embodied agent!
Applying the two cognitive faculties above (reacting to uncertainty and recombining learned strategies), Binny's "brain" gives him the ability to learn how different outcomes unfold in relation to the facial expressions of other individuals (other bots, or a human user). In the demo video linked below, you'll see that Binny can detect that he will not receive a high-value social experience when he simply reaches up for a hug (during a test). He has learned that a frowny face leads to no success there. However, if he ruminates on the goal after that obstacle, he can think of another approach to elicit a happy face from a contemporary...bringing a toy into the mix!
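Putting the last two ideas together, the recombination step could be pictured as a search that chains previously learned chunks until some composition reaches the blocked goal. The toy sketch below uses the hug/toy scenario from the demo; the state labels and the breadth-first search are my own illustration, not the Engine's actual planner.

```python
# Toy sketch: composing previously learned chunks into a novel strategy
# when the direct route to a goal is blocked. The state model is illustrative.

from collections import deque

def compose_strategy(start, goal, chunks, max_depth=4):
    """Breadth-first search over chunk chains: each chunk maps one state to another."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan                      # a (possibly novel) chain of chunks
        if len(plan) >= max_depth:
            continue
        for name, (pre, post) in chunks.items():
            if pre == state and post not in visited:
                visited.add(post)
                queue.append((post, plan + [name]))
    return None

# Learned chunks: name -> (state it applies in, state it produces).
chunks = {
    "reach_for_hug": ("near_person", "rebuffed"),     # the direct route fails
    "fetch_toy":     ("near_person", "holding_toy"),
    "offer_toy":     ("holding_toy", "happy_face"),
}
print(compose_strategy("near_person", "happy_face", chunks))  # ['fetch_toy', 'offer_toy']
```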
A virtual robot guided by a Cognitive Engine
The technology that drives Binny (the virtual robot) can be called an artificial neural network control system. The system can be broken into two parts: an Abstraction circuit and an Emotion (or Flexibility) circuit. Each circuit is further broken into smaller modules, and the two circuits work tightly together to drive learning and planning.
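The text above doesn't spell out the implementation, but the two-circuit, many-module organization could be expressed roughly like this. The module names and the simple message-passing interface are my assumptions, meant only to show the shape of the design.

```python
# Rough structural sketch of a two-circuit, modular control system.
# Module names and the message-passing interface are assumptions on my part.

class Module:
    def __init__(self, name):
        self.name = name

    def process(self, inputs):
        # Placeholder for a learned transformation (e.g., a small neural network).
        return {self.name: inputs}


class Circuit:
    def __init__(self, name, modules):
        self.name = name
        self.modules = modules

    def step(self, inputs):
        outputs = {}
        for module in self.modules:
            outputs.update(module.process(inputs))
        return outputs


class CognitiveEngine:
    """An Abstraction circuit and an Emotion (Flexibility) circuit working together."""
    def __init__(self):
        self.abstraction = Circuit("abstraction",
                                   [Module("spatial_relations"), Module("planning")])
        self.emotion = Circuit("emotion",
                               [Module("goal_setting"), Module("progress_monitor")])

    def step(self, observation):
        abstract = self.abstraction.step(observation)              # world structure & plans
        control = self.emotion.step({**observation, **abstract})   # goals, strategy, flexibility
        return control


engine = CognitiveEngine()
print(engine.step({"vision": "flower ahead"}))
```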
The modular design of this system is inspired by current scientific literature on human neuroanatomy. The functions of the various modules also follow general rules described in a Trends in Neurosciences article** written by my former thesis advisor, Randy O'Reilly.
The Abstraction system is based on the functions theorized for the 'Cold' and 'What'/'How' regions of the brain (see image). This circuit pulls out patterns of spatial relationships between goals and the other objects surrounding the agent. It also allows the agent to form complex plans to achieve its goal; those plans can involve tools (subgoals) that help the agent overcome an obstacle.
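To illustrate the tool-subgoal idea, here is a tiny planner that inserts a subgoal (fetching a tool) whenever the main goal is out of reach. The predicates and the "stick" tool are invented for the example; they stand in for whatever the Abstraction circuit actually represents.

```python
# Illustrative only: a plan that inserts a tool subgoal when the main goal is
# blocked. The predicates ("reachable", "tool_for") are invented for the sketch.

def plan_with_subgoal(world, goal_object):
    """Return an action list; insert a tool subgoal if the goal isn't reachable."""
    plan = []
    if not world["reachable"][goal_object]:
        tool = world["tool_for"][goal_object]      # subgoal: obtain a helpful tool
        plan += [("go_to", tool), ("pick_up", tool), ("use", tool, goal_object)]
    plan += [("go_to", goal_object), ("pick_up", goal_object)]
    return plan


world = {
    "reachable": {"flower": False},
    "tool_for":  {"flower": "stick"},
}
print(plan_with_subgoal(world, "flower"))
# [('go_to', 'stick'), ('pick_up', 'stick'), ('use', 'stick', 'flower'),
#  ('go_to', 'flower'), ('pick_up', 'flower')]
```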
The Emotion (Flexibility) system is based on the functions theorized for the 'Hot' zones of the neocortex. This is the circuitry that allows the agent to form goal representations and to integrate them with representations of the physical world (which are partly formed by the Abstraction system). The Emotion system helps select a strategy to complete a task and tracks the agent's progress toward its goals at the end of each motor plan. If obstacles come along, this system flexibly recruits other regions of the Emotion or Abstraction systems to get the agent back on a better track.
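That progress tracking could be sketched as a monitor that compares expected against achieved progress at the end of a motor plan and escalates to re-planning when the gap is too large. The tolerance value and the names of the recruited routines below are illustrative.

```python
# Sketch of end-of-plan progress monitoring with flexible escalation.
# The tolerance and the recruited routines are illustrative assumptions.

def monitor_progress(expected_progress, achieved_progress, replan, persist,
                     tolerance=0.2):
    """At the end of a motor plan, decide whether to stay the course or re-plan."""
    shortfall = expected_progress - achieved_progress
    if shortfall > tolerance:
        # Obstacle detected: recruit other modules to find a better strategy.
        return replan()
    return persist()


result = monitor_progress(
    expected_progress=1.0,
    achieved_progress=0.3,
    replan=lambda: "recruit the Abstraction system for a new plan",
    persist=lambda: "continue the current strategy",
)
print(result)   # -> recruit the Abstraction system for a new plan
```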
Many of the smaller modules throughout this control system were designed with various other functions or properties. In other words, some fine-tuning was needed to allow the network as a whole to operate smoothly. Nevertheless, the functions and properties assigned were based on theory discussed in dozens of other scientific journal articles covering all neocortical areas of the human and animal brain.
**The What and How of Prefrontal Cortical Organization, O'Reilly 2010