Published studies in cognitive neuroscience give us a starting point for a practical model of decision-making in a humanoid robot. A truly autonomous robotic agent needs the ability to react proportionally to a variable environment. That kind of flexible decision-making starts with a model of human emotion.

Drawing on research into functional brain systems, this project develops a neural network control circuit for flexible robotic behavior. The system drives motivation, tracks goal progress, and adjusts responses when plans fail. These capabilities are implemented through the emotion states listed above.

With these functional states, and the best current theories on functional organization in the brain, the Emotion system can be integrated with a Motor Planning system as shown above.
Check out the video below for a deep dive into the Emotion Model referenced above. The video steps through a well-known psychology study, showing how behavior emerges from the theoretical model. Finally, I show how flexible decision-making can be implemented in a humanoid robot.
A testing app was created to aid in the robot development. Servo motor configuration and gripper grasp-strength testing were carried out with a Python Tkinter UI application. Check it out!
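To give a feel for what such a test panel involves, here is a minimal sketch of a Tkinter UI with sliders for servo angle and grip strength. The send_command() function is a hypothetical placeholder; the real app's layout and the protocol it uses to talk to the servo controller are not shown in this post.

```python
# A minimal sketch of a servo/gripper test panel. send_command() is a
# placeholder: the actual app would forward values to the robot's servo
# controller (e.g., over a serial port) instead of printing them.
import tkinter as tk

def send_command(channel: str, value: int) -> None:
    print(f"{channel} -> {value}")  # stand-in for the real transport

root = tk.Tk()
root.title("Servo / Gripper Test Panel")

# Slider for a single servo's target angle, in degrees.
servo = tk.Scale(root, from_=0, to=180, orient=tk.HORIZONTAL,
                 label="Servo angle (deg)",
                 command=lambda v: send_command("servo_angle", int(v)))
servo.pack(fill=tk.X, padx=10, pady=5)

# Slider for gripper grasp strength, as a percentage of maximum torque.
grip = tk.Scale(root, from_=0, to=100, orient=tk.HORIZONTAL,
                label="Grip strength (%)",
                command=lambda v: send_command("grip_strength", int(v)))
grip.pack(fill=tk.X, padx=10, pady=5)

root.mainloop()
```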
A virtual robot guided by a Cognitive Model
Early development in this R&D project consisted of integrating a rudimentary emotional system and a motor planning system into a cognitive model. “Binny”, the virtual agent, was able to demonstrate some important capabilities relating to robot autonomy.
Just as has been observed in animal models in the lab, and as has been implemented in reinforcement learning (RL) models, this cognitive model can associate value levels with specific objects. When Binny sees an object that he has eaten and from which he has experienced a reward numerous times, a part of his "brain" will light up, signaling to Binny a potential rewarding event.
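A minimal sketch of this idea, assuming a simple tabular value update in the spirit of RL value estimation (the post does not specify the model's actual update rule):

```python
# Object-value learning: repeated rewards raise an object's stored value,
# and seeing a high-value object triggers an "anticipated reward" signal.
from collections import defaultdict

ALPHA = 0.1               # learning rate (assumed)
ANTICIPATION_GATE = 0.5   # value above which the reward signal "lights up"

object_value = defaultdict(float)  # learned value per object type

def experience_reward(obj: str, reward: float) -> None:
    """Nudge the stored value toward the reward just received."""
    object_value[obj] += ALPHA * (reward - object_value[obj])

def on_sight(obj: str) -> bool:
    """True if seeing this object should signal an anticipated reward."""
    return object_value[obj] > ANTICIPATION_GATE

# Binny eats a flower many times and finds it rewarding.
for _ in range(20):
    experience_reward("flower", 1.0)

print(on_sight("flower"))  # True: the anticipation circuit lights up
print(on_sight("rock"))    # False: no reward history
```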
When Binny isn't stressing about finding food, water, or someone to connect with socially, he will engage in exploratory "play". During play, Binny will approach, pick up, and manipulate various objects, including putting those objects in his mouth (just as babies do). Over time and after many repetitions, the cognitive model allows Binny to form specific sequence chunks that can get called up when he wants to experience a certain outcome (e.g. walking over, picking up, and eating a flower when hungry).
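A minimal sketch of that chunking behavior, assuming chunks are stored as ordered action lists keyed by the outcome they produced; the post does not describe the model's actual chunk representation.

```python
# Outcome-indexed sequence chunks: after enough repetitions, a successful
# action sequence is stored under its outcome and can be recalled on demand.
from collections import defaultdict

chunks: dict[str, list[list[str]]] = defaultdict(list)

def record_chunk(actions: list[str], outcome: str) -> None:
    """Store a successful action sequence under the outcome it produced."""
    if actions not in chunks[outcome]:
        chunks[outcome].append(actions)

def recall_chunk(desired_outcome: str) -> list[str] | None:
    """Retrieve a sequence that previously produced the desired outcome."""
    options = chunks.get(desired_outcome)
    return options[0] if options else None

record_chunk(["walk_over", "pick_up", "eat"], outcome="hunger_satisfied")
print(recall_chunk("hunger_satisfied"))  # ['walk_over', 'pick_up', 'eat']
```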
Certain modules of the cognitive model are designed to respond distinctively when expected sequences go awry. This lets the system emphasize surrounding cues during an unexpected event, enabling heightened learning later on, during sleep or a default mode in which motor sequences can be reconfigured.
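One way to picture this, as a sketch: a mismatch between predicted and observed outcomes tags nearby cues with salience, and an offline "sleep" phase replays the tagged events first. The mechanism below is an assumption for illustration, not the model's actual machinery.

```python
# Surprise-tagged cue buffering: unexpected events mark their surrounding
# cues for prioritized replay during an offline consolidation phase.
salient_events: list[dict] = []

def on_step(predicted: str, observed: str, cues: list[str]) -> None:
    if predicted != observed:
        # Unexpected event: tag the surrounding cues for heightened learning.
        salient_events.append({"cues": cues, "expected": predicted,
                               "got": observed, "salience": 1.0})

def sleep_replay() -> None:
    """Offline phase: replay tagged events so motor sequences can be revised."""
    for event in sorted(salient_events, key=lambda e: -e["salience"]):
        print("replaying:", event["cues"], "->", event["got"])

on_step(predicted="flower_eaten", observed="flower_blocked",
        cues=["frowning_face", "table_in_the_way"])
sleep_replay()
```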
Equipped with a drive to explore through "play", the cognitive model encourages its agent to learn many different sequences that achieve similar outcomes. Perhaps the highlight of all its faculties: the model engages nearly every module during a moment of failure. If the settings sustain rumination on a desired goal, and the agent has accumulated enough strategies, the model can assemble a novel approach to overcoming an obstacle. This demonstrates functional creativity in an embodied agent!
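A toy sketch of what such recombination could look like, assuming rumination keeps the goal active while the agent splices a sub-route from another learned sequence in place of the failed step; the post does not detail the actual search process.

```python
# Strategy recombination on failure: replace a blocked step with an
# alternative route borrowed from another learned sequence.
known = {
    "hunger_satisfied": [["approach", "pick_up", "eat"]],
    "object_retrieved": [["walk_around_obstacle", "pick_up"]],
}

def recombine(goal: str, failed_step: str) -> list[str] | None:
    """Splice an alternative route to the failed step into the original plan."""
    for seq in known.get(goal, []):
        if failed_step not in seq:
            continue
        i = seq.index(failed_step)
        # Look for another learned sequence that also achieves the failed step.
        for other in (s for seqs in known.values() for s in seqs):
            if other is not seq and other[-1] == failed_step:
                return seq[:i] + other + seq[i + 1:]
    return None

# "pick_up" fails (an obstacle is in the way); borrow a detour that
# also ends in "pick_up", yielding a plan the agent never executed before.
print(recombine("hunger_satisfied", "pick_up"))
# ['approach', 'walk_around_obstacle', 'pick_up', 'eat']
```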
Applying the two cognitive faculties above (reacting to uncertainty), Binny's "brain" gives him the ability to learn how different outcomes unfold in correlation with the facial expressions of other individuals (other bots, or a human user). In the demo video linked below, you'll see that Binny can detect that he will not receive a high-value social connection when he just reaches up for a hug (during a test). He has learned that a frowning face leads to no success there. However, if he ruminates on the goal after that obstacle, he can think of another approach to elicit a happy face from a contemporary... bringing a toy into the mix!
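As a sketch of that behavior, assuming facial expressions act as cues whose learned success rates gate which approach the agent tries (the demo's actual perception pipeline is not described here):

```python
# Expression-conditioned outcome prediction: track how often each action
# succeeds under each observed facial expression, then pick accordingly.
expression_outcome: dict[tuple[str, str], float] = {}

def update(expression: str, action: str, success: bool, lr: float = 0.2) -> None:
    """Move the estimated success rate toward the observed outcome."""
    key = (expression, action)
    prior = expression_outcome.get(key, 0.5)
    expression_outcome[key] = prior + lr * (float(success) - prior)

def predict(expression: str, action: str) -> float:
    """Estimated chance of a social reward for this action under this cue."""
    return expression_outcome.get((expression, action), 0.5)

# Reaching for a hug under a frown repeatedly fails...
for _ in range(10):
    update("frown", "reach_for_hug", success=False)
# ...while bringing a toy into the mix succeeds.
for _ in range(10):
    update("frown", "offer_toy", success=True)

print(predict("frown", "reach_for_hug"))  # low: expect no social reward
print(predict("frown", "offer_toy"))      # high: try the toy instead
```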