Currently, the cognitive model is self-driven to reproduce outcomes it has observed. This combination of intrinsic drive and causal-relationship learning is what lets the robot ask another individual for information about a task. The ability to ask questions removes the need to overtrain robots on every specific task movement.
The model learns causal relationships by watching the world, including watching a human do something. From a single demonstration, it can build its own plan to repeat the task. No labeled data, no scripted training run.
Things go wrong. When the agent hits an obstacle it wasn't trained for, it scales its approach back step by step, and either finds a new path or asks for help. The result is a robot that handles the messy middle of a task without a human watching over it.
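The step-back behavior above can be sketched as a simple retry loop. Everything here is a hypothetical illustration, not the system's actual API: `try_action`, `ask_for_help`, and the `caution_level` parameter are invented names for the idea of progressively relaxing an approach before escalating to a human.

```python
def execute_with_fallback(plan, try_action, ask_for_help, max_retries=3):
    """Attempt each step of a plan; on failure, retry with a
    progressively more cautious approach, then ask a human.

    try_action(step, caution_level) -> bool  (hypothetical primitive)
    ask_for_help(step) -> outcome            (hypothetical escalation)
    """
    for step in plan:
        for attempt in range(max_retries):
            if try_action(step, caution_level=attempt):
                break  # step succeeded; move on to the next step
        else:
            # every relaxed variant failed: hand control to a human
            return ask_for_help(step)
    return "done"
```

The point of the sketch is the structure, not the primitives: recovery is local to the failing step, and asking for help is the last rung of the ladder, not the first.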
Every interaction adds to the model's experience of object features and causal relationships. When something new comes up, it can recombine those pieces into a plan it was never explicitly taught, finding solutions on its own.
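Recombining learned cause-and-effect pairs into an untaught plan is, at its simplest, a search over known transitions. The sketch below is an assumption about the mechanism, not the model's implementation: it treats experience as a table of `(state, action) -> state` effects and finds an action sequence that was never demonstrated as a whole.

```python
from collections import deque

def plan(start, goal, known_effects):
    """Breadth-first search over learned effects.

    known_effects maps a state to a list of (action, resulting_state)
    pairs the agent has experienced. Returns the first action sequence
    reaching the goal, or None if no combination of known effects works.
    """
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, result in known_effects.get(state, []):
            if result not in seen:
                seen.add(result)
                frontier.append((result, actions + [action]))
    return None
```

For example, having separately experienced grasping, carrying, and releasing, the search can chain them into a put-the-cup-in-the-sink plan it was never explicitly shown.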
Reading the room is the same skill as recovering from a failed plan: both require tracking what's happening and adjusting in proportion. We've shown this in a simple social experiment with our humanoid rover: appropriate, human-like responses, not just default cheerfulness. Comfort is what brings people back.
The model learns nouns, verbs, and prepositions the way a child does -- by associating words with the objects, actions, and relationships it directly experiences. Language and understanding develop together, so a robot can act on a simple spoken command without being trained on that exact phrasing.
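Child-style word learning of this kind is often modeled as cross-situational association: a word's meaning is whatever it reliably co-occurs with across experienced scenes. The snippet below is a minimal sketch of that idea under simplifying assumptions (bag-of-words utterances, scenes as sets of referent labels), not a description of the model's internals.

```python
from collections import defaultdict

def learn_word_meanings(episodes):
    """Cross-situational word learning.

    episodes is a list of (utterance, scene) pairs, where scene is a
    set of referent labels present when the utterance was heard.
    Each word is mapped to the referent it co-occurs with most often.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for utterance, scene in episodes:
        for word in utterance.split():
            for referent in scene:
                counts[word][referent] += 1
    return {word: max(refs, key=refs.get) for word, refs in counts.items()}
```

After a few varied episodes, ambiguity washes out: "ball" keeps co-occurring with the ball across scenes with different colors and actions, so the association settles on the right referent without any labeled training pairs.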