I am interested in using machine learning techniques to build embodied artificial intelligence (e.g. autonomous robots). This is a very interdisciplinary pursuit, found at the crossroads of computer science, cognitive science, biology, engineering, psychology, neuroscience, philosophy, physics, mathematics, and many other interesting fields. But I tend to describe it as an approach to artificial intelligence through the lens of the biological optimization we see in nature (though the influence surely runs in both directions).
This high-level goal is broken up into two interrelated and interdependent subgoals. The first is the automated design of decision-making systems for autonomous agents. This pursuit draws inspiration from behaviorist psychology, and uses deep reinforcement learning to shape the way in which an artificial agent modifies its behavior to maximize rewards and minimize punishments over time – while building a model of the world around it to help understand when and how these rewards take place. In practice this involves optimizing a deep neural network to choose an appropriate action (e.g. output a joint motor command) from a high-dimensional input state (e.g. a camera image or video stream). The network is optimized by changing its weights in such a way as to minimize its temporal difference error (i.e. the mismatch between the reward it expected to get before taking an action and the reward it actually experienced afterward).
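The temporal difference idea above can be sketched in a few lines. This is a minimal, illustrative example (a tabular Q-learning update rather than a deep network, and toy state/action indices I made up for the demo), not code from my actual research:

```python
import numpy as np

def td_error(q, state, action, reward, next_state, gamma=0.9):
    """Mismatch between the experienced (bootstrapped) return and the prior expectation."""
    target = reward + gamma * np.max(q[next_state])  # reward actually observed, plus discounted future estimate
    return target - q[state, action]                 # minus what the agent expected before acting

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Nudge the value estimate a small step toward the observed target."""
    q[state, action] += alpha * td_error(q, state, action, reward, next_state, gamma)
    return q

# Toy world: 3 states, 2 actions, all value estimates start at zero.
q = np.zeros((3, 2))
q = q_update(q, state=0, action=1, reward=1.0, next_state=2)  # one surprising reward
```

A deep RL agent replaces the table `q` with a neural network and follows the gradient of this same error, but the learning signal – surprise about reward – is identical.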
The second high-level goal in designing autonomous agents is the automated design of the physical layout of robot bodies (i.e. morphologies). This includes their shape, size, and sensor and actuator layouts – as well as how they grow and develop over time, and/or change in response to environmental stimuli. In essence, I loosely mimic the same evolutionary and developmental processes that biology used to create us (and all life on this planet), but apply them to artificial, rather than biological, creatures. I do this both in an attempt to create robotic behavior on the order of complexity of biological life, and simply to better understand the mechanisms and optimization of evolutionary and developmental processes themselves.
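The evolutionary loop behind this kind of morphology search can be sketched very compactly. Everything here is a hypothetical stand-in – the genome is just three body parameters, and the fitness function is a toy I invented for illustration (real fitness would come from simulating the robot):

```python
import random

def fitness(morphology):
    # Hypothetical stand-in for a physics simulation: reward limb
    # lengths near an arbitrary target profile.
    target = [0.5, 0.3, 0.8]
    return -sum((m - t) ** 2 for m, t in zip(morphology, target))

def mutate(morphology, sigma=0.05):
    # Small Gaussian tweaks to each body parameter, clipped to [0, 1].
    return [min(1.0, max(0.0, m + random.gauss(0, sigma))) for m in morphology]

def evolve(pop_size=20, generations=50, genome_len=3, seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection: keep the fitter half
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

The "design" emerges from repeated variation and selection; no part of the loop needs to understand why one body works better than another.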
For a classic example of the evolution of body plans, locomotion, and competitive strategy in an artificial setting, check out this great – way ahead of its time – video from a personal hero and friend: Karl Sims, Evolved Virtual Creatures (1994)
From a cognitive perspective, once I have optimized a functioning agent, I can stop and analyze the artificial neurons and connections inside its brain. This is incredibly difficult to perform at any reasonable scale inside a biological brain. Yet if the neurons and connectivity of the artificial brain follow the same general ideas and principles that their biological counterparts do, I can start to understand what types of features occur inside that brain, and even how this information relates to the structure of the agent’s body plan or environment. I can use these techniques to think about things like internal mental representations of the outside world, pattern recognition and association, sensorimotor control, prediction and decision making – and hopefully, one day, to understand general awareness and cognition within this framework.
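The kind of analysis described above starts from something simple: record the internal activity of a network while it processes inputs, then look for structure in it. Here is a minimal sketch with a randomly initialized toy "brain" (the weights, sizes, and correlation analysis are all illustrative assumptions, not a trained agent):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in "brain": 4 sensors, one hidden layer of 8 tanh units, 2 motors.
w1 = rng.normal(size=(4, 8))   # sensor -> hidden weights
w2 = rng.normal(size=(8, 2))   # hidden -> motor weights

def forward(x):
    hidden = np.tanh(x @ w1)   # the internal "neural" activity we can record
    return hidden, hidden @ w2

# Record hidden activity over a batch of (random) sensory inputs...
inputs = rng.normal(size=(100, 4))
hidden, motors = forward(inputs)

# ...then look for structure, e.g. which hidden units co-activate.
corr = np.corrcoef(hidden.T)   # 8 x 8 matrix of unit-to-unit correlations
```

In a trained agent, clusters in a matrix like `corr` can hint at functional groupings of neurons – something nearly impossible to measure exhaustively in biological tissue.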
My current research focuses not only on the high-level processes and decision making that we often think of as “cognition”, but is particularly interested in the way in which our bodies, and our situation within a particular environment, turn tasks that might be difficult to do explicitly and consciously into processes that are instinctual, automatic, and seamless. This is a facet of embodied cognition (specifically, morphological computation). In exploring the co-optimization of robotic morphologies (body plans) and controllers (brains), I am able to design better and more efficient robotic platforms – but, more interestingly, I am able to help illuminate this poorly studied aspect of animal behavior and cognition.
In essence, while most roboticists are simply trying to find the controller which allows a given robot to successfully perform a given task (i.e. the smartest and most complex brain), I am also trying to understand which robot will use the simplest controller to solve a given task (i.e. the body plan that will allow that task to be solved by the simplest possible brain). I find this question interesting and important because biological creatures are constantly driven to be as efficient and effective as possible, and nature has clearly optimized their (our) shape and form to allow for this. If we can understand how this process works, and combine it with biologically inspired optimization techniques that allow us to create complex and flexible decision-making systems, I believe that we can also use it to make our machines more efficient, effective, and scalable.
This emphasis on embodiment and the geometric design of forms (morphologies in my robotics work, but more generally any physical shape, object, or pattern) also enables me to collaborate with incredible engineers on design automation in the context of automated manufacturing (like 3D printing). An understanding of how our bodies grow and develop into a finely tuned apparatus can be applied to produce functional and optimized designs balancing any number of complexly interacting trade-offs and objectives in an engineering context. Carefully balancing these trade-offs – or even intuitively understanding how design decisions affect the function of a given form – becomes more and more challenging as 3D printers allow us to create arbitrarily complex forms from increasingly complex materials. Luckily, the “blind watchmaker” of evolutionary design doesn’t need to comprehend or model this system to be able to optimize it. By mimicking the divergent and inventive nature of evolutionary and developmental optimization with artificial design systems, we are able to build the “creative machine.”