I am generally interested in using machine learning techniques to build embodied artificial intelligence (e.g. autonomous robots). This is a highly interdisciplinary pursuit, found at the crossroads of computer science, cognitive science, biology, engineering, psychology, neuroscience, philosophy, physics, mathematics, and many more fields. I tend to describe it as an approach to artificial intelligence through the lens of the biological optimization we see in nature (though the converse is surely also true).
This high-level goal is broken up into two interrelated and interdependent research trajectories. The first is the automated design of decision-making systems for autonomous agents. This pursuit draws inspiration from behaviorist psychology, and uses deep reinforcement learning to shape the way in which an artificial agent modifies its behavior to maximize rewards over time – while building a model of the world around it to help understand when and how those rewards arise. In practice, this involves optimizing a deep neural network to choose an appropriate action (e.g. to output a joint motor command) from a high-dimensional input state (e.g. a camera image or video stream).
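As an illustrative sketch (not my actual training code), the input-to-output mapping of such a policy might look like the following – the dimensions, the single hidden layer, and the untrained random weights are all chosen purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 32x32 grayscale camera image in,
# and a command for 6 joint motors out.
OBS_DIM, HIDDEN_DIM, ACT_DIM = 32 * 32, 64, 6

# Randomly initialized policy weights; in practice these would be
# trained with a deep RL algorithm to maximize cumulative reward.
w1 = rng.normal(0, 0.01, (OBS_DIM, HIDDEN_DIM))
w2 = rng.normal(0, 0.01, (HIDDEN_DIM, ACT_DIM))

def policy(observation: np.ndarray) -> np.ndarray:
    """Map a high-dimensional observation to a joint motor command."""
    hidden = np.tanh(observation @ w1)   # nonlinear feature layer
    return np.tanh(hidden @ w2)          # bounded motor commands in [-1, 1]

camera_image = rng.random(OBS_DIM)       # stand-in for a real sensor reading
action = policy(camera_image)
print(action.shape)                      # → (6,)
```

The point of the sketch is simply the shape of the problem: thousands of raw sensor values in, a handful of bounded motor commands out, with all of the interesting behavior living in the learned weights.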
My study of autonomous and flexible decision-making systems is especially focused on creating agents that are able to understand and carry out many different tasks simultaneously. This ability to perform flexible and general decision making is often referred to as "general artificial intelligence". In contrast to traditional AI agents that are specialized for a single task (e.g. playing chess) or environment (e.g. a robot in a warehouse), generally intelligent agents are designed to extrapolate knowledge across many domains (e.g. learning to play multiple different games without forgetting previously learned behaviors). This major research thrust of creating flexible and adaptable agents has been generously funded by DARPA's program for Lifelong Learning Machines.
The second research thrust in designing autonomous intelligent agents is the automated design of the physical layout of robot bodies (i.e. morphologies). This includes their shape, size, and sensor and actuator layouts – as well as how they grow and develop over time, and/or change in response to environmental stimuli. The optimization of robot layouts is critical, because the way in which a robot physically interacts with the world (taking actions and collecting sensory information) drastically affects what sort of decision-making processes it will, and should, carry out. To accomplish this goal, I loosely mimic the same evolutionary and developmental processes that biology used to create us (and all life on this planet), but apply them to artificial, rather than biological, creatures. I do this both in an attempt to create robotic behavior approaching the complexity of biological life, and simply to better understand the mechanisms and optimization of evolutionary and developmental processes themselves.
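The evolutionary loop underlying this kind of morphology optimization can be sketched in a few lines. This is a deliberately minimal toy: the "morphology" is just a parameter vector (e.g. limb lengths), and the fitness function – which in real work would come from a physics simulation of the robot attempting a task – is replaced by distance to a hypothetical optimal body plan:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a morphology: a vector of body parameters.
# The "task" here is simply to match a hypothetical optimal body plan.
TARGET = np.array([0.8, 0.3, 0.5, 0.9])

def fitness(morphology: np.ndarray) -> float:
    # Higher is better; real fitness would come from simulating behavior.
    return -float(np.sum((morphology - TARGET) ** 2))

# Initial random population of candidate bodies.
population = rng.random((20, TARGET.size))

for generation in range(50):
    scores = np.array([fitness(m) for m in population])
    # Keep the top half as parents (truncation selection).
    parents = population[np.argsort(scores)[-10:]]
    # Children are mutated copies of the parents.
    children = parents + rng.normal(0, 0.05, parents.shape)
    population = np.vstack([parents, children])

best = max(population, key=fitness)
```

Selection plus mutation is enough to steadily improve the population without the optimizer ever needing an explicit model of why a given body works – the same property that makes evolutionary search attractive for real morphologies, where the body–behavior relationship is hard to write down.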
For a classic example of the evolution of body plans, locomotion, and competitive strategy in an artificial setting, check out this great – way ahead of its time – video from a personal hero and friend: Karl Sims, Evolved Virtual Creatures (1994)
From a cognitive perspective, once I have optimized a functioning agent, I can stop and analyze the artificial neurons and connections inside its brain. This form of "AI neuroscience" is critical for "opening the black box" and understanding the internal representations and decision-making logic of artificial neural networks – an increasingly important task as neural network models make increasingly complex and consequential decisions in our society (e.g. driving cars, deciding jail sentences, or firing weapons). Understanding artificial neural systems also holds potential for improving our understanding of biological neural systems, where recordings of high-resolution spike trains are expensive, invasive, and noisy. As I rely so heavily on our understanding of biological brains to help me design artificial neural networks, it's rewarding to also get the chance to work with outstanding neuroscientists and help improve our knowledge of this incredibly complex model system.
My research focuses not only on the high-level processes and decision making that we often think of as “cognition”, but also, in particular, on the way in which our bodies and our situation within a particular environment turn tasks that might be difficult to perform explicitly and consciously into processes that are instinctual, automatic, and seamless. This is a facet of embodied cognition (specifically, morphological computation). In exploring the co-optimization of robotic morphologies (body plans) and controllers (brains), I am able to design better and more efficient robotic platforms – and, more interestingly, to help illuminate this poorly studied aspect of animal behavior and cognition.
In essence, while most roboticists are simply trying to find the controller that allows a given robot to successfully perform a given task (i.e. the smartest and most complex brain), I am also trying to find the robot that can solve a given task with the simplest controller (i.e. the body plan that allows that task to be solved by the simplest possible brain). I find this question interesting and important because biological creatures are constantly driven to be as efficient and effective as possible, and nature has clearly optimized their shape and form to allow for this. If we can understand how this process works, and combine it with biologically inspired optimization techniques that allow us to create complex and flexible decision-making systems, I believe that we can use it to make our machines more efficient, effective, and scalable.
This goal of using flexible and robust developmental processes, and of optimizing the feedback loops between the brain and body of a robot, is also valuable to think about in the context of other nested, multi-scale, or meta-learning processes. My research applies the ideas of learning, development, and embodied cognition to the related problem of finding the deep neural network topology that enables the simplest and most efficient learning of control policies and decision-making systems.
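As a toy illustration of this kind of topology search (the task, the candidate widths, and the random-feature fitting are all assumptions for demonstration, not my actual method), one can score a few candidate hidden-layer sizes on a small regression problem standing in for "learn a control policy":

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task standing in for policy learning: fit y = sin(x) from samples.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

def train_and_score(hidden_units: int) -> float:
    """Score one candidate topology: random tanh features plus a
    least-squares output layer; return the mean squared error."""
    w1 = rng.normal(0, 1.0, (1, hidden_units))
    h = np.tanh(x @ w1)
    w2, *_ = np.linalg.lstsq(h, y, rcond=None)
    return float(np.mean((h @ w2 - y) ** 2))

# Evaluate each candidate topology and keep the best-scoring one; a
# fuller treatment would trade accuracy off against network size.
candidates = [2, 4, 8, 16]
scores = {h: train_and_score(h) for h in candidates}
best_topology = min(scores, key=scores.get)
```

The outer loop here is the part that matters: the network topology itself becomes a search variable, with each candidate scored by how easily it supports learning – directly analogous to scoring candidate body plans by how easily they support behavior.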
This emphasis on embodiment and the geometric design of forms (morphologies/topologies in my robotics and neural network research – but more generally any physical shape, object, or pattern) also enables me to collaborate with incredible engineers on generative design in the context of automated manufacturing (such as 3D printing). The understanding of how our bodies grow and develop into a finely tuned apparatus can be applied to produce functional and optimized designs balancing any number of complex and interacting trade-offs and objectives in an engineering context. Carefully balancing these trade-offs, or even intuitively understanding how design decisions affect the function of a given form, becomes more and more challenging as 3D printers allow us to create arbitrarily complex forms from increasingly complex materials. Luckily, the “blind watchmaker” of evolutionary design doesn’t need to comprehend or model this system in order to optimize it. In mimicking the divergent and inventive nature of evolutionary and developmental optimization in artificial systems, we are able to build the “creative machine.”