Research
Our research goal is to uncover the fundamental principles underlying learning and computation in the brain. To this end, we explore the intersection of neuroscience and artificial intelligence (AI). While modern AI systems achieve remarkable performance on tasks such as image recognition and language processing, the brain still surpasses them in learning efficiently from limited data and in computing with little energy. Our objective is to develop a theoretical understanding of how intricate connectivity patterns, at both microscopic and macroscopic levels, together with complex learning mechanisms give rise to the brain's exceptional efficiency.
Ongoing Projects
Synaptic Plasticity and Credit Assignment
A central challenge for learning in the brain is that synapses must solve the credit assignment problem using only locally available information. While many algorithms have been proposed, it remains unclear how different credit assignment algorithms shape neural representations differently and whether they are consistent with learning in the brain. Moreover, their scalability, such as sample and computational complexity and convergence rate, remains mostly unknown. We aim to bridge these gaps by generating testable predictions from previously proposed credit assignment algorithms, proposing new algorithms inspired by the diverse synaptic and neural plasticity mechanisms found in the brain, and analyzing these algorithms rigorously using tools from mathematical optimization and statistical learning theory. We also study plasticity mechanisms from a Bayesian perspective by interpreting plasticity as Bayesian inference in the synaptic weight space.
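As a minimal illustration of what "locally available information" means here, the toy sketch below implements a three-factor plasticity rule for a single linear neuron: each synaptic update depends only on presynaptic activity, postsynaptic activity, and a global scalar modulator (e.g., a reward prediction error). The task, learning rate, and network size are hypothetical choices for illustration, not a model from our work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a single linear neuron learns target weights using
# only local quantities (presynaptic activity, postsynaptic activity) plus a
# globally broadcast scalar modulator.
n_inputs, n_steps, lr = 5, 2000, 0.05
w_true = rng.normal(size=n_inputs)      # target weights (unknown to the learner)
w = np.zeros(n_inputs)                  # learned synaptic weights

for _ in range(n_steps):
    x = rng.normal(size=n_inputs)       # presynaptic activity
    y = w @ x                           # postsynaptic activity
    y_target = w_true @ x               # desired output on this trial
    modulator = y_target - y            # scalar "third factor", e.g. an error signal
    # Three-factor rule: Delta w = lr * (global modulator) * (presynaptic activity).
    # No synapse needs access to other synapses' weights or inputs.
    w += lr * modulator * x

print(np.max(np.abs(w - w_true)))       # remaining weight error, small after training
```

With a scalar error as the modulator this rule reduces to the classic delta rule, which is one reason such rules are attractive starting points for biologically plausible credit assignment.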
Computational and Theoretical Connectomics
Thanks to advances in experimental techniques, neuron-to-neuron connectivity data are becoming increasingly available at both microscopic and macroscopic scales, with cell-type specificity and even developmental profiles. Despite this progress, little theoretical work has addressed why particular connectivity structures are selected through evolution and development. While such data are often treated merely as constraints on computational models, recent results, including our own work, indicate that it is possible to build normative theories of both microscopic and macroscopic connectivity structures. In close collaboration with experimental laboratories, we develop connectome-inspired normative models of neural computation and learning and produce testable predictions about connectivity and neural activity.
Neural and Behavioral Data Analysis
Modern systems neuroscience experiments generate complex, large-scale data. For instance, in the International Brain Laboratory, neural and behavioral data were collected from hundreds of mice performing tens of millions of trials, which raises the challenge of inferring behavioral policies and neural computation algorithms from such data. In collaboration with experimental laboratories, we infer the algorithms implemented by neural networks and animals while developing scalable approximate Bayesian approaches to these inverse learning problems.
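To give a flavor of such inverse problems, the sketch below infers a single behavioral policy parameter, the inverse temperature of a softmax choice rule, from simulated binary choices using a grid posterior with a flat prior. The task, parameter values, and grid are illustrative assumptions; real analyses involve far richer policies and scalable approximate inference rather than exhaustive grids.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy task: on each trial the animal chooses between two options
# with a softmax policy governed by inverse temperature beta.
beta_true, n_trials = 3.0, 500
values = rng.normal(size=(n_trials, 2))            # per-trial option values
dv = values[:, 1] - values[:, 0]                   # value difference
p_right = 1.0 / (1.0 + np.exp(-beta_true * dv))
choices = rng.random(n_trials) < p_right           # simulated binary choices

# Grid posterior over candidate beta values (flat prior on the grid).
betas = np.linspace(0.1, 10.0, 200)
loglik = np.array([
    np.sum(np.where(choices,
                    np.log(1.0 / (1.0 + np.exp(-b * dv))),
                    np.log(1.0 - 1.0 / (1.0 + np.exp(-b * dv)))))
    for b in betas
])
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()
beta_map = betas[np.argmax(posterior)]
print(beta_map)                                    # recovered policy parameter
```

Even this one-dimensional example shows the basic recipe, a likelihood of observed behavior under a parameterized policy combined with a prior, that scalable approximate Bayesian methods generalize to high-dimensional policies and learning rules.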
Structured Knowledge Processing
The mammalian brain can process structured knowledge, such as spatial maps, relational structures, and sequential memories. However, little is known about how the brain constructs these representations in short- and long-term memory and uses the stored knowledge for inference and decision-making. This is also a challenge for modern AI, which often requires hand-crafted architectures or extensive data to process structured inputs. We study how the brain manipulates structured knowledge through modeling and data analysis, and we propose brain-inspired algorithms for potential engineering applications.
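One widely studied candidate model of how spatial and relational structure could be stored and reused is the successor representation (SR), which summarizes expected discounted future state occupancies. The sketch below learns the SR of a small deterministic ring environment by temporal-difference updates and checks it against the analytic solution; the environment, learning rate, and discount factor are illustrative assumptions rather than a specific model of ours.

```python
import numpy as np

# Hypothetical sketch: learning the successor representation (SR) of a
# deterministic ring of states by temporal-difference (TD) updates.
# M[s, s2] estimates expected discounted future occupancy of s2 starting from s.
n_states, gamma, lr = 5, 0.9, 0.1
M = np.zeros((n_states, n_states))
I = np.eye(n_states)

s = 0
for _ in range(5000):
    s_next = (s + 1) % n_states        # deterministic walk around the ring
    # TD update toward the SR Bellman target: one-hot(s) + gamma * M[s_next]
    M[s] += lr * (I[s] + gamma * M[s_next] - M[s])
    s = s_next

# Analytic SR for transition matrix P: M = (I - gamma * P)^(-1)
P = np.roll(np.eye(n_states), 1, axis=1)
M_true = np.linalg.inv(np.eye(n_states) - gamma * P)
print(np.max(np.abs(M - M_true)))      # learned SR matches the analytic one
```

A map like M supports rapid re-evaluation when rewards change without relearning the environment's structure, which is one proposed computational role for hippocampal cognitive maps.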