Visual cortex; Retina; Synaptic plasticity; Learning; Perception; Information theory; Machine learning
I study how the brain represents information about the outside world, and how those representations are learned. My immediate goal is to build on my expertise in machine learning and sensory neuroscience to create a camera-to-brain translator that could restore sight to the blind and could also be used in computer vision systems. In parallel, I will develop new data science methods to infer the brain's learning rules from in vivo neural data, and I will use those methods to determine how behavioural context affects synaptic plasticity in the visual cortex. Next, I will apply the brain's learning rules to build next-generation machine learning algorithms that are more flexible and efficient than the current state of the art. Finally, I will reveal how interactions between different retinal ganglion cell types support the communication of visual information from the eyes to the brain. That work may have strong implications for the development of next-generation retinal prosthetics.