Machine learning, in particular deep learning, is a powerful tool that is increasingly being applied to challenging problems, including those faced in neuroscience. Deep neural networks far surpass previous computer vision algorithms on tasks such as image and facial recognition, and their emergent properties appear to mimic those found in biological systems. However, it is often unclear how studying artificial neural networks can help us better understand biological ones.
I have been developing an approach that differs from how this research is typically conducted: it places a strong emphasis on establishing and maintaining the correspondence between artificial and biological systems, so that insights gained from the artificial system can be explicitly linked to the brain. Thus far, I have applied this approach to studying how visual motion information is processed and combined with vestibular information to solve the problem of causal inference, i.e., determining whether multiple sensory inputs were caused by the same event or by different events. If you are skeptical about what we can learn from neural networks, I hope that some of the work I present will begin to convince you otherwise.
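To make the causal-inference problem concrete, here is a minimal sketch of the standard Bayesian formulation of the task (in the style of the classic ideal-observer model of multisensory causal inference), not the specific model or network used in this work: given a visual and a vestibular estimate of heading, compute the posterior probability that both were generated by a single common cause. All parameter values and function names below are illustrative assumptions.

```python
import math

def gauss(x, mu, var):
    """Gaussian density at x with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(x_vis, x_vest, sig_vis=1.0, sig_vest=1.0,
             mu_p=0.0, sig_p=10.0, prior_common=0.5):
    """Posterior probability that visual and vestibular cues share one cause.

    Closed-form Bayesian causal inference with Gaussian cue noise and a
    Gaussian prior over the source location; parameters are illustrative,
    not fitted to data.
    """
    v1, v2, vp = sig_vis ** 2, sig_vest ** 2, sig_p ** 2
    # Likelihood under a single shared cause (source integrated out):
    # discrepant cues are penalized relative to the combined noise.
    denom = v1 * v2 + v1 * vp + v2 * vp
    like_c1 = math.exp(-0.5 * ((x_vis - x_vest) ** 2 * vp
                               + (x_vis - mu_p) ** 2 * v2
                               + (x_vest - mu_p) ** 2 * v1) / denom) \
              / (2 * math.pi * math.sqrt(denom))
    # Likelihood under two independent causes: each cue explained separately.
    like_c2 = gauss(x_vis, mu_p, v1 + vp) * gauss(x_vest, mu_p, v2 + vp)
    post = like_c1 * prior_common
    return post / (post + like_c2 * (1 - prior_common))
```

With these assumed parameters, nearby cues such as `p_common(0.2, 0.3)` yield a high posterior for a common cause, while widely separated cues such as `p_common(0.0, 8.0)` favor separate causes; this is the judgment the sensory system must make when combining motion signals.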