On 22 September 2020 at 2:15 p.m., in room 2049, Narva Rd. 18, Ilya Kuzovkin will defend his thesis "Understanding Information Processing in Human Brain by Interpreting Machine Learning Models" for obtaining the degree of Doctor of Philosophy (Computer Science).
Supervisor: Prof. Raul Vicente (Institute of Computer Science, UT).
Opponents: Prof. Tim Kietzmann (Radboud University, The Netherlands);
Dr. Fabian Sinz (University of Tübingen, Germany).
Building a model of a complex phenomenon is an ancient way of gaining knowledge and understanding of the reality around us. Models of planetary motion, gravity, and particle physics are examples of this approach. In neuroscience, there are two ways of coming up with explanations of reality: a traditional hypothesis-driven approach, where a model is first formulated and then tested against the data, and a more recent data-driven approach, which relies on machine learning to generate models automatically.
The hypothesis-driven approach provides full understanding of the model but is time-consuming, as each model has to be conceived and tested manually. The data-driven approach requires only the data and the computational resources to sift through potential models, saving time but leaving the resulting model a black box. Given the growing amount of neural data, we argue in favor of a more widespread adoption of the data-driven approach, reallocating part of the human effort away from manual modeling.

The thesis is based on three examples of how interpretation of machine-learned models leads to neuroscientific insights on three different levels of neural organization. Our first interpretable model is used to characterize the neural dynamics of localized neural activity during a visual perceptual categorization task. Next, we compare the activity of the human visual system with the activity of a convolutional neural network, revealing explanations about the functional organization of the human visual cortex. Lastly, we use dimensionality reduction and visualization techniques to understand the relative organization of mental concepts within a subject's mental state space and apply it in the context of brain-computer interfaces.
Recent results in neuroscience and AI show similarities between the mechanisms of both systems. This supports the relevance of our approach: interpreting the mechanisms employed by machine learning models can shed light on the mechanisms employed by our brain.