
Vincent Cheung

Sony CSL Tokyo

Vincent Cheung is a Project Researcher at Sony CSL Tokyo, where he works on connecting physiological signals and neural activity during music listening towards building biosignal-informed recommendation systems. He has a background in mathematics and computational neuroscience, and he completed his doctoral studies on the neurocognitive mechanisms underlying musical pleasure at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. His work has been covered by major news outlets worldwide in over 12 languages, including CNN, The Times, Frankfurter Allgemeine Zeitung, and la Repubblica, and has been featured among Scientific Reports’ Top 100 most-read articles in neuroscience.

Maximising brain decoding performance from fMRI data with cross-validated voxel-wise whole-brain feature selection

The goal of brain decoding is to infer cognitive states and stimulus information from subjects’ neural activity. While non-invasive neuroimaging methods such as functional MRI (fMRI) offer excellent spatial resolution at the millimetre scale, the number of potential decoding features far exceeds the number of samples that can be obtained in a typical experiment, leaving decoders prone to overfitting. In this talk, I will describe the basic principles of fMRI data acquisition and analysis, and present our recent method for maximising brain decoding performance from fMRI data using cross-validated voxel-wise whole-brain feature selection. I will demonstrate its application to improving decoding performance in the auditory and visual domains, and discuss its potential for reconstructing perceived stimuli from neural activation.
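
For readers unfamiliar with the technique, the sketch below is a generic illustration of cross-validated voxel-wise feature selection, assuming scikit-learn and a synthetic trials-by-voxels matrix; it is not the speaker's published pipeline, and all variable names are illustrative. The core idea is that per-voxel scores are computed only on training folds, and the number of retained voxels is tuned by nested cross-validation, so decoding accuracy is never inflated by selecting features on test data.

# Generic, hypothetical sketch of cross-validated voxel-wise feature selection
# for fMRI decoding (illustrative only; not the method presented in the talk).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for a trials-by-voxels matrix and stimulus labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5000))   # 120 trials x 5000 voxels
y = rng.integers(0, 2, size=120)       # binary stimulus condition

# Feature selection lives inside the pipeline, so the per-voxel F-scores
# are computed on training folds only (no information leaks from test folds).
pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Inner CV tunes how many voxels to keep; outer CV estimates the accuracy
# of the entire selection-plus-classification procedure.
search = GridSearchCV(pipe, param_grid={"select__k": [100, 500, 1000]}, cv=5)
scores = cross_val_score(search, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

The design point worth noting is that selection happens inside each training fold: scoring voxels on the full dataset before cross-validation would optimistically bias the reported decoding accuracy.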