I began my Master's in Neuroscience in 2006 using functional magnetic resonance imaging (fMRI) to investigate multisensory integration in speech perception. My initial question asked whether the brain combines relevant visual information with incoming auditory signals at a low level of processing (such as primary auditory cortex). For example, prior knowledge of the content of a severely distorted or degraded speech utterance can render it perfectly intelligible. Given the complex network of feedback connections among the levels of the auditory system, it is possible that the brain uses this prior knowledge to directly constrain or shape the incoming auditory signal. In 2008, after one and a half years of fun playing with fMRI data, I decided to switch directly into the Neuroscience PhD program. My current work continues what began as a Master's project; I have since completed another fMRI study and am currently running psychophysical experiments that attempt to objectively measure this subjective increase in the perceptual clarity of distorted speech. In addition to speech perception and multisensory integration, my research interests include advanced fMRI modelling techniques (such as functional and effective connectivity), real-time DSP algorithms, and finding a way to batch script any tedious task I have to perform more than once.