Recent successes in decoding speech from cortical signals provide hope for restoring function to those who have lost the ability to speak normally. Despite these successes, the exact cortical representation and functional dynamics of speech production remain unknown. Prominent theoretical models of speech production differ in their hypothesized functional organization of speech motor cortex. Using electrocorticography, with its fine spatial and temporal resolution, we can analyze the precise spatial and temporal cortical dynamics underlying complex speech mechanisms.
This dissertation addresses several open questions in the current speech brain-computer interface (BCI) literature and recommends a methodology for successful speech classification from electrocorticographic (ECoG) electrodes. To address current limitations and barriers to widespread BCI adoption, I seek to add to the engineering merit of the communicative-BCI field through the mechanistic analyses and results of three separate studies. In the first study, I determine which factors contribute to successful phonemic decoding of an ECoG signal. In the second study, I determine the cortical representation of phonemic categorization in speech production. In the third study, I leverage classification results to address the structure of cortical correlates of speech production. Together, these studies outline a set of guidelines for future speech-BCI research that will work toward useful speech-BCI neuroprosthetics.