Research

Brain-machine interfaces

Non-invasive BMI

My current research investigates the properties of non-invasive brain-machine interfaces (BMIs), using EEG for rehabilitation and for restoring communication in individuals with severe speech and motor deficits. I employ two major techniques:

  1. sensorimotor rhythms for continuous prediction and instantaneous audio-visual feedback of synthesized vowel sounds from speech and motor imagery (a minimal decoding sketch follows this list)
  2. multimodal EEG signals as input to discrete, choice-based systems that serve as alternative input modalities for traditional augmentative and alternative communication (AAC) systems, including self-generated motor rhythms and stimulus-evoked neural responses to perceptual events (e.g. P300 event-related potentials, steady-state visual evoked potentials (SSVEP), and auditory steady-state responses (ASSR))
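The sketch below illustrates the first technique under simplified assumptions: mu- and beta-band power is estimated from a short EEG window and mapped linearly to two formant frequencies (F1, F2) that could drive a vowel synthesizer for feedback. The channel count, band edges, sampling rate, and the weights W and b are illustrative placeholders, not the parameters of any deployed system.

```python
# Hypothetical sensorimotor-rhythm decoder sketch: EEG window -> band-power
# features -> linear mapping to two formant frequencies for vowel feedback.
# All constants (sampling rate, bands, weights) are illustrative only.
import numpy as np
from scipy.signal import welch

FS = 256                     # assumed EEG sampling rate (Hz)
MU_BAND = (8.0, 13.0)        # sensorimotor mu rhythm (Hz)
BETA_BAND = (18.0, 26.0)     # sensorimotor beta rhythm (Hz)

def band_power(window, fs, band):
    """Mean power spectral density within a frequency band, per channel."""
    freqs, psd = welch(window, fs=fs, nperseg=window.shape[-1], axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

def decode_formants(window, W, b):
    """Map log band-power features from one EEG window to (F1, F2) in Hz.

    window : (n_channels, n_samples) EEG segment
    W, b   : linear regression weights, fit offline on training data
    """
    features = np.concatenate([
        np.log(band_power(window, FS, MU_BAND)),
        np.log(band_power(window, FS, BETA_BAND)),
    ])
    f1, f2 = W @ features + b
    return float(f1), float(f2)

# Toy usage: 8 channels, a 0.5 s window, and placeholder weights.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, FS // 2))
W = rng.standard_normal((2, 16)) * 10.0
b = np.array([500.0, 1500.0])          # roughly neutral-vowel formants
print(decode_formants(eeg, W, b))
```

In a real feedback loop a mapping of this kind would run on overlapping windows many times per second, with the predicted formants driving the synthesized audio-visual feedback heard and seen by the participant.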

Invasive BMI

A major research obstacle for intracortical BCIs is long-term, chronic recording of neural units. Initial steps have been taken toward such recordings, but I am interested in refining these techniques to increase the reliability and stability of single- and multi-unit recordings. Such refinements include improved electrode designs (hardware) and robust spike detection and classification techniques (software). I am also involved in the design and implementation of neural decoding algorithms, primarily for predicting speech information (either acoustic or articulatory) from spiking activity using continuous, adaptive filter techniques.
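As one concrete illustration of the continuous filtering approach, the sketch below implements a standard Kalman filter that tracks a low-dimensional speech state (here, two formant frequencies and their velocities) from binned firing rates, assuming linear dynamics and a linear tuning model. The matrices A, H, Q, and R are illustrative assumptions; in practice they would be fit from training data, and none of the values reflect my actual decoders.

```python
# Minimal Kalman-filter decoder sketch: track a formant state x = [F1, F2, dF1, dF2]
# from a vector of binned firing rates y, assuming linear dynamics and tuning.
# A, H, Q, R below are illustrative; in practice they are fit from training data.
import numpy as np

dt = 0.05                                   # 50 ms bins
A = np.eye(4)
A[0, 2] = A[1, 3] = dt                      # constant-velocity formant dynamics
Q = np.eye(4) * 1e2                         # process noise (assumed)

n_units = 20
rng = np.random.default_rng(1)
H = rng.standard_normal((n_units, 4))       # linear tuning of units to the state
R = np.eye(n_units) * 5.0                   # observation noise (assumed)

def kalman_step(x, P, y):
    """One predict/update cycle of the Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(n_units))
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Run the decoder on synthetic firing rates.
x = np.array([500.0, 1500.0, 0.0, 0.0])     # start near a neutral vowel
P = np.eye(4) * 1e3
for _ in range(10):
    y = H @ x + rng.standard_normal(n_units) * 2.0   # simulated observation
    x, P = kalman_step(x, P, y)
print("decoded F1, F2:", x[0], x[1])
```

The same predict/update structure extends to adaptive variants in which the tuning matrix and noise covariances are re-estimated as the recorded population changes over chronic timescales, which is one motivation for pairing decoder work with the recording-stability work described above.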

Funding

This research has been supported in part by the NIH/NIDCD (R03 DC011304, PI: J. Brumberg; R01 DC007683, PI: F. Guenther) and by CELEST, an NSF Science of Learning Center (NSF SBE-0354378, PI: E. Mingolla).


Speech interfaces and technology

I am broadly interested in speech technology for investigating the behavioral and neural mechanisms of speech. I am especially interested in developing applications that aid the diagnosis and therapeutic treatment of speech disorders. I am currently developing such interfaces, including the Prosodic Marionette, in collaboration with the CADLAB at Northeastern University.

Speech modeling

My previous research involved the development and experimental simulation of a computational model of speech production, the Directions Into Velocities of Articulators (DIVA) model, in the Speechlab at Boston University. I was responsible for creating a standardized graphical user interface to the model to support collaboration with speech researchers around the world. The latest version of the DIVA model is available for download.


Computer graphics

I am broadly interested in the field of computer graphics programming, specifically methods for modeling natural phenomena and for parallel processing on the GPU. Follow this link for some examples of graphics programming completed in a graduate-level computer graphics course at BU.

Computer vision

My particular interest lies in exploiting properties of the human visual system as underlying frameworks for artificial computer vision. Specifically, the space- and time-variant sampling of human retinal circuitry provides high-resolution imaging over the entire visual field. As part of my graduate coursework I developed a C/C++ program that simulates two layers of retinal/LGN integrate-and-fire neurons for space- and time-variant image processing. It takes advantage of large, quickly integrating receptive fields in the periphery and small, slowly integrating receptive fields in the fovea. The code can be found at: [presentation pdf, demo (requires OpenGL and GLUT)].
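The original program was written in C/C++; the Python sketch below conveys the same space- and time-variant idea under simplified assumptions: each leaky integrate-and-fire unit pools over a receptive field whose size grows with eccentricity, while its membrane time constant shrinks with eccentricity (slow integration at the fovea, fast in the periphery). The single-layer structure and all constants are illustrative only.

```python
# Simplified Python sketch (the original implementation was C/C++): one layer of
# space- and time-variant leaky integrate-and-fire units driven by an image.
# Receptive fields are large and fast in the periphery, small and slow at the
# fovea. All constants are illustrative.
import numpy as np

def rf_radius(ecc, fovea_px=1, slope=0.05):
    """Receptive-field radius (pixels) grows linearly with eccentricity."""
    return int(fovea_px + slope * ecc)

def time_constant(ecc, max_ecc, tau_fovea=50.0, tau_periphery=5.0):
    """Membrane time constant (ms): slow at the fovea, fast in the periphery."""
    frac = min(ecc / max_ecc, 1.0)
    return tau_fovea + frac * (tau_periphery - tau_fovea)

def step_layer(image, v, dt=1.0, threshold=0.4):
    """One time step of the LIF layer; returns updated potentials and spikes."""
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    max_ecc = np.hypot(cy, cx)
    spikes = np.zeros_like(v, dtype=bool)
    for y in range(h):
        for x in range(w):
            ecc = np.hypot(y - cy, x - cx)
            r = rf_radius(ecc)
            # Space-variant input: mean luminance over the receptive field.
            drive = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].mean()
            # Time-variant leaky integration toward the drive.
            v[y, x] += dt / time_constant(ecc, max_ecc) * (drive - v[y, x])
            if v[y, x] >= threshold:
                spikes[y, x] = True
                v[y, x] = 0.0            # reset after a spike
    return v, spikes

# Toy usage: run the layer on a random image for a few tens of milliseconds.
img = np.random.default_rng(2).random((64, 64))
v = np.zeros_like(img)
for _ in range(20):
    v, spk = step_layer(img, v)
print("spikes on final step:", int(spk.sum()))
```

In this sketch, peripheral units cross threshold within a few steps because of their short time constants, while foveal units integrate the same input more slowly, mimicking the periphery/fovea trade-off described above.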