- Automated Assessment of Surface Electromyography (sEMG) Data During Swallowing
- Features of Communicative vs. Transition Motion in American Sign Language
- Image Analysis to Identify Features from Videofluoroscopy during Pediatric Dysphagia
- Localized Electrical Impedance Assessment of Vocal Folds Tissues
- Noise Quantification and Compensation in Mobile Environments for Hearing Assessment
- Physiological Sensing to Detect and Inform on Stuttering Events
- Wearable Sensors for Quantifying Pediatric Chewing and Swallowing
- Web Electroencephalography (EEG) Tools for Speech Language Processing
Automated Assessment of Surface Electromyography (sEMG) Data During Swallowing
Faculty Advisors: Memorie Gosa, Communicative Disorders; Todd Freeborn, Electrical and Computer Engineering
Dysphagia, or difficulty swallowing, is frequently found in adults following significant illness or injury affecting the neurologic, digestive, or respiratory systems. It is estimated that dysphagia affects 99 million people worldwide. Individuals diagnosed with dysphagia are often taught a variety of swallowing maneuvers by clinicians to strengthen pharyngeal muscles and facilitate swallowing. However, patient progress in learning new maneuvers and strengthening muscles can be difficult to track due to a lack of quantitative feedback and the general complexity of swallowing. Surface electromyography (sEMG) is a technique with the potential to measure swallowing muscle effort because it is noninvasive and can provide biofeedback. In this project, students will be involved in designing algorithms to automatically assess different musculature movements from sEMG measures to support clinicians during evaluation and therapy of swallowing disorders.
REU Participant Role: Students will be trained on the physiology of nerve signaling, facial musculature, and how to collect sEMG measurements using clinical equipment (Synchrony Dysphagia Solutions by ACP). Students will visualize and analyze sEMG data using MATLAB, which will require the application of smoothing and noise-removal algorithms. From these datasets, students will identify features specific to swallowing events in both the time and frequency domains and develop methods to automatically assess sEMG datasets in real time and remove movement artifacts.
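As a starting point, the following is a minimal MATLAB sketch of the kind of smoothing and envelope-extraction pipeline described above. The sampling rate, filter band, window length, and threshold are illustrative assumptions rather than project-specified parameters, and `emg` stands for a raw recording vector.

```matlab
% Minimal sketch: envelope extraction from a raw sEMG recording `emg`.
% Sampling rate, filter band, window length, and threshold are
% illustrative assumptions, not validated clinical parameters.
fs = 2000;                                % assumed sampling rate (Hz)
t  = (0:numel(emg)-1)/fs;

% Band-limit to a typical sEMG energy band and remove drift.
[b, a] = butter(4, [20 450]/(fs/2), 'bandpass');
emgF   = filtfilt(b, a, emg);             % zero-phase filtering

% Full-wave rectify, then smooth with a 100 ms moving average to
% obtain a muscle-effort envelope.
env = movmean(abs(emgF), round(0.1*fs));

% Flag candidate swallow events where effort exceeds an assumed
% heuristic threshold (3x the baseline median).
events = env > 3*median(env);

plot(t, env); xlabel('Time (s)'); ylabel('sEMG envelope (a.u.)');
```

Zero-phase filtering with filtfilt avoids shifting event timing, which matters when sEMG features are later aligned with swallow onsets.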
Features of Communicative vs. Transition Motion in American Sign Language
Faculty Advisors: Evguenia Malaia, Communicative Disorders; Sevgi Zubeyde Gurbuz, Electrical and Computer Engineering
Unlike the auditory modality, the visual modality in humans is not specialized for communication. However, users of sign language are able to easily distinguish periods of communicative hand motion by their interlocutors from periods of motion for everyday tasks. What parameters of hand motion in American Sign Language trigger processing for language comprehension? If the signal is intelligible, i.e., adequate for human comprehension, what parameters of the signal are both necessary and sufficient?
Envelope and entropy analyses have proven fruitful for predicting perception in speech data and characterizing production in sign language data. For speech, time-resolved changes in the power spectral entropy of the signal have been shown to predict sonority hierarchy, syllabification, and the intelligibility of distorted speech. For sign languages, global measures of potential information (power spectral density) in two-dimensional visual frequency space distinguish sign language communication from non-communicative human motion.
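To make the speech-side measure concrete, below is a minimal MATLAB sketch of time-resolved power spectral entropy computed from a spectrogram; the signal name `x`, sampling rate `fs`, and the window/overlap settings are illustrative assumptions.

```matlab
% Minimal sketch: time-resolved power spectral entropy of a signal x
% sampled at fs Hz. Window, overlap, and FFT length are assumed values.
[s, ~, tc] = spectrogram(x, hamming(256), 192, 256, fs);
P = abs(s).^2;                      % power in each time-frequency bin
P = P ./ sum(P, 1);                 % normalize each frame to a distribution
H = -sum(P .* log2(P + eps), 1);    % Shannon entropy per frame (bits)
H = H / log2(size(P, 1));           % scale to [0, 1] by the maximum entropy

plot(tc, H); xlabel('Time (s)'); ylabel('Normalized spectral entropy');
```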
REU Participant Role: Students will investigate the question of the motion-based difference between sign motion and transitional (non-communicative) motion at different levels of temporal resolution (single sign, phrase, narrative) using 3D motion capture data marked for linguistic vs. non-linguistic components. In this project, we wish to explore how alternative motion sensors can be used to inform on the kinematic triggers for language understanding. 2D and 3D data representations will be used to visualize sensor data and to explore how these representations can support the identification of communicative motion in American Sign Language.
Image Analysis to Identify Features from Videofluoroscopy during Pediatric Dysphagia
Faculty Mentors: Dr. Yu Gan, Electrical and Computer Engineering; Dr. Memorie Gosa, Communicative Disorders
When infants present with swallowing difficulties, referred to as dysphagia, it is necessary to determine if airway compromise is a component of their feeding difficulties. Assessment for dysphagia can be accomplished using imaging instrumentation technology including videofluoroscopic swallow study (VFSS), fiberoptic endoscopic evaluation, ultrasonography, manometry, scintigraphy, and cervical auscultation. The most common instrument for dynamic assessment of oropharyngeal swallowing function in pediatric patients is the VFSS, which uses x-ray imaging to visualize what is occurring internally during swallowing events. Clinicians look for abnormalities in the oral, pharyngeal, and esophageal phases of swallowing function during recorded events, commenting on: bolus extraction, formation, and propulsion; spillover prior to the swallow; oral residue; oral transit time; timing of pharyngeal swallow initiation; strength of pharyngeal swallow; pharyngeal residue; presence of laryngeal penetration, aspiration, or nasopharyngeal backflow; pharyngeal transit time; opening of the cricopharyngeal sphincter; clearance of the bolus through the cervical esophagus; and retrograde movement through the cervical esophagus to the pyriform sinuses. Currently, clinicians must identify these features through frame-by-frame analysis, which is labor intensive, time consuming, and not realistic in a clinical setting where multiple VFSS are completed each day. Yet there are currently no image processing tools available to automate these tasks for clinicians, providing the motivation for this project.
REU Participant Role: Students will be trained on the swallowing and clinical features observed during VFSS and on image processing techniques. Using existing VFSS video data, the students will investigate methods to extract clinically relevant features from the imaging data, applying machine learning techniques [30] toward the goal of automated and reliable identification and detection of airway compromise characteristics.
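Before any machine learning, each recording must be decomposed into frames and candidate regions. The MATLAB sketch below shows one plausible preprocessing pass, assuming a hypothetical file name `vfss_study.avi` and treating the barium bolus as a dark, high-contrast region; all thresholds are illustrative, not clinically validated.

```matlab
% Minimal sketch: per-frame preprocessing and crude bolus segmentation
% for a VFSS recording. File name and thresholds are assumptions.
v = VideoReader('vfss_study.avi');
while hasFrame(v)
    frame = im2gray(im2double(readFrame(v)));
    frame = imadjust(frame);                  % stretch contrast
    bolus = imbinarize(imcomplement(frame));  % dark bolus -> foreground
    bolus = bwareaopen(bolus, 50);            % drop small speckle regions
    stats = regionprops(bolus, 'Centroid', 'Area');
    % Track the largest candidate region across frames as a crude
    % proxy for bolus position during the swallow.
    if ~isempty(stats)
        [~, k] = max([stats.Area]);
        fprintf('t = %.2f s, bolus centroid = (%.0f, %.0f)\n', ...
                v.CurrentTime, stats(k).Centroid);
    end
end
```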
Localized Electrical Impedance Assessment of Vocal Folds Tissues
Faculty Mentors: Dr. Todd Freeborn, Electrical and Computer Engineering; Dr. Memorie Gosa, Communicative Disorders
Voice problems in children can significantly interfere with psychosocial development and academic progress. These problems are most often the result of repeated vocal trauma and misuse, which over time results in physical changes to the vocal folds. Confirming the pathology of vocal lesions requires visual diagnosis using either a stroboscopic or endoscopic procedure, which can be a significant burden on young children. To reduce this burden, non-invasive methods are being investigated. One possible technique measures the electrical impedance of the localized tissues, which is expected to change as a result of tissue changes (calluses, benign growths), and requires neither visualization nor phonation. In this project, students will be involved in designing algorithms to identify vocal fold positions and features from raw impedance data collected from the localized tissue.
REU Participant Role: Students will be trained to collect and analyze electrical impedance measurements from biological tissues using impedance analyzers (Keysight E4990A, ImpediMed SFB7). Students will design experiments to collect impedance measurements from adult and pediatric populations during abduction and adduction of the vocal folds. Using MATLAB, students will visualize the datasets, determine normative ranges, identify features specific to vocal fold positions and potential breathing, movement, and electrode artifacts, and develop methods to automate these analyses.
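One common way to reduce a measured impedance spectrum to a handful of tissue-sensitive parameters is to fit a single-dispersion Cole model. The MATLAB sketch below does this with a simple least-squares search, assuming frequency and complex-impedance vectors `f` and `Zmeas` from the analyzer; the initial guesses are illustrative.

```matlab
% Minimal sketch: fitting a single-dispersion Cole impedance model
%   Z(w) = Rinf + (R0 - Rinf) ./ (1 + (1i*w*tau).^alpha)
% to a measured spectrum. Assumes vectors f (Hz) and Zmeas (complex ohms);
% initial guesses are illustrative assumptions.
w    = 2*pi*f(:);
cole = @(p, w) p(2) + (p(1) - p(2)) ./ (1 + (1i*w*p(3)).^p(4));

% Minimize the complex residual norm over p = [R0, Rinf, tau, alpha].
err  = @(p) norm(cole(p, w) - Zmeas(:));
p0   = [max(real(Zmeas)), min(real(Zmeas)), 1e-6, 0.8];
pFit = fminsearch(err, p0);

fprintf('R0 = %.1f ohm, Rinf = %.1f ohm, tau = %.2e s, alpha = %.2f\n', pFit);
```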
Noise Quantification and Compensation in Mobile Environments for Hearing Assessment
Faculty Mentors: Dr. Steve Shepard, Mechanical Engineering; Dr. Marcia Hay-McCutcheon, Communicative Disorders
The Hear Here Alabama research project examines issues associated with hearing and access to hearing healthcare in West Central and South Alabama through its mobile audiology clinic. The clinic is equipped with sound booths and audiological testing equipment that can assess hearing sensitivity in a variety of community settings. To our knowledge, no other university in the country has a mobile audiology clinic that is capable of conducting research in the field and able to provide clinical services when necessary. The sound booths provide a quiet testing environment in which to make accurate measurements of hearing sensitivity; without such an environment, the outcomes of audiological measurements might not reflect true hearing abilities. Excessive background noise masks the very soft sounds presented to a client or interferes with physiological measurements by obscuring responses from the neural fibers associated with hearing. Too much ambient noise could be present when the heating and air conditioning unit of the truck is operating or when the truck is parked in environments with high electrical activity. The aim of this project is to develop systems to quantify and report noise levels and to develop methods to filter specific frequencies from the listening environment to prevent potentially inaccurate measurements of hearing sensitivity.
REU Participant Role: Students will be trained to conduct experiments to quantify ambient noise levels, characterize how those levels affect the accurate measurement of hearing acuity, and investigate methods to modify environments to prevent inaccurate assessments of hearing. They will learn about standards for background noise, typical and atypical behavioral and physiological measures of hearing, instrumentation for acoustic measurements, and software tools to process audio signals and identify critical features. Ultimately, students will be tasked with developing systems to effectively monitor, report, and maintain ambient noise levels for the accurate measurement of hearing sensitivity in communities where these services are scarce or non-existent.
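Since permissible ambient noise for audiometry is specified per octave band (e.g., in ANSI S3.1), a natural first analysis is band-level monitoring. Below is a minimal MATLAB sketch, assuming a calibrated booth recording `x` in pascals at an assumed 48 kHz sampling rate; the band centers are standard, but the filter order and calibration are illustrative assumptions.

```matlab
% Minimal sketch: octave-band sound pressure levels of a calibrated booth
% recording x (Pa), for comparison against maximum permissible ambient
% noise levels (e.g., ANSI S3.1). Sampling rate is an assumption.
fs   = 48000;                               % assumed sampling rate (Hz)
fc   = [125 250 500 1000 2000 4000 8000];   % octave-band centers (Hz)
pRef = 20e-6;                               % reference pressure (Pa)
Lp   = zeros(size(fc));
for k = 1:numel(fc)
    edges  = fc(k) * [1/sqrt(2), sqrt(2)];  % octave band edges
    [b, a] = butter(4, edges/(fs/2), 'bandpass');
    xb     = filtfilt(b, a, x);
    Lp(k)  = 20*log10(rms(xb)/pRef);        % band SPL, dB re 20 uPa
    fprintf('%5d Hz band: %5.1f dB SPL\n', fc(k), Lp(k));
end
```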
Physiological Sensing to Detect and Inform on Stuttering Events
Faculty Mentors: Dr. Ryan Taylor, Electrical and Computer Engineering; Dr. Anthony Buhr, Communicative Disorders
Stuttering is a disorder affecting approximately 1% of the general population, millions of people worldwide, characterized by involuntary prolongations, repetitions, and pauses that disrupt the flow of speech. Its negative effects include disruptions to the quality of life and mental health of people who stutter. Therapy goals for people who stutter include reducing the frequency of stuttering, decreasing the tension and struggle during moments of stuttering, and using effective communication strategies. There are also assistive hearing technologies that delay and alter the frequency of one's voice, which have been shown to be effective at reducing stuttering. Dr. Buhr is currently investigating methods to monitor stuttered speech and physiological markers during acquisition of taboo words using muscle activation, heart rate, and galvanic skin response. Typically, for people who stutter, muscle tension in the face and neck area can impede the forward flow of speech. The aim of this work is to investigate technologies for an assistive biofeedback device that helps people who stutter become more aware of what they are doing when they stutter and thus make changes toward moving through a stuttering event with less tension.
REU Participant Role: Students will be trained on the background and impacts of stuttering and on how to collect and visualize physiological measurements (sEMG, galvanic skin response, heart rate) using a Biopac data acquisition system. Students will design tests to collect physiological data (e.g., which muscle groups to measure, where to place electrodes) during speech-related tasks and analyze the collected data using MATLAB to identify features indicating the onset of a stuttering event (e.g., muscle tensing, changes in skin response or heart rate). Using these features, students will design and evaluate low-cost audio, visual, and haptic feedback systems for informing people who stutter of their physical state.
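A simple form such a feedback loop could take is a sliding-window RMS tension monitor that emits an alert tone when muscle activity crosses a threshold. The MATLAB sketch below assumes a hypothetical getNextSample() function standing in for the acquisition interface; the sampling rate and threshold are illustrative.

```matlab
% Minimal sketch: threshold-based audio feedback from a streaming sEMG
% channel. getNextSample() is a hypothetical placeholder for the data
% acquisition interface; fs and the threshold are assumed values.
fs        = 1000;
winLen    = round(0.25*fs);          % 250 ms analysis window
buffer    = zeros(winLen, 1);
threshold = 0.15;                    % assumed tension level (mV RMS)

while true
    buffer = [buffer(2:end); getNextSample()];      % slide the window
    if rms(buffer) > threshold
        sound(sin(2*pi*440*(0:1/8192:0.1)), 8192);  % brief alert tone
        pause(0.25);                 % avoid overlapping alert tones
    end
end
```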
Wearable Sensors for Quantifying Pediatric Chewing and Swallowing
Faculty Mentors: Dr. Edward Sazonov, Electrical and Computer Engineering; Dr. Memorie Gosa, Communicative Disorders
Typically developing infants transition from a diet composed entirely of liquids, in the form of breastmilk or formula, to a diet rich with a variety of textured foods in just two short years. A diverse diet rich in macro- and micronutrients is essential for appropriate linear and neurological growth during childhood. Dysphagia is a skill-based disorder involving disturbance of the usually seamless swallowing sequence that interrupts the safety or adequacy of oral intake. Dysphagia can have a serious impact on pediatric growth and maturation and therefore must be diagnosed accurately and treated aggressively. Highly textured foods, such as fruits, vegetables, and meats, require mature chewing skills, yet to date very little is known about the developmental steps required to realize a mature chewing pattern. Additionally, assessment procedures used to distinguish normal chewing from disordered chewing (dysphagia) rely on subjective judgements made while observing a child eat. The Automatic Ingestion Monitor (AIM), which has been developed and previously validated, provides accurate counts of chews; specifically, it can measure chewing rate and number of chews per bolus. We propose that these objective measures of chewing rate and number of chews per bolus may be the first measures to distinguish between those with normal and disordered chewing abilities.
REU Participant Role: Students will be trained on the background and mechanics of chewing and on sensing methods for detecting these motions. Students will be trained on the use of the AIM for data collection and will design experiments to collect data from jaw motion in the vertical and horizontal planes. Further, they will visualize the collected sensor data and identify features that may be used to quantify the mechanics of chewing motions.
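As an illustration of the kind of feature extraction involved, the MATLAB sketch below counts chews and estimates chewing rate by peak detection on a jaw-motion signal; the signal name `x`, sampling rate `fs`, and the peak-spacing and prominence values are illustrative assumptions, not AIM parameters.

```matlab
% Minimal sketch: counting chews from a jaw-motion sensor signal x
% sampled at fs Hz via peak detection. All parameters are assumptions.
xs = movmean(x, round(0.05*fs));              % light smoothing
[pks, locs] = findpeaks(xs, ...
    'MinPeakDistance', round(0.3*fs), ...     % chews assumed >= 300 ms apart
    'MinPeakProminence', 0.2*std(xs));        % reject low-amplitude wiggles

nChews      = numel(pks);
durationSec = numel(x)/fs;
chewRate    = nChews / (durationSec/60);      % chews per minute
fprintf('%d chews, %.1f chews/min\n', nChews, chewRate);
```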
Web Electroencephalography (EEG) Tools for Speech Language Processing
Faculty Mentors: Dr. Spyridoula Cheimariou, Communicative Disorders; Dr. Chris Crawford, Computer Science
Understanding language is relatively effortless and extremely fast. However, for adults with neurogenic language disorders, especially aphasia, understanding language becomes a very difficult task. Recent research is investigating how brain signals during language comprehension change after a stroke and with healthy aging. Previous research by Dr. Cheimariou has produced a large set of EEG data for younger (N=26) and older (N=27) adults and individuals with aphasia (N=5) [23], [24]. However, while recent advancements in brain-computer interface technology have led to an increase in computer systems that collect EEG data for research, there is a lack of software solutions that can be conveniently deployed for end-users or easily used by researchers from different disciplines. This project will build a framework around the language comprehension EEG data, expanding on the current work of Dr. Crawford from the Department of Computer Science [18], to create a web-based JavaScript framework to support speech pathology students, faculty, and patients that is portable, modular, easy to use, and extensible.
REU Participant Role: Students will be trained on the background of EEG measurements, brain signals during language comprehension, tools to collect EEG data (BrainVision ActiChamp), current methods to analyze EEG data, and current JavaScript frameworks for EEG data. Students will work to understand the specific research and clinical needs of speech pathologists toward creating a system that can categorize different brain signals according to population and apply common signal processing algorithms (e.g., filtering, smoothing, artifact reduction) and feature extraction (e.g., coherence, N400). Students will also analyze the performance of this framework across multiple platforms and its usability for different end-users.
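Although the target framework is JavaScript, the underlying feature-extraction step is language-independent. The MATLAB sketch below illustrates epoch averaging and N400 mean amplitude, assuming a continuous channel `eeg` (in microvolts), a sampling rate `fs`, and stimulus-onset sample indices `onsets`; the epoch limits and baseline length are assumed choices, while the 300-500 ms window follows the conventional N400 definition.

```matlab
% Minimal sketch: epoch averaging and N400 mean amplitude from a
% continuous EEG channel `eeg` (uV) at fs Hz, with stimulus-onset
% sample indices `onsets` assumed to lie at least `pre` samples in.
fs     = 500;                        % assumed sampling rate (Hz)
pre    = round(0.2*fs);              % 200 ms pre-stimulus baseline
post   = round(0.8*fs);              % 800 ms post-stimulus
epochs = zeros(numel(onsets), pre+post+1);
for k = 1:numel(onsets)
    seg          = eeg(onsets(k)-pre : onsets(k)+post);
    epochs(k, :) = seg - mean(seg(1:pre));   % baseline correction
end
erp  = mean(epochs, 1);                        % event-related potential
win  = pre + 1 + (round(0.3*fs):round(0.5*fs)); % 300-500 ms post-stimulus
n400 = mean(erp(win));
fprintf('Mean N400 amplitude: %.2f uV\n', n400);
```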