Aversive sound classification and filtration using deep neural networks

Resource type
Thesis type
(Thesis) Ph.D.
Date created
2022-09-29
Authors/Contributors
Abstract
Decreased sound tolerance (DST) is common among children with Autism Spectrum Disorder (ASD). When exposed to specific aversive sounds at school, these children may become highly distressed and react with behaviours such as covering their ears, yelling, screaming, or running out of the room to escape the sound. Schools typically accommodate DST by letting students wear earplugs or earmuffs, or by allowing them to take breaks in a quiet area. However, most wearable devices (e.g., earmuffs, earplugs, noise-cancelling headphones) block or attenuate all sounds indiscriminately, including speech, and leaving the classroom to escape a noise disrupts learning and social interaction. Existing strategies therefore tend to interfere with the child's full participation in class and other activities. This thesis aims to develop an intervention tool that selectively filters out aversive sounds for children with ASD. Ideally, this tool will attenuate unwanted sounds (e.g., dog barking, sirens, jackhammers) while allowing other sounds (e.g., the teacher's voice) to be heard. In this thesis, Deep Neural Network (DNN) methods and signal processing techniques are employed to intelligently identify aversive sounds in the environment, attenuate them in the ambient sound, and pass the remaining sound to the user. To identify aversive sounds, a combination of a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN) is used. After an aversive sound is identified, another part of this thesis is dedicated to filtering it out. A DNN-based learning framework is proposed to address the audio-denoising problem for real-time applications. The proposed method can suppress stationary noise, such as engines and air conditioners, as well as non-stationary, dynamic noise, such as dog barking, sirens, and jackhammers.
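The identification stage described above pairs a CNN front end (spectral feature extraction) with an RNN that integrates those features over time. As an illustrative sketch only (the layer sizes, random weights, and four-class output head below are invented for the example and are not taken from the thesis), a CNN-plus-RNN forward pass over a log-mel-style spectrogram might look like:

```python
import numpy as np

class TinyCRNN:
    """Illustrative CNN->RNN classifier over a (n_mels, n_frames) spectrogram.
    All sizes and weights are arbitrary stand-ins, not the thesis's model."""

    def __init__(self, n_mels=40, n_filters=4, hidden=16, n_classes=4, seed=0):
        rng = np.random.default_rng(seed)
        self.kernels = rng.normal(0.0, 0.1, (n_filters, 3, 3))          # 3x3 conv filters
        self.Wx = rng.normal(0.0, 0.1, (hidden, n_filters * n_mels))    # input->hidden
        self.Wh = rng.normal(0.0, 0.1, (hidden, hidden))                # hidden->hidden
        self.Wo = rng.normal(0.0, 0.1, (n_classes, hidden))             # hidden->classes

    def _conv2d_same(self, x, k):
        # naive 'same' 2D cross-correlation with zero padding
        p = np.pad(x, 1)
        out = np.zeros_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
        return out

    def forward(self, spec):
        # CNN stage: one ReLU feature map per filter, shape (F, n_mels, T)
        maps = np.stack([np.maximum(self._conv2d_same(spec, k), 0.0)
                         for k in self.kernels])
        # RNN stage: consume one time frame at a time
        h = np.zeros(self.Wh.shape[0])
        for t in range(spec.shape[1]):
            x_t = maps[:, :, t].ravel()                 # flatten filters x mel bins
            h = np.tanh(self.Wx @ x_t + self.Wh @ h)
        # classifier head: softmax over aversive-sound classes
        logits = self.Wo @ h
        e = np.exp(logits - logits.max())
        return e / e.sum()

model = TinyCRNN()
spec = np.abs(np.random.default_rng(1).normal(size=(40, 20)))  # fake log-mel frames
probs = model.forward(spec)
```

In practice the convolutional and recurrent weights would be trained on labelled recordings of the target aversive sounds; the softmax output then gives a per-class probability for each incoming audio segment.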
Further, a Graphical User Interface (GUI) is designed to combine the identification and filtration components of the intervention. The user-friendly GUI enables users to initiate specific tasks so that they can hear their surroundings without disturbance. To evaluate the performance of the proposed intervention technique, several testing sessions are conducted with autistic individuals.
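The filtration stage relies on a learned DNN to suppress aversive components while passing the rest of the sound through. As a rough hand-coded stand-in for that learned mask (the Wiener-style gain, the fixed noise estimate, and the gain floor below are assumptions for the example, not the thesis's method), per-bin time-frequency attenuation can be sketched as:

```python
import numpy as np

def denoise_frames(noisy_mag, noise_est, floor=0.05):
    """Wiener-style per-bin gain applied to STFT magnitudes.
    A hand-coded stand-in for the thesis's learned DNN mask."""
    gain = noisy_mag ** 2 / (noisy_mag ** 2 + noise_est ** 2 + 1e-12)
    # floor the gain so residual background is attenuated, not fully muted
    return np.maximum(gain, floor) * noisy_mag

rng = np.random.default_rng(0)
noisy = np.abs(rng.normal(size=(257, 50)))   # fake STFT magnitudes (bins x frames)
noise = 0.3 * np.ones_like(noisy)            # assumed noise-magnitude estimate
clean_est = denoise_frames(noisy, noise)
```

A DNN-based filter replaces the fixed gain rule with a mask predicted per frame from the audio itself, which is what allows it to track non-stationary sources such as barking or sirens in real time.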
Document
Extent
87 pages.
Identifier
etd22181
Copyright statement
Copyright is held by the author(s).
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Supervisor or Senior Supervisor
Thesis advisor: Arzanpour, Siamak
Thesis advisor: Birmingham, Elina
Language
English
Download file
etd22181.pdf (3.4 MB)
