Resource type
Thesis
Thesis type
(Thesis) Ph.D.
Date created
2023-12-18
Authors/Contributors
Author: Gong, Yu
Abstract
The recent success of deep neural networks largely relies on their significant capacity to learn meaningful representations. A large number of parameters store the experience learned from the training data, while the representations are the hidden-layer activations produced in direct response to new data. Nonetheless, these highly performant models are sensitive to shifts in the data distribution or changes in the task. Deep neural networks are also increasingly deployed in settings where computational constraints are crucial, yet reducing model size or numerical precision can significantly degrade the quality of representations. It is therefore vital to explore deep representation learning further in practical scenarios. In this dissertation, we first discuss and compare different methods designed to regularize deep representations, and then introduce our proposed approaches to improve conventional deep representation learning in several practical scenarios. First, we focus on improving probabilistic representations learned from incomplete heterogeneous data. Second, we present the challenge of learning from imbalanced data and offer a solution that regularizes and acquires more effective representations. Third, we address how to fully exploit the representations of deep neural networks under computational constraints. In summary, we provide solutions that regularize and enhance traditional deep representation learning in the face of changes in the data distribution or model settings.
Document
Extent
118 pages.
Identifier
etd22857
Copyright statement
Copyright is held by the author(s).
Supervisor or Senior Supervisor
Thesis advisor: Mori, Greg
Language
English
Member of collection
| Download file | Size |
|---|---|
| etd22857.pdf | 39.25 MB |