
Representation and synthesis of 3D biomedical visual data

Resource type
Thesis type
(Thesis) M.Sc.
Date created
2024-06-10
Authors/Contributors
Abstract
The success of deep learning (DL) on a wide range of computer vision tasks can be attributed to the availability of large-scale annotated datasets, such as ImageNet and MS-COCO. However, such large-scale, densely annotated datasets are lacking for biomedical visual computing tasks, e.g., classification and segmentation, and their curation is expensive, tedious, and time-consuming. In this thesis, we focus on developing methods for the accurate and efficient representation and synthesis of 3D biomedical visual data, in particular human skin and anatomical trees.

In the first contribution, we propose a framework, coined DermSynth3D, to synthesize "in-the-wild" image datasets of skin lesions using 3D scans of human meshes. Specifically, DermSynth3D blends skin disease patterns onto 3D textured meshes using a differentiable renderer, producing diverse, photo-realistic 2D dermatological images that mimic "in-the-wild" acquisitions, along with the corresponding dense annotations: semantic segmentation masks for skin, skin conditions, and body parts; bounding boxes around lesions; depth maps; and other 3D scene parameters, such as camera position and lighting conditions. We demonstrate the effectiveness of the synthesized data by training DL models on it and evaluating them on various dermatology tasks, such as segmentation and detection, using real 2D dermatological images.

In the second contribution, coined TrIND (pronounced "Trendy"), we address the challenge of efficiently and accurately representing anatomical trees, as well as the lack of datasets for such structures. More precisely, we first use implicit neural representations (INRs) to capture the intricate geometry and topology of anatomical trees, and then perform denoising diffusion on the weight space of the pre-optimized INRs to generate novel trees from the INRs sampled during the reverse diffusion process. TrIND enables high-fidelity tree reconstruction at any desired resolution with compact storage, is versatile across anatomical sites and tree complexities, and generates novel, plausible tree structures. Through extensive evaluation, we showcase the effectiveness of both DermSynth3D and TrIND, underscoring their potential to advance dermatological image analysis and vascular modeling for clinical diagnosis and surgical treatment planning.
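The core blending idea behind DermSynth3D can be illustrated with a minimal alpha-compositing sketch. This operates on a 2D texture image in plain NumPy; the actual pipeline described in the abstract works on 3D textured meshes through a differentiable renderer, and the function and variable names below are illustrative, not taken from the thesis:

```python
import numpy as np

def blend_lesion(skin_texture, lesion_rgb, alpha_mask):
    """Alpha-composite a lesion patch onto a skin texture.

    skin_texture: (H, W, 3) float array in [0, 1]
    lesion_rgb:   (H, W, 3) float array in [0, 1]
    alpha_mask:   (H, W) float array in [0, 1]; 1 = lesion, 0 = skin
    """
    a = alpha_mask[..., None]  # broadcast the mask over RGB channels
    return a * lesion_rgb + (1.0 - a) * skin_texture

# Toy example: a uniform skin tone with a square lesion region.
H = W = 8
skin = np.full((H, W, 3), 0.8)
lesion = np.full((H, W, 3), 0.3)
mask = np.zeros((H, W))
mask[2:6, 2:6] = 1.0  # the lesion occupies a 4x4 patch

blended = blend_lesion(skin, lesion, mask)
# Inside the masked patch the texture takes the lesion colour;
# outside the patch the skin texture is unchanged.
```

A fractional `alpha_mask` gives soft lesion borders; the same mask, reprojected through the renderer, directly yields the dense segmentation labels the abstract mentions.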
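The claim that an INR enables "reconstruction at any desired resolution" can be sketched with a toy stand-in: here an analytic signed-distance function for a single straight cylindrical "branch" plays the role that a trained MLP plays in TrIND. All names are illustrative; the point is only that an implicit function can be queried on a grid of any resolution after the fact:

```python
import numpy as np

def vessel_sdf(points, radius=0.1):
    """Toy implicit function: signed distance to a vertical cylinder
    (one straight 'branch'); negative inside, positive outside.
    In TrIND this role is played by a trained MLP, not a formula."""
    # Distance from the z-axis, minus the branch radius.
    return np.sqrt(points[:, 0] ** 2 + points[:, 1] ** 2) - radius

def sample_occupancy(sdf, resolution):
    """Evaluate the implicit function on a grid of the requested size;
    the representation itself is resolution-free."""
    xs = np.linspace(-0.5, 0.5, resolution)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
    inside = sdf(pts) < 0.0
    return inside.reshape(resolution, resolution, resolution)

coarse = sample_occupancy(vessel_sdf, 16)  # coarse voxelization
fine = sample_occupancy(vessel_sdf, 64)    # same model, finer grid
```

The compact-storage argument follows the same logic: only the function's parameters are stored (for an MLP, its weights), not any fixed-resolution voxel grid.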
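The "denoising diffusion on the weight space" step can be made concrete with a minimal sketch of the standard DDPM forward (noising) process applied to a flattened weight vector; the learned reverse (denoising) network, which is what actually generates novel trees, is omitted here. The schedule values and vector size are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a flattened vector of pre-optimized INR weights.
w0 = rng.normal(size=256)

# Linear variance schedule, as in standard DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(w0, t, rng):
    """Forward diffusion: noise the weights to step t in closed form,
    w_t = sqrt(abar_t) * w_0 + sqrt(1 - abar_t) * eps."""
    eps = rng.normal(size=w0.shape)
    return np.sqrt(alphas_bar[t]) * w0 + np.sqrt(1.0 - alphas_bar[t]) * eps

w_mid = q_sample(w0, T // 2, rng)  # partially noised weights
w_end = q_sample(w0, T - 1, rng)   # nearly pure Gaussian noise
```

Generation then runs this process in reverse: starting from Gaussian noise, the denoiser produces a weight vector, which is loaded into the INR architecture and queried as an implicit function to reconstruct a novel tree.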
Document
Extent
65 pages.
Identifier
etd23120
Copyright statement
Copyright is held by the author(s).
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Supervisor or Senior Supervisor
Thesis advisor: Hamarneh, Ghassan
Language
English
Member of collection
Download file
etd23120.pdf (27.78 MB)
