Deep learning for quantitative image analysis of positron emission tomography

Resource type
Thesis type
(Thesis) M.Sc.
Date created
2019-03-05
Authors/Contributors
Abstract
Positron emission tomography (PET) is a popular imaging technique that produces a 3D image volume capturing functional processes within the body. In cancer studies, PET is increasingly used for diagnosis and evaluation of tumor extension, treatment planning, and disease follow-up. Although PET has several limitations, including low spatial resolution and relatively low signal-to-noise ratio, it remains the modality of choice for its high sensitivity to tracer uptake in lesions. With the adoption of PET imaging, accurate segmentation and quantification of metabolic activities, especially tumor activities, is crucial and challenging because of the large variations in the shape and intensity of tumor uptake patterns. In this thesis, we make two main contributions to automated tumor lesion detection and segmentation in PET. To automate segmentation, it is important to distinguish between normal active organs and activity due to abnormal tumor growth. In our first contribution, we propose a deep learning method to localize and detect normal active organs in 3D PET. Our method adapts the YOLO object detection deep convolutional neural network architecture to detect multiple organs in 2D slices and aggregates the results to produce semantically labeled 3D bounding boxes. We evaluate our method on 479 18F-FDG PET scans and show promising results compared to state-of-the-art organ localization methods. The second contribution addresses the challenge of creating accurate ground truth segmentation maps for training machine learning approaches for tumor delineation. We propose a fully convolutional network model to automatically delineate tumor regions in PET (i.e., to indicate the borders of cancerous lesions) while relying only on weak bounding box annotations. To achieve this, we propose a novel loss function that dynamically combines a supervised component, designed to leverage the training bounding boxes, with an unsupervised component, inspired by the Mumford-Shah piecewise constant level-set image segmentation model. The model is trained end-to-end with the proposed differentiable loss function and is validated on a public clinical dataset of 57 PET scans of head and neck tumors. Using only bounding box annotations as supervision, our model achieves results competitive with state-of-the-art supervised and semi-automatic segmentation approaches.
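
Illustration: the sketch below is not the thesis implementation; it is a minimal 2D PyTorch example of the kind of combined loss the abstract describes, assuming a sigmoid-output network, a fixed mixing weight alpha in place of the dynamic weighting proposed in the thesis, and hypothetical names (mumford_shah_loss, weakly_supervised_loss, lam, alpha). The supervised term only penalizes foreground predicted outside the annotated bounding box, and the unsupervised term is a piecewise-constant Mumford-Shah (Chan-Vese style) fitting energy with a total-variation smoothness penalty.

import torch

def mumford_shah_loss(pred, image, lam=1.0):
    # Unsupervised piecewise-constant Mumford-Shah term (Chan-Vese style).
    # pred:  soft foreground probabilities, shape (N, 1, H, W)
    # image: input PET intensities,         shape (N, 1, H, W)
    fg = pred
    bg = 1.0 - pred
    eps = 1e-6
    # Mean intensity of the image inside / outside the predicted foreground
    c1 = (image * fg).sum(dim=(2, 3), keepdim=True) / (fg.sum(dim=(2, 3), keepdim=True) + eps)
    c2 = (image * bg).sum(dim=(2, 3), keepdim=True) / (bg.sum(dim=(2, 3), keepdim=True) + eps)
    # Piecewise-constant region fitting terms
    region = ((image - c1) ** 2 * fg + (image - c2) ** 2 * bg).mean()
    # Total-variation style smoothness on the soft mask (differentiable surrogate
    # for the contour-length term of the level-set model)
    tv = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean() \
       + (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
    return region + lam * tv

def weakly_supervised_loss(pred, image, box_mask, alpha=0.5):
    # Combine a bounding-box-derived supervised term with the unsupervised term.
    # box_mask is 1 inside the annotated box and 0 outside; everything outside
    # the box is certainly background, so foreground there is penalized.
    eps = 1e-6
    bce_outside = -((1.0 - box_mask) * torch.log(1.0 - pred + eps)).mean()
    return alpha * bce_outside + (1.0 - alpha) * mumford_shah_loss(pred, image)

In this formulation only the box exterior provides reliable labels, while the Mumford-Shah term drives the delineation inside the box toward intensity-homogeneous regions; the fixed alpha here stands in for the dynamic combination described in the abstract.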
Identifier
etd20104
Copyright statement
Copyright is held by the author.
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Scholarly level
Supervisor or Senior Supervisor
Thesis advisor: Hamarneh, Ghassan
Member of collection
Model
Language
English
