Kromp, F. (2019). Machine learning for tissue image analysis [Dissertation, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/78816
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/78816
-
dc.description.abstract
Microscopy-based bioimage analysis workflows allow biologists and pathologists to analyze cellular processes at the single-cell level, thereby taking advantage of the statistical power of analyzing thousands of cells. Thus, even minor biological changes within the analyzed cell population can be quantified. Current methods to analyze fluorescence-based bioimages are limited or lack accuracy and generalizability: i) methods to accurately segment nuclei or other cell features across different tissues are lacking, ii) methods to accurately classify different cell populations are insufficient, and iii) strategies to describe the topographic distribution of cells within a tissue are missing. To overcome these limitations, a framework has been developed that allows efficient annotation of fluorescence images. Using this framework, fluorescence images were annotated by experts and used to train, test, and evaluate four deep neural networks (Mask R-CNN, U-Net, U-Net ResNet, DeepCell) for image segmentation. To identify and classify cell populations with high accuracy, a method based on the identification of outliers in the data was developed and evaluated. To analyze intra-tumor heterogeneity, an image analysis workflow was developed to visualize and quantify multiple genetic traits. Our results show that the developed image annotation framework allows fast and efficient annotation of fluorescence images. The Mask R-CNN architecture segments nuclear images of different preparations with precision and recall values of 0.78, while the U-Net architecture segments spot-shaped, subcellular structures with an F1 score of 0.70. The proposed method for classifying cell populations achieves results comparable to other state-of-the-art single-cell analyses, with the added advantage of microscopic analysis of each single cell. All developments have been integrated into an image analysis workflow that allows biologists and pathologists to visualize and quantify intra-tumor heterogeneity. The presented research results i) demonstrate the applicability and usefulness of deep neural networks for the segmentation of fluorescence-based bioimages, ii) will contribute to further improvement of fluorescence image segmentation by means of the generated data sets, and iii) show that, with the methods described here, biology experts can determine the influence of experimental factors on the cell phenotype and study genetic and phenotypic intra-tumor heterogeneity to better understand the phenotype/genotype and spatial distribution of cells in tissue.
en
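The abstract reports object-level precision, recall, and F1 scores for the segmentation results. As a minimal sketch of how such scores can be computed for nuclear instance segmentation, the following Python snippet matches predicted nuclei to ground-truth nuclei at an assumed IoU threshold of 0.5; the greedy matching and the threshold are illustrative assumptions, not the dissertation's exact evaluation protocol.

```python
# Hedged sketch: object-level precision/recall/F1 for nuclear instance
# segmentation. Predicted and ground-truth nuclei are matched greedily at an
# assumed IoU >= 0.5 (illustrative choice, not necessarily the thesis protocol).
import numpy as np

def instance_scores(gt_labels: np.ndarray, pred_labels: np.ndarray,
                    iou_threshold: float = 0.5):
    """Return (precision, recall, f1) for two integer label masks (0 = background)."""
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    matched_gt, matched_pred = set(), set()
    for g in gt_ids:
        g_mask = gt_labels == g
        # consider only predicted objects that overlap this ground-truth nucleus
        for p in np.unique(pred_labels[g_mask]):
            if p == 0 or p in matched_pred:
                continue
            p_mask = pred_labels == p
            iou = np.logical_and(g_mask, p_mask).sum() / np.logical_or(g_mask, p_mask).sum()
            if iou >= iou_threshold:
                matched_gt.add(g)
                matched_pred.add(p)
                break
    tp = len(matched_gt)
    precision = tp / len(pred_ids) if pred_ids else 0.0
    recall = tp / len(gt_ids) if gt_ids else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy usage: two 6x6 label masks with two nuclei each.
gt = np.zeros((6, 6), dtype=int)
gt[0:3, 0:3] = 1
gt[3:6, 3:6] = 2
pred = np.zeros((6, 6), dtype=int)
pred[0:3, 0:2] = 1   # partially overlaps nucleus 1 (IoU ~ 0.67)
pred[3:6, 3:6] = 2   # matches nucleus 2 exactly
print(instance_scores(gt, pred))
```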
dc.format
xvii, 168 pages
-
dc.language
English
-
dc.language.iso
en
-
dc.subject
machine learning
en
dc.subject
deep neural networks
en
dc.subject
nuclear image segmentation
en
dc.subject
quantitative bioimage analysis
en
dc.subject
fluorescence microscopy
en
dc.subject
image annotation
en
dc.title
Machine learning for tissue image analysis
en
dc.type
Thesis
en
dc.type
Hochschulschrift
de
dc.contributor.affiliation
TU Wien, Austria
-
dc.publisher.place
Vienna
-
tuw.thesisinformation
Technische Universität Wien
-
tuw.publication.orgunit
E194 - Institut für Information Systems Engineering