E186 - Institut für Computergraphik und Algorithmen
-
Date (published):
2013
-
Number of Pages:
102
-
Abstract:
To derive neuronal structures from brain tissue image stacks, preferably automatically, the research field of computational neuroanatomy relies on computer-assisted techniques such as visualization, machine learning, and analysis. Image acquisition is based on transmission electron microscopy (TEM), which provides a resolution high enough (less than 5 nm per pixel) to identify the relevant structures in brain tissue images. To obtain an image stack (or volume), the tissue samples are sectioned with a diamond knife into slices of 40 nm thickness; this approach is called serial-section transmission electron microscopy (ssTEM). Manual segmentation of these high-resolution, low-contrast, and artifact-afflicted images is impracticable due to the sheer data size alone: a cubic centimeter of tissue yields roughly 200,000 images of 2,000,000 x 2,000,000 pixels each. Automatic segmentation, however, is error-prone due to the small pixel value range (8 bits per pixel) and the diverse artifacts introduced by mechanical sectioning of the tissue samples. In addition, biological samples generally contain densely packed structures, which leads to a non-uniform background and introduces further artifacts. It is therefore important to quantify, visualize, and reproduce the automatic segmentation results interactively with as few user interactions as possible.
This thesis builds on the membrane segmentation proposed by Kaynig-Fittkau [2011], which produces two outputs for ssTEM brain tissue images: (a) a per-pixel certainty value (with respect to the statistical model trained on user-selected cell membrane pixels) that states how certain the underlying model is that the pixel belongs to a membrane, and (b) after an optimization step, the resulting edges that represent the membranes. In this work we present a visualization-assisted method to explore the parameters of the segmentation. The aim is to interactively mark the regions where the segmentation fails, in order to structure the post-processing or re-segmentation and to proofread the segmentation results. This is achieved by weighting the membrane pixels with the uncertainty values resulting from the segmentation process. Starting from this, we employ user knowledge once more to decide which data, and in what form, should be fed to the random forest classifier. The aim is to improve the segmentation results by improving segmentation quality, increasing segmentation speed, or reducing the memory consumption of the segmentation. In this regard we focus especially on visualizations of uncertainty, errors, and multi-modal data. Interaction techniques are used explicitly in those cases where we expect the highest gain at the end of the exploration.
We show the effectiveness of the proposed methods using the freely available ssTEM brain tissue dataset of the Drosophila fly. Because we lack expert knowledge in the field of neuroanatomy, we base our assumptions and methods on the ground truth segmentations provided with the Drosophila brain tissue dataset. We carry out five experiments with six feature sets and three training sets for membrane segmentation. The experiments indicate that creating new features with so-called aspect windows helps improve prediction performance through lower prediction error and higher precision.
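The following is a minimal sketch, not the thesis's actual pipeline, of the general idea described above: a random forest is trained on user-labelled membrane pixels, its per-pixel class probability serves as the certainty value, and an uncertainty weight derived from that probability can be used to highlight regions where the segmentation is likely to fail. The function names, the feature layout, and the uncertainty formula are illustrative assumptions; only the use of a random forest classifier with per-pixel probabilities follows the abstract.

```python
# Minimal sketch (assumptions, not the thesis's implementation):
# train a random forest on labelled membrane pixels, derive a per-pixel
# certainty map, and turn it into an uncertainty weight for visualization.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_membrane_classifier(features, labels, n_trees=100):
    """features: (n_pixels, n_features) array; labels: 1 = membrane, 0 = background."""
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    clf.fit(features, labels)
    return clf

def membrane_certainty_map(clf, features, image_shape):
    """Per-pixel certainty in [0, 1] that a pixel belongs to the membrane class."""
    proba = clf.predict_proba(features)[:, 1]   # probability of the membrane class
    return proba.reshape(image_shape)

def uncertainty_weight(certainty, threshold=0.5):
    """Weight pixels by how close the model's certainty is to the decision
    threshold: 1.0 at the threshold (most uncertain), 0.0 at certainty 0 or 1.
    Such a map can be overlaid on the image to mark candidate failure regions."""
    return np.clip(1.0 - 2.0 * np.abs(certainty - threshold), 0.0, 1.0)
```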
Furthermore, the visualization-assisted feature exploration presented in this work reduces the original feature set by 42%, at the cost of a slightly higher (0.07%) prediction error. The reduced feature set also leads to shorter processing times and lower memory requirements, which is especially important for large ssTEM brain tissue images.
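The thesis arrives at the reduced feature set through visualization-assisted exploration by the user; as a rough programmatic analogue only, the sketch below drops the least informative features according to random forest feature importances and re-measures the prediction error, so the quality/speed/memory trade-off of a 42% reduction can be quantified. All names and the selection criterion are assumptions for illustration, not the method used in the thesis.

```python
# Illustrative analogue (assumption): approximate a reduction of the feature
# set by keeping only the most important features of a trained random forest,
# then compare the prediction error before and after the reduction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reduce_feature_set(X_train, y_train, X_test, y_test, keep_fraction=0.58):
    """Keep e.g. 58% of the features (a 42% reduction) and report the error change."""
    full = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_train, y_train)
    full_err = 1.0 - full.score(X_test, y_test)

    n_keep = max(1, int(keep_fraction * X_train.shape[1]))
    keep = np.argsort(full.feature_importances_)[::-1][:n_keep]  # most important first

    reduced = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    reduced.fit(X_train[:, keep], y_train)
    reduced_err = 1.0 - reduced.score(X_test[:, keep], y_test)
    return keep, full_err, reduced_err
```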
Additional information:
Differing title according to the author's own translation. Summary in German.