EE Seminar: Unsupervised methods for joint segmentation of an image set
M.Sc. student under the supervision of Prof. Nahum Kiryati and Prof. Nir Sochen
Wednesday, June 24th, 2015 at 15:00
Room 011, Kitot Bldg., Faculty of Engineering
Segmentation of a region of interest in an image is significantly influenced by the availability of prior information. In many cases, however, such prior information is unavailable or incompatible with the image at hand. Fortunately, many modern applications can benefit from the wide availability of images with similar or closely related content, e.g., multiple overlapping viewpoints of the same object, or images obtained by different acquisition methods. As a result, there is a shift from classical prior-based segmentation to a co-segmentation approach. In the co-segmentation framework, two images are segmented simultaneously, each segmentation supporting the other, so as to exploit the strong commonality between the two images. In the first part of this work we review the current state-of-the-art co-segmentation methods and present a generalization of the pair-wise methods to an image ensemble. Furthermore, we set out a theoretical framework that draws the connection between our generalized pair-wise co-segmentation method and a probabilistic atlas-based approach.
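The pair-wise coupling described above is commonly expressed as a joint energy; the following is a generic schematic form only (the specific functionals used in this work are not stated in the abstract):

```latex
% Schematic pair-wise co-segmentation energy (illustrative, not the talk's exact model):
% phi_i is the segmentation of image I_i, E_img is a single-image
% data/regularity term, and D penalizes dissimilarity between segmentations.
E(\phi_1, \phi_2) = \sum_{i=1}^{2} E_{\mathrm{img}}(\phi_i; I_i)
                    + \mu \, D(\phi_1, \phi_2)
```

Under this reading, an ensemble generalization replaces the single coupling term with terms over all image pairs, or couples each \phi_i to a shared reference, which is what links the pair-wise formulation to a probabilistic atlas-based approach.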
In the second part of this work, we present a novel method for the co-segmentation of common regions of interest (ROIs) in multiple image volumes of possibly different qualities, or in the presence of inconsistencies. In contrast to classical atlas-based approaches, only a single annotated image is used as a prior. The joint segmentation process is supported by the evolving segmentation of each of the individual images, while accounting for their varying confidence levels. The proposed approach uses soft segmentation: the labeling uncertainty of a given voxel is determined by its spatial proximity to the estimated ROI boundaries and by the dynamically changing segmentation confidence, learned throughout the joint segmentation process. Our contribution is a robust segmentation method that advances existing co-segmentation algorithms. The proposed algorithm is supported by a theoretical derivation showing that it generalizes previous approaches. Promising results are demonstrated for the joint segmentation of neuroanatomical structures across 50 MR scans of different subjects. The proposed algorithm supports multi-modal data, where each modality reveals different features of the ROI. A significant variance in the confidence levels was observed when using the proposed method for cross-modality joint segmentation of brain tumor tissues (BraTS).
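The soft-segmentation idea, where a voxel's label uncertainty grows near the estimated ROI boundary and shrinks with segmentation confidence, can be sketched as follows. This is a minimal illustration under assumed forms: the logistic mapping, the `scale` parameter, and the confidence-weighted fusion are hypothetical choices, not the talk's actual model.

```python
import math

def soft_label(signed_dist, confidence, scale=1.0):
    """Map a voxel's signed distance to the estimated ROI boundary
    (negative inside, positive outside) to a soft membership in [0, 1].

    Higher confidence sharpens the transition at the boundary; low
    confidence keeps labels near 0.5, i.e., uncertain.
    NOTE: the logistic form is an illustrative assumption only.
    """
    return 1.0 / (1.0 + math.exp(confidence * signed_dist / scale))

def fuse(labels, confidences):
    """Confidence-weighted fusion of soft labels for one voxel across
    several images (hypothetical fusion rule for illustration)."""
    total = sum(confidences)
    return sum(l * c for l, c in zip(labels, confidences)) / total
```

For example, a voxel exactly on the boundary gets membership 0.5 regardless of confidence, while a voxel well inside the ROI approaches 1.0; fusing per-image labels with their confidences lets a high-quality volume dominate a noisy one, mirroring the varying confidence levels described above.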