Patient-specific anatomical structures form the basis for medical models and the simulations built on them. The required three-dimensional structures are extracted from medical image data; collecting, structuring, and pre-processing suitable data sets is the prerequisite for this.

The research focuses on interactive, semi-automatic machine-learning methods with the highest possible degree of automation, in order to keep the effort for medical domain experts as low as possible and, at the same time, to meet the time constraints of applications such as “Tomorrows Patient” by means of an adequate workflow and user interface.

The success of machine-learning methods for registration and segmentation depends largely on the amount of available training data. Especially in medical applications, data with corresponding ground truth is often missing or inaccessible for data-protection reasons. This research area therefore investigates methods that allow a simple (interactive) creation of ground truth.

Remedial approaches include One-Shot Learning, which aims to generate a large amount of training data from a single data set by means of augmentation; Transfer Learning, in which knowledge models already learned are reused for similar problems; and Domain Adaptation, which adapts learned models to new data distributions. The augmentation idea behind one-shot learning is sketched below.
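A minimal sketch of the augmentation idea, assuming a single annotated volume given as a NumPy image/label array pair; the function names, parameter ranges, and sample count are illustrative assumptions, not part of the project:

```python
# Expand one annotated volume into many training samples by applying
# the same random rigid transformation to image and label.
import numpy as np
from scipy import ndimage


def augment_pair(image, label, rng):
    """Apply one random rotation and shift consistently to image and label."""
    angle = rng.uniform(-10.0, 10.0)          # rotation in degrees (assumed range)
    offset = rng.uniform(-5.0, 5.0, size=3)   # shift in voxels (assumed range)

    aug_img = ndimage.rotate(image, angle, axes=(1, 2), reshape=False, order=1)
    aug_lbl = ndimage.rotate(label, angle, axes=(1, 2), reshape=False, order=0)

    aug_img = ndimage.shift(aug_img, offset, order=1)
    aug_lbl = ndimage.shift(aug_lbl, offset, order=0)
    return aug_img, aug_lbl


def make_training_set(image, label, n_samples=100, seed=0):
    """Generate n_samples augmented copies from a single annotated volume."""
    rng = np.random.default_rng(seed)
    return [augment_pair(image, label, rng) for _ in range(n_samples)]
```

The label volume is interpolated with nearest-neighbour order (order=0) so that the discrete segmentation values are preserved under the transformation.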

Methods for rigid and non-rigid registration of multimodal three-dimensional image data enable specialized, aggregated visualizations for medical professionals and open up new possibilities for machine learning; a sketch of a rigid multimodal registration follows below.
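A sketch of rigid multimodal registration driven by mutual information, here using the open-source SimpleITK library as one possible tool; the file names are placeholders and the optimizer settings are illustrative assumptions, not the project's actual pipeline:

```python
# Rigid (Euler 3D) registration of two volumes from different modalities,
# using Mattes mutual information as the similarity metric.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)   # placeholder file names
moving = sitk.ReadImage("mr_volume.nii.gz", sitk.sitkFloat32)

# Start from a geometry-centered initial transform.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image's grid for aggregated display.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear,
                        0.0, moving.GetPixelID())
```

Mutual information is used here because it does not assume a linear intensity relationship between modalities, which makes it a common choice for multimodal (e.g. CT/MRI) registration; non-rigid registration would replace the Euler transform with a deformable model such as a B-spline transform.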
