Pre-training deep learning models on large datasets is a promising direction for enabling downstream tasks with only small annotated datasets in new domains or for new tasks. This applies particularly to learning representations from data that naturally come as paired images and text, e.g., medical images with a written report. We will examine the working principles of pre-training multi-modal models on image and text data and test whether these principles can also be investigated with much smaller data subsets. In particular, we will investigate whether using ontologies and knowledge graphs to create a training dataset allows this dataset to be much smaller while generalizing better to unseen data. Moreover, we will probe the limitations of learned image-text models by explicitly testing the representation of grammatical structures, their corresponding visual patterns, and their recombination.
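To make the working principle concrete, a common objective for image-text pre-training is a symmetric contrastive (CLIP-style) loss: matched image-report pairs are pulled together in a shared embedding space while mismatched pairs are pushed apart. The following is a minimal NumPy sketch under that assumption; the function name, embedding dimensions, and temperature value are illustrative, not taken from a specific implementation.

```python
import numpy as np

def contrastive_image_text_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    img_emb, txt_emb: arrays of shape (N, d); row i of each is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (N, N) similarity matrix
    idx = np.arange(len(img))               # i-th image matches i-th text

    def cross_entropy(l):
        # log-softmax over each row, with max-subtraction for stability
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()  # diagonal entries are the matches

    # average of image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

As a sanity check, the loss should be lower when image and text embeddings are correctly paired than when the pairing is shuffled, which is the signal the pre-training exploits.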
(supervised by PI Bast)
Natural language processing tasks in concrete applications.
(supervised by PI Brox)
Creation of a targeted vision-language dataset, training of vision-language models, segmentation with text supervision, visual question answering, and scaling of the approaches.
Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center – University of Freiburg