B05

Cross-modal representation learning, with applications to search in radiology reports and auto-filling of report templates

Project summary

Deep learning on large datasets for pre-training is a promising way to enable downstream tasks on small annotated datasets in new domains or for new tasks. This applies in particular to learning representations from data that naturally come as paired images and text, e.g., medical images with a written report. We will study the working principles of pre-training multi-modal models on image and text data and test whether these principles can also be investigated with much smaller data subsets. In particular, we will investigate whether using ontologies and knowledge graphs to create a training dataset allows that dataset to be much smaller while generalizing better to unseen data. Moreover, we will probe the limitations of learned image-text models by explicitly testing their representation of grammatical structures, the corresponding visual patterns, and their recombination.
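As a rough illustration of the kind of image-text pre-training the summary refers to: a common approach is a symmetric contrastive (CLIP-style) objective, in which paired image and report embeddings are pulled together and mismatched pairs pushed apart. The sketch below is a minimal NumPy version of such a loss; the function name, temperature value, and embedding shapes are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def contrastive_image_text_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: arrays of shape (batch, dim); row i of each is a matching pair.
    """
    # L2-normalize so the dot product becomes cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; matching pairs lie on the diagonal
    logits = img @ txt.T / temperature
    labels = np.arange(logits.shape[0])

    def cross_entropy(l):
        # row-wise cross-entropy against the diagonal labels, numerically stabilized
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned pairs the loss approaches zero, while shuffled (mismatched) pairs yield a large loss, which is what drives the paired representations together during pre-training.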

Our methods

  • Combining knowledge- and data-driven modeling
  • Neural networks
  • Pre-training

Principal investigator

Doctoral researcher

Principal investigator

Doctoral researcher

Principal investigator

Clinician scientist

Administrative Manager

Marc Schumacher

Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center – University of Freiburg