We hypothesize that deep learning can achieve state-of-the-art performance on small datasets as well, by meta-learning how to regularize the networks to avoid overfitting and by further improving them through ensembling. We aim to develop the fundamental methods for doing so and to apply them to various data modalities. We will extend the advances made for rather large tabular datasets in order to "scale down" deep learning so that it is also effective in the regime of small datasets. Specifically, we will develop approaches that search for optimal combinations of regularization methods, based on meta-learning across many small datasets and on ensembling different combinations of regularization methods. We will also tackle the more structured data modalities of longitudinal data and image data.
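To make the idea of searching over combinations of regularization methods concrete, the following is a minimal sketch, not the project's actual implementation: it enumerates simple "cocktails" of dropout, batch normalization, and weight decay for a small MLP on an illustrative tabular dataset and keeps the combination with the best validation accuracy. The dataset, network size, and search budget are assumptions for illustration; the proposed work would replace the exhaustive loop with meta-learned and HPO-driven search.

```python
import itertools
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


def make_mlp(n_features, use_dropout, use_batchnorm):
    # Small MLP whose regularizers are switched on/off by the cocktail.
    layers = [nn.Linear(n_features, 64)]
    if use_batchnorm:
        layers.append(nn.BatchNorm1d(64))
    layers.append(nn.ReLU())
    if use_dropout:
        layers.append(nn.Dropout(0.5))
    layers.append(nn.Linear(64, 2))
    return nn.Sequential(*layers)


def train_and_score(X_tr, y_tr, X_va, y_va, use_dropout, use_batchnorm, weight_decay):
    torch.manual_seed(0)
    model = make_mlp(X_tr.shape[1], use_dropout, use_batchnorm)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    X_tr_t = torch.tensor(X_tr, dtype=torch.float32)
    y_tr_t = torch.tensor(y_tr, dtype=torch.long)
    X_va_t = torch.tensor(X_va, dtype=torch.float32)
    y_va_t = torch.tensor(y_va, dtype=torch.long)
    model.train()
    for _ in range(200):  # small fixed full-batch budget for illustration
        opt.zero_grad()
        loss = loss_fn(model(X_tr_t), y_tr_t)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return (model(X_va_t).argmax(dim=1) == y_va_t).float().mean().item()


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
X_tr, X_va = scaler.transform(X_tr), scaler.transform(X_va)

# Exhaustively evaluate all cocktails over three regularization choices;
# a meta-learned or HPO-based search would replace this loop.
results = {}
for use_dropout, use_batchnorm, wd in itertools.product([False, True], [False, True], [0.0, 1e-2]):
    results[(use_dropout, use_batchnorm, wd)] = train_and_score(
        X_tr, y_tr, X_va, y_va, use_dropout, use_batchnorm, wd)

best = max(results, key=results.get)
print("best cocktail (dropout, batchnorm, weight_decay):", best,
      "val acc:", round(results[best], 3))
```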
(supervised by PI Grabocka)
Work on meta-features for controlling negative transfer in the meta-learning methods, on adapting regularization cocktails to longitudinal datasets, and on developing novel hyperparameter optimization techniques for stacking ensembles.
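As a point of reference for the last work item, the following is a minimal sketch of tuning a stacking ensemble's hyperparameters with off-the-shelf random search in scikit-learn; the choice of base learners, meta-learner, and parameter ranges is an illustrative assumption, and the project's proposed novel HPO techniques would replace this standard baseline.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# Stacking ensemble: two base learners feeding a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Jointly search base-learner and meta-learner hyperparameters.
param_distributions = {
    "rf__n_estimators": [50, 100, 200],
    "rf__max_depth": [None, 5, 10],
    "knn__n_neighbors": [3, 5, 11],
    "final_estimator__C": [0.1, 1.0, 10.0],
}
search = RandomizedSearchCV(stack, param_distributions, n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print("best configuration:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```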
(supervised by PI Hutter)
Meta-learning for regularizing models on tabular datasets, extending this paradigm to image datasets, and training optimal ensembles of neural networks.
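One common way to build strong ensembles of neural networks post hoc is greedy forward selection on validation predictions, in the spirit of Caruana et al.; the sketch below is an assumption for illustration rather than the project's method, and the per-model class-probability predictions are random placeholders standing in for real trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_val, n_classes = 8, 200, 3
y_val = rng.integers(0, n_classes, size=n_val)
# Placeholder: per-model class-probability predictions on the validation set.
preds = rng.dirichlet(np.ones(n_classes), size=(n_models, n_val))


def accuracy(prob, y):
    return float((prob.argmax(axis=1) == y).mean())


# Greedily add (with replacement) the model that most improves the averaged
# ensemble's validation accuracy, up to a fixed ensemble-size budget.
ensemble, ensemble_sum = [], np.zeros((n_val, n_classes))
for _ in range(10):
    scores = [accuracy((ensemble_sum + preds[m]) / (len(ensemble) + 1), y_val)
              for m in range(n_models)]
    best = int(np.argmax(scores))
    ensemble.append(best)
    ensemble_sum += preds[best]

print("selected model indices:", ensemble)
print("ensemble validation accuracy:", accuracy(ensemble_sum / len(ensemble), y_val))
```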