
Project Ideas from MLEVN

If you are interested in working on any of the listed projects, please open an issue on GitHub to track the progress.

This page lists relatively general tasks. We have two more pages for more specific tasks:

  1. Project ideas related to MIMIC-III clinical prediction benchmarks
  2. Project ideas related to biological NLP

Dataset distillation for other tasks

A recent paper by researchers from MIT, FAIR, and Berkeley [1] shows how to generate a very small synthetic dataset that is sufficient to train a neural network to good performance on MNIST. The authors plan to extend the work to larger image datasets and to non-image datasets. Adapting the method to text and EHR data would be both challenging and interesting.

[1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros, Dataset Distillation, arXiv
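
As a starting point, here is a minimal sketch of the bi-level optimization behind dataset distillation: the synthetic examples are learnable tensors, a freshly initialized classifier takes one differentiable SGD step on them, and the resulting loss on real data is backpropagated into the synthetic inputs. The toy linear classifier, shapes, and hyperparameters below are illustrative assumptions, not the setup from the paper.

```python
import itertools

import torch
import torch.nn.functional as F


def distill(real_loader, n_syn=100, n_classes=10, dim=784,
            outer_steps=1000, inner_lr=0.01, outer_lr=1e-3):
    # Learnable synthetic inputs; labels are fixed and balanced across classes.
    x_syn = torch.randn(n_syn, dim, requires_grad=True)
    y_syn = torch.arange(n_syn) % n_classes
    opt = torch.optim.Adam([x_syn], lr=outer_lr)

    data = itertools.cycle(real_loader)
    for _ in range(outer_steps):
        x_real, y_real = next(data)

        # Freshly initialized linear classifier for every outer step.
        W = (0.01 * torch.randn(dim, n_classes)).requires_grad_(True)
        b = torch.zeros(n_classes, requires_grad=True)

        # Inner step: one SGD update on the synthetic data, keeping the graph
        # so that the update itself is differentiable w.r.t. x_syn.
        inner_loss = F.cross_entropy(x_syn @ W + b, y_syn)
        gW, gb = torch.autograd.grad(inner_loss, [W, b], create_graph=True)
        W1, b1 = W - inner_lr * gW, b - inner_lr * gb

        # Outer step: the updated classifier should do well on real data;
        # backpropagate that loss into the synthetic examples.
        x_flat = x_real.view(x_real.size(0), -1)
        outer_loss = F.cross_entropy(x_flat @ W1 + b1, y_real)
        opt.zero_grad()
        outer_loss.backward()
        opt.step()

    return x_syn.detach(), y_syn
```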

Neuron deletion vs generalization for other tasks

Victoria Poghosyan is working on this.

There is a paper by DeepMind [1] on how stable a trained large ConvNet remains when some of its neurons are removed (Fig. 1). The authors also show a correlation between generalization and robustness to such deletions (Fig. 3b).

[1] Ari S. Morcos, David G.T. Barrett, Neil C. Rabinowitz, Matthew Botvinick, On the importance of single directions for generalization, arXiv
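
The core measurement in [1] is simple to reproduce: ablate (zero out) a growing fraction of units in a trained network and track how test accuracy degrades. Below is a hedged PyTorch sketch using a forward hook; the model, the chosen layer, and the ablation fractions are placeholders to be adapted to the task at hand.

```python
import torch


@torch.no_grad()
def accuracy_under_ablation(model, layer, test_loader,
                            fractions=(0.0, 0.25, 0.5, 0.75)):
    """Test accuracy as a function of the fraction of units zeroed in `layer`."""
    results = {}
    for frac in fractions:
        def ablate(_module, _inputs, out):
            # Zero out a random subset of feature channels ("neurons").
            n_units = out.shape[1]
            k = int(frac * n_units)
            idx = torch.randperm(n_units)[:k]
            out[:, idx] = 0.0
            return out

        handle = layer.register_forward_hook(ablate)
        correct, total = 0, 0
        for x, y in test_loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        handle.remove()
        results[frac] = correct / total
    return results
```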

Overfitting ability of recurrent networks

Tatev Mejunts is working on this.

The task is to check whether the results of the well-known “rethinking generalization” paper [1] also hold for recurrent networks.

[1] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, Understanding deep learning requires rethinking generalization, arXiv
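
A natural starting point is the random-label memorization experiment from [1], run with an LSTM instead of a ConvNet: if the network can drive training accuracy on randomly relabeled data close to 100%, it has enough effective capacity to memorize the training set. The sketch below is illustrative; the classifier architecture, vocabulary size, and full-batch training loop are assumptions, not a prescribed setup.

```python
import torch
import torch.nn as nn


class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len) token ids
        _, (h, _) = self.lstm(self.emb(x))
        return self.out(h[-1])


def fit_random_labels(x_train, n_classes, vocab_size=10000, epochs=200):
    # Replace the true labels with uniformly random ones.
    y_rand = torch.randint(0, n_classes, (x_train.size(0),))
    model = LSTMClassifier(vocab_size, n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Full-batch training for brevity; mini-batches would be used in practice.
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x_train), y_rand).backward()
        opt.step()

    # Training accuracy near 100% means the RNN can memorize random labels.
    train_acc = (model(x_train).argmax(1) == y_rand).float().mean().item()
    return train_acc
```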

Test the quality of multilingual embeddings

There are two recent sets of pretrained multilingual embeddings.

The task is to validate the quality of these embeddings for transfer learning.
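
One simple sanity check, assuming the embeddings come as fastText-style .vec files already aligned to a shared space, is word-translation retrieval: for each source word in a bilingual test dictionary, check whether its nearest neighbour among the target-language vectors is the correct translation. The file formats, paths, and the precision@1 metric below are assumptions made for illustration.

```python
import numpy as np


def load_vec(path, max_words=200000):
    """Load a fastText-style .vec file and unit-normalize the vectors."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as f:
        next(f)                                    # skip the "count dim" header
        for i, line in enumerate(f):
            if i >= max_words:
                break
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            vecs.append(np.asarray(parts[1:], dtype=np.float32))
    vecs = np.stack(vecs)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return {w: i for i, w in enumerate(words)}, vecs


def precision_at_1(src_path, tgt_path, dict_path):
    """Fraction of dictionary pairs whose nearest neighbour is the translation."""
    src_idx, src_vecs = load_vec(src_path)
    tgt_idx, tgt_vecs = load_vec(tgt_path)
    hits, total = 0, 0
    with open(dict_path, encoding="utf-8") as f:
        for line in f:
            src_word, tgt_word = line.split()[:2]
            if src_word not in src_idx or tgt_word not in tgt_idx:
                continue
            # Cosine similarity reduces to a dot product on unit vectors.
            sims = tgt_vecs @ src_vecs[src_idx[src_word]]
            hits += int(np.argmax(sims) == tgt_idx[tgt_word])
            total += 1
    return hits / max(total, 1)
```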

Test connectivity of local minima in neural networks trained on NLP tasks

Hakob Tamazyan is working on this.

There is a recent paper [1] by Vetrov’s team that shows the following: if one trains a deep neural net (e.g., a ResNet) on ImageNet and finds two local minima A and B (from different weight initializations), then there exists a point C in weight space such that the loss is almost constant along the straight segments AC and BC. This has not yet been tested on NLP tasks.

[1] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson, Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, arXiv
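
The evaluation side of this experiment is straightforward to sketch: given flattened weight vectors for the two minima A and B and a candidate bend point C, walk along the polygonal chain A → C → B and record the loss at each point. The hedged PyTorch sketch below does only that; finding C itself (by minimizing the expected loss of points sampled along the chain, as in [1]) is omitted, and the helper names are illustrative. The flat vectors w_a, w_b, w_c are assumed to come from torch.nn.utils.parameters_to_vector applied to the same architecture.

```python
import torch
from torch.nn.utils import vector_to_parameters


@torch.no_grad()
def loss_along_chain(model, loss_fn, data_loader, w_a, w_c, w_b, n_points=21):
    """Average loss at evenly spaced points on the piecewise-linear path A-C-B."""
    losses = []
    for t in torch.linspace(0.0, 1.0, n_points):
        # Piecewise-linear interpolation: A -> C for t in [0, 0.5], C -> B after.
        if t <= 0.5:
            w = (1 - 2 * t) * w_a + 2 * t * w_c
        else:
            w = (2 - 2 * t) * w_c + (2 * t - 1) * w_b
        vector_to_parameters(w, model.parameters())
        # Note: for models with BatchNorm, running statistics would need to be
        # re-estimated at each point; that step is omitted in this sketch.
        total, n = 0.0, 0
        for x, y in data_loader:
            total += loss_fn(model(x), y).item() * y.size(0)
            n += y.size(0)
        losses.append(total / n)
    # A roughly flat profile suggests the two minima are connected through C.
    return losses
```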

A short list of non-image-classification tasks