Project Ideas from MLEVN
If you are interested in working on any of the listed projects, please open an issue on GitHub to track progress.
This page lists relatively general tasks. We have two more pages with more specific tasks:
- Project ideas related to MIMIC-III clinical prediction benchmarks
- Project ideas related to biological NLP
Dataset distillation for other tasks
A recent paper by researchers from MIT, FAIR and Berkeley [1] shows how to generate a very small synthetic dataset that is enough to train a neural network to good performance on the MNIST dataset. The authors plan to extend the work to larger image datasets and to non-image datasets. Adapting the method to text and EHR data could be both challenging and interesting (a toy sketch of the core idea is given below).
[1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros, Dataset Distillation, arXiv
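A minimal sketch of the bi-level idea, assuming a PyTorch setup and a plain linear classifier; the synthetic set size, the single inner gradient step, and all hyperparameters are illustrative placeholders, not the paper's settings:

```python
# Toy version of the dataset-distillation idea [1]: learn a tiny synthetic set
# such that ONE differentiable SGD step on it produces a model that does well
# on real data. All names and sizes here are illustrative.
import torch
import torch.nn.functional as F

def distill(real_x, real_y, n_classes, n_syn=10, outer_steps=500, inner_lr=0.1):
    d = real_x.shape[1]
    syn_x = torch.randn(n_syn, d, requires_grad=True)      # learnable synthetic inputs
    syn_y = torch.arange(n_syn) % n_classes                 # fixed, balanced synthetic labels
    opt = torch.optim.Adam([syn_x], lr=1e-2)
    for _ in range(outer_steps):
        w = torch.zeros(d, n_classes, requires_grad=True)   # fresh (zero-initialized) linear model
        inner_loss = F.cross_entropy(syn_x @ w, syn_y)
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_trained = w - inner_lr * g                         # one differentiable training step
        outer_loss = F.cross_entropy(real_x @ w_trained, real_y)
        opt.zero_grad()
        outer_loss.backward()                                # gradient flows back into syn_x
        opt.step()
    return syn_x.detach(), syn_y
```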
Neuron deletion vs generalization for other tasks
Victoria Poghosyan is working on this.
There is a paper by DeepMind [1] about how stable a trained large ConvNet remains when some of its neurons are removed (Fig. 1).
They also showed that there is a correlation between generalization and robustness (Fig. 3b).
- Choose a non-image-classification task and train a few different types of networks on it. Could be one academic task + one task from industry
- Generate Figure 1 for these networks (without label noise); a minimal ablation sketch is given after the reference below
- Generate Figure 3b for these networks, look for clusters
- Investigate if robustness can be used as an early stopping signal (Figure 4)
- Investigate the role of dropout / recurrent dropout, compare with ConvNet results (Figure 5a)
- (harder) Investigate the role of recurrent batch-norm (Figure 5b)
- (harder) Generate Figure 1 with label noise. Requires proof that the network will still be able to overfit (see the task “Overfitting ability of recurrent networks”)
[1] Ari S. Morcos, David G.T. Barrett, Neil C. Rabinowitz, Matthew Botvinick, On the importance of single directions for generalization, arXiv
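A hedged sketch of the ablation curve behind Figure 1, assuming a trained PyTorch `model`, a chosen `layer` whose output units are deleted, and a `test_loader`; these names and the zero-clamping choice are assumptions, not the paper's exact protocol:

```python
# Zero out a growing random fraction of units in one layer and record how test
# accuracy degrades (cf. Fig. 1 of Morcos et al. [1]). `model`, `layer`, and
# `test_loader` are placeholders for whatever network/task you pick.
import torch

def ablation_curve(model, layer, test_loader, fractions=(0.0, 0.25, 0.5, 0.75, 1.0)):
    model.eval()
    results = []
    for frac in fractions:
        def zero_units(module, inputs, output):
            # Zero a random subset of feature channels (resampled every forward pass).
            n = output.shape[1]
            idx = torch.randperm(n)[: int(frac * n)]
            output = output.clone()
            output[:, idx] = 0.0
            return output
        handle = layer.register_forward_hook(zero_units)
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in test_loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        handle.remove()
        results.append((frac, correct / total))
    return results
```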
Overfitting ability of recurrent networks
Tatev Mejunts is working on this.
Attempt to confirm the results of the famous paper on “rethinking generalization” [1] for recurrent networks.
- Choose a non-image-classification task
- Think about the equivalents of random pixels and shuffled pixels for the selected task
- Generate Figure 1 from the paper (a label-shuffling sketch is given after the reference below)
- Think about the effect of regularization (augmentation, dropout, etc.)
[1] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, Understanding deep learning requires rethinking generalization, arXiv
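A rough sketch of the random-label part of that figure, adapted to a recurrent classifier; the LSTM architecture, full-batch training, and all hyperparameters are placeholder choices, not the paper's setup:

```python
# Shuffle the training labels and check whether an LSTM classifier can still
# drive training accuracy to ~100% (memorization), in the spirit of [1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len) int64
        _, (h, _) = self.lstm(self.emb(tokens))
        return self.fc(h[-1])                         # classify from the final hidden state

def fit_shuffled_labels(tokens, labels, n_classes, epochs=200):
    labels = labels[torch.randperm(len(labels))]      # destroy any input-label relationship
    model = LSTMClassifier(int(tokens.max()) + 1, n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(tokens), labels)
        loss.backward()
        opt.step()
    with torch.no_grad():
        train_acc = (model(tokens).argmax(1) == labels).float().mean().item()
    return train_acc                                   # memorization succeeded if this is near 1.0
```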
Test the quality of multilingual embeddings
There are two recent sets of pretrained multilingual embeddings:
- Unsupervised, based on Facebook’s MUSE
- Supervised, from BabylonPartners, built using the Google Translate API
- There is also an older blog post by Sebastian Ruder covering (possibly) more techniques.
The task is to validate the quality of these embeddings for transfer learning; a zero-shot evaluation sketch is given after the list below.
- Take a well-known academic dataset in English (e.g. Amazon reviews) and find a related dataset (with the same labels) in a different language
- Train models on the English dataset using pure English vectors and different types of multilingual embeddings. Check whether the multilingual embeddings perform worse
- Train and evaluate similar models on the (smaller) dataset in another language
- Evaluate English models with multilingual embeddings on the smaller dataset
- Train on a combination of the English dataset and the other one. Plot accuracy as a function of the size of the second dataset.
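A sketch of the zero-shot step of this comparison, assuming MUSE-style aligned `.vec` files and a simple bag-of-vectors classifier; the file handling, 300-dimensional vectors, mean pooling, and logistic regression are illustrative assumptions, not a prescribed setup:

```python
# Train a bag-of-vectors classifier on the English dataset and evaluate it
# directly on the other language via aligned embeddings (e.g. MUSE
# wiki.multi.{en,xx}.vec files). Paths and helpers are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_vectors(path, limit=100_000):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        next(f)                                        # skip the "count dim" header line
        for i, line in enumerate(f):
            if i >= limit:
                break
            word, *nums = line.rstrip().split(" ")
            vecs[word] = np.array(nums, dtype=np.float32)
    return vecs

def doc_vector(text, vecs, dim=300):
    toks = [vecs[w] for w in text.lower().split() if w in vecs]
    return np.mean(toks, axis=0) if toks else np.zeros(dim, dtype=np.float32)

def zero_shot_accuracy(en_texts, en_labels, xx_texts, xx_labels, en_vecs, xx_vecs):
    clf = LogisticRegression(max_iter=1000)
    clf.fit([doc_vector(t, en_vecs) for t in en_texts], en_labels)
    preds = clf.predict([doc_vector(t, xx_vecs) for t in xx_texts])
    return float(np.mean(preds == np.array(xx_labels)))
```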
Test connectivity of local minima in neural networks trained on NLP tasks
Hakob Tamazyan is working on this.
There is a recent paper [1] by Vetrov’s team that shows the following: if one trains a deep neural net (e.g. a ResNet) on ImageNet and finds two local minima A and B (from different weight initializations), then there exists a point C in weight space such that the loss is almost constant along the straight lines AC and BC. This has not yet been tested on NLP tasks.
- Take one or two popular NLP datasets, e.g. from this list
- Train several popular neural models (both CNN- and LSTM-based) with a few random initializations
- Use the algorithms described in [1] to find the “C” points (a minimal version of the chain search is sketched after the reference below)
- Draw pictures like Figure 1 of [1]
- (harder) Understand the number of possible C points for each A/B pair
[1] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson, Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, arXiv
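A minimal version of the polygonal-chain search from [1], assuming two flattened weight vectors `A` and `B` from independently trained runs and a `loss_fn` that loads a flat weight vector into the model and returns the differentiable training loss; these helpers are assumed, not provided by the paper's code:

```python
# Learn a bend point C so that the loss stays low along the two-segment
# chain A -> C -> B between two trained solutions (cf. [1]).
import torch

def find_bend_point(A, B, loss_fn, steps=1000, lr=1e-2):
    C = ((A + B) / 2).clone().requires_grad_(True)    # initialize C at the midpoint
    opt = torch.optim.Adam([C], lr=lr)
    for _ in range(steps):
        t = torch.rand(())                            # uniform position along the chain
        if t < 0.5:
            point = A + 2 * t * (C - A)               # on segment A -> C
        else:
            point = C + (2 * t - 1) * (B - C)         # on segment C -> B
        loss = loss_fn(point)                         # training loss at this point in weight space
        opt.zero_grad()
        loss.backward()                               # gradient flows back into C
        opt.step()
    return C.detach()
```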
A short list of non-image-classification tasks