
Project Ideas from MLEVN

If you are interested in working on any of the listed projects, please open an issue on GitHub to track the progress.

Neuron deletion vs generalization for other tasks

Victoria Poghosyan is working on this.

There is a paper by DeepMind [1] on how stable a trained large ConvNet remains when some of its neurons are removed (Fig. 1). They also showed a correlation between generalization and robustness to such deletions (Fig. 3b).
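A minimal sketch of such an ablation experiment, assuming a PyTorch model with an `nn.Linear` layer to ablate; `model`, `layer`, and `test_loader` are placeholders for your own trained network and data:

```python
import torch

def ablate_and_evaluate(model, layer, test_loader, fraction, device="cpu"):
    """Zero out `fraction` of the units in `layer` and measure test accuracy."""
    num_units = layer.out_features           # assumes an nn.Linear layer
    k = int(fraction * num_units)
    dead = torch.randperm(num_units)[:k]     # random subset of units to delete

    def hook(module, inputs, output):
        output = output.clone()
        output[..., dead] = 0.0              # clamp the chosen units to zero
        return output

    handle = layer.register_forward_hook(hook)
    correct = total = 0
    model.eval()
    with torch.no_grad():
        for x, y in test_loader:
            preds = model(x.to(device)).argmax(dim=-1)
            correct += (preds == y.to(device)).sum().item()
            total += y.numel()
    handle.remove()
    return correct / total
```

Sweeping `fraction` from 0 to 1 and plotting the resulting accuracies reproduces the kind of curve shown in Fig. 1 of [1].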

[1] Ari S. Morcos, David G.T. Barrett, Neil C. Rabinowitz, Matthew Botvinick, On the importance of single directions for generalization, arXiv

Overfitting ability of recurrent networks

Tatev Mejunts is working on this.

The goal is to check whether the results of the well-known paper on “rethinking generalization” [1] also hold for recurrent networks.
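A minimal sketch of the core experiment, an LSTM classifier trained on random labels; all sizes are arbitrary, and with enough epochs the training accuracy on pure-noise labels should approach 1.0:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, T, V, H, C = 1000, 30, 5000, 128, 2   # samples, seq length, vocab, hidden, classes
x = torch.randint(0, V, (N, T))          # random token sequences
y = torch.randint(0, C, (N,))            # labels are pure noise

class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, H)
        self.lstm = nn.LSTM(H, H, batch_first=True)
        self.head = nn.Linear(H, C)

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))
        return self.head(out[:, -1])      # classify from the final hidden state

model = LSTMClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):                  # increase if memorization is not yet complete
    logits = model(x)
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

If the network memorizes the noise, the interesting follow-up questions from [1] (effect of regularization, of dataset size, of model capacity) can be asked in the recurrent setting.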

[1] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, Understanding deep learning requires rethinking generalization, arXiv

Test the quality of multilingual embeddings

Two pretrained multilingual embedding sets were released recently.

The task is to validate the quality of these embeddings for transfer learning.
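One possible sanity check is sketched below, under two assumptions: the vectors come as text files with one `word v1 v2 …` line each (the file names `wiki.multi.en.vec` and `wiki.multi.hy.vec` are hypothetical), and the languages share an aligned space, so cross-lingual nearest neighbors should be rough translations:

```python
import numpy as np

def load_vectors(path, limit=50000):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        next(f)                            # assumes a "count dim" header line
        for i, line in enumerate(f):
            if i >= limit:
                break
            word, *vals = line.rstrip().split(" ")
            vecs[word] = np.asarray(vals, dtype=np.float32)
    return vecs

def nearest_neighbors(query_vec, vocab, k=5):
    """Top-k words in `vocab` by cosine similarity to `query_vec`."""
    words = list(vocab)
    mat = np.stack([vocab[w] for w in words])
    mat /= np.linalg.norm(mat, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    scores = mat @ q
    return [words[i] for i in np.argsort(-scores)[:k]]

en = load_vectors("wiki.multi.en.vec")     # hypothetical aligned-embedding files
hy = load_vectors("wiki.multi.hy.vec")
print(nearest_neighbors(en["dog"], hy))    # good alignment -> Armenian translations
```

A more rigorous validation would train a classifier on one language's embeddings and evaluate it on another's.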

Test connectivity of local minima in neural networks trained on NLP tasks

Hakob Tamazyan is working on this.

There is a recent paper [1] by Vetrov’s team that shows the following: if one trains a deep neural network (e.g. a ResNet) on ImageNet and finds two local minima A and B (from different weight initializations), then there exists a point C in weight space such that the loss function is almost constant along the straight segments AC and CB. This has not yet been tested on NLP tasks.
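A minimal sketch of the evaluation step, assuming the weights of A, B, and a candidate bend point C are available as flat NumPy vectors, and `flat_loss` is a hypothetical helper that loads such a vector into the model and returns the training loss:

```python
import numpy as np

def loss_along_chain(theta_a, theta_c, theta_b, flat_loss, steps=20):
    """Loss along the polygonal chain A -> C -> B from [1].

    If A and B are mode-connected through C, these values should stay
    close to the loss at the endpoints.
    """
    ts = np.linspace(0.0, 1.0, steps)
    seg_ac = [flat_loss((1 - t) * theta_a + t * theta_c) for t in ts]
    seg_cb = [flat_loss((1 - t) * theta_c + t * theta_b) for t in ts]
    return seg_ac + seg_cb
```

In [1] the point C is itself found by minimizing the expected loss along the chain; the sketch above only covers the verification part once C is given.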

[1] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson, Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, arXiv

Tasks related to MIMIC-III benchmarks

The main repo of the benchmark: YerevaNN/mimic3-benchmarks

Implement more neural architectures for the benchmark

Adversarial training on the benchmark data
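As one concrete starting point (not prescribed by the benchmark), a hedged sketch of FGSM-style adversarial training on continuous time-series inputs; `model`, `opt`, and the tensors are placeholders, and the 50/50 clean/adversarial mix is a common but arbitrary choice:

```python
import torch
import torch.nn.functional as F

def fgsm_step(model, x, y, epsilon=0.05):
    """Return an adversarially perturbed copy of `x` (fast gradient sign method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, opt, x, y, epsilon=0.05):
    x_adv = fgsm_step(model, x, y, epsilon)
    opt.zero_grad()                        # clear gradients left by fgsm_step
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```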

Adversarial reprogramming for the benchmark tasks

There is an interesting paper by the Google Brain team on adversarially reprogramming pretrained neural networks to perform a new task; the idea is demonstrated by reprogramming an ImageNet classifier to perform MNIST classification. Initially there was no evidence that the technique might work on recurrent networks, which makes this task a risky one :) Since then, a paper from UC San Diego has applied it to text classification tasks based on LSTMs and CNNs.
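A hedged sketch of the reprogramming idea itself: keep the pretrained network frozen, learn only an additive “program”, and remap a subset of its output classes to the new task. The `pretrained` network, input sizes, and class remapping are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

class Reprogrammer(torch.nn.Module):
    def __init__(self, pretrained, small=28, big=224, n_new_classes=10):
        super().__init__()
        self.net = pretrained.eval()
        for p in self.net.parameters():
            p.requires_grad = False            # only the program is trained
        self.program = torch.nn.Parameter(torch.zeros(3, big, big))
        self.pad = (big - small) // 2
        self.n_new = n_new_classes

    def forward(self, x_small):                # x_small: (B, 1, 28, 28), e.g. MNIST
        x = x_small.repeat(1, 3, 1, 1)         # grayscale -> 3 channels
        x = F.pad(x, [self.pad] * 4)           # embed in an ImageNet-sized frame
        logits = self.net(torch.tanh(self.program) + x)
        return logits[:, : self.n_new]         # reuse first k ImageNet classes as new labels
```

For the MIMIC-III tasks, the open question is what the analogue of the additive image program would be for a recurrent network over clinical time series.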

Visualizing the neural models

David Karamyan is working on this.

There are lots of papers on “visualizing and understanding” convolutional networks, mostly starting from [1]. In recent years, a few similar papers have appeared for RNNs, especially for sentiment analysis [2, 3]. Another recent paper does similar things for RNNs running on EHR notes [4].
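A minimal sketch of the gradient-based saliency idea from [1, 2] applied to token embeddings; the `emb` layer and the `forward_embedded` method are hypothetical hooks into your own model:

```python
import torch

def token_saliency(model, tokens, label):
    """Score each input token by the gradient norm of the loss w.r.t. its embedding."""
    embedded = model.emb(tokens).detach().requires_grad_(True)
    logits = model.forward_embedded(embedded)   # assumed entry point past the embedding
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    return embedded.grad.norm(dim=-1)           # one saliency score per token
```

Plotting these scores as a heatmap over the input text gives the kind of visualization shown in [2].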

[1] Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, arXiv

[2] Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky, Visualizing and Understanding Neural Models in NLP, NAACL-HLT 2016, ACLWEB

[3] Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek, Explaining Recurrent Neural Network Predictions in Sentiment Analysis, 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2017, ACLWEB

[4] Jingshu Liu, Zachariah Zhang, Narges Razavian, Deep EHR: Chronic Disease Prediction Using Medical Notes, arXiv

Improve multitask learning

Determining the weights of the individual tasks in a multitask training setting is generally a hard problem (TODO: any reference?). The experiments on the MIMIC benchmarks showed that the networks overfit on some tasks earlier than on others.

Is it possible to create an architecture that automatically adjusts these weights during training? Something similar to [1]…
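One simple baseline to experiment with (not the learned-optimizer approach of [1]) is the homoscedastic-uncertainty weighting of Kendall et al. (2018), where one log-variance per task is learned jointly with the model and scales each task loss:

```python
import torch

class UncertaintyWeighting(torch.nn.Module):
    """Learnable per-task loss weights via homoscedastic uncertainty."""

    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        # total = sum_i [ loss_i / (2 * sigma_i^2) + log sigma_i ]
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + 0.5 * precision * loss + 0.5 * self.log_vars[i]
        return total
```

The log-variance terms penalize simply inflating all uncertainties, so the weights settle at a trade-off between tasks; whether this helps with the uneven overfitting seen on the MIMIC benchmarks is exactly what this project would test.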

[1] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas, Learning to learn by gradient descent by gradient descent, 2016, arXiv

A short list of non-image-classification tasks