Projects

Project Ideas from MLEVN: MIMIC-III benchmarks

The main repo of the benchmark: YerevaNN/mimic3-benchmarks

Implement more neural architectures for the benchmark

Adversarial training on the benchmark data

Vahe Asvatourian is working on this.
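As a starting point, a minimal sketch of FGSM-style adversarial training on the benchmark's continuous time-series features is given below. The model, input shapes, and the epsilon value are assumptions for illustration, not part of the benchmark code, and PyTorch is used only as an example framework.

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_step(model, optimizer, x, y, epsilon=0.05):
        """One training step on a mix of clean and FGSM-perturbed inputs.

        x: (batch, time, features) continuous clinical time series (hypothetical shape)
        y: (batch,) binary labels, e.g. in-hospital mortality
        """
        model.train()
        x = x.clone().detach().requires_grad_(True)

        # Clean forward pass to obtain the gradient of the loss w.r.t. the input.
        clean_loss = F.binary_cross_entropy_with_logits(model(x).squeeze(-1), y.float())
        grad_x, = torch.autograd.grad(clean_loss, x)

        # FGSM: perturb the input in the direction of the loss gradient sign.
        x_adv = (x + epsilon * grad_x.sign()).detach()

        # Optimize on clean and adversarial examples together.
        optimizer.zero_grad()
        loss = 0.5 * F.binary_cross_entropy_with_logits(model(x.detach()).squeeze(-1), y.float()) \
             + 0.5 * F.binary_cross_entropy_with_logits(model(x_adv).squeeze(-1), y.float())
        loss.backward()
        optimizer.step()
        return loss.item()

Whether perturbing clinical time series in input space produces meaningful adversarial examples (rather than physiologically implausible ones) is part of the research question here.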

Adversarial reprogramming for the benchmark tasks

There is an interesting paper by the Google Brain team on adversarially reprogramming pretrained neural networks to perform a new task. The idea is demonstrated by reprogramming an ImageNet network to perform MNIST classification. So far, there is little evidence that it works on recurrent networks, which makes this task a risky one :) A paper from UC San Diego has, however, applied the technique to text classification tasks based on LSTMs and CNNs.
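The core trick is small enough to sketch: a learned additive "program" is wrapped around a frozen pretrained classifier, and only the program is trained on the new task. The sketch below assumes an image classifier (e.g., a torchvision ImageNet model) and MNIST-sized target inputs; all shapes, names, and the hard-coded label mapping are illustrative, not taken from the papers.

    import torch
    import torch.nn as nn

    class AdversarialProgram(nn.Module):
        """Repurposes a frozen pretrained classifier via a learned input 'program'."""

        def __init__(self, frozen_model, canvas_size=224, small_size=28, target_classes=10):
            super().__init__()
            self.frozen_model = frozen_model.eval()
            for p in self.frozen_model.parameters():
                p.requires_grad_(False)

            self.canvas_size = canvas_size
            self.small_size = small_size
            # The learnable program; squashed to [-1, 1] with tanh in forward().
            self.program = nn.Parameter(torch.zeros(1, 3, canvas_size, canvas_size))

            # Mask out the central region where the small target-task input is placed.
            mask = torch.ones(1, 3, canvas_size, canvas_size)
            s = (canvas_size - small_size) // 2
            mask[:, :, s:s + small_size, s:s + small_size] = 0.0
            self.register_buffer("mask", mask)

            # Fixed mapping: source class i is reinterpreted as target class i.
            self.register_buffer("label_map", torch.arange(target_classes))

        def forward(self, x_small):
            # x_small: (batch, 1, small_size, small_size), e.g. MNIST digits in [0, 1].
            b = x_small.size(0)
            canvas = torch.zeros(b, 3, self.canvas_size, self.canvas_size, device=x_small.device)
            s = (self.canvas_size - self.small_size) // 2
            canvas[:, :, s:s + self.small_size, s:s + self.small_size] = x_small
            programmed = canvas + torch.tanh(self.program) * self.mask
            source_logits = self.frozen_model(programmed)    # (batch, num_source_classes)
            return source_logits[:, self.label_map]          # (batch, target_classes)

Training then amounts to minimizing the usual cross-entropy of the remapped logits, updating self.program only. How to define an analogous "program" for the benchmark's recurrent models over clinical time series is exactly the open part of this idea.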

Visualizing the neural models

David Karamyan is working on this.

There are many papers on “visualizing and understanding” convolutional networks, mostly starting from [1]. In recent years, a few similar papers have appeared for RNNs, especially about sentiment analysis [2, 3]. Another recent paper does similar things for RNNs running on EHR notes [4]. A minimal gradient-based saliency sketch is given after the references below.

[1] Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, arXiv

[2] Jiwei Li, Xinlei Chen, Eduard Hovy and Dan Jurafsky, Visualizing and Understanding Neural Models in NLP, NAACL-HLT 2016, ACLWEB

[3] Leila Arras, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek, Explaining Recurrent Neural Network Predictions in Sentiment Analysis, 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2017, ACLWEB

[4] Jingshu Liu, Zachariah Zhang, Narges Razavian, Deep EHR: Chronic Disease Prediction Using Medical Notes, arXiv
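As a concrete starting point, the saliency-map idea from [1] can be sketched for a recurrent model over clinical time series: take the gradient of a chosen output with respect to the input and plot its magnitude as a heat map over time steps and variables. The model and input shapes below are hypothetical, not the benchmark code.

    import torch

    def saliency_map(model, x, target_index=0):
        """Gradient-based saliency in the spirit of [1].

        x: (batch, time, features) input time series.
        Returns |d output[target_index] / d input|, same shape as x.
        """
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        output = model(x)                        # (batch, num_outputs)
        output[:, target_index].sum().backward()
        return x.grad.abs()

The resulting (time, features) heat map can then be inspected to see which measurements and time steps drove a particular prediction.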

Improve multitask learning

Determining the loss weights for the individual tasks in a multitask training setting is generally a hard problem (TODO: any reference?). Experiments on the MIMIC benchmarks showed that the networks overfit on some tasks earlier than on others.

Is it possible to create an architecture that automatically modifies the weights during training? Something similar to [1]… A rough sketch of learnable task weights is given after the reference below.

[1] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas, Learning to learn by gradient descent by gradient descent, 2016, arXiv
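As a simpler baseline than full learning-to-learn, the per-task loss weights can themselves be made learnable, for example by learning a log-variance per task and weighting the losses with it (so-called uncertainty weighting). The sketch below illustrates that idea, not the method of [1]; the number of tasks and the loss definitions are assumptions.

    import torch
    import torch.nn as nn

    class LearnedTaskWeights(nn.Module):
        """Combines per-task losses with learnable weights.

        Each task i gets a learnable log-variance s_i; the combined loss is
        sum_i( exp(-s_i) * loss_i + s_i ), so tasks with large uncertainty are
        automatically down-weighted as training progresses.
        """

        def __init__(self, num_tasks):
            super().__init__()
            self.log_vars = nn.Parameter(torch.zeros(num_tasks))

        def forward(self, task_losses):
            # task_losses: sequence of scalar losses, one per benchmark task.
            total = 0.0
            for i, loss in enumerate(task_losses):
                total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
            return total

The log-variances are optimized jointly with the network parameters. Whether this (or a meta-learned schedule in the spirit of [1]) actually mitigates the early overfitting observed on some benchmark tasks is the question to answer.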