AI502 Paper List for Sharing

Presentation
– Each student should select 1 paper.
– Bold papers can be selected by at most 2 students.
– Other papers can be selected by at most 3 students.

 

Implementation
– Each student should select 5 papers, including 1 presentation paper.
– Each paper can be selected by at most 14 students.
– Each student can select at most 1 bold paper for implementation.

 

Policy

– Each paper implementation should be uploaded 1 week before the presentation

– Every student needs to reproduce the models/experiments in 5 papers (including the presented paper) (40% of the grade)
• Submit the document/report as well
• Tentative deadline is one week before the presentation

 

Paper List

1. Regularization for Deep Learning (2)[4]

– Dropout: A Simple Way to Prevent Neural Networks from Overfitting : 10/2 [Link]

– Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift : 10/2 [Link], MNIST, ImageNet
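For students starting the reproduction assignment, here is a minimal sketch of how the two regularizers in this section are typically wired into a small MNIST-style classifier. It assumes PyTorch; the network, layer sizes, and names are illustrative placeholders, not the papers' reference code.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Toy MNIST-style classifier combining BatchNorm and Dropout (illustrative only)."""
    def __init__(self, num_classes: int = 10, p_drop: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),    # normalize activations over the mini-batch
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=p_drop),  # randomly zero units during training
            nn.Linear(32 * 14 * 14, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallConvNet()
model.train()                               # Dropout/BatchNorm behave differently under eval()
logits = model(torch.randn(8, 1, 28, 28))   # a fake batch of 8 MNIST-sized images
print(logits.shape)                         # torch.Size([8, 10])
```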

 

2. Convolutional Neural Networks (6)[15]

– Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG) : 10/14 [Link], ILSVRC-2012 dataset

– Deep Residual Learning for Image Recognition (ResNet) : 10/14 [Link], ImageNet 2012, CIFAR-10

– Group Normalization : 10/16 [Link], ImageNet

– MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications : 10/16 [Link], ILSVRC 2012

– Fully Convolutional Networks for Semantic Segmentation : 10/28 [Link], PASCAL VOC 2011, 2012

– Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization : 10/28 [Link]
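A minimal sketch of the identity-shortcut residual block at the heart of the ResNet paper above, assuming PyTorch. The block is simplified (same channel count in and out, no downsampling) and is not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Simplified residual block: output = ReLU(F(x) + x), identity shortcut only."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)               # add the shortcut before the final ReLU

block = BasicBlock(64)
y = block(torch.randn(2, 64, 32, 32))        # CIFAR-10-sized feature map
print(y.shape)                               # torch.Size([2, 64, 32, 32])
```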

 

3. Recurrent Neural Networks (6)[17]

– Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation : 11/4 [Link]

– Sequence to Sequence Learning with Neural Networks : 11/4 [Link], WMT’14 English to French dataset

– Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks : 11/6 [Link]

– DRAW: A Recurrent Neural Network For Image Generation : 11/6 [Link], MNIST, SVHN, CIFAR-10

– Neural Machine Translation by Jointly Learning to Align and Translate : 11/11 [Link]

– Attention Is All You Need : 11/11 [Link]
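The last two papers in this section center on attention; the sketch below implements the scaled dot-product attention formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, in plain PyTorch. Tensor shapes are illustrative; a full Transformer adds multi-head projections, masking, and positional encodings.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (single head, no masking)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, query_len, key_len)
    weights = torch.softmax(scores, dim=-1)            # attention distribution over the keys
    return weights @ v, weights

q = torch.randn(2, 5, 64)     # 2 sequences, 5 query positions, d_k = 64
k = torch.randn(2, 7, 64)     # 7 key/value positions
v = torch.randn(2, 7, 64)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # torch.Size([2, 5, 64]) torch.Size([2, 5, 7])
```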

 

4. Autoencoders (2) & Variational Autoencoders (2)[11]

– Extracting and Composing Robust Features with Denoising Autoencoders : 11/18 [Link]

– k-Sparse Autoencoders : 11/18 [Link]

– Stochastic Gradient Variational Bayes and Variational Autoencoder : 11/25 [Link]

– beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework : 11/25 [Link]
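A minimal sketch of the reparameterization trick and the beta-weighted negative ELBO used by the VAE papers above, assuming PyTorch and flattened 28x28 inputs; the layer sizes are arbitrary and this is not the papers' reference code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE on flattened 28x28 images (illustrative sizes)."""
    def __init__(self, latent_dim: int = 20):
        super().__init__()
        self.enc = nn.Linear(784, 400)
        self.mu = nn.Linear(400, latent_dim)
        self.logvar = nn.Linear(400, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 400), nn.ReLU(), nn.Linear(400, 784))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return torch.sigmoid(self.dec(z)), mu, logvar

def vae_loss(recon, x, mu, logvar, beta: float = 1.0):
    # Negative ELBO: reconstruction term + beta * KL(q(z|x) || N(0, I)); beta > 1 gives beta-VAE.
    recon_term = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + beta * kl

x = torch.rand(8, 784)                            # fake batch of flattened images in [0, 1]
recon, mu, logvar = TinyVAE()(x)
print(vae_loss(recon, x, mu, logvar, beta=4.0))   # beta > 1 as in beta-VAE
```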

 

5. Generative Adversarial Networks (6)[17]

– Generative Adversarial Nets : 12/4 [Link], MNIST, TFD, CIFAR-10

– Wasserstein GAN : 12/4 [Link]

– Spectral Normalization for Generative Adversarial Networks : 12/9 [Link], CIFAR-10, STL-10

– Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks : 12/9 [Link]

– InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets : 12/11 [Link]

– Adversarial Autoencoders : 12/11 [Link]
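A minimal sketch of one alternating training step for the original GAN objective (with the non-saturating generator loss), assuming PyTorch; the toy generator and discriminator are illustrative. The other papers in this section change the loss (Wasserstein), the normalization (spectral norm), or the architecture, but keep this overall loop.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator on flattened 28x28 images (sizes are illustrative).
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    batch = real.size(0)
    fake = G(torch.randn(batch, 100))

    # 1) Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: push D(fake) toward 1 (non-saturating generator loss).
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

print(gan_step(torch.rand(16, 784) * 2 - 1))  # one step on a stand-in "real" batch in [-1, 1]
```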
