
7 Must-Read Generative Models Papers from ICLR 2020

by neptune.ai Patrycja Jenkner, October 26th, 2020
Too Long; Didn't Read

The International Conference on Learning Representations (ICLR) took place last week. ICLR is dedicated to research on all aspects of representation learning, commonly known as deep learning. Due to the coronavirus pandemic, the conference could not be held in Addis Ababa as planned and went virtual instead. Here are the 7 best generative models papers from ICLR 2020, covering topics such as federated learning with differential privacy, defenses against physically realizable attacks on image classification, and the k-winners-take-all activation function for adversarial defense.


The International Conference on Learning Representations (ICLR) took place last week, and I had the pleasure of participating in it. ICLR is an event dedicated to research on all aspects of representation learning, commonly known as deep learning.

Due to the coronavirus pandemic, the conference couldn’t take place in Addis Ababa as planned and went virtual instead. That didn’t dampen the great atmosphere of the event – quite the opposite: it was engaging and interactive, and it attracted an even bigger audience than last year.

If you’re interested in what organizers think about the unusual online arrangement of the conference, you can read about it here.

As an attendee, I was inspired by the presentations from over 1300 speakers and decided to create a series of blog posts summarizing the best papers in four main areas.

Here are the 7 best generative models papers from ICLR 2020:

Best Generative Models Papers

1. Generative Models for Effective ML on Private, Decentralized Datasets

Generative Models + Federated Learning + Differential Privacy gives data scientists a way to analyze private, decentralized data (e.g., on mobile devices) where direct inspection is prohibited.

(TL;DR, from OpenReview.net)

Paper | Code

Percentage of samples generated from the word-level language model (word-LM) that are out-of-vocabulary (OOV), by position in the sentence, with and without the bug.
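The figure above hints at the practical use case: inspecting model behavior (here, out-of-vocabulary generations) through synthetic samples rather than raw user data. As a rough, illustrative sketch of the privacy mechanism involved – not the paper’s exact algorithm, and with made-up function and parameter names – differentially private federated averaging clips each client’s update and adds Gaussian noise before aggregation:

```python
import numpy as np

def dp_fedavg_round(global_params, client_updates, clip_norm=1.0, noise_std=0.1):
    """One illustrative round of differentially private federated averaging.

    client_updates: list of parameter-update vectors, one per client.
    Each update is clipped to `clip_norm`, and Gaussian noise is added to the
    average so that no single client's data dominates the result.
    """
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    avg += np.random.normal(0.0, noise_std * clip_norm / len(client_updates), size=avg.shape)
    return global_params + avg

# Toy usage: 3 clients send updates for a 5-parameter generator.
params = np.zeros(5)
updates = [np.random.randn(5) for _ in range(3)]
params = dp_fedavg_round(params, updates)
```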

First author: Sean Augenstein

Twitter LinkedIn

2. Defending Against Physically Realizable Attacks on Image Classification

The authors propose a defense against physically realizable attacks on image classifiers, such as adversarial eyeglass frames against face recognition and adversarial stickers on stop signs.

Paper | Code

(a) An example of the eyeglass frame attack. Left: original face input image. Middle: modified input image (adversarial eyeglasses superimposed on the face). Right: an image of the predicted individual with the adversarial input in the middle image. (b) An example of the stop sign attack. Left: original stop sign input image. Middle: adversarial mask. Right: stop sign image with adversarial stickers, classified as a speed limit sign. 
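To give a concrete sense of what “physically realizable” means, here is a rough sketch of a sticker-style attack that optimizes only the pixels inside a fixed rectangle. This illustrates the threat model, not the paper’s defense; the rectangle placement, step count, and step size are assumptions:

```python
import torch
import torch.nn.functional as F

def rectangular_occlusion_attack(model, image, label, top, left, h=20, w=20,
                                 steps=30, step_size=0.1):
    """Illustrative sticker-style attack: optimize the pixels inside a fixed
    rectangle to maximize the classifier's loss, leaving the rest untouched.

    `image` is a (1, C, H, W) tensor in [0, 1]; the rectangle position
    (top, left, h, w) is chosen here only for illustration.
    """
    patch = image[:, :, top:top + h, left:left + w].clone().requires_grad_(True)
    for _ in range(steps):
        adv = image.clone()
        adv[:, :, top:top + h, left:left + w] = patch
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, patch)
        patch = (patch + step_size * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    adv = image.clone()
    adv[:, :, top:top + h, left:left + w] = patch.detach()
    return adv
```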

First author: Tong Wu

Twitter LinkedIn | GitHub | Website

3. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

We identify the security weakness of skip connections in ResNet-like neural networks.

Paper 

Left: Illustration of the last 3 skip connections (green lines) and residual modules (black boxes) of an ImageNet-trained ResNet-18. Right: The success rates (in the form of “white-box/black-box”) of adversarial attacks crafted using gradients flowing through either a skip connection (going upwards) or a residual module (going leftwards) at each junction point (circle). Three example backpropagation paths are highlighted in different colors: the green path, which skips over the last two residual modules, has the best attack success rate, while the red path, which goes through all 3 residual modules, has the worst. The attacks are crafted by BIM on 5000 ImageNet validation images under a maximum L∞ perturbation ε = 16 (pixel values are in [0, 255]). The black-box success rate is tested against a VGG19 target model.
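To see why the skip path matters, consider the gradient of a single residual block y = x + f(x): it splits into an identity term (the skip connection) and a Jacobian term (the residual module). The toy sketch below down-weights the residual term with a factor gamma, an illustrative knob in the spirit of the paper’s observation; the exact attack construction is described in the paper itself:

```python
import numpy as np

def residual_block_backward(grad_out, residual_jacobian, gamma=0.5):
    """Gradient of a residual block y = x + f(x) with respect to its input x.

    The true gradient is grad_out @ (I + J_f). Scaling the residual-branch
    term J_f by gamma < 1 makes the crafted perturbation rely more on the
    skip path, which the figure above suggests transfers better.
    """
    identity = np.eye(residual_jacobian.shape[0])
    return grad_out @ (identity + gamma * residual_jacobian)

# Toy usage: a 3-dimensional block with a random residual Jacobian.
grad_out = np.array([1.0, -0.5, 0.25])
J_f = np.random.randn(3, 3) * 0.1
grad_in = residual_block_backward(grad_out, J_f, gamma=0.5)
```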

First author: Dongxian Wu

Website

4. Enhancing Adversarial Defense by k-Winners-Take-All

We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks, using the k-winners-take-all activation function.

Paper | Code

1D illustration: a k-WTA model (blue curve) is fit to a set of points (red) sampled from a 1D function (green dotted curve). The resulting model is piecewise continuous, and the discontinuities can be dense.
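For reference, the k-WTA activation itself is simple to state: keep the k largest activations in a layer and zero out the rest. A minimal sketch follows (k is a hyperparameter; with ties, slightly more than k entries may survive):

```python
import numpy as np

def k_winners_take_all(x, k):
    """k-WTA activation: keep the k largest entries of x, zero out the rest.

    Unlike ReLU, the set of 'winning' units can change abruptly as the input
    moves, which is what makes gradient-based attacks harder to carry out.
    """
    x = np.asarray(x, dtype=float)
    if k >= x.size:
        return x.copy()
    threshold = np.partition(x, -k)[-k]   # value of the k-th largest entry
    return np.where(x >= threshold, x, 0.0)

# Toy usage: keep the top 2 activations of a 5-unit layer (-> keeps 2.5 and 0.7).
print(k_winners_take_all([0.3, -1.2, 2.5, 0.7, 0.1], k=2))
```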

First author: Chang Xiao

5. Real or Not Real, that is the Question

Generative Adversarial Networks (GANs) have been widely adopted across many topics. In the common setup, the discriminator outputs a scalar value. Here, a novel formulation is proposed in which the discriminator outputs a discrete distribution instead of a scalar.

Paper | Code

The perception of realness depends on various aspects. (a) Perceived as flawless by humans. (b) Potentially reduced realness due to: inharmonious facial structure/components, unnatural background, abnormal style combination, and texture distortion.
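Here is a minimal sketch of the core idea, with made-up names and anchor choices rather than the paper’s exact formulation: the discriminator head emits a distribution over several discrete “realness” levels, and the loss compares it to fixed anchor distributions for real and generated images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionDiscriminatorHead(nn.Module):
    """Illustrative discriminator head that outputs a discrete distribution
    over `num_outcomes` 'realness' levels instead of a single scalar."""

    def __init__(self, feature_dim, num_outcomes=10):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_outcomes)

    def forward(self, features):
        return F.log_softmax(self.fc(features), dim=-1)  # log-probabilities

def realness_loss(log_probs, anchor):
    """KL(anchor || predicted distribution), averaged over the batch.

    `anchor` is a fixed target distribution over the same outcomes, e.g. one
    anchor for real images and another for generated ones (assumed setup)."""
    return F.kl_div(log_probs, anchor.expand_as(log_probs), reduction="batchmean")

# Toy usage with a hypothetical anchor skewed toward "real".
head = DistributionDiscriminatorHead(feature_dim=128, num_outcomes=10)
features = torch.randn(4, 128)
real_anchor = torch.softmax(torch.linspace(-1, 1, 10), dim=0)
loss = realness_loss(head(features), real_anchor)
```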

First author: Yuanbo Xiangli

Website  

6. Adversarial Training and Provable Defenses: Bridging the Gap

We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10.

Paper 

An iteration of convex layerwise adversarial training. A latent adversarial example x'1 is found in the convex region C1(x) and propagated through the rest of the layers in a forward pass, shown with the blue line. During the backward pass, gradients are propagated through the same layers, shown with the red line. Note that the first convolutional layer does not receive any gradients.
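The mechanism in the figure can be sketched roughly as follows. The paper searches a convex relaxation C1(x) of the first block’s outputs; in this illustrative snippet, that region is crudely approximated by an L∞ ball around the frozen latent representation (an assumption), and only the later layers receive gradient updates:

```python
import torch
import torch.nn.functional as F

def latent_adversarial_step(first_block, rest, x, y, eps=0.1, steps=5, lr=1e-3):
    """One illustrative step of layerwise latent adversarial training.

    `first_block` and `rest` are nn.Module parts of the network. The convex
    region C1(x) from the paper is approximated here by an L-inf ball of
    radius `eps` around first_block(x). Only `rest` is updated, mirroring
    the figure: the first block receives no gradients in this phase.
    """
    with torch.no_grad():
        z0 = first_block(x)                       # frozen latent representation
    delta = torch.zeros_like(z0, requires_grad=True)
    for _ in range(steps):                        # find a latent adversarial example
        loss = F.cross_entropy(rest(z0 + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eps * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Train only the remaining layers on the latent adversarial example.
    opt = torch.optim.SGD(rest.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(rest(z0 + delta.detach()), y).backward()
    opt.step()
```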

First author: Mislav Balunovic

LinkedIn | GitHub | Website 

7. Optimal Strategies Against Generative Attacks

In the GAN community, defending against generative attacks is a topic of growing importance. Here, the authors formalize the problem and examine it in terms of the sample complexity and time budget available to the attacker. The problem concerns the falsification or modification of data for malicious purposes.

Paper | Code 

Game value (expected authentication accuracy) for the Gaussian case. (a) A comparison between empirical and theoretical game value for different d values (m = 1, k = 10). Solid lines describe the theoretical game values, whereas the * markers describe the empirical accuracy when learning with the GIM model. (b) Theoretical game value as a function of δ, ρ (see Corollary 4.3) for d = 100. (c) Empirical accuracy of an optimal authenticator against two attacks: the theoretically optimal attack G* from Theorem 4.2 and a maximum likelihood (ML) attack (see Sec. F.4) for the Gaussian case. It can be seen that the ML attack is inferior in that it results in better accuracy for the authenticator, as predicted by our theoretical results.

First author: Roy Mor

LinkedIn | GitHub

Summary

The depth and breadth of the ICLR publications are quite inspiring. This post focuses on generative models, which is only one of the areas discussed during the conference. As you can read in this analysis, ICLR covered these main topics:

  • Deep learning
  • Reinforcement learning
  • Generative models
  • Natural Language Processing/Understanding

To create a more complete overview of the top papers at ICLR, we are building a series of posts, each focused on one of the topics mentioned above. This is the third post in the series, so you may want to check out the previous ones as well.

Feel free to share with us other interesting papers on generative models. We would be happy to extend our list!

This article was originally written by Kamil Kaczmarek and posted on the Neptune blog. You can find more in-depth articles for machine learning practitioners there.