
This AI's line drawings amaze netizens: it doesn't even use a GAN

via: 博客园     time: 2020/12/20 15:23:18     reads: 117

Source: Qubit (量子位)

Jin Lei, from Aofei Temple

Report by Qubit, WeChat official account QbitAI

How good can an AI be at line drawings?

Given a photo of American actor Rami Malek, the result looks like this:


Pretty close to the original, isn't it?

Now let's see what happens with a group photo from Friends as input.


Even with many people in the frame, the line drawing still keeps the show's characters distinguishable.

And if the hair is especially thick, can the AI handle it?



All of these come from an AI called ArtLine.


You might assume a GAN is behind such vivid results.


But ArtLine doesn't use a GAN at all.


Which makes ArtLine's results all the more surprising.

So how does it work?

The author of ArtLine shared the three techniques behind it:

Self-Attention

Progressive Resizing

Generator Loss

Next, let's look at the details behind each technique.

The self-attention component draws on research co-authored two years ago by Ian Goodfellow, the father of GANs.


The author's explanation is that, on its own, this component didn't make much of a difference.

In that study, an attention mechanism is added to GAN image generation, and the SN-GAN idea (spectral normalization) is introduced into the generator.
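Spectral normalization is available directly in PyTorch. A one-line sketch of wrapping a generator convolution (the layer sizes here are illustrative, not ArtLine's actual configuration):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization constrains the layer's spectral norm (largest
# singular value of its weight), which stabilizes adversarial training --
# the SN-GAN idea mentioned above. Channel counts are arbitrary examples.
conv = spectral_norm(nn.Conv2d(64, 64, kernel_size=3, padding=1))

x = torch.randn(1, 64, 8, 8)
out = conv(x)  # used like any ordinary convolution
```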

This addresses some problems of traditional GANs, such as:

Small convolution kernels make it hard to capture long-range dependencies in an image

Large convolution kernels sacrifice the parameter and computational efficiency of a convolutional network

The core self-attention mechanism of that study is shown in the figure below.


Here, f(x), g(x), and h(x) are all ordinary 1×1 convolutions, differing only in their number of output channels.

The output of f(x) is transposed, multiplied with the output of g(x), and normalized with softmax to produce an attention map.

This attention map is then multiplied with h(x), position by position, to obtain the adaptive attention feature maps.
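The steps above can be sketched in PyTorch as a SAGAN-style self-attention block (a minimal illustration; the channel-reduction factor and initialization are common choices from the paper, not ArtLine's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial positions of a feature map."""

    def __init__(self, channels):
        super().__init__()
        # f, g, h are ordinary 1x1 convolutions; f and g reduce channels
        self.f = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.g = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.h = nn.Conv2d(channels, channels, kernel_size=1)
        # learned blend weight, starts at 0 so the block begins as identity
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, height, width = x.shape
        n = height * width  # number of spatial positions
        f = self.f(x).view(b, -1, n)  # (b, c/8, n)
        g = self.g(x).view(b, -1, n)  # (b, c/8, n)
        h = self.h(x).view(b, -1, n)  # (b, c,   n)
        # transpose f, multiply with g, softmax -> (n x n) attention map
        attn = F.softmax(torch.bmm(f.transpose(1, 2), g), dim=1)
        # weight h by the attention map -> adaptive attention feature maps
        out = torch.bmm(h, attn).view(b, c, height, width)
        return self.gamma * out + x  # residual connection
```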


The results showed that introducing the self-attention mechanism improved both FID and IS (Inception Score).

ArtLine's second technique is inspired by a 2018 study from NVIDIA.


That work proposed a new way of training generative adversarial networks.

The core idea is to train the generator and discriminator progressively: start at low resolution, then add new layers that refine detail as training proceeds.


This both speeds up and stabilizes training, yielding higher-quality images.
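A minimal sketch of the progressive-resizing idea in PyTorch, with a toy fully convolutional model and random data standing in for a real network and dataset. (NVIDIA's method additionally grows new layers as resolution increases; this sketch only shows the low-to-high resolution schedule.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy fully convolutional model -- works at any input resolution.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

full_res = torch.rand(4, 3, 64, 64)  # stand-in for a real image batch

# Train at coarse resolution first, then progressively finer.
for size in (16, 32, 64):
    batch = F.interpolate(full_res, size=(size, size),
                          mode="bilinear", align_corners=False)
    for _ in range(10):
        opt.zero_grad()
        loss = F.mse_loss(model(batch), batch)  # toy reconstruction loss
        loss.backward()
        opt.step()
```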

The last technique comes from a 2016 study by Fei-Fei Li's team at Stanford University.


That research mainly tackles how time-consuming style transfer is.


The model splits into two parts: an image transformation network on the left and a loss network on the right.

Super-resolution reconstruction uses the same overall model, with a slightly different image transformation network inside.

Compared with prior work, the network's results are comparable in quality, while it runs anywhere from a hundred to a thousand times faster.
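The loss-network idea can be sketched as a perceptual loss: compare images in the feature space of a frozen, pretrained network rather than pixel by pixel, as in the 2016 paper. This is an illustrative sketch, not ArtLine's exact loss; by default it uses the early layers of a pretrained VGG-16 (downloads weights), and the `feature_net` parameter is my own addition so any feature extractor can be substituted:

```python
import torch
import torch.nn as nn

class PerceptualLoss(nn.Module):
    """Feature-space (perceptual) loss with a frozen loss network."""

    def __init__(self, feature_net=None):
        super().__init__()
        if feature_net is None:
            # Early layers of a pretrained VGG-16 serve as the loss network.
            from torchvision.models import vgg16, VGG16_Weights
            feature_net = vgg16(weights=VGG16_Weights.DEFAULT).features[:9]
        for p in feature_net.parameters():
            p.requires_grad_(False)  # the loss network stays fixed
        self.features = feature_net.eval()

    def forward(self, generated, target):
        # Compare feature maps, not raw pixels.
        return nn.functional.mse_loss(self.features(generated),
                                      self.features(target))
```

In training, only the image transformation network is updated; gradients flow through the frozen loss network but never change it.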


About the author


The ArtLine project's author is Vijish Madhavan.

On GitHub, he admits he is not a professional programmer, and he points out some of ArtLine's shortcomings, such as unsatisfactory results on images smaller than 500px.

You can now try ArtLine online!

Colab link:

https://colab.research.google.com/github/vijishmadhavan/Light-Up/blob/master/ArtLine(Try_It_On_Colab).ipynb


GitHub project address:

