Many machine learning systems look at some kind of complicated input (say, an image) and produce a simple output (a label like "cat"). By contrast, the goal of a generative model is something like the opposite: take a small piece of input, perhaps a few random numbers, and produce a complex output, like an image of a realistic-looking face. A generative adversarial network (GAN) is an especially effective type of generative model, introduced only a few years ago, which has been a subject of intense interest in the machine learning community.

You might wonder why we want a system that produces realistic images, or plausible simulations of any other kind of data. Besides the intrinsic intellectual challenge, this turns out to be a surprisingly handy tool, with applications ranging from art to enhancing blurry images. The idea of a machine "creating" realistic images from scratch can seem like magic, but GANs use two key tricks to turn a vague, seemingly impossible goal into reality.

The first idea, not new to GANs, is to use randomness as an ingredient. At a basic level, this makes sense: it wouldn't be very exciting if you built a system that produced the same face each time it ran. Just as important, though, is that thinking in terms of probabilities also helps us translate the problem of generating images into a natural mathematical framework. We obviously don't want to pick images uniformly at random, since that would just produce noise. Instead, we want our system to learn which images are likely to be faces, and which aren't. Mathematically, this involves modeling a probability distribution on images, that is, a function that tells us which images are likely to be faces and which aren't. This type of problem, modeling a function on a high-dimensional space, is exactly the sort of thing neural networks are made for.

The big insight that defines a GAN is to set up this modeling problem as a kind of contest. This is where the "adversarial" part of the name comes from. The key idea is to build not one, but two competing networks: a generator and a discriminator. The generator tries to create random synthetic outputs (for instance, images of faces), while the discriminator tries to tell these apart from real outputs (say, a database of celebrities). The hope is that as the two networks face off, they'll both get better and better, with the end result being a generator network that produces realistic outputs.
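The generator-versus-discriminator contest can be sketched in a few lines of code. This is a minimal, hypothetical illustration using numpy, not a full GAN: the "networks" here are just a fixed linear map and a logistic score on 1-D data, and no training is shown. It only demonstrates how the two opposing objectives are computed: the discriminator is rewarded for telling real samples from fakes, while the generator is rewarded for fooling it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "generator": maps random noise to 1-D samples.
# Here it is a fixed linear map; a real GAN would learn these weights.
def generator(z, w=0.5, b=0.0):
    return w * z + b

# Toy "discriminator": logistic score, the estimated probability
# that a given sample came from the real data.
def discriminator(x, u=1.0, c=0.0):
    return sigmoid(u * x + c)

# "Real" data: samples from the distribution the generator should imitate.
x_real = rng.normal(loc=4.0, scale=1.25, size=64)

# Fake data: generator output on random noise.
z = rng.normal(size=64)
x_fake = generator(z)

# The discriminator wants D(real) -> 1 and D(fake) -> 0 ...
d_loss = -np.mean(np.log(discriminator(x_real))
                  + np.log(1.0 - discriminator(x_fake)))

# ... while the generator wants D(fake) -> 1, so the objectives compete.
g_loss = -np.mean(np.log(discriminator(x_fake)))

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss:     {g_loss:.3f}")
```

In an actual GAN, both maps would be neural networks, and training would alternate gradient steps that lower `d_loss` with respect to the discriminator's weights and `g_loss` with respect to the generator's weights.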