Image-to-image translation with GANs

Nicolas Beuve

Feb. 18, 2021

Zoom

Image-to-image translation is a field concerned with transposing images from one representation to another, like generating an aerial map of a region from a photograph. Results in this field have improved greatly since the arrival of GAN models in 2014. GANs (Generative Adversarial Nets) are neural networks specialized in sample generation. Applied to images, these models can generate convincing samples that resemble images from a reference dataset while remaining completely original.
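The adversarial idea behind GANs can be sketched with a toy example: a discriminator is trained to score real samples as 1 and generated ("fake") samples as 0, while the generator is trained to make the discriminator score its outputs as real. The snippet below is a minimal illustration of these two losses only, using NumPy, a fixed linear scorer as a stand-in discriminator, and synthetic data; all names and values are hypothetical, not from the talk or the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in discriminator: a fixed linear scorer (hypothetical, no training here).
w = rng.normal(size=3)

def discriminator(x):
    return sigmoid(x @ w)  # probability that x is a real sample

# Toy "real" samples and toy generator outputs (both synthetic).
real = rng.normal(loc=1.0, size=(4, 3))
fake = rng.normal(loc=-1.0, size=(4, 3))

# Discriminator objective: label real samples 1 and fake samples 0.
d_loss = -np.mean(np.log(discriminator(real)) + np.log(1 - discriminator(fake)))

# Generator objective (non-saturating form): make fakes look real to D.
g_loss = -np.mean(np.log(discriminator(fake)))

print(d_loss, g_loss)
```

In an actual GAN, both networks are deep models updated in alternation: one gradient step lowering `d_loss`, one lowering `g_loss`, until the generated samples are hard to tell apart from the dataset.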


Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets.

Mehdi Mirza and Simon Osindero. 2014. Conditional Generative Adversarial Nets.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2018. Image-to-Image Translation with Conditional Adversarial Networks.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2020. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks.