― Paper Details ―

Abstract ―

In the expanding domain of deep generative modelling, two families of models dominate today: those based on Generative Adversarial Networks (GANs) and those based on Variational Autoencoders (VAEs), each with its own strengths and weaknesses for generating and editing images. GANs, which train adversarially using a generator and a discriminator, produce photorealistic, high-resolution images; VAEs, by contrast, generate images through a probabilistic encoder and decoder built on deep neural networks, learning a smooth latent space well suited to interpolation, inference, and sampling, albeit with blurrier outputs (Karras et al., 2019). In this survey, we cover the core architectural structures of GANs and VAEs and examine their milestones, advances, and the engineering tools they provide for editing and generating images. We establish a framework that delineates the pros and cons of each approach and answers the question of which applications each is best suited to; we address the important open problems of mode collapse, training instability, and learning disentangled latent representations; we examine emerging trends, including hybrid GAN-VAE models and diffusion models, noting that the boundary between the two approaches remains blurred; and we conclude by looking toward future research opportunities for GANs and VAEs concerning controllability, fidelity, and ethical practice in the use of these state-of-the-art methods to drive future AI-powered image generation and editing.
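The contrast drawn in the abstract, a GAN generator that maps random noise directly to an image versus a VAE that samples from a learned probabilistic latent space, can be illustrated with a minimal NumPy sketch. All dimensions and the linear "networks" below are illustrative placeholders, not the actual architectures surveyed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# GAN side: a generator maps random noise z to an image-shaped output.
# (A single linear map stands in for a deep generator network.)
def generator(z, W):
    return np.tanh(z @ W)  # fake "image" with values in [-1, 1]

# VAE side: the encoder outputs a mean and log-variance per latent
# dimension; the reparameterisation trick draws a differentiable sample,
# which is what makes the latent space smooth and easy to interpolate.
def reparameterize(mu, log_var, rng):
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

latent_dim, image_dim = 8, 16
W = rng.standard_normal((latent_dim, image_dim))

z = rng.standard_normal((1, latent_dim))   # noise input for the GAN generator
fake_image = generator(z, W)

mu = np.zeros((1, latent_dim))             # placeholder encoder outputs
log_var = np.zeros((1, latent_dim))
z_a = reparameterize(mu, log_var, rng)
z_b = reparameterize(mu, log_var, rng)

# Smooth-latent-space property: linear interpolation between two codes
# yields another valid latent code, which a decoder could render.
z_mid = 0.5 * z_a + 0.5 * z_b
```

The sketch highlights the structural difference only: a GAN's latent code exists purely as generator input, while a VAE's latent code is an explicit random variable, which is why VAEs support principled interpolation and inference at the cost of sharper outputs.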

Keywords ―

Generative Adversarial Networks, Variational Autoencoders, Deep Neural Networks, Latent Space Manipulation, Deep Learning, High-Fidelity Image Generation, Controllable Image Generation.

Cite this Publication ―

Pavan Kumar Pativada, Rahul Karne, and Akhil Dudhipala (2025), Exploring GANs and VAEs: Advances and Challenges in Image Synthesis and Editing. Multidisciplinary International Journal of Research and Development (MIJRD), Volume: 04 Issue: 04, Pages: 131-140. https://www.mijrd.com/papers/v4/i4/MIJRDV4I40015.pdf