Conditional Generative Adversarial Networks and Deep Learning Data Augmentation: A Multi-Perspective Data-Driven Survey Across Multiple Application Fields and Classification Architectures
Type
Article
Abstract
Training deep learning models effectively relies heavily on large datasets, as too few instances can hinder model generalization. A simple yet effective way to address this is to apply modern deep learning augmentation methods, which synthesize new data that match the input distribution while preserving semantic content. Although these methods produce realistic samples, important questions persist about how well they generalize across different classification architectures and how much they actually improve accuracy. Furthermore, the relationship between dataset size and model accuracy, as well as the determination of an optimal augmentation level, remains an open question in the field. To address these challenges, in this paper we investigate the effectiveness of eight data augmentation methods (StyleGAN3, DCGAN, SAGAN, RandAugment, Random Erasing, AutoAugment, TrivialAugment, and AugMix) across several classification networks of varying depth: ResNet18, ConvNeXt-Nano, DenseNet121, and InceptionResNetV2. By comparing their performance on diverse datasets covering leaf textures, medical imaging, and remote sensing, we assess which methods offer superior accuracy and generalization when training models without pre-trained weights. Our findings indicate that deep learning data augmentation is an effective tool for dealing with small datasets, yielding accuracy gains of up to 17%.
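The record does not include the authors' training code. The sketch below only illustrates, under assumed defaults, how the non-GAN augmentation policies named in the abstract (RandAugment, AutoAugment, TrivialAugment, AugMix, Random Erasing) could be instantiated with torchvision, and how one of the evaluated classifiers could be created without pre-trained weights via timm. The dataset path, image size, class count, and batch size are placeholders, not values taken from the paper.

```python
# Minimal sketch (not the authors' pipeline): torchvision augmentation policies
# and a from-scratch classifier built with timm. Placeholder paths/hyperparameters.
import timm
import torch
from torchvision import datasets, transforms

IMG_SIZE = 224  # assumed input resolution, not stated in this record


def make_transform(policy: str) -> transforms.Compose:
    """Return a train-time transform for one augmentation policy."""
    pixel_ops = {
        "randaugment": transforms.RandAugment(),
        "autoaugment": transforms.AutoAugment(transforms.AutoAugmentPolicy.IMAGENET),
        "trivialaugment": transforms.TrivialAugmentWide(),
        "augmix": transforms.AugMix(),
    }
    steps = [
        transforms.Resize((IMG_SIZE, IMG_SIZE)),
        # Fall back to an identity op for the baseline or tensor-level policies.
        pixel_ops.get(policy, transforms.Lambda(lambda img: img)),
        transforms.ToTensor(),
    ]
    if policy == "random_erasing":
        # Random Erasing operates on tensors, so it is applied after ToTensor.
        steps.append(transforms.RandomErasing(p=0.5))
    return transforms.Compose(steps)


# Classifier trained without pre-trained weights, as described in the abstract.
model = timm.create_model("resnet18", pretrained=False, num_classes=10)  # placeholder class count
train_ds = datasets.ImageFolder("data/train", transform=make_transform("randaugment"))
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)
```

Swapping `"resnet18"` for other timm model names (e.g. `"densenet121"`) and iterating over the policy keys would reproduce the kind of grid comparison the abstract describes; the GAN-based methods (StyleGAN3, DCGAN, SAGAN) require separately trained generators and are not shown here.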
Keywords
conditional generative adversarial networks, data augmentation, deep learning, deep neural networks, image processing
Language
English
Citation
AI (Switzerland), v. 6, n. 2, 2025.