
Supervised Contrastive Generative Adversarial Networks

Honglei Gu 1,*
1 The Experimental High School Attached to Beijing Normal University, Beijing 100032, China

* Author to whom correspondence should be addressed.

Theoretical and Natural Science, Vol. 5, 234-239
Published 25 May 2023. © 2023 The Author(s). Published by EWA Publishing
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation: Honglei Gu. Supervised Contrastive Generative Adversarial Networks. TNS (2023) Vol. 5: 234-239. DOI: 10.54254/2753-8818/5/20230428.

Abstract

Generative Adversarial Networks (GANs) are becoming increasingly popular: artists use them to find inspiration, computer scientists use them for data synthesis, engineers use them for machine fault diagnosis, and so on. Despite their popularity, however, GANs are flawed: they are unstable. GANs are grounded in game theory; in a typical GAN, the generator and the discriminator both improve by competing with each other. In this highly competitive training process, GANs can easily run into trouble on the way to the optimal solution. In most cases, such instability arises from the loss function, or more precisely, from the gradient of the loss function. This research proposes a new GAN formulation that replaces the objective function with SupCon, the supervised contrastive loss, to address these gradient-related problems. We also prove that, under our model, GANs are less likely to suffer from these sources of instability. Finally, we compare our model with traditional generative adversarial networks.
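Since the abstract hinges on substituting the supervised contrastive (SupCon) loss of Khosla et al. (2020) for the standard GAN objective, a minimal sketch of that loss may help. The PyTorch implementation below, including the function name, the temperature value, and the real-versus-generated labeling in the usage example, is an illustrative assumption rather than the paper's actual code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """SupCon loss over a batch: features (N, D), labels (N,) integer class ids."""
    features = F.normalize(features, dim=1)          # project embeddings onto the unit sphere
    sim = features @ features.T / temperature        # temperature-scaled pairwise similarities
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude each anchor's self-comparison
    # Positives for each anchor: the other samples that share its label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Zero out non-positive entries (including the -inf diagonal) before summing.
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)    # guard anchors with no positives
    return (-pos_log_prob.sum(dim=1) / pos_counts).mean()

# Illustrative usage: treating real vs. generated samples as the two classes
# that the discriminator's embedding head must separate.
embeddings = torch.randn(8, 128)                     # stand-in for discriminator features
labels = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0])      # 1 = real, 0 = generated
print(supcon_loss(embeddings, labels))
```

Because every term in this loss is a log-softmax over bounded similarities, its gradients stay finite even when the two classes are well separated, which is the gradient-related failure mode of the standard GAN objective that the abstract alludes to.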

Keywords

Generative adversarial networks, contrastive learning, 2C loss, machine learning


Data Availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Authors who publish in this series agree to the following terms:

1. Authors retain copyright and grant the series the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of its authorship and initial publication in this series.

2. Authors may enter into separate, additional contractual arrangements for the non-exclusive distribution of the version of the work published in this series (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgment of its initial publication in this series.

3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) prior to and during the submission process, as this can lead to productive exchanges as well as earlier and greater citation of the published work (see the Open Access Instruction).

Volume Title: Proceedings of the 2nd International Conference on Computing Innovation and Applied Physics (CONF-CIAP 2023)
ISBN (Print): 978-1-915371-53-9
ISBN (Online): 978-1-915371-54-6
Published Date: 25 May 2023
Series: Theoretical and Natural Science
ISSN (Print): 2753-8818
ISSN (Online): 2753-8826
DOI: 10.54254/2753-8818/5/20230428
Copyright: © 2023 The Author(s)
Open Access: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
