The use of the ChatGPT language model in the creation of malicious programs


  • Vladica Ubavić, Republic Geodetic Authority, Belgrade, Serbia
  • Marina Jovanović-Milenković, Educons University, Faculty of Project and Innovation Management, Belgrade, Serbia
  • Oliver Popović, Toplica Academy of Applied Studies, Department of Business Studies, Blace, Serbia
  • Marija Boranijašević, Academy of Applied Technical and Preschool Studies, Niš, Serbia



Keywords: ChatGPT, Cybersecurity, Malicious tools, Artificial intelligence


ChatGPT is a chatbot developed by the company OpenAI that serves as an interface to a large language model (LLM). Although its primary function is to mimic a human interlocutor in conversation, ChatGPT is also capable of handling unusual requests: it can generate computer programs, answer exam questions, and write poetry and song lyrics. Analysis of several hacking communities shows that cybercriminals are already using ChatGPT to develop malicious tools. This paper examines whether malicious programs can be generated using the ChatGPT language model. The authors show that ChatGPT can be exploited to generate a script usable for brute-force attacks.
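The abstract does not reproduce the generated script itself, but the kind of routine it describes can be illustrated with a minimal, benign sketch: an exhaustive (brute-force) search for a short secret, here run entirely locally against a SHA-256 hash rather than against any live system. All names (`brute_force`, `target`, the alphabet and length limit) are illustrative assumptions, not the authors' code.

```python
import hashlib
import string
from itertools import product

# Local target: the SHA-256 digest of a short secret we will recover.
target = hashlib.sha256(b"ab1").hexdigest()

def brute_force(target_hex, max_len=3):
    """Try every lowercase/digit candidate up to max_len characters;
    return the candidate whose SHA-256 digest matches, or None."""
    alphabet = string.ascii_lowercase + string.digits
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

print(brute_force(target))  # prints "ab1"
```

The point of the sketch is only the search structure: the candidate space grows as |alphabet|^length, which is why such scripts are trivial to write yet only effective against short or weak secrets.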








How to Cite

Ubavić, V., Jovanović-Milenković, M., Popović, O., & Boranijašević, M. (2023). The use of the ChatGPT language model in the creation of malicious programs. BizInfo (Blace) Journal of Economics, Management and Informatics, 14(2), 127–136.