The use of the ChatGPT language model in the creation of malicious programs
DOI: https://doi.org/10.5937/bizinfo2302127U

Keywords: ChatGPT, cybersecurity, malicious tools, artificial intelligence

Abstract
ChatGPT is a new chatbot developed by the company OpenAI that serves as an interface to a Large Language Model (LLM). Although its primary function is to mimic a human interlocutor in conversation, ChatGPT is also capable of handling unusual requests: it can generate computer programs, answer exam questions, and write poetry and song lyrics. An analysis of multiple hacking communities shows that cybercriminals are already using ChatGPT to develop malicious tools. This paper examines whether malicious programs can be generated with the ChatGPT language model. The authors show that ChatGPT can be exploited to generate a script usable in brute-force attacks.
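The abstract does not reproduce the generated script itself. As an illustration of the class of program it describes, a minimal brute-force sketch might look like the following: it exhaustively tries short lowercase candidates against a locally computed SHA-256 hash (the function name and parameters are this sketch's own, not taken from the paper, and no live system is targeted).

```python
import hashlib
import itertools
import string

def brute_force_sha256(target_hash: str, max_length: int = 4):
    """Try every lowercase candidate up to max_length and return the
    one whose SHA-256 digest matches target_hash, or None."""
    for length in range(1, max_length + 1):
        for candidate in itertools.product(string.ascii_lowercase, repeat=length):
            word = "".join(candidate)
            if hashlib.sha256(word.encode()).hexdigest() == target_hash:
                return word
    return None

# Recover a 3-letter secret from its own hash.
secret_hash = hashlib.sha256(b"cat").hexdigest()
print(brute_force_sha256(secret_hash))  # → cat
```

The search space grows exponentially with candidate length (26^n for lowercase letters), which is why real attacks of this kind are typically rate-limited or dictionary-driven rather than fully exhaustive.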
License
Copyright (c) 2023 BizInfo (Blace) Journal of Economics, Management and Informatics
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.