Researchers discovered that cybercriminals began using the ChatGPT Artificial Intelligence (AI) tool, developed by OpenAI, to recreate malware strains and execute malicious software attacks.
ChatGPT is an artificial intelligence chatbot developed by OpenAI, trained to hold text conversations. It is based on the GPT-3.5 language model, and in recent days it has surprised users with the naturalness of its responses and its ability to generate and link ideas and recall earlier parts of a conversation.
To use this chatbot, all that is needed is an OpenAI account, through which it can be accessed for free. This easy accessibility is what put cybersecurity companies, such as Check Point, on alert.
At the end of last December, given the growing interest in ChatGPT, the technical director of Check Point Software, Eusebio Nieva, warned that “anyone with minimal resources and zero knowledge of code can easily exploit it”.
Now, a group of researchers from this cybersecurity company has found indications that malicious actors are using ChatGPT to run malware campaigns following methods commonly used in this type of operation, which the company details in a post on its official blog.
Check Point found further evidence of ChatGPT misuse by this threat actor in the creation of a Java snippet that downloads PuTTY, a common SSH client, and runs it covertly using PowerShell.
The cybersecurity company commented that earlier this year several threat actors discussed abusing ChatGPT on this type of forum. Cybercriminals then became more interested in generating counterfeit art with DALL·E 2 and selling it through legitimate platforms such as Etsy.
Source: IT User