Cybercriminals use ChatGPT to create hacking tools and write code

Both expert and novice cybercriminals have already started using OpenAI's chatbot ChatGPT to create hacking tools, security analysts say.

In one documented example, the Israeli security firm Check Point spotted a thread on a popular underground hacking forum in which a hacker said he was experimenting with the popular AI chatbot to "emulate malware strains."

The hacker then compressed the Android malware written with ChatGPT and shared it on the internet. The malware could steal files of interest, Forbes reports.

The same hacker showed off another tool that installed a backdoor on a computer and could infect a PC with further malware.

Check Point noted in its report that some hackers are using ChatGPT to create their first scripts. On the same forum, another user shared Python code that he said could encrypt files and had been written with ChatGPT. It was, he said, the first script he had ever written.

While such code can be used for benign purposes, Check Point said it "could be easily modified to completely encrypt someone's computer without user interaction."

The security firm stressed that while ChatGPT-coded hacking tools seem "pretty simple," it's "only a matter of time before more sophisticated threat actors improve the way they use AI-based tools for evil."

A third case of ChatGPT being used for fraudulent activity reported by Check Point involved a cybercriminal who demonstrated that it was possible to build a dark web marketplace with the AI chatbot. The hacker posted on the underground forum that he had used ChatGPT to write code that calls a third-party API to fetch current cryptocurrency prices for the marketplace's payment system.
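Check Point did not publish the hacker's code or name the API involved, but the kind of price-lookup snippet described is trivial to produce. As a minimal illustrative sketch only, here is what such code typically looks like, using the public CoinGecko endpoint as a stand-in for whatever third-party API was actually used (the function names are ours, not from the forum post):

```python
import json
import urllib.request

# Public price API used as a stand-in; the actual service the hacker
# queried was not disclosed in Check Point's report.
API_URL = "https://api.coingecko.com/api/v3/simple/price"

def price_url(coin: str, currency: str = "usd") -> str:
    """Build the query URL for one coin's spot price in one currency."""
    return f"{API_URL}?ids={coin}&vs_currencies={currency}"

def fetch_price(coin: str, currency: str = "usd") -> float:
    """Fetch the current price of `coin` denominated in `currency`."""
    with urllib.request.urlopen(price_url(coin, currency)) as resp:
        data = json.load(resp)  # e.g. {"bitcoin": {"usd": 43000.0}}
    return data[coin][currency]
```

A few lines like these, which any introductory tutorial also covers, are all the "payment system" integration Check Point described amounts to; the concern is less the code itself than how easily a novice can assemble it.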

ChatGPT’s developer, OpenAI, has implemented some controls that block blatant requests for the AI to create spyware. However, the chatbot came under even more scrutiny after security analysts and journalists found it could write grammatically correct phishing emails without typos.

OpenAI did not immediately respond to a request for comment.
