Meta's security team is tracking a surge of fake ChatGPT malware built to hijack user accounts and take over business pages.
In the company’s new Q1 Security Report, Meta shares that malware operators and spammers track trends and high-engagement topics that grab people’s attention. Of course, the biggest tech trend right now is AI chatbots like ChatGPT, Bing and Bard, so tricking users into trying out a fake version is all the rage now – sorry, crypto.
Meta security analysts have found around 10 strains of malware impersonating ChatGPT and similar AI chatbot tools since March. Some pose as web browser extensions and (classic) toolbars – some even available through official web stores, which Meta did not name. The Washington Post reported last month that these fake ChatGPT scams also spread through Facebook ads.
Some of these malicious ChatGPT tools even include working AI features to pass as a legitimate chatbot. Meta says it has blocked over 1,000 unique links to the discovered malware that were shared on its platforms. The company also shared technical details on how scammers gain access to accounts, which includes hijacking logged-in sessions and maintaining access – a method similar to the one that brought down Linus Tech Tips.
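Meta doesn't publish code for these attacks, but the core idea of session hijacking is simple: after login, the session token alone authorizes every request, so malware that steals the token inherits the login without ever needing the password. A toy Python sketch (all names hypothetical, not Meta's systems):

```python
# Toy sketch: why a stolen session cookie is enough to take over an account.
# A server that checks only the session token cannot tell the real user
# from malware replaying the same token from another machine.

import secrets

sessions = {}  # token -> username (server-side session store)

def log_in(username: str, password: str) -> str:
    # The password is checked only once, at login (check elided here).
    token = secrets.token_hex(16)
    sessions[token] = username
    return token  # the browser stores this as a cookie

def handle_request(token: str) -> str:
    # Every subsequent request is authorized by the token alone.
    user = sessions.get(token)
    return f"hello {user}" if user else "401 Unauthorized"

# Legitimate browser logs in and gets a cookie:
cookie = log_in("page_admin", "hunter2")
print(handle_request(cookie))   # authorized as page_admin

# Malware that exfiltrated the cookie needs no password at all:
print(handle_request(cookie))   # same access, indistinguishable
```

This is also why "maintaining access" works even after the victim changes their password: unless the server invalidates outstanding session tokens, the stolen one keeps working.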
For businesses that have been hacked or locked out on Facebook, Meta is introducing a new support flow to help them recover and regain access to their accounts. Business pages are usually compromised when the individual Facebook users who manage them are targeted by malware.
Meta is also rolling out new Meta work accounts, which support existing, generally more secure Single Sign-On (SSO) services from organizations and aren't tied to a personal Facebook account at all. Once a business migrates, the hope is that it will be much harder for malware like these fake ChatGPT tools to compromise its accounts.