Threat Actor Allegedly Offers 20 Million Logins for Sale in OpenAI Data Breach
Significant security problems involving OpenAI’s ChatGPT platform have been reported recently, with compromised user credentials sold on the dark web. More than 225,000 ChatGPT account credentials were found for sale on underground markets between January and October 2023, sourced primarily from information-stealing malware such as RedLine, Raccoon, and LummaC2.
In a separate incident, security researchers discovered more than 200,000 compromised OpenAI credentials for sale on the dark web. These logins gave unauthorized users access to ChatGPT’s premium features and exposed sensitive user data, including business plans, trade secrets, and source code. The compromises were attributed to malware-based log harvesting rather than a direct breach of OpenAI’s infrastructure.
OpenAI has acknowledged these security concerns, confirming that stolen login credentials led to unauthorized access and misuse of user accounts. The company stressed that external malware infections, not flaws in ChatGPT’s infrastructure, caused the intrusions. Users are advised to exercise caution on AI platforms, avoid sharing sensitive information, and adopt strong security measures such as enabling two-factor authentication and using strong, unique passwords.
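As a practical illustration of the password guidance above, here is a minimal Python sketch (an illustrative example, not an OpenAI tool) that generates a random password with the standard-library secrets module and checks it against the public Have I Been Pwned "Pwned Passwords" range API. The check uses k-anonymity: only the first five characters of the password's SHA-1 hash are sent over the network.

    import hashlib
    import secrets
    import string
    import urllib.request

    def generate_password(length: int = 20) -> str:
        """Generate a strong, random password from letters, digits, and symbols."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def times_pwned(password: str) -> int:
        """Return how many times the password appears in known breach corpora."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # Only the 5-character hash prefix is sent; matching suffixes come back.
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "password-hygiene-example"},
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    if __name__ == "__main__":
        pw = generate_password()
        print(f"Generated password appears in {times_pwned(pw)} known breach records.")

A freshly generated random password should report zero matches; a reused or common password typically will not, which is exactly why unique passwords (ideally stored in a password manager) blunt the impact of infostealer logs traded on underground markets.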
These incidents underscore how crucial it is to maintain diligent cybersecurity practices, particularly when using AI-driven platforms. To protect their personal information, users should remain proactive and stay alert to potential threats.