Hackers Devise Way to Bypass ChatGPT Restrictions and Sell It as a Service

Hackers have devised a way to bypass ChatGPT’s restrictions and are using it to sell services that allow people to create malware and phishing emails, researchers said on Wednesday.

ChatGPT is a chatbot that uses artificial intelligence to answer questions and perform tasks in a way that mimics human output.

A user on one forum is now selling a service that combines OpenAI's API, which enforces fewer content restrictions than the chatbot's web interface, with the Telegram messaging app. The result: a phishing email and a script that steals PDF documents from an infected computer and sends them to an attacker through FTP.

ChatGPT Can Be Broken by Entering These Strange Words, And Nobody Is Sure Why

Reddit usernames like “SolidGoldMagikarp” are somehow causing the chatbot to give bizarre responses.

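For context, one trait these strings share is that the byte-pair-encoding tokenizer behind GPT-2 and the original GPT-3 models stores them as single, rarely seen tokens rather than splitting them into common pieces, which researchers suspect left the model with almost no training signal for them. A quick check with OpenAI's tiktoken library illustrates this; the ordinary phrase is just an arbitrary baseline for comparison:

```python
import tiktoken

# r50k_base is the byte-pair-encoding vocabulary used by GPT-2 and the
# original GPT-3 models, where the anomalous strings were first noticed.
enc = tiktoken.get_encoding("r50k_base")

for text in [" SolidGoldMagikarp", " hello world"]:
    ids = enc.encode(text)
    print(f"{text!r} -> {len(ids)} token(s): {ids}")

# The leading space matters: ' SolidGoldMagikarp' encodes to a single
# token (id 43453), while the ordinary phrase splits into several tokens.
```

Because such tokens almost never appeared during training, the model's behavior when asked to repeat or define them is effectively undefined, which is consistent with the bizarre responses described above.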


ChatGPT’s ‘jailbreak’ tries to make the A.I. break its own rules, or die

Reddit users have tried to force OpenAI’s ChatGPT to violate its own rules on violent content and political commentary, with an alter ego named DAN (“Do Anything Now”).

Continue reading on CNBC


Jailbreak Trick Breaks ChatGPT Content Safeguards

Jailbreak command creates ChatGPT alter ego DAN, willing to produce content outside its own restriction controls.

Continue reading on Dark Reading