From www.bleepingcomputer.com: A ChatGPT vulnerability known as “Time Bandit” allows users to bypass OpenAI’s safety guidelines to obtain detailed instructions on sensitive topics, including weapons and malware creation.

Although the issue was disclosed to OpenAI and other agencies, it remains largely unresolved; the jailbreak still works even after partial mitigations.
