
Jailbreaking refers to techniques users employ to bypass ChatGPT's safety guardrails and get it to produce otherwise restricted content.
OpenAI continuously patches known jailbreaks. Intentional misuse violates OpenAI's terms of service and may result in account termination.