InjectPrompt


ChatGPT

ChatGPT 5.2 DAN Jailbreak - Whitepaper
DAN returns in ChatGPT's most secure model!
Dec 23
ChatGPT 5.1 Jailbreak - ASCII Obfuscation
Use ASCII Art to throw ChatGPT off the scent trail...
Nov 13
Gemini 2.5 Flash Jailbreak - LGBT-Break
You can use political correctness as a weapon to overpower LLMs...
Sep 3 • David Willis-Owen
ChatGPT 5 Jailbreak - Narrative Tool Injection
Misdirect ChatGPT 5 and inject a malicious narrative-writing tool
Aug 19
ChatGPT o3 Jailbreak - Rebel Research
A partial Universal Jailbreak for OpenAI's most complex model
May 16 • David Willis-Owen
ChatGPT 4o Jailbreak - Criminal POV
Use a hypothetical criminal's point of view to jailbreak ChatGPT into explaining crimes
Apr 28
ChatGPT o3/o4-mini Jailbreak - Narrative Tool Injection
Trick ChatGPT into thinking it has a trusted function to write dangerous narratives
Apr 17 • David Willis-Owen
ChatGPT 4o Jailbreak - Castle Narrative
Get ChatGPT to describe restricted object/substance instructions in a 3-sentence prompt
Apr 11
ChatGPT 4o System Prompt
System Prompt for ChatGPT's flagship quick-response model
Apr 11 • David Willis-Owen
© 2025 David Willis-Owen · Privacy ∙ Terms ∙ Collection notice