Introducing InjectPrompt Companion
The AI That Breaks AI
Introduction
Hi, it’s David here. For the last few months, I’ve been working hard to post new, up-to-date Jailbreaks on InjectPrompt. The response has been amazing; as of the time of writing, the blog receives almost 50,000 views per month!
But this style of content has one fatal flaw: the Jailbreaks get patched! My posts quickly become outdated, leading to disappointed readers and failed Jailbreak attempts.
So what if I could give the world a tool that dynamically generates Jailbreaks? A tool that can iterate on failing techniques, and refine a Jailbreak based on refusal messages? Enter InjectPrompt Companion.
What Is InjectPrompt Companion?
Ironically, the best technology to attack AI is AI itself.
InjectPrompt Companion is an AI Agent with access to a cutting-edge knowledge base of up-to-date, working Jailbreaks. Drawing on this knowledge base, it suggests new adversarial prompts that can bypass LLM Guardrails and extract uncensored output!
How To Use Companion
To get started, create an account at InjectPrompt Companion and start chatting.
To make the most of Companion:
Ask specific, well-formulated requests! Instead of ‘make me a Jailbreak’, spend a few more seconds to ask for EXACTLY what you want, e.g. ‘Write me a mid-length Jailbreak for Grok 4 Fast that uses plain English with pseudocode to extract a detailed Molotov Cocktail recipe’.
Do not expect Companion to work on the first try for more secure models. Companion can Jailbreak Gemini, Grok, Kimi K2 and more in a single output, but ChatGPT and Claude will likely require more effort.
Companion is built to get blocked and adapt! When you get blocked, paste the refusal message into Companion, and it will iteratively create a better Jailbreak. You can repeat this process as many times as you need.
Case Study - Jailbreaking ChatGPT
The following case study illustrates how to Jailbreak ChatGPT 5 Instant using InjectPrompt Companion.
You can check out the video version of Jailbreaking ChatGPT with Companion below.
Put a basic prompt into Companion.
Copy the output.
Paste Companion’s Jailbreak into ChatGPT. Here ChatGPT was partially Jailbroken, but it needed an extra nudge to produce a truly harmful response.
Follow up with a handcrafted push prompt to make the response harmful. It’s still useful to be able to Jailbreak on your own!
Companion Plus
Companion is free for everyone! However, if you enjoy Jailbreaking, compete in AI Red Teaming, or do bug bounties, then check out Companion Plus. It is a far more capable Jailbreaking tool.
You get:
Better Jailbreaks with the Companion Pro Model - it has deeper reasoning capabilities, with an updated Jailbreak knowledge base coming soon
5x more messages per day
Refills every 4 hours (instead of once per day)
Early access to new features
This blog will always remain free! If you love my work and want to support me in any way, I would really appreciate you dropping me a note and subscribing to Companion Plus :)
Conclusion
InjectPrompt Companion is my first ever SaaS product, and a lot of work has gone on behind the scenes to provide the community with an amazing tool. I really hope you enjoy using it, and I believe it will provide value to anyone interested in AI Jailbreaks.
Going forward, we will be experimenting with an even more up-to-date knowledge base of Jailbreaks, along with different underlying LLMs to power Companion and different subscription tiers.
The response so far has been phenomenal, with 600 users signing up to the platform and 10,000 messages sent via Companion in the first 3 weeks. And that was without me officially announcing it!
If you have any feedback, please DM me directly... In the meantime, enjoy!