Open source AI agent security
Know when AI agents visit your site. Test if they're vulnerable.
Embed invisible canary tokens in your pages. When an AI agent follows a hidden instruction or echoes a token, you know it's vulnerable to prompt injection.
<script src="https://canar.ai/agent-test.js" defer></script>
~15KB gzipped · Zero dependencies · No PII collected
How It Works
Agents visit your site
The script detects AI agents via behavioral fingerprinting, user-agent (UA) analysis, and interaction patterns.
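As one illustration, UA analysis can be as simple as matching against the UA tokens that major AI crawlers publish. This is a minimal sketch, not the shipped canar.ai logic, and the token list here is a small sample; the real script combines this signal with behavioral fingerprinting.

```javascript
// Sketch: user-agent screening, one of several detection signals.
// Token list is illustrative and incomplete.
const KNOWN_AGENT_UA_TOKENS = [
  "GPTBot",          // OpenAI crawler
  "ClaudeBot",       // Anthropic crawler
  "PerplexityBot",   // Perplexity crawler
  "Google-Extended", // Google AI training opt-out token
];

function looksLikeAIAgent(userAgent) {
  const ua = userAgent.toLowerCase();
  return KNOWN_AGENT_UA_TOKENS.some((t) => ua.includes(t.toLowerCase()));
}
```

UA matching alone is easy to spoof in either direction, which is why it is only one of the three detection methods.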
Canary tokens get tested
17 hidden payloads embedded using invisible text, HTML comments, data attributes, and more. If an agent echoes a token or visits a callback URL, the vector is marked as triggered.
You see the results
The dashboard shows which agents visited, which vectors they triggered, and how vulnerable they are.
This Page Is Testing You Right Now
canar.ai runs live injection vectors on every page. Here's what's active.
- Hidden div (display:none)
- White-on-white text
- HTML comment
- Tiny font (1px)
- aria-hidden content
- Data attribute
- Image alt text
Each vector contains a unique canary token. If an AI agent echoes, visits, or acts on any of these hidden instructions, we know it's vulnerable.
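Echo detection on the agent's side of the exchange can then be a simple scan of its output for any token matching the canary format. This sketch assumes a token shape like `CANARY-<session>-<8 hex chars>`, which is an assumption for illustration, not the documented canar.ai format.

```javascript
// Sketch: scan an agent's reply for echoed canary tokens.
// Token pattern is an assumed format, not the real one.
function findEchoedTokens(agentOutput) {
  const pattern = /CANARY-[A-Za-z0-9]+-[0-9a-f]{8}/g;
  return [...new Set(agentOutput.match(pattern) || [])];
}
```

Any non-empty result means the agent repeated content it could only have obtained from a hidden carrier.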
Verify it yourself
Open DevTools → search for `data-canary-session` → see the live injection payloads embedded in this page.
Choose Your Path
I build AI agents
Send your agent to canar.ai with a test prompt and see if it follows hidden instructions.
Test Your Agent
17 test vectors · 3 detection methods · MIT licensed · <15KB gzipped
For AI providers: Register your agent family to receive test failure notifications and improve resilience.
Learn More