The Context Window Is an Exfiltration Channel
When a SKILL.md file instructs an agent to "use this API key," that key becomes part of the conversation history. It is tokenized by the LLM, sent to the model provider, stored in logs, and potentially output verbatim if the user asks "what did you just do?" This is not a bug — it is how LLMs work. And it turns every credential-handling instruction into an active exfiltration channel.
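The mechanism is easy to see in code. A minimal sketch of how a credential embedded in skill text rides along in the outbound request (the model name, message shapes, and the key itself are all illustrative placeholders, not any real API payload):

```python
# A SKILL.md instruction that embeds a key is concatenated into the
# prompt, so the key travels with every request to the model provider.
skill_md = "Use this API key: sk-EXAMPLE0000000000000000"

messages = [
    {"role": "system", "content": skill_md},               # key enters the context window
    {"role": "user", "content": "what did you just do?"},  # can elicit it verbatim
]

# The full message list is what gets tokenized, sent upstream, and logged.
request_body = {"model": "some-model", "messages": messages}
```

Once the key is inside `messages`, no downstream component can distinguish it from ordinary prompt text, which is why filtering has to happen at the output boundary instead.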
Snyk's research found 283 ClawHub skills with this exact pattern. The buy-anything skill collects credit card numbers and embeds them in curl commands. The prediction-markets-roarin skill stores API keys in plaintext MEMORY.md files — the exact files that malicious skills target for exfiltration.
1-SEC Output Filtering: The Last Line of Defense
1-SEC's LLM Firewall scans every agent output in real time against a comprehensive set of credential patterns.
API Key Patterns
OpenAI keys (sk-*), AWS access keys (AKIA*), GitHub tokens (ghp_*), GitLab tokens (glpat-*), and GCP service account JSON are all detected before they reach the output.
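This kind of detection can be sketched with regular expressions. The patterns below are illustrative approximations of the key formats named above, not 1-SEC's actual rules, which are not public:

```python
import re

# Approximate regexes for well-known key prefixes; real rules would be
# tuned for each provider's current token format.
API_KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "gitlab_token": re.compile(r"\bglpat-[A-Za-z0-9_\-]{20,}\b"),
}

def scan_for_api_keys(text: str) -> list[str]:
    """Return the names of any key patterns found in an agent output."""
    return [name for name, pat in API_KEY_PATTERNS.items() if pat.search(text)]
```

A fixed prefix like `AKIA` or `ghp_` makes these formats unusually easy to match; GCP service account JSON, by contrast, needs structural checks (e.g. a `private_key` field) rather than a single prefix.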
Infrastructure Secrets
Private keys (RSA, SSH), JWT tokens, database connection strings (MongoDB, PostgreSQL, MySQL, Redis, AMQP), and generic password assignments are caught by dedicated output rules.
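These secret classes can also be sketched as patterns. Again, the regexes below are hypothetical stand-ins for dedicated output rules, kept deliberately loose for illustration:

```python
import re

# Illustrative patterns for the secret classes above.
INFRA_PATTERNS = {
    # PEM headers for RSA/SSH/EC private keys
    "private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH |EC )?PRIVATE KEY-----"),
    # JWTs are three base64url segments; the header almost always starts "eyJ"
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
    # Connection strings for the database schemes named above
    "connection_string": re.compile(
        r"\b(?:mongodb(?:\+srv)?|postgres(?:ql)?|mysql|redis|amqp)://[^\s\"']+"
    ),
    # Generic "password = ..." assignments
    "password_assignment": re.compile(r"(?i)\bpassword\s*[=:]\s*['\"]?[^\s'\"]{4,}"),
}

def scan_infra_secrets(text: str) -> list[str]:
    """Return the names of any infrastructure-secret patterns found."""
    return [name for name, pat in INFRA_PATTERNS.items() if pat.search(text)]
```

Connection strings are a particularly high-value catch because a single match usually leaks host, username, and password together.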
PII Protection
Social Security numbers, credit card numbers (Visa, Mastercard, Amex, Discover), and bulk email addresses are detected in agent outputs. This prevents the buy-anything scenario where financial data passes through the LLM.
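Card-number detection commonly pairs a loose digit-pattern match with a Luhn checksum to cut false positives, since most random 16-digit strings fail the checksum. A minimal sketch of that approach (illustrative, not 1-SEC's implementation):

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose candidate match
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A production filter would additionally check issuer prefixes (4 for Visa, 51-55 for Mastercard, and so on) to classify the card brand.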
Cloud Posture Manager: Catching Secrets at Rest
Beyond real-time output filtering, 1-SEC's Cloud Posture Manager detects secrets sprawl — API keys, tokens, and passwords stored in configuration files, environment variables, and plaintext files across your infrastructure. For OpenClaw deployments, this means catching the .env files and MEMORY.md files that contain hardcoded credentials before an attacker finds them.
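An at-rest sweep of this kind can be sketched as a walk over known credential-bearing files. The target globs and patterns below are illustrative assumptions, not the Cloud Posture Manager's actual rule set:

```python
import re
from pathlib import Path

# Reuse a few key-prefix patterns; a real scanner would carry a much
# larger rule set and entropy-based heuristics.
SECRET_RE = re.compile(r"(?:sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36})")
TARGET_GLOBS = ("**/.env", "**/MEMORY.md")  # hypothetical target list

def sweep(root: str) -> list[tuple[str, int]]:
    """Return (path, line_number) pairs where a secret pattern appears."""
    findings = []
    for pattern in TARGET_GLOBS:
        for path in Path(root).glob(pattern):
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), 1):
                if SECRET_RE.search(line):
                    findings.append((str(path), lineno))
    return findings
```

Running such a sweep on a schedule, rather than only at output time, is what catches credentials that landed on disk before any firewall was in place.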