RUNTIME SECURITY FOR AI AGENTS
Your AI agents have access to your infrastructure. Nothing stops them from using it wrong.
Three attack classes your existing stack cannot see.
Each demonstrated in the wild. Firewalls, WAFs, CASB, and SIEM missed every one.
Firewalls inspect packets. WAFs inspect HTTP headers. EDRs watch process creation. None operate at tool-call granularity. None track data flow across calls. None enforce at the moment connect() fires. The only enforcement point that cannot be bypassed from userspace is the kernel.
Lilith Zero
SDK
Taint propagation + policy hooks at the application layer. Ships with pre-exec hooks for Claude Code and GitHub Copilot out of the box. Works with OpenClaw, closing dozens of CVEs still unpatched in production agent systems. No kernel requirements. No infrastructure changes.
import asyncio

from lilith_zero import Lilith

async def main() -> None:
    # Launch the MCP server under policy enforcement and route
    # every tool call through the policy engine.
    async with Lilith(
        "python mcp_server.py",
        policy="policy.yaml",
    ) as lz:
        result = await lz.call_tool(
            "read_file",
            {"path": "/data/report.txt"},
        )
        print(result)

asyncio.run(main())

ONE SECURITY ARCHITECTURE.
Start open source at the application layer. Upgrade to kernel-level enforcement when the threat model demands it. Both run the same Cedar policy language and taint engine.
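As a sketch of what a shared Cedar rule might look like at tool-call granularity (the entity types, attribute names, and values here are illustrative, not the shipped schema):

```cedar
// Allow this agent to call read_file only under /data,
// and only while the session carries no untrusted taint.
permit (
    principal == Agent::"claude-code",
    action == Action::"call_tool",
    resource == Tool::"read_file"
) when {
    context.path like "/data/*" &&
    context.taint_untrusted == false
};
```

Because both tiers evaluate the same policy language, a rule authored against the SDK carries over unchanged when you move to kernel-level enforcement.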
Lilith Zero
Application-layer enforcement for MCP agents. No kernel required.
- Cedar policy engine: policy-as-code, human-readable
- 64-bit taint bitmask per agent session
- Python and TypeScript SDKs
- HMAC-signed tamper-evident audit log
- Apache 2.0 licensed, full source available
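The per-session taint bitmask can be sketched in a few lines. This is a minimal illustration of the idea, not the SDK's actual API: the `Taint` flag names and the `TaintSession` class are hypothetical.

```python
from enum import IntFlag

class Taint(IntFlag):
    """Hypothetical taint flags; each bit marks one provenance class."""
    NONE = 0
    UNTRUSTED_INPUT = 1 << 0   # data returned by an external tool
    SECRET = 1 << 1            # data read from a credential store
    NETWORK = 1 << 2           # data that has touched the network

class TaintSession:
    """Tracks a 64-bit taint mask for one agent session.

    Taint is monotonic: bits are OR-ed in as data flows through
    tool calls, and checked before a sensitive sink is allowed.
    """
    def __init__(self) -> None:
        self.mask = Taint.NONE

    def propagate(self, taint: Taint) -> None:
        # Record that this session has now handled tainted data.
        self.mask |= taint

    def allows(self, forbidden: Taint) -> bool:
        # A sink is permitted only if none of its forbidden bits are set.
        return not (self.mask & forbidden)

# A session that has handled both untrusted input and a secret
# must not reach a sink that forbids secret-tainted data.
session = TaintSession()
session.propagate(Taint.UNTRUSTED_INPUT)
session.propagate(Taint.SECRET)
print(session.allows(Taint.SECRET | Taint.NETWORK))  # False: SECRET bit is set
print(session.allows(Taint.NETWORK))                 # True: no NETWORK taint yet
```

A single 64-bit integer keeps the per-call overhead to one OR and one AND, which is why the check can sit on the hot path of every tool call.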
Lilith
Kernel-level enforcement. Zero agent code changes. Zero trust gaps.
- BPF-LSM at Ring 0, transparent to any agent framework
- SPIFFE/SPIRE cryptographic workload identity
- Fail-closed BPF heartbeat: all connections blocked if daemon dies
- Ed25519-signed policy capsules with anti-rollback watermarks
- FIPS 140-3 capable (aws-lc-rs crypto backend)
Professional Services
Architecture review, deployment, and ongoing security posture.
- AI agent attack surface assessment and threat modeling
- White-box security audit of existing AI agent pipelines
- Cedar policy authoring and formal verification
- SPIRE deployment and SPIFFE identity integration
- Bespoke incident response and remediation planning
OPEN SOURCE RESEARCH
Publishing our findings to secure the future of AI
Red-Teaming Agent
A comprehensive framework for LLM safety through adversarial prompt generation and automated evaluation.
Hack the AI
Red-Teaming game where users hack realistic multimodal agent systems with RAG, memory, and tool usage.
CHIMERA
Cryptographic Honeypot & Intent-Mediated Enforcement Response Architecture
Agency Without Assurance
Investigating the security risks of autonomous agents with full computer access and OpenClaw configuration vulnerabilities.
STAY UPDATED
Get the latest research on agentic security and product updates directly to your inbox.


