Initializing Python runtime...
Tip: Agent-Aegis evaluates policies in under 1ms
AI agent governance in your browser. No install needed.
Find ungoverned AI calls in your codebase. Paste Python code or pick a preset to see what aegis scan detects.
One function activates everything: guardrails, policy enforcement, auto-patching, audit logging, and cost tracking. Drop an aegis.yaml in your project root and call aegis.auto_instrument().
guardrails:
pii:
action: mask # mask | block | warn | log
categories:
- email
- credit_card
- ssn
- korean_rrn
- api_key
injection:
action: block # block | warn | log
sensitivity: medium # low | medium | high
integrations:
patch_openai: true # auto-patch OpenAI client
patch_anthropic: true # auto-patch Anthropic client
audit:
backend: sqlite # sqlite | redis | postgres
# Option A: Two lines of Python
import aegis
aegis.auto_instrument()
# Option B: Zero code changes — just an env var
$ AEGIS_INSTRUMENT=1 python my_agent.py
# Fine-grained control
aegis.auto_instrument(
frameworks=["langchain", "openai_agents"], # specific frameworks only
on_block="warn", # "raise" (default) | "warn" | "log"
)
| Guardrail | Default | Catches |
|---|---|---|
| Prompt injection | Block | 85+ patterns, multi-language (EN/KO/ZH/JA) |
| PII detection | Warn | 12 categories (email, credit card, SSN, API keys…) |
| Prompt leak | Warn | System prompt extraction attempts |
| Toxicity | Warn | Harmful/abusive content (opt-in to block) |
Your code Agent-Aegis layer (invisible)
───────── ───────────────────────
chain.invoke("Hello") ──▶ [input guardrails] ──▶ LangChain ──▶ [output guardrails] ──▶ response
Runner.run(agent, "query") ──▶ [input guardrails] ──▶ OpenAI SDK ──▶ [output guardrails] ──▶ response
crew.kickoff() ──▶ [task guardrails] ──▶ CrewAI ──▶ [tool guardrails] ──▶ response
Define rules in YAML: which actions are auto-approved, need human review, or are blocked.
Send agent actions (navigate, read, write, delete) through the policy engine.
Instantly see risk level, approval decision, matched rule, and full audit trail.
# 50+ lines of DIY governance... per action type
if action.type == "delete":
if action.risk > THRESHOLD:
logger.warning(f"High-risk: {action}")
if not await ask_human_approval(action):
raise PermissionError("Denied")
# No audit trail
# No policy hot-reload
# Breaks when you add a new action type
result = await executor.run(action)
Scan MCP tool definitions for poisoning patterns. Ported from Agent-Aegis ToolDescriptionScanner with 10 regex detection patterns and Unicode normalization.
Simulate LLM cost tracking with budget limits and threshold transitions. Ported from Agent-Aegis CostTracker with real model pricing data.
Interactive hash-chain audit log using Web Crypto SHA-256. Ported from Agent-Aegis CryptoAuditChain for tamper-evident logging.
Map Agent-Aegis features to regulatory requirements across 5 frameworks. Ported from Agent-Aegis ComplianceMapper.
Detect covert power through what an agent excludes. Audit selection-by-negation patterns and compute justification gaps between declared and assessed impact. Based on Santander "Selection as Power" (arXiv:2602.14606).
terraform plan for AI agent policies. Preview how a policy change affects live actions, run regression tests, and see the PR comment — all before merging.
Real-time PII detection and masking. Ported from Agent-Aegis PIIGuardrail with Luhn validation for credit cards, Korean RRN/phone patterns, and API key detection.
Real-time prompt injection detection across 8 categories. Ported from Agent-Aegis InjectionGuardrail with multi-language support (Korean, Chinese, Japanese) and configurable sensitivity.
See the streaming guardrail problem in real-time. Left: LLM streams freely and leaks PII. Right: Agent-Aegis catches it. Same response, different outcome. Based on a real LangChain issue.
aegis.auto_instrument() adds full security in 1 function call — no behavior changes, just safety.
agent.run(action) # 💥 anything goes
aegis.auto_instrument() # 🛡️ everything governed
aegis.auto_instrument() once at startupauto / approve / blockDefine rules for each action type: auto-approve safe reads, require human review for writes, block dangerous operations.
Your AI agent (LangChain, CrewAI, OpenAI, etc.) sends each action through Agent-Aegis before executing it.
Agent-Aegis runs 4 guardrail scans + risk eval in 2.65ms (0.5% of a typical LLM call). Auto-approve, review, or block — every decision audit-logged.
Every action is evaluated, logged, and auditable — zero blind spots.
$ pip install agent-aegisversion: "1"
rules:
- name: read_auto
approval: auto
- name: write_review
approval: approve
- name: delete_block
approval: blockimport aegis
aegis.auto_instrument() # auto-patches all frameworks, activates everything
# OpenAI/Anthropic calls are now governed.
# PII masked, injections blocked, all audited.$ docker run -p 8000:8000 \
-v ./policy.yaml:/app/policy.yaml \
ghcr.io/acacian/aegis:latest
# REST API at http://localhost:8000See it in action — no install required
Click any scenario to load its policy and test actions
The difference between hoping your AI agent behaves and knowing it does
| Without Governance | With Agent-Aegis | |
|---|---|---|
| Policy changes | Redeploy code | Edit YAML, hot-reload |
| Risk evaluation | Manual if/else chains | 2.65ms with guardrails, declarative rules |
| Audit trail | Build your own logging | Built-in, compliance-ready |
| Human approval | Custom workflow code | One-line approval handler |
| Framework support | Build per framework | 7 adapters, one policy |
| Setup time | Days to weeks | 5 minutes |
Add governance to any Python AI agent in 5 minutes. One pip install, one YAML file.