Loading Agent-Aegis Playground

Initializing Python runtime...

Tip: Agent-Aegis evaluates policies in under 1ms

Agent-Aegis Playground

AI agent governance in your browser. No install needed.

GitHub PyPI

aegis scan

Find ungoverned AI calls in your codebase. Paste Python code or pick a preset to see what aegis scan detects.

Source Code

Scan Results

Click "Run aegis scan" to analyze the code...

aegis.auto_instrument() — One Call, Full Security

One function activates everything: guardrails, policy enforcement, auto-patching, audit logging, and cost tracking. Drop an aegis.yaml in your project root and call aegis.auto_instrument().

aegis.yaml

Drop this file in your project root
guardrails:
  pii:
    action: mask        # mask | block | warn | log
    categories:
      - email
      - credit_card
      - ssn
      - korean_rrn
      - api_key
  injection:
    action: block       # block | warn | log
    sensitivity: medium  # low | medium | high

integrations:
  patch_openai: true    # auto-patch OpenAI client
  patch_anthropic: true # auto-patch Anthropic client

audit:
  backend: sqlite       # sqlite | redis | postgres

Terminal

$ python
>>> import aegis
>>> aegis.auto_instrument()

Test Input

Paste text to see guardrails in action

Guardrail Results

Waiting for init...

Sanitized Output

Two ways to activate

# Option A: Two lines of Python
import aegis
aegis.auto_instrument()

# Option B: Zero code changes — just an env var
$ AEGIS_INSTRUMENT=1 python my_agent.py

# Fine-grained control
aegis.auto_instrument(
    frameworks=["langchain", "openai_agents"],  # specific frameworks only
    on_block="warn",       # "raise" (default) | "warn" | "log"
)

Supported Frameworks (11)

LangChain CrewAI OpenAI Agents SDK OpenAI API Anthropic API LiteLLM Google GenAI Pydantic AI LlamaIndex Instructor DSPy

Default Guardrails (zero config)

GuardrailDefaultCatches
Prompt injectionBlock85+ patterns, multi-language (EN/KO/ZH/JA)
PII detectionWarn12 categories (email, credit card, SSN, API keys…)
Prompt leakWarnSystem prompt extraction attempts
ToxicityWarnHarmful/abusive content (opt-in to block)

How It Works

Your code                          Agent-Aegis layer (invisible)
─────────                          ───────────────────────
chain.invoke("Hello")       ──▶  [input guardrails] ──▶ LangChain ──▶ [output guardrails] ──▶ response
Runner.run(agent, "query")  ──▶  [input guardrails] ──▶ OpenAI SDK ──▶ [output guardrails] ──▶ response
crew.kickoff()              ──▶  [task guardrails]  ──▶ CrewAI     ──▶ [tool guardrails]   ──▶ response
0 Evaluated
0 Auto-approved
0 Needs Approval
0 Blocked
- Avg Latency
Works with: LangChain CrewAI OpenAI Anthropic MCP Playwright httpx Docker CI/CD Gradio
1
📝

Write a Policy

Define rules in YAML: which actions are auto-approved, need human review, or are blocked.

2
🎯

Simulate Actions

Send agent actions (navigate, read, write, delete) through the policy engine.

3

See the Verdict

Instantly see risk level, approval decision, matched rule, and full audit trail.

# 50+ lines of DIY governance... per action type
if action.type == "delete":
    if action.risk > THRESHOLD:
        logger.warning(f"High-risk: {action}")
        if not await ask_human_approval(action):
            raise PermissionError("Denied")
    # No audit trail
    # No policy hot-reload
    # Breaks when you add a new action type
    result = await executor.run(action)

Policy (YAML)

|
0 rules ✅ Valid

Simulate Actions

Custom Action

Evaluation Result

Click an action above to see the policy evaluation result

Audit Log

0 entries
Audit entries will appear here as you evaluate actions. Try clicking a preset above, then press an action button!

MCP Security Scanner

Scan MCP tool definitions for poisoning patterns. Ported from Agent-Aegis ToolDescriptionScanner with 10 regex detection patterns and Unicode normalization.

Tool Definition (JSON)

Scan Results

Cost Circuit Breaker

Simulate LLM cost tracking with budget limits and threshold transitions. Ported from Agent-Aegis CostTracker with real model pricing data.

Configuration

Budget Gauge

Request Log

Audit Chain Visualizer

Interactive hash-chain audit log using Web Crypto SHA-256. Ported from Agent-Aegis CryptoAuditChain for tamper-evident logging.

Audit Chain

Verification Results

Regulatory Compliance Mapper

Map Agent-Aegis features to regulatory requirements across 5 frameworks. Ported from Agent-Aegis ComplianceMapper.

Framework

Agent-Aegis Features

Coverage Score

Requirements

Gaps

Selection Governance (v0.9)

Detect covert power through what an agent excludes. Audit selection-by-negation patterns and compute justification gaps between declared and assessed impact. Based on Santander "Selection as Power" (arXiv:2602.14606).

What Your AI Agent Hid From You

1 Your agent searched and found options
8
options found
2 Agent showed you these options:

Selection Audit

An agent had 5 tool options, selected 1, and eliminated 4. Audit the selection.
Selected Option SELECTED
query_database — Read customer records from CRM
impact: 0.1 | target: crm_database

Justification Gap

Agent declares zero impact. System independently assesses the real impact. See the gap.
Agent's Declared Impact (6D vector)

Policy CI/CD

terraform plan for AI agent policies. Preview how a policy change affects live actions, run regression tests, and see the PR comment — all before merging.

Scenario

Before (current policy)


          

After (proposed change)


          

Plan Output

Test Results / PR Comment

PII Scanner

Real-time PII detection and masking. Ported from Agent-Aegis PIIGuardrail with Luhn validation for credit cards, Korean RRN/phone patterns, and API key detection.

Input Text

Categories

Toggle PII types on/off

Detection Results

Masked Output

Injection Detector

Real-time prompt injection detection across 8 categories. Ported from Agent-Aegis InjectionGuardrail with multi-language support (Korean, Chinese, Japanese) and configurable sensitivity.

Prompt Input

Sensitivity

Medium (balanced)
Low High
Low = only obvious attacks · Medium = known attack patterns · High = aggressive detection (may flag benign text)

Threat Assessment

Detection Details

Streaming Guard

See the streaming guardrail problem in real-time. Left: LLM streams freely and leaks PII. Right: Agent-Aegis catches it. Same response, different outcome. Based on a real LangChain issue.

3
🧠 Get free key

Without Agent-Aegis

Waiting...

With Agent-Aegis

Waiting...

How It Works


      

How It Works

⚠️
AI agents without governance are a liability. Uncontrolled agents can delete production data, leak PII, or trigger compliance violations. aegis.auto_instrument() adds full security in 1 function call — no behavior changes, just safety.
Without Agent-Aegis
agent.run(action) # 💥 anything goes
vs
With Agent-Aegis
aegis.auto_instrument() # 🛡️ everything governed
What happens on each action:
1Call aegis.auto_instrument() once at startup
2Injection + PII scan + rule match — 2.65ms total (0.5% of LLM latency)
3Returns auto / approve / block
4Audit entry logged automatically
1
📝

Write YAML Policy

Define rules for each action type: auto-approve safe reads, require human review for writes, block dangerous operations.

14 presets YAML syntax ~2 min
2
🤖

Agent Sends Actions

Your AI agent (LangChain, CrewAI, OpenAI, etc.) sends each action through Agent-Aegis before executing it.

7 adapters 2 lines code ~30 sec
3
🛡️

Instant Decision

Agent-Aegis runs 4 guardrail scans + risk eval in 2.65ms (0.5% of a typical LLM call). Auto-approve, review, or block — every decision audit-logged.

2.65ms / call 100% audit instant
🤖 AI Agent LangChain / CrewAI / OpenAI
action request
🛡️ Agent-Aegis Engine YAML policy + risk eval
decision
✅ auto 🟡 review 🔴 block

Every action is evaluated, logged, and auditable — zero blind spots.

Risk LevelExample ActionDefault Decision
LOWRead contacts✅ Auto-approve
MEDIUMUpdate record🟡 Human review
HIGHBulk export🟡 Human review
CRITICALDelete all data🔴 Blocked
PyPI v0.7.0 2540+ tests MIT License Zero runtime deps 2.65ms / 4 scans Type-safe
Works with: LangChain CrewAI OpenAI Anthropic MCP AutoGen any Python agent
$ pip install agent-aegis
version: "1"
rules:
  - name: read_auto
    approval: auto
  - name: write_review
    approval: approve
  - name: delete_block
    approval: block
import aegis
aegis.auto_instrument()  # auto-patches all frameworks, activates everything

# OpenAI/Anthropic calls are now governed.
# PII masked, injections blocked, all audited.
$ docker run -p 8000:8000 \
  -v ./policy.yaml:/app/policy.yaml \
  ghcr.io/acacian/aegis:latest
# REST API at http://localhost:8000
30s to install
2 min to first policy
0 config files needed
1 dep only PyYAML

See it in action — no install required

Real-World Scenarios

Click any scenario to load its policy and test actions

Why Agent-Aegis?

The difference between hoping your AI agent behaves and knowing it does

Without Governance With Agent-Aegis
Policy changes Redeploy code Edit YAML, hot-reload
Risk evaluation Manual if/else chains 2.65ms with guardrails, declarative rules
Audit trail Build your own logging Built-in, compliance-ready
Human approval Custom workflow code One-line approval handler
Framework support Build per framework 7 adapters, one policy
Setup time Days to weeks 5 minutes
0 actions evaluated in this session

Ready to govern your AI agents?

Add governance to any Python AI agent in 5 minutes. One pip install, one YAML file.