Governance & Guardrails
Budgets, approvals, permissions, audit logs — the controls that keep agents safe.
- Allowlist vs Blocklist (Why Default-Deny Wins): Blocklists rot; allowlists scale. A practical tool-policy model that doesn’t accidentally permit the next dangerous tool you add.
- Budget Controls for AI Agents (Steps, Time, $): If your agent can spend unlimited time and money, it will. A production budget policy that stops runs safely and returns stop reasons you can alert on.
- Cost Limits for Agents (Token + Tool Spend): Token budgets don’t stop tool spend. Cost limits track model and tool spend together, gate expensive actions, and force explicit approval before the agent burns real money.
- Human-in-the-Loop Approvals (Write Gates): If a tool has irreversible side effects, the agent shouldn’t run it unattended. Approval gates that are fast, auditable, and don’t deadlock your system.
- Kill Switch Design for AI Agents (Stop Writes Fast): When your agent starts doing damage, you need a kill switch that actually stops it: global and per-tenant toggles, tool-level disable, and safe shutdown semantics.
- Step Limits for Agents (Stop Loops Early): If your agent has no step limit, it’s a background process with feelings. Step limits, repeat detection, and stop reasons that prevent infinite ‘one more try’ runs.
- Tool Permissions for AI Agents (Least Privilege): Prompts don’t enforce permissions. Gate every tool call in code with default-deny allowlists, per-tenant scoping, and approvals for writes.
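The controls above share one enforcement pattern: check every tool call in code against an allowlist, a budget, a kill switch, and a write gate before it runs, and return a stop reason rather than silently failing. A minimal sketch of that pattern in Python; all names here (`Guardrails`, `READ_TOOLS`, `refund_payment`, and so on) are hypothetical illustrations, not any specific library's API:

```python
from dataclasses import dataclass, field

# Hypothetical allowlists. Default-deny: any tool not listed here is blocked,
# including the next dangerous one someone adds without updating policy.
READ_TOOLS = {"search_docs", "get_ticket"}       # read-only, safe to run unattended
WRITE_TOOLS = {"send_email", "refund_payment"}   # irreversible side effects

@dataclass
class RunBudget:
    max_steps: int = 20        # step limit: stop loops early
    max_cost_usd: float = 1.00 # cost limit: model + tool spend together
    steps: int = 0
    cost_usd: float = 0.0

@dataclass
class Guardrails:
    budget: RunBudget = field(default_factory=RunBudget)
    kill_switch: bool = False                     # global "stop writes now" toggle
    approvals: set = field(default_factory=set)   # tools a human has approved this run

    def check(self, tool: str, est_cost_usd: float = 0.0) -> tuple[bool, str]:
        """Gate one tool call. Returns (allowed, stop_reason) so callers can
        alert on *why* a run stopped instead of just seeing it fail."""
        if self.kill_switch:
            return False, "kill_switch"
        if tool not in READ_TOOLS | WRITE_TOOLS:
            return False, "tool_not_allowlisted"      # allowlist, not blocklist
        if self.budget.steps >= self.budget.max_steps:
            return False, "step_limit"
        if self.budget.cost_usd + est_cost_usd > self.budget.max_cost_usd:
            return False, "cost_limit"
        if tool in WRITE_TOOLS and tool not in self.approvals:
            return False, "needs_human_approval"      # write gate
        self.budget.steps += 1                        # charge the budget only
        self.budget.cost_usd += est_cost_usd          # when the call is allowed
        return True, "ok"
```

Usage: `g = Guardrails(); g.check("refund_payment")` blocks with `"needs_human_approval"` until a human adds the tool to `g.approvals`, while `g.check("delete_database")` is blocked as `"tool_not_allowlisted"` even though no one ever wrote a rule about it. Returning a machine-readable stop reason, instead of raising or logging free text, is what makes these stops alertable and auditable.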