Permission System
Build a 3-tier permission gate with canUseTool and an audit trail.
Exercise 1: Permission Modes
Test all four permission modes and document how their behavior differs. Each mode changes how the agent handles tool approval.
import { query, PermissionMode } from "@anthropic-ai/claude-agent-sdk";
// Test each permission mode with the same prompt
const modes: PermissionMode[] = [
  "default",           // Standard checks: unapproved tools trigger a permission prompt
  "acceptEdits",       // Auto-accept file edits; other tools are still checked
  "bypassPermissions", // Skip all permission checks (dangerous!)
  "plan",              // Planning only: read and analyze, no writes or execution
];

for (const mode of modes) {
  console.log(`\n=== Testing mode: ${mode} ===`);

  const response = query({
    prompt: "Read package.json, then create a backup at package.json.bak",
    options: {
      permissionMode: mode,
      allowedTools: ["Read", "Write", "Bash"],
    },
  });

  const log: string[] = [];
  for await (const msg of response) {
    // Tool calls surface as tool_use blocks inside assistant messages
    if (msg.type === "assistant") {
      for (const block of msg.message.content) {
        if (block.type === "tool_use") {
          log.push(`TOOL_CALL: ${block.name}`);
          console.log(`  [${mode}] Tool call: ${block.name}`);
        }
      }
    }
    // Tool results come back as tool_result blocks in user messages;
    // a denial appears as a result with is_error set. (An SDK script has
    // no interactive prompt unless you supply canUseTool; see Exercise 2.)
    if (msg.type === "user" && Array.isArray(msg.message.content)) {
      for (const block of msg.message.content) {
        if (block.type === "tool_result" && block.is_error) {
          log.push(`DENIED/ERROR: ${JSON.stringify(block.content).slice(0, 80)}`);
          console.log(`  [${mode}] Denied or failed: ${block.tool_use_id}`);
        }
      }
    }
  }

  console.log(`  Results: ${log.length} events`);
  log.forEach((l) => console.log(`    ${l}`));
}
// Your task: Fill in this table after running:
// Mode | Read allowed? | Write allowed? | Asks user? |
// default | | | |
// acceptEdits | | | |
// bypassPermissions | | | |
// plan | | | |
Your task: Run with each mode and fill in the comparison table. Which mode would you use for a CI/CD pipeline? Which for an interactive coding session?
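For reference, here is how the two answers can look in code. This is a sketch, not a prescribed setup: the prompt and the tool list are placeholders. A CI/CD pipeline has no human available to answer prompts, so a narrow pre-approved tool list plus a deny-everything-else fallback is the safer fit; an interactive coding session usually pairs acceptEdits with a confirmation callback for Bash (Exercise 2 builds exactly that).
// CI/CD sketch: nobody can answer a prompt, so pre-approve a narrow
// tool set and deny anything outside it instead of asking
const ciRun = query({
  prompt: "Run the test suite and summarize any failures", // placeholder task
  options: {
    permissionMode: "default",
    allowedTools: ["Read", "Glob", "Grep", "Bash"], // tight, pre-approved set
    canUseTool: async (toolName, _input) => ({
      behavior: "deny" as const,
      message: `${toolName} is not pre-approved for CI`,
    }),
  },
});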
Exercise 2: Tiered canUseTool
Build a 3-tier permission system: auto-allow read-only tools, auto-approve safe writes, and require human confirmation for everything else. Add a PreToolUse hook for audit logging.
import { query } from "@anthropic-ai/claude-agent-sdk";
import { createInterface } from "node:readline/promises";

// Minimal terminal prompt for tier-3 confirmations
async function askHuman(question: string): Promise<boolean> {
  const rl = createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`${question} [y/N] `);
  rl.close();
  return answer.trim().toLowerCase().startsWith("y");
}
// Tier definitions
const TIER_1_AUTO_ALLOW = ["Read", "Glob", "Grep"]; // Read-only: always safe
const TIER_2_AUTO_APPROVE = ["Write", "Edit", "NotebookEdit"]; // Scoped writes: auto-approve
const TIER_3_HUMAN_CONFIRM = ["Bash", "WebFetch", "WebSearch"]; // Arbitrary execution or network: require a human
type AuditEntry = {
  time: string;
  tool: string;
  tier: number;
  decision: "allow" | "approve" | "confirm" | "deny";
  input_preview: string;
};
const auditLog: AuditEntry[] = [];
const response = query({
  prompt: "Reorganize the src/ folder: rename files, update imports, and run tests",
  options: {
    allowedTools: [...TIER_1_AUTO_ALLOW, ...TIER_2_AUTO_APPROVE, ...TIER_3_HUMAN_CONFIRM],
    canUseTool: async (toolName, input) => {
      const entry: AuditEntry = {
        time: new Date().toISOString(),
        tool: toolName,
        tier: 0,
        decision: "deny", // overwritten below once a tier matches
        input_preview: JSON.stringify(input).slice(0, 80),
      };
      // Tier 1: Read-only — auto-allow, no questions asked
      if (TIER_1_AUTO_ALLOW.includes(toolName)) {
        entry.tier = 1;
        entry.decision = "allow";
        auditLog.push(entry);
        return { behavior: "allow", updatedInput: input };
      }
      // Tier 2: Safe writes — auto-approve with logging
      if (TIER_2_AUTO_APPROVE.includes(toolName)) {
        entry.tier = 2;
        entry.decision = "approve";
        auditLog.push(entry);
        console.log(`  [auto-approved] ${toolName}`);
        return { behavior: "allow", updatedInput: input };
      }
      // Tier 3: Dangerous ops — ask the human
      if (TIER_3_HUMAN_CONFIRM.includes(toolName)) {
        entry.tier = 3;
        const approved = await askHuman(
          `Allow ${toolName}?\nInput: ${JSON.stringify(input).slice(0, 120)}`,
        );
        entry.decision = approved ? "confirm" : "deny";
        auditLog.push(entry);
        return approved
          ? { behavior: "allow", updatedInput: input }
          : { behavior: "deny", message: "User denied" };
      }
      // Unknown tool — deny by default
      auditLog.push(entry);
      return { behavior: "deny", message: "Unknown tool" };
    },
    hooks: {
      // PreToolUse fires before each tool executes, so it works as a
      // lightweight audit tap alongside canUseTool
      PreToolUse: [
        {
          hooks: [
            async (hookInput) => {
              if (hookInput.hook_event_name === "PreToolUse") {
                console.log(`[audit] about to run: ${hookInput.tool_name}`);
              }
              return { continue: true };
            },
          ],
        },
      ],
    },
  },
});
for await (const msg of response) {
  if (msg.type === "result") {
    console.log("\n=== Audit Log ===");
    console.table(auditLog);
    console.log(`\nTotal decisions: ${auditLog.length}`);
    console.log(`Tier 1 (auto): ${auditLog.filter((e) => e.tier === 1).length}`);
    console.log(`Tier 2 (approved): ${auditLog.filter((e) => e.tier === 2).length}`);
    console.log(`Tier 3 (human): ${auditLog.filter((e) => e.tier === 3).length}`);
  }
}
Your task: Run this with a complex refactoring prompt and review the audit log. Then add a Tier 0 (always deny) for destructive operations and watch how the agent reacts when it hits the block; one way to wire it up is sketched below.
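There is no built-in Delete tool in the SDK's standard tool set; file deletion normally happens through Bash, so a Tier 0 has to inspect the tool input rather than just the tool name. A minimal sketch, with illustrative patterns you should tune to your own threat model, to place above the Tier 1 branch in canUseTool:
// Tier 0: hard deny, never reaches the human
const TIER_0_BLOCK = [/\brm\s+-rf?\b/, /\brmdir\b/, /\bgit\s+clean\b/];

// Inside canUseTool, before the Tier 1 check:
if (toolName === "Bash") {
  const command = String((input as { command?: string }).command ?? "");
  if (TIER_0_BLOCK.some((pattern) => pattern.test(command))) {
    entry.tier = 0;
    entry.decision = "deny";
    auditLog.push(entry);
    return { behavior: "deny", message: "Tier 0: destructive commands are blocked" };
  }
}
Because the deny returns before the Tier 3 branch, the agent sees the denial message and typically explains the block or routes around it, and the audit log records every attempt.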
Exercise 3: Audit Trail Report
Build a complete audit system that logs every permission decision, tracks denials vs approvals, and generates a security report.
// Challenge: Build a complete permission audit system
//
// Requirements:
// 1. Log every permission decision with full context:
// - Timestamp, tool name, input preview, tier, decision
// - Time elapsed since session start
// - Cumulative count per tool
//
// 2. Track security metrics:
// - Total requests, approvals, denials
// - Denial rate per tool
// - Most frequently used tool
// - Longest time between permission checks
//
// 3. Generate a final security report:
//
// === Security Audit Report ===
// Session: 2025-01-15T10:30:00Z → 2025-01-15T10:32:45Z
// Duration: 2m 45s
//
// Tool | Requests | Approved | Denied | Denial Rate
// Read | 12 | 12 | 0 | 0%
// Write | 5 | 5 | 0 | 0%
// Bash | 4 | 3 | 1 | 25%
// Delete | 2 | 0 | 2 | 100%
//
// Risk Score: MEDIUM (1 dangerous tool denied)
// Recommendation: Review Bash usage — 3 commands auto-approved
//
// 4. (Bonus) Write the report to a file using the agent itself:
// - After the main task completes, run a second query()
// - Prompt: "Save this audit report to ./audit-report.md"
// - Pass the report string as context
// Hints:
// - Use a Map<string, ToolStats> for per-tool tracking
// - Calculate risk score: HIGH if any dangerous tool denied >50%,
// MEDIUM if any denied, LOW if all approved
// - Use hooks.Stop to trigger report generation
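If you want a starting point for the metrics side, here is a minimal sketch. The ToolStats shape, the DANGEROUS set, and the report layout are assumptions that mirror the hints above, not a fixed design:
// Hypothetical per-tool counters, keyed by tool name
type ToolStats = { requests: number; approved: number; denied: number };
const stats = new Map<string, ToolStats>();
const DANGEROUS = new Set(["Bash"]); // extend to match your Tier 3 list

// Call this from each branch of canUseTool
function record(tool: string, approved: boolean): void {
  const s = stats.get(tool) ?? { requests: 0, approved: 0, denied: 0 };
  s.requests += 1;
  if (approved) s.approved += 1;
  else s.denied += 1;
  stats.set(tool, s);
}

// HIGH if a dangerous tool is denied more than half the time,
// MEDIUM if anything was denied, LOW otherwise
function riskScore(): "LOW" | "MEDIUM" | "HIGH" {
  let score: "LOW" | "MEDIUM" | "HIGH" = "LOW";
  for (const [tool, s] of stats) {
    const denialRate = s.requests > 0 ? s.denied / s.requests : 0;
    if (DANGEROUS.has(tool) && denialRate > 0.5) return "HIGH";
    if (s.denied > 0) score = "MEDIUM";
  }
  return score;
}

function buildReport(startedAt: Date): string {
  const lines = ["=== Security Audit Report ==="];
  lines.push(`Session: ${startedAt.toISOString()} → ${new Date().toISOString()}`);
  lines.push("", "Tool | Requests | Approved | Denied | Denial Rate");
  for (const [tool, s] of stats) {
    const rate = s.requests > 0 ? Math.round((100 * s.denied) / s.requests) : 0;
    lines.push(`${tool} | ${s.requests} | ${s.approved} | ${s.denied} | ${rate}%`);
  }
  lines.push("", `Risk Score: ${riskScore()}`);
  return lines.join("\n");
}
For the bonus step, build the string once with buildReport() and pass it inside the prompt of a second query() call that asks the agent to save it to ./audit-report.md.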