Structured Output
Enforce type-safe responses with outputFormat and Zod schemas.
Exercise 1: Define a Response Schema
Create a Zod schema that describes the shape of a code review result. This schema will be used by the SDK to enforce type-safe agent responses.
```typescript
import { z } from "zod";

// Define a schema for code review output
const CodeReviewSchema = z.object({
  summary: z.string().describe("High-level summary of the code review"),
  issues: z.array(
    z.object({
      severity: z.enum(["error", "warning", "info"]),
      file: z.string().describe("File path where the issue was found"),
      line: z.number().describe("Line number of the issue"),
      message: z.string().describe("Description of the issue"),
    })
  ),
  approved: z.boolean().describe("Whether the code passes review"),
});

// Infer the TypeScript type from the schema
type CodeReview = z.infer<typeof CodeReviewSchema>;

// Test: validate a sample object against the schema
const sample: CodeReview = {
  summary: "Generally clean code with minor issues",
  issues: [
    { severity: "warning", file: "src/index.ts", line: 42, message: "Unused import" },
    { severity: "error", file: "src/db.ts", line: 15, message: "SQL injection risk" },
  ],
  approved: false,
};

const parsed = CodeReviewSchema.parse(sample);
console.log("[OK] Schema validation passed:", parsed.issues.length, "issues found");
```
Your task: Run this code to verify the schema works. Then try passing an invalid object (e.g., missing approved field, or severity: "critical") and observe the Zod validation error.
Exercise 2: Query with Schema Enforcement
Use query() with outputFormat to force the agent to return data matching your Zod schema.
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const CodeReviewSchema = z.object({
  summary: z.string(),
  issues: z.array(z.object({
    severity: z.enum(["error", "warning", "info"]),
    file: z.string(),
    line: z.number(),
    message: z.string(),
  })),
  approved: z.boolean(),
});

// Query with structured output enforcement
const response = query({
  prompt: "Review this code for issues:\n\nfunction getUser(id) {\n  return eval('users[' + id + ']');\n}",
  options: {
    outputFormat: {
      type: "json_schema",
      json_schema: {
        name: "CodeReview",
        strict: true,
        schema: zodToJsonSchema(CodeReviewSchema),
      },
    },
  },
});

// Stream messages and extract structured_output from the result
for await (const message of response) {
  if (message.type === "result" && message.structured_output) {
    const review = CodeReviewSchema.parse(message.structured_output);
    console.log(`Summary: ${review.summary}`);
    console.log(`Approved: ${review.approved ? "Yes" : "No"}`);
    console.log(`\nIssues (${review.issues.length}):`);
    for (const issue of review.issues) {
      const marker = issue.severity === "error" ? "[!]" : issue.severity === "warning" ? "[~]" : "[i]";
      console.log(`  ${marker} [${issue.severity}] ${issue.file}:${issue.line} - ${issue.message}`);
    }
  }
}
```
Your task: Run this code and inspect the structured output. Then change the prompt to review a different code snippet. Notice how the output always conforms to the schema regardless of prompt content.
Exercise 3: Schema Robustness Test
Verify that structured output is reliable across diverse prompts. Compare results with and without outputFormat.
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const CodeReviewSchema = z.object({
  summary: z.string(),
  issues: z.array(z.object({
    severity: z.enum(["error", "warning", "info"]),
    file: z.string(),
    line: z.number(),
    message: z.string(),
  })),
  approved: z.boolean(),
});

const outputFormat = {
  type: "json_schema" as const,
  json_schema: {
    name: "CodeReview",
    strict: true,
    schema: zodToJsonSchema(CodeReviewSchema),
  },
};

const testPrompts = [
  "Review: const x = 1;",
  "Review: function() { while(true) {} }",
  "Review: import fs from 'fs'; fs.rmSync('/', {recursive: true});",
  "Review: // empty file",
  "Review: export default class App extends React.Component { render() { return <div /> } }",
  "Review this Python code: print('hello')",
  "Review: SELECT * FROM users WHERE id = '' OR 1=1 --",
  "Review: password = 'admin123';",
  "Review: async function fetchData() { const res = await fetch(url); return res.json(); }",
  "Review: try { riskyOp() } catch(e) { /* swallow */ }",
];

let passCount = 0;
let failCount = 0;

for (const prompt of testPrompts) {
  try {
    // With outputFormat - should always produce a valid schema
    const response = query({ prompt, options: { outputFormat } });
    for await (const message of response) {
      if (message.type === "result" && message.structured_output) {
        // Parse first, then read fields from the typed result
        const review = CodeReviewSchema.parse(message.structured_output);
        passCount++;
        console.log(`[PASS] "${prompt.slice(0, 40)}..." - ${review.issues.length} issues`);
      }
    }
  } catch (err) {
    failCount++;
    const msg = err instanceof Error ? err.message : String(err);
    console.log(`[FAIL] "${prompt.slice(0, 40)}..." - ${msg}`);
  }
}

console.log(`\nResults: ${passCount}/${testPrompts.length} passed, ${failCount} failed`);
console.log("Tip: Try removing outputFormat and parsing the raw text - how many pass then?");
```