Agent Framework Architecture
Comprehensive technical documentation on Claude Agent SDK, sub-agent anatomy, Model Context Protocol integration, and orchestration patterns.
Multi-Agent System Design
The Claude Agent SDK provides primitives for building autonomous AI agents that operate in continuous feedback loops: gathering context, taking action, verifying results, and iterating until objectives are achieved.
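The gather–act–verify loop can be sketched in plain Python. This is a minimal illustration of the control flow only; the function names (`gather_context`, `take_action`, `verify`) are placeholders, not SDK API:

```python
def run_agent(objective, gather_context, take_action, verify, max_iterations=10):
    """Drive an agent through the gather -> act -> verify loop until done."""
    for _ in range(max_iterations):
        context = gather_context(objective)       # e.g. read files, call tools
        result = take_action(objective, context)  # produce a candidate result
        ok, feedback = verify(result)             # check against the objective
        if ok:
            return result
        # Feed the verification feedback into the next iteration.
        objective = f"{objective}\nFeedback from last attempt: {feedback}"
    raise RuntimeError("objective not achieved within the iteration budget")
```

The key design point is that verification output flows back into the next iteration, which is what makes the loop converge rather than merely retry.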
- **Context isolation.** Each sub-agent operates with an independent context window, preventing cross-contamination.
- **Parallel execution.** Multiple agents run simultaneously, dramatically reducing total execution time.
- **Scoped tooling.** Each agent has access to only the tools it needs, reducing complexity and risk.
- **Model selection.** Choose Opus for orchestration, Sonnet for heavy lifting, Haiku for quick tasks.
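That model-selection guidance can be encoded as a simple routing table. The role names below are assumptions for illustration; the model aliases mirror the guidance above:

```python
# Illustrative routing table: role -> Claude model alias.
MODEL_BY_ROLE = {
    "orchestrator": "opus",   # planning and delegation
    "specialist": "sonnet",   # heavy lifting
    "quick_task": "haiku",    # fast, inexpensive checks
}

def pick_model(role: str) -> str:
    """Return the model alias for a role, defaulting to sonnet."""
    return MODEL_BY_ROLE.get(role, "sonnet")
```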
Performance Characteristics
Multi-agent systems cost more to run but deliver significantly better results (source: Anthropic Engineering Blog, 2025).
Sub-Agent Structure
Sub-agents are defined as Markdown files with YAML frontmatter.
Directory Structure
```
your-project/
├── .claude/
│   ├── agents/                 # Project-level (HIGHEST priority)
│   │   ├── dq-profiler.md
│   │   ├── dq-recommender.md
│   │   └── data-modeller.md
│   └── settings.local.json
├── .mcp.json                   # MCP server configs
└── CLAUDE.md                   # Project context

~/.claude/
└── agents/                     # User-level (across projects)
    └── global-reviewer.md
```
Agent Definition
---
name: dq-profiler
description: Data Quality Profiler. Use PROACTIVELY
when analysing datasets or tables.
tools: Read, Bash, Glob, Grep
model: sonnet
permissionMode: default
skills: data-profiling, sql-analysis
---
# Data Quality Profiler Agent
You are an expert Data Quality Profiler
specialising in statistical analysis...
## Core Responsibilities
1. Connect to data sources
2. Execute profiling queries
3. Detect patterns and anomalies
4. Generate comprehensive reportsConfiguration Fields
| Field | Required | Type | Description |
|---|---|---|---|
| name | Yes | string | Unique identifier (lowercase, hyphens) |
| description | Yes | string | Natural language — Claude uses this to decide delegation |
| tools | No | string | Comma-separated list. Omit to inherit ALL tools. |
| model | No | string | `sonnet` \| `opus` \| `haiku` \| `inherit` (default: `sonnet`) |
| permissionMode | No | string | `default` \| `acceptEdits` \| `bypassPermissions` \| `plan` |
| skills | No | string | Comma-separated skills to auto-load on start |
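Since only `name` and `description` are required, a minimal agent file can be very small. The agent below is hypothetical and shown only to illustrate the required fields:

```markdown
---
name: quick-checker
description: Runs fast sanity checks on query results.
---
You run quick sanity checks and report pass/fail with a one-line reason.
```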
Available Tool Categories
Restrict agent capabilities by specifying which tools they can access.
| Category | Tools | Typical agents |
|---|---|---|
| Read-only | Read, Grep, Glob | Reviewers, auditors, analysers |
| Research | Read, Grep, Glob, WebFetch, WebSearch | Research agents, documentation |
| Code writers | Read, Write, Edit, Bash, Glob, Grep | Developers, generators |
| Full access | (omit `tools` field) | Complex multi-step tasks |
| MCP tools | `mcp__server__tool_name` | Database queries, cloud APIs |
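A reviewer agent, for example, restricts itself to the read-only set so it can inspect code but never modify it. This agent definition is hypothetical:

```markdown
---
name: code-reviewer
description: Reviews code for defects and style issues. Use after significant edits.
tools: Read, Grep, Glob
model: sonnet
---
# Code Reviewer Agent
You review code without modifying it, reporting findings with file and line references.
```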
Model Context Protocol (MCP)
Connect agents to databases, cloud platforms, and external services.
MCP is a standardized protocol that enables Claude agents to connect to external systems. Each MCP server provides a set of tools that agents can invoke to query databases, call APIs, or interact with cloud services.
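MCP tools surface to agents under names of the form `mcp__<server>__<tool>`. A small helper illustrates working with that naming convention; the specific tool names in the sample list are illustrative, not guaranteed by any particular server:

```python
def mcp_tools_for_server(tools, server):
    """Return the MCP tool names exposed by one server (prefix convention)."""
    prefix = f"mcp__{server}__"
    return [t for t in tools if t.startswith(prefix)]

# A mixed tool list: built-in tools plus MCP tools from two servers.
tools = ["Read", "Grep", "mcp__postgres__query", "mcp__snowflake__run_sql"]
```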
Supported Platforms
- `@modelcontextprotocol/server-postgres`
- `snowflake-mcp-server`
- `@aws/mcp-server`
- `databricks-mcp`

Configuration Example
```json
// .mcp.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "${POSTGRES_URL}"
      }
    },
    "snowflake": {
      "command": "npx",
      "args": ["-y", "snowflake-mcp-server"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "${SNOWFLAKE_ACCOUNT}",
        "SNOWFLAKE_USER": "${SNOWFLAKE_USER}",
        "SNOWFLAKE_PASSWORD": "${SNOWFLAKE_PASSWORD}"
      }
    }
  }
}
```

Workflow Patterns
Choose the right orchestration strategy for your use case.
Sequential
Agents execute one after another, passing outputs as inputs.
Example: Discovery → Profiler → Recommender → Governance

Parallel
Multiple agents execute simultaneously on independent tasks.
Example: Profile customers, orders, products in parallel

Hierarchical
Orchestrator delegates to specialists, aggregates results.
Example: Opus orchestrator → Sonnet specialists

Iterative
Agent loops until success criteria are met.
Example: Generate rules → Test → Refine → Repeat

Resumable
Checkpoint and resume for long-running tasks.
Example: Profile warehouse → Pause → Resume next day

Workflow Examples
```
# Sequential workflow for new source onboarding
> First use data-discovery to catalogue the Salesforce API,
> then use dq-profiler to analyse all discovered tables,
> then pass results to dq-recommender for rule generation,
> finally use governance-checker to classify sensitivity.

# Parallel workflow for estate-wide profiling
> Use dq-profiler in parallel across bronze_customers,
> bronze_orders, and bronze_products tables.

# Resumable workflow for large assessments
> Use dq-profiler to start analysing the data warehouse
# [Returns agentId: "abc123"]

# Resume later
> Resume agent abc123 and continue from the silver layer
```
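The parallel pattern can be sketched with asyncio, fanning one profiling coroutine out across independent tables. The `profile` coroutine here is a stand-in, not SDK API; a real version would dispatch a dq-profiler sub-agent:

```python
import asyncio

async def profile(table: str) -> dict:
    """Stand-in for running a dq-profiler sub-agent against one table."""
    await asyncio.sleep(0)  # a real version would await the agent run here
    return {"table": table, "status": "profiled"}

async def profile_in_parallel(tables):
    # One task per table; asyncio.gather returns results in input order.
    return await asyncio.gather(*(profile(t) for t in tables))

results = asyncio.run(profile_in_parallel(
    ["bronze_customers", "bronze_orders", "bronze_products"]))
```

Because the tables are independent, total wall-clock time approaches that of the slowest single profile rather than the sum of all three.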
Ready to Implement?
Get started with our agent templates or book a call to discuss your implementation.