Technical Deep-Dive

Agent Framework Architecture

Comprehensive technical documentation on Claude Agent SDK, sub-agent anatomy, Model Context Protocol integration, and orchestration patterns.

Architecture

Multi-Agent System Design

The Claude Agent SDK provides primitives for building autonomous AI agents that operate in continuous feedback loops: gathering context, taking action, verifying results, and iterating until objectives are achieved.
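This feedback loop can be sketched in a few lines. The helper names below (run_agent, act, verify) are illustrative stand-ins, not SDK APIs:

```python
# Minimal sketch of the agent feedback loop: gather context, take
# action, verify the result, and iterate until the objective is met.
# All names here are illustrative, not Claude Agent SDK calls.

def run_agent(objective, act, verify, max_iterations=5):
    context = {"objective": objective, "history": []}
    for attempt in range(1, max_iterations + 1):
        result = act(context)              # take action with current context
        context["history"].append(result)  # feed the result back as context
        if verify(result):                 # check against the objective
            return {"status": "success", "attempts": attempt, "result": result}
    return {"status": "gave_up", "attempts": max_iterations}

# Toy run: each iteration improves on the last until the target is met.
outcome = run_agent(
    objective=10,
    act=lambda ctx: len(ctx["history"]) + 8,
    verify=lambda r: r >= 10,
)
```

The bounded iteration count is the important design choice: without it, an agent that cannot satisfy its verifier loops forever.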

Context Isolation

Each sub-agent operates with an independent context window, preventing cross-contamination.

Parallel Execution

Multiple agents run simultaneously, dramatically reducing total execution time.

Specialized Tooling

Each agent has access to only the tools it needs, reducing complexity and risk.

Model Selection

Choose Opus for orchestration, Sonnet for heavy lifting, Haiku for quick tasks.

Performance Characteristics

Task completion improvement: +90.2%
Token consumption: ~15x (higher cost, significantly better results)
Context pollution risk: isolated

Source: Anthropic Engineering Blog, 2025

Anatomy

Sub-Agent Structure

Sub-agents are defined as Markdown files with YAML frontmatter.

Directory Structure

your-project/
├── .claude/
│   ├── agents/           # Project-level (HIGHEST priority)
│   │   ├── dq-profiler.md
│   │   ├── dq-recommender.md
│   │   └── data-modeller.md
│   └── settings.local.json
├── .mcp.json             # MCP server configs
└── CLAUDE.md             # Project context

~/.claude/
└── agents/               # User-level (shared across projects)
    └── global-reviewer.md
Agent definitions are resolved in priority order:

1. .claude/agents/ (project-specific, highest priority)
2. CLI --agents flag (current session)
3. ~/.claude/agents/ (user-level, lowest priority)
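This precedence amounts to a first-match lookup across the three locations. A sketch, with plain dictionaries standing in for the real definition stores:

```python
# First definition wins: project level, then session (--agents flag),
# then user level. The dicts are stand-ins for the real lookup.

def resolve_agent(name, project, session, user):
    for scope in (project, session, user):
        if name in scope:
            return scope[name]
    return None

path = resolve_agent(
    "dq-profiler",
    project={"dq-profiler": ".claude/agents/dq-profiler.md"},
    session={},
    user={"dq-profiler": "~/.claude/agents/dq-profiler.md"},
)
```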

Agent Definition

---
name: dq-profiler
description: Data Quality Profiler. Use PROACTIVELY
             when analysing datasets or tables.
tools: Read, Bash, Glob, Grep
model: sonnet
permissionMode: default
skills: data-profiling, sql-analysis
---

# Data Quality Profiler Agent

You are an expert Data Quality Profiler
specialising in statistical analysis...

## Core Responsibilities
1. Connect to data sources
2. Execute profiling queries
3. Detect patterns and anomalies
4. Generate comprehensive reports

Configuration Fields

| Field | Required | Type | Description |
|---|---|---|---|
| name | Yes | string | Unique identifier (lowercase, hyphens) |
| description | Yes | string | Natural language; Claude uses this to decide delegation |
| tools | No | string | Comma-separated list. Omit to inherit ALL tools. |
| model | No | string | sonnet \| opus \| haiku \| inherit (default: sonnet) |
| permissionMode | No | string | default \| acceptEdits \| bypassPermissions \| plan |
| skills | No | string | Comma-separated skills to auto-load on start |

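Reading a definition like the one above is a matter of splitting the frontmatter from the prompt body and parsing the fields. A naive sketch (simple "key: value" lines only; a real loader would use a YAML parser):

```python
# Naive parse of an agent definition file: frontmatter fields plus
# the Markdown system prompt. Illustrative only, not SDK code.

def parse_agent_definition(text):
    assert text.startswith("---"), "frontmatter must open the file"
    header, _, body = text[3:].partition("\n---")
    fields = {}
    for line in header.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    # comma-separated fields become lists (tools, skills)
    for key in ("tools", "skills"):
        if key in fields:
            fields[key] = [t.strip() for t in fields[key].split(",")]
    return fields, body.strip()

fields, prompt = parse_agent_definition(
    "---\nname: dq-profiler\nmodel: sonnet\n"
    "tools: Read, Bash, Glob, Grep\n---\n"
    "# Data Quality Profiler Agent\n..."
)
```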
Tools

Available Tool Categories

Restrict agent capabilities by specifying which tools they can access.

| Category | Tools | Typical use |
|---|---|---|
| Read-Only | Read, Grep, Glob | Reviewers, auditors, analysers |
| Research | Read, Grep, Glob, WebFetch, WebSearch | Research agents, documentation |
| Code Writers | Read, Write, Edit, Bash, Glob, Grep | Developers, generators |
| Full Access | (omit tools field) | Complex multi-step tasks |
| MCP Tools | mcp__server__tool_name | Database queries, cloud APIs |
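An allow-list like this reduces to a membership check, with the absent tools field meaning no restriction. A sketch of that rule (not SDK code):

```python
# Sketch of allow-list enforcement: an agent with a tools field may
# only call listed tools; omitting the field inherits everything.
READ_ONLY = {"Read", "Grep", "Glob"}

def tool_allowed(tool, allow_list=None):
    return allow_list is None or tool in allow_list

checks = (
    tool_allowed("Grep", READ_ONLY),   # permitted for a reviewer
    tool_allowed("Write", READ_ONLY),  # blocked: read-only agent
    tool_allowed("Write"),             # no tools field: inherit all
)
```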

Integration

Model Context Protocol (MCP)

Connect agents to databases, cloud platforms, and external services.

MCP is a standardized protocol that enables Claude agents to connect to external systems. Each MCP server provides a set of tools that agents can invoke to query databases, call APIs, or interact with cloud services.

Supported Platforms

PostgreSQL: @modelcontextprotocol/server-postgres
Snowflake: snowflake-mcp-server
AWS: @aws/mcp-server
Databricks: databricks-mcp

Configuration Example

// .mcp.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres"
      ],
      "env": {
        "POSTGRES_CONNECTION_STRING": "${POSTGRES_URL}"
      }
    },
    "snowflake": {
      "command": "npx",
      "args": ["-y", "snowflake-mcp-server"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "${SNOWFLAKE_ACCOUNT}",
        "SNOWFLAKE_USER": "${SNOWFLAKE_USER}",
        "SNOWFLAKE_PASSWORD": "${SNOWFLAKE_PASSWORD}"
      }
    }
  }
}
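Placeholders such as ${POSTGRES_URL} keep credentials out of the config file and are resolved from the environment. A minimal sketch of that substitution (the expansion behaviour is assumed here for illustration, not taken from the SDK's own loader):

```python
# Expand ${VAR} placeholders in an .mcp.json-style env block.
# Assumed behaviour for illustration; not the SDK's own loader.
import json
import re

def expand_env(text, env):
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

raw = '{"env": {"POSTGRES_CONNECTION_STRING": "${POSTGRES_URL}"}}'
cfg = json.loads(expand_env(raw, {"POSTGRES_URL": "postgres://localhost/dev"}))
```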

Orchestration

Workflow Patterns

Choose the right orchestration strategy for your use case.

Sequential

Agents execute one after another, passing outputs as inputs.

Best for: Linear workflows with dependencies
Discovery → Profiler → Recommender → Governance

Parallel

Multiple agents execute simultaneously on independent tasks.

Best for: Independent tasks, speed optimization
Profile customers, orders, products in parallel

Hierarchical

Orchestrator delegates to specialists, aggregates results.

Best for: Complex multi-domain tasks
Opus orchestrator → Sonnet specialists

Iterative

Agent loops until success criteria are met.

Best for: Debugging, refinement tasks
Generate rules → Test → Refine → Repeat

Resumable

Checkpoint and resume for long-running tasks.

Best for: Large data estate assessments
Profile warehouse → Pause → Resume next day

Workflow Examples

# Sequential workflow for new source onboarding
> First use data-discovery to catalogue the Salesforce API,
> then use dq-profiler to analyse all discovered tables,
> then pass results to dq-recommender for rule generation,
> finally use governance-checker to classify sensitivity.

# Parallel workflow for estate-wide profiling
> Use dq-profiler in parallel across bronze_customers,
> bronze_orders, and bronze_products tables.

# Resumable workflow for large assessments
> Use dq-profiler to start analysing the data warehouse
# [Returns agentId: "abc123"]

# Resume later
> Resume agent abc123 and continue from the silver layer
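The sequential and parallel prompts above can be modelled as plain coroutines. In this sketch, agent() is a stand-in for a real sub-agent invocation, not an SDK call:

```python
# Contrast sequential (chained outputs) with parallel (independent
# tasks) orchestration. agent() stands in for a real sub-agent call.
import asyncio

async def agent(name, data):
    await asyncio.sleep(0)  # placeholder for model + tool round-trips
    return f"{name}({data})"

async def sequential(source):
    # each step consumes the previous step's output
    catalogued = await agent("data-discovery", source)
    profiled = await agent("dq-profiler", catalogued)
    return await agent("dq-recommender", profiled)

async def parallel(tables):
    # independent tables profiled concurrently
    return await asyncio.gather(*(agent("dq-profiler", t) for t in tables))

seq = asyncio.run(sequential("salesforce"))
par = asyncio.run(parallel(["bronze_customers", "bronze_orders"]))
```

The trade-off mirrors the patterns above: sequential preserves data dependencies; parallel wins on wall-clock time when tasks are independent.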

Ready to Implement?

Get started with our agent templates or book a call to discuss your implementation.