OpenClaw 9-Layer Prompt Architecture
Why your AI agent needs layered prompts. Deep dive into OpenClaw's 9-layer architecture with design rationale and implementation code.
OpenClaw Agent System Prompt Architecture Explained (9 Layers)
Author: huangserva (@servasyy_ai)
Published: March 5, 2026
Source: X/Twitter
This document provides a detailed breakdown of the complete System Prompt structure that OpenClaw Agent sends to LLMs.
Version
- Version: v2.1
- Updated: 2026-03-05
Quick Start for Beginners
- Layer 7 (Workspace Files) - Configuration files you can directly edit
- Layer 8 (Bootstrap Hook) - Scripts you can write to dynamically inject content
- Other layers - Auto-generated by the framework, understand but don't modify
Common Use Cases
- Want to define Agent identity? → Edit Layer 7's IDENTITY.md
- Want to add project documentation? → Use Layer 8's bootstrap-extra-files Hook
- Want to inject real-time context? → Use Layer 8's before_prompt_build Hook
- Want to control file size? → Adjust bootstrapMaxChars configuration
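To make the bootstrapMaxChars idea concrete: the key name comes from this article, but the loader logic below is an assumed sketch, not OpenClaw source. It shows the general pattern of capping how much of a workspace file is injected into the prompt.

```python
# Hypothetical sketch of a bootstrapMaxChars-style cap on injected file
# content. The config key name is from the article; everything else here
# (function name, truncation marker) is illustrative.

def truncate_bootstrap_file(content: str, bootstrap_max_chars: int) -> str:
    """Cap a file's contribution to the system prompt at a character budget."""
    if len(content) <= bootstrap_max_chars:
        return content
    marker = "\n[...truncated...]"
    # Reserve room for the marker so the result stays within budget.
    return content[: bootstrap_max_chars - len(marker)] + marker
```

The design choice to signal truncation explicitly (rather than silently clipping) matters: the LLM should know it is seeing a partial file rather than a complete one.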
The 9 Layers
Layer 1: Identity & Core Instructions
Analogy: Like the "Instructions" section of an operating manual - tells the LLM who it is, what it can do, and how it should respond.
Design Trade-off:
- Balance: Flexibility vs Consistency
- Decision: Framework generates uniformly to ensure consistent base behavior across all Agents
- Benefits:
- Users don't need to repeat basic rules for each Agent
- All Agents automatically gain new capabilities when framework upgrades
- Reduces configuration error risk
- Cost:
- Users cannot modify these core rules
- Special behaviors can only be achieved indirectly through Layer 7/8
Layer 2: Tool Definitions
Analogy: Like a Swiss Army knife's tool list - tells the LLM what tools you have, what each does, and how to use them.
Why JSON Schema?
- Balance: Flexibility vs Type Safety
- Decision: Use strict JSON Schema to define tool parameters
- Benefits:
- LLM can understand tool usage more accurately
- Framework can validate parameters before calling
- Auto-generate documentation and type definitions
- Cost:
- Adding a new tool requires writing a complete Schema
- Cannot support fully dynamic parameter structures
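A minimal sketch of what a JSON Schema tool definition and pre-call validation could look like. The tool name, its fields, and the validator are hypothetical, not OpenClaw's actual schema; they illustrate the trade-off above (strict parameters in exchange for pre-call checking).

```python
# Illustrative tool definition in the JSON Schema style the article describes.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the workspace and return its contents.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path relative to the workspace root"},
            "max_bytes": {"type": "integer", "description": "Optional read limit"},
        },
        "required": ["path"],
    },
}

def validate_call(tool: dict, args: dict) -> list[str]:
    """Minimal pre-call validation: required keys and basic scalar types."""
    schema = tool["parameters"]
    errors = [f"missing required parameter: {k}"
              for k in schema.get("required", []) if k not in args]
    type_map = {"string": str, "integer": int, "object": dict}
    for key, value in args.items():
        prop = schema["properties"].get(key)  # unknown keys are ignored here
        if prop and not isinstance(value, type_map[prop["type"]]):
            errors.append(f"{key}: expected {prop['type']}")
    return errors
```

A real framework would use a full JSON Schema validator; the point is that the framework can reject a malformed call before it ever reaches the tool.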
Layer 3: Skills Catalog
Analogy: Like a restaurant's "specialty menu" - tells the LLM what professional domain "recipes" are available to call.
Why directory scanning instead of manual registration?
- Balance: Flexibility vs Maintenance Cost
- Decision: Auto-scan ~/development/openclaw/skills/ directory
- Benefits:
- Adding a new Skill only requires dropping it into the directory, with no config changes
- All Agents automatically get new Skills
- Reduces configuration error risk
- Cost:
- Cannot precisely control which Skills each Agent can use
- All Skills injected into System Prompt (increases token consumption)
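Directory-based discovery can be sketched as follows. The ~/development/openclaw/skills/ path comes from the article; the SKILL.md manifest convention is an assumption for illustration.

```python
# Sketch of convention-over-configuration skill discovery: any subdirectory
# containing a manifest becomes a skill, with no registration step.
# The SKILL.md filename is assumed, not confirmed by the article.
from pathlib import Path

def discover_skills(skills_dir: Path) -> list[dict]:
    """Scan each subdirectory for a skill manifest."""
    skills = []
    for entry in sorted(skills_dir.iterdir()):
        manifest = entry / "SKILL.md"
        if entry.is_dir() and manifest.exists():
            skills.append({"name": entry.name, "manifest": str(manifest)})
    return skills
```

This is also where the stated cost shows up: because discovery is global, every discovered skill ends up in every Agent's System Prompt unless per-Agent filtering is added on top.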
Layer 4: Model Aliases
Analogy: Like "keyboard shortcuts" - give complex model paths short aliases for easy calling.
Why model aliases?
- Balance: Flexibility vs Readability
- Decision: Allow users to define short aliases for commonly used models
- Benefits:
- Simplify model calls (glm-5 instead of zhipu/glm-5)
- Support multi-Provider switching (same alias can map to different Providers)
- Convenient for A/B testing and model migration
- Cost:
- Need to maintain alias configuration file
- May cause confusion (same alias in different Agents might point to different models)
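Alias resolution is a one-line lookup. The glm-5 → zhipu/glm-5 mapping is the article's own example; the resolver and the pass-through behavior for fully qualified names are assumptions.

```python
# Hypothetical alias table in the spirit of Layer 4.
MODEL_ALIASES = {
    "glm-5": "zhipu/glm-5",  # the article's example mapping
}

def resolve_model(name: str) -> str:
    """Expand a short alias; pass fully qualified provider paths through."""
    return MODEL_ALIASES.get(name, name)
```

Swapping the table's values is what enables the A/B testing and provider migration mentioned above, and it is also the source of the stated confusion cost: two Agents with different tables can resolve the same alias differently.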
Layer 5: Protocol Specifications
Analogy: Like "traffic rules" - define standard protocols for Agent-system interaction.
Why protocol specifications?
- Balance: Freedom vs Consistency
- Decision: Define standardized interaction protocols (Silent Replies, Heartbeats, Reply Tags, etc.)
- Benefits:
- Ensure consistent behavior across all Agents
- Support automated monitoring and health checks
- Simplify multi-Agent collaboration
- Cost:
- Limits Agent's free expression
- Requires the LLM to follow the protocol strictly (which it may fail to do)
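One way a framework might enforce the protocols named above on the parsing side. The NO_REPLY token and the reply tag are illustrative stand-ins; the article does not specify OpenClaw's actual wire format. The fallback branch reflects the stated cost: the LLM may ignore the protocol, so the parser must degrade gracefully.

```python
# Hypothetical parser for Silent Replies and Reply Tags (Layer 5).
# Token and tag names are assumptions, not OpenClaw's real protocol.
import re

SILENT_TOKEN = "NO_REPLY"

def parse_agent_output(raw: str) -> dict:
    """Classify output as silent, a tagged reply, or an untagged fallback."""
    if raw.strip() == SILENT_TOKEN:
        return {"kind": "silent", "text": ""}
    match = re.search(r"<reply>(.*?)</reply>", raw, re.DOTALL)
    if match:
        return {"kind": "reply", "text": match.group(1).strip()}
    # Protocol violation: treat the whole output as the reply rather than drop it.
    return {"kind": "untagged", "text": raw.strip()}
```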
Layer 6: Runtime Context
Analogy: Like a "dashboard" - tells the LLM the real-time status of the current runtime environment.
Why inject runtime info every time?
- Balance: Token Consumption vs Context Accuracy
- Decision: Inject latest runtime state with each request
- Benefits:
- LLM knows current time (avoid time confusion)
- LLM knows current model (avoid capability misjudgment)
- LLM knows current environment (avoid path errors)
- Cost:
- Each request carries roughly 2 KB of extra prompt text, paid for in tokens every turn
- Information may contain redundancy
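A sketch of per-request runtime injection. The specific field names below are assumptions; the point is that this block is rebuilt fresh on every request rather than cached, which is exactly where the recurring token cost comes from.

```python
# Hypothetical Layer 6 builder: regenerated per request so time, model,
# and environment are never stale. Field names are illustrative.
from datetime import datetime, timezone
import platform

def build_runtime_context(model: str, workspace: str) -> str:
    lines = [
        "## Runtime",
        f"Current time (UTC): {datetime.now(timezone.utc).isoformat()}",
        f"Active model: {model}",
        f"OS: {platform.system()}",
        f"Workspace: {workspace}",
    ]
    return "\n".join(lines)
```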
Layer 7: Workspace Files (User-Editable)
Analogy: Like "your work notes" - static configuration files you can directly edit.
Why is only this layer statically editable?
- Balance: Framework Stability vs User Freedom
- Decision: Separate what changes from what stays fixed - the framework layers guarantee consistency, the user layer allows personalization
- Benefits:
- Users can define Agent identity, work specifications, memory
- Framework upgrades won't break user configuration
- Config files can be version controlled, backed up, shared
- Cost:
- Users cannot modify framework core behavior
- Need to learn TELOS framework and file structure
Core Files:
- IDENTITY.md - Agent identity and persona
- MEMORY.md - Long-term memory and learned patterns
- TOOLS.md - Tool documentation and usage notes
- AGENTS.md - Workspace index and guidelines
- USER.md - Information about the human user
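Assembling this layer can be sketched as a simple concatenation of whichever core files exist. The five file names come from the article; the loader itself is assumed, not OpenClaw source.

```python
# Sketch: building Layer 7 from the user-editable core files listed above.
from pathlib import Path

CORE_FILES = ["IDENTITY.md", "MEMORY.md", "TOOLS.md", "AGENTS.md", "USER.md"]

def load_workspace_layer(workspace: Path) -> str:
    """Concatenate existing core files, each under its own header."""
    sections = []
    for name in CORE_FILES:
        path = workspace / name
        if path.exists():  # missing files are simply skipped
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```

Because the layer is plain files on disk, it inherits everything the article promises: the files can be edited directly, version controlled, backed up, and shared.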
Layer 8: Bootstrap Hooks (Dynamic Injection)
Analogy: Like "plugins" - scripts that run at startup to dynamically inject content.
Available Hooks:
- bootstrap-extra-files - Add additional files to workspace context
- before_prompt_build - Inject real-time context before prompt construction
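A hypothetical before_prompt_build hook in the spirit of Layer 8: a user-written script that injects real-time context before the prompt is assembled. The hook signature (a context dict in, a context dict out) and the extra_sections key are assumptions; consult OpenClaw's hook documentation for the real interface.

```python
# Illustrative before_prompt_build hook: adds the current git branch to
# the prompt context. The hook's interface here is assumed.
import subprocess

def before_prompt_build(context: dict) -> dict:
    """Append a dynamic section to the prompt context."""
    try:
        branch = subprocess.run(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        branch = "unknown"  # not a git repo, or git not installed
    context.setdefault("extra_sections", []).append(f"Current git branch: {branch}")
    return context
```

This is the pattern behind the "inject real-time context" use case above: anything a script can compute at startup can be placed in front of the LLM.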
Layer 9: Message History
Analogy: Like "conversation transcript" - the actual back-and-forth between user and Agent.
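The transcript is typically a role-tagged message list. The shape below follows the common chat-completion convention rather than a confirmed OpenClaw format, with a trimming helper to show how frameworks usually bound this layer's token growth.

```python
# Illustrative Layer 9 transcript in the common role/content shape.
history = [
    {"role": "user", "content": "Summarize today's standup notes."},
    {"role": "assistant", "content": "Here are the three key points..."},
]

def append_turn(history: list[dict], role: str, content: str) -> None:
    history.append({"role": role, "content": content})

def trim_history(history: list[dict], max_turns: int) -> list[dict]:
    """Keep only the most recent turns to bound token usage."""
    return history[-max_turns:]

append_turn(history, "user", "Now draft the follow-up email.")
```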
Conclusion
Understanding these 9 layers helps you:
- Know what you can customize (Layers 7-8)
- Understand what the framework handles automatically (Layers 1-6, 9)
- Make informed decisions about Agent configuration
- Debug issues more effectively
For more information, visit the OpenClaw documentation.
This article was originally published by huangserva on X/Twitter. Republished with attribution.
Read with caution
- Product screenshots, pricing, and launch claims can change faster than the underlying workflow pattern, so verify current vendor details before rollout.
- Architecture patterns rarely transfer one-to-one across agent runtimes, so adapt the pattern to your own tool surface instead of copying it blindly.