
Continuous Planning and Knowledge Consolidation with Markdown

Explores iterative planning technique using live Markdown for AI task execution, balancing continuous updates with token efficiency.

February 23, 2026
8 min read
By ClawList Team

Continuous Planning with Live Markdown: The AI Skill That Thinks While It Works

How iterative, self-updating Markdown documents are transforming complex AI task execution — and what developers need to know about the token cost.


When we think about how AI agents tackle complex tasks, we often imagine a clean, linear workflow: receive instruction → plan → execute → deliver. But anyone who has built or worked closely with AI automation systems knows that reality is far messier. Tasks evolve. New information surfaces mid-execution. Initial assumptions turn out to be wrong.

A fascinating skill shared by @leeoxiang on X/Twitter challenges this linear model entirely — and the implications for AI automation design are worth unpacking in depth. The approach draws inspiration from Manus, the autonomous AI agent, and introduces what might be described as a "continuous thinking, continuous rewriting" pattern using live Markdown documents. Let's break down what this technique is, why it matters, and how developers can apply it in their own AI pipelines.


What Is Continuous Planning with Live Markdown?

At its core, this skill replaces the traditional one-shot planning model with an iterative, self-updating planning document. Here's the fundamental shift:

  • Traditional approach: The AI generates a plan at the beginning of a task, then executes it step by step with little to no revision.
  • Continuous planning approach: The AI maintains a living Markdown document that is constantly revised and enriched as the task unfolds.

Think of it like the difference between writing a project brief once in a meeting and never touching it again versus maintaining a shared Notion or Confluence page that the team actively updates as the project progresses. The document doesn't just describe what will happen — it becomes a dynamic knowledge artifact that captures what is happening, what has been learned, and what needs to change.

The Markdown document typically contains several evolving sections:

# Task: [Current Objective]

## Current Understanding
- Initial interpretation of the task
- Updated assumptions as of [timestamp/step]

## Plan
- [x] Completed step 1
- [ ] Step 2 (revised based on finding X)
- [ ] Step 3 (newly added after discovering Y)

## Key Findings & Knowledge
- Discovery 1: ...
- Discovery 2: ...

## Open Questions
- What is the correct approach for edge case Z?

This document is read back into the context at each reasoning cycle, giving the AI a persistent, structured memory of everything it has done, learned, and still needs to do. It's planning meets knowledge management, all within a single continuously updated file.
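To make the read-back concrete, here is a minimal sketch of how the document might be assembled into the model's context on each cycle. The function name and prompt wording are illustrative assumptions, not part of the original skill:

```python
def build_prompt(plan_markdown: str, user_task: str) -> str:
    # Prepend the live plan so the model sees its full working memory
    # (prior steps, findings, open questions) before reasoning further.
    # The prompt shape here is a hypothetical example.
    return (
        "You are executing a long-running task.\n"
        "Here is your current plan and knowledge document:\n\n"
        f"{plan_markdown}\n\n"
        f"Task: {user_task}\n"
        "Decide the next step, then output an updated version of the document."
    )
```

Because the whole document rides along in every prompt, its size directly drives per-cycle token cost, which is why the compression strategies discussed below matter.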


Why This Pattern Is Powerful for Complex AI Tasks

The insight behind this technique is deceptively simple: complex tasks require complex thinking, and complex thinking is rarely linear.

1. Plans Need to Evolve With Reality

When an AI agent is tasked with something non-trivial — say, researching a technical topic, debugging a multi-file codebase, or orchestrating a multi-step data pipeline — the initial plan is almost always incomplete. New sub-problems emerge. Dependencies that weren't visible at the start become critical. Assumptions get invalidated by actual results.

By continuously rewriting the plan in Markdown, the AI is essentially practicing adaptive reasoning. Instead of being locked into an outdated roadmap, it recalibrates after each meaningful step. This mirrors how experienced engineers actually work: they sketch a plan, hit unexpected complexity, revise the approach, and document what they've learned along the way.

A practical example: imagine an AI agent tasked with auditing a GitHub repository for security vulnerabilities. A one-shot plan might say: "Scan all Python files for SQL injection patterns." But mid-execution, the agent discovers the repo uses an ORM and raw SQL is almost nonexistent. A continuous planning system would update the plan on the fly — pivoting to check ORM misuse patterns, then adding a new section to the knowledge document about the codebase's architecture.

2. Knowledge Consolidation Happens Automatically

One of the hidden benefits of this approach is emergent knowledge documentation. As the AI updates its Markdown file throughout task execution, it's simultaneously building a structured record of:

  • What approaches were tried and why
  • What worked and what didn't
  • Domain-specific insights discovered during the task
  • Edge cases and exceptions encountered

This is enormously valuable for human oversight and auditability. At the end of a long agentic run, you don't just get a result — you get a well-structured document that explains the entire reasoning journey. For teams using AI automation in production, this kind of transparency is critical.

It's also useful for multi-agent systems where one agent's findings need to be handed off to another. Instead of passing raw outputs or overly long conversation histories, you pass the structured Markdown — a compact, human-readable knowledge artifact.

3. Inspired by Manus: Proven in Practice

The explicit reference to Manus is significant. Manus gained attention in the AI community precisely because its agentic architecture demonstrated that AI systems could handle sustained, complex tasks over long time horizons — something that requires exactly this kind of iterative self-reflection and planning.

The Manus-inspired design here suggests maintaining persistent working memory through the document, with each reasoning loop:

  1. Reading the current state of the Markdown plan
  2. Acting on the next appropriate step
  3. Updating the document with results, discoveries, and revised plans
  4. Repeating until task completion

This loop mirrors the ReAct (Reasoning + Acting) paradigm popular in agentic AI research, but with the added dimension of persistent, structured documentation rather than ephemeral scratchpad reasoning.
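The four steps above can be sketched as a small driver loop. Everything here is a hypothetical scaffold under stated assumptions: the callables for reading, acting, and updating would wrap your own file I/O and model calls, and `is_done` is an assumed completion check, not an API from Manus or the original skill:

```python
from typing import Callable

def continuous_planning_loop(
    read_plan: Callable[[], str],             # 1. load the current plan.md
    act: Callable[[str], str],                # 2. execute the next step given the plan
    update_plan: Callable[[str, str], None],  # 3. write results/revisions back
    is_done: Callable[[str], bool],           # completion predicate over the plan
    max_cycles: int = 50,                     # guard against runaway loops
) -> str:
    """Read-act-update cycle over a live Markdown plan (illustrative sketch)."""
    plan = read_plan()
    for _ in range(max_cycles):
        if is_done(plan):
            break
        result = act(plan)         # act on the next appropriate step
        update_plan(plan, result)  # record results, discoveries, revised plan
        plan = read_plan()         # re-read the updated working memory
    return plan
```

The `max_cycles` guard is a practical addition: long-horizon agents need an explicit stop condition so a stalled plan cannot loop indefinitely.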


The Real Cost: Token Consumption and How to Manage It

This brings us to the elephant in the room — and it's a big one for anyone deploying this pattern in production.

Because the Markdown document is read back into context on every reasoning cycle, and because it grows as more information is consolidated into it, token consumption scales aggressively with task complexity. A task that might take 5,000 tokens with a one-shot plan could easily consume 3–5x more tokens with continuous planning, depending on how long and how detailed the Markdown document becomes.

This is not a reason to avoid the pattern — but it is a reason to engineer it thoughtfully:

  • Summarize aggressively: Periodically compress older sections of the document. Completed steps can be collapsed into brief summaries rather than detailed logs.
  • Prune completed context: Once a sub-task is definitively done and its learnings are captured, remove the verbose step-by-step log and retain only the key insight.
  • Use hierarchical sections: Keep a high-level summary at the top (always included in context) with detailed sub-sections that can be selectively included only when relevant.
  • Set token budgets per update cycle: Build tooling that monitors document size and triggers summarization automatically when thresholds are hit.
def should_compress_plan(plan_markdown: str, token_limit: int = 2000) -> bool:
    # Rough heuristic: roughly 1.3 tokens per whitespace-delimited word.
    estimated_tokens = len(plan_markdown.split()) * 1.3
    return estimated_tokens > token_limit

def compress_completed_sections(plan_markdown: str) -> str:
    # Minimal sketch: collapse checked-off items ("- [x] ...") into a single
    # summary line, keeping open items and prose intact. A real Markdown
    # parser could do this per section instead.
    lines = plan_markdown.splitlines()
    done = sum(1 for ln in lines if ln.lstrip().startswith("- [x]"))
    kept = [ln for ln in lines if not ln.lstrip().startswith("- [x]")]
    if done:
        kept.append(f"- {done} completed step(s) collapsed to save tokens")
    return "\n".join(kept)

The token cost is real, but for complex, high-value tasks where accuracy and adaptability matter more than raw efficiency, continuous planning often justifies the investment.


Conclusion: A New Mental Model for Agentic AI Design

The continuous planning with live Markdown technique represents a meaningful evolution in how we think about AI task execution. It moves the needle from "AI as executor" to "AI as collaborative, adaptive reasoner" — one that plans, learns, revises, and documents as it goes.

For developers and AI engineers building agentic systems, the takeaways are clear:

  • Don't treat the initial plan as sacred. Build systems that expect and accommodate plan revision.
  • Use structured documents as working memory. Markdown is an elegant, model-friendly format for this.
  • Invest in token management. Continuous planning is powerful but not free — design compression strategies from the start.
  • Value the knowledge artifact. The updated Markdown at task completion isn't just a byproduct — it's a deliverable in its own right.

As AI agents take on increasingly complex, long-horizon tasks, patterns like this will move from experimental to essential. The teams that master iterative planning architectures now will be far better positioned as autonomous AI systems become central to real-world workflows.


Inspired by an insight from @leeoxiang on X/Twitter. Originally referencing the Manus autonomous AI agent architecture.

Published on ClawList.io — your resource hub for AI automation and OpenClaw skills.

Tags

#markdown #planning #knowledge-management #ai-workflows #prompt-engineering