
Convert Content to Xiaohongshu Infographic Series

Prompt engineering technique to decompose input content into Xiaohongshu-style infographic series using Claude, Gemini, and ChatGPT.

February 23, 2026
7 min read
By ClawList Team

How to Convert Any Content into Xiaohongshu-Style Infographic Series Using AI Prompt Engineering

Published on ClawList.io | Category: AI Automation | Author: ClawList Editorial Team


If you've ever wanted to repurpose a blog post, technical document, or other long-form content into a visually engaging, shareable social media series in the wildly popular Xiaohongshu (Little Red Book) aesthetic, this prompt engineering technique is exactly what you need. Shared originally by @dotey on X/Twitter, this prompt chain workflow decomposes any input content into a series of standalone infographic image prompts, ready to be fed into image generation models such as Gemini's Nano Banana Pro, Midjourney, or DALL-E.

Let's break down how it works, why it matters, and how developers and AI engineers can implement it in their own automation pipelines.


What Is the Xiaohongshu Infographic Style — and Why Does It Matter?

Xiaohongshu (小红书), often compared to a Chinese hybrid of Instagram and Pinterest, has a very distinctive visual language. Posts on the platform typically feature:

  • Soft pastel or vibrant gradient backgrounds
  • Large, readable headline text overlaid on the image
  • Bullet-pointed key takeaways formatted as infographic slides
  • Clean, minimalist layouts with icon accents
  • A multi-image carousel format (usually 3–9 slides per post)

This style has exploded in popularity not just in China but globally, because it communicates information densely yet digestibly. For developers and content creators building AI-powered content pipelines, being able to automatically convert raw content into this format represents a massive productivity unlock.

Whether you're building a content repurposing automation, a social media scheduler, or an OpenClaw skill that generates marketing assets, this technique plugs directly into your existing LLM workflows.


The Core Technique: Prompt Decomposition into Image Prompt Chains

The fundamental insight behind @dotey's approach is elegantly simple: instead of asking an AI to generate an image directly, you first ask it to plan the infographic series and then generate one standalone image prompt per slide.

Here's the general workflow:

[Raw Input Content]
        ↓
[LLM: Claude / Gemini / ChatGPT]
        ↓
[Structured Series Plan + Individual Image Prompts]
        ↓
[Image Generator: Gemini Nano Banana Pro / DALL-E / Midjourney]
        ↓
[Xiaohongshu-style Infographic Series]
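The chain above can be sketched in a few lines of Python. Here, `call_llm` and `generate_image` are placeholders for whichever provider SDKs you wire in, and `extract_slide_prompts` assumes the planner emits one fenced code block per slide, as the master prompt requests; this is a minimal sketch, not a production implementation:

```python
import re


def extract_slide_prompts(plan_text: str) -> list[str]:
    """Pull each slide's image prompt out of the fenced code blocks
    the planning prompt asks the LLM to emit."""
    return [m.strip() for m in re.findall(r"```[^\n]*\n(.*?)```", plan_text, re.DOTALL)]


def run_pipeline(content: str, call_llm, generate_image) -> list:
    """Stage 1: the LLM plans the series and writes per-slide prompts.
    Stage 2: each prompt is rendered independently by any image backend."""
    plan = call_llm(f"Decompose into Xiaohongshu infographic slides:\n\n{content}")
    return [generate_image(p) for p in extract_slide_prompts(plan)]
```

Because both stages are plain callables, swapping Claude for Gemini (or DALL-E for Midjourney) means swapping a function, nothing else.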

The Master Prompt Template

The core prompt used in this technique instructs the LLM to act as a visual content planner. Here's a cleaned-up, English-adapted version of the prompt structure:

You are a visual content strategist specializing in Xiaohongshu (Little Red Book) 
infographic series design.

Given the following input content, decompose it into a series of 5–8 individual 
infographic slides in the Xiaohongshu style.

For EACH slide, output:
1. Slide number and title
2. Core message (1–2 sentences)
3. Key bullet points (3–5 items)
4. A complete, standalone image generation prompt for that slide

Image prompts should follow this format:
- Style: Xiaohongshu infographic, clean minimalist design
- Color palette: [specify warm/cool/pastel tones]
- Typography: Bold headline, readable body text in Chinese aesthetic
- Layout: Card-style layout, centered composition
- Content elements: [specific text and icons for this slide]

Input Content:
[PASTE YOUR CONTENT HERE]

Output each slide's image prompt in a clearly labeled code block so it can be 
directly copied into an image generator.
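In an automation pipeline you rarely paste content by hand. A small helper can fill the template programmatically; the function and parameter names below (`build_master_prompt`, `min_slides`, `palette`) are illustrative conveniences, not part of the original prompt, and the template text is condensed from the version above:

```python
# Condensed, parameterized version of the master prompt template.
MASTER_TEMPLATE = """\
You are a visual content strategist specializing in Xiaohongshu (Little Red Book)
infographic series design.

Decompose the input content into {min_slides}-{max_slides} infographic slides.
For EACH slide, output: slide number and title, core message (1-2 sentences),
3-5 key bullet points, and a complete standalone image generation prompt.

Image prompts should specify: Xiaohongshu infographic style, {palette} color
palette, bold headline typography, and a card-style centered layout.

Input Content:
{content}

Output each slide's image prompt in a clearly labeled code block.
"""


def build_master_prompt(content: str, min_slides: int = 5, max_slides: int = 8,
                        palette: str = "warm pastel") -> str:
    """Fill the master template with content and styling knobs."""
    return MASTER_TEMPLATE.format(content=content, min_slides=min_slides,
                                  max_slides=max_slides, palette=palette)
```

Exposing the palette and slide-count as parameters makes it easy to keep visual consistency across batches of posts.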

Why This Two-Step Approach Works

The magic here is separation of concerns:

  1. The LLM handles content intelligence — it understands narrative structure, identifies key points, and knows how to chunk information logically across multiple slides.
  2. The image generator handles visual rendering — it receives precise, self-contained instructions without needing to understand the broader content context.

This pattern is reusable across Claude, Gemini, and ChatGPT, making it model-agnostic. You can slot in whichever LLM you prefer for the planning stage, then use any image generation backend for rendering.


Practical Use Cases for Developers and Automation Engineers

This technique isn't just useful for social media managers. Here are several high-value applications for technical audiences:

1. Automated Content Repurposing Pipelines

Feed in a technical blog post, documentation page, or research paper and automatically generate a ready-to-post infographic series. This is ideal for:

  • Developer advocacy teams who want to amplify technical content on social platforms
  • AI newsletter creators who want visual summaries of complex topics
  • SaaS marketing teams running content automation workflows
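As a minimal sketch of the ingestion step for such a pipeline, the snippet below fetches a page and crudely strips markup before the text is handed to the planning prompt. A production pipeline would use a proper article extractor rather than regex tag-stripping; this is only to show where ingestion slots in:

```python
import re
import urllib.request


def strip_html(html: str) -> str:
    """Very rough text extraction: drop script/style blocks, then all tags,
    then collapse whitespace. Good enough for a sketch, not for production."""
    html = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()


def fetch_article_text(url: str) -> str:
    """Fetch a page and return its plain text for the planning stage."""
    with urllib.request.urlopen(url) as resp:
        return strip_html(resp.read().decode("utf-8", "replace"))
```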

2. OpenClaw Skill Development

If you're building OpenClaw skills for AI automation platforms, this workflow maps cleanly to a multi-step skill:

skill: content_to_infographic_series
steps:
  - name: parse_input
    action: extract_text
    input: url_or_document

  - name: generate_slide_plan
    action: llm_call
    model: claude-3-5-sonnet
    prompt: xiaohongshu_decomposition_template

  - name: render_slides
    action: image_generation
    model: gemini_nano_banana_pro
    input: slide_prompts_from_previous_step

  - name: package_output
    action: export
    format: carousel_ready_images

3. Educational Content Creation

Educators and technical writers can convert lengthy tutorials or how-to guides into slide-by-slide visual explainers. The Xiaohongshu aesthetic happens to be extremely effective for step-by-step instructional content — each card naturally maps to one step in a process.


4. Rapid Prototype Iteration

Product teams can describe a feature, run it through this pipeline, and get a visual pitch deck-style infographic series in minutes — useful for internal stakeholder presentations or social proof content.


Tips for Getting the Best Results

Based on the technique shared by @dotey and community experimentation, here are best practices:

  • Feed focused content: The prompt works best with content that has 3–7 distinct key points. Extremely long or unfocused inputs may produce inconsistent slide structure.
  • Specify your color palette explicitly: Including color direction in your master prompt (e.g., "warm coral and cream tones" or "tech-blue gradient") dramatically improves visual consistency across slides.
  • Iterate on the first LLM output: Before sending prompts to the image generator, review and refine the LLM's slide plan. Small tweaks to slide titles or bullet points at this stage cost nothing and improve the final output significantly.
  • Use numbered outputs: Ask the LLM to output each image prompt in a clearly labeled, numbered block. This makes it trivial to automate the image generation step or batch-process prompts.
  • Test across models: Claude tends to excel at structured content decomposition. Gemini handles visually descriptive prompts well. ChatGPT with GPT-4o is a reliable all-rounder. Try a few and see which output you prefer for your specific content type.
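The "use numbered outputs" tip above is what makes batch automation trivial. Assuming the LLM labels each slide with a `Slide N: Title` line followed by one fenced image prompt (an assumption about the output shape your master prompt should enforce, not a guarantee), a parser might look like:

```python
import re


def parse_slide_plan(plan: str) -> list[dict]:
    """Split a numbered slide plan into structured records, assuming each
    slide starts with 'Slide N: Title' and contains one fenced image prompt."""
    pattern = re.compile(
        r"Slide\s+(\d+)[:.]\s*(.+?)\n(.*?)```[^\n]*\n(.*?)```",
        re.DOTALL,
    )
    slides = []
    for num, title, _body, prompt in pattern.findall(plan):
        slides.append({"number": int(num), "title": title.strip(),
                       "image_prompt": prompt.strip()})
    return slides
```

Each record can then be queued for image generation, retried individually on failure, or edited before rendering.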

Conclusion: A Simple Pattern with Powerful Implications

What @dotey has shared is more than a social media trick — it's a reproducible prompt engineering pattern that demonstrates how to chain LLMs and image generators together to produce structured, multi-asset visual content at scale.

For developers building AI automation tools, content pipelines, or OpenClaw skills, this technique represents exactly the kind of modular, composable AI workflow that scales gracefully. The input can be anything — a markdown document, a PDF, a URL, a voice transcript. The output is a polished, platform-native content series ready for distribution.

As image generation models continue to improve in their ability to render accurate text within images (a historically weak point), workflows like this will only become more powerful and more valuable.

Try the prompt yourself in Claude, Gemini, or ChatGPT today, and share your results. The full example session from @dotey is available at the original post: https://x.com/dotey/status/2010497572704501766


This post was originally published on ClawList.io — your developer resource hub for AI automation and OpenClaw skills. Follow us for more prompt engineering techniques, automation workflows, and LLM integration guides.


Tags: prompt engineering, Xiaohongshu, infographics, automation, AI content creation, Gemini, Claude, ChatGPT, image generation, OpenClaw, content repurposing
