How to Save Significantly on API Costs When Using OpenClawd on Telegram
TL;DR: If you're using the OpenClawd bot on Telegram, typing /new before starting a fresh conversation can meaningfully reduce your API costs. Here's why it matters and how to use it effectively.
Introduction: The Hidden Cost of Conversational Context
If you've been experimenting with AI assistants through Telegram bots powered by large language models, you've likely noticed that costs can add up faster than expected. The reason isn't always obvious: it's not just what you say to the model — it's everything the model remembers.
OpenClawd is an OpenClaw-compatible AI assistant available on Telegram. Like most LLM-backed chatbots, it maintains conversational context across messages in a session. This is useful for coherent multi-turn dialogue, but it comes with a cost that many users overlook.
This post explains exactly why context accumulation drives up API spend, and how one simple command — /new — can keep your usage lean and your wallet intact.
Why Conversational Context Is Expensive
Every time you send a message to an LLM-backed bot, the API doesn't just receive your latest message. It receives the entire conversation history — every prior message, every response — concatenated into a single prompt. This is how models maintain context and produce coherent replies.
Here's a simplified view of what gets sent to the API on message #5 of a conversation:
[System prompt]
[User message 1]
[Assistant response 1]
[User message 2]
[Assistant response 2]
[User message 3]
[Assistant response 3]
[User message 4]
[Assistant response 4]
[User message 5] ← your actual new question
LLM APIs price by token consumption — both input and output tokens count. As a conversation grows longer, the input token count grows with every single message, even if your latest question is just a few words. By message 10 or 20 of a long session, you may be paying for thousands of tokens of context that are completely irrelevant to what you're currently asking.
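To make that growth concrete, here's a back-of-envelope sketch in Python. The token counts (500 for the system prompt, 250 per turn) are illustrative assumptions for the sake of the arithmetic, not OpenClawd's actual figures:

```python
# Rough sketch of how input tokens accumulate across a conversation.
# Both constants are assumed averages, purely for illustration.

SYSTEM_PROMPT_TOKENS = 500   # assumed size of the system prompt
TOKENS_PER_TURN = 250        # assumed average for one user or assistant turn

def input_tokens_for_message(n):
    """Input tokens sent to the API on the n-th user message (1-indexed).

    The prompt contains the system prompt, all 2*(n-1) earlier turns
    (one user message plus one assistant reply per completed exchange),
    and the new user message itself.
    """
    prior_turns = 2 * (n - 1)
    return SYSTEM_PROMPT_TOKENS + (prior_turns + 1) * TOKENS_PER_TURN

for n in (1, 5, 10, 20):
    print(f"message {n}: ~{input_tokens_for_message(n)} input tokens")
```

Even with these modest per-turn sizes, the input bill for message 20 is more than ten times that of message 1, before the new question contributes a single extra word.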
A Concrete Example
Suppose you had a 30-message debugging session earlier in the day about a Python script. Later, you want to ask a quick, unrelated question about Markdown formatting. If you're still in the same Telegram session, OpenClawd will dutifully include all 30 prior messages in the API call — none of which help answer your Markdown question. You're paying for noise.
This is not a bug. It's by design: context persistence makes multi-step reasoning possible. But for users switching tasks or starting fresh topics, it's unnecessary overhead.
The /new Command: What It Does and When to Use It
The /new command tells OpenClawd to clear the current conversation history and start a fresh session. When you send /new, the bot discards the accumulated context. Your next message goes to the API with only the system prompt — no prior history.
/new
That's it. One command, and you're back to a clean slate.
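OpenClawd's internals aren't public, but a /new-style reset in any LLM-backed Telegram bot typically boils down to clearing a per-chat message history. A minimal sketch with hypothetical names throughout (assistant replies are omitted from the history for brevity):

```python
# Generic sketch of per-chat session reset, as a /new-style command
# might work in any LLM-backed Telegram bot. Storage layout and
# function names here are illustrative, not OpenClawd's actual code.

sessions = {}  # chat_id -> list of {"role": ..., "content": ...} turns

def handle_message(chat_id, text):
    if text.strip() == "/new":
        sessions[chat_id] = []  # discard all accumulated context
        return "Started a new session."
    history = sessions.setdefault(chat_id, [])
    history.append({"role": "user", "content": text})
    # A real bot would send the system prompt plus `history` to the API
    # and append the assistant's reply; after /new, that history holds
    # only the single new message.
    return f"(would send {len(history)} history messages to the API)"
```

The key point is simply that the per-chat history is the only thing the reset touches: nothing about the bot, the model, or your account changes.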
When to Use /new
Use /new whenever you're:
- Switching topics entirely. If you were working through a data pipeline problem and now want help writing a regex, there's no reason to carry the previous context forward.
- Starting a new task or project. Beginning a fresh coding session? Clear the slate.
- After a long debugging session. Once a problem is resolved, the conversation history is no longer useful for future questions.
- Testing prompts. If you're iterating on system prompts or testing model behavior, always start clean to avoid prior turns contaminating your results.
- When responses start feeling sluggish or off-topic. Long contexts can sometimes degrade response quality as the model tries to reconcile a large history with a new question.
When Not to Use /new
Don't use /new in the middle of a multi-step task where continuity matters:
- Iterative code refactoring across multiple files
- A long research session where earlier answers inform later questions
- Any workflow where the model needs to remember decisions made earlier in the conversation
Context has genuine value when you're working on something coherent and continuous. The goal is to be deliberate about when you keep it and when you discard it.
Practical Workflow: Integrating /new into Your Daily Usage
Here's a simple discipline that keeps costs predictable without sacrificing the quality of your interactions:
1. Treat each distinct task as its own session.
Before starting a new task, send /new. This mirrors how you'd open a new document or terminal tab — it's a clean boundary between work items.
2. Use context intentionally for multi-step work.
For anything requiring continuity — a long debugging session, a multi-part writing task, iterative prompt engineering — let the context grow. That's what it's there for.
3. End sessions explicitly.
When you're done with a task, send /new as a closing action. This way, if you return to Telegram later for something unrelated, you're not starting from a stale, expensive context.
A minimal daily pattern might look like this:
/new
→ Ask your question or start your task
[work through the task, multiple exchanges]
/new
→ Start next task
This alone can reduce your token consumption substantially over a week of regular use, particularly if you work across multiple domains or projects in a single day.
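The size of that saving is easy to estimate. Assuming ~250 tokens per turn (an illustrative figure, as before), running two 10-message tasks in one continuous session costs roughly twice the input tokens of resetting with /new between them:

```python
TOKENS_PER_TURN = 250  # assumed average size of one user or assistant turn

def total_input_tokens(n_messages):
    """Cumulative input tokens for one continuous n-message session.

    The i-th user message resends all 2*(i-1) earlier turns plus itself.
    """
    return sum((2 * (i - 1) + 1) * TOKENS_PER_TURN
               for i in range(1, n_messages + 1))

continuous = total_input_tokens(20)   # two tasks back to back, no reset
split = 2 * total_input_tokens(10)    # /new between the two tasks
print(continuous, split)  # → 100000 50000
```

Under these assumptions the split workflow halves the input-token bill, and the gap widens as sessions get longer, since cumulative cost grows quadratically with session length.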
Why This Matters More Than You Might Think
It's easy to underestimate how fast context costs scale. Here's a rough illustration using generic LLM pricing as a reference:
| Session Length | Approx. Input Tokens per Message | Cost Multiplier |
|---|---|---|
| Message 1 | ~500 tokens | 1x |
| Message 10 | ~5,000 tokens | 10x |
| Message 25 | ~12,500 tokens | 25x |
By message 25, every new question you ask costs 25 times more in input tokens than your first message did — even if the question itself is identical. For developers running multiple sessions daily, this compounds quickly.
The /new command resets that multiplier back to 1x.
Conclusion
Using AI bots on Telegram is convenient, but convenience can quietly become expensive if you're not managing conversational context deliberately. OpenClawd's /new command gives you direct control over session boundaries — use it to strip away irrelevant history and send only what the model actually needs.
The habit is simple: new task, new session. Type /new, then proceed. Over time, this single practice can represent a meaningful reduction in your API spend without any loss in the quality of your interactions.
If you found this useful, the original tip comes from @huangyun_122 on X. Follow them for more practical AI workflow insights.
Published on ClawList.io — your developer resource hub for AI automation and OpenClaw skills.