[Twitter threads] My Impressions After Using OpenClaw for a Few Days


Chainfeeds Summary:

Every time you burn tokens and time on vibe coding, you're playing a slot machine: you're betting it will produce a perfect piece of code, and betting it won't be a pile of junk that can never be fixed.

Article source:

https://x.com/0xTodd/article/2020504904838897767

Article Author:

0xTodd


Opinion:

0xTodd: I started with Claude Sonnet 4.5, and the experience was genuinely one of tokens draining like water from a leaky bucket. A slightly longer passage, or a conversation I really got into, and my limit was gone in no time. The feeling was disjointed: the model was unquestionably intelligent, responsive, and produced high-quality output, but every generation felt like burning money. Going from luxury back to frugality is hard, so I tried switching to a cheaper model, Claude 3.5, and couldn't adjust: it felt less intelligent, and the output quality noticeably declined. That left me in an awkward spot. I knew the expensive models were better but couldn't bear to use them all the time; the cheap ones were usable but always felt lacking. Gradually I arrived at a very realistic conclusion: you get what you pay for, and sometimes not even that. Genuinely good, stable, intelligent models are expensive, and justifiably so. It mirrors real life: once you're used to something good, it's hard to go back to something mediocre. It became a constant, almost daily weighing: when to use the expensive model, and when to hold off with the cheap one.

The price gap also gradually changed my mindset when using AI. When I wanted to talk an issue through, I found myself reflexively opening my monthly Gemini or GPT subscription, on the logic of "I've already paid, might as well use it," and I was less willing to open OpenClaw, which bills per use, even when it might have been the better fit; the cost was always on my mind. Switching back and forth between models and platforms is genuinely disruptive to the experience, and frankly annoying. Sometimes I even slipped back into that old "overly frugal" mode, like needing to turn off every light and the air conditioning at home before going out for dinner, just to feel at ease.

At the same time, I gradually saw one reason people like OpenClaw: it talks more like a human and treats you more like one. GPT and Gemini, unless deliberately tuned, tend to perform being an AI, with a tone that is too standard and official. OpenClaw seems to have a persona built into its design, and that naturalness appeals to people who don't want the hassle of tweaking prompts.

As for "resolving token anxiety," many people simply switch to domestic models such as DeepSeek, which are abundant and readily available. They may still trail Claude Opus 4.5 and 4.6 in reasoning ability and depth, but a roughly 20-fold price difference makes the cheaper option irresistible for many.

My most realistic need now is a "hybrid" setup: a cheaper model for everyday chat, research, and simple tasks, with an automatic switch to the strongest model, such as Claude Opus 4.5 or 4.6, once I enter high-intensity scenarios like coding and complex reasoning, so the output is genuinely usable rather than toy-like. The right tool for the right task is the most reasonable combination (a minimal routing sketch appears at the end of this commentary).

In practice, though, context cost has become a new source of anxiety. I'm fairly sensitive to context length; I know that the longer the context, the more compute each turn consumes, so I don't like mixing unrelated things into a single conversation. If the tasks are unrelated, I'd rather open a new window.
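To make that concrete: with a stateless chat API, every turn resends the entire history, so the input-token bill of one long thread grows roughly quadratically with turn count, while opening fresh windows grows only linearly. Here is a back-of-the-envelope sketch in Python; the price and turn size are purely illustrative placeholders, not OpenClaw's actual billing:

```python
# Illustration of why accumulated context burns money: in one long
# thread, turn k resends all k prior turns as input tokens.
# All numbers are hypothetical placeholders, not real prices.

PRICE_PER_INPUT_TOKEN = 5 / 1_000_000  # hypothetical: $5 per 1M input tokens
TOKENS_PER_TURN = 2_000                # hypothetical average turn size

def one_long_thread_cost(turns: int) -> float:
    """Cost when history accumulates: turn k resends k turns of context."""
    return sum(k * TOKENS_PER_TURN * PRICE_PER_INPUT_TOKEN
               for k in range(1, turns + 1))

def fresh_windows_cost(turns: int) -> float:
    """Cost when each unrelated task starts a new, empty conversation."""
    return turns * TOKENS_PER_TURN * PRICE_PER_INPUT_TOKEN

for n in (10, 50, 100):
    print(f"{n:>3} turns: one thread ${one_long_thread_cost(n):6.2f}  "
          f"vs fresh windows ${fresh_windows_cost(n):5.2f}")
```

At 100 turns under these toy numbers, the single thread costs about $50 against $1 for fresh windows, which is the gap the author is reacting to.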
OpenClaw, however, sometimes won't let you clear the context, and watching it keep processing 100K or 200K tokens of history is frustrating, especially on an expensive model, where every round feels like burning money. On top of that, many features require external APIs, such as search, coin-price lookups, and data-source connections, so it turns into a near-continuous stream of subscriptions and purchases. Security-wise, it's not as scary as I imagined: there are several built-in protection mechanisms, and I don't store important assets or passwords there. Gradually, I came to see each generation as a micro-investment: you're betting it will produce a perfect piece of code, not a never-ending pile of unfinished work.
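For the "hybrid" setup described above, here is a minimal routing sketch. It assumes an OpenAI-style chat client; the model names, the `client` object, and the keyword heuristic are all illustrative assumptions, not OpenClaw's actual API or behavior:

```python
# Minimal sketch of cheap-by-default routing with escalation to a
# strong model for high-intensity work. Model names and the keyword
# list are placeholders; swap in whatever your provider exposes.

CHEAP_MODEL = "cheap-chat-model"         # placeholder, e.g. a DeepSeek-class model
STRONG_MODEL = "strong-reasoning-model"  # placeholder, e.g. an Opus-class model

HEAVY_HINTS = ("code", "refactor", "debug", "prove", "derive", "architecture")

def pick_model(prompt: str) -> str:
    """Route coding/complex-reasoning prompts to the strong model,
    everything else to the cheap one."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in HEAVY_HINTS):
        return STRONG_MODEL
    return CHEAP_MODEL

def ask(client, prompt: str) -> str:
    """Send the prompt to whichever model the router picked."""
    resp = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

A keyword heuristic is the crudest possible router; a natural refinement is to have the cheap model itself classify each task before dispatching, trading a small fixed cost per request for better routing.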

Content source:

https://chainfeeds.substack.com
