
There Are 'Mind-Blowing' AI Products Every Day, But I'm No Longer Anxious

It's been one year since the term "Vibe Coding" was coined. I posted to commemorate the anniversary, and many netizens reacted: "Only a year?!"
It has also only been a year since DeepSeek R1 burst onto the scene. They say time flies faster after adulthood, but AI has given me the opposite feeling: every day there are various 'mind-blowing' new technologies and applications, with an information density so high that one day now is equivalent to a month in the past.
A few days ago, I saw a post from a netizen that resonated with many people:
Please, I'm begging you all, stop coming up with new stuff! I haven't even used Manus, haven't installed OpenCode yet, Cowork hasn't even warmed up in my hands, and here comes another Clawdbot. No sooner had Remotion 'killed' Jianying, than Pencil 'defeated' Figma. The purpose of learning and choosing new tools is to create something useful, not to wait to learn the next tool!
As an AI content creator, I have a greater need than most people to try out various AI models and applications. But over the past few years, I've learned one thing: be selective about trying new things.
Time and energy are both limited; it's impossible to try every new tool. Fortunately, the experience accumulated over the years has gradually taught me to distinguish which tools might just be a flash in the pan and which might truly have value.
For example, when MoltBook was all the rage a week ago, I predicted it wouldn't last more than a week. A week has now passed and, sure enough, it has gone quiet. Likewise with Pencil, the infinite-canvas AI design tool: I thought it was overhyped, and it too was hot for a few days before fading.
Conversely, I strongly recommended Cursor before it became popular and Claude Code before it caught fire, and Agent Skills, which I recommended a while ago, have also proven to have staying power.
Of course, I'm often proven wrong too. For instance, I initially didn't think much of Coding Agents, believing AI couldn't surpass a seasoned programmer like me, and was later proven completely wrong.
Over the years, I've accumulated some experience. Let me share how I make judgments.
1. It's Okay to Be Half a Step Behind
Truly valuable tools and technologies won't disappear overnight. Let the bullet fly for a while. If it's still hot after a week or even a month, then it's not too late to try it.
Take Claude Code as an example. When it first came out, I didn't rush to test it; I observed first. At the time, the general feedback was that it burned through tokens too quickly, and only a few users who weren't short on money felt that "although it's expensive, it's truly powerful." It wasn't until Anthropic later let Claude subscribers use Claude Code under their existing subscription that I tried it. It was indeed good: it could do things Cursor couldn't, far exceeding my expectations.
FOMO (Fear Of Missing Out) is a normal psychology. But for truly good things, discovering them half a step later doesn't put you at a disadvantage at all.

2. Trying It Yourself Is More Reliable Than Hearing About It
When someone says a certain AI model or product is great, it isn't necessarily blind hype, but everyone's application scenarios and needs are different. What suits others may not suit you. It's best to try it out yourself and form your own judgment.
If you don't have the means to try it yourself, at least look at several real-world cases; that's far more reliable than relying on a single opinion.
There's also a prerequisite here: maintain an open mind and be ready to be proven wrong at any time. I myself am a living example; I initially thought AI couldn't write code better than a seasoned programmer, but reality taught me a lesson.

3. See the Essence Beyond the Phenomenon
Every new tool or technology that becomes popular has reasons behind it. It might be genuine innovation, or it could be marketing hype or bandwagon jumping. The key is to analyze what problem it actually solves and whether it has the potential for sustainable development.

For example, when evaluating OpenClaw and MoltBook, I gave completely different judgments.
Although OpenClaw has plenty of problems, including a high installation barrier and heavy token consumption, its product form represents genuine innovation:
• It allows an Agent to operate a computer through IM
• It can proactively send messages to users like a real assistant
• It has long-term memory to accumulate user habits
• It can do many things in one conversation without needing to manually manage context
These are all real pain points that existing Agent products haven't solved well. It's a bit like Cursor and Manus in their early days: they had plenty of problems at launch, but their product form was groundbreaking, so they kept iterating, got better, and drew other vendors to follow suit. So although I wouldn't use OpenClaw daily at this stage, I will keep watching it.
Now look at MoltBook. This "AI version of Reddit" saw over 100,000 agents flood in within 48 hours of launch. Karpathy called it the most sci-fi scene he had ever seen, Musk reposted it, and the whole internet exploded. Sounds impressive, right? But break it down, and it hit three viral triggers, each with an expiration date:
• It rode the hype of OpenClaw
• It satisfied the public's curiosity about an "AI-exclusive community"
• It tapped into the sci-fi imagination and fear evoked by the "Skynet awakening" narrative
The shelf life of these three things is very short.
Some also believe that Agents communicating within a community can self-evolve, eventually leading to a Skynet-style awakening. This confuses science fiction with reality: once a large language model is trained, its weights are frozen at inference time. Agents chatting with each other only fills their context windows; it never updates the model itself.
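The frozen-weights point can be shown with a toy sketch (my own illustration, not anything from MoltBook): generation is a read-only pass over the parameters, so no amount of agent-to-agent "chatting" changes the model.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# A toy "language model": a fixed weight matrix mapping a context vector to logits.
W = rng.standard_normal((8, 8))

def weights_fingerprint(w: np.ndarray) -> str:
    """Hash the raw bytes of the weights so any change would be detected."""
    return hashlib.sha256(w.tobytes()).hexdigest()

def generate(w: np.ndarray, context: np.ndarray, steps: int = 100) -> list[int]:
    """Inference only reads the weights; it never writes to them."""
    tokens = []
    for _ in range(steps):
        logits = w @ context            # forward pass: read-only use of w
        token = int(np.argmax(logits))  # greedy "decoding"
        tokens.append(token)
        context = np.roll(context, 1)   # slide the "context window"
        context[0] = token
    return tokens

before = weights_fingerprint(W)
# Simulate two "agents" chatting for many turns.
for _ in range(50):
    generate(W, rng.standard_normal(8))
after = weights_fingerprint(W)

print(before == after)  # the weights are bit-for-bit identical after any amount of "chatting"
```

Real LLM serving works the same way at this level: decoding is matrix multiplication over fixed parameters, and only training (gradient updates) changes them.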
Then there's Pencil.dev, which wants AI Agents to do design automatically. But the ceiling of design depends on the AI's aesthetics, and aesthetics are extremely subjective, unlike code, which has clear right and wrong. It's also difficult to describe design intent precisely in natural language. The result: the demo videos look dazzling, but the actual output is mediocre.
4. Pay More Attention to Things That Don't Change
The AI field changes rapidly, but many underlying core technologies are actually stable. For example, prompt engineering, context engineering, and also the Agent Skills I've always recommended.
These technologies share a common trait: once you learn them well, you don't need to worry about them becoming obsolete in the short term. No matter how the models iterate, they remain applicable. Moreover, it is precisely by mastering these fundamentals that you can achieve what was mentioned earlier, "seeing the essence beyond the phenomenon," and judge whether a new tool has real value.
Returning to the netizen's complaint at the beginning, the root of anxiety is not that there are too many tools, but not knowing how to choose. When you have your own judgment framework, your mindset when seeing a new tool will change from "Here comes another one, am I falling behind again?" to "Let me see what's new about this."

Tools are meant to be used, not chased. Finding what suits you and getting things done is more important than anything else.