Is AI Just an “Ultimate Shockwave” of Hype?
Every few months, a new batch of AI buzzwords explodes—then fades into oblivion. All this does is feed brain-dead AI-hype influencers and financiers chasing the next hot narrative. Where’s the actual, measurable value? Pure shockwave storytelling.
Where did last year’s hot AI trends go? The past two years gave us workflow, agents, RAG, skills, MCP, subagents, opencode, OpenClaw, etc. Strip away the packaging—isn’t it all just prompt optimization and context management? Just automation scripts wrapped around chatbots that burn 10x more tokens without changing core capabilities.
It’s all “brute force scaling”—pile on compute until you hit limits, distill all human knowledge, then what? Can quantity actually create quality breakthroughs?
Just Polishing the Edges
The rest is endless tweaking of minor features:
- Now it can browse the web
- CLI command-line support
- IDE plugins
- Desktop apps with visuals
- Multimodal (images, voice)
- Embeddable like OpenClaw
Every tiny feature addition triggers hype storms about “self-improving AI.” Massive token consumption for nested context = “autonomous iteration”? I’m dizzy from the hype cycles.
Where’s the Real Business Value?
AI’s only proven revenue model: subscriptions + API fees. How’s this different from WPS Office or Tencent Video memberships?
Beyond C2C consumer spending, what B2B actually works? Can it create ecosystem effects like the internet did?
Until hallucinations are solved, will enterprises bet the farm? Taobao, Xiaohongshu, Douyin slapping together RAG search/AI chatbots—does this actually improve user experience? Do consumers even notice? Does it cut costs? (Token spend often exceeds human agents!) Pure AI box-ticking.
Two Years of “AI Boom” = Shockwave Noise
Nothing since matches the jaw-dropping firsts: GPT drafting a semester PPT in 2023, Claude writing working code in 2024. AI’s biggest measurable impact so far? Helping undergrads and grad students with finals and theses.
AI bubble? Inevitable. Trillions invested need mass workforce replacement to break even. Current AI replaces… what exactly?
Reality Check #1: Beyond niche domains (coding, animation), humans interact with the physical world. AI’s stuck in digital sandboxes with abysmal interaction quality at insane costs. Don’t give me “embodied intelligence”—unachievable in 10 years, cash burn in 5.
Reality Check #2: Asian/African/Middle Eastern labor costs less than tokens + electricity. A steamed bun + basic education beats GPU farms.
My AI Tech Stack Understanding (Read Before Debating)
Functions → ML → NLP → LLMs evolutionary path:
- Functions: cat eats 1 kg of shit/day × 10 days = 10 kg
- ML: cat/dog image classification, predicting tomorrow’s weight from today’s data
- NLP: sentiment analysis, text classification, keyword extraction. My project: enterprise carbon reports → extract specific emissions data
- LLMs: anything in, anything out. Transformer-based text prediction
LLMs = highest-probability next token:
- “I love you” → “wife”
- Question → “most likely answer sequence”
Prompt = question, Context = conversation memory. Multi-turn = accumulating context.
Agent/RAG/Workflow Reality Check
Simplest Agent: Script → scrape Toutiao news → add to context → summarize
RAG: “How to return?” → vector DB search → inject traditional FAQ → generate response
Function Calling: “Scrape news” → JSON {search(topic="news")}
MCP: Remote function library, no hardcoding in agent
LangChain: Rigid frameworks
Workflow: Visual drag-drop
Skills: skill.md docs → inject into context
Agent: Natural language goals → AI plans execution

More flexible = less reliable:
LangChain: Predictable, limited scope
Agent: Universal, token sink + hallucination central

Real Coding Experience
Agent “software development”: AI writes code → copy → error → ask again → copy → debug → stitch together…
Semi-pro reality: You still need architecture understanding + debug skills. Agents help experts save time, not vibecoding magic.
Market agent flavors:
- Code-specific (Codex)
- General purpose (OpenClaw)
- CLI, IDE plugins, desktop apps, chat integrations
Core pattern: Nested LLM calls + script execution. “Self-improvement” = context explosion via token burn.
30% UX gain, 3000% token cost. JARVIS? Maybe in my next life.