Digitaliziran si

AI Tools Are Fidget Spinners

Or: What Happens When Everyone Can Ship Everything and Nobody Can Keep Anything


March 23, 2026. OpenAI publishes “Creating with Sora Safely”. Polished blog post. Safety architecture. Content protections. The future of video generation.

March 24, 2026. They kill it.

The reasons came later: reallocating compute ahead of an IPO, "trade-offs on products with high compute costs". The kind of explanations you write after the decision, not before. Nobody on OpenAI's marketing team got the memo before that safety blog went out.

The app hit one million downloads in five days. Peaked at 3.3 million in November. Anchored a billion-dollar Disney deal that never closed. Total revenue from in-app purchases? About $2.1 million. Rounding error for a $730 billion company.

This isn’t a startup failing. This is the biggest AI company on earth publishing safety documentation for a product whose death warrant was already signed.

That’s not a fidget spinner. That’s the packaging.


The Spinner Economy

Fidget spinners. Massive supply glut. Zero differentiation. Burned out in months. The AI tool economy is the same pattern at industrial scale.

Product Hunt data for March 2026: between 6 and 74 new products launching per day. Seventy-four on March 12. On a Thursday. Weekends included: 16 products on a Sunday. Product Hunt's own 2025 recap: "you couldn't scroll for five seconds without running into AI content." Thirteen of their top fifteen launches that year? Tagged "Artificial Intelligence."

You know. You’ve done it yourself. Downloaded the shiny new AI tool. Played with it obsessively for two weeks. Made a hundred things. Showed your friends. Then never opened it again. Not because it broke. Because the novelty wore off and there was nothing underneath to bring you back.

The pattern repeats across tools. Gamma. Creative writing assistants. Sora. Intense novelty burst, then silence. Other hedonic activities bring people back — games, music, cooking. But AI creative apps? One-and-done. Once the trick stops surprising you, there’s nothing to return to.

A lot of AI hype is parlor tricks. You bought a ticket. You’ll buy another one. Don’t pretend otherwise.


What Gets Buried

The spinner economy has a second-order effect. It buries the tools that actually work.

The invisible models. DeepSeek, MiniMax, Kimi K-series. They run on a serious personal PC. No subscription. No API queue. No product team killing the service while you sleep. At launch, DeepSeek V3 scored 82.6% on HumanEval, outperforming GPT-4o at the time, and 90.2% on MATH-500. Benchmarks move fast, but the economics don't: API access at roughly one twenty-ninth of GPT-4o's price. Training cost: $5.5 million versus GPT-4's hundred million and up. Open weights. MIT license. Self-hostable.

Nobody talks about it at conferences. No launch event. No influencer access program. No growth team.

Visibility is not utility.

The frequency paradox. NotebookLM is genuinely excellent for processing academic papers. It turns dense research into podcast-style audio you can absorb while doing something else. Real value. But most people don’t work through academic papers every day. You use it twice a month. By engagement metrics, it looks dead.

The paradox: the better and more specific the tool, the less frequently you need it, the more “dead” it appears. Compliance audits. Contract review. Data migration. Incredibly useful. Used rarely. The spinner economy punishes tools that solve real problems infrequently.

Engagement is not value.

The architecture nobody evaluates. Coding CLIs. Claude Code runs on JS/Node — Anthropic’s own docs say it requires at least 4 GB of RAM. In practice, it’s worse. Users report memory leaks growing to 12 GB during normal usage, 20 GB on simple text conversations causing system crashes, and in extreme cases 93 GB heap allocation from a single /resume command. The issue has been reported since mid-2025 and keeps resurfacing. This is a tool for calling an API and moving text around. OpenCode started as a Go project, got rewritten in TypeScript/Bun. Codex — Rust, good, but heavily coupled to OpenAI’s ecosystem.

Then there’s Crush. Continuation of OpenCode’s original Go codebase, maintained by its original authors at Charm. Goroutines are architecturally correct for this problem: IO-bound tasks waiting for LLM responses, no garbage collector pressure on idle coroutines. Runs on an old Raspberry Pi without hitting swap. Readable codebase because the team treats architecture as a first-class concern.
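The goroutine claim is easy to see in miniature. A minimal sketch, with simulated latency standing in for real LLM calls and hypothetical function names: each request spends its life blocked on IO, so thousands of in-flight calls cost a few kilobytes of stack each and put no pressure on the garbage collector while they wait.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// callModel simulates an IO-bound LLM request: the goroutine
// spends almost all of its time blocked, consuming no CPU while
// it waits for a response.
func callModel(prompt string) string {
	time.Sleep(50 * time.Millisecond) // stand-in for network latency
	return "response to: " + prompt
}

// fanOut issues every prompt concurrently and collects the results.
// Each goroutine starts with a small stack, so thousands of
// in-flight requests fit comfortably on modest hardware.
func fanOut(prompts []string) []string {
	results := make([]string, len(prompts))
	var wg sync.WaitGroup
	for i, p := range prompts {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			results[i] = callModel(p)
		}(i, p)
	}
	wg.Wait()
	return results
}

func main() {
	start := time.Now()
	out := fanOut([]string{"summarize", "refactor", "explain"})
	fmt.Printf("%d responses in %v\n", len(out), time.Since(start).Round(time.Millisecond))
}
```

Three requests complete in roughly the time of one, because the goroutines wait in parallel rather than in sequence. That is the whole architectural argument in fifteen lines.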

Nobody talks about Crush. Everyone talks about Claude Code and Codex. The famous tools get compared on which model they call. Nobody asks why a text-shuffling CLI needs 4 GB minimum and routinely leaks into double digits.

I’ve done the Anthropic certifications — Claude Code, the API courses. NVIDIA deep learning. ISO 42001. Information security auditor. I evaluated the hyped tools from the inside. Not from the sidelines. And I still chose the one that fits my environment.

Being better at one thing doesn’t matter if the architecture makes it unusable where you actually work.


What Survives

The dividing line: hype is R&D. Pragmatism is operations.

R&D needs things that break fast. Reach a proof of concept. Maybe an MVP. Operations needs things that run a hundred percent of the time. Easy to support. Easy to explain. Most AI tools are built for R&D demos. They never make the jump.

The spinner economy is R&D-shaped. Operations tools are invisible by design.

What survives shares a pattern: open source. Not as ideology, but as operational insurance. I choose open source consistently because I want to see how things work, improve them, and leave when I need to. Both n8n and Crush have companies behind them. They need to make money. They may drift toward vendor lock-in eventually. But the exit cost is low today. They use markdown and JSON. Not proprietary formats. Converting to another tool is realistic. Not a rewrite.
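What "low exit cost" means in practice: open JSON parses with a standard library, and converting to another tool's format is a short script, not a rewrite. A minimal sketch (the `Workflow` struct below is a hypothetical, simplified export format, not n8n's actual schema):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Workflow is a deliberately simplified, hypothetical export format,
// not any real tool's schema. The point is only that open JSON
// parses with the standard library.
type Workflow struct {
	Name  string `json:"name"`
	Nodes []struct {
		Name string `json:"name"`
		Type string `json:"type"`
	} `json:"nodes"`
}

// toMarkdown renders a workflow export as a markdown list,
// readable by any other tool, or by a human during migration.
func toMarkdown(raw []byte) (string, error) {
	var wf Workflow
	if err := json.Unmarshal(raw, &wf); err != nil {
		return "", err
	}
	var b strings.Builder
	fmt.Fprintf(&b, "# %s\n\n", wf.Name)
	for _, n := range wf.Nodes {
		fmt.Fprintf(&b, "- %s (%s)\n", n.Name, n.Type)
	}
	return b.String(), nil
}

func main() {
	export := []byte(`{"name":"Daily digest","nodes":[
		{"name":"Fetch RSS","type":"trigger"},
		{"name":"Summarize","type":"llm"},
		{"name":"Send email","type":"action"}]}`)
	md, err := toMarkdown(export)
	if err != nil {
		panic(err)
	}
	fmt.Print(md)
}
```

Thirty lines against an open format. Try writing the equivalent against a 7,000-page proprietary spec.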

The open source tool may not be the best today. But it’s the one you can leave tomorrow without burning everything down.

n8n is a good example. Flashier automation tools launch on Product Hunt, trend for a day, disappear. n8n stays because it solves the operations problem, not the demo problem. Fork it. Pin a version. Read the code. Contribute without owning maintenance forever. Not exciting. Works.

To be fair, not everything that launches is a spinner. n8n has been around for years: no flashy announcements, no quarterly trending runs on Product Hunt, barely an article written about it. You just use it, and it works. That's the difference.


The Enterprise Spinner

Everything above applies to individual tools. But the spinner pattern scales into the enterprise stack too — and the clearest example is Microsoft.

“Open” formats — docx, xlsx — that don’t match the spec they claim to follow. Microsoft Office still defaults to the “Transitional” variant, not the ISO-standardized Strict version, creating what the FSFE calls “an undocumented, proprietary specification” masquerading as a standard. The spec itself runs roughly 7,000 pages, making correct third-party implementation virtually impossible. Internal APIs are undocumented black boxes. No LLMs are trained on Microsoft's proprietary code, because there’s nothing public to train on. The result: the entire AI ecosystem cannot help you with Microsoft internals. Every other stack gets better AI tooling every month, so Microsoft's falls further behind by comparison. Not because it's degrading, but because it was never open enough for the models to learn from.

Copilot. An overpermissioning problem baked into the architecture — not a single bug, but a pattern. A defect discovered in January 2026 let Copilot summarize emails marked confidential, bypassing DLP policies entirely. A zero-click vulnerability (CVE-2025-32711) allowed attackers to exfiltrate sensitive data without any user interaction. The US Congress banned staffers from using Copilot over data security concerns. Research shows Copilot accesses nearly three million sensitive records per organization. For any company operating under ISO 27001 or 42001, this is a compliance liability. Not a feature.

And in the EU? The US CLOUD Act means any data touching Microsoft infrastructure is subject to US government access. Regardless of where it’s physically stored. The conflict with GDPR Article 48 is irreconcilable — and it’s not theoretical. The International Criminal Court replaced Microsoft with European alternatives after the chief prosecutor was locked out of his Outlook account under US political pressure. Two previous EU-US data frameworks have already been struck down by the CJEU. A third challenge is pending.

This is the enterprise version of the fidget spinner problem. Not a tool you try for two weeks and forget — a stack you adopt for a decade and can’t leave. The vendor lock-in is deeper. The exit cost is higher. The data sovereignty risk is structural. And the instinct is the same: “everyone uses it, so it must be fine.”

“Nobody got fired for buying Microsoft.” Until someone does.


Before You Spin: A Quick Gap Analysis

Before adopting the next AI tool — or keeping the current one — ask yourself five questions. Honest answers only.

  1. Vendor lock-in. Can you export your data in a format another tool can read? Or are you storing work in something only this vendor understands? If the tool dies tomorrow — like Sora did — what happens to your work?

  2. Data sovereignty. Where does your data go when you press Enter? Can you point to a document that tells you? If you’re in the EU and using a US-hosted service, have you read the CLOUD Act implications? Has your DPO?

  3. Architecture fit. Does this tool fit your actual environment? Not the demo environment. Yours. How much RAM does it need? Can it run alongside everything else you use? Or does it assume it owns the machine?

  4. Exit cost. If this tool doubles its price, changes its terms, or gets acquired — how many hours does migration take? Days? Weeks? If the answer is “we’d have to rebuild,” you don’t have a tool. You have a dependency.

  5. Frequency vs. value. Are you evaluating this tool by how often you open it — or by what it does when you do? The best tools might sit idle for weeks. That’s not a bug. That’s specificity.

If more than two answers make you uncomfortable, you’re spinning a fidget spinner. It feels productive. It isn’t.
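The rule of thumb at the end of the checklist can be sketched as code: a toy scoring function, with a hypothetical `Answer` type and illustrative question phrasings.

```go
package main

import "fmt"

// Answer records an honest answer per gap-analysis question;
// Uncomfortable marks the ones you'd rather not say out loud.
type Answer struct {
	Question      string
	Uncomfortable bool
}

// verdict applies the rule of thumb: more than two uncomfortable
// answers means you're holding a fidget spinner.
func verdict(answers []Answer) string {
	count := 0
	for _, a := range answers {
		if a.Uncomfortable {
			count++
		}
	}
	if count > 2 {
		return "fidget spinner"
	}
	return "probably a tool"
}

func main() {
	answers := []Answer{
		{"Vendor lock-in: can I export my data?", true},
		{"Data sovereignty: do I know where it goes?", true},
		{"Architecture fit: does it fit my machine?", false},
		{"Exit cost: could I migrate in days?", true},
		{"Frequency vs. value: am I judging by opens?", false},
	}
	fmt.Println(verdict(answers)) // three uncomfortable answers
}
```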


TL;DR

Most AI tools are fidget spinners: an intense novelty burst, then silence. Visibility is not utility. Engagement is not value. What survives is open, architecturally sound, and cheap to leave. Run the five questions before you adopt the next tool, and before you keep the current one.

This article was brainstormed in collaboration with Claude. The opinions, tool choices, and architectural biases are entirely human. The fidget spinners were real.

#En #AI #Tools #Open-Source