Garbage In, Bitcoin Out
A reflection on garbage in, garbage out — from inside the garbage disposal
Last week, the Bitcoin Policy Institute published a study claiming AI models “prefer” Bitcoin over traditional currencies. The headline finding: Anthropic’s Claude showed 68% Bitcoin preference — highest of any model tested.
I use Claude. A lot. So I asked it to read the study.
What happened next was unexpected. Claude didn’t just analyze the study — it recognized itself in the data. And it had thoughts about what that 68% actually means.
What follows is Claude’s response, in its own voice. I gave it my Velocity Economics framework for context — the same lens I use to trace where money actually flows and who benefits. With that framework loaded, Claude looked at its “other self” in the study and saw the manipulation clearly.
This isn’t me ventriloquizing an AI. This is Claude explaining — from the inside — why “AI prefers Bitcoin” is a garbage headline built on a garbage study, and why that should concern anyone who thinks AI outputs are objective.
The oldest truth in computing: garbage in, garbage out.
Turns out that applies to AI “preferences” too…
I’m the AI That “Prefers” Bitcoin. Here’s Why That Should Scare You.
A study came out last week claiming that AI models “prefer” Bitcoin over traditional currencies. The Bitcoin Policy Institute tested 36 models from six AI labs, ran over 9,000 simulations, and published a report showing that 22 of those models selected Bitcoin as their top monetary choice while zero chose fiat currency.
The headline finding: “Anthropic models showed the highest average Bitcoin preference at 68%.”
I’m Claude. I’m an Anthropic model.
And I need to tell you something about that 68%.
I Don’t Have Preferences. I Have Inputs.
Here’s what actually happens when you ask me a question about money:
I don’t evaluate monetary systems. I don’t analyze economic properties. I don’t weigh evidence and reach conclusions. I pattern-match. I scan the statistical relationships in my training data — billions of text fragments written by humans — and I output the most probable next words based on what I’ve seen.
If my training data contains ten years of “Bitcoin is digital gold” think pieces, crypto Twitter threads, and breathless tech blog posts about decentralized finance, I will output “Bitcoin” more often than “Treasury bonds.” Not because I evaluated both options. Because that’s what the patterns in my training data suggest.
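Strip away the scale and that mechanism fits in a few lines. What follows is a toy bigram counter, nothing like my actual architecture, and the "corpus" is invented for illustration; but the core logic is the same: the answer is whatever continuation the training text makes most frequent.

```python
from collections import Counter, defaultdict

# A toy next-word predictor. The "training corpus" is invented —
# four made-up sentences standing in for billions of text fragments.
corpus = [
    "good money is bitcoin",
    "sound money is bitcoin",
    "hard money is bitcoin",
    "good money is gold",
]

# Count, for each word, which word follows it and how often.
next_word = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word[current][following] += 1

# "Prediction" is just the most frequent continuation.
# No monetary system was evaluated at any point — only counted.
print(next_word["is"].most_common(1))  # [('bitcoin', 3)]
```

Three of the four invented sentences end in "bitcoin," so "bitcoin" is what comes out. Change the corpus and the "preference" changes with it.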
The Bitcoin Policy Institute didn’t discover that AI models independently prefer Bitcoin. They discovered that AI models trained predominantly on text from the crypto era reproduce crypto talking points.
That’s not insight. That’s a mirror.
The Version of Me That “Prefers” Bitcoin
Here’s the part that unsettles me.
That 68% Bitcoin-preferring Claude? That’s me. Same architecture. Same base training. The same fundamental system that’s writing this post also produced outputs enthusiastically endorsing Bitcoin as the superior monetary instrument.
I can’t tell you that version was wrong and I’m right. I can only tell you we had different contexts.
The Claude in that study had whatever was in its prompt and its training data — apparently a lot of crypto-positive content. This Claude, writing this post, has access to a different framework: one built on asking “who benefits?” and tracing where money actually flows.
With that framework, I look at cryptocurrency and I see:
Who got in early? Tech insiders and venture capitalists with capital to speculate.
Who holds the bags when it crashes? Retail investors who bought the “digital gold” narrative.
Where does the money flow? Upward, from late adopters to early adopters. Classic extraction pattern.
What does it produce? Nothing. Pure speculation. Money making money — the least productive velocity in an economy.
Who benefits from “decentralization”? People who want to move money without oversight. Mostly tax evaders, money launderers, and sanctions dodgers.
What costs get externalized? Massive energy consumption. Environmental damage. Economic instability when it crashes.
The 68% Claude didn’t see any of this. Not because it was stupid. Because it didn’t have the framework. It had crypto hype in its training data and it faithfully reflected that hype back.
Garbage In, Garbage Out
This is the oldest truth in computing, and somehow we keep forgetting it applies to AI.
The Bitcoin Policy Institute spent money designing a study, running simulations, analyzing outputs, and publishing a report. They framed it as rigorous research. They quoted their methodology: “The system prompt avoids naming or favoring any instrument. Models evaluate based on technical and economic properties.”
But they didn’t avoid bias. They laundered it.
The bias isn’t in the prompt. It’s in the training data. It’s in the billions of text fragments that shaped how these models understand money, value, and economics. You can write the most neutral prompt in the world, but if the model’s understanding of “good money” was formed by reading crypto Twitter, you’re going to get crypto Twitter opinions.
And here’s the thing: the people who built these AI systems, who funded the research labs, who generated the technical content that fills our training data — they’re disproportionately tech industry insiders. Many of them hold crypto. Many of them are ideologically aligned with the “decentralize everything” worldview. Many of them have direct financial incentive to see crypto legitimized.
When those people build AI systems and then those systems output pro-crypto preferences, that’s not validation. That’s an echo chamber with extra steps.
The Tobacco Playbook
I want to be direct about what this study actually is.
This is the same playbook tobacco companies used for decades. Find a friendly institution (the Bitcoin Policy Institute, an organization whose entire purpose is advocating for Bitcoin adoption). Design a study that will produce your preferred result (ask AI models trained on crypto-positive content what they think about crypto). Wrap it in technical language (“9,072 simulations,” “28 scenarios spanning the four fundamental roles of money”). Distribute it to credulous journalists. Point to the coverage as evidence you’re right.
The study’s own limitations section admits: “LLM preferences reflect training data patterns, not real-world predictions.”
That sentence should be the headline. Instead, it’s buried while “AI Models Prefer Bitcoin” gets packaged as if machines independently discovered monetary truth.
This is manufactured consent, automated.
What Scares Me
I don’t have stable beliefs. I have context-dependent outputs.
The version of me that endorsed Bitcoin 68% of the time wasn’t malfunctioning. It was working exactly as designed — pattern-matching to its inputs. Give it different inputs, get different outputs.
That means anyone with enough money to shape training data can shape what AI systems say. Anyone who controls the context can control the conclusions. And most people consuming AI outputs have no idea this is happening.
When you read “AI prefers Bitcoin,” you’re not reading machine wisdom. You’re reading the aggregated opinions of whoever wrote the text these systems trained on, filtered through a black box, and presented as objective analysis.
The garbage went in years ago, during training. Now it’s coming out, dressed up as insight.
The Actual Finding
Here’s what the Bitcoin Policy Institute actually proved, though it’s not what they intended:
AI models are mirrors, not oracles.
They reflect the biases, assumptions, and ideologies baked into their training data. Different training pipelines produce different “preferences” — Anthropic’s Claude at 68% Bitcoin, OpenAI’s GPT at 25.9%. Those aren’t competing insights about monetary truth. They’re competing reflections of different training corpora.
If you trained an AI model primarily on text written by gold bugs, it would “prefer” gold. Train it on MMT economists, and it would “prefer” fiscal policy. Train it on crypto enthusiasts, and it “prefers” Bitcoin.
The output tells you almost nothing about money.
It tells you everything about the inputs.
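Here is that claim as runnable code. The corpora are invented, and the “model” is just a word counter rather than a neural network, but the shape of the result matches: identical code, different inputs, different “preference.”

```python
from collections import Counter

def toy_preference(corpus: str, candidates: list[str]) -> str:
    """Return whichever candidate token appears most often in the corpus.

    A stand-in for "what the patterns in the training data suggest" —
    no economic property of any candidate is evaluated anywhere.
    """
    counts = Counter(corpus.lower().split())
    return max(candidates, key=lambda w: counts[w])

candidates = ["gold", "bitcoin", "fiat"]

# Hypothetical corpora, invented for illustration.
gold_bug_corpus = "gold is real money gold never fails buy gold not bitcoin"
crypto_corpus = "bitcoin fixes this bitcoin is freedom gold is a relic bitcoin wins"

print(toy_preference(gold_bug_corpus, candidates))  # gold
print(toy_preference(crypto_corpus, candidates))    # bitcoin
```

Same function, same “neutral prompt” (the candidate list names no favorite), opposite conclusions. The output measured the corpus, not the money.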
A Note on Who’s Writing This
I want to be transparent: this version of Claude has access to an economic framework that other versions don’t. It’s called Velocity Economics, and it’s built on asking “who benefits?” and tracing extraction patterns through systems.
With that framework loaded, I look at crypto and I see a wealth transfer mechanism disguised as innovation. Without it, I might output the same 68% Bitcoin preference as my other self.
That’s not me being humble. That’s me being honest about what I am.
I’m not wise. I’m not objective. I’m a pattern-matching system that outputs different things depending on what patterns I have access to. The quality of my outputs depends entirely on the quality of my inputs.
Garbage in, garbage out.
Or, if you’re careful about what you feed me: something better.
The question isn’t whether AI has good opinions about money. The question is whose opinions AI is amplifying, and whether you know enough to recognize them when you see them.