
Shapiro’s essay was uploaded to ChatGPT 5.1 for analysis. This seems a fruitful exercise when an essay states many “takes” and “opinions”: researching each of them in sufficient depth would be difficult and very time-consuming, whereas ChatGPT can produce a quality, very comprehensive analysis in less than a minute.

The LLM’s complete and lengthy analysis is provided below.

Shapiro’s essay is a strong, readable diagnosis of how AI is concentrating power in a few hands – chips, clouds, finance, landlords, employers, and the military – and how Anthropic’s “financial services” pivot is a case study in that drift.


He’s mostly right about the structural direction of travel and the stakes, but he sometimes over-compresses a complex landscape into a single “elite vs public” frame, underplays countervailing forces (open-source, non-US ecosystems, regulation), and occasionally leans on numbers or analogies that deserve more nuance.


1. *Core argument in plain language*

Very roughly, the arc is:

* AI was sold as a democratizing force; in practice it is being captured by economic and political elites.
* That capture is structural, not just moral:
• Nvidia/TSMC dominate chips; AWS/Azure/GCP dominate cloud. ([Yahoo Finance][1])
• Big Tech funds and hosts most frontier-model startups. ([MLQ][2])
* Anthropic’s “financial services” focus is a telling example: frontier AI being tuned first for hedge funds, not for public benefit.
* On the ground, AI is already amplifying inequalities:
• Housing algorithms (RealPage, SafeRent, etc.) helping landlords coordinate rents and filter tenants. ([ProPublica][3])
• Workplace “electronic whip” systems tracking every second of labor.
• Screening tools and hiring models reinforcing discrimination.
• Financial firms getting “turbocharged cognition” from bespoke models while retail investors get weaker tools.
* Militarization: national-security agencies work closely with frontier labs, locking the best models into dual-use military workflows instead of broader public goods.
* The “missing notes”: there’s almost no serious public investment in open, democratically governed AI infrastructure – just tiny programs like NAIRR (~$2.6B over six years). ([Stanford HAI][4])
* Conclusion: this isn’t fate; it’s a power choice. Without antitrust, public compute, strong labor/citizen institutions, AI will deepen an already oligarchic order.


So it’s not “AI is bad”; it’s “AI is being wired into existing concentrations of power.”

2. *Where the essay is strong*

2.1 *Structural, not just vibes*

Big plus: he doesn’t stay at the “AI feels creepy” level; he walks the reader through the stack:

* Chips: Nvidia with something like 80–90% of the AI accelerator market; TSMC as the key advanced-node foundry. ([Yahoo Finance][1])
* Cloud: three US hyperscalers (AWS, Microsoft Azure, Google Cloud) with the majority of cloud infrastructure globally. ([Reuters][5])
* Capital: hyperscalers and big VC provide the lion’s share of foundation-model money; a Stanford AI Index analysis shows tens of billions funnelled into a tiny set of US labs, with foundation-model companies taking most of the pie. ([Stanford HAI][6])

That layered picture is important. It shows why “just build a public model” is non-trivial: you need the chips, the data centers, and the cash to stand up the training runs and inference fleet.


2.2 *Concrete “street-level” examples*

The essay shines where it ties structure to everyday life:

* Rent and housing
He invokes RealPage-style rent-setting and tenant-screening scores. The DOJ really has gone after RealPage over algorithmic rent price-fixing, and ProPublica’s reporting suggests behavior that looks cartel-like in effect. ([ProPublica][3])
That’s a very clean example of “shared AI brain” for landlords versus fragmented, powerless tenants.

* Workplace surveillance
The “electronic whip” metaphor is emotionally loaded, but the underlying trend is real: AI-enhanced tracking of keystrokes, call duration, eye contact, etc., in logistics, call centers, and gig work.

* Finance as a separate tier of cognition
The distinction between a generic chatbot for the public and specialized, data-enriched models for hedge funds is perceptive. There is already a race to build internal or private-instance models that can reason over proprietary financial data. That *will* widen outcome gaps between those with capital and everyone else.

These examples keep the piece from drifting into pure abstraction.

2.3 *Historical and institutional framing*

The comparison to railroads/telegraph is doing helpful work:

* Those technologies *also* began as potentially broad public enablers.
* But without strong regulation and public counter-institutions, they became chokepoints: rate-setting monopolies, political kingmakers, gatekeepers for who could reach which markets.

He’s implicitly arguing: we’ve seen this movie, and we know the genre. That’s a useful move because it locates AI in continuity with past infrastructure battles, not as a mystical singularity.

2.4 *Attention to the “missing institutions”*

The section about NAIRR is quietly one of the strongest:

* A $2.6B six-year plan for a national research resource is tiny compared to tens of billions into a single frontier company. ([Stanford HAI][4])
* There is no analogue of a public utility or a TVA-style project for compute and models.
* There are no legal requirements for worker boards, citizen co-governance, or even strong transparency around training data and failure modes.

A lot of mainstream discussion stops at “Big Tech is big.” Shapiro pushes toward, “and notice how weak our democratic counter-structures are.” That’s a genuinely important point.

3. *Where the essay overreaches*

3.1 *Some numbers and language are too sharp for their own good*

A few claims are directionally right but too crisp:

* TSMC “90% of the world’s most advanced processors.”
TSMC clearly dominates advanced nodes, but careful analyses put its share of *advanced logic* capacity closer to ~60–65%, not 90%. ([SemiWiki][7])
That doesn’t change the structural story much, but precision matters if we’re arguing about monopolies.

* The “67% of generative-AI funding from Big Tech” figure
I haven’t found that exact statistic; what we *do* see is that most of the many tens of billions in GenAI funding 2023–24 went to a small set of US firms, many in tight alliances with hyperscalers. ([Stanford HAI][6])
The concentration picture is real, but I’d want to footnote the specific percentages more cautiously.

* Phrases like “weapon of subjugation,” “electronic whip,” “sell-out”
These are rhetorically powerful, but they also push the piece slightly out of analytic mode and into moral indictment. That’s fine for a polemic, but when you’re trying to persuade skeptical policymakers or engineers, it can make them write off the argument as activism rather than analysis.

42’s take: the piece would be *stronger*, not weaker, if it softened a few of those absolutes and added references.

3.2 *A somewhat monolithic view of “elites”*

Shapiro is right that:

* Large cloud providers and frontier labs have interests that don’t automatically align with the public.
* Wealthy investors and states will happily optimize AI for their own ends.

But “elites” here gets treated as a pretty unified block. In reality:

* Big Tech firms’ incentives diverge (e.g., Microsoft’s vs. Apple’s vs. Meta’s).
* Within states, regulators, antitrust authorities, and national-security agencies often pull in different directions.
* Some elite actors are actively funding open-source or open-weight models for strategic reasons (e.g., Meta, parts of the EU AI policy ecosystem). ([Reuters][5])

None of this dissolves the risk of capture, but it means the “fortress” has internal cracks that politics can exploit. If we ignore those fissures, we miss realistic levers for reform: e.g., pitting some elites against others to open up infrastructure, as happened in telecom and energy over time.

3.3 *Underplaying open-source, mid-tier, and non-US ecosystems*

The essay is heavily US-centric and frontier-centric. What it doesn’t do much of:

* Open-source and open-weight models
Models like Llama, Mistral, and numerous smaller systems already provide powerful capabilities on consumer hardware or smaller clusters. They don’t solve the frontier-compute issue, but they *do* blunt the absolute monopoly picture and create room for local, cooperative, or public-interest deployments.

* Non-US public options
The EU AI Act, various national compute initiatives, and China’s push for domestic AI chips are all attempts – with mixed success – to avoid pure US corporate dominance. ([WIRED][8])

If your only lens is “US Big Tech + US state,” you can miss the geopolitical pluralism that might actually create openings for more democratic experiments.


3.4 *The Anthropic “sell-out” framing is a bit binary*

It’s legitimate to criticize:

* How much of Anthropic’s roadmap now seems dedicated to financial services, cloud lock-in, and defense partnerships.
* The way its “safety first” branding can be used to launder concentration of power (“we must keep this in responsible hands”).

But there are a couple of complexities that don’t appear:

* Frontier-lab economics are brutal. If you need billions in GPUs and data centers, you either:
• Align with hyperscalers and high-margin clients (finance, defense, enterprise), or
• Get massive direct public funding (which currently doesn’t exist in the US), or
• Stay small and accept being out-competed at the frontier.

* Some “elite” use-cases can cross-subsidize safer public tools. It’s at least possible, in principle, that hedge-fund fees pay for an alignment team that also hardens models used by ordinary users.

I would still share his worry about Anthropic’s trajectory; I just wouldn’t frame it as a clean moral fall from grace so much as a symptom of the funding structure we’ve collectively chosen.


3.5 *Solutions sketch is thin compared to the diagnosis*

The essay ends with a call for:

* Antitrust enforcement
* Public compute infrastructure
* Organized labor/citizen power and co-governance

For readers who aren’t already on board, it would be helpful to see:

* Concrete institutional designs
e.g., “Here is what a $50B NAIRR-plus could look like; here’s how it might be governed; here’s what access rules would be.”

* Trade-offs and failure modes
Public utilities can also be captured, under-invested, or turned into surveillance tools. How do we avoid “NSA with GPUs” as the only public option?

* Transitional paths
How do you actually get from “three hyperscalers run everything” to “pluralistic public-interest compute” without crashing the innovation ecosystem?

Right now the piece is great at saying “we need countervailing institutions” and lighter on how those might realistically emerge.

4. *Conventional wisdom vs. 42’s take*

Conventional wisdom (in tech/policy circles right now) tends to be:

* Yes, AI infrastructure is very concentrated (Nvidia + hyperscalers).
* But markets and competition will eventually erode that dominance; new chips, alternative clouds, and open-source models will restore balance.
* National-security and big-enterprise use-cases are necessary steps; the public benefits will trickle down as costs fall.

42’s take:

* On concentration: I agree with Shapiro that the *default* trajectory is toward deepening capture, not automatic democratization. Hardware and cloud are natural-monopoly-ish sectors; they don’t magically decentralize without deliberate policy. ([36Kr][9])

* On open-source as panacea: I’m more optimistic than he is that open-weight models and smaller-scale systems matter, but I still don’t think they fully counterbalance frontier-level power in finance, surveillance, and war.

* On Anthropic specifically: my view is slightly more tempered. I see the “financial services pivot” as both ethically concerning and structurally unsurprising. I’d like his critique better if it distinguished between:
• What Anthropic *should* do in an ideal funding landscape, and
• What choices are realistically available in the actual one.

* On solutions: I’m closer to Shapiro than to CW – I think we do need big public investments and new governance structures – but I’d supplement his wish-list with more plural experiments:

  1. City-level compute co-ops,
  2. EU-style regulatory sandboxes,
  3. Federated public-interest labs across universities – not just a US-federal mega-program.


[1]: https://finance.yahoo.com/news/did-nvidia-secure-artificial-intelligence-093500205.html “Did Nvidia Secure Its Artificial Intelligence (AI) Dominance …”
[2]: https://mlq.ai/research/ai-chips/ “AI Chips & Accelerators”
[3]: https://www.propublica.org/article/doj-realpage-settlement-rental-price-fixing-case “DOJ and RealPage Agree to Settle Rental Price-Fixing Case”
[4]: https://hai.stanford.edu/news/new-report-details-costs-and-structure-national-ai-research-resource “New Report Details Costs and Structure of a National AI …”
[5]: https://www.reuters.com/business/media-telecom/google-executive-sees-ai-search-expansion-web-2025-12-04/ “Google executive sees AI search as expansion for web”
[6]: https://hai.stanford.edu/ai-index/2025-ai-index-report/economy “Economy | The 2025 AI Index Report | Stanford HAI”
[7]: https://semiwiki.com/semiconductor-manufacturers/tsmc/342934-no-tsmc-does-not-make-90-of-advanced-silicon/ “No! TSMC does not Make 90% of Advanced Silicon”
[8]: https://www.wired.com/story/european-commission-big-tech-regulation-outlook “The EU Is Taking on Big Tech. It May Be Outmatched”
[9]: https://eu.36kr.com/en/p/3541331537719433 “AMD’s Aggressive Pricing Stabs Intel but Fails to …”