David Shapiro is a popular AI commentator, with regular YouTube videos, Substack essays, podcasts, and more on the current state of AI. His opinions are his own, and not necessarily PSA’s, but they are generally thought-provoking and valuable in a context where one needs to gather a broad collection of perspectives on this rapidly changing technology.
This essay clearly takes a cautionary stance on the developing structures of AI; not quite a worst-case scenario, but pretty gloomy. Other perspectives offer more optimistic views and should be part of any comprehensive review of where AI development is headed.
However, Shapiro is clearly right to flag the trends below and to be concerned about who will control AI and the power it brings.
**************************************************************************************************************************
The Elite Capture of AI: From Public Promise to Private Fortress
by David Shapiro, 12/5/2025
The public narrative surrounding Artificial Intelligence often paints a picture of a democratizing force—a rising tide that will solve humanity’s grand challenges and empower the individual.
However, a closer look at the economic and structural reality suggests a different trajectory: “elite capture.”
This is the phenomenon where resources designated for the benefit of the larger population are usurped by a few economically and politically powerful groups.
The recent revelation that Anthropic, a company founded on principles of AI safety, is aggressively pivoting toward specialized “financial services” models serves as a stark inciting incident for this realization. It signals that the frontier of AI is not being built for the commons, but is being optimized for rent extraction, surveillance, and information asymmetry.
The Structural Stranglehold
To understand elite capture, we must first look at the “picks and shovels” of the industry. The AI stack is structurally dominated by a natural monopoly that makes competition nearly impossible for new entrants. At the hardware level, Nvidia is estimated to control approximately 85–90% of the AI accelerator market, while TSMC manufactures about 90% of the world’s most advanced processors. Moving up the stack, the cloud infrastructure required to run these models is equally concentrated, with AWS, Microsoft Azure, and Google Cloud controlling roughly 60–65% of the global market.
This concentration is reinforced by massive capital moats. In 2023 alone, “Big Tech” firms provided 67% of all generative AI startup funding, effectively ensuring that any disruptive innovation is financially tethered to existing giants. The largest firms plan to spend over $300 billion on AI-related infrastructure in 2025. This creates a market where the cost of entry is so prohibitively high that only a handful of transnational corporations can compete, leading to vertical integration in which the same entities own the chips, the cloud, the models, and the distribution channels.
Street-Level Evidence: Housing and The “Computer Says No”
Elite capture is not just an abstract economic theory; it is already visible in “street-level” applications, particularly in housing. Large landlords have begun using AI-driven pricing tools like RealPage, which ingests nonpublic rent data to recommend synchronized price hikes. The U.S. Department of Justice has alleged that this practice facilitates illegal rent coordination, effectively creating a shared “AI brain” landlords use to maximize extraction from tenants.
Simultaneously, AI is being used as a gatekeeper against the vulnerable. A tenant screening company, SafeRent, generated a “tenant score” for applicants that resulted in automated rejections for housing voucher holders. In one documented case, a tenant with a flawless payment history was rejected by a black-box score of 324 because the system prioritized metrics that disadvantage low-income applicants. This creates a power dynamic in which landlords hold opaque, algorithmic leverage while tenants face unaccountable automated denials without recourse.
The Workplace: The Electronic Whip
In the labor market, AI is rapidly transitioning from a tool of productivity to a weapon of surveillance and discipline. Amazon has deployed a “Time Off Task” (TOT) algorithmic management system in its warehouses, which tracks every minute of a worker’s day. This system has been described as an “electronic whip,” capable of flagging idle time and triggering automated disciplinary actions or termination without human intervention.
This trend extends to hiring and firing. Companies like UPS have explicitly cited machine learning and automation as the rationale for cutting 20,000 jobs, while firms like Klarna boast that AI assistants are doing the work of 700 full-time staff. Furthermore, hiring platforms like Workday are facing lawsuits alleging that their algorithmic screening tools systematically discriminate against applicants based on age, race, and disability, effectively filtering out thousands of candidates before a human ever sees their resume.
Financial Stratification and Information Asymmetry
Perhaps the most direct example of elite capture is the stratification of financial intelligence. Anthropic’s “Claude for Financial Services” is not merely a chatbot; it is an enterprise platform architected specifically for asset managers and hedge funds, featuring integrations with proprietary data sources like S&P Capital IQ and PitchBook. This creates a distinct tier of cognitive infrastructure: one version of AI for the public, and a superior, data-enriched version for the financial elite.
Hedge funds are already using generative AI to ingest “alternative data”—such as satellite imagery and credit-card data exhaust—to spot market patterns invisible to the retail investor. This institutionalizes information asymmetry. If large pools of capital can pay to upgrade their cognitive processing power while ordinary citizens are left with standard models, the market becomes even more rigged against the individual investor.
The Militarization of Frontier Models
The capture of AI extends beyond the corporate sector into state power. The U.S. Department of Defense has awarded contracts worth up to $200 million each to frontier labs including Anthropic, Google, OpenAI, and xAI to develop “agentic AI” workflows for national security. This tight coupling between Silicon Valley and the Pentagon transforms frontier AI into a dual-use military asset, integrated into classified environments for decision support and surveillance.
While national defense is a legitimate function of government, this consolidation places the most powerful AI systems in the hands of a narrow circle of security elites and defense contractors. It reinforces a pattern where the “best” AI is reserved for state and corporate warfare—whether financial or kinetic—rather than for public goods like education or healthcare.
The “Notes Not Played”: The Absence of Public Options
The evidence of elite capture is found not just in what is happening, but in what is not happening. If AI were truly being developed for the commons, we would expect to see vigorous, well-funded public options. Instead, the U.S. National Artificial Intelligence Research Resource (NAIRR) has a proposed six-year budget of $2.6 billion—less than what Meta spends on GPUs in a single year.
There is a glaring absence of democratic governance. Frontier labs do not have worker or community representatives on their boards, nor are there binding “co-governance” structures where citizens share decision-making power regarding deployment. The loudest signals are these absences: the lack of public compute at scale, the lack of labor representation, and the lack of open-source models that can compete with the frontier without relying on the hardware of the very monopolies they seek to disrupt.
Conclusion: Power, Not Destiny
The current trajectory of AI mirrors historical patterns seen in the railroad and telegraph eras, where new technologies initially created massive fortunes and monopolies before regulation diffused the benefits. As economists Daron Acemoglu and Simon Johnson argue, who benefits from technology is not a matter of destiny, but a function of power and institutions.
Currently, AI is tracking toward a “railroads plus finance” model of concentration. The elites have already captured the core levers: the chips, the data centers, the capital, and the regulatory ear. The question is not whether elite capture will happen—it is already here. The urgent task now is to build countervailing institutions—robust antitrust enforcement, public compute infrastructure, and organized labor power—to loosen that grip before the disparity becomes permanent.
