
TL;DR
- AI Now Institute warns that the U.S. government’s AI policies risk concentrating power in a handful of AI firms.
- The think tank’s new report, Artificial Power, outlines systemic issues tied to the AGI race.
- Consequences already include algorithmic bias and threats to democracy, data privacy, and national security.
- AI Now warns that unchecked AI growth is a political choice—not a technological inevitability.
Big Tech’s AI Ascent: Engineered, Not Earned
In a candid new report titled Artificial Power, the AI Now Institute challenges the narrative driving the rush toward artificial general intelligence (AGI). According to co-authors Amba Kak and Dr. Sarah Myers West, the AI boom is being engineered by powerful interests, not simply propelled by scientific progress.
The Institute, which focuses on the social implications of AI, argues that the race toward AGI is accelerating without sufficient democratic oversight or public accountability.
During an interview on TechCrunch’s Equity podcast, Kak and West explained how Big Tech is leveraging billions in public-private partnerships to build massive compute infrastructure and foundational models—despite widespread doubts about profitability.
“The future we’re being sold is not inevitable,” said Kak. “These are political and economic decisions being framed as technical destiny.”
Trump’s AI Action Plan: Who Benefits?
The Trump administration’s AI agenda has largely been welcomed by Silicon Valley giants, with major players like OpenAI, Google, and Microsoft securing government contracts and regulatory leeway.
While the plan is framed as necessary to keep the U.S. globally competitive, particularly against China, critics say it sidesteps real concerns like environmental degradation, labor disruption, and algorithmic discrimination.
According to the report, this public funding is subsidizing a handful of private actors while leaving the public exposed to untested technologies and potential harms.
The Hidden Harms of AGI Hype
The AI Now report outlines several immediate and long-term dangers of unchecked AGI development:
- Environmental impact from energy-intensive model training
- Algorithmic discrimination in hiring, credit scoring, and policing
- National security risks from AI weaponization and surveillance
- Erosion of democratic institutions through influence campaigns and misinformation
- Lack of data sovereignty, with private firms hoarding critical datasets
These concerns reflect patterns of corporate capture, where policy decisions are increasingly shaped by tech firms with outsized lobbying power. As The Markup and others have reported, many AI labs operate under a “move fast, regulate later” philosophy that leaves civil society playing catch-up.
AI Power, Politics & Consequences
| Area of Impact | Risk Highlighted | Source |
| --- | --- | --- |
| Environmental Degradation | Energy use in AI model training and infrastructure | Nature |
| Algorithmic Discrimination | Racial and gender bias in automated decision-making | Brookings |
| Democratic Erosion | AI-generated disinformation influencing elections and governance | Stanford HAI |
| National Security | AI-enhanced cyberweapons, surveillance tools | CNAS |
A Democratic AI Future? Still Possible
Despite the bleak picture, AI Now insists alternatives are still viable. The report calls for:
- Transparency mandates around AI model training and deployment
- Stronger data protection laws, especially around biometric and behavioral data
- Democratic oversight, including public audits of government-AI partnerships
- Support for independent AI research, free from corporate interests
Amba Kak emphasized that civil society needs a seat at the table: “The power being consolidated right now will shape how everyone lives in the next decade. We cannot afford to be passive observers.”
Regulating the Irregular: A Challenge Ahead
The podcast discussion echoed concerns from other watchdogs who say current efforts to regulate AI—such as the EU AI Act—do not go far enough to address consolidation or systemic risk.
And in the U.S., the push for “safe and trustworthy AI” often lacks clarity and enforcement. AI Now warns that vague promises of “alignment” or “ethical AI” frequently mask deeper issues of monopolization and regulatory capture.
Conclusion: The Price of the AGI Race
The AI Now Institute is not anti-AI—but it is calling for a broader, more honest conversation about where AI is headed and who gets to decide. Its Artificial Power report is a stark reminder that the future of AI is not only a technical issue—it’s a political one.
Until tech companies and governments prioritize accountability over acceleration, the public risks paying the true cost of the AGI arms race.