The AI tribes vying for power

Days before the Democrats left the White House, the president’s National Security Advisor, Jake Sullivan, issued a stark warning. “The next few years,” he said, “will determine whether artificial intelligence leads to catastrophe — and whether China or America prevails in the AI arms race.”

In this remark, Sullivan displayed far more lucidity regarding AI than did other members of the Democratic establishment. (Take Kamala Harris’ bewildering characterisation of bias and misinformation as “existential” threats.) Their successors in the White House, though, see AI as central to their plans for American dominance. Last week, the Trump administration slapped strict export controls on Nvidia’s H20 chip. And on Monday, the president’s science and technology policy chief, Michael Kratsios, told an audience of technologists that AI would be central to the new “Golden Age of America”. 

It is easy enough to say that a new technology will power prosperity. The reality, though, is that AI is a more challenging topic for the White House than Kratsios let on. There were some clues in JD Vance’s speech on artificial intelligence in Paris in February. It was his first major policy address as vice president, and it revealed an understanding of AI befitting a former tech investor.

But it was also a careful political balancing act. Vance promised that the Trump administration’s AI policy would, simultaneously, be highly deregulatory, cripple American adversaries, and “centre American workers”. These wide-ranging undertakings hint at the underlying reality of Vance’s speech: that the Trump administration’s emerging AI policy is being shaped by distinct factions with different priorities and worldviews.

As AI capabilities improve, the ideological divisions have only sharpened. Silicon Valley accelerationists push for unfettered development while invoking Chinese competition as a shield against regulation. Meanwhile, the populist elements of Trump’s base remain deeply sceptical of coastal elites building “machine gods” that could displace American workers. 

These contradictions explain Vance’s rhetorical tightrope: promising deregulation to please potential big tech donors, while assuring workers that they will have “a seat at the table” in a future that may not require their labour. It is what you would expect from an administration being pulled in three directions.

The strongest pull comes from the “accelerationists”. This is a group that includes venture capitalists such as Marc Andreessen, firms like Andreessen Horowitz (a16z), and tech giants with a stake in AI, like Meta and Nvidia.

The group spends hundreds of millions of dollars on lobbying and political donations. After pharmaceuticals, tech giants are DC’s largest lobbyists, with one lobbyist for every two members of Congress; they spent a record $61.5 million on reported federal lobbying in 2024, not counting unreported think tank donations and direct political fundraising. It is commonly said in Silicon Valley that when a senator visits, their first dinner stop is a fundraiser hosted by Marc Andreessen or David Sacks, now the White House’s AI and crypto czar.

The group has said again and again that technological advancement should not be hampered by over-regulation or a fear of hypothetical future risks. It is a group that has been scarred by the crippling environmental regulation of the late 20th century — particularly for nuclear power, where environmentalist groups made similarly “existential” arguments about nuclear safety. Accelerationists liken such thinking to that of the Effective Altruists, who have long called for caution in the development of AI, and whose lobbying influenced the Biden administration.

Much of Andreessen’s writing about AI is an implicit rebuke to Effective Altruism. In his manifesto “Why AI Will Save The World”, he positions AI as “quite possibly the most important — and best — thing our civilization has ever created”. He rejects concerns about AI killing humanity (calling this a “category error”, since AI is “math — code — computers”); ruining society through harmful outputs (dismissing objections as “thought police”); eliminating jobs (Andreessen counters this with economic theories on productivity growth); or causing inequality (arguing technology democratisation benefits everyone).

As we get closer to powerful AI systems beyond “math — code — computers”, and public and political concerns grow, the accelerationist faction will likely double down on their framing of regulations as capitulation to China. They will reframe the debate around American dominance. Ted Cruz summed things up on his podcast: “If there are going to be killer robots, I’d rather they be American killer robots than Chinese.” 

Soon enough, national security risks will become clear. These will include AI-assisted bioweapon development, or clear evidence that systems are misaligned with the intent of their operators. At this point, the accelerationists will call for guardrails, but only those of the superficial kind. 

In that event, the accelerationists might sound like they are in line with the China hawks, but will quietly continue business as usual. Meta continues to open-source its latest models, despite reports of their use by China’s military. Nvidia has lobbied against, and blatantly violated, export controls. These companies’ political capital comes not only from their willingness to spend money on lobbying, but from their availability as “experts”. Much policymaking is based on available “expertise”, which often means corporate lobbyists.

The accelerationists, then, now have the most power over the White House’s AI policy. But they might understand the least about what’s coming.

The national security hawks have a clearer view of the future. This is a group whose origins can be traced, at least in part, to the first Trump administration. In 2019, the president presciently signed the first ever executive order on AI, recognising “the paramount importance of American AI leadership to the economic and national security of the United States”. 

This view persisted among select members of the Biden administration, including Sullivan and Ben Buchanan. Sullivan and Buchanan have been succeeded in the White House by a faction that includes traditional Republican “China Hawks” such as the new National Security Advisor, Mike Waltz, who view AI as the defining front in a new cold war. The faction has allies among AI lab leaders, like Anthropic’s Dario Amodei, who have publicly advocated for the US to maintain its lead in AI development, even through measures such as export controls.

To the national security hawks, it is an undeniable fact that China must not achieve superintelligence first. There are two distinct ways to ensure victory: Accelerate America or Obstruct China.

The choice of approach depends on the perceived risks of AI beyond the China competition. It’s no good beating Beijing if, by winning, the US loses control of advanced systems. The greater the perceived existential risk from advanced AI itself, the more inclined officials become toward obstruction over acceleration.

The “Accelerate America” hawks believe that the US must lead in AI development at all costs. As such, President Trump’s recent executive order calls for “American Dominance” in AI. At minimum, this means streamlining permitting, removing regulatory barriers (including hindrances such as creator copyright enforcement), and avoiding anything that slows American progress. And, encouraged by the recent tariffs, reshoring efforts are gaining momentum: Nvidia announced on Monday that it will manufacture its latest Blackwell AI chips in the US, as part of a broader $500 billion investment over the next four years. Accelerate America hawks view any government interference that slows American labs as ultimately benefiting Chinese competitors, who operate without such constraints.

Moving further along the spectrum, mid-level hawkish interventions would see the military and government agencies not merely adopting advanced AI systems, but actively co-developing them. The most aggressively hawkish iteration of the Trump administration would designate AGI a national priority, overseeing a “Manhattan Project” for superintelligent AI and granting it unprecedented access to resources. One such resource would be the NSA’s reams of high-quality data.

This policy would entail heavy government involvement, which could trigger a nation-state arms race: China, we can imagine, would perceive American government-led AI development as an existential threat requiring matching countermeasures. Another problem with the plan is that it would transfer development authority away from technical experts and toward bureaucrats and military planners, decreasing the competence of those overseeing a potential intelligence explosion.

The other group of hawks, the “Obstruct China” camp, focuses less on winning and more on ensuring that China loses. If empowered, this faction would enact rigorous national security evaluations of AI systems, the idea being to build state capacity and defensive expertise in AI. (This is the current mandate of the US AI Safety Institute.) Obstruction could extend to federally backed intervention. This strategy recognises that open-sourcing AI significantly benefits China, which has strong technical talent but weaker computational resources. With this in mind, note Sam Altman’s recent announcement that OpenAI will open-source one of its leading models. He is trying to get the better of DeepSeek, a Chinese rival, perhaps to the detriment of national security.

By keeping model weights private, and by restricting knowledge of training methodologies, America might be able to maintain its lead. This strategy rests on information asymmetry. As such, it might include export controls on both hardware and knowledge. There could be criminal penalties for sharing restricted AI information with Chinese entities, and policymakers could classify certain AI techniques as controlled defence technologies. There are already signs of this attitude: Senator Tom Cotton recently proposed barring citizens of foreign adversaries such as China from working in the national laboratories.

The obstructionist hawks borrow from nuclear deterrence theory. As a result, they understand that the creation of transparent thresholds for AI development (such as the size of a training run), and the imposition of consequences for those who violate those thresholds, might make the development of superintelligence less economically and politically attractive. (That was the argument of a well-circulated paper whose three authors included a former Google CEO and an advisor to Elon Musk’s xAI company). Under such a strategy, the US would publicly commit to specific countermeasures triggered by Chinese advancement toward superintelligence. These countermeasures could range from economic sanctions to targeted cyber operations against research infrastructure to more aggressive interventions.

Finally, the most extreme obstructionist approach recognises that the AI arms race itself might be destabilising, regardless of who wins. This strategy would involve America voluntarily restraining its own superintelligence development in exchange for verifiable international agreements — perhaps the “Deal of the Century” (or Anthropocene) for President Trump. It would require sophisticated verification mechanisms, potentially including international inspection regimes similar to nuclear non-proliferation treaties. The fundamental premise is that the prospect of Chinese superintelligence is so troubling that America should sacrifice its own in order to avoid it.

The populist Right is the third major faction likely to influence the administration’s approach to AI. This is a group that is dormant in formal policy circles but vocal in the cultural landscape that elected Trump.

Take the comments on superintelligence made by Tucker Carlson, Steve Bannon, and Joe Rogan, which sit uneasily beside the Trump administration’s official policies. Carlson has characterised AI as a “threat to human civilisation”. In an appearance on Rogan’s podcast, he went further, declaring: “If it’s bad for people we should strangle it in its crib. Why not just blow up the data centres?” Bannon predicts “artificial superintelligence that is far more intelligent, far more creative, far more powerful than human beings”. Rogan doesn’t think we are ready for it.

This faction splits into two sociologically distinct but overlapping groups. The religious traditionalists — Catholics, Protestants and Evangelicals — fundamentally reject Silicon Valley’s quest to create “machine gods”. To them, this represents not merely technological hubris but potential heresy. They question whether humans should attempt to construct intelligences exceeding our own capabilities — a modern Tower of Babel disconnected from Biblical values.

The second group — workers traumatised by globalisation and the ramifications of NAFTA — seeks a better vision for automation than the previous waves, which devastated manufacturing communities. Trump’s base won’t tolerate another generation of workers discarded by technological change. Groups like American Compass represent their interests, demanding AI deployment that strengthens rather than displaces the middle class. Even so, these effects will only be felt later: AI will affect cognitive work before it affects physical work. As a recent analysis puts it, AI will first come for “films, finance and programming” — not exactly blue-collar labour.

These two factions find common ground in the New Right’s broader opposition to war and global entanglements. If superintelligence development continues unchecked, the populists fear, a Cold War with China could follow, potentially triggering a Taiwan invasion and disrupting peaceful American life — precisely the scenario Trump promised to avoid.

From a sociological perspective, they see AI’s development as being driven by young, well-paid, liberal Californian engineers racing to build machine gods — while risking both American lives and livelihoods. And after witnessing existing technology’s destruction of family values and youth culture, they fear that AI will do further damage, constraining freedoms while concentrating power in government and tech firms.

As more powerful AI systems arrive, the populist Right is likely to become more comfortable with government intervention to protect its lives and livelihoods. The populists will push for law and order policies — “AI agents must follow the law” — while advocating measures to limit automation’s impact. These might include requirements for human oversight; taxation of computational activity; and pro-union regulations.

Most importantly, this group embraces America First principles — rejecting both unnecessary military escalation with China and trade policies that sacrifice American workers. And it offers a third path beyond the hawkish-libertarian binary that dominates Washington.

For the time being, it appears that the Effective Altruists are an afterthought and that the accelerationists and the more aggressive hawks together have the upper hand. Shortly after Michael Kratsios was confirmed as the director of the Office of Science and Technology Policy, President Trump urged him to “blaze a trail to the next frontiers of science” and make America an “unrivalled” global leader in emerging technologies. Kratsios’s speech this week expressed an almost identical spirit.

But the arrival of powerful AI will fracture both domestic and international politics. It is a fundamentally destabilising technology — socially, economically, and geopolitically. Increasingly realistic chatbots will supplement, and then substitute for, human friendships and relationships. Autonomous AI agents will replace cognitive labour, then robots will take care of the rest. And the development of this technology might herald the beginning of a new geopolitical Cold War with China — one in which a Taiwan invasion becomes ever more enticing, data centres become strategic missile targets, and cyberweapons make it onto the escalation ladder.

It is rarely prudent to second-guess Trump. Elon Musk, for a few weeks the “first buddy”, appears now to be on his way out of the White House. Musk, having said on a recent podcast that AI has a “10-20% chance of killing everyone”, might have been one of the more moderating influences on the president, if only with regard to AI.

Where things go from here, therefore, is anyone’s guess. But despite the prospective exit of Musk, Trump is uniquely placed to achieve what others thought impossible: meaningful coordination, domestically and internationally. In short, he has an opportunity to strike the “Deal of the Century” with China. His foreign policy successes — the Abraham Accords in the Middle East, his summits with Kim Jong Un, pressuring European allies to pay up for defence — show his deftness in using US leverage for American, and global, interests.

In this spirit, he could avoid war, or worse, by adopting the containment strategy of the obstructionist hawks; protecting his white-collar-voting base; and making a deal with China that mimics nuclear arms deals. The more aggressive hawks might object to it, but it’s worth remembering how quickly Trump has normalised, at least among Republicans, the idea of pragmatic engagement with Russia. Norms are out, negotiations are in, and with AI moving at lightning speed, it is negotiations that are the appropriate course of action.



