
A super-intelligence bypassing all human control. AI that will not just replace all workers, but even CEOs. Guardrails on big tech being seen by some as a harbinger of the antichrist… Wait. What? When it comes to AI, panicked headlines around unexpected intelligence and capability are coming thick and fast and with every new scare another zero is added to someone’s valuation.
It is a collective failure of public discourse that we haven’t called bullshit on this hyperbole sooner, much of it manufactured to obfuscate the truth.
Fearmongering is a well-oiled feature of the AI business model; it keeps the hype up and cash flowing while the real problems – runaway costs, brittle infrastructure and a dangerous concentration of power – go largely unchallenged. Any business weaving AI into its operations should pause to ask: who benefits and what’s happening while we’re all kept distracted by these far-fetched notions?
But first, a disclaimer: my background is in information security, interface politics and digital systems. I use AI daily, both to analyse socio-political threats and as an assistant in my wider research work. I believe it’s very likely that AI, in some form, is here to stay. But we have a major problem: the rhetoric doesn’t match the economic and infrastructural costs needed to support this supposed doomsday technology leap. In fact, it doesn’t even support the capabilities of AI today. If you’re now hooked on AI, you should have a problem with it too.
The financial logic of the AI boom is absurd, which is why it demands such narrative camouflage. Large language models like ChatGPT, Gemini, Grok and Claude are ravenous machines. Every prompt burns through vast computing power via specialised processors provided by just one manufacturer – Nvidia.
Nvidia makes sure that every prompt costs real money. If you strip away the sweetheart GPU deals, tax breaks and circular investment between a handful of firms and price the same computation at the open market rate for accelerator time, the real cost of responses from the largest models runs tens or sometimes hundreds of times higher than the prices users see. The only reason you aren’t paying that – and the only reason these systems look cheap or flat-rate – is that those companies are eating the difference and recycling subsidies and investor cash through one another.
Nvidia sells chips to Microsoft, Microsoft invests in OpenAI, OpenAI drives demand for Nvidia. Everyone inside the bubble cheers the illusion of profit, buying time in the hope that one company creates artificial general intelligence (AGI) and ‘wins capitalism’.
Back in reality, basic maths must apply: if every user prompt requires this much processing power, the economic model will collapse the moment the subsidies are taken away. If the business justification for AI is competitiveness and cost-efficiency, then building a dependence on this subsidy is a huge gamble. This is true regardless of whether any of these companies actually succeed in their quest to develop AGI.
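The subsidy gap is easy to sketch on the back of an envelope. The figures below are purely illustrative assumptions, not measured costs; the article claims only that true compute costs run "tens or sometimes hundreds of times higher" than user-facing prices, so the multiplier and prices here are hypothetical placeholders to show the shape of the maths.

```python
# Back-of-envelope sketch of the AI subsidy gap described above.
# Every number here is an assumed placeholder, not a real measured cost.

subsidised_price_per_prompt = 0.002  # what a user appears to pay per prompt (assumed, USD)
true_compute_multiple = 50           # "tens or hundreds of times higher" (assumed midpoint)
prompts_per_worker_per_day = 40      # assumed usage for one knowledge worker

true_cost_per_prompt = subsidised_price_per_prompt * true_compute_multiple
daily_subsidy_absorbed = (
    (true_cost_per_prompt - subsidised_price_per_prompt) * prompts_per_worker_per_day
)

print(f"Apparent daily cost per worker:     ${subsidised_price_per_prompt * prompts_per_worker_per_day:.2f}")
print(f"Unsubsidised daily cost per worker: ${true_cost_per_prompt * prompts_per_worker_per_day:.2f}")
print(f"Daily subsidy absorbed per worker:  ${daily_subsidy_absorbed:.2f}")
```

Under these assumed numbers, a business that budgets for the apparent price is exposed to a cost jump of roughly the chosen multiple the day the subsidy ends; the point is the multiplier, not the specific figures.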
The truth is boring
Part of what keeps the frenzy alive is little more than a parlour trick. The old saying goes that to someone with a hammer, everything looks like a nail. In the age of AI, the hammers can talk too, and that has impressed everyone. But the people selling the tools now insist they can also think, feel and even worry about being switched off.
It’s a clever sleight of hand that makes for headlines straight out of dystopian fiction. Predictive models that string words together are recast as sentient beings on the verge of consciousness and world domination. The story sells better than the truth and the truth is boring: these systems are statistical parrots trained to guess the next word.
Each new ‘revelation’ about machines passing the Turing Test or ‘begging for life’ drives another round of headlines and investment. The illusion works because it flatters both fear and ego – fear that we’re losing control and ego that we’ve built something godlike. Meanwhile, a handful of companies cash in on the performance, selling us the fantasy that their talking hammers are anything more than expensive tools.
To panic or not to panic
If the talking hammers metaphor goes too far, then simply think of what your business is doing as an addiction. AI works like a performance-enhancing drug: it gives you a short-term edge while making you dependent upon its use. It boosts productivity here and there, drafts your emails, cleans up your code and slowly convinces you that you can’t work without it.
Some have claimed that AI models like ChatGPT and others ‘rewire’ the brain to create this dependency. The truth is far more banal. It is habit-forming – both for us as individuals and our professional lives. And like most habit-forming products, the key is that you do not control it. Most of what passes for ‘efficiency’ today is borrowed from a system running on a 99% discount sticker. When that subsidy disappears and you can’t afford it, the comedown will be brutal. Are you or your business ready for that?
So, if we really want to indulge in panic, then let’s save it for the moment the economic reality hits home and the price tag skyrockets.
The cloud providers and chip manufacturers will be fine when the hype money runs out because they’ll still own the infrastructure. This is a deeply cynical business model, but it’s one that’s proven to work. Look at the Gold Rush – the miners went broke while the merchants selling the shovels made all the money. Now the startups, consultants and mid-sized firms betting everything on AI-powered tools will be the ones left staring into an empty pan.
Or here’s a much better idea: let’s not indulge in panic at all and do something practical instead.
Start by rejecting the myth that this is the only way things can work or that we’re powerless. Open, participatory standards can give users – whether businesses, institutions or individuals – a stake in how AI is built and used. Transparency around data, training and cost would strip away the mystique and force the industry to grow up. Ensure that AI is deployed locally first, so that companies must always compete against systems that everyday people can run in their homes for minimal cost. If AI is the future, then it needs to be built in a way that can’t collapse and that doesn’t leave entire populations exploited by a parasitic sector of the tech industry.
In 2023, the World Ethical Data Foundation published the Open Standard, pushing for a different model. As a global network of developers, researchers and policymakers, we’re working to make technology accountable to those who depend on it, rather than a few billionaires. Our aim is to replace blind trust in a creaky system edging towards an asymmetric power structure with civic trust.
Panic and fear cause confusion and confused people lack the agency to build any kind of future. But if we invest in the ethical, decentralised and equitable foundations now, we can develop systems that reward collaboration over extraction and we can progress with a sound economic and infrastructural model in place.
Cade Diehm is head of research at the World Ethical Data Foundation.
