
A cursory glance at this year’s headlines suggests that “technology” might as well be synonymous with “artificial intelligence”. Even as many struggle to prove its value, business leaders can’t seem to get enough of generative AI, eager as they are to alchemise pilot projects into something revolutionary.
Spending on the technology is through the roof. AI has rocketed up the agenda for governments around the world, which increasingly view the tech as a matter of national security. The UK and the US, for instance, are developing ambitious sovereign-compute projects. Meanwhile, Silicon Valley firms are spending record amounts on AI infrastructure.
However, the technology has not come without controversy. Although more people than ever before are using AI tools, citizens are increasingly concerned about AI’s impact on jobs and society as a whole.
The year in artificial intelligence so far
UK AI Action Plan
Keir Starmer sets out Labour’s vision to make the UK an “AI superpower”, adopting a majority of policy items devised by Matt Clifford, the government’s then-adviser on AI.
Project Stargate
Donald Trump kicks off his second presidential term by announcing a $500bn (£371bn) venture with OpenAI, Softbank and Oracle to develop AI infrastructure.
Deepseek
Nvidia’s stock tanks when word spreads about Deepseek, a Chinese GenAI startup that allegedly trained its LLMs using a fraction of the energy required by its Western rivals to train theirs.
EU AI Act
The first provisions of the EU’s sweeping AI Act come into force, banning dangerous applications of AI, such as for social manipulation.
Google ethics
Google removes from its AI principles a clause restricting its development of AI tech for weaponry or surveillance.
Guardian + OpenAI
A report emerges that the Guardian encouraged strikebreakers to use GenAI to write headlines, following a journalist walkout in December over the publication’s sale of the Observer to Tortoise Media.
Musicians’ copyright
Kate Bush, Damon Albarn and 1,000 other musicians put their names to a symbolic ‘silent album’ in protest at the UK government’s planned changes to copyright law, which would allow AI developers to use creative works by default unless artists opt out, rather than requiring them to opt in.
Oxford + OpenAI
Oxford University and OpenAI announce a five-year collaboration to provide students and faculty with access to “cutting-edge AI tools to enhance teaching, learning and research”.
Manus AI
The Chinese agentic-AI tool becomes publicly available, kicking off a wave of interest in autonomous AI agents and amassing millions of pre-registrations.
Tech giant AI spend
A Bloomberg analysis finds that US hyperscalers are on track to spend $371bn (£275bn) on computing resources and data centre infrastructure in 2025, a record high for annual AI investment.
Gemini 2.5 launch
Google Deepmind releases its most powerful model yet, a so-called thinking model that can solve highly complex problems by “reasoning” through multiple decision paths before responding.
Meta mass piracy
Writers attend a protest, arranged by The Society of Authors, outside Meta’s London office over allegations that the firm pirated reams of copyrighted creative work to train its AI models.
Google AI mode
Google searchers in the US are met with AI summaries at the top of their search results whether they like it or not. The feature would later become the default in India, the UK and 180 other countries.
Softbank AI hub
Masayoshi Son, Softbank’s chief executive, proposes a $1tn (£742bn) AI-infrastructure hub to be developed in conjunction with Taiwan Semiconductor Manufacturing Co in the burning deserts of Arizona.
Grok heils Hitler
Elon Musk comes under fire after X’s AI chatbot, Grok, is caught praising the Nazi dictator Adolf Hitler, following an update to its system.
GPT-5 launch
OpenAI hails the new GPT-5 as its most powerful model yet, but users say it’s sterile and impersonal.
Perplexity + Chrome
With US authorities calling for reforms at Google to address its alleged monopoly on online search, Perplexity, an AI-search platform, offers to buy the Chrome browser for $34.5bn (£25.6bn). A US federal judge would later find no legal justification to force the sale of the browser.
UK gov agentic AI
The UK’s embrace of AI gets personal, as the government proposes deploying AI “helpers” for everyday citizens by 2027. The AI agents would assist people with everything from life admin to career choices.
The race for AI skills
With AI supposedly coming for everyone’s jobs, from coders to poets, organisations are increasingly waking up to the need to upskill, well, everyone. For governments and enterprises of all sizes, preparing workers to use AI has become a priority in recent years.
For instance, Infosys, an IT-services firm, has introduced a layered upskilling programme, where employees receive training based on the level of AI expertise required for their job function. The programme classifies staff as AI “users”, “builders” or “masters”, and outlines different training modules for each group.
Meanwhile, in the Isle of Man, legislators are working to make AI training available to every citizen and business in the crown dependency. And, across the UK, schools, colleges, community hubs and workplaces will soon offer AI-skills courses as part of a government-led initiative to train 7.5 million UK workers (about one-fifth of the workforce) to use GenAI effectively. Keir Starmer, the prime minister, has put AI at the heart of the government’s industrial strategy.
The UK’s AI action plan
When the Labour Party came to power in 2024, the new government recruited Matt Clifford, an investor and AI evangelist, to develop a blueprint for making the UK an AI superpower. Thus began the creation of the ‘AI action plan’, which was unveiled by Downing Street at the start of the year.
The plan will “mainline AI into the veins” of the UK, according to Starmer. Its flagship policies include loosening planning restrictions around so-called AI growth zones (clusters of data centres) and developing sovereign data networks and computing infrastructure.
Starmer also said that the UK must carve its own path with AI regulation, especially given that it now has the freedom to do so, post-Brexit.
Deepseek shakes world markets
When the Chinese AI startup Deepseek launched its R1 ‘reasoning’ model, a system that is free to use and cheap to train, the international markets took notice. Tech leaders had long assumed that LLMs require enormous amounts of energy and infrastructure to train and run, but the Deepseek model proved them wrong. Ripples were felt across the sector – the technology-heavy Nasdaq index plummeted by 3% – but worst affected was America’s largest chipmaker, Nvidia, which shed nearly $600bn (£484bn) in market capitalisation.
It wasn’t long before the company, a bellwether for the broader AI and tech industry, recovered, however, as market observers noted that if AI could be made cheaper and more energy-efficient, demand for it would likely increase in the long term. Providers would simply seek to accomplish more with the technology rather than use less of it.
The Deepseek saga failed to undermine the business models of any major semiconductor firm, although shares in Broadcom and Taiwan Semiconductor Manufacturing Co also initially slid on the launch. It may, however, have ignited an AI arms race between global superpowers.
The market shock brought to light China’s growing competency in developing AI models. The timing was curious, given that Donald Trump was preparing to launch a trade war intended at least in part to shield the US tech industry from foreign competition.
AI: the regulatory picture
The EU became the first major trade bloc to legislate against potentially harmful uses of AI. Its AI Act became law in 2024 and will be applied and enforced in stages. The legislation aims to regulate transparency and accountability for AI systems and sets out levels of acceptable risk for AI applications based on societal, ethical and legal considerations.
Like GDPR, the regulations apply to any organisation operating in the EU, so it’s not only businesses located in the bloc that are impacted. EU member states will enforce the act and non-compliance carries hefty fines. The worst infringers will face penalties of €35m (£30m) or up to 7% of annual worldwide turnover, whichever is higher.
This year, the first and second stages of the act have come into force. The first bans unacceptable, unethical uses of AI, including for mass surveillance or discrimination. The second targets general-purpose AI platforms such as ChatGPT, establishing a broad range of compliance requirements for the companies that build and deploy LLMs.
AI providers will be required to assess the safety of their models and demonstrate that they have done so through risk assessments and testing. They’ll also have to retain technical documents on model architectures, make those records available to regulators and publicly disclose training data if authorities deem it necessary.
Competing models
GenAI fans are always eager to try out new iterations of ChatGPT. Before launching its latest model, GPT-5, OpenAI stoked the hype flames by highlighting the LLM’s ability to give “PhD-level” responses, as well as its substantially improved programming abilities for would-be vibe-coders.
But some of ChatGPT’s most loyal users say GPT-5 lacks personality and offers generally unimpressive answers to prompts. OpenAI’s chief executive, Sam Altman, claimed that a malfunctioning model-switching feature meant that users were unknowingly interacting with a “way dumber” version of the platform.
While AI advocates excitedly anticipate new models, some critics and sceptics have started to wonder how much further GenAI can be improved. If such tools have already gone as far as they can, the long-discussed AI bubble could be set to burst.
Nevertheless, OpenAI’s popularity does not appear to be waning. The UK’s tech secretary, Peter Kyle, has apparently discussed a deal with Altman to provide a ChatGPT account to every UK citizen.
