DataCentreNews UK - Specialist news for cloud & data centre decision-makers

AMD & Hammer back CPU-first AI amid grid constraints

Thu, 26th Mar 2026

AMD and Hammer Distribution have introduced a CPU-first infrastructure strategy for AI deployments in the UK, aimed at data centres facing limits on grid connections and electricity use.

The strategy centres on the idea that processors, not just graphics chips, should play a larger role in AI systems as businesses shift from training models to running inference workloads. This is particularly relevant for organisations trying to expand AI projects within fixed power budgets and without waiting for new grid capacity.

The approach comes as UK data centre operators face tighter scrutiny over access to electricity. Government policy papers and regulatory changes from Ofgem and the National Energy System Operator have identified grid availability as a major barrier to further expansion. Recent reforms are intended to remove stalled schemes from the connection queue and prioritise projects considered ready and strategically aligned.

Against that backdrop, Hammer and AMD are positioning CPU deployment as an economic and operational response to power constraints. They argue that making better use of general-purpose processors can cut the electricity needs of some AI projects and allow more systems to run within existing infrastructure.

Inference shift

The focus is on inference, the stage at which trained models answer queries, summarise documents or support search. Many enterprise tasks do not require heavy use of accelerators and can instead run on AMD EPYC processors, particularly where latency demands are less strict.

Guidance cited by the two companies suggests CPU-led inference can suit models of up to 20 billion parameters in some cases. The workloads highlighted include document workflows, retrieval-augmented generation and summarisation.
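A rough back-of-the-envelope calculation shows why models in that size range are plausible candidates for CPU servers: the memory footprint of the weights depends on parameter count and numeric precision, and at lower precisions a 20-billion-parameter model fits comfortably within the RAM of a typical server. The sketch below is illustrative only; the precisions and the weights-only approximation are assumptions, not figures from AMD or Hammer.

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory footprint of model weights alone.

    Ignores KV cache, activations and runtime overhead, so real
    requirements are somewhat higher.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 20-billion-parameter model at common precisions:
for label, bytes_per in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_gb(20, bytes_per):.0f} GB of weights")
# fp16 → ~40 GB, int8 → ~20 GB, int4 → ~10 GB
```

Even the fp16 figure sits well within the hundreds of gigabytes of memory a two-socket server can carry, which is one reason CPU-led inference is argued to be viable for this class of model.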

This reflects a broader debate in the AI infrastructure market over where expensive accelerators are essential and where they can be used more selectively. Operators are increasingly weighing the cost of deploying graphics processing units against electricity consumption, utilisation rates and connection delays.

Hammer argues that inefficient system design can leave costly hardware underused, especially when processors, memory and accelerators are poorly matched. In its view, central processors play a critical role in feeding data through AI systems and can determine whether a deployment runs smoothly or is held back by bottlenecks.

"The next phase of AI isn't constrained by model ambition so much as power availability and system efficiency," said Adam Blackwell, Director of AI, Server and Advanced Technology at Hammer Distribution.

"By optimizing the CPU's role in the AI pipeline, from data ingest to inference, we are enabling our partners to deliver viable AI solutions that fit within today's strict European energy reporting and power constraints," Blackwell said.

Regulatory pressure

The commercial case is being shaped not only by grid queues but also by energy reporting rules. The European Commission's Energy Efficiency Directive introduces mandatory reporting for data centre performance, increasing pressure on operators to show how effectively electricity is being used.

For infrastructure buyers, power efficiency is now a board-level issue as much as a technical one. Data centre operators and enterprise IT teams must consider whether a system's output justifies its energy use, especially as regulators and customers pay closer attention to environmental performance.

Hammer and AMD frame this as a question of "useful work per watt", arguing that hardware choices will increasingly be judged through that lens. In their view, reducing the number of accelerators in workloads that can tolerate CPU execution may lower total cost of ownership while improving compliance with reporting expectations.
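The "useful work per watt" metric can be made concrete by expressing throughput per unit of power, which reduces to work per joule of energy. The sketch below uses tokens per second as the unit of useful work; all throughput and power figures are hypothetical placeholders for comparison, not measured benchmarks from either company.

```python
def tokens_per_joule(tokens_per_second: float, watts: float) -> float:
    """'Useful work per watt': tokens generated per joule of energy.

    tokens/s divided by watts (joules/s) yields tokens per joule.
    """
    return tokens_per_second / watts

# Hypothetical, illustrative figures only:
cpu_node = tokens_per_joule(tokens_per_second=50, watts=350)    # CPU-only server
gpu_node = tokens_per_joule(tokens_per_second=600, watts=2800)  # accelerator node
print(f"CPU node: {cpu_node:.3f} tokens/J")
print(f"GPU node: {gpu_node:.3f} tokens/J")
```

On this kind of comparison, whichever configuration delivers more tokens per joule wins under a fixed power budget, regardless of its absolute throughput; that is the lens through which the two companies argue hardware choices will increasingly be judged.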

Grid constraints

The timing also reflects recent changes in the UK energy system. Under the "First Ready, First Connected" reforms introduced by Ofgem and NESO, projects deemed non-viable are being removed from the queue to free capacity for developments more likely to proceed.

That creates an incentive for data centre operators to present projects with smaller power demands and clearer readiness. A design that requires fewer upgrades to local electricity infrastructure may stand a better chance of moving forward than one built around a larger initial power request.

The pressure is not confined to the UK. In Ireland, regulators have also tightened conditions for new data centre connections, including requirements around onsite generation and renewable supply, adding to a broader European pattern of tougher oversight for electricity-intensive digital infrastructure.

Hammer, formerly Exertis Enterprise, has operated in storage and server distribution for more than 30 years and works with channel partners across the UK and Europe. AMD is using the partnership to push its EPYC processor range more directly into the debate over how AI systems should be built when power, rather than chip supply alone, is the limiting factor.

At the core of the argument is the idea that, for some businesses, deploying more AI may depend less on securing additional megawatts than on reducing how much electricity each system uses in the first place.