
Data centres are the backbone of modern computing, housing the world’s servers and data processing, but this comes at a cost: they consume huge amounts of electricity. Data centres account for roughly 3% of global electricity use, a figure expected to rise to 8% by 2030.
Globally, this amounts to an estimated 200 terawatt hours (TWh) annually – more than the total energy consumption of some entire countries. Much of this is due to the demands of AI.
A single ChatGPT query, for example, requires 2.9 watt-hours of electricity, compared with 0.3 watt-hours for an average Google search. Goldman Sachs estimates that AI will soon add another 200 TWh to global data centre power consumption, doubling the current energy demand.
Reducing the energy consumption of data centres is therefore vital for achieving national net-zero targets and reducing business costs.
Supermicro is a market leader in designing and delivering the components for data centres, offering servers designed to use less power while exceeding standard performance levels.
In the relentless pursuit of data centre efficiency, every decision – from server selection to cooling infrastructure – must balance peak performance with environmental stewardship, driving both operational excellence and sustainability goals.
Two of the largest energy demands in a data centre come from its processors. Central processing units (CPUs) execute program instructions and handle general computing, but it is graphics processing units (GPUs) that dominate modern AI power demands. Each GPU can consume over 1,000 watts – more than double a CPU’s maximum draw of 500 watts.
The impact is multiplied by the architecture of AI-optimised systems, which typically contain eight GPUs for every two CPUs. This means a single AI server can have a GPU power footprint that’s more than eight times larger than its CPU requirements. When multiplied across thousands of servers in large AI data centres, GPU power consumption becomes the dominant factor in both energy use and cooling demands.
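The arithmetic behind that footprint is simple to sketch. A minimal illustration, using the nominal figures quoted above (actual draw varies by model and workload):

```python
# Rough power-budget sketch for a typical AI server, using the
# figures cited in the text: ~1,000 W per GPU, ~500 W per CPU,
# eight GPUs for every two CPUs. Nominal maximums, not measured data.
GPU_WATTS = 1000
CPU_WATTS = 500
GPUS_PER_SERVER = 8
CPUS_PER_SERVER = 2

gpu_power = GPU_WATTS * GPUS_PER_SERVER   # 8,000 W of GPU load
cpu_power = CPU_WATTS * CPUS_PER_SERVER   # 1,000 W of CPU load

ratio = gpu_power / cpu_power
print(f"GPU footprint is {ratio:.0f}x the CPU footprint")  # 8x
```

Scaled across thousands of such servers, that 8:1 ratio is why GPU load, not CPU load, now sets the energy and cooling budget.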
Consider the evolution from traditional data centres to today’s AI computing facilities – while CPU requirements have certainly grown, it’s the massive deployment of power-hungry GPUs that is driving the unprecedented surge in energy consumption and thermal management challenges.
GPUs are computing hardware originally designed to render high-quality images and video efficiently. First used for 3D games, they have evolved into the engine of AI thanks to their ability to perform many operations in parallel.
Therefore, it’s not hard to see how the requirements for both CPUs and GPUs have skyrocketed in recent years.
The main energy demand, however, doesn’t simply come from plugging more and more servers into the mains. It comes from keeping them cool.
As computational demands soar, traditional air cooling methods – relying on fans, heatsinks and HVAC systems – are reaching their limits in data centres. The challenge was made clear when even tech giants Google and Oracle suffered server downtime during Europe’s 2022 heatwave, highlighting the growing thermal management crisis facing the industry.
“Air cooling, while effective in the past, is reaching its physical limits”, explains Supermicro’s Upadhyayula. “There’s only so much air can do to dissipate the heat generated by modern hardware. To continue using air cooling with today’s high-power components, you would need larger fans and increased system space to effectively circulate and expel hot air.”
While increasing the dimensions of systems could help, it conflicts with data-centre goals of optimising rack space and density, he says. “These opposing factors make liquid cooling more attractive since it can maintain or even reduce system size while achieving higher thermal efficiency.”
Over the past decade, Supermicro has pioneered liquid cooling technology, revolutionising data-centre thermal management with a system in which coolant circulates through specialised cold plates mounted on critical components, extracting heat efficiently without the liquid ever touching the electronics.
This addresses the mounting thermal challenges posed by AI workloads. The technology has won over industry experts thanks to its ability to deliver superior cooling performance while maintaining compact system footprints – a crucial advantage in space-constrained environments.
The superior efficiency of liquid cooling systems is undeniable, but there are barriers to adoption, including resistance from data-centre operators wary of the unfamiliar technology and infrastructure requirements.
Still, liquid cooling systems permit greater computing density and significantly reduce an organisation’s carbon footprint.
For these reasons, Upadhyayula expects to see “more customers gravitating towards liquid cooling to maintain their current data-centre footprints while achieving higher system performance”.
Achieving this requires a bespoke partnership approach rather than an off-the-shelf sales transaction.
“We invite customers to our engineering test facilities, where we work collaboratively to identify and resolve their challenges”, says Upadhyayula. “By prioritising customer-specific problem-solving over rigid metrics, we aim to deliver solutions that truly address their operational needs.”
Supermicro maintains complete control over its cooling solutions through comprehensive in-house design and assembly operations. This vertically integrated approach ensures exacting quality standards and enables rapid innovation through direct oversight of every stage from concept to completion.
“Our customer-centric approach involves understanding workloads and tailoring solutions to their specific needs”, explains Upadhyayula. “This includes determining whether air cooling, liquid cooling, or another strategy is the most energy-efficient and effective option. Each system is designed with certified components, ensuring reliability and performance.”
Supermicro’s resource-saving architecture is reshaping sustainable computing: by allowing targeted component upgrades, it eliminates the need for complete system overhauls, maintaining cutting-edge performance while dramatically reducing electronic waste and operational costs.
The solution can help CIOs gain the flexibility to modernise systems piece by piece, while CFOs benefit from reduced capital spending and lower operating costs, ultimately maximising return on infrastructure investments.
The data centres of the near future will prioritise enhancing efficiency, with liquid cooling and resource-saving architectures playing pivotal roles.
Innovations in cooling technologies can help to turn AI from a drain into an enabler – AI-powered management tools can optimise cooling, workload distribution and energy usage in real time.
Automation will further enhance operational efficiency, reducing the need for human intervention and improving reliability, while enabling higher rack density and minimising the space data centres require.
While the exact form of future data centres will depend on these advances, the overarching goal remains clear: to deliver higher performance with lower environmental impact, paving the way for a sustainable and greener digital future.
In the quest for perfect energy efficiency, the industry pursues the goal of a 1.0 power usage effectiveness (PUE) rating, representing zero energy waste.
PUE measures how efficiently a data centre uses energy by dividing total facility energy consumption by IT equipment energy consumption – the lower the rating, the better the efficiency.
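As a concrete illustration of that ratio (the facility figures below are invented for the example, not measurements from any real data centre):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy.

    A PUE of 1.0 would mean every watt of facility power reaches the
    IT equipment, with nothing spent on cooling, lighting or losses.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,350 kWh drawn in total, of which
# 1,000 kWh reaches the IT equipment.
print(round(pue(1350, 1000), 2))  # 1.35
```

A PUE of 1.35 means the facility draws 35% more energy than its computing load alone requires, with the overhead going mostly to cooling.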
While the current industry average stands at 1.35, and 1.09 was once viewed as near-optimal, Upadhyayula says pushing efficiency closer to 1.0 remains a significant challenge.
“It’s akin to railroad tracks appearing to converge in the distance. Although they never truly meet, they guide us toward a shared direction, optimising performance and efficiency simultaneously,” he says.
Supermicro will keep moving the data centre industry in the right direction towards a greener, cleaner future.
The data-centre scorecard: The four pillars of sustainability success
In today’s digital landscape, CIOs and infrastructure leaders face the dual challenge of delivering exceptional performance while meeting rigorous environmental standards.
Success in modern data centre management requires a balanced approach that optimises both operational efficiency and sustainability. Here’s how to evaluate your strategy across four essential pillars.
Intelligent power management
The cornerstone of sustainable data centre operations lies in smart power utilisation. Advanced power management systems now enable real-time adaptation to workload fluctuations, ensuring energy is deployed precisely where and when needed. This dynamic approach prevents unnecessary component strain while significantly reducing energy waste.
Success in this pillar means demonstrating measurable reductions in power consumption without compromising performance. Look for systems that can provide detailed analytics on power usage effectiveness (PUE) and automatically adjust to varying demands. The ability to handle processing spikes efficiently while maintaining optimal performance during quieter periods is crucial for both sustainability and operational excellence.
Efficient cooling systems
Cooling efficiency represents a critical metric in sustainable data centre operations. With the increasing density of computing resources, particularly in AI and high-performance workloads, traditional cooling methods may no longer suffice. Success in this area requires implementing advanced cooling technologies to match specific needs.
A cooling strategy should be evaluated on its ability to maintain optimal operating temperatures while minimising energy consumption. Liquid cooling solutions, for instance, can offer superior heat dissipation for high-density configurations, while optimised airflow designs might suffice for lower-demand applications. The key is selecting solutions that scale with the organisation’s specific needs while maintaining efficiency.
Renewable-energy integration
The transition to renewable energy sources is a defining characteristic of future-ready data centres. Success in this pillar involves more than just purchasing renewable-energy credits – it requires a comprehensive strategy for integrating sustainable power sources into existing operations.
Measure success through the proportion of operations powered by renewables and the reduction in carbon emissions. Consider both centralised and distributed approaches: larger facilities might benefit from direct access to hydroelectric or solar power, while edge locations could leverage local renewable resources. This hybrid approach ensures sustainable power delivery across the entire infrastructure estate.
Advanced thermal design
The fourth pillar focuses on sophisticated thermal management through intentional design. Rather than treating cooling as an afterthought, successful data centres integrate thermal considerations from the ground up. This proactive approach encompasses everything from component placement to airflow optimisation.
Measure success through metrics such as thermal efficiency, component longevity and cooling system performance. Look for designs that minimise hot spots, optimise air or liquid cooling pathways and reduce the overall energy required for thermal management. The most effective solutions will demonstrate improved component life spans while maintaining or enhancing performance capabilities.
To effectively measure success across these pillars, establish clear metrics and regular monitoring procedures. Key performance indicators should include:
- Energy-efficiency ratios
- Carbon-footprint measurements
- Component performance and longevity statistics
- Cooling-system effectiveness
- Renewable-energy utilisation rates
Regularly assessing these metrics helps to identify areas for improvement and validates the effectiveness of sustainable initiatives. The most successful strategies will show continuous improvement across all four pillars while maintaining or enhancing operational performance.
By evaluating your data-centre strategy against these pillars, you can ensure that your infrastructure not only meets current sustainability requirements but is also prepared for future challenges.
Remember that success in sustainable data-centre operations isn’t just about meeting environmental targets – it’s about creating resilient, efficient and future-proof infrastructure that delivers both business and environmental value.
For more information about how NVIDIA’s fully accelerated computing platform has provided leaps in AI training and inference, visit nvidia.com
