Some of the most critical lessons from DeepSeek’s playbook – efficiency, optimization, and democratization – went largely unexamined by the broader industry.
The AI Action Summit 2025 wrapped up earlier this month in Paris, bringing together the who’s who of tech, AI, and global political power. To no one’s surprise, the key highlight was global disunity – except, of course, when it came to downplaying energy and security concerns and reaffirming the relentless pursuit of AI dominance at any cost.
Ahead of the summit, Peter Kyle, the UK's Technology Secretary, emphasized the need for "Western, liberal, democratic" countries to spearhead AI advancements, so they could reinforce their democratic principles and liberal values, rather than leaving the playing field open for authoritarian regimes to impose theirs.
Until recently, the West held an undisputed lead – that is, until DeepSeek catapulted to the top of the charts. And while DeepSeek’s founder did not attend the summit, the AI dark horse was anything but absent from the conversation – though perhaps not entirely for the reasons one might think.
Some of the most critical lessons from DeepSeek’s playbook – efficiency, optimization, and democratization – went largely unexamined by the broader industry. This blind spot around inclusivity and sustainability could prove a major missed opportunity for the West.
AI workloads are known to be resource-intensive, but it can be difficult for users to grasp the true scale of consumption. Experts believe AI servers could soon consume as much as 134 terawatt hours of electricity a year – equivalent to the annual power consumption of a large country. Water is another critical resource these servers need to stay cool and function optimally. Water consumption in Virginia’s data center alley alone has surged to at least 1.85 billion gallons a year, enough to cover the basic water needs of about 200,000 people. These figures expose the insatiable and unsustainable rate at which AI is devouring critical resources.
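A quick back-of-envelope check makes these numbers tangible. All inputs come from the figures quoted above; the only added constant is the standard litres-per-gallon conversion.

```python
# Sanity-check the water figure quoted above: 1.85 billion gallons a year
# serving the basic needs of about 200,000 people. What per-person daily
# budget does that imply?

GALLONS_TO_LITRES = 3.785          # standard US gallon conversion

va_water_gallons = 1.85e9          # Virginia data center alley, per year
people_supported = 200_000         # people the text says this could cover

litres_per_person_day = (
    va_water_gallons * GALLONS_TO_LITRES / people_supported / 365
)
print(f"{litres_per_person_day:.0f} L/person/day")  # roughly 96 L/day
```

An implied budget of roughly 96 litres per person per day is consistent with commonly cited basic-needs estimates, so the claim is internally coherent.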
With multi-billion-dollar giants ready to spend whatever it takes for AI dominance, cost is hardly a barrier. However, this unchecked resource consumption will inevitably catch up with the AI industry. There’s only so much electricity we can generate and only so much water to go around.
Tesla CEO and xAI founder Elon Musk warned last year that there soon won’t be enough electricity to meet demand. Meta CEO Mark Zuckerberg also seems to agree that tech giants are more likely to run out of electricity before they run out of money. At some point, all big players will have to figure out algorithmic efficiency and optimization strategies to bring consumption down to a sustainable level.
It’s easy to take resources like water and electricity for granted when they are abundant. What’s harder to overlook and even harder to overcome is the specialized hardware and infrastructure capacity that even AI giants can’t seem to produce or purchase in sufficient quantities. AWS CEO Andy Jassy highlighted capacity constraints during Amazon’s latest earnings call, citing shortages of AI chips, server components, and even insufficient energy supply as key factors limiting the cloud division’s growth amid surging AI demand.
This underscores broader industry concerns as representatives from Microsoft and Google have also voiced similar challenges. The key takeaway here is that even they can't scale production and capacity exponentially forever.
Capacity constraints are haunting behemoths like AWS, despite its $100 billion expenditure plans for 2025. Similarly, Meta has allocated $60–65 billion for AI expansion. The Stargate Project, an AI infrastructure joint venture between OpenAI, SoftBank, and Oracle, launched in January with an initial investment of $100 billion. While enterprises can afford to spend billions on massive data centers, specialized chips, and hardware components, the growing emphasis on big AI spend risks pushing SMBs and smaller innovators out of the AI loop.
Exorbitant costs and capacity constraints have long been a major barrier to the democratization of AI. However, DeepSeek challenged that perception with its latest releases, built with a fraction of the budget and specialized chips of its rivals. AI leaders and aspirants alike can take note of DeepSeek’s dedicated focus on algorithmic efficiency and its innovative approach to making the most of the resources it had.
DeepSeek’s V3 model is estimated to have cost around $5.5 million to train, compared with the $100 million training costs claimed for OpenAI’s models. While the exact figures are debatable, DeepSeek’s significantly lower costs stem from its relentless focus on highly efficient algorithms and its use of PTX (NVIDIA’s intermediate assembly language for GPUs), which provides fine-grained control over NVIDIA GPUs along with faster execution times.
In addition, both the V3 and R1 models process high-quality, targeted data rather than vast, generic datasets. Using multiple smaller, dedicated, and more efficient models instead of one large generic model allows companies to get more accurate results while running on less powerful hardware. This lowers reliance on high-end, specialized hardware and maximizes the use of resources users already have – an approach perfectly true to emma’s core philosophy. This efficiency and usage optimization has put DeepSeek in line with much larger and more established Western competitors.
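The smaller-specialized-models idea can be sketched as a simple router that sends each request to the cheapest model that covers it. The model names and keyword rules below are hypothetical placeholders; production systems use learned classifiers or gating networks rather than keyword matching.

```python
# Minimal sketch of routing requests to small task-specific models instead of
# always invoking one large generalist. All model names are hypothetical.

SPECIALISTS = {
    "code": "code-model-7b",     # hypothetical small coding model
    "math": "math-model-7b",     # hypothetical small math model
}
GENERALIST = "general-model-70b" # hypothetical large fallback model

def route(prompt: str) -> str:
    """Pick the cheapest model likely to handle the request well."""
    text = prompt.lower()
    if "def " in text or "compile" in text:
        return SPECIALISTS["code"]
    if any(tok in text for tok in ("integral", "solve", "equation")):
        return SPECIALISTS["math"]
    # Fall back to the expensive generalist only when no specialist fits.
    return GENERALIST

print(route("solve this equation for x"))  # math-model-7b
```

Because most traffic lands on the small models, the bulk of requests can run on far less powerful hardware, which is exactly the efficiency lever described above.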
At emma, strategic optimization, best-of-breed freedom and resource democratization are at the heart of every solution we build. AI and the infrastructure that powers it should be accessible to businesses of all sizes, and that’s precisely what the emma cloud management platform helps with. It provides the ultimate freedom to choose the most suitable cloud infrastructure and environments for AI workloads, across regions, continents, and providers. To ensure sustainability, the platform highlights unused spot instances, idle resources, and underutilized capacity, allowing companies to rightsize and strategically optimize infrastructure utilization.
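The rightsizing logic described above can be illustrated with a small sketch. This is not emma’s actual API; the fleet data and utilization thresholds are hypothetical, chosen only to show the shape of the analysis.

```python
# Illustrative rightsizing pass over a hypothetical fleet snapshot.
# Thresholds are placeholders, not emma's actual rules.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    cpu_util: float    # average CPU utilization, 0.0-1.0
    hourly_cost: float # USD per hour

IDLE, UNDERUSED = 0.05, 0.30  # assumed cutoffs for this sketch

def rightsizing_report(fleet):
    """Flag idle instances for termination and underused ones for downsizing."""
    report = {}
    for inst in fleet:
        if inst.cpu_util < IDLE:
            report[inst.name] = "terminate (idle)"
        elif inst.cpu_util < UNDERUSED:
            report[inst.name] = "downsize (underutilized)"
    return report

fleet = [
    Instance("web-1", 0.62, 0.40),   # healthy, left alone
    Instance("batch-7", 0.02, 1.20), # idle
    Instance("gpu-3", 0.18, 3.10),   # underutilized
]
print(rightsizing_report(fleet))
```

Surfacing this kind of report continuously is what lets teams reclaim idle spend before it compounds, rather than discovering it at the end of a billing cycle.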
Future technological and AI leadership will stem from inclusivity and strategy, not deep pockets, walled gardens, or reckless consumption. As capacity demands rise and production maxes out, everyone will have to prioritize optimization and sustainability over unchecked expansion and overutilization. Algorithmic efficiency, optimal resource consumption, and reduced wastage are essential for the long-term viability and advancement of AI. The ultimate winners in this AI race will be those who focus on the bigger picture rather than chasing fleeting successes at exorbitant cost.