20 NOV 2025

Beyond GPUs: The New Compute Stack Powering AI’s Next Decade

Written by David Kakanis

If 2023–2024 crowned foundation models, 2025 belongs to the machines they run on. At Web Summit 2025, hardware had gravitational pull. RankMyAI was there to track these developments firsthand. The message was blunt: AI advantage starts with the right chips, in the right places.

What Web Summit Made Clear: Hardware Is Strategy

This year’s Web Summit put hardware front and center. NVIDIA remained the ecosystem’s anchor, with Siemens deepening its Omniverse-based collaboration and Uber announcing work with NVIDIA across simulation, DGX Cloud, and a Level-4 robotaxi push—proof that compute partnerships have become board-level strategy, not just procurement. Meanwhile, infrastructure providers emphasized fully managed, private AI capacity as a complement to public cloud. (NVIDIA–Uber Announcement)

Beyond GPUs: Four Accelerators to Watch

a) Photonics (Arago): AI at the Speed of Light

Paris-based Arago is building a photonic AI accelerator that offloads the most repetitive tensor operations—such as matrix multiplications—into the optical domain. The promise: order-of-magnitude energy savings and extremely high throughput, wrapped in a software layer compatible with PyTorch and TensorFlow.
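To see why dense matrix multiplications are the natural candidate for optical offload, it helps to count where the FLOPs actually go. The sketch below tallies the compute in a single transformer-style feed-forward layer; the shapes and the layer structure are illustrative assumptions, not Arago specifics.

```python
# Back-of-the-envelope FLOP count for one transformer-style feed-forward layer,
# showing that dense matmuls dominate the arithmetic (and are thus the prime
# offload target for a photonic accelerator). Shapes are illustrative only.

def matmul_flops(m: int, k: int, n: int) -> int:
    """An (m x k) @ (k x n) matmul costs ~2*m*k*n FLOPs (one multiply + one add)."""
    return 2 * m * k * n

seq_len, d_model, d_ff = 2048, 4096, 16384

up_proj = matmul_flops(seq_len, d_model, d_ff)    # x @ W1
down_proj = matmul_flops(seq_len, d_ff, d_model)  # h @ W2
activation = seq_len * d_ff                       # elementwise nonlinearity, ~1 FLOP/elem

matmul_total = up_proj + down_proj
share = matmul_total / (matmul_total + activation)

print(f"matmul FLOPs: {matmul_total / 1e9:.1f} GFLOPs")
print(f"matmul share of layer compute: {share:.4%}")
```

Even with generous accounting for the elementwise activation, well over 99% of the layer's arithmetic sits in the two matmuls, which is why moving just that kernel into the optical domain can shift the whole energy profile.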

Why photonics now? As interconnects, switches, and even compute cores transition from electrical to optical, the boundary between “communication” and “computation” dissolves. This opens space for Optical Processing Units (OPUs) to sit alongside CPUs and GPUs in the data-center stack. Europe’s photonics base—including work from LightOn and others—shows that photonics is moving from research labs into production systems. (LightOn Photonic Coprocessor)

b) Novel Digital Inference (Groq)

Groq continues to push deterministic, low-latency inference through its LPU architecture, which optimizes for tokens per second rather than pure FLOPs. In RankMyAI’s AI Chip Development – Overall ranking (2025), Groq leads—reflecting growing developer demand for ultra-high-throughput inference workloads. (RankMyAI)

c) Wafer-Scale Compute & Sovereign AI (Cerebras)

Cerebras Systems continues to scale its wafer-scale WSE-3 engine, the chip at the heart of its CS-3 systems and the Condor Galaxy supercomputers built with G42. These systems have trained models such as Jais, an Arabic LLM developed on Condor Galaxy 1 (CG-1).

In November 2025, Cerebras launched "Cerebras for Nations," a program targeting sovereign AI initiatives that enables countries to build and govern their own large-scale AI compute. The company also announced a 100 MW data center in Guyana, located near the Gas-to-Energy project, highlighting the tight coupling of compute scale and energy strategy. Cerebras' momentum includes U.S. defense work, European deployments, and an $8.1B valuation reported in 2025.

d) Quantum On-Ramps (IonQ)

While quantum hardware is not a replacement for classical accelerators, IonQ is demonstrating hybrid quantum–AI pipelines—such as quantum-enhanced GANs and quantum layers that fine-tune LLMs. IonQ showcased these capabilities at Web Summit 2025. The message: quantum will augment, not replace, classical AI, particularly in optimization and simulation workloads. (IonQ)

The Economics: Performance per Watt Is the New KPI

Across the stack, one trend is consistent: the next decade of AI performance will be constrained less by raw FLOPs and more by energy budgets. Photonics players like Arago target roughly 10× efficiency gains; wafer-scale systems minimize memory movement; inference-first architectures cut latency; and quantum pipelines target inherently optimization-heavy tasks. Cerebras likewise claims inference roughly 10× more efficient than GPU alternatives, attributing the gain to its wafer-scale memory and compute architecture.
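What "performance per watt as the KPI" means in practice can be made concrete with a little arithmetic. The sketch below compares two hypothetical inference systems at equal throughput; the numbers are made up for illustration and are not vendor benchmarks.

```python
# Performance per watt as the comparison metric, with illustrative (made-up)
# numbers: a 10x perf/W advantage translates directly into 10x less energy
# per million tokens served.

def tokens_per_joule(tokens_per_sec: float, watts: float) -> float:
    """Energy efficiency: tokens produced per joule consumed."""
    return tokens_per_sec / watts

def kwh_per_million_tokens(tokens_per_sec: float, watts: float) -> float:
    """Energy to serve 1M tokens, in kWh (1 kWh = 3.6e6 J)."""
    seconds = 1_000_000 / tokens_per_sec
    return seconds * watts / 3.6e6

# Two hypothetical systems at identical throughput, differing only in power draw.
baseline  = dict(tokens_per_sec=5_000, watts=10_000)  # GPU-class cluster
efficient = dict(tokens_per_sec=5_000, watts=1_000)   # claimed 10x perf/W design

for name, sys in [("baseline", baseline), ("efficient", efficient)]:
    print(f"{name}: {tokens_per_joule(**sys):.2f} tokens/J, "
          f"{kwh_per_million_tokens(**sys):.3f} kWh per 1M tokens")
```

At data-center scale, that per-token energy gap compounds into operating cost, cooling load, and how much capacity a fixed grid connection can host, which is why efficiency claims have become the headline metric.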

Why it matters: power is the new constraint. Major analyses (IEA, S&P Global, Goldman Sachs) project that global data-center electricity consumption could double by 2030, with AI as the primary driver, reaching roughly 3–4% of global electricity demand. Efficiency now compounds into economics, speed, and sustainability. (Goldman Sachs, S&P Global / IEA)

Partnerships Are the New Platforms

A major takeaway from Web Summit 2025: partnerships drive the AI ecosystem. Enterprises pair with chip vendors; chip vendors partner with infrastructure providers; and nations partner with hardware platforms to build sovereign AI capacity. AI is no longer built by single vendors—it is architected through interconnected alliances.

Where Our Rankings Point

RankMyAI’s AI Chip Development – Overall ranking (late 2025) reflects this shake-up: Groq leads the field; Cerebras sits firmly in the top cohort; and photonics and quantum entrants are rapidly rising as funding grows and pilots become production workloads.

A decade from now, the AI data-center rack will look unfamiliar: CPUs for control, GPUs for general tensor math, OPUs for optical dense compute, LPUs/WSEs for ultra-low-latency or wafer-scale execution, and QPUs for quantum-enhanced optimization. The winners will be those who match workloads to the right silicon—and stitch it together with the right partners.

Sources

  • NVIDIA & Uber Robotaxi Partnership: https://nvidianews.nvidia.com/news/nvidia-uber-robotaxi
  • Goldman Sachs – AI Power Demand Forecast: https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand
  • S&P Global / IEA Data-Center Power Forecast: https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/041025-global-data-center-power-demand-to-double-by-2030-on-ai-surge-iea
  • LightOn Photonic Coprocessor: https://www.lighton.ai/lighton-blogs/lighton-photonic-coprocessor-integrated-into-european-ai-supercomputer
  • RankMyAI – AI Chip Development Rankings: https://www.rankmyai.com
  • IonQ – Quantum Computing for AI: https://ionq.com
  • Cerebras Systems: Corporate announcements, G42 partnership, Condor Galaxy updates, Guyana energy-linked compute strategy (Reuters & Cerebras press releases)


© 2025 RankmyAI is licensed under CC BY 4.0
and is part of HvA.
