AI is devouring power and fiber optics: how telcos can stop being subcontractors to hyperscalers in the era of agent-based AI
Over the past 10–15 years, telcos have invested billions in networks, 4G/5G, and fiber optics, while others—OTT platforms, social media, and hyperscalers—have reaped the benefits of that growth. Data traffic grew at double-digit rates annually, while operators’ revenues grew by only a few percent. This is no longer just an industry anecdote: long-term data from GSMA Intelligence and analyses by STL Partners and McKinsey show this divergence in the form of a “scissors effect” that has been widening for a decade—and not in the operators’ favor.

Today, the industry faces another wave: the explosion of generative and agent-based AI. This time, the stakes are different, because AI is not just another app on the internet. It is a layer that is transforming how entire sectors operate—from banking and industry to the public sector. And it relies on resources that telcos already have: fiber optics, locations, power supply, critical networks.
So the question isn’t “Are operators relevant in the AI era?”, but rather: where and how can they finally move beyond the role of “internet pipes” and start capturing real value?
AI is transforming infrastructure: from centralized data centers to a distributed backbone
On one hand, we have massive data centers for training and running the largest models. These are dominated by hyperscalers and specialized colocation firms. On the other hand, the need for distributed infrastructure closer to the user is growing rapidly: inference, agent-based systems responding in near real-time, and edge computing.
An agent-based system supporting a factory, a logistics fleet, or a power grid cannot wait hundreds of milliseconds for a response from a distant cloud region. It needs computing power nearby: in the city, in the region, in the country. And on top of that:
high-capacity fiber-optic cables,
stable power and cooling,
a network designed for low latency and high availability.
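The latency argument above can be made concrete with a back-of-the-envelope calculation. The sketch below is a simplification under stated assumptions (signal speed in fiber of roughly 200,000 km/s, a flat 5 ms allowance per round trip for routing and queuing, and three request/response cycles per agent decision); real paths add switching, peering, and server-side overhead on top.

```python
# Back-of-the-envelope latency budget for an agent control loop.
# Assumptions (illustrative only): signal speed in fiber ~200,000 km/s,
# a flat 5 ms allowance per round trip for routing/queuing, and
# three request/response cycles per agent decision.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s in glass

def round_trip_ms(distance_km: float, overhead_ms: float = 5.0) -> float:
    """Propagation round-trip time plus a flat routing/queuing allowance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS + overhead_ms

def agent_step_ms(distance_km: float, inference_ms: float,
                  round_trips: int = 3) -> float:
    """One agent decision often chains several request/response cycles."""
    return round_trips * round_trip_ms(distance_km) + inference_ms

# Distant cloud region (~2,000 km) vs. a metro edge site (~50 km),
# with identical 40 ms of inference in both cases:
far = agent_step_ms(2_000, inference_ms=40)   # 3 x 25 ms network + 40 ms
near = agent_step_ms(50, inference_ms=40)     # 3 x 5.5 ms network + 40 ms
print(f"distant region: {far:.1f} ms, metro edge: {near:.1f} ms")
```

Even before peering and queuing are fully counted, the distant region burns most of a 100 ms reaction budget on the network alone; the metro site leaves that budget almost entirely for compute.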
These are precisely the elements that telcos have in their DNA. The difference is that until now, they have mainly sold them as "connectivity + SLA." In the AI era, they can become the foundation for entirely new revenue streams. The scale of the investments at stake is, in fact, substantial: the European Data Centre Association estimates cumulative spending on DC infrastructure in Europe between 2026 and 2031 at around €176 billion, and the European Commission speaks of the need for €1.2 trillion in digital infrastructure investment by 2040.
Why this wave is different from video and social media
During the video and social media boom, the pattern was simple:
telcos financed CAPEX—networks, radio upgrades, fiber optics,
customers generated more and more traffic,
the value was captured by the platforms sitting “on top”: streaming services, social media, apps.
The operator was left with growing data volumes, price pressure, and a weak negotiating position.
At this point, we must honestly voice the objection that any experienced telco CxO would make: “I heard exactly the same narrative ten years ago, when people said that telcos would become cloud providers. T-Systems, Orange Business Services, Verizon Cloud, AT&T Cloud—they all tried to enter that market, they all lost to the hyperscalers, and some of those businesses were sold or shut down.” That’s true. That wave, however, was about something else: a universal, global cloud where pure scale, a developer ecosystem, and global marketing win out—exactly the areas where telcos have no structural advantage.
AI is shifting the balance of power in several ways that didn’t exist in 2012:
it’s not just bandwidth that matters, but also the location and quality of infrastructure (latency, reliability, access to power),
the importance of regulation and sovereignty is growing—especially in Europe, where data and computations must often remain within a specific jurisdiction,
an increasing number of AI workloads are business-critical: errors or delays are no longer just an “annoyance,” but become operational and regulatory risks,
energy is becoming a real physical constraint—more on that in a moment.
In other words: the previous wave was about a market where scale and global reach trumped locality. This wave is about a market where locality, regulatory compliance, and access to physical resources (electricity, fiber, facilities) are becoming a structural advantage. This isn’t just rhetoric—it’s a different game.
Three pools of value worth fighting for
1. Fiber optics for new data centers
AI is driving an explosion of investment in data centers—both large campuses and smaller facilities in new locations. Each such DC requires:
multiple independent fiber-optic routes,
often dark fiber, which the customer can “light” themselves,
guaranteed bandwidth and low latency.
Operators with existing fiber-optic infrastructure are the natural providers of this component.
Instead of selling only traditional transmission services, they can:
offer dark fiber and complex redundancy scenarios,
design routes with multiple tenants in mind (data centers, enterprises, public institutions), increasing return on investment,
incorporate "smart connectivity" elements into their offerings—monitoring, segmentation, and quality guarantees for specific AI workloads.
This is not a new category technically—but its scale and strategic importance are changing.
2. Intelligent network services for AI, not just the "internet"
Customers who make intensive use of AI—banks, industry, the public sector—face two major challenges today:
very high costs of data transfer between clouds and data centers (egress, cross-region, cross-cloud),
the need to balance low-latency requirements with local data storage and processing regulations.
This opens up space for a new class of carrier services:
software-defined connectivity designed for AI workloads: the ability to precisely control routing, priority, and security for specific traffic streams (e.g., inference agents supporting critical systems),
solutions that reduce egress costs—local buffers, caches, optimized routing between the customer’s data center and the clouds,
"network as a service" with quality guarantees that are understandable to CIOs/CTOs (latency for inference X, availability for Y, compliance with regulations Z), rather than just a "99.9% SLA."
This requires a shift from pure “bandwidth” sales to software-defined services, with interfaces for cloud/AI teams on the client side.
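The egress point lends itself to a quick cost sketch. Every figure below (the per-GB egress price, the cache hit rate, the cost of serving from a carrier-side cache) is a hypothetical assumption chosen for illustration, not any provider's actual rate card.

```python
# Hypothetical egress economics for a customer moving data between
# clouds and data centers. All prices and rates are illustrative.

EGRESS_USD_PER_GB = 0.08      # assumed cross-cloud egress price
CACHE_HIT_RATE = 0.7          # assumed share of reads served locally
CACHE_COST_USD_PER_GB = 0.01  # assumed cost of the carrier-side cache

def monthly_cost_usd(tb_per_month: float, with_cache: bool) -> float:
    """Monthly transfer cost, with or without a local cache in the path."""
    gb = tb_per_month * 1_000
    if not with_cache:
        return gb * EGRESS_USD_PER_GB
    hits = gb * CACHE_HIT_RATE
    misses = gb - hits
    return hits * CACHE_COST_USD_PER_GB + misses * EGRESS_USD_PER_GB

baseline = monthly_cost_usd(500, with_cache=False)  # 500 TB/month
cached = monthly_cost_usd(500, with_cache=True)
print(f"straight egress: ${baseline:,.0f}/mo, "
      f"with carrier cache: ${cached:,.0f}/mo")
```

Under these assumed numbers the cache cuts the monthly bill by more than half, which is exactly the kind of CIO-legible saving a carrier service can be priced against.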
3. GPU-as-a-Service and edge compute – a new lease on life for old locations
Many telcos have assets that can be repurposed as local AI hubs:
former central offices with good power connections and cooling,
backbone network nodes in cities,
technical facilities in locations attractive from the perspective of latency and regulations.
And here we touch on the argument that is currently the most underestimated in the discussion about AI infrastructure, and at the same time the strongest for European telcos: energy is no longer a cost line but has become a structural constraint. In classic FLAP-D hubs (Frankfurt, London, Amsterdam, Paris, Dublin), waiting times for high-power connections currently range from 7 to 10 years (IEA data, 2025), and in Dublin alone, data centers consumed nearly 80% of the city’s electricity in 2023, with a de facto moratorium on new connections in effect for part of the recent period. In its June 2025 analysis, Ember clearly shows that by 2035, half of Europe’s DC capacity will be built outside the traditional FLAP-D hubs, and growth in the Nordic countries and Southern Europe will be twice as fast as in existing centers. According to EDCA’s 2026 survey, 67% of European data center operators consider power availability their biggest challenge—more than financing, more than regulations, and more than the equipment supply chain.
For telecoms, this means something fundamental: the dispersed footprint of old exchanges, technical facilities, and network nodes in regions where power is still available ceases to be a “legacy cost” and becomes one of the scarcest resources in Europe. This applies in particular to Poland and, more broadly, CEE, which—having been on the periphery of hyperscalers’ maps a decade ago—are now becoming the “Western East” of European AI infrastructure: with available power, relatively lower costs, a good transit location, and the domestic presence of operators who historically developed their assets for regulated sectors.
On this foundation, we can build:
small and medium-sized "compute islands" – clusters of GPUs/accelerators for inference,
GPU-as-a-Service offerings, in which the provider delivers not only the hardware but also the boundary conditions: location (e.g., "in country X, under law Y"), availability, security, and network integration,
edge computing for agent-based AI – e.g., for industry (production lines, robotics), mobility (fleets, ITS), and retail (recommendations, in-store analytics).
Here, the telco’s advantage lies not only in having the location and connectivity, but also in its experience in maintaining critical infrastructure 24/7. This is a different conversation than pure “GPU hosting.”
An ecosystem where telcos are not alone
The race for AI infrastructure is not an empty market. The players on the field include:
hyperscalers and colocation companies—building large data center campuses, often requiring dark fiber and edge support, but simultaneously entering areas traditionally dominated by operators,
GPU-as-a-Service providers and AI infrastructure startups—aggressively expanding their offerings for developers and tech companies,
energy companies and infrastructure funds—without their capital and megawatts, no viable data center can be built,
regulators and the public sector—defining the rules of the game: access to passive infrastructure, data regulations, and sovereignty requirements.
This means that telcos do not have the luxury of operating in isolation. They must learn to function within a network of partnerships, where they sometimes compete and sometimes collaborate with the same entities.
Challenges that cannot be swept under the rug
Entering the AI infrastructure space is not an “easy, additional revenue stream.” The main risks are:
Uncertainty regarding demand – it is difficult today to accurately predict how quickly and where demand for GPUs, edge computing, and intelligent network services will grow; it is easy to misjudge an investment or enter the market too late.
Fierce competition – hyperscalers have the advantage of scale, capital, and a developer ecosystem; many customers instinctively look to them first.
High CAPEX and short technology cycles – GPUs, cooling, new network standards – all of this changes rapidly; investing “10 years ahead” can be very risky.
Lack of software expertise – many telcos are strong in radio, transmission, and maintenance, but weaker in building cloud products, APIs, developer portals, and consultative sales.
If an operator enters this space “the old way”—like a classic hardware investment—there’s a good chance it will end up as a low-cost subcontractor.
What needs to change in the telecom industry so this opportunity doesn’t pass them by
1. A shift in approach to sales and product
Instead of selling “connectivity + SLA,” the operator must:
build teams that understand the business of hyperscalers and AI customers (specialization, not “one B2B team for everything”),
package the offering in a way that is understandable to cloud/AI teams (e.g., “an edge package for inference and agents in industry X”),
in GPUaaS, focus on consultative sales with a customer success approach, rather than just billing for resources.
This is a mental pivot from the "telco pipe" to the role of a digital services and managed infrastructure provider.
2. Partnerships instead of solo investments
The operator does not have to—and should not—do everything alone:
with hyperscalers, they can co-create local zones, edge regions, and joint offerings for enterprises,
with infrastructure funds—co-finance data centers, facility upgrades, and fiber-optic networks, sharing CAPEX and risk,
with integrators and software houses—deliver solutions to the customer ranging from “power and fiber” to a ready-to-use AI environment.
It is crucial to design structures (JVs, SPVs, framework agreements) in such a way that the operator retains a stake in the value, rather than merely serving as a construction contractor.
3. Discipline in selecting locations and projects
Instead of building “big boxes” in the hope that “someone will come,” a sensible telco:
maps its assets—where it has ready power, cooling, fiber optics, and an attractive location from the perspective of latency and regulations,
designs investments in phases—first a smaller footprint + the first customer, then expansion,
ensures that every project has not only a technical sponsor but also a concrete commercialization plan.
This resembles a VC investment portfolio more than a classic investment in a new backbone network node.
AI, sovereignty, and the natural role of telcos in Europe
In Europe, the topic of AI is inextricably linked to technological sovereignty: data localization, jurisdiction, sector-specific regulations, and security. Sectors such as public administration, healthcare, energy, defense, and banking cannot rely entirely on a “pure” global cloud.
Here, telecoms have several unique advantages:
they operate under national licenses and oversight,
they are part of critical infrastructure,
they have experience in meeting stringent security and business continuity requirements.
This makes them a natural partner for governments and large regulated players in building sovereign (or “sufficiently sovereign”) AI platforms. The setup can be simple:
the government/public sector as the anchor client,
a telco + local data center operator as the infrastructure provider,
a technology partner as the provider of the platform and agent layers.
On this foundation, sovereign agent systems can be built for government, healthcare, energy, or the military. A prerequisite is an active approach by the telco—moving beyond the mindset of “we’ll provide the connection; let someone else handle the rest.”
A minimal playbook for the operator’s management
If you’re on the board or in management at a telco, a sensible, simple starting plan might look like this:
1. Decide where you want to compete. Are you limiting yourself to connectivity, or do you also want to enter smart network services and GPUaaS/edge compute? Not making a decision is also a decision—usually the worst one.
2. Map out your assets and opportunities. Which locations have power and cooling? Where do you have dark fiber? In which countries/segments is there real demand for AI infrastructure (e.g., large banks, industry, the public sector)?
3. Select 2–3 pilot cases. Don’t try to build an empire right away. Focus on: one connectivity project for a new data center, one smart network project for a major AI client, and one GPUaaS/edge compute pilot in an area where you have a competitive advantage.
4. Build an AI infrastructure team and give it a mandate. Bring together people from networking, data centers, B2B, and cloud partnerships—give them a clear goal and the ability to make decisions faster than in traditional RFP processes.
5. Measure and learn. Look not only at revenue, but also at: time to market, customer acquisition cost, infrastructure utilization, and synergy with the core business.
One more point to keep in mind throughout this process: these infrastructure decisions—where to deploy the edge, what SLAs to set for inference connectivity, what latency thresholds you accept for critical agents—are not a separate pre-project that can be wrapped up and forgotten. In my implementation practice (CDF, Cognitive Deployment Framework—a methodology I’m developing for AI deployments in regulated sectors), I treat them as the gateway to the entire subsequent architecture, because they define what agents will be able to do in real time and what they won’t. An operator who designs their AI footprint with only bandwidth and cost in mind will end up purchasing infrastructure that, as early as next year, will block what the business wants to build on it.
AI really does devour power and fiber optics. The question is: will your telco be just another company footing the bill for this feast, or will it finally be one of those sitting at the table?
A micro-pattern from practice
In many agent-based AI implementations, network latency has a greater impact on decision quality than the choice of model itself. Some customers achieve better results by running smaller models closer to the data source than larger models in a distant cloud region. Intuition suggests that “bigger model = better result”—in agent-based systems that must respond in near real time, this intuition is usually wrong.
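The trade-off above can be sketched as a simple latency budget: an agent's effective decision time is its network round trips plus inference time. All numbers here are hypothetical assumptions for illustration, not measurements of any specific model or network.

```python
# Effective decision latency of an agent: network round trips + inference.
# All figures below are hypothetical assumptions, not benchmarks.

def decision_ms(rtt_ms: float, inference_ms: float,
                round_trips: int = 2) -> float:
    """Total wall-clock time for one agent decision over the network."""
    return round_trips * rtt_ms + inference_ms

# A large model in a distant region vs. a smaller model at a nearby edge:
large_remote = decision_ms(rtt_ms=45, inference_ms=120)  # 90 + 120 ms
small_local = decision_ms(rtt_ms=3, inference_ms=60)     # 6 + 60 ms

BUDGET_MS = 100  # assumed real-time reaction budget for the agent
print(large_remote, small_local, small_local <= BUDGET_MS)
```

Under these assumptions only the small local model stays inside the 100 ms reaction budget, even though it is "weaker" on paper; a better model that answers too late is, operationally, the worse model.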
This series breaks down the AI transformation in regulated sectors into seven layers. These posts appear weekly on the product blogs allclouds.pl — genesis-ai.app/blog and savant-ai.app/blog. The entire series is a record of what I’ve learned from working in regulated sectors—decisions that had to be made faster than caution allowed, mistakes that taught me more than successes, intuition honed in conversations with no script, and the will to build something that doesn’t yet exist.