
When Networks Become Native to AI

  • Writer: Christos Makiyama
  • Feb 14
  • 2 min read

I started my career in the mid-90s, when the Internet was emerging.


I worked on ASICs powering high-speed packet and optical networks. I evangelized xDSL in Japan, turning existing telephone lines into broadband infrastructure. I helped introduce Voice over IP when carrying telephone calls over packet networks was still considered radical.


Those ideas were not obvious. They required convincing telecom equipment vendors and operators that embracing architectural change was essential to capture the Internet opportunity.


What I learned then was simple:


A killer application reshapes the network.


The Internet forced bandwidth scaling, service convergence, mobility, and economic efficiency. Networks evolved because the application demanded it.


Today we are at another inflection point.

AI is not just another workload. Its impact may exceed that of the Internet.

Capital is flowing into hyperscale AI datacenters — massive training-optimized facilities filled with accelerators, consuming gigawatts of power. That focus is visible and measurable.


But AI will not live only inside hyperscale datacenters.


Inference will dominate. And inference is latency-sensitive, interactive, distributed, and economically marginal per request.


This changes the structural requirements of connectivity.

Inference will move from centralized facilities to regional nodes, metro layers, the edge, and devices. Compute placement and routing will matter as much as model size.
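Why placement matters can be seen with back-of-envelope arithmetic: propagation delay alone sets a latency floor that no amount of compute can buy back. A minimal sketch, where the placement distances are illustrative assumptions and light in fiber is taken at roughly 200,000 km/s:

```python
# Back-of-envelope: propagation-delay floor for an inference round trip.
# Light in fiber travels at roughly 2/3 the speed of light in vacuum,
# i.e. about 200,000 km/s -- so ~200 km of fiber per millisecond, one way.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber,
    ignoring queuing, serialization, and processing time."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical placements for the same inference request:
placements = [
    ("on-device", 0),
    ("metro edge", 50),
    ("regional node", 500),
    ("distant hyperscale DC", 2500),
]
for label, km in placements:
    print(f"{label:>22}: {round_trip_ms(km):5.1f} ms floor")
```

Even under these optimistic assumptions, a distant datacenter adds tens of milliseconds per round trip before any queuing or model execution, which is why interactive inference pushes toward the metro and the edge.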


Our networks were not designed for this world.

They optimize for throughput, not interpretive latency.

They treat intelligence as something above the network, not integrated with it.

They are largely unaware of semantic value.


To make “AI everywhere” feasible and sustainable, connectivity must evolve from a transport layer into a coordination layer.


Compute, memory, routing, edge execution, and hyperscale AI datacenters will need to be co-designed.

Otherwise, we risk building AI systems designed for dynamic, distributed intelligence on connectivity architectures optimized for predictable, linear traffic.


It is like putting a high-performance car on railroad tracks.


The engine may be powerful.

But the track determines where and how it can move.


The next bottleneck will not be GPU count.


It will be whether networks can regulate interpretation and action at the speed AI demands.


Incumbent operators carry massive sunk investments built for a different architectural logic. As AI-driven workloads demand programmability and tighter coordination, friction is inevitable.


Not because anyone is wrong.


But because architectures optimized for the past rarely adapt smoothly to new paradigms.


We are building hyperscale AI datacenters.


The next strategic battleground may be the network itself.


Just like in the 90s.

But this time, the surprise may emerge from Japan.


