[Special Contribution] ② It Is Time to Tear Down the ‘Three Walls’ Blocking the AI Highway
Jason Park, Professor, Department of Digital Content Design, Osan University
▶ Invisible Bottlenecks
In July 1970, no one at the opening ceremony of the Gyeongbu Expressway asked about the thickness of the asphalt. They had only one concern: “Will this road truly get us there faster, cheaper, and safer?”
Fifty years later, we stand before another expressway—the AI highway. This time, the questions are slightly different: “Who will drive on this road?” And more critically: “What is holding us back?”
As I pointed out in my previous contribution (www.hitech.co.kr, November 2, 2025), competitiveness in the AI era depends not on the number of GPUs but on the speed of the infrastructure. Yet, a closer look at the field reveals that the real bottlenecks lie outside the hardware. They are three invisible walls: power and cooling, network and storage, and data, talent, and societal inertia. Unless these walls are overcome, purchasing expensive GPUs will be futile.
▲ First Wall: Power and Cooling—The Battlefield of Heat and Electricity
In today’s data centers, two timelines coexist. In sections with ample bandwidth, models advance briskly, but the moment power fluctuates in a specific rack or cooling lags, everything grinds to a halt.
The numbers make this clearer. A single NVIDIA H100 GPU draws up to 700 watts at full load, roughly the power of a household air conditioner. An AI highway segment, however, houses 100,000 such GPUs. That totals 70 megawatts, rivaling the power demand of a medium-sized city.
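The arithmetic is worth writing down, because the overheads compound on top of it. A minimal back-of-the-envelope sketch, using only the figures above:

```python
# Back-of-the-envelope power budget for a GPU cluster.
# Figures from the text: 700 W per GPU, 100,000 GPUs.
GPUS = 100_000
WATTS_PER_GPU = 700            # H100 board power at full load

it_load_mw = GPUS * WATTS_PER_GPU / 1_000_000
print(f"IT load: {it_load_mw:.0f} MW")
# -> 70 MW for the GPUs alone, before CPUs, storage,
#    networking, and cooling are added on top.
```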
The problem does not end there. AI training runs 24 hours a day without pause, so demand stays high even when the grid is already at its peak. Existing transmission networks cannot cope.
Global players are already responding. Microsoft is building a 100-megawatt data center in the Arizona desert, generating its own power with solar energy. Google constructed a facility in Hamina, Finland, where cold seawater serves as free coolant. They do not “purchase” power; they “design” it.
What about us? We still rely on Korea Electric Power Corporation’s rate tables.
Cooling is even more unforgiving. A hundred thousand GPUs at full load give off as much heat as a small thermal power plant. If it is not dissipated, chips will, quite literally, melt. Traditional air cooling is no longer sufficient; liquid cooling, which channels coolant directly to the GPUs, is becoming the new standard. Efficiency rises tenfold, but so do costs.
An intriguing phenomenon emerges here: cooling is not merely a temperature issue. Field reports show that switching cooling methods alone can improve perceived storage IOPS by nearly 70%. How does cooling affect I/O? Because heat, voltage, clock speed, errors, retries, and queue delays form a single chain: hotter silicon throttles clocks and raises error rates, errors trigger retries, and retries lengthen queues.
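To make that chain concrete, here is a minimal sketch. The base latency and retry probabilities below are illustrative assumptions, not measurements; the point is only that a modest retry rate inflates effective latency, and with it queue depth, without any single component being “slow”:

```python
# Minimal sketch of the heat -> error -> retry -> queue-delay chain.
# All numbers are illustrative assumptions, not measurements.

def effective_latency_ms(base_ms: float, retry_prob: float) -> float:
    """Expected latency when each attempt fails (and is retried)
    with probability retry_prob: attempts are geometric, so the
    expected attempt count is 1 / (1 - retry_prob)."""
    return base_ms / (1.0 - retry_prob)

for p in (0.00, 0.05, 0.20, 0.40):
    print(f"retry probability {p:.2f}: "
          f"{effective_latency_ms(1.0, p):.2f} ms per I/O")
# 0.00 -> 1.00 ms, 0.20 -> 1.25 ms, 0.40 -> 1.67 ms: hotter silicon,
# more retries, longer queues.
```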
A data center with a PUE (Power Usage Effectiveness, the ratio of total facility power to IT equipment power) of 1.1 differs from one at 1.5 in more than eco-friendliness. The former can invest its surplus capacity in precision control, while the latter is dragged around by cooling demands and sabotages its own peak performance. A PUE below 1.3 must therefore become an operational KPI for reducing latency variability, not merely an environmental slogan.
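The difference is easy to quantify. A short sketch, reusing the 70 MW IT load estimated above:

```python
# PUE = total facility power / IT equipment power.
IT_LOAD_MW = 70.0              # from the 100,000-GPU estimate above

for pue in (1.1, 1.3, 1.5):
    facility = IT_LOAD_MW * pue
    overhead = facility - IT_LOAD_MW   # cooling, power conversion, etc.
    print(f"PUE {pue}: facility {facility:.0f} MW, "
          f"overhead {overhead:.0f} MW")
# PUE 1.1 ->  7 MW of overhead; PUE 1.5 -> 35 MW. The 28 MW gap is
# capacity the efficient site can spend on headroom and control.
```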
Waste heat utilization is another emerging topic. A Stockholm data center heats 20,000 nearby apartments with its exhaust. Korea also has high winter heating demand. Connecting data center waste heat to district heating networks would cut cooling costs and save on heating. It is a win-win proposition, but it requires integrated reconfiguration from urban planning to energy policy.
▲ Second Wall: Network and Storage—The Tail Wags the Dog
AI operates in unison, on the exact same beat. Thousands of GPUs must exchange data at precisely the same moment. Here, the uniformity of network latency matters more than its average.
We target RTT ≤ 10 ms. Yet the real battle is not achieving that figure but maintaining it during traffic surges or the early stages of a propagating fault.
In the field, this is called P95 or P99 latency: the slowest 5% or 1% of cases. A single microburst, whether on InfiniBand or Ethernet, stretches the tail, and the slowest 1% of nodes end up dictating the overall speed. GPUs wait. We simply do not see that idle time.
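The arithmetic of synchronized training shows why the tail dominates. In a synchronous step, every worker must finish before anyone proceeds, so the step time is the maximum, not the average. A minimal sketch, with an illustrative worker count and straggle probability:

```python
# Why P99 dominates synchronous training: a step is only as fast as
# its slowest worker. Numbers are illustrative assumptions.
workers = 1_000
p_slow = 0.01                  # each worker straggles 1% of its steps

p_clean_step = (1 - p_slow) ** workers
print(f"P(no straggler in a step) = {p_clean_step:.5f}")
# ~0.00004: with 1,000 workers, essentially every synchronous step
# contains at least one P99 straggler, so the tail sets the pace.
```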
Think of a highway: a minor fender-bender in the emergency lane backs traffic up for 20 km. The same happens in networks. Packet loss must be held below 10⁻⁶, and loss- and congestion-management techniques such as ECN marking, RED queue management, and link aggregation belong on the first page of the operations manual.
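That 10⁻⁶ threshold is not arbitrary: across the millions of packets in a single collective operation, even rare loss becomes near-certain. A sketch, with an illustrative packet count:

```python
# Why per-link loss must be pushed below ~1e-6: across millions of
# packets per collective, rare loss becomes near-certain.
# The packet count is an illustrative assumption.
packets_per_collective = 1_000_000

for loss_rate in (1e-6, 1e-7, 1e-8):
    p_any_loss = 1 - (1 - loss_rate) ** packets_per_collective
    print(f"loss {loss_rate:.0e}: "
          f"P(at least one drop) = {p_any_loss:.2%}")
# 1e-6 -> ~63% of collectives see a drop (and a retransmit stall);
# 1e-8 -> ~1%. Each order of magnitude buys back tail latency.
```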
Storage mirrors all of this. Parameter serving at scale, sampling and preprocessing, checkpoint saves: none of it is sequential. Parallel, fragmented requests flood the queues, and cache hit rates are sensitive to dataset versions and sampling strategies.
The sentiment in the field translates to: “We bought fast disks, so why isn’t the service faster?” The key is designing fast paths, not merely buying fast components.
These three elements—power and cooling, network, and storage—are not independent. Power headroom affects network latency uniformity, tail latency disrupts storage queues, and storage delay variance erodes scheduler efficiency, raising GPU idle rates. Thus, organizations that ask “how many paths” rather than “how many GPUs” ultimately prevail.
▲ Third Wall: Data, Talent, and Inertia—Invisible Structural Barriers
Even with perfect physical infrastructure, a third wall remains. It is a human and institutional one.
▷ The Fortress of Data Feudalism
The fuel for vehicles on the AI highway is data. Yet Korea remains trapped in severe “data feudalism.” Public agencies, major hospitals, and key industries lock data within their domains under the banners of “security” and “regulation.”
While global Big Tech integrates worldwide web information, user behavior, and vast research data into massive data lakes, we busy ourselves digging hundreds of small ponds. These ponds vary in quality and are each too small to train giant AI models of real potential.
We must shift perspective. Data creates value when it circulates and combines, not when it sits in storage. We need a national platform for generating and sharing high-quality synthetic data, alongside full adoption of federated learning, which trains models without moving personal data. Institutional innovation is required to guarantee “usage rights” for AI advancement without infringing ownership.
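Federated learning is concrete, not abstract. Below is a minimal sketch of federated averaging on synthetic linear-regression data; the site count, learning rate, and data are all illustrative assumptions. Each institution computes an update on data that never leaves its walls, and only the model parameters travel:

```python
import numpy as np

# Minimal federated-averaging (FedAvg) sketch: each site trains on
# data that never leaves the site; only weights are averaged.
# All data and hyperparameters here are illustrative assumptions.

def local_step(w, X, y, lr=0.1):
    # One gradient step of linear regression at a single institution.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, sites):
    # Each site updates locally; the coordinator averages the weights.
    return np.mean([local_step(w, X, y) for X, y in sites], axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
sites = []
for _ in range(3):                       # three "institutions"
    X = rng.normal(size=(200, 2))
    sites.append((X, X @ w_true + rng.normal(scale=0.1, size=200)))

w = np.zeros(2)
for _ in range(300):
    w = fedavg_round(w, sites)
print(w)   # converges toward [2.0, -1.0] with no raw data pooled
```

Real deployments add secure aggregation and differential privacy on top, but the separation of ownership and usage rights is already visible in this toy loop.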
▷ Scarcity of System Architects
The talent the AI highway demands is not mere coders or general server administrators. It requires infrastructure architects and distributed-systems engineers who can deploy massive AI models on clusters of thousands of GPUs in ultra-low-latency environments.
They possess deep knowledge of HPC, network protocols, distributed parallelism, and operating systems, and they fight to shave even 1 millisecond off cluster-wide P99 latency. Such core talent is extremely scarce, and it is already being swept up by Google, NVIDIA, and OpenAI at astronomical salaries.
We zealously train “scientists” who build AI models but neglect the “builders” and “operators” who deploy and manage them efficiently in the field. No matter how advanced an autonomous vehicle is, without experts to design and maintain the highway, investment cannot guarantee efficiency.
▷ Fragmented Cultural Inertia
The final barrier is bureaucratic and cultural inertia. Korea’s IT budget execution favors “separation” over “integration” and “pilot projects” over “grand platforms.”
The AI highway must be a single unified national backbone platform, yet ministries and agencies pursue dozens or hundreds of mini-pilots to secure budgets—each creating “our ministry’s AI.” This resembles local governments paving separate mini-tracks for the Seoul-to-Busan route.
As pilots fragment, data scatters, infrastructure duplicates, and economies of scale vanish entirely. AI infrastructure, which should pursue ultimate efficiency, ends up exemplifying ultimate inefficiency—a paradox.
▲ Who Will Drive on That Road?
When the Gyeongbu Expressway opened in 1970, no one imagined today’s traffic volume. Vehicle registrations barely exceeded 100,000, and the project was criticized as “overinvestment” even as the paving quietly went ahead. Yet once the road was built, vehicles appeared, people moved, and money flowed.
The true value of the AI highway lies in “sharing.” The United States established the National AI Research Resource (NAIRR), giving universities and SMEs nationwide equal access to infrastructure. China operates the “East Data, West Computing” project, processing data from its eastern regions with cheap power in the west, on a national scale.
We must now begin designing for sharing. The AI highway is not a road the government builds for any single party’s use. It must be a communal asset, co-used and co-advanced by government, industry, academia, and startups.
◈ What Must Be Done
First, establish AI Energy Special Zones. Relax regulations and provide spaces where power, cooling, and networks can be designed as a bundle. Consider coastal, mountainous, or even offshore-platform sites for experiments in seawater cooling, wind-power linkage, and waste-heat recycling.
Second, create integrated design standards. Benchmarks like MLPerf must comprehensively measure power efficiency, cooling efficiency, and network latency. Only data centers meeting these standards should receive government support.
Third, form a private-led consortium. An AI Infrastructure Alliance with Samsung Electronics, SK Hynix, Naver, and Kakao is essential. Their world-class technologies in semiconductors, memory, cloud, and search can integrate to build a globally top-tier AI highway.
Fourth, foster a data circulation ecosystem. Separate ownership from usage rights and dismantle data feudalism via federated learning and synthetic data generation.
Fifth, launch infrastructure architect training programs. Shift AI talent development from “model-centric” to “system-centric.”
Sixth, pursue unified platform construction. Halt hundreds of fragmented pilots and concentrate all resources on a single national AI backbone platform.
▲ Perceived Speed Is Born from Design
Some readers may retort: “Isn’t this ultimately about money?” Yes, but this money buys “time value,” not “equipment.” Shortening training cycles so a model ships a week earlier, processing more requests at the same power to lower unit costs, holding latency steady through peaks: all of this is competitiveness forged from time.
GPUs compress that time, but the leaks occur outside them. Ninety percent of AI performance is determined beyond the GPU. If power, network, and storage are the road surface, the lanes, and the interchanges, GPUs are the engines running on top of them. Potholes and tangled lanes turn engine power into accident risk rather than speed.
Perceived speed arises from design, not output.
I reiterate the principle from my previous contribution: Infrastructure does not await demand; design creates it. My conclusion remains unchanged. The moment power, network, and storage are bound into one blueprint with KPIs as constraints, and the walls of data, talent, and institutions are demolished, high-performance GPUs finally deliver their true potential.
Just as the Gyeongbu Expressway overcame land acquisition challenges 50 years ago, we now need the courage for “digital land expropriation”—tearing down barriers of data, talent, and bureaucracy.
This is our last chance. Global players are already three years ahead. What takes them one month takes us three. This gap compounds. In three years, it becomes nine; in five, twenty-five.
I earnestly propose once more to the Presidential AI Chief of Staff: Elevate dismantling the three walls of the AI highway to a national strategic task and commence design with all ministries and the private sector.
Like the cement road we laid 50 years ago, it is time to pave a new path with silicon, electrons, and photons. Korea’s next 50 years will run upon it.
▶ “Who will drive on the AI highway?” The answer is simple. All Korean challengers aiming globally will drive it. We must open the road now for them to run. This is the justification and calling of the AI era.
Dr. Jason Park graduated from the University of California, San Diego, and worked as a high school teacher in California before serving as an admissions officer at the University of Illinois. He is currently an admissions advisor at Eastern Illinois University, Southwest Minnesota State University, and the European University in Germany. Additionally, he operates the YouTube and TikTok channel "JasonTube" and serves as a full-time professor at Osan University, South Korea.

