Same Frenzy, New Plumbing
In the early 1990s, “networking” on a PC was a jigsaw puzzle. You didn’t have TCP/IP. You assembled TCP/IP:
- The right stack
- The right network card
- The right driver
- The right OS version
- The right configuration (that you’d only discover was wrong at 2am)
If you’re too young to remember this, imagine that the new thing for your work PC was connecting to others and sending messages…but email only worked inside your company and wasn’t connected to the internet. You carried a briefcase full of papers home if you needed to work on something after hours. You probably didn’t have a mobile phone, and if you did, it was mounted in your car – and only made voice calls.
I worked in customer support at a company that lived at the intersection of hardware, software, and networking. Our application ran across multiple protocols, so we didn’t just watch customers struggle—we helped them fight the puzzle: stack + driver + NIC + OS + application. It wasn’t just technical complexity. It was market immaturity.
This is the LLM market today.

Act I: Monetize the Mess
Inside the company I worked for, leadership was on a path to buy a TCP/IP stack to provide a consistent foundation for our applications.
The absurd part: application teams had to make functionality decisions based on disparate network stacks. Test teams had to test them all. Users had to understand them to get them to work. Then someone kicked the cable out of the adapter under their desk and the whole network went down.
Have you tried Hummingbird and Chameleon on both EtherLink and NE2000 cards? What about when the network mixes BNC and 10BASE-T connectors?
Networking vendors made tons of money in the confusion…and the switching costs…and the new versions. I believed the stack (and maybe the network cards) were heading toward commodity status. Essential, but not differentiating. I argued against buying or building a stack.
Act II: Standardize It (the Part Everyone Forgets)
TCP/IP didn’t win because one vendor’s stack was the best. Ethernet was technically inferior to Token Ring. Both won because they became the standard—and standards create gravity. Once the interfaces stabilized, the application didn’t care whose stack you bought.
That’s the key idea: the application shouldn’t care. The user shouldn’t care. Maybe the IT department cares for a while, but eventually just procurement cares.
Act III: Forget It’s There
Once TCP/IP became “default” and the interfaces stabilized, the market stopped paying premiums for stacks.
Networking wasn’t eliminated. Thinking about networking was eliminated. Who knows which network adapter is in their new laptop today? Can you imagine buying a laptop without connectivity? It’s unthinkable.
History continues to prove the pattern:
Monetize → Standardize → Forget
The LLM World Is in Its “Stack Wars” Era
Today’s LLM discourse sounds like the early 90s networking discourse—just with better fonts and worse certainty:
- Which model for which task?
- Which provider is “best”?
- Should we build our own?
- How do we avoid lock-in?
It’s the same jigsaw puzzle, modernized:
model + prompt style + tooling + memory + safety + cost + latency
And it produces the same executive temptation:
“Let’s build the stack so we control our destiny. Everyone is doing it; we don’t want to be left behind!”
The value today, in the complexity phase, is the model. The value in the future is the thing that uses those models.
Segmentation: Specialized Providers vs Specialized “Application-Layer Engines”
Yes—there are real segments emerging: coding, personal info management, health, and more.
But the more important question is where specialization will live:
Path A: Specialized model providers dominate each segment
“Best model for coding.” “Best model for health.” “Best model for XYZ.”
Path B: Commodity base models + specialized implementations on top
Fine-tunes, adapters, retrieval, tool-use, memory, evals—packaged as product capabilities. In applications.
Path B is the historical match.
The winning move is for applications to standardize how they connect to intelligence, so that model selection becomes invisible plumbing.
That’s the interoperability point—and it’s where network effects quietly return.
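Path B can be sketched in a few lines of code. This is an illustrative sketch, not anyone's real SDK: the `Intelligence` protocol, `StubModel`, and `summarize` names are all hypothetical. The point is that the application codes against one narrow interface, and whatever model sits behind it becomes swappable plumbing.

```python
# Sketch of Path B: the application depends on one narrow interface,
# so the model behind it is a configuration detail, not an architecture.
# All names here are illustrative, not a real vendor SDK.
from typing import Protocol


class Intelligence(Protocol):
    """The only thing the application knows about a model."""
    def complete(self, prompt: str) -> str: ...


class StubModel:
    """Stand-in provider; in production this would wrap any vendor's API."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


def summarize(ticket: str, model: Intelligence) -> str:
    """Application logic: it cares about the outcome, not the vendor."""
    return model.complete(f"Summarize this support ticket: {ticket}")


# Swapping providers is a config change, not an application rewrite.
print(summarize("VPN drops every 20 minutes", StubModel("vendor-a")))
print(summarize("VPN drops every 20 minutes", StubModel("vendor-b")))
```

Swapping `vendor-a` for `vendor-b` changes nothing in `summarize` — which is exactly the property that made TCP/IP stacks interchangeable once the interfaces stabilized.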
Network Effects: TCP/IP Interoperated with Networks. LLMs Interoperate with Tools.
TCP/IP’s network effect was obvious: the value came from being able to talk to other networks.
LLMs don’t inherently need to “talk” to other LLMs. They compete on capability.
So where’s the network effect?
It moves up one layer.
That’s why protocols like Model Context Protocol (MCP) matter: they standardize how AI systems connect so developers don’t rebuild bespoke, model-specific integrations.
Once connectivity is ubiquitous, the LLMs start to disappear.
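The shape of that interface layer can be sketched with a toy tool registry. To be clear, this is hypothetical code, not the actual MCP SDK: the idea it illustrates is that a tool is registered once behind a standard shape, and any model client can discover and call it, instead of each model needing its own bespoke integration.

```python
# Illustrative sketch of an MCP-style interface layer: tools register
# once, and every model client discovers and calls them through one
# standard shape. (Hypothetical code, not the actual MCP SDK.)
from typing import Callable, Dict, List


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        """A tool is written and registered once."""
        self._tools[name] = fn

    def list_tools(self) -> List[str]:
        """Any model client can discover what is available."""
        return sorted(self._tools)

    def call(self, name: str, arg: str) -> str:
        """...and invoke it through the same interface."""
        return self._tools[name](arg)


registry = ToolRegistry()
registry.register("search_tickets", lambda q: f"3 tickets match '{q}'")

# Swapping the model doesn't mean rewriting the integration:
# every client sees the same list_tools/call surface.
print(registry.list_tools())
print(registry.call("search_tickets", "VPN"))
```

With N models and M tools, bespoke integrations cost N×M connectors; a standard interface costs N+M. That arithmetic is the network effect moving up a layer.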
Phase 1: Value = Plumbing
(TCP/IP stacks | LLM providers)
Phase 2: Value = Interfaces
(Winsock | MCP)
Phase 3: Value = Outcomes
(Connectivity | Apps & Agents)
The Pattern Is Undefeated
Every platform shift starts the same way:
- We monetize the complexity.
- Then we standardize the plumbing.
- Then we forget it was ever hard.
TCP/IP stacks were once a market category. Now they’re invisible.
LLMs are in their stack-wars era.
The winners won’t be the companies with the prettiest model demo…or even the best model. They’ll be the ones who make magical apps and let you forget the model exists.