An 8-part series on going from delivery team to agent-native organization — lessons earned, not borrowed. Genesis · → Anxiety · Names Matter · Proof of Value · The Pivot · Co-Creation · The Garage · The Flywheel
Most AI transformation stories skip Phase 2.
They go from “we built some agents” straight to “adoption soared and everyone loved it.”
That’s not what happened for us.
When we introduced the first agents to the delivery team, the reaction wasn’t excitement. It wasn’t curiosity.
It was anxiety. Real anxiety. The kind that doesn’t announce itself clearly. It comes out as skepticism, low usage, polite questions with an edge underneath them.
The edge was: is this going to replace me?
Nobody said it that way. But it was in the room.
There was a second layer too. A small squad had gone off and built things, and now those things were showing up in workflows that had been their workflows. It felt like change being done to them.
“I feel like I’m on the outside looking in at my own job being replaced.”
That’s not a technology problem. That’s a trust problem. Technology solutions don’t fix trust problems.
We made two structural choices. Both matter.
The first choice: we doubled down on agents being teammates, not tools; we gave them all personas and personalities.
This sounds like semantics. It isn’t.
A tool is something you use, or don’t. It sits there. It doesn’t get better. It doesn’t respond to coaching. It doesn’t care if it’s valuable or not.
A teammate is different. A teammate can be given feedback that actually changes how they work. You can advocate for them. You can push for them to be more capable. You have a stake in whether they succeed.
When people have a stake in something, they engage with it differently.
The second choice: every agent got a name.
Reese. Casey. Theo. Mona. George.
Not product names. Not “AI Assistant v2.3.” Real names, each one tied to the function, each one introduced the way you’d introduce a new hire: with context, with expectations, with a clear path to give feedback.
More on this in the next post, but the short version: named agents get coached. Unnamed tools stay static and get ignored. When was the last time your garden rake got an upgrade?
The anxiety didn’t disappear overnight. But it had somewhere to go.
The question shifted from “is this replacing me?” to “how do I best work with this?”
That’s the crack in the door. That’s what Phase 3 is about.
You can build the best agent in the world. If your team doesn’t trust it, you’ve built nothing of value.
Next: Why naming your agents isn’t branding; it’s adoption strategy.
In the early 1990s, “networking” on a PC was a jigsaw puzzle. You didn’t have TCP/IP. You assembled TCP/IP:
The right stack
The right network card
The right driver
The right OS version
The right configuration (that you’d only discover was wrong at 2am)
If you’re too young to remember this, imagine that the new thing for your work PC was connecting to others to send messages…but email only worked inside your company and wasn’t connected to the internet. You carried a briefcase full of papers home if you needed to work on something after hours. You probably didn’t have a mobile phone, and if you did it was mounted in your car and only made voice calls.
I worked in customer support at a company that lived at the intersection of hardware, software, and networking. Our application ran across multiple protocols, so we didn’t just watch customers struggle—we helped them fight the puzzle: stack + driver + NIC + OS + application. It wasn’t just technical complexity. It was market immaturity.
This is the LLM market today.
Act I: Monetize the Mess
Inside the company I worked for, leadership was on a path to buy a TCP/IP stack to provide a consistent foundation for our applications.
The absurd part: application teams had to make functionality decisions based on disparate network stacks. Test teams had to test them all. Users had to understand them to get them to work. Then someone kicked the cable out of the adapter under their desk and the whole network went down.
Have you tried Hummingbird and Chameleon on both EtherLink and NE2000? What about when the network has both BNC and 10BASE-T connectors?
Networking vendors made tons of money in the confusion…and the switching costs…and the new versions. I believed the stack (and maybe the network cards) were heading toward commodity status. Essential, but not differentiating. I argued against buying or building a stack.
Act II: Standardize It (the Part Everyone Forgets)
TCP/IP didn’t win because one vendor’s stack was the best. Ethernet was technically inferior to Token Ring. Both won because they became the standard, and standards create gravity. Once the interfaces stabilized, the application didn’t care whose stack you bought.
That’s the key idea: the application shouldn’t care. The user shouldn’t care. Maybe the IT department cares for a while, but eventually just procurement cares.
Act III: Forget It’s There
Once TCP/IP became “default” and the interfaces stabilized, the market stopped paying premiums for stacks.
Networking wasn’t eliminated. Thinking about networking was eliminated. Who knows which network adapter is in their new laptop today? Can you imagine buying a laptop without connectivity? It’s unthinkable.
History continues to prove the pattern:
Monetize → Standardize → Forget
The LLM World Is in Its “Stack Wars” Era
Today’s LLM discourse sounds like the early 90s networking discourse—just with better fonts and worse certainty:
“Let’s build the stack so we control our destiny. Everyone is doing it; we don’t want to be left behind!”
The value today, in the complexity phase, is the model. The value in the future is the thing that uses those models.
Segmentation: Specialized Providers vs Specialized “Application-Layer Engines”
Yes—there are real segments emerging: coding, personal info management, health, and more.
But the more important question is where specialization will live:
Path A: Specialized model providers dominate each segment
“Best model for coding.” “Best model for health.” “Best model for XYZ.”
Path B: Commodity base models + specialized implementations on top
Fine-tunes, adapters, retrieval, tool-use, memory, evals—packaged as product capabilities. In applications.
Path B is the historical match.
The winning move is for applications to standardize how they connect to intelligence, so that model selection becomes invisible plumbing.
That’s the interoperability point—and it’s where network effects quietly return.
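The “invisible plumbing” claim can be sketched in code. A minimal illustration (the provider names and interface here are hypothetical, not any real SDK): the application is written once against a narrow interface, and which model sits behind it becomes a configuration choice, not a rewrite.

```python
from typing import Protocol


class Intelligence(Protocol):
    """The application's only contract with 'the model'."""
    def complete(self, prompt: str) -> str: ...


# Two interchangeable providers -- stubs standing in for real SDK calls.
class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"


class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"


def summarize(ticket: str, llm: Intelligence) -> str:
    """Application logic: written once, against the interface."""
    return llm.complete(f"Summarize: {ticket}")


# Model selection is plumbing -- swap the value, not the application.
providers: dict[str, Intelligence] = {"a": ProviderA(), "b": ProviderB()}
print(summarize("customer can't log in", providers["a"]))
print(summarize("customer can't log in", providers["b"]))
```

The application code (`summarize`) never names a vendor; the day the “best model for X” changes, only the plumbing dictionary does.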
Network Effects: TCP/IP Interoperated with Networks. LLMs Interoperate with Tools.
TCP/IP’s network effect was obvious: the value came from being able to talk to other networks.
LLMs don’t inherently need to “talk” to other LLMs. They compete on capability.
So where’s the network effect?
It moves up one layer.
That’s why protocols like Model Context Protocol (MCP) matter: they standardize how AI systems connect so developers don’t rebuild bespoke, model-specific integrations.
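A toy sketch of what such a standard buys (this is an illustration of the idea, not the actual MCP schema or wire format): describe a tool once, in one shared shape, and any client that speaks the shape can call it — instead of N bespoke, per-model integrations.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """One standard shape for a tool: name, description, handler."""
    name: str
    description: str
    handler: Callable[[dict], str]


# A single registry replaces per-model integration glue.
REGISTRY: dict[str, Tool] = {}


def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool


def call(name: str, args: dict) -> str:
    """Uniform entry point: every client calls tools the same way."""
    return REGISTRY[name].handler(args)


# The tool is described once...
register(Tool(
    name="lookup_order",
    description="Fetch an order's status by id.",
    handler=lambda args: f"order {args['id']}: shipped",
))

# ...and any model, agent, or app invokes it through the same door.
print(call("lookup_order", {"id": "A123"}))
```

The point isn’t the code; it’s the shape. Once the shape is shared, the integration work stops scaling with the number of models — which is exactly where the network effect moves.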
Once connectivity is ubiquitous, the LLMs start to disappear.
Phase 1: Value = Plumbing (TCP/IP stacks | LLM providers)
Phase 2: Value = Interfaces (Winsock | MCP)
Phase 3: Value = Outcomes (Connectivity | Apps & Agents)
The Pattern Is Undefeated
Every platform shift starts the same way:
We monetize the complexity.
Then we standardize the plumbing.
Then we forget it was ever hard.
TCP/IP stacks were once a market category. Now they’re invisible.
LLMs are in their stack-wars era.
The winners won’t be the companies with the prettiest model demo…or even the best model. They’ll be the ones who make magical apps and let you forget the model exists.