The Architecture of Agency
Seeing Agentic AI Clearly, From a Coder Who Knows What Limited Vision Really Means

Last month, a VP cornered me after a meeting and asked: “Eric, how do I hire an agent?”
I almost laughed—until I realized he was serious. I started to explain the difference between an LLM and an autonomous system, and within thirty seconds I could see I’d lost him. He didn’t need a lecture. He needed a mental model.
That conversation stuck with me, because it exposed something bigger than one VP’s confusion. We are staring across a massive semantic gap. The boardroom sees a “Digital Worker.” The engineering floor sees an “Autonomous Loop.” And in between, there’s a graveyard of pilot projects that collapsed because nobody gave these two groups a shared language.
If we don’t bridge that gap, we are going to build a lot of brittle, unpredictable, and wildly expensive software.
The Architecture of Agency is a 5-part series designed to fix that. We are going to translate the dense, rapidly evolving jargon of Agentic AI into the familiar software architecture paradigms you already know—like microservices, REST APIs, and stateful applications.
Instead of throwing out abstract definitions, we are grounding every term in a single, evolving enterprise scenario: Building an AI system to autonomously triage and resolve Tier-1 customer support tickets.
The Bioptic Lens
The same engineering discipline that makes software work for a user with 20/150 vision is exactly what makes an autonomous agent safe in production.
That’s not a metaphor. I’ve spent 13 years shipping enterprise systems while navigating code with screen magnifiers and VoiceOver. When you build software for users who experience the world differently, you learn very quickly that “vibes” and “probabilistic guesses” don’t cut it. You need clear boundaries, explicit context, and highly reliable tools.
It turns out, those are the exact same principles that separate an agent that works in a demo from one that survives its first week in production. Accessibility isn’t a sidebar in this series—it’s the lens through which every concept becomes clearer.
This series isn’t just about how to build agents. It’s about how to give them the sight, memory, structure, and guardrails they need to actually succeed in the real world.
Series Index
Bookmark this page. As the series unfolds, I will update the links below so you can follow the complete architectural evolution—from a naive chatbot to a fully orchestrated, observable multi-agent system.
Part 1: The Myopia of Chatbots: Why Your Bot Can’t See the Full Picture (And How to Give It Real Sight)
Your chatbot is hallucinating refund policies and forgetting the customer’s name mid-conversation. Here’s why—and what to do about it.
Decoding: LLMs, Chatbots, Personas, Context Windows, and Context Engineering (managing Context Rot and Compression).
Part 2: Giving Your AI Glasses and a Memory: The Handbook That Ends Hallucinations (Coming Soon)
The bot needs to read the actual company handbook and check the customer’s real billing status. But there’s a spectrum between a rigid workflow and a fully autonomous agent—and most teams pick the wrong spot.
Decoding: Agentic Workflows vs. Autonomous Agents, Tools (Function Calling), Structured Outputs (JSON Mode), RAG, and Vector Databases.
Part 3: Stop Guessing, Start Thinking: How Chain-of-Thought Turns Probabilistic Chaos into Predictable Work (Coming Soon)
Your agent can access systems, but it’s making rash decisions—and you have no idea why. Time to force it to show its work and build the instrumentation to prove it.
Decoding: Chain of Thought (CoT), ReAct Reasoning, Guardrails, Grounding, Evaluation & Observability (Tracing), and the 12-Factor Agent Methodology.
Part 4: The USB-C (and Ethernet) for Agents: Why Open Protocols Are the Only Way Enterprise AI Doesn’t Become a Mess of Brittle Integrations (Coming Soon)
The support agent is a success. Now Sales and HR want their own. Suddenly you need agents that connect to tools, talk to each other, remember what happened yesterday, and prove who they are.
Decoding: Model Context Protocol (MCP), Agent-to-Agent Protocol (A2A), Memory Architecture (Short-term, Episodic, Semantic, Procedural), Agent Identity & Security, and Multi-Agent Routing.
Part 5: Frameworks, Platforms, or Raw Code? The Build-vs-Buy Decision Matrix (Coming Soon)
The architecture is approved. The VP of Engineering asks: “So what’s our tech stack?” The answer depends on where you are today—and how fast the landscape under your feet is shifting.
Decoding: LangGraph, OpenAI Agents SDK, Microsoft Agent Framework, Managed Platforms (Amazon Bedrock AgentCore, Azure AI Agent Service, Vertex AI), and Token Caching & Cost Optimization.
Has your chatbot hallucinated something embarrassing yet? Has your token bill made your CFO ask uncomfortable questions? Drop a comment below or find me on LinkedIn. I read every one, and the best comments often shape the future of this series.