Over the past few weeks, I kept seeing discussions around MCP everywhere. Blog posts, Twitter threads, demos, and strong opinions. Most of it sounded promising, but also vague. Everything seemed to fall back to “better tool calling” or “agents done right,” which did not really help me understand where MCP actually fits in real systems. So instead of just reading about it, I decided to build something small and practical while exploring MCP hands-on. This article is a reflection on that process.
What MCP Is, in a Nutshell
At a high level, MCP is a protocol that standardizes how large language models interact with external systems. Instead of tightly coupling an LLM to APIs, databases, or services, MCP introduces a clear separation of roles. MCP servers expose capabilities and resources. An MCP host runs the LLM and orchestrates interactions. The LLM itself focuses on reasoning and coordination rather than owning state.
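The role separation can be sketched in a few lines of plain Python. This is a deliberately simplified model, not the real MCP SDK: the class and tool names here are illustrative, but the shape is the point. The server owns state and exposes named tools; the host routes the model's proposed tool calls to the server.

```python
# Hypothetical, simplified sketch of MCP's role separation (not the real SDK).

class DecisionServer:
    """Plays the MCP-server role: owns state and exposes named tools."""
    def __init__(self):
        self.state = {"decisions": []}
        self.tools = {"create_decision": self.create_decision}

    def create_decision(self, title: str) -> dict:
        decision = {"id": len(self.state["decisions"]) + 1, "title": title}
        self.state["decisions"].append(decision)
        return decision


class Host:
    """Plays the MCP-host role: forwards the model's tool calls to servers."""
    def __init__(self, server):
        self.server = server

    def call_tool(self, name: str, **kwargs):
        return self.server.tools[name](**kwargs)


host = Host(DecisionServer())
result = host.call_tool("create_decision", title="Pick a message queue")
print(result)
```

Notice that the model never touches `self.state` directly; it can only go through the tools the server chooses to expose.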
The important shift is not about calling tools more easily. It is about moving long-lived state and authority outside the model and into systems that are designed to persist, validate, and enforce rules. Once I started thinking about MCP this way, it stopped feeling abstract.
The Problem MCP Was Designed to Solve
From a business standpoint, MCP addresses a real problem. LLMs are powerful but stateless. Production systems need durability, auditability, ownership and clear boundaries. When everything lives in prompts and model output, it becomes hard to trust, debug or scale.
MCP helps by turning external systems into first-class participants rather than helpers. It allows teams to build AI-enabled systems where data ownership, policies and workflows are explicit. This matters for reliability, compliance, maintainability and long-term cost. In short, MCP is less about intelligence and more about system design.
The Idea Behind My Project
While exploring MCP, I kept coming back to a problem I personally run into often. When I am evaluating technical options, a lot of reasoning happens in long AI conversations. Pros and cons, constraints, concerns, trade-offs. Once the conversation ends, most of that context is gone. I might remember the final choice, but not why I made it. I wanted a way to separate exploration from persistence.
So I built a small Decision Trace system. The goal was simple: capture and persist the reasoning that emerges during AI-assisted exploration, without letting the AI make decisions on my behalf. This was not a theoretical exercise. It is something I genuinely wanted to use.

Building the Local MCP Server
The core of the system is a local MCP server that I built myself. This server owns all authoritative state. Decisions, options, pros, cons, notes, and final outcomes live here. Everything is stored in a simple SQLite database. Entries are append-only, nothing gets overwritten, and finalized decisions are locked. The server exposes MCP tools for creating decisions, adding options, appending entries, inspecting state, and finalizing decisions. It also exposes an MCP resource that defines the canonical template for decision summaries.
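The storage rules described above can be sketched roughly as follows. This is a minimal illustration, assuming a SQLite schema with a `finalized` flag and an append-only `entries` table; the table and function names are mine, not necessarily the project's actual schema.

```python
# Illustrative sketch of the server's storage rules: append-only entries,
# and no writes accepted once a decision is finalized.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE decisions (
    id INTEGER PRIMARY KEY, title TEXT, finalized INTEGER DEFAULT 0)""")
conn.execute("""CREATE TABLE entries (
    id INTEGER PRIMARY KEY, decision_id INTEGER, kind TEXT, body TEXT)""")

def create_decision(title: str) -> int:
    cur = conn.execute("INSERT INTO decisions (title) VALUES (?)", (title,))
    return cur.lastrowid

def append_entry(decision_id: int, kind: str, body: str) -> None:
    # The server, not the model, enforces the rule: no writes after finalize.
    row = conn.execute("SELECT finalized FROM decisions WHERE id = ?",
                       (decision_id,)).fetchone()
    if row is None or row[0]:
        raise ValueError("decision missing or already finalized")
    conn.execute("INSERT INTO entries (decision_id, kind, body) VALUES (?, ?, ?)",
                 (decision_id, kind, body))

def finalize_decision(decision_id: int) -> None:
    conn.execute("UPDATE decisions SET finalized = 1 WHERE id = ?",
                 (decision_id,))

did = create_decision("Choose a database")
append_entry(did, "pro", "SQLite needs no server process")
finalize_decision(did)
try:
    append_entry(did, "con", "too late")  # rejected by the server
except ValueError as e:
    print("rejected:", e)
```

There are only inserts and a one-way flag flip here; nothing is ever updated or deleted, which is what makes the record trustworthy after the fact.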
The key point is that the server enforces structure and rules. The LLM can propose actions, but the server decides what is valid and what becomes persistent state.
Using a Google Docs MCP Server
For summaries, I deliberately did not reuse the local server. Instead, I connected a Google Docs MCP server. Technically, this server runs locally as well since I cloned and connected the repository, but I did not build it myself. I treated it as a separate, external capability. This was intentional.
The local server owns decision data. The Google Docs server owns document creation and storage. The summary format itself comes from a template exposed as an MCP resource by the local server. The LLM simply fills the template and hands it off to Google Docs. This clean separation made MCP’s value very concrete for me.
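The template hand-off can be sketched like this. Assume the local server's MCP resource returns a canonical template string, and the model's only job is to fill the blanks; the template fields and helper name below are illustrative, and the actual Google Docs call is left out.

```python
# Sketch of the server-owned template being filled by the model.
# SUMMARY_TEMPLATE stands in for what the local server's MCP resource
# might return; field names are assumptions.
from string import Template

SUMMARY_TEMPLATE = Template(
    "Decision: $title\nOutcome: $outcome\nKey reasons:\n$reasons"
)

def render_summary(title: str, outcome: str, reasons: list[str]) -> str:
    # The LLM only fills the template; the format stays server-owned.
    return SUMMARY_TEMPLATE.substitute(
        title=title,
        outcome=outcome,
        reasons="\n".join(f"- {r}" for r in reasons),
    )

summary = render_summary(
    "Choose a database",
    "SQLite",
    ["no server process", "append-only is easy to enforce"],
)
print(summary)
# The rendered text would then be handed to the Google Docs MCP server
# as the document body.
```

Because the format lives in a resource rather than a prompt, every summary comes out the same shape regardless of which model filled it in.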
How the Entire Flow Works
The flow is straightforward. I start a decision and define options. I then talk naturally with the LLM, ask questions, explore trade-offs, and surface concerns. As relevant points become clear and validated in the conversation, the LLM proposes entries and the local MCP server persists them.
At any point, I can inspect the current state directly from the server. When I am done, I explicitly finalize the decision. That produces a clean snapshot of everything that led to the choice. From that snapshot, the LLM generates a summary using the server-owned template and creates a Google Doc via the Google Docs MCP server. The result is a durable decision record that shows what was considered and why a decision was made.
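The inspect-and-snapshot step can be sketched as a read-only query over the server's store. This assumes an illustrative SQLite schema with `decisions` and `entries` tables (not necessarily the project's real one); the key idea is that the summary is generated from this snapshot, never from chat history.

```python
# Sketch of the "inspect / snapshot" step over an illustrative schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE decisions "
             "(id INTEGER PRIMARY KEY, title TEXT, finalized INTEGER DEFAULT 0)")
conn.execute("CREATE TABLE entries "
             "(id INTEGER PRIMARY KEY, decision_id INTEGER, kind TEXT, body TEXT)")
conn.execute("INSERT INTO decisions (title, finalized) "
             "VALUES ('Choose a database', 1)")
conn.execute("INSERT INTO entries (decision_id, kind, body) "
             "VALUES (1, 'pro', 'no server process')")

def snapshot(decision_id: int) -> dict:
    # Read-only view of authoritative state; safe to call at any point.
    title, finalized = conn.execute(
        "SELECT title, finalized FROM decisions WHERE id = ?", (decision_id,)
    ).fetchone()
    entries = conn.execute(
        "SELECT kind, body FROM entries WHERE decision_id = ?", (decision_id,)
    ).fetchall()
    return {"title": title, "finalized": bool(finalized), "entries": entries}

snap = snapshot(1)
print(snap)
```

Since the snapshot is just a query, swapping the model (or losing the conversation entirely) costs nothing: the record is reproducible from the database alone.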
The demo video below walks through the entire flow.
What This Project Helped Me Understand
Building this clarified MCP for me more than any blog post or tutorial. It showed me that MCP is not about making models smarter. It is about making systems more reliable. The LLM coordinates interactions, but authority lives elsewhere. State survives model swaps. Rules are enforced in code, not prompts.
That mental shift made MCP finally click.
How This Could Be Improved Further
There is a lot that could be built on top of this. A lightweight review step before finalization could improve trust. Versioned templates could support different decision types. A simple UI could make inspection easier. Support for multiple users could turn this into a team decision ledger. None of these require changing the core architecture. That is a good sign.
Closing Thoughts
I started this project because I wanted to understand MCP beyond the hype. What I ended up with was a small system that I actually use and a much clearer mental model of where MCP fits.
For me, MCP is not about agents or magic workflows. It is about coordination, ownership, and building AI-enabled systems that behave predictably over time. This project was a good reminder that the fastest way to understand new infrastructure is to build something small, real, and slightly opinionated.
You can find the code repo for this mini project here: mcp-decision-journal
Until next time,
Adiba 😊

