He Built an MCP Server So You Can Talk to AI About Any Substack Author
From messy Python scripts to a one-click system that turns writing data into creative feedback.
You know that feeling when you’re deep in a project, making real progress, and then you need to check one thing in another tool?
You switch tabs, lose your train of thought, spend 10 minutes remembering where you were, and by the time you're back, your momentum is gone. I’ve been obsessed with this problem since I started building connected AI systems.
The biggest killer of AI productivity isn’t the technology. It’s the constant context switching. We jump between ChatGPT for brainstorming, Cursor for coding, Notion for notes, and three other tools for research. Each switch breaks our flow, fragments our thinking, and turns what should be smooth building into exhausting tool management.
This is why MCP became one of my favorite AI advancements. It finally gives us a way to connect everything into one space. Instead of juggling AI tools, we can build systems that understand our context and work the way our brains actually think.
And this is what I love about today’s guest post: Alejandro Aboy has lived this exact problem, and his solution validates what I’ve been exploring about connected intelligence.
Alejandro writes The Pipe & The Line, where he shares hands-on guides, tools, and experiments to sharpen your Data & AI Engineering skills.
If you want to dive deeper into his writing, here are three top articles to get started:
In this post, Alejandro shares how he built an MCP system for Substack authors: turning scattered local scripts and authentication pain into a single connected workflow. It’s a practical look at how creators can use MCP to make their AI tools truly context-aware without rebuilding everything from scratch.
What I love about his approach is how he treats creation like engineering: structured, data-driven, and iterative. It’s the same mindset I’ve used in building connected systems: measure what works, automate the rest, and turn solutions into working tools.
Here’s his story:
Hi, I’m Alejandro.
AI has a lot to offer. And I’m not talking about the travel agent demos you see everywhere—I mean real tools that help you first, and hopefully others too.
I spent many weekends building systems based on the “second brain” idea.
If you have a bunch of ideas like me, you probably use Google Keep, Docs, Sheets, Notion, or tools like Evernote or Obsidian to capture them.
If you blog about those ideas, unless you’re super organized, you often can’t find your info dump when you need it. So you either give up or lose good ideas somewhere along the way.
I started building a prototype for a voice + note-taking app, which turned into a niche tool for Substack authors called Voice2Blog.
This simple app helped me stay organized, “dump and forget” my content ideas while walking my dog, and even talk through edits on my drafts.
Things were going well.
But then questions popped up:
How do I figure out what to write about next?
How do I make data-driven decisions based on what worked?
How do I learn from my own successes and failures without reinventing the wheel every time?
Most of the time, we build systems just to capture and store data, but don’t do much to turn that knowledge into action.
Voice2Blog has features to speed up turning ideas into content, but it lacked tools to analyze content performance.
That gap triggered the research that followed.
The Over-Engineering Trap
I spent about three months on this, which brought good findings and lots of messy dead ends.
Each phase helped shape what I built, but I focused too much on putting too many pieces together just to ask, “Read and tell me how my Substack articles did so I can write some Substack notes about it.”
Stage 1: When “Just One Script” Becomes Ten
I started with rss_articles.py to fetch articles from my Substack RSS feed, rss_insights.py to generate content ideas, and rss_calendar.py to recommend what to write next.
Each script handled parsing, cleaning, and storing data in Google Sheets.
But every time I wanted to analyze something, I had to run three different scripts wrapping LLM prompts, wait for data to sync, and manually piece together insights from static outputs.
Not exactly a smooth workflow.
Stage 2: The Limits of Claude Project + Local MCP
I added a lot of knowledge about writing notes, articles, and capturing style into a Claude project. I even designed a step-by-step workflow to run it sequentially.
I ran the Python script, then exported and imported CSV files because Claude couldn’t read directly from my Google Sheets via Google Drive.
Then I thought: “What if I wrap all these scripts in an MCP server?” So I built mcp_server.py, exposing each script as its own tool.
Better, but now I had another problem: this only worked on my laptop with Claude Desktop.
I couldn’t use it from my phone, and sharing it with others meant they’d have to clone repos, install dependencies, configure Google Sheets credentials, and set up virtual environments.
Solving one problem by creating ten more 🤣
Stage 3: The Chrome Extension Rabbit Hole
I reverse-engineered some apps that help schedule notes, which Substack doesn’t support.
I wondered how they worked, and it turns out the key is Chrome extensions.
The trick is running JavaScript in the browser while you’re logged in.
These tools ask you to never let your computer sleep and to keep your Substack tab open, because the installed Chrome extension runs its scripts inside that tab.
I wrapped this in a GitHub project if you want to try extracting your notes with enriched stats. See Substack Notes JS Extractor.
Basically, I had to copy-paste once a week in Chrome, wait a few minutes, then take the JSON output somewhere else to parse with Python and export it for Claude Desktop to read via my local MCP.
The Missing Piece: Simplicity
Yes, I could read Substack article info, fetch notes, and run some analysis locally, but it was a mess.
If I shared it with anyone, I’d just overwhelm them 😅
So far, we’ve talked about systems to fetch data, put it together, and store it consistently.
But nothing pointed toward a simple approach that could extend with other tools without turning into a Frankenstein.
And all without being chained to your desk.
The Substack Author MCP: A Smarter Way to Build and Learn
For the visual ones, take a look:
I stripped everything down and built a FastMCP server focused on three things:
Fetch Substack Content — No Auth Hassles Required
I refactored all the scripts I made over the last months and simplified them as MCP tools with one clear job each.
Just RSS feeds and public API endpoints wrapped cleanly using the feedparser package and some custom parsing logic.
Data-Driven Tools for Smarter Substack Insights
The MCP includes ready-to-use tools that:
Fetch Substack articles from RSS feeds
Read through their content, perfect for summaries or style extraction
Fetch Substack notes with custom logic to make stats easier to digest
Read parsed article performance to get restacks, reactions, and comments metrics
Here’s how one tool looks:
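A minimal sketch of the shape such a tool takes (the function and field names here are illustrative, not the server’s actual API, and the real server uses the feedparser package; this stdlib version just shows the idea):

```python
# Illustrative sketch only: the real Substack Author MCP uses feedparser;
# names and fields here are hypothetical.
import xml.etree.ElementTree as ET

def fetch_articles(rss_xml: str, limit: int = 10) -> list[dict]:
    """Parse a Substack RSS feed and return the newest articles."""
    root = ET.fromstring(rss_xml)
    articles = []
    for item in root.iter("item"):  # each <item> is one post
        articles.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        })
    return articles[:limit]
```

In FastMCP, a function like this becomes a tool with a single decorator, and Claude can call it on demand.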
No “log in to Substack and leave the tab open so I can run JavaScript on your console.”
Prompts That Learn From What Works
The MCP also includes ready-to-use prompts that:
Analyze article performance and suggest follow-ups
Draft Substack notes using proven formats (vulnerability hooks, contrarian insights, etc.)
Extract your writing style into reusable Claude prompts
Find patterns in your top-performing notes
The prompts call tools as needed, so you don’t have to figure out how to use them step-by-step.
Here’s an example prompt:
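A representative prompt along these lines (the wording is illustrative, reconstructed from the goals listed above, not the MCP’s exact prompt text):

```
Analyze the performance of my last 10 Substack articles.
1. Fetch the articles with their restacks, reactions, and comments.
2. Rank them by total engagement and identify the top 3.
3. For each top performer, note the topic, format, and hook used.
4. Suggest 3 follow-up article ideas that double down on what worked,
   and draft a Substack note for the strongest one.
```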
Content as a Data Product: Extending Your Second Brain
My data engineering side kicked in. I started treating content creation like a data pipeline:
Fetch the last 10 articles with performance data
Identify the top 3 by engagement
Extract common patterns (topics, formats, hooks)
Generate new content ideas based on winners
Track performance and repeat
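The steps above can be sketched as a small pipeline. This is a sketch with made-up field names and a simple engagement score (restacks weighted double, an assumption of mine); in practice the MCP tools supply the performance data:

```python
# Sketch of the content pipeline: field names and the scoring
# formula are illustrative, not the MCP's actual logic.
def engagement(article: dict) -> int:
    """Simple engagement score: restacks count double."""
    return article["reactions"] + article["comments"] + 2 * article["restacks"]

def top_patterns(articles: list[dict], top_n: int = 3) -> dict:
    """Rank articles by engagement and count topics among the winners."""
    winners = sorted(articles, key=engagement, reverse=True)[:top_n]
    patterns: dict[str, int] = {}
    for art in winners:
        for topic in art["topics"]:
            patterns[topic] = patterns.get(topic, 0) + 1
    return {
        "top_titles": [a["title"] for a in winners],
        "common_topics": sorted(patterns, key=patterns.get, reverse=True),
    }
```

Feed the output of step 3 into an idea-generation prompt, track the new posts, and the loop closes.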
This helped me ditch local scripts and MCPs, remove noisy Claude project files, and focus more on content architecture than tooling overload.
Then I discovered Rube MCP, which connects to Google Sheets, Notion, Google Tasks, and more.
You can use it remotely, and anything not configured by you can be found and suggested by Rube. It’s an amazing tool.
I’ve been using it to save dumps to Google Sheets for later reference and to create Google Tasks on a custom schedule for posting Substack notes. It’s been working quite nicely.
Sometimes I email myself info dumps I want to keep unread, and with Rube I can search those emails and surface them as input for Substack notes.
Options are limitless.
No Repo Setup: How to Use It Anywhere
Here’s the best part: you don’t need to clone repos or install anything locally.
Go to the MCP registry and connect it to the tool you want, or just add this connector URL: https://substack-author.fastmcp.app/mcp.
That’s it. The MCP runs remotely, so you can use it from Claude mobile, desktop, or anywhere. No local setup. No environment configuration. No dependency hell.
At the end of the day, if you have to clone repos and manage virtual environments just to ask, “What performed well this week?” that’s pretty annoying—at least for me 😂
⚠️ Disclaimers. A few things to keep in mind:
You will be asked to grant access. Once you do, restart Claude Desktop or Cursor and the MCP should be accessible.
If you want to fetch notes with the Substack Author MCP, make sure the publication has its notes visible. You can even ask the MCP how to do it. If you visit this link and see notes listed, they’re visible. It just requires toggling a setting on your Substack author website.
No direct publishing: You draft content, but you still publish manually on Substack whenever you decide.
TL;DR
Went from scattered Python scripts + Google Sheets auth + local Claude Desktop to a single remote MCP URL
Zero installation needed—works from Claude mobile with just a URL
Content becomes a data pipeline: fetch performance → spot patterns → generate more of what works
Includes prompts battle-tested while growing from zero to 4,000+ subscribers
Want to fetch content and spot patterns with a simple tool? Cool, you don’t need 47 moving parts
If Alejandro’s approach to systematic problem-solving resonates with you, check out his work at FastMCP where he’s building tools that solve the context switching problem.
The lesson here isn’t about MCP servers or Substack. It’s about recognizing when you’re building too many moving parts to solve a problem that should have one solution. Context switching kills productivity. Connected intelligence restores it.
This is how real tools get built, and how AI becomes useful instead of flashy.
Quick reminder: I’m hosting a 3-day workshop series this week (Oct 14, 15, 16) on strategic use of AI: Adopt AI → Advance Your Career → Build Real Things. Here’s a quick peek at the details.