7 Things Nobody Warns You About Launching Your First Vibe Coding App
What nobody warns you about launching your first vibe coding app: 7 real lessons from hosting hell across 5 platforms, packaging failures, and the 10/90 rule.
Most vibe coders hit a wall right after their MVP works — not because the code is wrong, but because launching is a completely different job. This is the full post-launch breakdown from my first vibe coding app, built with Cursor in early 2025: 7 specific things that caught me off guard, what I’d change with Claude Code today, and the 10/90 rule that reframes the whole experience.
The MVP worked. It ran on my machine. I showed a few people, they said “can I try it?” — and I said yes before I understood what that meant.
That question turned a working prototype into two months of hosting migrations, auth systems, Apple developer certificates, and packaging nightmares. The code I wrote in a weekend was the smallest part of the project. Everything I didn’t plan for took the rest.
These are the 7 lessons I took away from launching my first vibe coding app.
Hi, I’m Jenny 👋
I build AI systems and teach non-technical people to ship with them. If you’re past the “what is AI” stage and trying to build real things with it, the Practical AI Builder program is where I work with builders directly.
New here? Start with these:
Claude Code Project Ideas: 30 Builds From Beginner to AI Agent
I Tested Claude Code Against 7 AI Coding Tools on the Same Build
What’s inside:
Why the MVP is only 10% of the work: the ratio that reframes every timeline estimate you’ve made for your vibe coding projects
7 things that caught me completely off guard: from hosting failures across 5 platforms to Apple code signing hell — with what fixed each one
How Claude Code changes the launch equation today: three things that go differently now, one core principle that doesn’t change
The one rule to apply before your next launch: how the 10/90 principle changes your planning before you start, not after you’ve hit every wall
The app
An AI image search tool — paste a folder path, describe what you’re looking for in plain language, get matching photos back using a vision model. Built with Cursor. Available as a desktop app at findinfolder.online.
Here’s my story of how a quick project turned into a full-fledged learning experience, complete with unexpected challenges, sleepless nights, and a newfound respect for app developers.
From Concept to MVP: The Smooth Beginning
Creating the app’s core feature was fast, fun, and immensely rewarding. My goal was simple: build a tool to help me quickly search for images in my folders. I worked through a couple of articles to set up the baseline, and before I knew it, I had a working MVP (minimum viable product).
Soon after sharing it, people started asking, “Is it available for me to try it out?” That’s when it hit me — why not make this app available for everyone? Excited by the prospect, I decided to launch it.
Little did I know, the real work was just beginning.
The app itself isn’t the point of this article. The launch is. If you’re at the point where your vibe coding project works locally and you’re thinking about putting it in front of real users, this is what comes next.
What this build required
Cursor (original build) — I’ll show the Claude Code bridge where it matters
A vision-capable AI model API key (OpenAI or equivalent)
A hosting provider that can handle ML libraries — not shared hosting (this lesson comes up fast)
Roughly 1 weekend to build the MVP, 6–8 more weeks to make it usable by someone who isn’t you
Willingness to debug things that have nothing to do with your actual product
Lesson 1: The MVP is 10% of the work — the launch is the other 90%
Building the core feature came together fast. A few Cursor sessions, a working image search function, done. Then someone asked to try it, and the real work started.
The next 90% was everything that has to exist before a stranger can use your app without you sitting next to them: user accounts, subscription management, forgotten password flows, feedback channels, and error states for things you never thought to handle. None of it is in any vibe coding tutorial.
If you want to understand this progression before you start building, How to Build an AI App — Vibe Coding Zero to Launch maps the whole arc. The 10/90 split shows up at every project level, not just the first one.
With Claude Code today: The ratio doesn’t change. But CLAUDE.md means you don’t re-explain the app structure every session — so the 90% moves faster.
Lesson 2: Going public changes the scope entirely
The moment you commit to real users, three questions appear at once: How do people sign up? How do you charge them? How do you communicate with them?
I needed Google sign-in or email registration, subscription management, forgotten passwords, logout flows, and gifting options. Each one seemed small. Together they were a project unto themselves.
I’d written about planning this kind of scope in Vibe Coding Planning Methodology — if you’re pre-launch, that’s the checklist to run through before your MVP is done, not after.
With Cursor: I built each feature one session at a time, re-explaining context each time.
With Claude Code today: CLAUDE.md holds the full user model — auth logic, subscription tiers, use cases — so every session starts from the same foundation. The Claude onboarding guide shows how to structure that file so it actually works.
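To make the idea concrete, here is what a persistent-context file for an app like this might look like. This is an illustrative fragment, not the real file: every section name, tier, and detail below is a hypothetical reconstruction of the kind of context you would want loaded at the start of each session.

```markdown
# CLAUDE.md (illustrative fragment)

## What this app is
Desktop image search: the user pastes a folder path, describes a photo
in plain language, and a vision model returns matching files.

## Architecture
- Web layer: auth, subscriptions, downloads. Lightweight, no ML.
- Desktop app: all ML compute runs on the user's machine.

## Auth and billing
- Google sign-in or email registration
- Subscription tiers: trial, monthly, gift codes
- Forgotten-password and logout flows live in the web layer

## Conventions
- All file paths explicit and absolute; never assume the dev machine
- Verify any suggested API exists before building on it
```

The point is that auth logic, tiers, and conventions are stated once, so a session never starts from zero.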
Lesson 3: Boilerplate templates often cost more than they save
I looked at ShipFast and several boilerplate starters before deciding to build auth from scratch. The reasons: I wanted to understand the fundamentals, most templates had unresolved dependency conflicts, and none of them matched my app’s framework.
That decision turned out to be right for my situation — but it’s not always the right call. The key question is whether the template’s dependencies conflict with yours.
If you’re thinking through stack decisions like this, AI Software Engineering Best Practices for Builders covers the “scratch vs. template” tradeoff in detail — same logic applies whether you’re using Cursor or Claude Code.
With Claude Code today: I’d still evaluate templates. But CLAUDE.md with your stack defined means Claude recommends what fits your project, not what’s generic. You spend less time evaluating and more time building.
Lesson 4: Cheap shared hosting will fail your ML app
First assumption: Namecheap or GoDaddy would handle it. First reality: they couldn’t. The app used PyTorch and Hugging Face Transformers for the vision model. Shared hosting didn’t have the compute or the environment.
I went through five platforms before landing on one that worked:
Namecheap → crashed under ML load
GoDaddy → same issue
Oracle Cloud → workable
GCP → too expensive
AWS → too complex to configure quickly
If your app uses any ML library, assume shared hosting won’t work. If you want to skip the 5-platform cycle, the OpenClaw setup guide documents the Oracle Cloud configuration that ended up working. Go there first.
Lesson 5: The architecture split solves the ML hosting problem — but you have to design for it
The solution to the hosting failures wasn’t finding a better host. It was changing the architecture. I split the app into two parts: a webpage for user registration, subscriptions, and downloads; a desktop application for the actual image search.
The ML computation moved off the server entirely, onto the user’s machine. Hosting requirements collapsed. The crashes stopped.
This is one of the core patterns in How to Make Vibe Coding Production Ready — separating the web layer from the compute layer. The same principle appears in Vibe Coding Advanced Production Patterns in more detail.
The catch: I discovered this architecture after the hosting failures, not before. If you’re building an ML-heavy app, design the split before you start hosting migrations.
Lesson 6: The packaged build lives on a different planet from your dev environment
Getting the desktop app to install cleanly on someone else’s machine was its own war.
Dev environment assumptions that broke immediately:
Hardcoded paths that only existed on my machine
Port conflicts between background processes
System permission errors that never appeared in development
Apple developer certificates — code signing, notarization, the entire chain
A local database for user data turned out to be more reliable than API calls — fewer network failure modes, simpler to debug. And every path reference had to be explicit. Not relative. Not assumed. Explicit.
The AI Prompting Vibe Coding Production Ready Guide has the prompting patterns that help with packaging specifically — how to prompt your coding tool to write code that doesn’t assume the dev environment.
Treat the packaged build like a completely different machine. It has nothing your dev environment silently provides.
Lesson 7: AI is an extraordinary guide — that occasionally points you off a cliff
“Imagine a mischievous 3-year-old with 100 years of wisdom guiding your every move.” That’s the original article’s line, and it’s accurate.
Cursor (and any AI coding tool) will confidently suggest solutions that don’t exist, APIs that were deprecated, and approaches that almost work until they don’t. Not often — but enough to waste a day if you follow blindly.
The fix isn’t to trust it less. It’s to verify before implementing. Check that the function exists. Run a minimal test. Don’t build 40 lines on top of an untested foundation.
I went deep on this in 15 Best Claude Code Prompts — the prompting patterns that reduce hallucination specifically in build sessions. The constraint and verification layers make a real difference. Same principles applied in Cursor.
What I’d Build Differently Today
I built this with Cursor in early 2025. Three things would change with Claude Code:
1. CLAUDE.md replaces the per-session re-explanation. Every Cursor session, I re-explained the app structure, the subscription model, the ML dependencies. With Claude Code, that context lives in one file and loads every session. The Skills article covers how persistent context works in practice — the architecture of the file matters as much as the content.
2. Oracle Cloud from the first session. I wasted weeks on platforms that couldn’t handle ML before landing there. The OpenClaw setup guide exists specifically so you don’t repeat that cycle.
3. The architecture split is still right. Compute on the client, lightweight web layer for auth and distribution — that decision holds regardless of tool. It’s the same logic behind choosing your automation level: start with the simplest infrastructure that works, add complexity only when the simple version breaks under real load.
One more thing that belongs in the "build before you need it" category: an LLM backup plan. For what happens when your API provider goes down mid-day, When Your AI API Goes Down is the full account — 6 backup lessons from a real outage.
The One Thing That Stuck
The 10/90 rule doesn’t mean building is easy — it means the definition of “done” that most vibe coders use is wrong. Done-in-dev and done-for-users are two different products. The MVP is the proof of concept. The launch is the actual build.
Knowing this before you start lets you plan honestly: budget more time for auth and hosting than for the feature itself, choose your infrastructure before you need it under pressure, and design the architecture for real users before you have them. The first project is the one where you learn all of this the hard way. Every project after is faster because you already have the scar tissue.
Go Deeper
How to Start Vibe Coding: The Roadmap Nobody Gave You — if this is your first app or you’re planning your second, this is the progression map that makes each launch faster than the last.
OpenClaw Setup Guide — the Oracle Cloud configuration that solved the ML hosting problem: skip the 5-platform cycle and go straight to what works.
Smoke Testing Your Vibe Coding Projects — the validation pass that should happen before you invite real users, so you find the failures while they’re still cheap to fix.
What Happens After You Launch Your First AI App — 7 lessons from the weeks that followed: the metrics gap, real user feedback, and the moment you realize the build was the easy part.
— Jenny







