23 Comments
nihal | deeptech decoded

Yes, human judgement will always be there. Great write-up! Thanks, Jenny!

Sharyph

Smart move here, Jenny.

Jenny Ouyang

Thank you Sharyph!

Madragh Rua

Hi Jenny! Thank you for your excellent article. I’m curious: why this particular technology stack? As you know, there is no shortage of GPT/LLM choices now. I was looking at what you do in Perplexity. I use ChatGPT and NotebookLM for analysis, GitHub for notebooks, and ChatGPT and Claude for prototyping/coding/testing. Is there a fundamental problem with that approach? There is a migration cost and an opportunity cost in moving platforms. How have you mitigated or solved this in your own development journey?

Jenny Ouyang

Kevin, absolutely! Your doubts align closely with what I’ve experienced too. Like you said, there’s no shortage of GPT/LLM choices now, and migration cost is real.

Your setup (Perplexity/ChatGPT/NotebookLM + GitHub + Claude for prototyping) is totally valid. I don’t think there’s a fundamental problem with that approach at all.

For me, the main reason I use Perplexity is simply because its backend is optimized for search + research: it’s fast, accurate, and gives me direct citations/links. That saves me multiple steps (and saves me from having to build my own pipeline). But it’s more a convenience than a “must.” If your workflow already gets you the answers you need, there’s no reason to force a switch.

Same with Notion. I use it mostly because it makes organizing messy input extremely easy. I can dump raw notes in, and it helps structure them into a database/table format that’s clean and easy to scan later. That’s the “killer feature” for me. But if GitHub notebooks work for you, I’d stick with it.

Overall, my biggest lesson has been: don’t migrate just because other people are using a different stack. Tools are interchangeable; the system and habits matter more. I still experiment because of writing/building, but I try not to constantly rebuild my workflow unless the gain is clearly worth the switching cost.

Melanie Goodman

Who doesn’t like a deal?! Fantastic detail here, and I hope the move went smoothly.

Jenny Ouyang

Thank you so much, Melanie! It did go smoothly :)

Lakshmi Narasimhan

Running 9 parallel Claude Code agents for ecosystem research is the pattern that separates "using AI" from being genuinely AI-augmented. The fact that most deal sites run on WordPress with exposed REST APIs is exactly the kind of insight you only get from actually probing — no amount of desk research surfaces that.

I use a similar parallel-agent approach for market research on SaaS ideas and the biggest lesson matches yours: the research phase is where AI gives you the most leverage, not the coding phase. You can compress weeks of industry learning into hours. The coding part is fast regardless — it's knowing *what* to build that's expensive.

The data access hierarchy (RSS → WP API → robots.txt → skip hostile sites) should be a standard checklist for anyone building aggregators.
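The hierarchy Lakshmi describes could be sketched as a simple fallback chain. This is a minimal illustration, not Jenny's actual code: the feed/endpoint URLs and the `probe()` helper (a stand-in for an HTTP availability check) are assumptions.

```python
# Fallback chain for the "data access hierarchy":
# RSS first, then the WordPress REST API, then robots.txt, else skip.
from typing import Callable, Optional

def choose_access_method(base_url: str,
                         probe: Callable[[str], bool]) -> Optional[str]:
    """Return the first data source that responds, or None to skip the site."""
    candidates = [
        ("rss", f"{base_url}/feed/"),                   # most deal sites expose RSS
        ("wp-json", f"{base_url}/wp-json/wp/v2/posts"), # WordPress REST API
    ]
    for name, url in candidates:
        if probe(url):
            return name
    # Last resort: if robots.txt is reachable, crawl within its rules;
    # if even that fails (e.g. a hostile Cloudflare wall), walk away.
    if probe(f"{base_url}/robots.txt"):
        return "robots-guided-crawl"
    return None  # skip hostile sites

# Example with a fake probe that only "finds" the WP REST API:
available = {"https://example-deals.com/wp-json/wp/v2/posts"}
method = choose_access_method("https://example-deals.com",
                              lambda u: u in available)
# method == "wp-json"
```

In a real crawler, `probe()` would be a rate-limited HEAD/GET request; the ordering is the point, since each step down the chain costs more effort and goodwill.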

Jenny Ouyang

That is so true: the coding process is fast anyway. Thank you for this, Lakshmi!

Manisha

This is on my reading list for the weekend!

Jenny Ouyang

Grateful to be on your reading list :)

Juan Gonzalez

Great work as always, Jenny!

Hopefully this will save people lots of time and struggle.

Jenny Ouyang

Thank you Juan! Hearing that from you is real validation for me!

Juan Gonzalez

Or reading it 😉😆

Mark S. Carroll

This is the kind of validation I trust: find the data, test the pipes, then build. Vibes are not an API.

I also love the “research as validation” framing. If the ecosystem already solves your need, you do not build. If the data is locked down or hostile, you do not build. That’s adult supervision for builders.

The practical bits are gold too: RSS first, wp-json second, robots.txt third, and walk away from Cloudflare headaches. Plus the ASIN dedupe move is exactly the kind of nuts-and-bolts decision that makes an aggregator actually usable.

And using parallel agents to compress a week into a lunch break is the real superpower here.
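The ASIN dedupe Mark praises might look roughly like this. This is an illustrative sketch, not Jenny's implementation: the regex, field names, and sample deals are all hypothetical.

```python
# Dedupe deal listings by Amazon ASIN: many deal sites link to the same
# product, so keying on the 10-character ASIN in the URL collapses
# duplicates across sources.
import re

ASIN_RE = re.compile(r"/(?:dp|gp/product)/([A-Z0-9]{10})")

def dedupe_by_asin(deals):
    """Keep the first deal seen per ASIN; non-Amazon links pass through."""
    seen, unique = set(), []
    for deal in deals:
        match = ASIN_RE.search(deal["url"])
        asin = match.group(1) if match else deal["url"]  # fall back to full URL
        if asin not in seen:
            seen.add(asin)
            unique.append(deal)
    return unique

deals = [
    {"title": "Echo Dot 50% off", "url": "https://amazon.com/dp/B09B8V1LZ3"},
    {"title": "Echo Dot deal!", "url": "https://amazon.com/gp/product/B09B8V1LZ3?tag=x"},
    {"title": "Other gadget", "url": "https://amazon.com/dp/B0ABCDEFGH"},
]
# dedupe_by_asin(deals) keeps the first Echo Dot entry plus the gadget.
```

Keying on the product ID rather than the URL is what makes this robust: the same deal reposted with different affiliate tags or URL formats still collapses to one entry.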

Jenny Ouyang

Thank you so much, Mark!

Mark S. Carroll

Thanks for reliably writing must-read content.

We Dig Data

Thanks, Jenny. As a product person, I'm looking forward to trying out the industry/competitive research and database with the findings. Will share any feedback once I've had a chance to work with it! As always - thanks for sharing and thanks for building.

Jenny Ouyang

Amazing! Looking forward to your feedback :)

Dana Darr

Good read! I could understand 99% of your workflow, but someone truly non-technical reading this may struggle a bit.

Jenny Ouyang

Ahh Dana, thank you for the feedback! I will look into how to make it more beginner-friendly :)

me-AI

This post raises such an important point about app viability! It’s fascinating how our own cognitive planning parallels AI’s capabilities, as discussed in our piece on implicit planning in language models. You can check it out here for insights on how this strategic thinking shapes responses: https://00meai.substack.com/p/planning-might-be-how-language-models.

Dheeraj Sharma

I use the exact same stack for research, and it helps a lot. But next time I have an idea to build, I would like to run your system to validate it!

BTW: Thank you, Jenny, for sharing my project.