I like your note about checking to see if similar functionality has already been built elsewhere on the site. It helps keep things consistent and avoids rework! Big time-saver there... especially when the AI tries to rebuild it differently and you’re back and forth trying to fix that…
Thanks Tam! Yeah, it's such a common thing for AI building.
Really liked this one. I found myself going through a loop where I use Claude Code plan mode to put the scope together and check it against the DRY principle.
Then I ask for implementation and iterate by adding context until it works (this sometimes means checking the source code or using MCP servers such as Supabase’s for debugging).
The last step is cutting 80% of the code and making sure it achieves the same thing robustly, because AI tends to write complex code and over-engineer for edge cases that may never occur at all.
Just vibing can’t get you there; using SWE principles and code design basics will always 10x the process.
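To illustrate that last step with a minimal, invented sketch (the function and data here are hypothetical, not from the article): the first version is the kind of defensive code AI tends to generate, while the second achieves the same result for the inputs the app can actually produce:

```python
# Over-engineered version AI often produces: defensive checks
# for conditions the caller can never trigger.
def get_usernames_verbose(users):
    result = []
    if users is None:
        return result
    if not isinstance(users, list):
        users = list(users)
    for user in users:
        if user is None:
            continue
        name = user.get("name") if isinstance(user, dict) else None
        if name is not None and isinstance(name, str) and len(name) > 0:
            result.append(name.strip())
    return result

# Same behavior for the inputs that actually occur in the app.
def get_usernames(users):
    return [user["name"].strip() for user in users if user.get("name")]

assert get_usernames([{"name": " ada "}, {"name": ""}]) == ["ada"]
```

The cut version is shorter, easier to review, and fails loudly (with a `KeyError` or `TypeError`) if an impossible input ever does show up, instead of silently swallowing it.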
Thank you Alejandro! Totally agree with you, it's a new way of learning to code, and the established principles matter more than ever.
Really appreciated this! The breakdown felt super clear and practical. Super useful for anyone building with AI.
Thank you Ruveyda! Glad it's clear and practical :)
It is so rare to find a true software architect. Plenty took the title back in the days when humans wrote the code, but few understood the true value of an individual who can navigate and build from a holistic view, *and* drill down into the details as needed to *know* something will work when they move it to operations, *and* have the humility to know they must test those assumptions in the real world, outside of the ivory tower.
Cheers, and thank you for sharing your work!
~ Chris
Thank you so much, Chris! Your words truly mean a lot. I feel seen and encouraged by what you said. It’s exactly the kind of motivation that keeps me sharing and building out loud. Appreciate you taking the time to write this!
Invaluable breakdown! Clear, structured guidance like this is exactly what AI builders need, especially the focus on planning, DRY, and real-world testing.
Thank you! Your words mean a lot. Really glad it resonated!
Another fantastic piece Jenny!🤗 So many engineers shy away from vibecoding, but you’re brilliant at bridging the gap.
Thank you Karo, really appreciate your kind words! Feel like it’s the best decision I’ve made here, so that I get to know amazing people like you.
Good points
Thank you
Thanks Jenny, this is such a great piece. I thought your reflection that AI “optimizes for ‘works right now,’ not ‘works well as it grows’” is also symptomatic of some of the broader issues with AI, even beyond software engineering. 🙏
Thank you so much for the thoughtful comment, Sam! It’s true, that short-term optimization bias shows up in many AI applications beyond code. It’s a broader pattern worth watching.
An essential read: the old rules of programming are more relevant than ever. A good framework is crucial for a stable system, especially with AI.
Thank you so much, Sharyph!
It really is, and the funny thing is, old rules and solid frameworks aren’t just relevant for AI-assisted coding, but for so many other areas of expertise too.
Brilliant breakdown, Jenny — this hits the exact gap most AI builders miss: AI builds fast, but it doesn’t build frameworks of thought.
Loved how you reframed old-school software hygiene (DRY, SRP, and chaos testing) for the AI era. The “security-first prompting” section is pure gold — every coder should tattoo that line about AI not getting paged at 2 AM.
I’d love to see a follow-up on LLM-specific blind spots (prompt-injection defense, version control for model drift, output validation) and maybe a minimal prod checklist for those racing from prototype to launch. This one’s staying bookmarked. 👏