Gabriele Cavaliere

Vibe Coding Best Practices and Must-Have .md Files

About to launch a product for creators after building for a few weeks to months.

Started as a way to get into vibe coding and just got real useful real quick lol.

But the goal is still that - mix my skills and knowledge as an engineer with the good parts of AI coding.

So I've learned a few things you need to have:

  • Claude.md file

  • best coding practices section

  • pretty heavy compartmentalization & file structure separation

  • certain ways to prompt

  • ask for implementation plan before telling it to code

But I feel like all I've learned is about directing prompts, so I'm wondering...

What are you all finding are the best .md files or deeper AI vibe coding tricks that help you save time, be more efficient, debug AI less, or help it get things right the first time? What are your vibe coding tips and tricks?

Replies
Alessandro Pignotti

I tend to follow Andrej Karpathy's advice of "keeping AI under a tight leash": small, well-defined tasks that keep code review possible, at least for the most sensitive components.

An accurate review is especially important to avoid security incidents, in particular when working with the npm ecosystem which seems to be heavily polluted with compromised and malicious packages these days.

How do you deal with security? Do you use any defense-in-depth solutions like sandboxing?

Gabriele Cavaliere

@alessandro_pignotti My experience has been with new features and aspects of apps, so I'm still learning full system architecture. But depending on the project, I look up what security is needed based on the stack and the actions involved, then implement accordingly - after asking AI lol

But AI does have trouble adhering to coding principles and best practices when generating code.
For example, this section of my .md is always changing and evolving as I go along, and per project:

## Code Quality Standards

**Code Practices**:

- Clear, descriptive variable and function names
- Extensive console logging for debugging during development

**Design Principles**:

- **Simplicity First**: Implement the simplest, most straightforward solution that meets requirements
- **Performance**: Consider optimal data handling and application performance in all implementations
- **Avoid Over-Engineering**: Prevent unnecessary complexity that doesn't add meaningful value
- **Code Cohesion**: All new code integrates seamlessly with existing patterns, architecture, and conventions

**Maintenance Practices**:

- **Code Evolution Over Addition**: Modify or remove existing code rather than adding on top. Never leave legacy code or commented-out implementations. If unsure whether to modify vs. add new code, ask for guidance to prevent code bloat.
- **Reuse Existing Patterns**: Check if similar logic exists before creating new utilities. Reuse patterns and functions inline rather than duplicating. Ask: "Can this be done with existing code?"
- **No Backward Compatibility**: Replace old code completely rather than wrapping in compatibility layers. Replace data structures, APIs, and signatures fully to prevent hidden technical debt.

And yet a lot of it still needs to be repeated, as the AI will do what it wants, like generating all the code in one file or leaving legacy code behind.

I'm thinking of condensing the main ones into a .md file, and for each ticket I create, having it read that file first and then produce an implementation plan.
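For what it's worth, a sketch of what that per-ticket preamble could look like (the file name and exact wording are hypothetical, just restating the standards above):

```markdown
## Ticket Preamble (paste at the top of every ticket)

1. Read `skill.md` in full before writing any code.
2. Produce an implementation plan and wait for approval before implementing.
3. Re-check the plan against the Code Quality Standards, in particular:
   - no legacy or commented-out code left behind
   - reuse existing patterns before creating new utilities
   - keep new code within the existing file structure, never one giant file
```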

Andre Lin

The software engineering space has exploded with LLMs, and vibe coding has lowered the barrier to entry. I think that’s a good thing! People who would never have touched code before can now build something useful!

With long context, LLMs are also much better at understanding user intent. In practice, I’ve found that as long as something is human-readable and consistently structured, you can include almost anything in a CLAUDE.md file: best coding practices, preferring succinct comments, writing structured test cases, etc. A simple litmus test I use is: “Can another human dev understand what I’m writing here?” I won't bank on AI understanding my intent if I can't convey it to another human either.

One thing I’d caution though (especially for newcomers) is fully embracing the emerging mindset of “I don’t need to know what’s going on under the hood, I’ll just pilot with prompts.” That’s probably counter-productive long-term. Even for experienced devs, AI works best as a strong assistant, not an autonomous operator, especially when OS commands or infra are involved. It’s very easy to overlook a single bad command when the model generates a long sequence at once (this post I chanced upon is a good cautionary tale: https://www.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wiped/).

There are mitigations of course:

  • restricted system / OS permissions

  • aggressive use of git & version control

  • forcing planning / dry-runs before execution
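The dry-run idea can be made concrete with very little code. As a rough sketch (the pattern list and function names here are made up, not from any real tool), you can gate an agent's proposed command list through a review step that flags obviously destructive commands before anything executes:

```typescript
// Hypothetical dry-run gate: surface risky commands for human review
// instead of executing an agent's command list blindly.
// The pattern list is illustrative, not exhaustive.
const riskyPatterns: RegExp[] = [
  /\brm\s+-rf?\b/,            // recursive / forced deletes
  /\bgit\s+reset\s+--hard\b/, // discards uncommitted work
  /\bchmod\s+-R\b/,           // sweeping permission changes
];

function flagRisky(commands: string[]): string[] {
  return commands.filter((cmd) =>
    riskyPatterns.some((pattern) => pattern.test(cmd))
  );
}

// Dry run: print every command, mark the risky ones, execute nothing.
function dryRun(commands: string[]): void {
  const risky = new Set(flagRisky(commands));
  for (const cmd of commands) {
    console.log(`${risky.has(cmd) ? "[REVIEW] " : ""}${cmd}`);
  }
}
```

The point isn't the specific patterns; it's forcing a human checkpoint between "the model proposed this sequence" and "the sequence ran".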

But more broadly, using the time saved by AI to understand what you’re building under the hood pays off. Models and agents will evolve quickly, but fundamentals won’t. I think this distinction separates someone who’s just a model-wrangler from someone thinking like a systems architect.

Gabriele Cavaliere

@andrelinhk I totally agree, Andre!
Doing the occasional vibe coding project has actually improved my architectural and planning skills, and put them at the forefront of every project I've made.

I do like the "can another human dev" angle!
I've been considering something similar: making sure it's part of every code implementation plan for any LLM I'm using, covering file structure, comments, variable names, and more.

I'm building a sort of additional claude.md or skill.md file with all of the regular incidents I get from vibe coding (improper file structure, DOM elements built in .js, etc.) for future projects. For each implementation, I'll say something like "make sure you apply the coding principles in the skill.md file when implementing", and I think the "can a human dev understand this" idea is a great addition!!

Do you think your vibe code still requires cleanup or restructuring after you get it working, or do you feel you've handled that in the process as well?

Andre Lin

@gabriele_cavaliere Glad to hear vibe coding is helping you with your code planning and systems design! I think that's the best way to make use of it.

Haha yes, I've seen a fair share of devs writing messy .md files or typing in broken phrases, hoping the AI would somehow intuit the higher intent and avoid being derailed. Maybe they don't think much of the AI use case, or just want to see some variability and suggestions they might find remotely useful with minimal effort put into the .md file. Tough to say.

In any case, I think AI agents in code tooling are definitely here to stay. I've learnt a fair bit from them myself, though I tend to avoid letting their modifications be entirely automated, often preferring Claude's "Plan" or "Ask" modes. In the few instances I use "Agent" mode for automated modification, I find myself spending a lot more time reviewing (and learning!) and maybe re-writing some parts. If anything, it helps me retain my coding style while giving me exposure to other good code practices the AI suggested. A nice balance, I'd say; otherwise the code might look too foreign when I revisit it.

Oh, but I should add that most of the time correctness isn't the issue (barring occasional round-off errors). Maybe some parts could be slightly better optimized for performance, or slightly restructured to better fit the wider architecture (which sometimes isn't established yet, and the AI doesn't have full context, just forward planning on my end). But in general, having correct code provided is already a great confidence boost, and I'm currently quite comfortable with this level of AI collaboration. That said, I'm definitely interested to see what further developments come and to assess their long-term viability. Looking forward to hearing about your product!

Alessandro Pignotti

@gabriele_cavaliere  @andrelinhk I fully recognize the process you outlined when using "Agent" mode, with extensive review and modifications manually applied on top of the suggested changes.

I find myself persistently baffled by how defensive the models seem to be, with excessive checks against null references, at least in TypeScript. I read somewhere that the models might be aggressively punished during training for null dereferences.

A big part of the review process for me is going around removing checks that are pointless given logic invariants the model does not pick up on.
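A made-up example of that pattern: when an invariant guarantees a lookup can never fail, the model will still tend to emit a dead null guard, and the review consists of trusting the invariant instead.

```typescript
// Illustrative sketch (not from the thread): every id pushed into `queue`
// was inserted into `handlers` first, so the lookup in drain() can never
// come back undefined.
const handlers = new Map<string, () => number>();
const queue: string[] = [];

function register(id: string, handler: () => number): void {
  handlers.set(id, handler); // invariant: registered before queued
  queue.push(id);
}

function drain(): number[] {
  // Model-style defensive version:
  //   const h = handlers.get(id);
  //   if (!h) continue;   // dead branch, given the invariant above
  // Reviewed version relies on the invariant instead:
  return queue.map((id) => handlers.get(id)!());
}
```

Whether the non-null assertion is acceptable is a judgment call, of course; the point is that the invariant lives in the calling code, where the model often doesn't look.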