Ilia Ilinskii

What's your prompt engineering workflow?


Hey makers! 👋
I've been deep in prompt engineering lately while building an AI tool, and I'm genuinely curious about how others approach this.

A few questions:
1. Do you save your best prompts somewhere? Notion, text files, dedicated app, or just copy-paste from chat history?
2. How do you iterate? Do you have a systematic approach or just tweak until it works?
3. Different prompts for different models? Or do you use the same prompt for ChatGPT, Claude, Gemini?
4. Text vs image prompts β€” do you treat them completely differently?

I've noticed I was doing the same optimizations over and over (adding role, being more specific, structuring output format), which made me wonder if everyone has their own "prompt formula."

Would love to hear your workflows! 🙏



Replies

Ilai Szpiezak

Best thread @ilia_ilinskii!

1. Do you save your best prompts somewhere?
Yes. Inside @Pretty Prompt. It replaced all my scattered prompts with one single library I manage and use across all AIs.


2. How do you iterate?
I write my first thoughts in bullet points, then press Improve Prompt using Pretty Prompt and done.

3. Different prompts for different models?

I think more and more, as the LLMs get better, we'll have fewer model-specific prompts and more medium-specific prompts (e.g. vibe coding, text, image, video, etc.). I just write normally, and Pretty Prompt does the job for me.


4. Text vs image prompts

Yes, completely different outputs need completely different inputs :)

So the answer for me is Pretty Prompt ✨

Ilia Ilinskii

@ilaiszp Thanks for sharing! The Chrome extension approach works great for browser-based workflows.

I built Oshn Prompt for a different use case — system-wide access. Native macOS app works in any app: VS Code, Slack, Notes, Terminal — not just browser. Plus voice input (Cmd+Shift+U) — speak your idea, get a polished prompt instantly.

One thing I’d push back on though: prompt differences across models are real and well-documented. Claude works better with XML structure, GPT responds differently to system prompts, Midjourney needs specific parameters (--ar 16:9 --v 6) while Nano Banana prefers natural language descriptions. Official docs from Anthropic and OpenAI recommend different approaches for a reason β€” @build_with_aj covered this nicely above.
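To make that concrete, here's a rough sketch of how the same request might be phrased for each target. Everything below is illustrative only: the XML tag names, the prompt wording, and the exact parameter choices are conventions I'm assuming, not templates pulled from any official docs or from either tool.

```python
# Illustrative sketch only: the same request phrased three different ways.
# Tag names and parameter choices are assumptions, not official templates.

request = "a ceramic mug on a wooden desk in soft morning light"

# Claude-style: structure the prompt with XML tags (the pattern Anthropic's docs encourage).
claude_prompt = f"""<task>Write a short image brief for a designer.</task>
<context>{request}</context>
<output_format>Three bullets: subject, lighting, composition.</output_format>"""

# Midjourney-style: natural-language description plus inline parameters.
midjourney_prompt = f"{request} --ar 16:9 --v 6"

# Plain natural-language style for models that prefer conversational descriptions.
natural_prompt = f"Please generate a photorealistic image of {request}, shallow depth of field."

if __name__ == "__main__":
    for name, prompt in [("Claude", claude_prompt), ("Midjourney", midjourney_prompt), ("Natural", natural_prompt)]:
        print(f"--- {name} ---\n{prompt}\n")
```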

That’s why I built model-specific Skills β€” video prompts follow Sora/Runway syntax, image prompts use Midjourney structure. Research-backed, not one-size-fits-all.

Different tools for different workflows 🤝

AJ

So, I've mainly worked with Claude Code, and I have a highly specific system that works for me: a master prompt that acts as a setup wizard and tailors things to your needs.


I am moving to opencode and seeing how well the platform can work. I'm not gonna get into specifics of what I'd change about it unless asked.

To answer your questions:

1. I have some in files, and whenever possible they become skills or slash commands.

2. My iterative process I will outline here (I have written two blog articles: one on the general process and a deeper dive).

It begins with what outcome you want, then works backwards from there. Example: when I open the app, I want to see the weather and the Polymarket bets on it in my area.


From there I keep the LLM in plan mode and just talk it out, ask questions and have it ask questions.

The general flow goes like this: what needs to happen to accomplish XYZ, and what are the dependencies of that?

In this app example, we would need to determine frameworks to use, authentication, payments, etc.

It's about deducing actions from desired outcomes.

3. Yes, it varies wildly.

This is a whole article in itself and each model is different. I will use casual human terms.

Claude models:
Always add custom instructions that lower their perceived anxiety. Example: "Claude is welcome and encouraged to ask clarifying questions. Claude is allowed to make mistakes and will not be punished."

Seems counterintuitive, but Anthropic models come with a lot of hedging and doubt baked in, which causes a one-shot tendency. By giving permission to fail and encouragement to ask, you will see better results, as they will say when context is unclear.

OpenAI GPT models: Direct instructions to not format anything as a table unless asked, and context about your communication style.

GLM 4.7: Be hyper-specific and encourage clarifying questions.

Gemini models: Be direct and keep questions grounded; 2.5 Flash will go Tony Robbins on three Red Bulls on you.

Mistral: Direct instructions that the user wants measured and non-speculative responses.

DeepSeek: Same as GLM; clarity above all. (See the sketch after point 4 for one way to keep these tweaks in a single place.)

4. I cannot answer this with expertise, as I seldom create AI images. The general advice I see is: be specific and outline the desired visual style in detail.
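Promised sketch of keeping per-model instruction tweaks in one config. Everything in it is hypothetical: the keys are just labels and build_system_prompt is a made-up helper, not any particular harness's API.

```python
# Hypothetical sketch: per-model custom instructions kept in one config,
# prepended to whatever base system prompt you already use.
# This is a plain dict plus a helper, not a real harness or SDK API.

MODEL_INSTRUCTIONS = {
    "claude": (
        "Claude is welcome and encouraged to ask clarifying questions. "
        "Claude is allowed to make mistakes and will not be punished."
    ),
    "gpt": "Do not format anything as a table unless asked.",
    "glm": "Be hyper-specific. Ask clarifying questions when context is unclear.",
    "gemini": "Be direct. Keep questions grounded.",
    "mistral": "The user wants measured, non-speculative responses.",
    "deepseek": "Clarity above all. Ask when anything is ambiguous.",
}

def build_system_prompt(model: str, base_prompt: str) -> str:
    """Prepend the per-model instruction block to an existing system prompt."""
    extra = MODEL_INSTRUCTIONS.get(model, "")
    return f"{extra}\n\n{base_prompt}".strip()

if __name__ == "__main__":
    print(build_system_prompt("claude", "You are a senior Python reviewer."))
```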

Ultimately it depends on your communication style and how much upfront specification you want to do. It's beyond prompting; the key is to think about context holistically.

I can also share some qualitative benchmarks and how to interpret them to assess a model's behavior and decide what you want to change. Let me know and I'll make an article of it.

Also note that behavior depends on the platform or harness. Claude Opus in Claude Code will behave differently from Claude Desktop, and Mistral on the API is way different from Mistral on opencode.

Ilia Ilinskii

@build_with_aj Thanks, interesting thoughts! And wide experience! It's always interesting to share approaches. I super agree with the spec-driven approach to vibe coding.

Dushyant Khinchi

Hey! Great questions - this is something I've been thinking about a lot lately.


Hot take: I think we're solving the wrong problem.


Most people (myself included at first) obsess over prompt engineering - finding the perfect phrasing, the magic formula, saving prompts in Notion, tweaking role definitions, etc.

But honestly? I've shifted to focusing on context engineering instead of prompt engineering.

Here's what I mean: The richer your context is, the less you have to work on crafting the "perfect prompt." If the AI already has deep understanding of what you're working on, your prompts can be way more casual and conversational.

The real issue with most AI tools today:

They're fundamentally designed backwards. Instead of building robust memory and context preservation, they put the burden on you to write better prompts every single time. You end up:

  • Re-explaining your project in every new chat

  • Copying and pasting context manually

  • Maintaining your own "prompt library" as a workaround

  • Starting fresh when context gets too long

That's... exhausting? And honestly feels like a band-aid solution.

When prompts DO matter more than context:

Don't get me wrong - there are absolutely cases where prompt engineering is critical. Image generation, video editing, creative work where you need precise aesthetic control - yeah, prompts are everything there.

But for text-based work? Building tools? Vibe coding? Research and writing? You need rich, persistent context way more than you need a perfectly engineered prompt.

Think about it: would you rather spend time crafting the perfect 500-word prompt every time, or have a system that just... remembers what you're working on and lets you talk naturally?

My approach now:
The tool I built (and that I mostly use now) focuses on building up context that persists - embedding key info, organizing it spatially (not just chronologically), making it searchable and semantically connected. Once that foundation exists, my prompts can be simple:

"Add error handling here" "What's the best approach for this?" "Show me alternatives"

The AI already knows what "here" means, what "this" refers to, what the constraints are.
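As a thought experiment, here's a minimal sketch of that pattern. All of it is hypothetical: the notes are invented, and the word-overlap scoring is just a stand-in for real embeddings and a vector store. The point is only that retrieved context gets prepended automatically, so a short, casual prompt still lands.

```python
# Minimal, hypothetical sketch of "context engineering":
# store project notes once, retrieve the relevant ones for each request,
# and prepend them so the actual prompt can stay short and casual.
# The word-overlap scoring is a placeholder for real embeddings + a vector store.

PROJECT_NOTES = [
    "The app is a FastAPI backend with a React front end.",
    "Error handling convention: raise HTTPException with a JSON detail body.",
    "We deploy to Fly.io; secrets live in the environment, never in code.",
]

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Return the k notes sharing the most words with the query (toy retrieval)."""
    q_words = set(query.lower().split())
    scored = sorted(notes, key=lambda n: len(q_words & set(n.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(casual_request: str) -> str:
    """Prepend retrieved context so 'here' and 'this' resolve without re-explaining."""
    context = "\n".join(retrieve(casual_request, PROJECT_NOTES))
    return f"Project context:\n{context}\n\nRequest: {casual_request}"

if __name__ == "__main__":
    print(build_prompt("Add error handling here"))
```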

Curious what others think:

  • Are we all just coping with bad context management by over-engineering prompts?

  • Or is there real value in maintaining prompt libraries that I'm missing?

  • How do you balance the two?

Would genuinely love to hear different perspectives on this! 🤔

Ilia Ilinskii

@dushyant_khinchi Lovely post! It's great to hear someone is trying to fix the same problem as I am. Agree on many aspects. Context is king. I think the hardest part here is finding a way to navigate attention and striking the best balance between too little and too much context per request.

David Gilbert (DaGi)

I probably have the simplest workflow: it's Notion Workspace AI for all prep work, then to Replit for build-out. Back and forth, back and forth till it's over and a baby is born lol