Mihir Kanzariya

What if AI context didn’t reset every time?

If you use AI dev tools daily, you’ve probably felt this:

You start a new session and immediately have to re-explain:

  • what the project is

  • what you already tried

  • why certain decisions exist

  • what not to repeat

Not because the AI is bad.
Because the workflow forgets.

That’s what pushed us to build Blocpad.

Blocpad is a CLI-first tool where context lives with your project, not inside chat history.

What that means in practice:

  • Context is explicit and versioned

  • Tasks and decisions stay local, next to your code

  • AI reads from project state, not your memory

  • No more copy-pasting “full context” every session

This started as a personal fix for my own workflow after using tools like Cursor and Codex CLI.
We’re launching early on Product Hunt to learn, not oversell.


I’d genuinely love to hear from other builders here:

  • How do you handle context today when using AI?

  • What breaks most often when you come back to a project?

  • If AI could remember one thing perfectly, what should it be?

I’ll be active in the comments — happy to talk about design decisions, tradeoffs, and what we’re exploring next.

Check it out here and show your support: Blocpad-cli


Replies

Tereza Hurtová
Great launch, Mihir! We’ve been relying on GitHub as our 'source of truth' for context, but as the project grows, it becomes harder to keep the AI aligned with every decision made in previous sessions. Using tools like Cursor helps, but having a CLI-first tool like Blocpad to manage explicit, versioned context sounds like a potential game-changer for long-term projects. If I start using Blocpad today, how easy is it to sync existing context from a GitHub-based workflow?
Mihir Kanzariya

@tereza_hurtova Great question. Most teams already have context in GitHub — it’s just scattered across commits, PRs, issues, and docs.

BlocPad doesn’t require a big migration. You can start by pulling in high-signal context (project purpose, constraints, key decisions) and let it evolve alongside the repo. The idea is to make that context explicit and versioned going forward, rather than perfectly reconstructing the past on day one.

We’re focusing on fitting into existing GitHub workflows, not replacing them.

Tereza Hurtová

@mihir_kanzariya Makes sense, Mihir! I really appreciate the 'no big migration' approach. Most of the time, the friction of moving everything at once is what stops us from trying new tools. Thanks for clarifying!

Dan Williams

I upload the key files and ask the AI to review them and summarise its understanding. Fine while my project is in its infancy, but clearly not sustainable.

For me, the best things it could remember are the purpose of the project and the platforms used, e.g. Vercel, Render, etc.

Mihir Kanzariya

@dan_williams10 Exactly. Uploading key files works until the project stops being small.

Then the real problem shows up: deciding what the AI needs to see again.

Purpose and platform are perfect examples: they aren’t files, but they quietly constrain every decision. If the AI forgets why the project exists or where it runs (Vercel, Render, etc.), you spend time correcting instead of building.

Dan Williams

@mihir_kanzariya It is interesting that context management has become an important part of effective use of AI. I work in data, and semantic models are the equivalent. It is the boring bit no one wants to think about, but critical.

I have started getting into the habit of asking Gemini to write briefs to feed into Claude Code. It would be useful to have this as an agentic process that runs each time a Git push happens.
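
A rough sketch of what that hook could look like (the summarise step is a placeholder for whichever model or tool you actually call, and the file paths are made up):

    # pre_push_brief.py - run from a git pre-push hook (or CI on push).
    # Collects recent commit messages and turns them into a handoff brief
    # that lives in the repo. Swap write_brief() for a real model call.
    import subprocess
    from pathlib import Path

    def recent_commits(n: int = 20) -> str:
        """Return the last n commit subjects and bodies as plain text."""
        result = subprocess.run(
            ["git", "log", f"-{n}", "--pretty=format:%h %s%n%b"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def write_brief(commits: str) -> str:
        # Placeholder: replace with a call to Gemini/Claude/etc., asking it
        # to summarise goals, decisions taken, and open questions.
        return "# Handoff brief (auto-generated)\n\nRecent commits:\n\n" + commits

    if __name__ == "__main__":
        Path("docs").mkdir(exist_ok=True)
        Path("docs/handoff-brief.md").write_text(write_brief(recent_commits()))

Committing the generated brief means the next session, human or AI, starts from the same state the last one ended on.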

Favour Li
@dan_williams10 Me too. I turn every important piece of new knowledge or workflow into a PDF and upload it. Then, whenever I'm done with a session, I have it summarize everything and save that as memory. It's pretty tedious, though.
Mihir Kanzariya

If you use Cursor, Codex, or Claude, you’ve felt this: every new session starts with re-explaining everything.

BlocPad CLI keeps context, tasks, and decisions tied to your project — not lost in chat history.

Jon George

As others have stated, the thing that breaks most is tracking why decisions were made. "We tried X, it failed because of Y, so we went with Z" is the context that prevents wasted cycles.

I've been generating context files with Claude Code at key decision points and at the end of sessions, basically creating a handoff document that lives in the repo. It works, but it's a manual habit that depends on discipline.

It's the same problem as onboarding a new developer at work. They can read the code all day. Understanding why it looks the way it does is completely different and usually consumes the most time and effort.

Mihir Kanzariya

@jon_george This hits the core issue.

Code tells you what exists.
Decision history tells you why it exists — and that’s what prevents repeating dead ends.

Your handoff-doc habit is exactly what teams do for onboarding, just compressed into solo/AI workflows. It works, but only if discipline holds.

The interesting shift is treating those “we tried X → failed because Y → chose Z” moments as first-class artifacts, not optional notes.

That’s the layer AI actually needs to avoid wasted cycles.

Caspar Jee

This really resonates. Context loss is still one of the biggest productivity drains when using AI tools day to day. Keeping the project state explicit and versioned alongside code feels like a very natural direction.

Mihir Kanzariya

@chaw_chan_jee Exactly — once context is explicit and versioned, AI stops being a reset button and starts being a real collaborator.

Marces William

I think by now, both with complex topics and with topics that unfold over a longer period of time (quarters, months), for me it’s really about managing to connect all the context again. Especially when it comes to making iterative progress while still constantly working with the original or respective input from x weeks ago.

Sometimes it feels as if my coworker has been on a long vacation…lol

Mihir Kanzariya

@marces_wiliam That "coworker back from vacation" feeling nails it.

The problem isn’t picking work back up, it’s reloading why things are the way they are weeks later. Without decision history and constraints, AI (and humans) end up re-discovering instead of iterating.

Long-running projects need continuity of thinking, not just code.