
Coderrr
Open source CLI-first AI coding companion
92 followers
A powerful CLI tool that writes, debugs, and ships code alongside you. Like Claude Code, but free and open source. AI-powered coding agent that analyzes tasks, creates actionable plans, and executes commands.

Coderrr
Hey Product Hunt!
I'm Akash, the maker of Coderrr.
Coderrr started as a side project because I loved tools like Claude Code and OpenAI Codex, but wanted a free, open-source, CLI-first alternative that developers could actually inspect, modify, and extend.
Coderrr runs directly in your terminal and acts like an AI coding partner: it can understand your codebase, help generate or refactor code, explain bugs, and automate common dev tasks without needing a heavy IDE setup.
What surprised me is that after submitting this to an open-source contribution event, it started getting real traction from the community, which pushed me to polish it and launch here.
This is still early, and I'd genuinely love your feedback:
What workflows should Coderrr support next?
What annoys you most about existing AI coding tools?
What would make this part of your daily dev setup?
If you like open-source, CLI tools, or AI for developers, I'd love for you to try it out.
Thanks for checking it out!
- Akash
How is this different from opencode, for instance?
@janschutte Good question. From what I see, the key difference isn't "better AI," but CLI-first + a fully inspectable stack. opencode leans more toward hosted/GUI workflows, while Coderrr feels built for engineers who want scripts, reproducibility, and local control.
Akash, what's the single architectural choice that makes Coderrr fundamentally different under the hood?
Coderrr
@janschutte @gnizdoapp I'd say, Josef, that the single architectural choice that makes Coderrr different from the others is the stateless CLI. It only handles file ops and local prompts, while the API handles all the heavy lifting.
Apart from Ollama, all the other providers are embedded in the API, so you can configure your own provider as you like or use the default LLM.
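For anyone trying to picture that split, here's a rough sketch of the idea: the client only gathers local files and posts a request, and everything model-related stays behind the API. The endpoint and payload shape below are made up for illustration and are not Coderrr's actual interface.

```python
# Illustrative only: a stateless CLI that does file ops + prompt assembly locally
# and delegates all LLM/provider work to a backend API.
import json
import pathlib
import urllib.request

API_URL = "https://example.com/agent/run"  # hypothetical backend endpoint

def run_task(task: str, files: list[str]) -> str:
    # Client side: read the referenced files locally; no model calls happen here.
    context = {p: pathlib.Path(p).read_text() for p in files}
    payload = json.dumps({"task": task, "files": context}).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    # Server side: provider selection, prompting, and orchestration live behind
    # the API, so the CLI itself stays stateless and easy to inspect.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```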
Congratulations on the launch! I guess the only question I have is how it compares to OpenCode, which is the one I was initially thinking of using for local models now that my new MacBook is on its way. I'll test both once it arrives anyway, but I want to know if there are already different workflows or features between the two.
Coderrr
@daniel_contreras1 Hey Daniel, to be very honest, right now Coderrr isn't so different from OpenCode except for a few minimal features. I made this project back in late October 2025 and left it as it was for a long time, until I saw OpenCode launch.
Coderrr is open source, and I invite all dev community members to contribute and solve the problems people face while using Claude Code, OpenAI Codex, or OpenCode.
GitHub Link: https://github.com/Akash-nath29/...
Huge congrats on the launch! Coderrr looks like a super practical CLI-first companion for real-world shipping, not just toy snippets. Excited to see how it fits into devs' daily workflows.
Coderrr
@zeiki_yu Thanks a lot! I really appreciate it!
Kindly share this with any tech enthusiasts in your circle so they can try it out and give feedback!
CLI-first + pluggable providers (Ollama/OpenRouter/etc.) is a solid "inspectable agent" direction.
The scale pain is monorepos/long sessions → context drift; best practice is incremental repo indexing (tree-sitter + ripgrep) + lightweight embeddings + persisted session state keyed by git SHAs.
How are you planning retrieval + cross-terminal memory (local store like sqlite/duckdb + pgvector, or pure filesystem cache)?
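Rough sketch of the SHA-keyed session state I mean, using a local sqlite store; the schema and file name are just illustrative:

```python
# Illustrative only: persist per-session state keyed by the current git commit,
# so switching branches or rebasing naturally invalidates stale context.
import sqlite3
import subprocess

def current_sha(repo: str = ".") -> str:
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], cwd=repo, text=True
    ).strip()

conn = sqlite3.connect("session_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sessions "
    "(sha TEXT, key TEXT, value TEXT, PRIMARY KEY (sha, key))"
)

def save(key: str, value: str) -> None:
    # Upsert the value under the current commit.
    conn.execute(
        "INSERT OR REPLACE INTO sessions VALUES (?, ?, ?)",
        (current_sha(), key, value),
    )
    conn.commit()

def load(key: str) -> str | None:
    # Only state recorded against the current commit is returned.
    row = conn.execute(
        "SELECT value FROM sessions WHERE sha = ? AND key = ?",
        (current_sha(), key),
    ).fetchone()
    return row[0] if row else None
```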
Coderrr
@ryan_thill Hey Ryan!
I am going to handle long contexts via chunking. I will chunk the codebase into multiple parts and then send only the necessary parts to the LLM. To implement this I also have to implement semantic search, so the LLM can resolve user queries like "Find me the file with authentication" and pull in the relevant code snippets when necessary.
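Very roughly, the chunk-then-retrieve flow could look like the toy sketch below; the "embedding" here is just a bag-of-words score, and the real implementation would use a proper embedding model and smarter chunk boundaries:

```python
# Toy sketch: split files into fixed-size chunks, score them against the query,
# and send only the top matches to the LLM.
import math
import pathlib
from collections import Counter

def chunks(path: str, lines_per_chunk: int = 40):
    lines = pathlib.Path(path).read_text().splitlines()
    for i in range(0, len(lines), lines_per_chunk):
        yield path, "\n".join(lines[i:i + lines_per_chunk])

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # placeholder for a real embedding model

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query: str, paths: list[str], k: int = 3):
    q = embed(query)
    scored = [(cosine(q, embed(text)), path, text)
              for p in paths
              for path, text in chunks(p)]
    return sorted(scored, reverse=True)[:k]  # only these chunks go to the LLM
```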
And for the cross-terminal memory, I am planning to just keep a JSON file that tracks the last few conversations. It won't put much pressure on the client end, it's fast (only the last few conversations are stored), and it's easy to handle as well.
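Something along these lines; the file name and format are placeholders, not the final design:

```python
# Illustrative only: append each exchange to a JSON file and keep the last N turns.
import json
import pathlib

MEMORY_FILE = pathlib.Path("~/.coderrr_history.json").expanduser()  # hypothetical path
MAX_TURNS = 10

def remember(user_msg: str, assistant_msg: str) -> None:
    history = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    history.append({"user": user_msg, "assistant": assistant_msg})
    # Truncate so only the last few conversations stay on disk.
    MEMORY_FILE.write_text(json.dumps(history[-MAX_TURNS:], indent=2))

def recall() -> list[dict]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
```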
Congratulations on the launch, @akash_nath29
I had a question regarding the architecture and flexibility of Coderrr. Since it's an open-source CLI agent for developers, it's similar to tools like Gemini CLI or Claude CLI; those tools usually come with their own bundled AI models, which can feel a bit restrictive.
Does Coderrr address this by allowing open AI model connectivity, including support for local models (for example via Ollama)? Also, from a security and integration standpoint, is it possible to connect our own self-hosted models or even subscribed third-party models instead of being locked into a single provider?
Would love to understand how flexible the model integration is
How does Coderrr compare to tools like Aider in terms of multi-file editing capabilities? Can it handle complex refactors across a whole directory?