
Giselle
Build and run AI workflows. Open source.
977 followers
Built to design and run AI workflows that actually complete. Zero infra setup—just build and run. Handle complex, long-running tasks with a visual node editor and real-time tracking. Combine models from multiple providers in one canvas.

Giselle
@dubd59 Thank you so much — we’re really honored!
If you don’t mind, I’d love to hear what stood out to you most. Was it the zero-infra setup, the visual node editor, real-time tracking, or the ability to mix models from multiple providers in one workflow?
Giselle
@dubd59 Thank you so much for your kind comment. It really means a lot to us.
We’re truly happy to hear that!
Giselle
@dubd59 san
Thanks so much for the kind words — it really means a lot! We’re going to keep shipping updates and improving Giselle, so stay tuned. If you get a chance to try it, I’d love to hear what you build and any feedback you have.
Giselle
@dubd59 This means a lot — thank you! 🙏
Giselle
@dubd59 san! Wow, thank you! 🎉 #2 product of the day is incredible – we're so grateful for the amazing support from the community!
Congrats on the launch! "Zero infra setup" is exactly what I've been looking for—most AI workflow tools are way too heavy or complex to just get started. The visual node editor looks super intuitive. Since it's open source, are you planning to add community-contributed nodes/integrations soon?
Giselle
@sumit_deepenrich san, thank you so much for your comment! It really means a lot to us.
As for adding community-contributed nodes and integrations, it's not on our immediate roadmap, but if we can design the architecture well, introducing a plugin system could really open up a lot of possibilities. Thanks for the great idea!!
@codenote Makes total sense! A plugin system would definitely be powerful down the road. I'll keep an eye out for updates. Cheers, and congrats again on the launch! 🚀
Giselle
@sumit_deepenrich san, thank you so much! We'll do our best!! 🚀
Giselle
@codenote
Agreed — community-contributed nodes and integrations could unlock a lot of value once we have the right extensibility model in place.
Giselle
@washizuryo Thanks for the support! Looking forward to refining the product together in 2026.
Giselle
@sumit_deepenrich Thanks for the kind words! "Zero infra setup" was a big focus for us — glad it resonates.
Plugin system is definitely something we're excited about exploring. Feedback like yours helps us prioritize. Stay tuned!
SigniFi
Looks cool! Thanks for building such an easy-to-use workflow tool. The inputs and outputs are really clear.
Giselle
@yoang_loo san
Thanks so much! That really means a lot - we were aiming for that "cool" feeling while designing it, so I'm thrilled you picked up on that. Would love to hear your feedback once you try it out!👂
Giselle
@yoang_loo san, thank you so much! We put a lot of effort into the UX, so I'm really happy to hear that!!
Giselle
@yoang_loo
Thanks! Really happy to hear that 😊
We put a lot of thought into making the inputs and outputs easy to understand.
Feel free to share any feedback as you explore!
Giselle
@yoang_loo Thanks! Clarity was a big focus for us — glad it shows.
This is mind-blowing! 🤯 The visual builder for chain-of-thought agents is exactly what the dev ecosystem needed.
I'm really curious about the 'GitHub-native' RAG part. How do you handle context limits when indexing massive repositories? Do you have intelligent chunking for code specifically?
Congrats on the launch, team!
Giselle
Thanks a lot — really appreciate it!
On the GitHub-native RAG side, we don’t try to cram an entire repo into an LLM context at indexing time. We keep the ingestion unit small (file/chunk) and embed those.
A few guardrails that make this hold up on massive repos:
- Skip huge files upfront (we enforce a max blob size — 1MB in Studio), so binaries / generated artifacts don’t blow up indexing.
- Handle GitHub Tree API limits explicitly: if the recursive tree is truncated, we fail loudly and recommend an initial import via tarball download (or git clone) instead.
- Text-only embeddings: if it can’t be decoded as UTF‑8, we skip it.
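Those guardrails boil down to a simple pre-embedding filter. A minimal sketch of the idea (the constant and function name are illustrative, not Giselle's actual code):

```python
# Illustrative pre-embedding filter based on the guardrails above.
# MAX_BLOB_BYTES mirrors the 1MB cap mentioned for Studio; the names are hypothetical.
MAX_BLOB_BYTES = 1_000_000

def should_embed(blob: bytes) -> bool:
    """Return True only for blobs that are small enough and decode as UTF-8."""
    if len(blob) > MAX_BLOB_BYTES:
        return False  # skip huge files: binaries / generated artifacts
    try:
        blob.decode("utf-8")
    except UnicodeDecodeError:
        return False  # text-only embeddings: skip anything that isn't valid UTF-8
    return True
```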
Re “smart” code chunking: today it’s intentionally robust rather than AST/parser-based — line-based splitting + overlap + a hard char cap (defaults: 150 lines, 30 overlap, 6000 chars). If a chunk still exceeds the cap, we shrink progressively and fall back to character splitting for pathological cases.
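The line-based splitting described above can be sketched roughly like this, assuming the stated defaults (150 lines, 30 overlap, 6000-char cap). This is a simplified illustration, not Giselle's implementation, and the function name is hypothetical:

```python
def chunk_lines(text: str, max_lines: int = 150, overlap: int = 30,
                max_chars: int = 6000) -> list[str]:
    """Line-based chunking with overlap, a hard char cap, and a char-split fallback."""
    lines = text.splitlines(keepends=True)
    chunks = []
    step = max_lines - overlap  # slide the window so adjacent chunks overlap
    for start in range(0, len(lines), step):
        chunk = "".join(lines[start:start + max_lines])
        if len(chunk) > max_chars:
            # Progressively shrink the line window until the chunk fits the cap.
            window = max_lines
            while len(chunk) > max_chars and window > 1:
                window //= 2
                chunk = "".join(lines[start:start + window])
            if len(chunk) > max_chars:
                # Pathological case (e.g. a minified single-line file):
                # fall back to raw character splitting.
                chunks.extend(chunk[i:i + max_chars]
                              for i in range(0, len(chunk), max_chars))
                chunk = ""
        if chunk:
            chunks.append(chunk)
        if start + max_lines >= len(lines):
            break  # the current window already covers the tail
    return chunks
```

The char-split fallback is what keeps the cap a true hard limit even when a "line" is pathologically long.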
Happy to share more on the query-time side too (top‑k / thresholds / reranking) if you’re curious.
Giselle
@airtonvancin RAG itself is complex, but we've tried to keep the implementation relatively simple. There's still plenty of room for improvement, but even now we're able to pass enough context to the LLM to noticeably improve generation quality. Excited to keep pushing this further!
Giselle
@airtonvancin san! Thank you so much!
To be honest, we don't have code-specific intelligent chunking yet—we're currently splitting by regular chunk sizes. That said, it's been working well for our own use cases in production. We're definitely looking to improve the accuracy and make it smarter over time!
Giselle
@airtonvancin Thanks for the kind words! 🤯
For the deep dive into the 'GitHub-native' RAG architecture and how we manage context limits, I'd recommend reading the article. It explains all the specifics of our implementation.
https://giselles.ai/blog/github-vector-store-vercel-constraints
Giselle
@vouchy san! Thanks for your comment!
I can't recall a specific example off the top of my head, but workflows that pass large amounts of context to the model or combine multiple nodes with Deep Thinking tended to fail when execution time stretched to several hours. With the current version of Giselle, we can now complete workflows that run for several hours or more—theoretically, there's no limit on how long they can run.
Giselle
@vouchy Yeah — we hit this a lot.
I’m on Tadashi’s team, and the breaking point for me was trying to build a simple “GitHub comment → run a few AI steps → post a reply back to GitHub” flow in tools that should have made it easy.
In practice, I kept running into things like:
- I couldn’t tell why a step failed — too much was hidden behind “magic”
- I managed to build something, but then had no clue how to actually run it reliably
- deployment, credentials, and hosting were suddenly my problem
- it felt like “when do I get to use this?” kept getting pushed out
That experience is a big reason we built Giselle the way we did:
zero infra setup, start building immediately, connect to real systems right away.
And when something goes wrong, you can see what’s happening in real time instead of guessing.
Giselle
@vouchy Especially with agent workflows — it's tricky because there's often no clear line between "broken" and "just not working as expected." These were issues we kept running into ourselves. We're committed to making this easier.
NerdyNotes
Congrats on the launch, the focus on clarity and flexibility in a workflow really stands out.
Giselle
@musfk san! Thank you so much!!
While we only have simple features for now, we worked really hard to deliver an easy-to-use UX! I really appreciate your comment.
Giselle
@musfk Thank you!
We really appreciate the kind words. We're glad the focus on clarity and flexibility came through.
Giselle
@musfk Thanks a lot! That’s exactly what we were aiming for — a workflow builder that stays clear and opinionated, but still flexible enough to handle real-world complexity.
Giselle
@musfk Thank you! Really glad that came through. NerdyNotes looks great too — love what you're building!
Looks very simple to use. I'm just wondering: can you build a workflow by writing a prompt, and will it actually come up with the required nodes and suggest integrations?
Giselle
@pasha_tseluyko Thanks for the kind words!
You’re describing a text-to-workflow feature (write a prompt → Giselle generates the nodes + suggests integrations). We don’t support that yet, but it’s one of the higher-priority things we want to tackle as a team. I’ll definitely share an update on Product Hunt once it ships.
Curious to learn from you: if you had that feature today, what prompt would you type in—and what workflow would you want it to run end-to-end?
Giselle
@pasha_tseluyko As Satoshi mentioned, text-to-workflow is definitely on our radar — we're already structuring things with that in mind. Will share an update when it ships!
Giselle
@pasha_tseluyko san! That's exactly the feature we have in mind! We're planning to build an "Enhance Prompt" capability that lets you construct entire workflows from scratch—just describe what you need, and it generates the nodes and suggests integrations for you.