
Giselle
Build and run AI workflows. Open source.
977 followers
Built to design and run AI workflows that actually complete. Zero infra setup—just build and run. Handle complex, long-running tasks with a visual node editor and real-time tracking. Combine models from multiple providers in one canvas.

This is really well-executed. The visual node editor for multi-model workflows is exactly what the agent ecosystem needs right now.
Quick question: How do you handle state persistence across long-running tasks? We're launching Nex Sovereign tomorrow (cognitive OS with persistent memory + governance layer) and thinking about how workflow builders like this could interface with stateful agents.
Upvoted! 🚀
Giselle
@nexaios Thanks a lot — really glad it resonated.
On state persistence: each node’s runtime state is stored as an independent data object, so long-running workflows can rehydrate and resume by referencing the latest persisted state for the nodes they need (rather than relying on an in-memory “session”). That makes it easier to checkpoint, inspect, and replay parts of a run as it progresses.
“Cognitive OS + persistent memory + governance” is a super important direction — especially if you want these agents to be usable in real business contexts. Nex Sovereign sounds very relevant.
Giselle is intentionally designed to be pluggable so we can interface with stateful agents/memory layers and governance systems as they emerge. Would love to explore how Giselle workflows could hand off / sync state with Nex Sovereign once you launch. And congrats on the launch!
@toyamarinyon We just went live: https://www.producthunt.com/products/nex-sovereign
Re: integration — the architecture is live in the demo now. Some concrete integration points I'm seeing:
Giselle workflows → Nex Memory Graph: Your workflows could query persistent context (user goals, preferences, past decisions) before executing steps
Nex Boardroom → Giselle orchestration: High-stakes workflow actions could require mandate approval from Nex's CEO/CTO sub-agents (built-in governance for automated workflows)
PDAR Thinking → Workflow transparency: Expose Nex's reasoning traces (Perceive → Decide → Act → Reflect) that Giselle workflows can consume or display
The 9-app architecture is visible in the demo. The Boardroom (App 9) and Memory Graph (App 4) are the most relevant for stateful workflow orchestration.
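As a purely hypothetical sketch of the Boardroom integration point above — a workflow pausing a high-impact action until a governance layer grants mandate approval. Neither Giselle nor Nex Sovereign exposes this exact API; every name here is illustrative.

```python
# Hypothetical stand-in for Nex's CEO/CTO sub-agents: in this toy policy,
# only low-risk actions are auto-approved.
def boardroom_review(action: str, risk: str) -> bool:
    return risk == "low"

def run_step(action: str, risk: str) -> str:
    """Gate a workflow action on governance approval before executing it."""
    if boardroom_review(action, risk):
        return f"executed: {action}"
    return f"blocked: {action} awaiting mandate approval"
```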
Would love to show you a deeper walkthrough when you have 15 mins. DM or email works: manvendramodgil.ai@gmail.com
Congrats again on Giselle, really impressed by the execution. 🚀
Giselle
@nexaios Thanks for the upvote and the kind words! Would love to explore how Giselle and Nex Sovereign could work together. We'll check out your launch!
@gyu07 We're live!
The Boardroom governance layer (App 9) is exactly what I mentioned — internal CEO/CTO sub-agents make recommendations, users mandate approval. Could be interesting for workflow orchestration (e.g., Giselle workflows requesting governance approval before high-impact actions).
Would love your feedback on the architecture!
Giselle
@nexaios Thanks so much for the support, really appreciate it.
Giselle
@nexaios san, Thanks so much! Nex Sovereign sounds really impressive – can't wait to see your launch tomorrow! 🚀
@codenote We're live!
The 9-app architecture is in the demo (Mind, Memory Graph, PDAR Thinking, Boardroom, etc.). Really curious how Giselle's multi-model orchestration could interface with Nex's cognitive layer.
Would love your feedback! 🚀
Giselle
@nexaios Congrats on the launch! I'll check it out!!
You’re basically packaging “LLM orchestration as a legible object,” turning invisible glue code into something humans can reason about.
Knife-edge question: what’s your plan for reproducibility (snapshots of models/inputs/connectors) so workflows don’t drift into “works on my Tuesday”?
I love this because I hate myself. (Kidding. Mostly.)
Giselle
@teodor_hascau Exactly, Giselle's core value is making LLM orchestration something humans can easily experiment with and iterate on. Sometimes you want AI to suggest the optimal approach, but ultimately, if humans can't understand it, it's hard to make good decisions. Our plan is to first make it easy to translate human ideas into flows, then layer in AI-powered suggestions.
As for reproducibility, it's a tough problem. We have internal state management, but we haven't yet shipped robust snapshotting or version control features for users. It's a pain point everyone's grappling with, and definitely on our roadmap.
Giselle
@teodor_hascau san, Thanks for the comment! You nailed it exactly.
On reproducibility, we're planning features like "app version history," "app templates," and "template sharing + community." As for reproducibility of flow execution results—since LLM outputs depend on the model itself, we're not planning to intervene there for now. That said, having an environment where workflows run reliably day after day is something I need myself as a daily Giselle user, so we're definitely working on it.
Giselle
@teodor_hascau Great question — and yes, that’s absolutely the knife-edge.
Today, Giselle already persists full step-level inputs and outputs for every execution (including intermediate states). The UI for inspecting, diffing, and selectively re-running those steps isn’t fully exposed yet, but it’s very much on the roadmap.
We’re pretty realistic about this part: bit-perfect reproducibility with LLMs is a mirage. Models change, providers evolve, and some entropy is inherent. Rather than pretending we can freeze time, our goal is to make every run legible — so you can see exactly what happened, why it happened, and what changed.
In other words, we may not guarantee “works on every Tuesday,” but we can guarantee you’ll know why it worked last Tuesday and what drifted by Thursday — and give you the tools to course-correct quickly.
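One way to picture that "know what drifted" goal (a sketch under assumed names, not Giselle's actual schema): persist a small manifest per run, then diff Tuesday's manifest against Thursday's.

```python
def make_manifest(model: str, prompt: str, inputs: dict) -> dict:
    """Capture the ingredients of a run so later runs can be compared to it."""
    return {"model": model, "prompt": prompt, "inputs": inputs}

def diff_manifests(old: dict, new: dict) -> dict:
    """Return the fields whose values drifted between two runs."""
    return {k: (old[k], new[k]) for k in old if old[k] != new[k]}
```

The output won't be bit-identical across runs, but the diff tells you whether the model, the prompt, or the inputs changed — which is usually the first question when a workflow stops behaving.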
Thanks for calling this out. I’ve personally been burned by this class of failure more times than I’d like to admit, and it’s one of the core problems we want to solve for builders trying to tame very energetic LLMs.
Oh, we’re actually working on a couple of AI startups in tourism and e-commerce. We’ll check out your product, thanks.
Giselle
@mykyta_semenov_ Thanks! Giselle's workflows can be used for content creation and analytics in tourism and e-commerce. We'd love to explore creating samples and templates for those use cases.
Giselle
@mykyta_semenov_ san, Thank you!
That's awesome! Please let me know when you launch — I'd love to check it out! And thanks for taking a look at Giselle :)
Giselle
@mykyta_semenov_ Thank you for sharing. Both tourism and e-commerce are ideal candidates for workflow-driven AI.
If you're interested, we can run it on a private cloud as well as a cloud version. Feel free to share more details or discuss your requirements.
@toyamarinyon My team and I will take a look at the beginning of January, and I’ll write to you on LinkedIn. I’ve already added you there.
FUNCTION12
I loved the neon sign-like effect, it made the content very easy to recognize! Is it possible to change the color of the cards inside as well? Looking forward to the launch!
Giselle
@shawn_park_f12 Thank you so much for your kind words! We've finally launched! 🎉 Please give it a try and let us know what you think!
Giselle
@shawn_park_f12 Thanks so much! I'm really glad you picked up on that - it's a detail I really worked on. Hoping to add customization features like other editors down the line. Will keep pushing forward with updates!
Giselle
@shawn_park_f12 Thanks for the early comment — we're live now! FUNCTION12 looks awesome too. Hope you get a chance to try it out!
Congrats on the launch! Building something you actually wanted to use really shows in how you’ve framed the problem. The focus on visibility and debuggability especially hits; that’s where most workflow tools fall apart in real use.
Giselle
@syed_hassan9 san, Thank you so much!
I'm really glad it resonates with you. We've built this to handle our own use case—running complex workflows for extended periods—and we believe it's now at a level where others can benefit from it too.
Today marks just the beginning, and we'll keep refining the product from here.
@codenote Love that mindset. Building it for sustained, real-world use is usually what separates tools that look good from ones that actually stick. Excited to see how it evolves as more people put it through real workflows. Best of luck with what’s ahead 🙌
Giselle
@syed_hassan9 Totally agree — great look & feel matters a lot, but we’re even more focused on making sure Giselle holds up in real, day-to-day workflows (not just demos).
There are definitely some technical challenges in building something that can reliably run complex, long-running tasks end-to-end, but we’re tackling them head-on as a team — and using AI where it genuinely helps us iterate faster and make better decisions.
Really excited to keep improving it as more people put it through real use cases. Thanks again for the encouragement!
Giselle
@syed_hassan9 san! We'll keep pursuing the balance between great design, usability, and building something we actually use ourselves in real-world workflows. Thank you! 🙌
Giselle
@syed_hassan9 Really appreciate this — means a lot to hear. We'll keep pushing!
Giselle
@syed_hassan9 Thank you so much. We’ll keep pushing to improve the quality and polish of the product.
Giselle
@richard_wang18 Great question! Right now, we offer token visibility, but model selection and workflow design are up to the user. We're exploring features where AI could recommend optimal workflows and model choices based on your use case—definitely something we want to build toward.
Giselle
@richard_wang18 san, Thank you for your comment!
Currently, we don't have intelligent model selection—users choose their preferred model themselves. That said, smart auto-selection and switching sounds like a great idea, so we'll definitely consider adding it. Thanks for the suggestion!
Giselle
@richard_wang18 Thanks a lot — really appreciate it!
Right now, our “model selection” is intentionally manual. As we’ve been building Giselle, we’ve developed a pretty practical sense of each model’s strengths (quality vs. latency vs. cost, tool-use reliability, long-context behavior, etc.), and we wanted to make it easy for builders to choose the right model per node/workflow step—especially since different parts of a long-running workflow often benefit from different models/providers.
So to answer your question: we don’t have a single routing algorithm in place today that automatically optimizes across cost / speed / cache hits. Giselle focuses on making multi-model orchestration visible and controllable rather than “magic.”
That said, we agree this is a great direction. We’re seeing how popular modes like Cursor’s “Auto” are, and we’re actively considering adding an Auto Mode / model router in the future that could optimize for things like:
- execution cost
- latency / speed
- cache hits / token efficiency
- (and potentially reliability per task type)
If you have a specific workflow in mind (e.g., classify → extract → write → verify), I’d love to hear what signals you’d want the router to prioritize.
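For what it's worth, a minimal sketch of what such a router could look like — a weighted score over per-model stats. The model names and numbers are entirely made up; a real Auto Mode would also need live latency data, cache awareness, and per-task reliability signals.

```python
# Hypothetical model catalog: cost per 1k tokens (USD), latency (s), quality (0-1).
MODELS = {
    "fast-small": {"cost": 0.2, "latency_s": 0.3, "quality": 0.6},
    "balanced":   {"cost": 1.0, "latency_s": 0.8, "quality": 0.8},
    "frontier":   {"cost": 5.0, "latency_s": 2.5, "quality": 0.95},
}

def route(cost_w: float, latency_w: float, quality_w: float) -> str:
    """Pick the model minimizing weighted cost and latency, rewarding quality."""
    def score(name: str) -> float:
        m = MODELS[name]
        return cost_w * m["cost"] + latency_w * m["latency_s"] - quality_w * m["quality"]
    return min(MODELS, key=score)
```

The interesting part is that different steps of one workflow would call `route` with different weights — a classify step might weight cost and speed, a final write step quality.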
GraphBit
This resonates. Visual workflows only become useful once debuggability and execution transparency are first-class, not afterthoughts. Open source plus real-time visibility is a strong combination, especially for long-running, multi-step flows where trust is built by being able to see what ran, why it ran, and where it failed. Curious to see how this evolves as teams push it beyond experimentation into shared production workflows.
Giselle
@musa_molla Thanks for this. Debuggability and execution transparency being first-class is something we deeply care about too. Beyond open source and real-time visibility, we want to strengthen features like debugging and eval. But first, we're focused on making the experience of experimenting and iterating with AI as smooth as possible. As teams move toward production use, transparency and debugging become even more critical. We're also exploring the ability to expose workflows as APIs, and through that process, we aim to improve stability and transparency further.
Giselle
@musa_molla san, Thank you! Transparency was our top priority when building Giselle, so I'm glad that resonated with you. We're continuing to improve it for production use. Would love to hear your feedback if you give it a try!
Giselle
@musa_molla Thanks for the thoughtful take — you’re hitting the core of it. Visual workflows only really become “production-ready” when you can clearly see what ran, why it ran, and exactly where it failed so you can iterate with confidence.
Giselle already keeps the underlying run data, but we don’t yet have a user-facing UI that makes debugging and execution transparency truly first-class. That’s a priority for us next: better run history, step-by-step inspection, and clearer failure points so teams can move from experimentation to shared workflows with trust.
If there are specific debugging views you’ve found most useful (e.g., per-node logs, inputs/outputs diffs, retries), I’d love to hear what you’d want to see.