
Nordlys Hypernova
The best model for coding
60 followers
Nordlys Hypernova is the top-scoring Mixture of Models for coding, reaching 75.6% on SWE-bench. It selects the best LLM per task and integrates into existing IDEs and agents like Claude Code, Cursor, and more.







Been using it for a week now and it's so fast, great, and efficient! I'm an operations manager at Trofi, and it's been helping me a lot on tasks such as data analytics and data management. Can't wait for the future releases!
@amar_chaib Hey Amar, we're really happy to hear that you're loving the product, and we'd love to know anything we can improve on! More models coming out soon.
@amar_chaib Happy that you liked it 😁
@amar_chaib Thank you for your comment, Amar. Stay tuned!
Been trying this out for some data prep tasks, and I've been impressed. I needed to merge and clean a few messy CSV files for a uni project, and it gave me a complete, workable Python script right away. Saved me the usual Stack Overflow digging. I'm a third-year AI student, so I'm testing it on various things; it's good for breaking down tasks, but I've also used it to get a full draft for smaller projects, which is really helpful. It's become my go-to first step. Curious to see where the team takes it next, good luck with what's coming 🙌
@raoufog Thank you so much, Raouf! Glad to hear that Hypernova helped you with your projects. We can't wait to deliver even stronger and better models for our users. Stay tuned ;)
Spoke with the team behind Nordlys Hypernova and you could tell how passionate they are about this problem. I honestly loved asking technical questions about the intuition behind clustering as a router, and it's just pure genius!
Of course, I also tried this in Cursor for a week, and I did feel there was a noticeable improvement compared to the default models, especially after the consistency tweak.
Let's go, guys!!!
Artificial Intelligence and Large Language Models have always been a major center of interest and fascination in my life, as they literally combine what I love the most: creativity, maths, and problem-solving. Just like art and sculptures, LLMs can be built and shaped in different ways, through algorithms, datasets, and architectures. This differentiation in architecture is what makes some LLMs better than others at specific tasks, even when those others score higher on general benchmarks.
So the question is: how can you choose the best model for a specific task? Some ideas that might come to mind are heuristics based on speed and cost, which are too crude and inefficient for very large models, or an LLM acting as a router, which is often too expensive and too slow.
The idea that came to our mind was: why not try traditional machine learning clustering and evaluate each LLM on each cluster? This way, the router scales with the size of the data. We already know how good each model is within every cluster, so we simply pick the one optimized for cost, quality, and speed (the best model for that task).
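A minimal sketch of that selection step, assuming per-cluster profiles already exist. The cluster names, model names, numbers, and weights below are purely illustrative, not Nordlys's real data:

```python
# Hypothetical per-cluster LLM profiles. Quality is a pass rate on sampled tasks,
# cost is $ per 1M tokens, latency is seconds per task -- all numbers illustrative.
CLUSTER_PROFILES = {
    "data-wrangling": {
        "model-a": {"quality": 0.71, "cost": 3.0, "latency": 12.0},
        "model-b": {"quality": 0.68, "cost": 0.8, "latency": 6.0},
    },
    "api-integration": {
        "model-a": {"quality": 0.74, "cost": 3.0, "latency": 11.0},
        "model-b": {"quality": 0.52, "cost": 0.8, "latency": 7.0},
    },
}

def pick_model(cluster: str, w_quality=1.0, w_cost=0.05, w_latency=0.02) -> str:
    """Pick the model with the best quality/cost/speed trade-off for a cluster."""
    def score(stats):
        # Reward quality, penalize cost and latency; the weights set the trade-off.
        return (w_quality * stats["quality"]
                - w_cost * stats["cost"]
                - w_latency * stats["latency"])
    candidates = CLUSTER_PROFILES[cluster]
    return max(candidates, key=lambda name: score(candidates[name]))

print(pick_model("data-wrangling"))   # "model-b": cheaper and faster, nearly as good
print(pick_model("api-integration"))  # "model-a": the quality gap outweighs its cost
```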
Here comes Nordlys Hypernova, a Mixture of Models for coding and the highest-scoring model on SWE-bench at 75.6%, making it the best model for coding. Faster and cheaper than Opus, Nordlys has become the go-to coding model and can be easily integrated with a single command into existing IDEs and agents like Claude Code, Cursor, and more.
Try it out!
Nordlys Labs is a story that has been shaped over time through many rises and falls, and Hypernova is a new chapter that we would like to share with every developer on the planet.
Our philosophy behind the MoM (Mixture of Models) for coding starts from the belief that a general router used across different domains is worse than a specialized router for each domain. So here is the recipe we followed:
Get a coding dataset
Embed it
Cluster it
Take some tasks from each cluster
Run the LLMs you want to route between against those tasks
Now you have what we call LLM profiles. At inference time, simply embed the request you receive, identify which cluster it is closest to, and pick the best LLM to answer it. This way, we can easily add new LLMs and keep getting the best results from the current ones.
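A minimal end-to-end sketch of that recipe, not the production implementation: it assumes an off-the-shelf sentence-transformers embedding model and scikit-learn k-means, and `load_coding_dataset`, `sample_tasks_from_cluster`, and `evaluate_llm` are hypothetical helpers standing in for the real dataset and evaluation harness:

```python
# Sketch of the offline profiling step and the online routing step.
# Assumes sentence-transformers and scikit-learn; the dataset loader, task
# sampler, and evaluation helper are hypothetical placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model works here
CANDIDATE_LLMS = ["model-a", "model-b", "model-c"]   # hypothetical model names

# --- Offline: embed the coding dataset, cluster it, profile each LLM per cluster ---
coding_tasks = load_coding_dataset()                  # placeholder: list of task strings
X = embedder.encode(coding_tasks)                     # one vector per task
kmeans = KMeans(n_clusters=20, random_state=0).fit(X)

profiles = {}                                         # cluster id -> {llm: score}
for cluster_id in range(kmeans.n_clusters):
    sample = sample_tasks_from_cluster(coding_tasks, kmeans.labels_, cluster_id)
    profiles[cluster_id] = {llm: evaluate_llm(llm, sample) for llm in CANDIDATE_LLMS}

# --- Online: embed the request, find its nearest cluster, pick the best LLM for it ---
def route(request: str) -> str:
    vec = embedder.encode([request])                  # shape (1, dim)
    cluster_id = int(kmeans.predict(vec)[0])
    return max(profiles[cluster_id], key=profiles[cluster_id].get)
```

Under this setup, adding a new LLM only requires running it on the sampled tasks and extending the profiles; the routing logic stays unchanged.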
What we would like to do next is improve our product with more coding data!
Thank you.
Hi,
Checked out Nordlys Labs! Solid take on Mixture of Models with smart routing and an OpenAI-compatible API you can drop in with minimal friction. The idea of dynamically selecting the best model per prompt with cost controls and real-time analytics is something many teams building AI products will find compelling.
A couple of quick thoughts that might help increase clarity and conversions:
• The hero could be more outcome-driven: tell visitors what they achieve (e.g., optimized quality + cost vs a single model) in concrete terms.
• Adding a proof section with early metrics or benchmarks (performance/cost savings) would boost trust.
• A slightly stronger CTA promise (like “Optimize your AI results in minutes”) could improve trial starts.
If you’d like, I can pull together a short copy audit with specific headline and CTA options tailored to developers and product teams.
Happy to share something actionable if that sounds useful.
Best,
Paul
@paul91z thank you!
@botir_khaltaev