Nordlys Hypernova

The best model for coding

Nordlys Hypernova is the top-scoring Mixture of Models for coding, reaching 75.6% on SWE-Bench. It selects the best LLM for each task and integrates into existing IDEs and agents such as Claude Code, Cursor, and more.

Botir Khaltaev
I mean, where to start? I started going down this rabbit hole of LLM routing around February 2025, and oh boy has it been a journey. First came the classic LLM-as-router, until we realized: how is an LLM (or I) supposed to judge, at scale, which model is best for a given prompt? Then we moved to classifiers, but classifying prompts is hard, and even harder if you want your router to generalize. Next we pivoted into some sort of prompt optimizer or orchestrator, which didn't make sense either; at this point we were just lost, and this was July/August 2025. Then we came upon a paper that saved our lives and opened up a whole new perspective for me and the team: what if we just do good old traditional ML clustering and evaluate each LLM on each cluster? That means your router scales with the size of the data and generalizes pretty well, which is exactly what happened. Now here we are, building what we coined Mixture of Models: blending the strengths of all the labs into one big mega model that is all-knowing.
Amar Chaib

Been using it for a week now and it's fast, great, and efficient! I'm an operations manager at Trofi, and it's been helping me a lot with tasks such as data analytics and data management. Can't wait for the future releases!

Botir Khaltaev

@amar_chaib Hey Amar, we're really happy to hear that you're loving the product. We'd love to hear anything we can improve on! More models coming soon.

Mohamed Atoui

@amar_chaib Happy that you liked it 😁

Zizou Brahmi

@amar_chaib Thank you for your comment, Amar. Stay tuned!

Mohamed Abderraouf Ouaguenouni

Been trying this out for some data-prep tasks, and I've been impressed. I needed to merge and clean a few messy CSV files for a uni project, and it gave me a complete, workable Python script right away. Saved me the usual Stack Overflow digging. I'm a third-year AI student, so I'm testing it on various things: it's good for breaking down tasks, but I've also used it to get a full draft for smaller projects, which is really helpful. It's become my go-to first step. Curious to see where the team takes it next, good luck with what's coming 🙌

Zizou Brahmi

@raoufog Thank you so much, Raouf, glad to hear that Hypernova helped you with your projects! We can't wait to deliver stronger and better models for our users. Stay tuned ;)

Botir Khaltaev
@raoufog Thanks, Mohamed, I'm glad you had a good experience. For any feedback, contact the team and we will get on it immediately.
Daniel

Spoke with the team behind Nordlys Hypernova, and you could tell how passionate they are about this problem. I honestly loved asking technical questions about the intuition behind clustering as a router; it's just pure genius!

Ofc I also tried it in Cursor for a week, and I did feel a noticeable improvement over the default models, especially after the consistency tweak.

Let's go, guys!!!

Botir Khaltaev
@new_user___2022025318f0e86afd7839a Thanks so much, Daniel, really kind words. We are improving the models every day, stay tuned!
Zizou Brahmi

Artificial Intelligence and Large Language Models have always been a major center of interest and fascination in my life, as they literally combine what I love the most: creativity, maths, and problem-solving. Just like art and sculpture, LLMs can be built and shaped in different ways, through algorithms, datasets, and architectures. This differentiation in architecture is what makes some LLMs better at specific tasks than models that score higher on general benchmarks.

So the question is: how do you choose the best model for a specific task? Some ideas come to mind: heuristics based on speed and cost, which are too dated and inefficient for very large models; or LLMs acting as routers, which are often too expensive and too slow.

The idea we came up with was: why not try traditional machine-learning clustering and evaluate each LLM on each cluster? This way, the router scales with the size of the data. We already know how good each model is within every cluster, so we simply pick the one that best balances cost, quality, and speed (the best model for that task).
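The per-cluster selection can be sketched like this. The cluster names, model names, profile numbers, and weights below are all made up for illustration; nothing here reflects real benchmark results:

```python
# Hypothetical per-cluster LLM profiles: quality is higher-is-better,
# cost and latency are lower-is-better. All values are illustrative.
PROFILES = {
    "data-wrangling": {
        "model-a": {"quality": 0.82, "cost": 1.0, "latency": 2.1},
        "model-b": {"quality": 0.74, "cost": 0.2, "latency": 0.8},
    },
    "algorithms": {
        "model-a": {"quality": 0.91, "cost": 1.0, "latency": 2.1},
        "model-b": {"quality": 0.65, "cost": 0.2, "latency": 0.8},
    },
}

def pick_model(cluster, w_quality=1.0, w_cost=0.1, w_latency=0.1):
    """Pick the model that best balances quality, cost, and speed in a cluster."""
    def score(stats):
        # Quality rewards the score; cost and latency penalize it.
        return (w_quality * stats["quality"]
                - w_cost * stats["cost"]
                - w_latency * stats["latency"])
    return max(PROFILES[cluster], key=lambda m: score(PROFILES[cluster][m]))
```

With these toy numbers, the cheap model wins the easier cluster while the stronger model wins the harder one; tuning the weights shifts the router between cost-saving and quality-maximizing behavior.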

Here comes Nordlys Hypernova, a Mixture of Models for coding and the first model to score 75.6% on SWE-Bench, making it the best model for coding. Faster and cheaper than Opus, Nordlys has become the go-to coding model and integrates with a single command into existing IDEs and agents like Claude Code, Cursor, and more.

Try it out!

Mohamed Atoui

Nordlys Labs is a story that has been shaped over time, through many rises and falls, and Hypernova is a new chapter that we would like to share with every developer on the planet.

Our philosophy with the MoM (Mixture of Models) model for coding starts from the belief that a general router spanning many domains is worse than a specialized router for each domain. So we followed this recipe:

  1. Get a coding dataset

  2. Embed it

  3. Cluster it

  4. Take some tasks from each cluster

  5. Run the LLMs you want to route between against those tasks

Now you have what we call LLM profiles. At inference time, simply embed the incoming request, identify which cluster it is closest to, and pick the best LLM to answer it. This way, we can easily add new LLMs and keep getting the best results from the current ones.
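The inference-time step of the recipe above can be sketched as follows. The 2-D "embeddings", centroids, and model names are toy placeholders (real embeddings come from an embedding model, and the per-cluster winners come from the offline evaluation runs), not the team's actual profiles:

```python
import math

# Toy 2-D embeddings; in practice these come from an embedding model.
CENTROIDS = {
    "web-dev": (0.9, 0.1),
    "data-science": (0.1, 0.9),
}

# Winner per cluster, determined by the offline evaluation step.
BEST_MODEL = {
    "web-dev": "model-a",
    "data-science": "model-b",
}

def nearest_cluster(embedding):
    """Find the cluster whose centroid is closest to the request embedding."""
    return min(CENTROIDS, key=lambda c: math.dist(embedding, CENTROIDS[c]))

def route(embedding):
    """Return the pre-evaluated best model for the request's cluster."""
    return BEST_MODEL[nearest_cluster(embedding)]

# route((0.8, 0.2)) -> "model-a"
# route((0.2, 0.7)) -> "model-b"
```

Because the routing decision is just a nearest-centroid lookup, it adds negligible latency compared to an LLM-as-router, and adding a new LLM only requires re-running the offline evaluation to update `BEST_MODEL`.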

What we would like to do next is improve our product with more coding data!

Thank you.

Paul

Hi,

Checked out Nordlys Labs! Solid take on Mixture of Models with smart routing and an OpenAI-compatible API you can drop in with minimal friction. The idea of dynamically selecting the best model per prompt with cost controls and real-time analytics is something many teams building AI products will find compelling.

A couple of quick thoughts that might help increase clarity and conversions:

• The hero could be more outcome-driven, tell visitors what they achieve (e.g., optimized quality + cost vs a single model) in concrete terms.
• Adding a proof section with early metrics or benchmarks (performance/cost savings) would boost trust.
• A slightly stronger CTA promise (like “Optimize your AI results in minutes”) could improve trial starts.

If you’d like, I can pull together a short copy audit with specific headline and CTA options tailored to developers and product teams.

Happy to share something actionable if that sounds useful.

Best,

Paul

Botir Khaltaev

@paul91z thank you!

Paul

@botir_khaltaev 

Anytime 👍
The product itself is strong, the main opportunity feels more about how fast new teams grasp the ROI rather than feature depth.
Always happy to exchange thoughts if useful.
