Thank you again for the support on Alpie Core; the feedback from this community meant a lot to us.
Since then, we have released Alpie, our most advanced product yet: a full AI workspace where you can see Alpie Core working in real workflows, not just isolated prompts. You can use the model with files and PDFs, run research, collaborate with others in shared chats, and keep long-running context organised.
If you're curious how Alpie Core performs beyond single queries, this is where you can try it hands-on.
Alpie is an AI workspace for deep work and real collaboration.
Drop in PDFs, images, code, or links, ask deep questions, and work with your team in real time. Pin important insights, keep chats and files organised in one dashboard, and move through complex problems together. Built for long workflows and shared thinking, Alpie helps teams turn scattered information into clear, lasting decisions, not just answers.
While testing Alpie Core beyond benchmarks, we noticed something unexpected.
On tasks like step-by-step reasoning, reflective questions, and simple planning ("help me unwind after work", "break this problem down calmly"), the model tends to stay unusually structured and neutral. Less fluff, less bias, more explicit reasoning.
It made us wonder whether training and serving entirely at low precision changes how a model reasons, not just how fast it runs. Sometimes the chain of thought itself is something you'd actually want to read to understand the reasoning behind the final answer.
To encourage real experimentation, we're offering 5 million free tokens on first API usage so devs and teams can test Alpie Core over Christmas and the New Year.
Alpie Core is a 32B reasoning model trained and served at 4-bit precision, offering 65K context, OpenAI-compatible APIs, and high-throughput, low-latency inference.
If you're evaluating or using a model like this: What would you benchmark first? What workloads matter most to you? What comparisons would you want to see?
Alpie Core is a 32B reasoning model trained, fine-tuned, and served entirely at 4-bit precision. Built with a reasoning-first design, it delivers strong performance in multi-step reasoning and coding while using a fraction of the compute of full-precision models. Alpie Core is open source, OpenAI-compatible, supports long context, and is available via Hugging Face, Ollama, and a hosted API for real-world use.
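Since the API is OpenAI-compatible, calling it should look like any OpenAI-style chat-completions request. Here is a minimal sketch of building such a request payload; the endpoint URL and model id below are placeholder assumptions, not the real values, so check the official docs before using them.

```python
import json

# Placeholder assumptions -- replace with the real values from the docs:
API_BASE = "https://api.example.com/v1"  # hypothetical base URL
MODEL = "alpie-core"                     # hypothetical model id

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload.

    The same dict could be POSTed to f"{API_BASE}/chat/completions"
    with an Authorization: Bearer <api-key> header, or passed through
    any OpenAI-compatible client pointed at a custom base URL.
    """
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Break this problem down step by step: ...")
print(json.dumps(payload, indent=2))
```

Because the format matches OpenAI's, existing SDKs and tools that accept a custom base URL should work without code changes.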
169Pi Playground is your interactive space to explore real reasoning with Alpie Core, our new 4-bit advanced reasoning model.
Drop in prompts, run live internet searches, tweak parameters, and watch reasoning unfold step by step — fully transparent, traceable, and yours to experiment with.
Built for developers, researchers, and the endlessly curious.
Sign up for 1M free tokens weekly.
Public API access opens next week — your feedback means everything.