marius ndini

1mo ago

Feature Update: Rolling Conversation Summaries - Cut Chat Costs Without Losing Context

We built a feature to solve a problem most AI apps eventually run into:

The longer a conversation runs, the more you pay to resend the entire chat history on every request.

Blog here (https://www.mnexium.com/blogs/ch...)

Docs here (https://www.mnexium.com/docs#sum...)
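The idea behind a rolling summary can be sketched generically: keep the last few turns verbatim and collapse everything older into a running summary, so the prompt stays roughly constant in size instead of growing with the conversation. This is a minimal illustration of the pattern, not Mnexium's implementation; `summarize` is a placeholder for what would be an LLM call in practice.

```python
def summarize(summary: str, turns: list[str]) -> str:
    # Placeholder: a real implementation would ask an LLM to merge
    # the old summary with the evicted turns into a new summary.
    return (summary + " | " + " / ".join(turns)).strip(" |")

def build_prompt(summary: str, recent: list[str]) -> str:
    # The prompt sent each turn: compressed past + verbatim recent turns.
    parts = []
    if summary:
        parts.append(f"Summary of earlier conversation: {summary}")
    parts.extend(recent)
    return "\n".join(parts)

class RollingHistory:
    def __init__(self, keep_recent: int = 4):
        self.keep_recent = keep_recent
        self.summary = ""            # compressed older context
        self.recent: list[str] = []  # verbatim recent turns

    def add(self, turn: str) -> None:
        self.recent.append(turn)
        if len(self.recent) > self.keep_recent:
            # Evict the oldest turns into the rolling summary.
            evicted = self.recent[:-self.keep_recent]
            self.recent = self.recent[-self.keep_recent:]
            self.summary = summarize(self.summary, evicted)
```

With this shape, token cost per request is bounded by the summary length plus `keep_recent` turns, rather than the full transcript.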

marius ndini

1mo ago

Mnexium AI - Persistent, structured memory for AI Agents

🧠 Mnexium = persistent memory for LLM apps. Add one mnx object and get chat history, semantic recall, and user profiles that follow users across sessions and providers.

🔄 Works with ChatGPT and Claude: same memories, any model. Switch mid-conversation without losing context.

⚙️ No vector DBs or pipelines. A/B test, fail over, and route by cost; your memory layer stays consistent.
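The "one memory object, any model" idea can be illustrated with a provider-agnostic sketch. All names here (`Memory`, `ask`) are hypothetical and for illustration only; this is not Mnexium's actual API, just the general shape of a memory layer that outlives any single provider.

```python
class Memory:
    """Shared conversation state that persists across providers."""
    def __init__(self):
        self.history: list[dict] = []

    def remember(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})

    def as_messages(self) -> list[dict]:
        return list(self.history)

def ask(provider: str, memory: Memory, user_msg: str) -> str:
    # The same Memory object is handed to whichever model takes the
    # next turn, so switching providers keeps the full context.
    memory.remember("user", user_msg)
    # Placeholder for a real call to an OpenAI / Anthropic client.
    reply = f"[{provider}] saw {len(memory.as_messages())} messages"
    memory.remember("assistant", reply)
    return reply
```

Because the memory lives outside any one client, routing by cost or failing over mid-conversation is just a matter of passing the same object to a different `ask` call.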