Hamming tests your AI voice agents 100x faster than manual calls. Create Character.ai-style personas and scenarios. Run 100s of simultaneous phone calls to find bugs in your voice agents. Get detailed analytics on where to improve.
@sumanyu_sharma Congrats on the launch, Sumanyu! :)
This is the best first comment I have seen on PH outlining the product details, walkthrough, contact info., etc. Shared it with our community.
Evals for a specific industry is a great idea. LLM-as-a-judge is great, but it comes with its own challenges; it would be interesting to see how it performs across a wide range of use cases. Automated persona generation based on use case would also be great. I'm sure that's on your roadmap.
Congratulations on the launch!
@nikhilpareek Absolutely. So far we've seen 95%+ alignment between LLM and human judgement. Yup, we're already doing persona generation based on use cases :)
1000 parallel simulated calls to the AI voice agent is such a banger line and claim! As a product person, consistency is my #1 concern when I build AI products. Sometimes you just don't know why something breaks. Testing by hand and evaluating by eye only go so far.
It's about time for automated testing for LLM voice agents!
Does this work with traditional chatbot usage? For example, we have an AI avatar talking to a human through a web app?
Congrats on the launch @sumanyu_sharma and team!
Couldn't be more excited for these guys!! They have been working incredibly hard and have provided significant value to all of their current customers, who love them. Anyone building in voice can get far better results with Hamming!