Launching today

Tapfree
Voice dictation that adapts to what’s on your screen
89 followers
Typing on phones hasn’t evolved. Tapfree fixes that: a voice-first Android keyboard that lets you write messages, notes, and emails by speaking naturally - without dictation errors, awkward formatting, or constant corrections. It understands context, not just words.

I’m typing this with Tapfree! It has made my life so convenient. Even if I jumble or stutter while dictating, Tapfree automatically ignores that and rearranges my sentences so they still sound coherent!
Tapfree
@sosboy888 This really made my day - thanks for sharing that. A lot of Tapfree is built around embracing how messy real speech is, so I’m glad that’s coming through!
Dictating this using Tapfree. Great implementation; will test it more and let you know.
Tapfree
@piyush_gupta25 Thanks, Piyush! Looking forward to your feedback once you’ve had a chance to test it more.
Tapfree
@curiouskitty Great question! When enabled, Tapfree can extract relevant text context from the screen (via Android’s accessibility APIs) to improve things like proper nouns, formatting, and intent. For example, the app name and surrounding text help it infer whether you’re writing an email, a message, or something else, including who you’re texting - which changes how dictation is handled.
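For the technically curious, here’s a minimal sketch of how an Android accessibility service can gather this kind of screen context. The Android APIs (AccessibilityService, AccessibilityNodeInfo) are real; the class and helper names (ScreenContextService, ContextSnapshot, onContextAvailable) are hypothetical illustrations, not Tapfree’s actual code:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical names throughout; a real service also needs a manifest entry
// guarded by the BIND_ACCESSIBILITY_SERVICE permission.
data class ContextSnapshot(
    val appPackage: String?,      // e.g. a mail client vs. a messenger
    val visibleText: List<String> // nearby on-screen text (names, subject lines, ...)
)

class ScreenContextService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        if (event == null) return
        // Sample only when the foreground window changes or its content updates.
        if (event.eventType != AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED &&
            event.eventType != AccessibilityEvent.TYPE_WINDOW_CONTENT_CHANGED
        ) return

        val root = rootInActiveWindow ?: return
        val texts = mutableListOf<String>()
        collectVisibleText(root, texts)

        // Held in memory only: handed to the enrichment step, then discarded.
        val snapshot = ContextSnapshot(
            appPackage = event.packageName?.toString(),
            visibleText = texts.takeLast(20) // keep the sample minimal
        )
        onContextAvailable(snapshot)
    }

    // Depth-first walk of the node tree, collecting any text the user can see.
    private fun collectVisibleText(node: AccessibilityNodeInfo, out: MutableList<String>) {
        node.text?.let { out.add(it.toString()) }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { collectVisibleText(it, out) }
        }
    }

    private fun onContextAvailable(snapshot: ContextSnapshot) {
        // Hypothetical hook where the keyboard would run its enrichment step.
    }

    override fun onInterrupt() {
        // No transient state to clean up in this sketch.
    }
}
```

The idea is that the package name plus a small sample of visible text is enough to infer the writing surface, without reading everything on screen.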
A few key clarifications:
Accessibility is opt-in: You can use Tapfree without it, just with reduced context awareness.
Minimal, purpose-bound use: Only the text context needed for that specific enrichment step is used.
Ephemeral processing: Nothing is logged, nothing is stored, and nothing is retained on servers. Context is used only during the enrichment process and then discarded (see the sketch after this list).
No training or reuse: User text is not saved or used to train models.
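To make “minimal, purpose-bound, ephemeral” concrete, here’s a rough sketch of the shape that data handling takes, reusing the ContextSnapshot type from the sketch above. Again, EnrichmentRequest and transcribeWithContext are hypothetical names, not Tapfree’s actual code:

```kotlin
// Illustrative only - these types and calls are hypothetical.
data class EnrichmentRequest(
    val transcript: String,        // the raw dictation result
    val appPackage: String?,       // coarse intent signal (email vs. chat, etc.)
    val nearbyText: List<String>   // only the context this step actually needs
)

// Stateless call: per the clarifications above, the server responds and
// retains nothing. This stub just marks where that call would happen.
suspend fun transcribeWithContext(request: EnrichmentRequest): String =
    TODO("network call to the enrichment service")

suspend fun enrich(rawTranscript: String, snapshot: ContextSnapshot): String {
    // Build the request from only the fields this enrichment step needs.
    val request = EnrichmentRequest(
        transcript = rawTranscript,
        appPackage = snapshot.appPackage,
        nearbyText = snapshot.visibleText
    )
    // Nothing is written to disk or logs; both locals become unreachable
    // (and garbage-collectible) as soon as this function returns.
    return transcribeWithContext(request)
}
```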
Because context is powerful, I’m deliberately keeping this scoped, optional, and transparent, and I’m actively refining both the technical boundaries and how clearly this is communicated in-product.
If there are specific scenarios that feel sensitive or unclear, I’d genuinely appreciate hearing about them. That feedback directly shapes safer defaults.