
Research and Development


Research & Partnerships

  • What areas of research and development does Mimi focus on?

    Mimi’s core R&D areas include:

    • Hearing testing on consumer hardware: Developing reliable, scientifically validated hearing assessments using smartphones and headphones. These measure both absolute thresholds and suprathreshold hearing abilities using language-independent test paradigms.
    • Advanced sound processing for media playback: Creating and optimizing algorithms that compensate for hearing loss during everyday media consumption, such as listening to music, watching videos, or engaging in digital content. These algorithms run efficiently on consumer devices like headphones, TVs, and speakers.
    • Live-sound enhancement: Exploring ways to apply Mimi’s technology in live and interactive listening environments, such as face-to-face conversations, voice calls, and video conferencing. This includes improving speech clarity to make communication more effortless in noisy or dynamic settings.
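    To illustrate the kind of language-independent paradigm used for threshold measurement (this is a generic sketch of an adaptive staircase, a standard psychoacoustic method, not Mimi's actual test procedure), consider a simulated listener whose probability of hearing a tone rises around a hypothetical true threshold:

```python
import numpy as np

# Illustrative sketch (not Mimi's actual procedure): a 1-up / 2-down
# adaptive staircase, a common language-independent paradigm for
# estimating an absolute hearing threshold. The "listener" below is
# simulated; true_threshold_db is a hypothetical value.

def staircase_threshold(respond, start_db=40.0, step_db=5.0, n_reversals=8):
    """Level decreases after two consecutive hits, increases after a miss."""
    level, direction = start_db, -1
    hits, reversals = 0, []
    while len(reversals) < n_reversals:
        if respond(level):          # listener reports hearing the tone
            hits += 1
            if hits == 2:           # two consecutive hits -> step down
                hits = 0
                if direction == +1:
                    reversals.append(level)   # changed from up to down
                direction = -1
                level -= step_db
        else:                       # miss -> step up
            hits = 0
            if direction == -1:
                reversals.append(level)       # changed from down to up
            direction = +1
            level += step_db
    return float(np.mean(reversals[2:]))      # discard early reversals

rng = np.random.default_rng(0)
true_threshold_db = 25.0
# Simulated psychometric function: p(hear) rises around the true threshold.
respond = lambda db: rng.random() < 1 / (1 + np.exp(-(db - true_threshold_db) / 2))
estimate = staircase_threshold(respond)
print(round(estimate, 1))
```

    The 1-up/2-down rule converges near the level heard about 71% of the time, which is why the estimate lands close to (slightly above) the simulated threshold.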
  • Is Mimi involved in clinical or academic partnerships?

    For several years, Mimi collaborated with Charité – Universitätsmedizin Berlin, one of Europe’s leading university hospitals, in a publicly funded research project aimed at validating and improving Mimi’s smartphone-based hearing test application. Mimi also worked with Johns Hopkins University to develop the Hearing Number app, which was released to the public in late 2024. Beyond these flagship projects, Mimi maintains ongoing collaborations and knowledge exchange with academic institutions and industry researchers around the world. Independent researchers and institutions have also conducted multiple studies assessing the validity, precision, and usability of Mimi’s hearing test across diverse devices and populations. A collection of these publications is available here: https://mimi.io/hearing-science/white-papers-and-studies

  • How does research at Mimi contribute to your products and services?

    Research is integral to everything we do at Mimi. Our research directly informs product development, from refining our hearing test algorithms (technical parameters and user experience) to optimizing sound personalization features (for efficiency and user benefit). This ensures our technology remains evidence-based, user-centered, and at the forefront of innovation in hearing health.

  • Where can I find Mimi whitepapers?

    You can explore our latest whitepapers here on our website: https://mimi.io/white-papers

  • Using Mimi in External Research Studies

  • Can I use the Mimi Hearing Test App in my academic or clinical research study?

    You are welcome to use the app for your research purposes. When citing the app, please refer to it by its full name as listed in the respective App Store, note Mimi Hearing Technologies GmbH as the developer, and include the specific version you used.

    Please note: the Mimi Hearing Test app is not a medically certified product, and we cannot technically guarantee the accuracy of test results across all users and hardware in the way a professional audiometer does. Results should therefore be interpreted carefully, particularly when compared with hearing tests conducted using other methods or under different conditions.

    The app is designed primarily for individual, single-use cases rather than for research or broader data collection. There is currently no setup within the app for multiple users, and the app does not natively support data export for large-scale research use.

  • Who should I contact to discuss a potential collaboration or proposal?

    If you're interested in collaborating with Mimi on a research project or exploring a partnership, please reach out to our Research & Partnerships team.

  • How Mimi uses AI

  • Is user data used to train Mimi’s AI models, and how is privacy ensured?

    Mimi’s denoiser is trained offline on public and licensed datasets, not on customer recordings. In particular, we use the DNS3 (Deep Noise Suppression Challenge) corpus with curated speech/noise mixtures and room simulations to cover real-world conditions (wind, crowd, traffic, keyboard, dish clatter). No raw user audio is required or ingested to build or improve our core models.
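    The core of this kind of training-data preparation is mixing clean speech with noise at controlled signal-to-noise ratios. The sketch below shows the general DNS-style mixing step under assumed parameters (the signals and SNR value are stand-ins, not Mimi's actual recipe):

```python
import numpy as np

# Sketch of DNS-style training-pair creation (an assumed pipeline detail,
# not Mimi's exact recipe): mix clean speech with noise at a target SNR
# so a denoiser can learn the (noisy -> clean) mapping.

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the mixture has the requested speech-to-noise ratio."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in "speech"
noise = rng.standard_normal(16000)                           # stand-in "noise"
noisy = mix_at_snr(speech, noise, snr_db=5.0)

# Verify the realized SNR of the mixture.
realized = 10 * np.log10(np.mean(speech**2) / np.mean((noisy - speech)**2))
print(round(realized, 2))  # → 5.0
```

    Sweeping the SNR (and swapping in recorded wind, crowd, traffic, or keyboard noise plus room simulation) yields the varied noisy/clean pairs such corpora provide.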

  • How does Mimi Voice Clarity use AI to enhance speech understanding?

    Voice Clarity runs an on-device speech-enhancement network (a custom GTCRN model) that estimates a speech mask per time-frequency band and applies a phase-aware post-filter. It selectively lifts consonants and formants, reduces competing mid-band noise, and preserves transients, all within a hearing-aid-like latency budget suitable for TWS transparency.
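    The underlying idea of mask-based enhancement can be shown in a few lines. This is a minimal sketch of the general technique, using an "oracle" ideal-ratio mask on synthetic signals; in a real system a neural network (such as the GTCRN family) estimates the mask from the noisy input alone:

```python
import numpy as np

# Minimal sketch of mask-based speech enhancement. The oracle mask below
# is an illustration of the general technique, not the GTCRN model itself.

def stft(x, n_fft=512, hop=256):
    """Windowed framing followed by an FFT per frame -> (frames, bins)."""
    win = np.hanning(n_fft)
    frames = [win * x[i:i + n_fft] for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

n, sr = 16000, 16000
t = np.arange(n) / sr
speech = np.sin(2 * np.pi * 300 * t)                     # stand-in "speech"
noise = 0.5 * np.random.default_rng(2).standard_normal(n)
S, N = stft(speech), stft(noise)
X = S + N                                                # noisy mixture spectrogram

# "Oracle" ideal-ratio mask per time-frequency band; a deployed network
# must estimate this from X alone.
mask = np.abs(S) / (np.abs(S) + np.abs(N) + 1e-8)
enhanced = mask * X                                      # attenuate noise-dominated bands

# The masked spectrogram is closer to clean speech than the raw mixture.
err_noisy = np.mean(np.abs(X - S) ** 2)
err_enh = np.mean(np.abs(enhanced - S) ** 2)
print(err_enh < err_noisy)  # → True
```

    Because the mask is applied per band, speech-dominated regions (formants, consonant bursts) pass through while noise-dominated bands are attenuated, which is why clarity improves without simply raising the volume.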

  • How does Mimi Dialogue Focus use AI to separate speech from background audio and improve clarity without increasing volume?

    Dialogue Focus uses Mimi’s high-fidelity AI Noise Reduction tuned for TV/media playback. Because latency isn’t constrained, the model leverages a larger network to more cleanly suppress competing sounds (music, crowd, effects) while preserving speech transients and timbre. The result is artifact-free, spatially faithful playback on TVs, set-top boxes, and soundbars.

  • How does Mimi integrate AI and machine learning into its technology?

    Low-latency AI Noise Reduction is a recent development that replaces standard DSP Noise Reduction. AI-NR can completely remove the pass-through of percussive sounds such as dish clatter in a restaurant or keyboard taps, as well as wind noise.

  • What role does machine learning play in the development of Mimi’s technology? What technology components does this help inform? 

    ML-Informed Loudness Loss Fitting Technology uses machine learning to compensate for perceived loudness loss, restoring a more natural and balanced listening experience. ML also informs scene classification (e.g., “street” vs. “restaurant”) and event detection (e.g., “siren” or “baby cry”).
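    As a toy illustration of the scene-classification idea (the labels, features, and classifier here are hypothetical stand-ins, not a description of Mimi's models), even a nearest-centroid classifier over simple spectral features can separate coarse acoustic scenes:

```python
import numpy as np

# Toy audio scene classification sketch (hypothetical features and data,
# not Mimi's actual models): nearest-centroid over two spectral features.

def features(x, sr=16000):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    centroid = np.sum(freqs * spec) / np.sum(spec)   # spectral centroid (Hz)
    energy = np.log(np.sum(spec ** 2) + 1e-12)       # log spectral energy
    return np.array([centroid / sr, energy])

rng = np.random.default_rng(3)
n = 16000
# Stand-in clips: "street" = low-frequency rumble (low-pass filtered noise),
# "restaurant" = broadband babble (white noise).
street = [np.convolve(rng.standard_normal(n), np.ones(64) / 64, "same") for _ in range(5)]
restaurant = [rng.standard_normal(n) for _ in range(5)]
centroids = {
    "street": np.mean([features(c) for c in street], axis=0),
    "restaurant": np.mean([features(c) for c in restaurant], axis=0),
}

def classify(x):
    f = features(x)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

test_clip = np.convolve(rng.standard_normal(n), np.ones(64) / 64, "same")
print(classify(test_clip))  # → street
```

    Production systems replace these hand-picked features and centroids with learned models, but the pipeline shape (features in, scene label out) is the same, and the scene label can then steer processing such as noise reduction or loudness fitting.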

  • Couldn’t find what you were looking for?

    If you didn’t find the answer you’re looking for, or still have more questions, reach out to our customer support team; we’re happy to help!
