We all hear differently
Our ears are unique. How I hear is different to how you hear, and to how the person next to you hears. We all hear differently. And hearing is a growing concern: 55% of the adult population suffers some form of hearing loss, and the number of people with hearing impairment is set to reach 900 million worldwide by 2050.
While wearing glasses or contact lenses is widely accepted as the way we compensate for less than perfect sight, hearing difficulties are not addressed to the same degree. Mild hearing impairment usually goes unrecognized, and even when an individual recognizes they have moderate loss, it still often goes untreated and uncompensated.
With hearing loss comes a reduction in our ability to understand speech. And speech is critical for our social interactions. For example, have you ever taken a call while commuting and struggled to make out what the other person was saying? Straining to hear a conversation creates listening effort and interrupts the ease with which we interact with our nearest and dearest. Speech affects our quality of life.
Speech intelligibility in a voice dominated future
With the explosive growth of personal voice assistants, up 1000% and due to reach 275 million by 2023, voice has emerged as the biggest interface revolution since the iPhone popularized the touchscreen. Tens of millions of devices with voice interfaces are entering our lives, from smart speakers and fridges to wearables, not to mention our day-to-day experiences such as smartphones and cars becoming increasingly voice-controlled: we’re experiencing a “voice-first” hardware upheaval. With this, our primary sense for interacting with the intelligent world is moving from sight to sound. Voice commands and our hearing are becoming the dominant interface for human-device interaction.
The industry is already investing heavily in ensuring our devices can better hear us, but who is optimizing us and our hearing? Who is making sure we can hear the devices we are interacting with? A conversation is two-way, and with a growing population with hearing difficulties, optimizing both the sender and the receiver in every interaction becomes ever more important.
The future is ear: voice interfaces are taking over, and consumers are becoming more conscious of their hearing health. We can’t change the conversation. But we can help you understand it – in all the places you listen.
Building an ecosystem
Originally inspired to develop hearing technology for music lovers, Mimi’s in-house hearing scientists are today also researching speech intelligibility and developing our technology to optimize hearing for voice. We have approached this by integrating our sound processing into partner products, where users can create their unique hearing profile – a profile we call Hearing ID. Once created, it lets you switch seamlessly between partner products, whether that’s the smart speaker in your kitchen, your car’s entertainment system, your wearables when listening to music, or even the screen in front of you when flying, with all audio personalized to your unique hearing ability. With this approach, we’re optimizing voice and audio output for humans, enabling the conversation to be heard on both sides.
Technology that knows your ears
In an ever more personalized world, it stands to reason that our devices should know how well we hear. This is one of the reasons we are actively tackling this problem: by providing a technology that knows how people hear and personalizes audio to the user’s unique hearing ability.
When it comes to knowing how we hear, Mimi knows more than most. Since launch, Mimi has enabled over 1.5 million users globally to test their hearing through our hearing test apps. Through this, we have created the world’s largest database of digital hearing profiles and insights into the world’s hearing trends. As a deep tech company with an in-house team of hearing scientists, Mimi invests a significant amount of its resources in research enabling the continuous learning and development of our processing algorithm. As a result, we have created AI for hearing: Mimi’s intelligent audio processing technology adapts audio in real-time to continuously personalize the listening experience. By helping the inner ear pass on more information to the brain, more sounds become audible again, restoring detail otherwise lost. Music becomes more enjoyable, speech becomes more intelligible. And with Mimi, our devices become tailored to our hearing, and we as humans become optimized for a voice-controlled world.
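To give a feel for how a hearing profile might drive audio personalization, here is a minimal sketch of frequency-dependent compensation: a hypothetical per-band loss curve is interpolated across the spectrum of an audio frame and applied as gain. This is purely illustrative — the band values, the `strength` parameter, and the FFT-based approach are assumptions for the example, not Mimi's Hearing ID format or actual processing algorithm.

```python
import numpy as np

# Hypothetical hearing profile: estimated hearing loss (dB) per octave band (Hz).
# Illustrative values only; a real profile would come from a hearing test.
HEARING_PROFILE = {250: 0, 500: 5, 1000: 10, 2000: 20, 4000: 30, 8000: 35}

def personalize(frame: np.ndarray, sample_rate: int, strength: float = 0.5) -> np.ndarray:
    """Boost each frequency region by a fraction of the profiled loss (naive sketch)."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    bands = np.array(sorted(HEARING_PROFILE))
    losses = np.array([HEARING_PROFILE[b] for b in bands])
    # Interpolate the loss curve over all FFT bins, then convert dB to linear gain.
    loss_db = np.interp(freqs, bands, losses)
    gain = 10.0 ** (strength * loss_db / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(frame))

# Example: a 4 kHz tone (where the profile shows loss) is boosted,
# while a 250 Hz tone (no profiled loss) passes through almost unchanged.
sr = 16000
t = np.arange(sr) / sr
low_tone = np.sin(2 * np.pi * 250 * t)
high_tone = np.sin(2 * np.pi * 4000 * t)
boosted = personalize(high_tone, sr)
```

In a real product the processing would of course run on streaming audio with perceptually motivated filter banks rather than a single large FFT, but the core idea — gain shaped by an individual's hearing profile — is the same.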