Release date: N/A
Role: UX Design Lead
Project Ears was a collaboration with Mimi Hearing Technologies to create a hearing aid that could be bought over the counter. The earphones would feature an onboard hearing test that would take an Earprint of the user's hearing and customize the sound output accordingly, without a visit to a doctor. The project later expanded in scope to include both hearing relief and hearing protection.
Unlike previous projects at Bragi, this one started from scratch with no previously collected data. This presented a unique challenge, as the project's direction had not yet been defined.
Initially, the project's scope focused on creating a hearing aid alternative that would not be as expensive as a typical hearing aid. However, we quickly realized that we could also use the same earphone platform as The Dash Pro for tinnitus relief and as a potential hearing protection aid that would cut audio transparency once a certain dB level was reached. In cooperation with Mimi Hearing Technologies, we used their Mimi Defined™ algorithm to create an Earprint of the user's unique hearing, which could then be used to enhance specific frequencies.
As we had no prior knowledge of how people approach the topic of hearing aids, our initial desk research gave us only a surface-level understanding of the problems within the domain.
Key Results
12%
of adults aged 18-39 report difficulty following a conversation amid background noise.
~10%
of adults experience tinnitus.
28.8M
U.S. adults could benefit from using hearing aids.
30%
of adults who could benefit from a hearing aid have never used one.
The key results and our initial research made it clear that there was a market we could focus on. However, we still lacked a clear, human-level understanding of users' issues, or of why they did not already use some form of hearing aid. We decided to find people who were currently using a hearing aid, or who had hearing problems but had done nothing to mitigate them.
I have included a small selection of quotes from some of our interviewees.
Full disclosure: I have used stock images for the profile pictures to respect the interviewees' privacy.
CHRISTIAN, 28
“I don’t want to be the guy that wears a hearing aid. During concerts, I wear hearing protection but I hate how it looks.”
JONAS, 27
“I play white noise on my phone before going to bed, but I can’t do that when my girlfriend is sleeping next to me. She would be so mad!”
SONJA, 65
“When I speak to people, I often can’t hear them. My daughter, she keeps telling me to get my ears checked. But at the moment I don’t have the time or patience to go to a doctor.”
From our interviews, it was clear that we had an opportunity to help people through the existing Dash Pro platform. We grouped our insights into key categories to get a better overview of the areas where we could have an impact.
Insights
Convenience
Tinnitus relief
Hearing protection
One requirement from our board members was to investigate the possibility of using The Dash Pro platform independently of its Bluetooth capabilities. A key reason for this strict requirement was to ensure that the earphones themselves would have decent battery life when used as a hearing aid. As mentioned earlier, we were partnered with Mimi Hearing Technologies, who already had an excellent hearing test that could be taken through their iOS and Android app. However, this is also where one of the main challenges arose: because we needed to work without a Bluetooth connection, we could not default to simply using the Mimi app. We had to come up with our own way of using The Dash Pro platform's functionality to implement an onboard hearing test.
Before we dove into creating our onboard hearing test, we first needed to understand how the Mimi Hearing Technologies hearing test works.
The Mimi mobile app tests the user's hearing by going through sound frequencies ranging from low to high. The user must wear an earphone or headset that Mimi Hearing Technologies has tested beforehand to guarantee that the frequencies are played back correctly. They then listen for each frequency tone and report in which ear they hear it as the tone increases in volume; a faster response indicates better hearing. Once the user has been through all the tones, they have a unique Earprint of their hearing.
The default interface on The Dash Pro is limited to taps, long and short holds, and swipes in both left and right directions. Sound is the feedback system, through both audio cues and voice commands. As one can imagine, these interaction inputs are very limited, and on top of that, it is an interface the user cannot see directly and has to operate "blindly."
The initial concept was to implement the Mimi hearing test in the same format as in their app. But we quickly found a technical limitation on The Dash Pro platform: the connection between the left and right earphones would not communicate reliably in all cases during the initial onboarding.
We decided to split the hearing test into the left and right ears separately. This could potentially decrease the accuracy of the test, as the user could now anticipate the tone, since it is only played in one ear at a time.
Having worked out the technical limitations and possibilities with the software and hardware engineers, we started designing the hearing test. The interaction design was based on the following user scenario:
Like The Dash Pro, when you insert the earphones for the first time, they briefly introduce the product before initiating the hearing test. Rather than just starting the test, the device would first present a couple of environmental requirements to ensure the best possible setting for the hearing test, and inform the user about its estimated duration.
Once they were ready to start, they would tap the earphone, and the hearing test would begin. Frequency tones would start playing, increasing in volume over time, until the user tapped the earphone to indicate that they had heard the tone. A total of 8 frequencies were played per ear.
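The Dash Pro firmware is of course not public, so purely as an illustration of the flow just described, here is a minimal sketch of the per-ear test loop in Python. The frequency list, ramp values, and the play_tone / tap_detected hooks are all assumptions, not Bragi's actual API.

```python
import time

# Illustrative sketch only; the real Dash Pro firmware is not public.
# play_tone, stop_tone, and tap_detected stand in for hypothetical
# hardware hooks, and the frequency list and ramp values are assumed.
TEST_FREQUENCIES_HZ = [250, 500, 1000, 2000, 3000, 4000, 6000, 8000]
MAX_VOLUME = 1.0     # normalized output volume
RAMP_STEP = 0.02     # volume increase per step
STEP_SECONDS = 0.25  # how long each volume step is held

def test_ear(ear, play_tone, stop_tone, tap_detected):
    """Play each of the 8 frequencies at increasing volume; the volume at
    which the user taps is recorded as their threshold for that tone."""
    thresholds = {}
    for freq in TEST_FREQUENCIES_HZ:
        volume, heard_at = 0.0, None
        while volume <= MAX_VOLUME:
            play_tone(ear, freq, volume)
            time.sleep(STEP_SECONDS)
            if tap_detected():
                heard_at = volume  # user heard the tone at this volume
                break
            volume += RAMP_STEP
        stop_tone(ear)
        thresholds[freq] = heard_at  # None means the tone was never heard
    return thresholds
```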
We created what we thought would be a good script and used a text-to-speech service to read out all the commands. To save time and avoid long implementation periods, we divided the scripted speech into various parts and, with a dummy pair of The Dash, tested whether people would do what the voice told them to. Our working student conducted internal studies, and we took learnings from each session.
Learnings
Correcting the text-to-speech output required some learning, as I had to pick up the editing syntax of the service, but having done so gave us much more room to customize the pacing and pronunciation of the speech.
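The case study does not name the text-to-speech service or its editing syntax. As an illustration of the kind of control this gave us: many TTS services accept SSML, the W3C markup standard, where pauses, pacing, and pronunciation can be tuned per prompt. The snippet below is a hypothetical example, not Bragi's actual script.

```python
# The TTS service Bragi used is not named; this sketch assumes an
# SSML-capable service (SSML is the W3C standard most major TTS APIs accept).
# Hypothetical prompt text, not the actual Project Ears script.
ssml_prompt = """
<speak>
  Welcome to your hearing test.
  <break time="600ms"/>
  <prosody rate="90%">Tap the right earphone when you are ready to begin.</prosody>
  <break time="400ms"/>
  The test creates your <sub alias="ear print">Earprint</sub>.
</speak>
"""
```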
Working with text and speech was a challenge in itself. We trimmed out filler words to get a minimal viable text before adding context back in. An interesting difference between text and speech is that speech can leave out many details while retaining its meaning; text cannot. Text is descriptive and loses meaning if it does not describe things in detail.
We initially tested internally to get fast feedback, as organizing tests with outside testers requires much more time-consuming preparation. This led us down a road where we ended up testing with users who were already used to the interaction interface of The Dash Pro, which gave us a false sense of security that people immediately understood how to use the product. When we expanded our testing pool to people who did not use The Dash Pro very often, we quickly realized that these "simple" interactions created problems for first-time users and that we needed to address this issue to build a proper hearing profile. This resulted in a kind of training session before the hearing test, where we had users practice the interactions they would encounter during the test.
We tested the concept again with a rewritten script, onboarding sequence, and accompanying soundbites.
Learnings
The results of our second iteration showed that users could better understand how to complete the hearing test after having been "trained" by the newly incorporated training session. But as with any iterative process, new problems surfaced that needed attention and solving. The sound cues we had implemented to guide people through the process sometimes confused them because of timing issues: we had purposefully delayed some of the sounds to mentally prepare users for the next step, but they came so late that people had already mentally moved on to the next phase and suddenly heard a feedback sound they no longer expected. This, luckily, was easy to tweak, correct, and reimplement into the onboarding experience.
Another learning proved more of a problem: people perceived that their tap did not match the moment they heard the frequency tone but instead came delayed. They felt that the resulting Earprint would be wrong and wanted to redo the test. Having to redo a single tone was not something we had anticipated, and the more we thought about it, the clearer it became that redoing the entire hearing test to fix one tone was a huge inconvenience. The idea of a redo function went on the list for the next iteration, and we refocused on making the interaction match people's perception.
This meant we had to fundamentally change how we thought about a hearing test. We went back to the drawing board, listed our interaction possibilities, and tried to map out ways to play a tone using only these gestures. The solution we went forward with was to invert how a regular hearing test is done: rather than tapping when they heard a frequency tone, the user would hold a gesture and release it the moment they heard the tone. We also had to update the corresponding text-to-speech script for the new gesture, which took some work to ensure the commands were not misleading.
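A minimal sketch of the inverted gesture, again with assumed hook names rather than Bragi's real event API: the tone ramps up while the user holds, and the volume at the moment of release is taken as the threshold. Releasing feels simultaneous with hearing the tone, which is what resolved the perceived tap delay.

```python
import time

# Sketch of the inverted gesture; hook names are assumptions, not Bragi's API.
def measure_threshold(ear, freq, play_tone, stop_tone, is_holding):
    """The user holds the earphone's touch surface while the tone ramps up
    and releases the moment they hear it. The volume at release is the
    threshold: releasing feels simultaneous with hearing, unlike a tap,
    which users perceived as delayed."""
    while not is_holding():      # wait for the user to start the hold
        time.sleep(0.05)
    volume = 0.0
    while is_holding() and volume <= 1.0:
        play_tone(ear, freq, volume)
        time.sleep(0.25)
        volume += 0.02
    stop_tone(ear)
    # Released in time: volume is the threshold; ramp exhausted: no input.
    return volume if volume <= 1.0 else None
```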
The next focus was designing a redo function and how it could best be implemented in the hearing test. When we started brainstorming, a couple of concerns arose about how it would impact the process: would it encourage users to think they could improve their test result by constantly redoing the hearing test? Would they even trust the test if given the option to redo a frequency tone at their leisure? We eventually settled on a redo function that should be present in the system in case something went wrong, but not something the user could initiate at will; instead, the system itself would decide when a tone needed to be redone.
This meant an update to the voice script and new logic in the system to implement the function. The system now had a safeguard at the end of each frequency tone to check whether there was input and whether that input made sense to the system, e.g., whether the user's hearing was within the hearing algorithm's capabilities.
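The exact safeguard logic is not spelled out here, so the following is only a plausible sketch under stated assumptions: after each frequency tone, the system checks that an input was registered and that the implied threshold falls within a range the algorithm can work with, replaying that single tone when it does not. All names and values are assumptions.

```python
# Hypothetical safeguard run after each frequency tone; a sketch, not
# Bragi's actual logic. The system, not the user, decides when to redo.
SUPPORTED_RANGE = (0.05, 0.95)  # thresholds the algorithm can use (assumed)
MAX_REDOS = 2                   # cap retries so the test always terminates (assumed)

def validated_threshold(freq, run_tone):
    """run_tone plays one frequency and returns the release volume or None."""
    for _attempt in range(1 + MAX_REDOS):
        threshold = run_tone(freq)
        if threshold is not None and SUPPORTED_RANGE[0] <= threshold <= SUPPORTED_RANGE[1]:
            return threshold  # input exists and makes sense to the system
        # No input, or outside the hearing algorithm's capabilities:
        # replay this single tone instead of restarting the whole test.
    return None  # give up on this tone after the allowed redos
```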
With all of these changes, we had to make sure the system still worked within users' expectations, so we ran another internal user test.
Learnings
We received mostly positive feedback on the onboarding experience and on taking the hearing test in this iteration. The decision was made to use this version of the software's main functions as our MVP, and everything from this point on was built on it. The issues we encountered in this round of testing were mainly minor ones around timing expectations, plus system instability caused by the new changes. We did see that when users ran into problems, the redo function caused some confusion; still, following its instructions led them back to fully completing the hearing test.
From the very beginning of the project, Bragi's management had the vision that the product should include a tinnitus relief function in the form of various colored noises. Luckily, we had decided early on that the hearing aid functions should be controlled on one side of the earphones only, which left the left side free for the tinnitus feature.
Our research showed that people experience tinnitus at various frequencies. Early discussions and concepts were based on this premise: could we create a test that would tune the relief tone to the user's specific frequency? However, after diving deeper into the research, we realized that the hardware on The Dash Pro platform was insufficient to find that frequency precisely. Another idea was to program the earphones with a tinnitus profile previously acquired from a licensed ear doctor.
As the complexity of the issues at hand quickly rose, we decided it would be easier to simplify the tinnitus feature and offer a set of broad-spectrum colored noises that the user could pick between.
The control scheme was also simplified: the user would hold the left earphone to enter a menu, use swipes to switch between the various colored noises, and tap to select. While a noise was playing, swiping forward or backward adjusted its volume.
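As a sketch of how compact this control scheme is, the state logic fits in a handful of lines. The gesture handler names, noise list, and step sizes below are illustrative assumptions, not Bragi's internal event API.

```python
# Sketch of the simplified left-earphone controls; gesture handler names
# and step sizes are illustrative, not Bragi's internal event API.
NOISES = ["white", "pink", "brown"]  # example broad-spectrum colored noises

class TinnitusControls:
    def __init__(self):
        self.in_menu = False
        self.index = 0
        self.playing = None
        self.volume = 0.5

    def on_hold(self):
        # Hold the left earphone to enter the noise selection menu.
        self.in_menu = True

    def on_swipe(self, direction):  # +1 forward, -1 backward
        if self.in_menu:
            # In the menu, swipes cycle through the colored noises.
            self.index = (self.index + direction) % len(NOISES)
        elif self.playing:
            # While a noise plays, swipes adjust its volume.
            self.volume = min(1.0, max(0.0, self.volume + 0.1 * direction))

    def on_tap(self):
        # Tap selects the highlighted noise and starts playback.
        if self.in_menu:
            self.playing = NOISES[self.index]
            self.in_menu = False
```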
As we finished the concept, another thought came to mind: what if the user never took the hearing test? Could we still somehow offer a hearing aid? We talked with Mimi about what we could do if the user did not want to complete the hearing test, or had issues doing so. One suggestion was to default to a hearing profile based on the user's age. Mimi informed us that a person's hearing naturally decays over time and that, statistically, specific frequencies are the first to go. This gave rise to the idea that users could enter their age and a default profile would enhance their hearing accordingly. A spin-off concept was to remove the hearing test altogether and use age intervals as hearing enhancement profiles, with the user swiping through a menu to select between them.
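As a purely illustrative sketch of that spin-off concept: the actual age-to-profile mapping belongs to Mimi's algorithm, so the brackets and gain values below are invented to show the shape of the idea. With age, higher frequencies typically degrade first, so older default profiles boost them more.

```python
# Purely illustrative: the real mapping belongs to Mimi's algorithm.
# Brackets and per-band gains (in dB) are invented to show the shape of
# the idea; high frequencies typically degrade first with age, so older
# default profiles boost them more.
AGE_PROFILES = {
    (18, 29): {2000: 0, 4000: 0, 8000: 2},
    (30, 44): {2000: 1, 4000: 3, 8000: 6},
    (45, 59): {2000: 2, 4000: 6, 8000: 10},
    (60, 120): {2000: 4, 4000: 9, 8000: 14},
}

def default_profile(age):
    """Return the per-frequency gain profile for the user's age bracket."""
    for (low, high), gains in AGE_PROFILES.items():
        if low <= age <= high:
            return gains
    return {}  # outside the known brackets: no default enhancement
```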
Rather than looking at the project platform only as a hearing aid, we also thought about using it as hearing protection. Just as we could boost specific frequencies, we could lower them, or cut the sound entirely and use the earphones as a passive noise-canceling device when dB levels got too high, thus protecting the user's hearing. This product could have entered markets such as hunting, construction work, and security. We acquired various existing products on the market and identified overlapping features to form an understanding of the domain standard, so that we could differentiate ourselves from competitors and create a unique selling point.
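A sketch of that protection logic, assuming a measured sound level in dB; the threshold and hysteresis values are assumptions (85 dB is a common occupational exposure limit), not Bragi's specification.

```python
# Sketch of the hearing protection concept; the threshold and hysteresis
# values are assumptions (85 dB is a common occupational exposure limit),
# not Bragi's specification.
CUTOFF_DB = 85.0  # cut audio transparency above this measured level
RESUME_DB = 75.0  # hysteresis gap so transparency does not flap on and off

def update_transparency(level_db, transparency_on):
    """Return whether audio transparency should be on for the measured level."""
    if transparency_on and level_db >= CUTOFF_DB:
        return False  # too loud: the earphones act as passive earplugs
    if not transparency_on and level_db <= RESUME_DB:
        return True   # level has dropped well below the cutoff: restore
    return transparency_on
```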
The pilot project made it into publications on various websites. If you are interested in reading the reviews, you can find some of them here:
Hearing aid and tinnitus support
Earprint
Hearing Health & Technology Matters
The teaser of the project can be seen here, presented by the CEO of Bragi, Nikolaj Hviid.
All Images and Videos are subject to © Copyright 2019 Bragi GmbH.