Bragi-Ears-Sonya

Project Description

Release date: N/A
Role: UX Design Lead

Project Ears was a collaboration with Mimi Hearing Technologies to develop an over-the-counter hearing aid. The earphones featured an onboard hearing test that captured an "Earprint" of the user's hearing, allowing for customized functionality without the need for a doctor's visit. The project eventually expanded to address both hearing relief and protection.

The Challenge

Unlike previous Bragi projects, Project Ears was conceived from the ground up—no legacy data, no predefined roadmap—making its initial direction uniquely challenging. What started as an effort to build an affordable hearing-aid alternative evolved into a multi-purpose earphone platform: one that not only relieves tinnitus but also automatically attenuates ambient sound once volume exceeds a safe decibel threshold.

In partnership with Mimi Hearing Technologies, we integrated their Mimi Defined™ algorithm to generate a personalized “Earprint” for each user. This hearing profile lets us precisely boost or attenuate specific frequencies, tailoring the listening experience to individual needs.

The Process

With no existing user data on hearing-aid behaviors, our initial desk research offered only a high-level glimpse into the domain’s core challenges.

Key Results

12%
Adults aged 18–39 report difficulty following a conversation amid background noise.

~10%
Of adults experience tinnitus.

28.8 million
U.S. adults could benefit from using hearing aids.

30%
Of adults who could benefit from a hearing aid have never used one.

Our initial research confirmed a viable market but exposed gaps in our understanding of the human factors: why users struggle with hearing aids and why adoption remains low. To bridge this gap, we interviewed people who either use hearing aids or live with untreated hearing challenges. Below is a selection of their quotes; the profile photos are stock images used to protect their privacy.


CHRISTIAN, 28

“I don’t want to be the guy that wears a hearing aid. During concerts, I wear hearing protection, but I hate how it looks.”


JONAS, 27

“I play white noise on my phone before going to bed, but I can’t do that when my girlfriend is sleeping next to me. She would be so mad!”


SONJA, 65

“When I speak to people, I often can’t hear them. My daughter keeps telling me to get my ears checked, but at the moment I don’t have the time or patience to go to a doctor.”

Interviews revealed a clear opportunity to support users through the Dash Pro platform, so we organized our findings into core categories to create a structured overview of potential impact areas.

Insights

Young adults

  • Social stigma around wearing a traditional-looking hearing aid.
  • Fear of being perceived as handicapped by others.

Convenience

  • Many people do not perceive their hearing problems as severe enough to require intervention.
  • People cannot be bothered to visit a doctor to get their hearing checked.
  • People were interested in an over-the-counter solution that is not overly expensive.

Tinnitus relief

  • A more discreet, innovative alternative to use while sleeping.
  • A device that does not disturb a partner during sleep.
  • Convenient and practical to transport when needed.

Hearing protection

  • Ability to protect people's hearing from loud noises.
  • A practical alternative to today's clunky products.

Design Challenges

Board members mandated that Dash Pro run independently of Bluetooth to preserve battery life in hearing-aid mode. While our partner, Mimi Hearing Technologies, offers a robust hearing assessment via iOS and Android apps, we couldn’t rely on a wireless link—so we devised a native, onboard hearing test using Dash Pro’s integrated audio hardware and processing capabilities.

The Mimi Mobile App

Before building our onboard hearing test, we analyzed Mimi’s existing assessment workflow. In the Mimi Mobile App, users wear a pre-calibrated headset and listen to sequential tones from low to high frequency. As each tone’s volume increases, they indicate which ear detects the sound; quicker responses reflect better hearing. Completing the sequence generates a personalized “Earprint” that maps their unique hearing profile.

The Dash Pro’s interface relies solely on taps, long- and short-presses, and left/right swipes, with audio and voice cues for feedback—forcing users to interact “blindly.”

Our first attempt to mirror Mimi’s multi-frequency test ran into connectivity issues between left and right earbuds during onboarding.

To work around this, we split the test into separate left-ear and right-ear sessions. While this simplifies data transfer, it does introduce a risk: users may learn to anticipate tones when each plays in isolation, potentially skewing accuracy.


Conceptualization

With engineers’ input on platform constraints, we mapped out the user journey: after purchase and first insertion—when most users skip the manual—Dash Pro delivers a brief intro, then prompts environmental checks and an estimated duration. Users tap to start an eight-tone-per-ear hearing test, with each tone increasing in volume until they signal detection. We scripted the voice prompts using text-to-speech, segmented them for rapid prototyping on dummy devices, and ran iterative in-house trials led by our working student, refining the flow based on each session’s feedback.

Iteration #1

Learnings

  • Some test users found the text-to-speech voice too fast, especially during longer text strings.
  • Most of the text strings were too long.
  • Only the last part of a long text string would register in the user's mind, which resulted in people not following instructions correctly.
  • First-time users found the onboarding experience challenging: unfamiliar with the device's interface, they struggled to operate the earphones, which made the hearing test difficult.

After I mastered the text-to-speech service’s editing syntax to refine pace and pronunciation, we eliminated filler words for clarity and selectively reintroduced context where needed—recognizing that speech can drop detail without losing meaning, whereas text must remain precise.

We launched rapid internal tests to gather quick feedback, then broadened our pool to include both seasoned Dash Pro users and first-time testers. This surfaced a gap: simple interactions felt intuitive to veterans but baffled novices. To bridge it, we added a brief in-device training session before the hearing test and retested with an updated script, onboarding flow, and sound cues—iterating until both user types could reliably complete their personalized hearing profiles.

Iteration #2

Learnings

  • The "training mode" made users more comfortable with taking the hearing test.
  • Sound cues were at times confusing to users.
  • The interaction design of the hearing test was perceived as giving a delayed response, i.e., users felt they had not reacted fast enough.

The second iteration showed that users completed the hearing test more smoothly after the in-device training session—but as with any iterative cycle, new issues emerged. Our deliberately delayed audio cues, intended to mentally prepare users for each phase, sometimes played after testers had already moved on, causing confusion when unexpected feedback sounded. Thankfully, tweaking the cue timing was straightforward, and re-implementing the adjusted delays restored clarity to the onboarding flow.

During testing, users who heard a tone but saw their tap register late questioned the accuracy of their Earprint and requested a full retest—an impractical solution for a single mis-tap. To better align perception with interaction, we flipped the gesture: users now press and hold before each tone and release exactly when they hear it, and we revised the text-to-speech prompts to reflect this change. Concurrently, we defined a conditional “redo” function—triggered only by genuine playback or input errors—to allow corrections without encouraging unlimited retakes and to maintain overall trust in the test results.

  • Should activate in case something goes wrong with the system.
  • Should activate if the system receives no input during the playback of a frequency tone.
  • Users should be able to reset the whole Earprint and start over after completing it once.

Implementing the redo feature meant rewriting our voice prompts and embedding new logic to validate each response: after every frequency tone, the system checks for a hold-release input and verifies that it falls within the algorithm’s acceptable hearing range.
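The response check described above might look something like this minimal sketch. The acceptance range and the redo reasons are assumptions for illustration, not the shipped logic:

```python
def validate_response(release_volume_db, tone_max_db=90, plausible_min_db=0):
    """Decide whether a hold-release response is usable or triggers a redo.

    Returns "ok", or a redo reason:
      "no_input"     - the user never released during the tone's playback
      "out_of_range" - the release fell outside the plausible hearing range
    """
    if release_volume_db is None:
        return "no_input"      # no release during playback -> redo this tone
    if not (plausible_min_db <= release_volume_db <= tone_max_db):
        return "out_of_range"  # implausible value -> redo this tone
    return "ok"
```

Gating the redo on these two error cases, rather than exposing it as a free action, matches the goal of allowing corrections without encouraging unlimited retakes.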

To ensure these safeguards met user expectations, we ran another round of internal user tests and iterated based on the feedback.

Iteration #3

Learnings

  • The new gesture for the hearing test was better received than the tap.
  • The updated voice script needed a couple of tweaks for better comprehension.
  • The redo function fitted well into the system logic. Our internal testing showed no indication that users wanted to manually initiate a frequency tone.

Feedback on the updated onboarding and hearing-test flow was largely positive, leading us to adopt these core features as our MVP foundation. Remaining issues were mostly minor timing mismatches and occasional instability from the new logic. While some users initially hesitated with the redo function, its guided recovery reliably steered them to complete the test.

Concept Expansion & Further Development

From day one, Bragi leadership envisioned adding a tinnitus-relief mode using colored noise. We dedicated the right earbud to the personalized hearing-aid functions and reserved the left for tinnitus support. Initial plans to pinpoint each user’s tinnitus frequency proved too precise for Dash Pro’s hardware, so we simplified: users now choose from broad-spectrum noise profiles. Holding the left earbud opens the tinnitus menu; swipes cycle through noise types; a tap selects; and forward/backward swipes adjust volume.
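The gesture flow for the tinnitus menu can be illustrated with a small state sketch. The noise profile names, volume steps, and class structure here are hypothetical, not Bragi's implementation:

```python
NOISE_PROFILES = ["white", "pink", "brown"]  # broad-spectrum options (assumed set)


class TinnitusMenu:
    """Models the left-earbud gestures: hold opens the menu, swipes cycle
    noise types, a tap selects, and later swipes adjust volume."""

    def __init__(self):
        self.open = False
        self.index = 0
        self.volume = 5      # 0..10 steps, illustrative default
        self.active = None   # currently playing noise profile

    def hold(self):
        self.open = True     # holding the left earbud opens the tinnitus menu

    def swipe(self, direction):
        if self.open:
            # while the menu is open, swipes cycle through noise types
            self.index = (self.index + direction) % len(NOISE_PROFILES)
        elif self.active:
            # once a profile plays, forward/backward swipes adjust volume
            self.volume = max(0, min(10, self.volume + direction))

    def tap(self):
        if self.open:
            self.active = NOISE_PROFILES[self.index]  # tap selects and closes
            self.open = False
```

Keeping the same four primitives (hold, swipe, tap) as the rest of the Dash Pro interface avoids teaching users a second gesture vocabulary.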

As we wrapped up that concept, we explored a hearing-aid experience that skips the Mimi test entirely. In partnership with Mimi Hearing Technologies, we default to an age-based profile—leveraging statistical hearing-loss trends—to boost key frequencies automatically. Users simply enter their age and swipe through predefined age-interval profiles, removing the need for an onboard hearing assessment.

Beyond its hearing-aid capabilities, we reimagined Dash Pro as adaptive hearing protection: when ambient sound exceeds a safe decibel threshold, the earbuds automatically attenuate noise, functioning like a dynamic passive noise-canceler. This feature opens new markets—hunting, construction, security—where users demand both protection and situational awareness. To sharpen our competitive edge, we audited existing products, mapped common features to industry standards, and identified opportunities to differentiate Dash Pro with unique selling points.
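A simple way to picture the adaptive protection is as a gain curve that caps the perceived level at a safe threshold. The 85 dB figure and the linear cap below are illustrative assumptions, not the product's tuning:

```python
SAFE_THRESHOLD_DB = 85  # common occupational action level; illustrative choice


def attenuation_gain(ambient_db, threshold_db=SAFE_THRESHOLD_DB):
    """Return a pass-through gain in dB: 0 below the threshold, and enough
    negative gain above it to cap the perceived level at the threshold."""
    if ambient_db <= threshold_db:
        return 0.0                    # pass ambient sound through unchanged
    return threshold_db - ambient_db  # e.g. 100 dB ambient -> -15 dB gain
```

Because quieter sound still passes through at full level, the wearer keeps situational awareness, which is the property that distinguishes this from simple earplugs.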

In the media

The pilot project was covered by various publications. If you are interested, you can find some of the coverage here:

Hearing aid and tinnitus support
Earprint
Hearing Health & Technology Matters

 

The teaser of the project can be seen here, presented by the CEO of Bragi, Nikolaj Hviid.

All Images and Videos are subject to © Copyright 2019 Bragi GmbH.
