Overwhelming exposure to sound can often lead to pain, meltdowns, and embarrassing social situations for those with sensitivity. We've spent the last 6 months researching the needs of hypersensitive individuals in their sound environments. Along the way, we've met some pretty awesome people, who've opened our eyes (and ears) to the sound struggles they face on a daily basis.
Here are a few of the things they've said:
"I decided not to join my friends and go to see Hamilton, because I couldn't handle the amount of noise."
"I can't face the wall of sound at the cafeteria, so I eat lunch alone in my office."
"The more overstimulated you are, the harder it is to predict your tipping point, communicate your overwhelmed state, and make the decision to escape."
Our solution is made up of three stages: Collect, Reflect and Architect.
In the first stage, Collect, the user wears the device for a week, pressing the "flag" button to tag stressful sound events as they experience them.
In the second stage, Reflect, the information from the previous stage is curated in the app and fed into our machine learning algorithm for trend analysis. This straightforward record of stressful sound events also makes the information easy to discuss with others.
In the final stage, Architect, once a sufficient number of events have been logged, the device can predict stressful situations and alert the user before they are exposed to the negative effects of such experiences. The user can make real-time decisions that help them escape painful situations and plan their personal sound exposure day by day.
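To make the Architect stage concrete, here is a minimal sketch of how flagged events could be turned into an alert. The clustering-by-hour approach, the function names, and the `min_count` threshold are all illustrative assumptions on our part, not the trained model itself:

```python
from collections import Counter
from datetime import datetime

# Hypothetical flagged-event timestamps from the Collect stage (illustrative only).
events = [
    datetime(2018, 5, 1, 12, 5),
    datetime(2018, 5, 2, 12, 20),
    datetime(2018, 5, 3, 12, 45),
    datetime(2018, 5, 3, 18, 10),
]

def risky_hours(events, min_count=2):
    """Return the hours of day in which flagged events cluster."""
    counts = Counter(e.hour for e in events)
    return {hour for hour, n in counts.items() if n >= min_count}

def should_alert(now, events):
    """Alert when the current time falls in a historically risky window."""
    return now.hour in risky_hours(events)

print(should_alert(datetime(2018, 5, 4, 12, 30), events))  # True: noon flags cluster
```

A production model would also weigh location and sound level, but even this simple frequency count shows how a week of flags becomes a forward-looking warning.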
Our solution comprises two parts: a wearable and an accompanying app. Sound information is collected by the wearable device and transmitted to the user's phone via Bluetooth. The phone then processes the data and stores the result in a cloud database, where it can be accessed later for predictive purposes.
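The phone-side processing step might look like the sketch below. The 6-byte packet layout, field names, and JSON payload are assumptions for illustration; the actual wire format is a hardware design detail:

```python
import json
import struct
from datetime import datetime, timezone

# Hypothetical 6-byte Bluetooth packet: a little-endian uint32 Unix
# timestamp followed by a uint16 sound level in dB(A) x 10.
PACKET_FORMAT = "<IH"

def parse_packet(packet: bytes) -> dict:
    """Decode one packet from the wearable into a sound-sample record."""
    ts, level = struct.unpack(PACKET_FORMAT, packet)
    return {
        "timestamp": datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(),
        "level_dba": level / 10.0,
    }

def to_cloud_payload(record: dict) -> str:
    """Serialize the record for upload to the cloud database."""
    return json.dumps(record)

sample = struct.pack(PACKET_FORMAT, 1525176000, 725)  # a 72.5 dB(A) sample
print(to_cloud_payload(parse_packet(sample)))
```

Keeping the wearable's packets small and doing the decoding on the phone preserves the device's battery life, which is why the heavy lifting happens after the Bluetooth hop.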
The wearable device has a display screen and two buttons, one to the left of the screen and one on the leftmost face of the device. The button next to the screen lets the user cycle through different views of the collected sound data. The other button lets the user flag times of stressful sound.
The app is the main portal for the user to access their sound information. When the user flags a sound event, the metadata surrounding the experience (date, time, location, etc.) is logged and displayed in both a list and a map format. Once an event has been logged, the user can add commentary to it to give further context to the experience.
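A flagged event's record could be modeled as below. The field names and the `add_comment` method are our illustrative assumptions about the app's data model, not its actual schema:

```python
from dataclasses import asdict, dataclass
from datetime import datetime

@dataclass
class FlaggedEvent:
    """One flagged sound event with the metadata the app logs."""
    timestamp: datetime   # when the flag button was pressed
    latitude: float       # where the event occurred
    longitude: float
    peak_dba: float       # loudest level measured around the flag
    comment: str = ""     # user-added context, editable after logging

    def add_comment(self, text: str) -> None:
        """Attach the user's after-the-fact commentary to the event."""
        self.comment = text

# Hypothetical usage: flag an event, then annotate it later.
event = FlaggedEvent(datetime(2018, 5, 1, 12, 5), 47.655, -122.308, 88.0)
event.add_comment("Fire alarm test in the cafeteria")
print(asdict(event)["comment"])
```

Storing the comment on the same record as the automatic metadata is what lets the list and map views show the user's own words alongside the measured data.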
Our solution is intended for use in almost any environment, from monitoring sound exposure in the classroom to reflecting on past restaurant experiences.
We intend to sell the hardware for $100, comparable to the cost of a Fitbit, and access to the app and analytics for $5 per month.