by Tyler Gray

The Sound Barrier

By the end of 2016, we finally got a glimpse of life with a personal soundtrack that follows us everywhere.

Companies including Bose, Sonos, and Spotify got friendly and linked up, meaning the music a listener cues up at the office could follow him to his car, then right into his home without skipping a beat. No more having to close one app to open another just to change where a song is playing.

As a person who wants just about every aspect of my life set to music, my daily routine became significantly more awesome. I can now play tunes from my Spotify app on my speakers at work, take the songs with me when I clock out, and listen to them via headphones during my subway commute. Then when I walk in the front door of my apartment, I can use the same app to flick the music to my home speaker system midsong. By the time I’ve taken off my headphones, the song is playing in my living room. My whole family gets to join in my jam as I stroll in to the tune of my personal theme music. Reactions vary based on my song choices, of course. 
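Under the hood, that midsong handoff amounts to a single API call: the app tells the streaming service which registered device should take over the stream. Here’s a minimal sketch against Spotify’s Web API transfer-playback endpoint (authentication is omitted, and the speaker name is illustrative):

```python
import requests

PLAYER_API = "https://api.spotify.com/v1/me/player"

def flick_music_to(token: str, device_name: str) -> None:
    """Move the current stream to another speaker without stopping the song."""
    headers = {"Authorization": f"Bearer {token}"}

    # Every player logged into the account shows up here: phone, laptop,
    # connected home speakers, and so on.
    devices = requests.get(f"{PLAYER_API}/devices",
                           headers=headers).json()["devices"]
    target = next(d for d in devices if d["name"] == device_name)

    # Transfer playback; "play": True keeps the music going midsong.
    requests.put(PLAYER_API, headers=headers,
                 json={"device_ids": [target["id"]], "play": True})

# flick_music_to(token, "Living Room")  # token needs the
# user-modify-playback-state scope; the device name is whatever yours is called
```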

It’s a scenario that was clearly foreshadowed by Steve Jobs, who stood onstage on October 23, 2001, and debuted the first iPod with the slogan “1,000 songs in your pocket.” Since then, the places and ways in which we consume music have multiplied in this age of intelligent things. Today, the sonic revolution that started in your pocket extends across an array of connected devices that gather data on our personal preferences to inform all kinds of experiences.

The real revolution is way bigger than music now, too. More than ever, sonic experiences can help you navigate and curate your world across appliances, cars, places, and spaces. Such moments range from the refreshing waaahhh sound a Mac makes when it boots up (created in part to calm frustrations after a crash) to the auditory cue produced by refreshing a feed in Facebook’s mobile app. Even the haptic feedback on the iPhone 7’s home button, which mimics a click, is a kind of sound that’s felt rather than heard: not a noise but a vibration.

These days, if a device has a Wi-Fi antenna, it’s probably got a speaker, too.

“Sound is now an interactive experience between the individual and their environment,” says Mike Dennison, president of the Consumer Technologies Group for the design, engineering, and smart manufacturing company Flex, “and it’s more than just turning up a dial on a stereo. Sound is how you engage with your environment and how it engages you back.”

As a result of this new paradigm, audio companies big and small are finding ways into spaces and devices they’ve never gone into before. Tectonic Audio Labs, a five-year-old early-stage company based in Woodinville, Washington, has pioneered ultra-high-quality, ultra-durable flat-panel speakers that push sound through spaces as large as arenas and as compact as the manicured cabin of several Bentley models. “Putting our speaker technologies into things like washing machines, cookers, refrigerators—it’s kind of a market that is exploding for us,” says Tim Whitwell, VP of engineering for Tectonic. The company is also working with several makers of voice-activated connected home devices to figure out how to produce high-quality sound that won’t interfere with the voice-detecting microphone arrays. “That introduces us to a whole other set of challenges and opportunities,” he says.

 

Functional Sound

As producer, composer, and founder of sonic branding agency Man Made Music, Joel Beckerman has created soundtracks for sports, news, and entertainment programs across most of the major TV networks, as well as sonic identities for global giants such as Disney and Imax. He is perhaps best known for his work on the full anthem and sonic mnemonic (the four little notes) you hear in association with everything AT&T. (I also coauthored his nonfiction book, The Sonic Boom: How Sound Transforms the Way We Think, Feel, and Buy.) Man Made Music specializes in what it calls “brand navigation sounds,” which are both functional and emotional. “Sound is the key in tools we describe as ‘next-level intuitive,’” Beckerman says. “It’s about enhancing the experience and making technology feel so simple and intuitive to use, and so well integrated into the experience, that you might not even notice it. But the sound is there. It’s the emotional engine for all these experiences. Like a film score, we think about scoring the brand experience, everywhere.”

Actual filmmakers are using sound in new ways as they experiment with new formats, too. In virtual reality, sound is becoming an important tool for directing attention and signaling emotional cues. To tell the story of British writer and academic John Hull’s loss of sight, the creators of the award-winning documentary Notes on Blindness made a companion VR app that replicated the sensory and psychological experience of sightlessness through binaural recording, a technique that produces 3-D stereo sound.

Sound is also useful in helping to navigate extreme situations. The Danish audio software company AM3D uses it in a helmet that aids firefighters in smoke-filled environments, helping them locate team members even when they can’t be seen. Similar helmets worn by pilots of some A-10 and F-16 fighter aircraft instantly deliver audio alerts when enemy missiles are fired, indicating which direction they’re coming from.
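The simplest version of such a directional cue is just a level difference between the ears. The sketch below uses constant-power panning; real systems like AM3D’s rely on head-related transfer functions for true 3-D placement, so treat this as the toy version:

```python
import numpy as np

def directional_alert(azimuth_deg: float, freq: float = 880.0,
                      duration: float = 0.5, rate: int = 44100) -> np.ndarray:
    """Render a short alert tone panned toward a direction.

    azimuth_deg: -90 is hard left, 0 is dead ahead, +90 is hard right.
    Returns a (samples, 2) stereo array.
    """
    t = np.arange(int(rate * duration)) / rate
    tone = 0.5 * np.sin(2 * np.pi * freq * t)

    # Constant-power pan law: total loudness stays the same as the tone
    # swings between ears, so only the apparent direction changes.
    pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)
    theta = (pan + 1.0) * np.pi / 4.0
    return np.column_stack([np.cos(theta) * tone, np.sin(theta) * tone])

alert = directional_alert(60)  # a threat 60 degrees to the pilot's right
```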

Sound and Wellness

Functional sounds in connected technologies represent a frontier with real effects on everyday human lives. Take alarms in hospitals, which are intended to alert caregivers to emergencies but in many cases have become a hazard in their own right. Alarm fatigue is a growing problem. The Joint Commission, a national health care quality-control organization, received 98 reports of alarm-related incidents, including 80 deaths, during a three-and-a-half-year period that ended in June 2012. One well-known case occurred when 17-year-old Mariah Edwards went to a surgical center in Pennsylvania to have her tonsils removed, then died following the routine surgery after health care workers failed to hear the warning from a machine monitoring her vital signs. A nurse later admitted that the monitor had been muted.

In most of the cases studied by the Joint Commission, alarms were either turned off or inaudible. The commission estimated about 1,000 incidents in which patients died, were injured, or faced unnecessary risk because of sound failure. In 2013, The Washington Post reported that the ECRI Institute, a Pennsylvania-based patient-safety nonprofit, listed alarm hazards as the No. 1 issue on its annual list of the top 10 health-technology dangers for both 2012 and 2013.

To fix the problem, researchers suggest we standardize alarm sounds. A universal sonic vocabulary could make it easier to train health care workers, as well as limit the number of different noises that demand their attention. 

Sound can also influence the healing process in hospitals. Experiments at hospitals in several U.S. states have found that music can lower blood pressure, slow heart rates, ease distress, and increase blood flow through the arteries. Melodic intonation therapy, which uses the musical elements of speech, and music more broadly are proven therapies for stroke patients, helping to ameliorate the effects of stroke on the brain. After being shot in the head by a would-be assassin in 2011, U.S. Representative Gabrielle Giffords used music to help regain her ability to speak. Roy Orbison’s “Crying” elicited tears from comatose Bee Gees member Robin Gibb after his wife played the song for him. And when his family put on The Titanic Requiem, the album he had collaborated on with his son RJ, Gibb woke up.

These days, Beckerman’s Man Made Music has been quietly working with two hospitals in California on a beta test involving VR soundtracks and pain management.

“If you give chronic-pain patients a one-hour treatment in an immersive sound system, they report feeling 40% less pain for the whole day,” Beckerman says of the early results from those hospital collaborations. “It really speaks to how sound can help us frame up our lives, even in times of pain or crisis.”

Man Made Music is also working with Memorial Sloan Kettering Cancer Center, looking at a Pavlovian use of sound to reward people for making progress during rehabilitation. “You get rewarded with pleasant or uplifting sounds for doing the hard stuff that’s good for you,” Beckerman says. The beta test could make an impact across the wellness and pharmaceutical industries, where factors such as patient adherence to treatment are pivotal for positive outcomes.

 

Sound in Motion

For years, auto manufacturers have used materials to soundproof cabins and hardware to create a feel of luxury. Today they also use sound itself to create a sense of quiet in your ride: active noise cancellation systems play an inverted copy of engine or suspension noise through the cabin speakers so that the two cancel out. And a slew of new navigational and functional sounds can warn or direct drivers without asking them to take their eyes off the road.
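The canceling trick is adaptive filtering: a microphone samples the offending noise, a filter learns to predict how that noise sounds at the listener’s ear, and the speakers play the negative of the prediction. Here’s a minimal sketch of the classic least-mean-squares (LMS) approach, with illustrative signal names rather than any automaker’s actual implementation:

```python
import numpy as np

def lms_cancel(reference: np.ndarray, primary: np.ndarray,
               n_taps: int = 32, mu: float = 0.01) -> np.ndarray:
    """Least-mean-squares adaptive noise cancellation.

    reference: noise measured near its source (say, an engine-bay mic)
    primary:   what the cabin mic hears (desired sound + correlated noise)
    Returns the residual after the learned noise estimate is subtracted.
    """
    w = np.zeros(n_taps)                   # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        e = primary[n] - w @ x             # residual after cancellation
        w += 2 * mu * e * x                # nudge weights to shrink the error
        out[n] = e
    return out

# Toy demo: a 100 Hz engine drone buried under quieter cabin sound.
rate = 8000
t = np.arange(rate) / rate
drone = np.sin(2 * np.pi * 100 * t)
cabin = 0.3 * np.random.randn(rate)       # stand-in for everything else
cleaned = lms_cancel(drone, cabin + 0.8 * drone)
```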

“We work a lot with sound in the car today via speech recognition,” Flex’s Dennison says. Flex is working with Ford to help integrate its vehicles with Wink, a hub for connected devices in the home that lets you control things like lights and temperature remotely, even while driving. “We’re figuring out how drivers engage with their cars, and then how their cars respond to the needs of the drivers’ lives. The answers are voice and voice recognition,” Dennison says. “It’s not just about soundproofing your car anymore, because that’s what you could do 30 or 40 years ago with materials. It’s about having a personal way to adapt your car to be what you want it to be.”

Manufacturers have even installed systems that generate artificial engine noise to avoid collisions with unsuspecting or blind pedestrians. In November 2016, the U.S. Department of Transportation’s National Highway Traffic Safety Administration announced its Quiet Car rule, which requires all newly manufactured hybrid and electric light-duty vehicles traveling forward or backward at less than 19 miles per hour to generate sound. Manufacturers have until September 1, 2019, to comply. Automakers need external-facing speakers to generate those sounds, and Tectonic’s Whitwell notes that the company’s speakers have become sought after by EV makers because they’re small, light, and highly durable across a variety of harsh road conditions.

Sound has also given electric sports cars a sense of style, speed, and power. It took Audi’s acoustic engineers three years to develop the sound for the 313-horsepower, all-electric motor in its 2009 R8 e-tron. The result sounds like something out of the movie Tron: part muscle car, part “light cycle.”

Sound that Sells

No group harnesses the power of sound as innately, or represents as great an opportunity for businesses, as millennials. Raised with social technology, they expect to be able to personalize all kinds of experiences to make them uniquely their own.

“In the supermarket, millennials are already curating their own experiences while shopping,” Man Made Music’s Beckerman says. Just look for the people with the headphones on. “Part of that is about personal space, creating your own personal bubble to tune out the world, but it’s more about curating your own experience with the world around you.” 

Marketers and business owners can take a cue from this behavior and use sound to influence the way potential customers spend money or time in a business. A study by marketing professor and researcher Ronald E. Milliman in the 1980s found that supermarket sales went up 38% when the store played slow music rather than fast music. In a widely cited 1999 paper titled “Play That One Again: The Effect of Music Tempo on Consumer Behavior in a Restaurant,” researchers Clare Caldwell and Sally A. Hibbert of the University of Strathclyde in Scotland found that diners spent 13.56 minutes longer in a restaurant when they were listening to slower-tempo music than when they were listening to higher-tempo music. They also spent “significantly more” on food and drink when the music was slower.

Man Made Music has also been applying some of those insights to a major theme park and its attractions. The park is looking to expand its appeal among millennials, Beckerman says, but the challenge is that most theme parks ask visitors to relinquish most control of their experiences. Parks aim to please the masses, not the individual. “That’s not how millennials operate,” Beckerman says, so his company is helping create interactive experiences in the park (he declined to say which ones, citing confidentiality agreements). Not only will the new interactions let visitors make their time on a ride unique but “they could experience the same attraction 20 times and have a different experience every time,” Beckerman explains. “We’re helping create the illusion of an infinite decision tree. The idea isn’t even only about repeatability; it’s about scale. Sound can help make every aspect of the park cohesive—standing in line is part of the experience; going to the food court is part of the experience.”
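Mechanically, that kind of illusion usually comes from recombinable layers rather than a library of finished tracks. Here’s a hypothetical sketch (all stem and ride names invented) of how a handful of interchangeable stems multiplies into hundreds of distinct mixes:

```python
import random

# Hypothetical sketch: an "infinite" attraction soundtrack built from
# interchangeable stems. Four rhythm beds x five melodies x six ambient
# textures already yields 120 distinct mixes; each added layer multiplies
# that again. Every name here is invented for illustration.
STEMS = {
    "rhythm":   ["bed_a", "bed_b", "bed_c", "bed_d"],
    "melody":   [f"theme_{i}" for i in range(5)],
    "ambience": [f"texture_{i}" for i in range(6)],
}

def cue_for(visitor_id: int, ride: str) -> dict:
    """Choose one stem per layer, seeded so a given visitor and ride
    always map to the same mix while different visitors diverge."""
    rng = random.Random(hash((visitor_id, ride)))
    return {layer: rng.choice(options) for layer, options in STEMS.items()}

print(cue_for(42, "splash_ride"))  # e.g. {'rhythm': 'bed_c', ...}
```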

 

Audio for AI

The next frontier for sound is artificial intelligence. Automated music curators that “learn” from your listening choices have been making us digital mixtapes for years. Now, however, sound is becoming both the input and the output.

“There’s this obvious notion that you can talk to your home now,” Flex’s Dennison says. “What’s less obvious is your home speaking back to you.”

The number of smart home devices that Amazon’s Echo and its voice-activated service Alexa can integrate with is expanding, home speaker systems included. “We’re working on a project with Amazon where instead of pushing a button on your light switch, you talk to it and it becomes Alexa-enabled,” Dennison says. “So you’re having conversations with your lights rather than walking around using your hands to turn them on or off.”
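Once speech has been transcribed and parsed upstream, the pattern behind an Alexa-enabled switch is simple: an intent names the action, a slot names the device, and a handler maps the pair onto hardware. The sketch below is a hypothetical routing layer, not Amazon’s actual SDK; the intent and device names are invented:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    name: str    # e.g. "TurnOnIntent", produced upstream by speech recognition
    device: str  # e.g. "kitchen lights", filled in from a slot in the utterance

class LightSwitch:
    def __init__(self, name: str):
        self.name, self.on = name, False

    def set_power(self, on: bool) -> str:
        self.on = on
        return f"Okay, the {self.name} are {'on' if on else 'off'}."

def handle(intent: Intent, devices: dict) -> str:
    """Route a parsed voice intent to a device; return the spoken reply."""
    device = devices.get(intent.device)
    if device is None:
        return f"I couldn't find a device called {intent.device}."
    if intent.name == "TurnOnIntent":
        return device.set_power(True)
    if intent.name == "TurnOffIntent":
        return device.set_power(False)
    return "Sorry, I didn't catch that."

devices = {"kitchen lights": LightSwitch("kitchen lights")}
print(handle(Intent("TurnOnIntent", "kitchen lights"), devices))
```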

 

"[Sound is] more than just turning up a dial on a stereo. [It] is how you engage your environment, and how it engages you back.”

Mike Dennison, President of the Consumer Technologies Group, Flex

 

 

 

The more AI becomes commonplace in our lives, the more we will use sound in interactions with our technology. It’s our voices and other sounds, not our hands, that are becoming the most natural interface with technology, Beckerman says. “If anyone is really going to accept a true connected-home experience in their house, we are going to have to break the tyranny of the screen, or even the touch screen,” he says. “I don’t think anyone wants to have to drop everything they are doing to look at any one of a dozen screens in their home, not when they could interact with a system through simple, unobtrusive sounds and commands.”

As we head into the next five years, Flex’s Dennison predicts that apps on our smartphones and Wi-Fi hubs will allow our appliances to talk to one another and evolve into a system in which the machines actually learn from each other. “The common language among all of them and their interactions with humans will involve sound,” he says.

The question is, which industries will lead the way?

“Right now, you have a convergence of the audio companies contemplating how they can be more connected and the computing companies thinking about how they can do more with audio,” Dennison says. “Does Apple create the best sound experience or does Bose create the most connected experience? They’re coming at it from two directions. And I think a lot of people would say the sound guys could get there faster.”