Real-Time Acoustic Processing Has Big Data Potential - InformationWeek



Ready for a wearable that listens to your snoring -- or your stomach? Meet audio machine-learning tech.

(Image: CES 2014: 8 Technologies To Watch)

You're jogging down a busy city street, cranking tunes on your smartphone, oblivious to the world around you. The intersection ahead looks clear, and you're unaware of loud sirens signaling that a speeding ambulance is coming your way. But before disaster strikes, your smartphone shuts off the music and warns you of the approaching vehicle.

This is just one of many potential uses of real-time acoustic processing, a machine-learning system that analyzes ambient audio to predict near-future outcomes. In the example above it saved a clueless jogger from being squashed like a bug, but the technology has other potential uses too. It could, for instance, detect when industrial equipment is about to fail, alert deaf people to alarms and other auditory warnings, help ornithologists analyze bird calls, and even monitor bodily sounds -- such as heartbeats, stomach rumblings, and snoring -- for use by mobile medical apps.
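To make the siren scenario concrete: one elementary way to detect a known alarm sound is to measure how much of the microphone signal's energy falls in that alarm's characteristic frequency band. The sketch below is purely illustrative and is not One Llama Labs' method; real sirens sweep in pitch and real systems use trained classifiers, but a single-band energy test (via the Goertzel algorithm, with band edges and threshold chosen arbitrarily here) shows the basic idea.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of one DFT bin, computed with the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def looks_like_siren(samples, rate):
    """Crude heuristic: is most of the energy near 700-1000 Hz?"""
    band = sum(goertzel_power(samples, rate, f) for f in (700, 850, 1000))
    total = sum(x * x for x in samples) * len(samples) / 2 + 1e-9
    return band / total > 0.5

# Synthetic test signals: a steady 850 Hz tone stands in for a siren,
# a 60 Hz hum for ordinary street rumble.
rate = 8000
siren = [math.sin(2 * math.pi * 850 * t / rate) for t in range(rate // 10)]
rumble = [math.sin(2 * math.pi * 60 * t / rate) for t in range(rate // 10)]
```

A production system would run this kind of analysis continuously on short overlapping windows of microphone input, which is exactly where the faster mobile processors mentioned below come in.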

Rapid improvements in mobile devices, most notably faster processors and longer battery life, are helping audio machine-learning technology go mainstream, says One Llama Labs, a New York City-based developer of acoustic-processing software.

[There's more to wearable tech than just smartwatches. Read Wearables To Watch At CES 2014.]

"Wearable technology is now powerful enough to do serious machine learning, even at the audio level. And that technology will change the world in terms of monitoring," said David Tcheng, One Llama Labs' cofounder and chief science officer, in a phone interview with InformationWeek.

The company's Audio Aware machine-learning app is capable of analyzing hundreds of sounds, including music, from its surroundings. It will be available this month in the Google Play store; One Llama Labs plans to develop iOS and Windows Phone versions too, but no timetable was given.

The audio technology is based on research started a decade ago at the National Center for Supercomputing Applications' Automated Learning Group (which Tcheng cofounded) at the University of Illinois at Urbana-Champaign. One Llama Labs' original focus was on music recommendation technologies -- "sort of like what Pandora does but using supercomputers," explained company cofounder and EVP of business development Hassan Miah, who joined the call.

"The core acoustic, artificial-intelligence machine learning could apply to a lot of things," said Miah. "And now with the emergence of wearable technology, the cloud, and other factors, [our] technology can be used well beyond music. So that's the genesis of how we came out with the... Audio Aware system."

The company sees three primary markets for Audio Aware on mobile devices. The first: deaf users. "They can't hear alarms and other alerts," said Tcheng. "With my previous work with audio recognition and bird-call analysis and speech recognition -- in general, machine learning -- I knew we could detect these sounds with some of the audio machine-learning software I've created."

The second group: music lovers wearing headphones. "There is an epidemic of people just walking around -- kind of like zombies -- attached to their cellphones," said Tcheng with a chuckle. "And in the worst case [they're] cranking music so loud that they can't hear common threats."

The third group: people who want to be notified of specific sounds -- for example, nature lovers or users who study birds and other wildlife in outdoor settings.

Medical applications have potential as well, although identifying bodily sounds may present its own set of technical challenges. "We've been thinking about doing a sleep apnea application, because all the system needs to learn is how to recognize a breath," said Tcheng. "But as soon as you put the microphone on a body, you pick up all sorts of bodily sounds, from heart rate to the digestion system. If you've ever heard someone's tummy, it makes all sorts of noise."

In industrial settings, audio machine-learning technology might be used to distinguish between normally functioning machines, those in need of maintenance, and those about to fail, Tcheng said.
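A minimal sketch of that industrial idea (my own illustration, not Tcheng's system): extract a couple of cheap acoustic features from a recording of a known-healthy machine, then flag later recordings whose features drift from that baseline. Real systems would use richer spectral features and learned decision boundaries; the feature choice and signals here are assumptions for demonstration.

```python
import math

def features(samples):
    """Two cheap acoustic features: RMS level and zero-crossing rate."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (rms, zcr)

def anomaly_score(sample_feats, baseline_feats):
    """Euclidean distance from the healthy-machine baseline."""
    return math.dist(sample_feats, baseline_feats)

# Synthetic stand-ins: a smooth 120 Hz hum for a healthy machine,
# the same hum plus a 3 kHz whine for one developing a fault.
rate = 8000
healthy = [math.sin(2 * math.pi * 120 * t / rate) for t in range(800)]
failing = [math.sin(2 * math.pi * 120 * t / rate)
           + 0.8 * math.sin(2 * math.pi * 3000 * t / rate)
           for t in range(800)]

baseline = features(healthy)
score_healthy = anomaly_score(features(healthy), baseline)
score_failing = anomaly_score(features(failing), baseline)
```

The added high-frequency whine raises both the loudness and the zero-crossing rate, so the failing machine scores well above the baseline while the healthy one scores zero.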

Engage with Oracle president Mark Hurd, NFL CIO Michelle McKenna-Doyle, General Motors CIO Randy Mott, Box founder Aaron Levie, UPMC CIO Dan Drawbaugh, GE Power CIO Jim Fowler, and other leaders of the Digital Business movement at the InformationWeek Conference and Elite 100 Awards Ceremony, to be held in conjunction with Interop in Las Vegas, March 31 to April 1, 2014. See the full agenda here.

Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.

User Rank: Ninja
3/10/2014 | 11:41:48 PM
Re: What else?
Yes! With this new technology, people will be free to completely ignore their surroundings and the people around them. Reality will only have to assert itself, well, when one's about to walk into a fountain.
User Rank: Author
3/10/2014 | 9:06:38 PM
Re: Analysis
Railroads already analyze audio of train wheels, using microphones on tracks to listen for growling bearings and software to spot the anomalies. But it's not real time, and it takes a person to make a judgment on the wheels deemed out of spec.
Kristin Burnham
User Rank: Author
3/10/2014 | 8:49:39 PM
What else?
If only this could warn you about those pesky fountains, too...
User Rank: Ninja
3/10/2014 | 7:34:41 PM
The potential here is really limitless! Think of the military applications - it would be impossible to surprise a soldier, because the device could be "trained" to pick up the faintest sounds and recognize them as an enemy's steps. Also medical - what is the sound of a heart about to suffer a heart attack, or of an artery about to burst?
User Rank: Ninja
3/10/2014 | 6:42:13 PM
I like the concept, but I would be concerned about this type of audio just becoming "noise" to a person. With frequency comes familiarity and a lack of cognition. 

My question is: How can this be used without becoming obtrusive, rendering it useless?