purchase of Autonomy is a host of gems that promise to take the IT industry through a realignment around human-friendly information, to paraphrase Autonomy CEO Mike Lynch. Aurasma is one such gem: this augmented reality application promises endless fun, but also a healthy dose of practical applications.
While some categorize the company as enterprise search or content management, Lynch insists that Autonomy is out in front of the pack when it comes to managing all unstructured data--and, more crucially, extracting meaning from that data in ways that allow human interaction. Aurasma packs plenty of punch behind that claim.
Aurasma starts as an app that runs on an iPhone (it also runs on Android; ironically, it doesn't run on webOS). Point the phone camera at a real-world image or object and Aurasma understands the context of the image and pulls up an accompanying video from its database of over 500,000 videos. For example, a movie advertisement in a newspaper might elicit a movie trailer; a piece of art might conjure a video of the artist talking about it. Point it at a product, and a self-help video might emerge (Aurasma showed a video demonstrating how to connect an HP ProCurve router). Point the camera at a building (or an image of a building) and get a visual tour. Users can also upload their own content from the application--images tagged to video.
The idea here is to augment life's images with even richer information, and to do it completely in context. In some ways, it's a new definition of search: understand context, see images, provide more information. This is all really just scratching the surface, of course, and it will take some time to catch on, but it's easy to see the promise--indeed, technology like Google Goggles (which uses images to drive Web searches) has been in the mainstream for a while.
Aurasma has also made progress on the 3-D content front, and some of that can be seen in the video demonstration embedded below. Matt Mills, the company's head of innovation, said it owes such progress to the speedy hardware of phones--as the devices get more powerful, the ability to do better image recognition and playback also increases.
Mills said that a year ago the software could only render 3-D models of about 300 triangles at roughly 10 frames per second; now it can handle about 70,000 triangles at 30 frames per second. With quad-core processors hitting the scene at CES, those numbers are bound to grow even more. In one demonstration Mills overlaid a 3-D dinosaur in the lobby of our meeting area, and then set me inside the frame (the result is at the end of the video embedded below).
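The jump Mills describes is easy to quantify as triangle throughput--back-of-the-envelope arithmetic, assuming one model rendered per frame:

```python
# Back-of-the-envelope throughput from the figures Mills cited.
old_tris_per_sec = 300 * 10       # 300-triangle models at 10 frames/sec
new_tris_per_sec = 70_000 * 30    # 70,000 triangles at 30 frames/sec

print(old_tris_per_sec)                      # 3000
print(new_tris_per_sec)                      # 2100000
print(new_tris_per_sec / old_tris_per_sec)   # 700.0
```

That is roughly a 700x increase in a single year, which is why faster phone hardware matters so much to this kind of app.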
While Aurasma keeps all of this data in the cloud, it also caches a certain amount on the phone, based on the subjects and channels of interest the user selects. In other words, it also works offline. Mills said the file sizes are actually pretty small, and if cached content starts to eat up too much storage, the app simply falls back to pulling the data from the cloud.
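One way to picture that behavior--keep subscribed channels' content on the phone, evict when over a storage budget, and refetch from the cloud on demand--is a size-capped, least-recently-used cache. This is a minimal sketch of that general pattern, not Aurasma's actual implementation:

```python
from collections import OrderedDict

class ChannelCache:
    """Illustrative size-capped cache: least-recently-used content is
    evicted first and would be refetched from the cloud on demand."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries: OrderedDict[str, int] = OrderedDict()  # name -> size

    def add(self, name: str, size: int) -> None:
        if name in self.entries:
            self.used -= self.entries.pop(name)
        self.entries[name] = size
        self.used += size
        # Evict the oldest entries until we fit the storage budget.
        while self.used > self.max_bytes:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size

    def get(self, name: str) -> str:
        if name in self.entries:
            self.entries.move_to_end(name)  # mark as recently used
            return "served from local cache"
        return "fetched from cloud"  # offline, this miss would fail

cache = ChannelCache(max_bytes=100)
cache.add("movie-trailer", 60)
cache.add("router-howto", 50)      # total hits 110 -> trailer is evicted
print(cache.get("router-howto"))   # served from local cache
print(cache.get("movie-trailer"))  # fetched from cloud
```

The design choice mirrored here is the one Mills hints at: local storage is treated as a disposable accelerator, with the cloud as the source of truth.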
I asked Aurasma what the business model was--after all, this is a free app, and while many apps are free, giving software away doesn't seem to fit Autonomy's (or HP's, for that matter) usual business practice. A spokeswoman responded that the company believes it has transformative technology and wants to get the application into as many hands as possible.