Google Neural Networks Make Captivating, Surreal Art - InformationWeek




Google is training neural networks -- think AI with an imagination -- to render images based on information engineers provide, and the results are fascinating.


There's a new style of art in the world, and it's called Inceptionism. Search giant Google has been training artificial neural networks -- statistical learning models inspired by the biological neural networks found in the brain -- to generate images based on information that engineers provide.

Using the example of a fork, which needs a handle and a certain number of tines, Google can feed the network that information while withholding details such as shape, size, or color, and see what the network comes up with.

The network's results show a fascinating process whereby an artificial intelligence attempts to create images from data. The company had particular trouble getting a neural network to correctly render dumbbells: the network kept attaching muscular arms to them, because the photos it was trained on nearly always showed dumbbells being lifted.

The whole notion of machine learning and AI has been making a lot of news lately. Beyond the worries it causes some people, such as Bill Gates, AI seems to be something that businesses are taking a close look at. A recent report found that AI can actually help create jobs, adding another wrinkle to the question of what IT means to enterprises.

(Image: Google)


In Google's case, however, art seems to be taking precedence over commerce.

"Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture," a team of Google software engineers explained in a June 17 Google Research Blog post. "We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance."

While this technique can be applied to any kind of image, the results vary considerably depending on the image, because the features fed in bias the network toward certain interpretations.

The network's visualizations can be fascinating and surreal, as it often tries to interpret shapes as objects based on the information it considers relevant, much the way we turn clouds into animals by imagining and recognizing familiar features.

"If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about," the team wrote in the blog post. "We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network."
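The loop the team describes is simple in outline: run gradient ascent so the chosen layer's response gets stronger, zoom in slightly, and repeat, starting from random noise. The sketch below is purely illustrative and is not Google's code: the trained network is replaced by a single hand-coded edge filter (`KERNEL`), and the `conv2d`, `ascend`, `zoom`, and `dream` helpers are hypothetical names invented for this example. Only the loop structure mirrors the blog post's description.

```python
import numpy as np

# Stand-in "layer": a single Laplacian edge filter. The real system uses a
# trained convolutional network; this toy just gives the loop something to
# amplify. (The filter is symmetric, which keeps the gradient math simple.)
KERNEL = np.array([[0.,  1., 0.],
                   [1., -4., 1.],
                   [0.,  1., 0.]])

def conv2d(img, k):
    """Valid 2-D cross-correlation with explicit loops (NumPy only)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def ascend(img, step=0.05):
    """One gradient-ascent step on 0.5 * sum(response**2), i.e. 'enhance
    whatever the layer detected'. The gradient is the layer's response
    pushed back through the (symmetric) kernel."""
    resp = conv2d(img, KERNEL)
    grad = conv2d(np.pad(resp, 2), KERNEL)   # padded back to img's shape
    img = img + step * grad / (np.abs(grad).max() + 1e-8)
    return np.clip(img, 0.0, 1.0)

def zoom(img, factor=1.1):
    """Crop the center and resize back up (nearest-neighbor)."""
    h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[np.ix_(rows, cols)]

def dream(shape=(32, 32), iterations=10, steps_per_iter=5, seed=0):
    """Start from pure random noise, then alternate ascent and zoom."""
    img = np.random.default_rng(seed).random(shape)
    for _ in range(iterations):
        for _ in range(steps_per_iter):
            img = ascend(img)
        img = zoom(img)
    return img
```

Calling `dream()` returns a texture the stand-in "layer" favors; swapping in a real convolutional network layer and its backpropagated gradient is what turns this skeleton into Inceptionism proper.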

(Image: Google)


Those images were generated purely from random noise, using a network trained on images of places by MIT's Computer Science and Artificial Intelligence Laboratory.


When that happens, the results are positively Dalí-esque: a collection of brightly colored hues and collage-like images that wouldn't look out of place under a blacklight in a college dorm room.

Google also provides high-resolution photos of these images in its Inceptionism gallery, including some fascinating variations on pointillist master Georges Seurat's famous "A Sunday Afternoon on the Island of La Grande Jatte" and expressionist icon Edvard Munch's most famous painting, "The Scream."

"The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training," the post concludes. "It also makes us wonder whether neural networks could become a tool for artists -- a new way to remix visual concepts -- or perhaps even shed a little light on the roots of the creative process in general."

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin. View Full Bio

User Rank: Ninja
6/22/2015 | 10:21:40 PM
Re: Google Neural Networks Make Captivating, Surreal Art
This is very cool. For anyone who's still a little hazy on the specifics, like I was, I'd definitely recommend checking out the full Google blog. It goes into greater detail on what they mean by a 'layered process' (earlier layers recognize corners, or edges of things, and get more detailed from there), and shows some more examples where the researchers input photographs of real-world objects and the network produced a completely different result - one neural network 'trained' on mostly images of animals produced pictures of weird spliced-together animals against what was originally a blank skyline. They still don't really go into how the network or algorithms were designed, what they mean by 'parameters', etc., as I was expecting. It's almost more of a fluff piece, which is just as nice.

The bloggers say the neural nets may "...perhaps even shed a little light on the roots of the creative process in general." Actually, this reminds me of an exercise that some artists use to help them loosen up; you draw a line (or a couple of lines) on a page without thinking. Once they're in place, you can't change them - you have to build the rest of your image around them; so, if the line looks kind of like a big, goofy nose, you draw a clown. It's also supposed to help artists not overedit their work (you accept the permanency of the line), and it got me thinking about how Google's Neural Networks may relate to our own mental processes. It, too, must accept what it's handed by the previous layer - it can't see the original image. As in the picture posted near the bottom with the arches, the software seems to latch on to certain recurring shapes, and draw what it thinks their end result should be.