Through a project called Magenta, Google's machine learning researchers hope to understand whether computer-generated music can qualify as art.

Thomas Claburn, Editor at Large, Enterprise Mobility

June 2, 2016

3 Min Read
<p style="text-align:left">(The Next Rembrandt/Microsoft/ING)</p>

Google I/O 2016: AI, VR Get Day In The Sun

Google wants to know whether machine learning can be used to create engaging music and art. To find out, the company's machine learning researchers, the Google Brain team, have released Magenta, a set of open source data models and tools, on social code repository GitHub.

"Machine learning has already been used extensively to understand content, as in speech recognition or translation," said Google research scientists Douglas Eck in a June 1 blog post. "With Magenta, we want to explore the other side -- developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own."

The project utilizes TensorFlow, the machine learning framework Google made available late last year.

Eck describes what Google has released as alpha code, and he encourages artists, coders, and machine learning researchers to come together to create art and music using the company's technology. The goal, he says, is algorithms that can generate art and music on their own.

Microsoft has already demonstrated something of the sort. The Next Rembrandt project, a collaboration between Microsoft, ING, Delft University of Technology, and Dutch museum Mauritshuis, created a portrait in the style of Rembrandt using machine learning algorithms and an advanced 3D printer.

Google wants to go further than automated imitation -- beyond teaching a computer to generate images that mimic the style of a specific artist. "It's not enough just to sample images or sequences from some learned distribution," explains Eck.
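To see what "sampling sequences from some learned distribution" means in practice, here is a minimal sketch -- not Magenta code, and far simpler than anything the Google Brain team is building -- that counts note-to-note transitions in a short melody and then samples a new sequence from those counts. The melody, the MIDI note numbers, and the sample_melody helper are all invented for this illustration.

```python
# Toy illustration (not Magenta): learn a distribution over note transitions
# from one short melody, then sample a new sequence from that distribution.
import random
from collections import defaultdict

# A short melody as MIDI note numbers (a made-up C major motif).
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# "Training": record which notes follow each note in the example melody.
transitions = defaultdict(list)
for current_note, next_note in zip(melody, melody[1:]):
    transitions[current_note].append(next_note)

def sample_melody(start_note=60, length=16, seed=None):
    """Generate a new note sequence by walking the learned transition table."""
    rng = random.Random(seed)
    notes = [start_note]
    for _ in range(length - 1):
        candidates = transitions.get(notes[-1])
        if not candidates:          # dead end: fall back to the opening note
            candidates = [start_note]
        notes.append(rng.choice(candidates))
    return notes

print(sample_melody(seed=42))
```

The output is locally plausible -- each note follows naturally from the last -- but nothing in it builds tension or pays anything off, which is exactly the shortcoming Eck is pointing to.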

Google wants to create a machine learning model that can capture attention, create surprise, and follow a narrative arc.

To do so would be a monumental achievement for machine learning. But it might also be a disaster for human employment.

Machines long ago proved superior to humans in strength. As a result, many labor-oriented jobs have become mechanized. The fact that human intellect remains necessary to orchestrate these jobs has ensured our continued employment.


If machines can match us as artists -- as musicians, writers, painters, or the like -- our most aspirational endeavor is diminished, because the mechanized generation of art destroys its value, both cultural and financial.

Art is a story we tell ourselves. It's a yarn that has been unraveled by the mockery of Marcel Duchamp and Andy Warhol, and by the mass production of Thomas Kinkade and LeRoy Neiman. But it has retained a link to singular, personal endeavor. It remains fundamentally human. It's also hard to come by, which makes it worth something, even if there's no real agreement on its definition.

If machine learning can create hit songs or beloved stories at the push of a button -- which would amount to passing the Turing test -- the end result will no longer be art. It will be commoditized content, a new flavor of the drivel dispensed by content farms to capture ad revenue.

Machines that can assist us lighten our load. Machines that can surpass us make us redundant. We're a long way off from that point, even with algorithms penning dry financial news stories. But Google should be careful what it wishes for.

(Cover image: MarsBars/iStockphoto)

About the Author

Thomas Claburn

Editor at Large, Enterprise Mobility

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful master's degree in film production. He wrote the original treatment for 3DO's Killing Time, a short story that appeared in On Spec, and the screenplay for an independent film called The Hanged Man, which he would later direct. He's the author of a science fiction novel, Reflecting Fires, and a sadly neglected blog, Lot 49. His iPhone game, Blocfall, is available through the iTunes App Store. His wife is a talented jazz singer; he does not sing, which is for the best.
