There is a question today that borders on the philosophical: can computers be creative? Whatever the answer, artificial intelligence (AI) can certainly make things that people in the world of music and art find pleasing. In that context, Google launched Magenta, a research project aimed at pushing the limits of what AI can do in the arts.
So, what actually is Magenta?
Magenta is a project from the Google Brain team that asks: can we use machine learning to create compelling art and music? If so, how?
What are its main goals?
The Magenta research project has two main goals:
First, it is a research project that aims to advance the state of the art in machine intelligence for music and art generation. Machine learning has already been used extensively to understand content, for example in speech recognition and translation. With Magenta, the team wants to explore the other side: developing algorithms that learn how to generate art and music, potentially creating content on their own that is both compelling and artistic.
Second, Magenta is an attempt to build a community of artists, coders, and machine learning researchers. The core Magenta team will use TensorFlow to build open-source infrastructure for making art and music. The team will start with support for audio and video, tools for working with formats like MIDI, and platforms that help artists connect to machine learning models.
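To make the symbolic-music side concrete, here is a minimal sketch of the kind of MIDI-style note-event representation such tools operate on. This is plain Python with no Magenta or MIDI libraries; the `Note` class and `pitch_name` helper are illustrative assumptions, not Magenta's actual API. Only the MIDI conventions themselves (note numbers, velocities) come from the MIDI standard.

```python
from dataclasses import dataclass

# MIDI-style note event: pitch is a MIDI note number (60 = middle C),
# times are in seconds, velocity is loudness (0-127).
# This class is illustrative, not part of Magenta's API.
@dataclass
class Note:
    pitch: int
    start: float
    end: float
    velocity: int = 80

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_name(pitch: int) -> str:
    """Convert a MIDI note number to a name like 'C4' (60 -> 'C4')."""
    octave = pitch // 12 - 1  # MIDI convention: note 60 is C4
    return f"{NOTE_NAMES[pitch % 12]}{octave}"

# A four-note melody: C4, E4, G4, C5, each half a second long.
melody = [Note(60 + p, i * 0.5, (i + 1) * 0.5) for i, p in enumerate([0, 4, 7, 12])]

print([pitch_name(n.pitch) for n in melody])  # ['C4', 'E4', 'G4', 'C5']
```

A symbolic format like this is what separates music generation from raw audio generation: the model manipulates discrete note events, which can then be rendered to sound by any synthesizer.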
How does Magenta compose music?
The answer lies in learning. The team is currently not spending any effort on classical AI approaches that build intelligence from hand-coded rules. "We’ve tried lots of different machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. Explaining all of those buzzwords is too much for a short answer. What I can say is that they’re all different techniques for learning by example to generate something new," the team said, according to one source.
The main goal of the team is to design algorithms that learn how to generate art and music. There has been a lot of great work in image generation from neural networks, including DeepDream from A. Mordvintsev at Google and Neural Style Transfer from L. Gatys at U. Tübingen. This area is still in its infancy, and fast progress can be expected. For anyone following machine learning closely, it is clear that this progress is already well underway. But a number of interesting questions remain:
How can we make models like these truly generative?
How can we better take advantage of user feedback?
One of the main challenges:
To start with, Magenta’s algorithm was primed with only four notes, and from there it took off, plunking out a verse and a bridge of sorts. Drum parts were added later for texture. The project’s biggest challenge, according to researchers, wasn’t getting the machine to create a tune, but making the result both surprising and compelling. "So much machine-generated music and art is good in small chunks, but lacks any sort of long-term narrative arc," wrote Magenta scientist Douglas Eck in a blog post introducing the project.
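The prime-then-generate loop can be sketched in miniature. The snippet below is not Magenta's actual model (Magenta uses neural networks such as RNNs); as a deliberately simplified stand-in it learns a first-order Markov chain of pitch transitions from one example melody, primes it with four notes, and samples a continuation. All function names are hypothetical:

```python
import random
from collections import defaultdict

def train_transitions(melody):
    """Count pitch-to-pitch transitions seen in an example melody."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, prime, length, seed=0):
    """Start from the priming notes, then sample one note at a time."""
    rng = random.Random(seed)
    out = list(prime)
    for _ in range(length - len(prime)):
        candidates = table.get(out[-1])
        if not candidates:          # dead end: fall back to the prime notes
            candidates = list(prime)
        out.append(rng.choice(candidates))
    return out

# Example melody as MIDI pitch numbers (a C-major noodle).
example = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
table = train_transitions(example)

# Prime with four notes, generate a 16-note line.
prime = [60, 62, 64, 65]
line = generate(table, prime, 16)
print(line[:4] == prime, len(line))  # True 16
```

This toy model also illustrates Eck's criticism: transition counts capture only note-to-note plausibility, so the output sounds fine in small chunks but has no long-term narrative arc. That is exactly the gap the Magenta team set out to close with more powerful sequence models.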
"We don’t know what artists and musicians will do with these new tools, but we’re excited to find out," the researcher wrote. "Looking at the history of creative tools, Daguerre and later Eastman could not have imagined what Richard Avedon or Annie Leibovitz would accomplish in photography. Surely Rickenbacker and Gibson didn’t have Jimi Hendrix or St. Vincent in mind. We believe the models that have worked so well in speech recognition, translation and image annotation will seed an exciting new crop of tools for art and music creation."
The developers will release their models and tools as open source on GitHub. The link is provided below.
For More Information: GitHub