Breakthroughs in artificial intelligence make music composition easier than ever – because a machine is doing half the work. Could computers soon go it alone?
The first testing sessions for SampleRNN – artificially intelligent software developed by the computer scientist duo CJ Carr and Zach Zukowski, AKA Dadabots – sounded more like a screamo gig than a machine-learning experiment. Carr and Zukowski hoped their program could generate full-length black metal and math rock albums after being fed small chunks of sound. The first trial consisted of encoding and feeding in a few Nirvana a cappellas. “When it produced its first output,” Carr tells me over email, “I was expecting to hear silence or noise because of an error we made, or else some semblance of singing. But no. The first thing it did was scream about Jesus. We looked at each other like, ‘What the fuck?’” But while the platform could convert Cobain’s grizzled pining into bizarre testimonies to the goodness of the Lord, it couldn’t keep a steady rhythm, much less create a coherent song.
Artificial intelligence is already used in music by streaming services such as Spotify, which scan what we listen to so they can better recommend what we might enjoy next. But AI is increasingly being asked to compose music itself – and this is the problem confronting many more computer scientists besides Dadabots.
By Tirhakah Love, via Electronic music | The Guardian