This is Sound App #2: Music for robots, created by robots, performed by a machine.
I have just met my robots. Or created them, one could say. Anyway, I really want to get to know them better, to get inside their little brains and listen to what is on their minds. Some day these guys may be the ones that dictate the future of this planet, so why not be prepared?
So I asked them about their dreams and prayers. I fed the program thirty prayers from the Bible (sorry, Christian stuff only, this time) and a lot of sounds that should be familiar to them: sounds of hard disks spinning up and running, some electronic noise. I added the sound of a vinyl record to the background of the app.
And do they pray, the little robots! – they chose outstanding prayers, great great prayers, the best prayers.
The app is here and there is a recording here.
Robots creating music for robots, producing an art manifesto. For robots. That is what Machine Manifesto is about.
I collected text from 20 manifestos – Futurist Manifesto, The Art of Noise, Dada Manifesto, COBRA, SCUM, Women’s Art, A Cyborg Manifesto (!) etc. etc…
The program chose only a handful: Dada Manifesto (Hugo Ball, 1916), OK ART Manifesto, A Manifesto and the incredible Refus Global. All of them great texts, and it’s really interesting to see the OK ART Manifesto next to Refus Global. Going from “Ok art is an OK idea” to “Make way for magic! – Make way for LOVE!”.
I also collected 40 recordings; 20 from freesound.org and 20 of my own. Mostly engines, machines of different kinds, noises. The program chose eight, maybe ten.
This is the first of its kind, I think. It might not be great, but at least it is OK! The app is here, and a recording is here.
Sound Apps is a project where I combine machine learning, interactivity, field recordings and the Web Audio API.
Robots writing music for – and with – robots.
I collect some field recordings and decide on some texts – and the program does the rest, in semi-real time. The actual machine learning program runs on my system, and this program programs another program, which will run in a browser – or in an HTML5-based app.
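The post does not show the generator itself, so the following is only a guess at the shape of a "program that programs a program": a hypothetical `generatePiece` helper that takes some chosen samples and a duration and emits browser-side source code as a string. All names and the even spacing of start times are my own illustration, not the app's actual logic.

```javascript
// Hypothetical sketch: a generator that emits a self-contained
// Web Audio snippet for a browser or HTML5-based app.
function generatePiece(sampleUrls, lengthSeconds) {
  // Spread the chosen samples evenly across the piece. The real
  // program presumably decides timing with its learned model.
  const schedule = sampleUrls.map((url, i) => ({
    url,
    at: (i * lengthSeconds) / sampleUrls.length,
  }));

  // The generated program: fetch each sample, decode it, and
  // schedule it on the shared AudioContext clock.
  return [
    'const ctx = new AudioContext();',
    `const schedule = ${JSON.stringify(schedule)};`,
    'schedule.forEach(({ url, at }) => {',
    '  fetch(url).then(r => r.arrayBuffer())',
    '    .then(b => ctx.decodeAudioData(b))',
    '    .then(buf => {',
    '      const src = ctx.createBufferSource();',
    '      src.buffer = buf;',
    '      src.connect(ctx.destination);',
    '      src.start(ctx.currentTime + at);',
    '    });',
    '});',
  ].join('\n');
}
```

The generated string could then be served as a `<script>` in the app, which is one simple way a program on my machine can "write" a program for your browser.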
So far the program only works in a browser, more specifically in Chrome on a reasonably powerful computer; not on my phone or my iPad. But I expect this program to mature and become more efficient, so that it can run in any modern browser on most platforms.
I have no clear knowledge of what the machine learning program does when it is working: how it decides on rhythm, timbre, volume, development etc. I only know what it can do, and, based on that, I can make a qualified guess at what it is doing. I think it takes bits of the text and converts them to audio (using the Google Text-to-Speech API or the meSpeak library). The timbre and structure of this audio then become the structure of the final piece – or at least of some versions of the resulting piece.
Because the program does not really make a piece of music – it creates a field of possible musical experiences. And every time you hear the music, it will be a new experience. Only two things are given: the length (a parameter I give it) and the first sentence.
This is just a test of the spatial positioning in the Web Audio API. It is not perfect, and quite expensive in CPU cycles, but it is OK. Better than many other spatial implementations. In this little app you sit in the centre (red dot) while some harmonics circle around you.
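A circling source like this can be sketched with the standard `PannerNode`. This is not the app's code, just a minimal illustration: `positionAt` and `startCirclingHarmonic` are hypothetical names, and the HRTF panning model is one choice (the convincing but CPU-hungry one).

```javascript
// Pure helper: position on a circle of `radius` metres at time t
// (seconds), completing one revolution every `period` seconds.
function positionAt(t, radius, period) {
  const angle = (2 * Math.PI * t) / period;
  return { x: radius * Math.cos(angle), z: radius * Math.sin(angle) };
}

// Browser-only part, assuming a running AudioContext `ctx`:
function startCirclingHarmonic(ctx, freq, radius = 3, period = 8) {
  const osc = ctx.createOscillator();
  osc.frequency.value = freq;

  const panner = ctx.createPanner();
  panner.panningModel = 'HRTF';    // more convincing, more CPU cycles
  panner.distanceModel = 'inverse';

  osc.connect(panner).connect(ctx.destination);
  osc.start();

  // Move the source around the listener (the red dot at the origin).
  function step() {
    const { x, z } = positionAt(ctx.currentTime, radius, period);
    panner.positionX.value = x;
    panner.positionZ.value = z;
    requestAnimationFrame(step);
  }
  step();
}
```

One such call per harmonic, each with its own period, gives several sources orbiting you at different speeds.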
The app is here.