Sound Apps is a project where I combine machine learning, interactivity, field recordings and the Web Audio API.

Robots writing music for – and with – robots.

I collect some field recordings and decide on some texts – and the program does the rest, in semi-real time. The actual machine learning program runs on my system, and this program programs another program, which will run in a browser – or in an HTML5-based app.
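To make the two-stage idea concrete, here is a minimal TypeScript sketch of how such a setup could be arranged: an offline "composer" makes the musical decisions and emits a small score that a separate browser-side player interprets. The score format, function names and file paths are my own inventions for illustration, not the project's actual code.

```typescript
// A minimal sketch of the two-stage idea, not the project's actual code:
// an offline "composer" program makes the musical decisions and emits a
// small JSON score, which a separate browser-side player later interprets.

interface ScoreEvent {
  time: number;      // seconds from the start of the piece
  sampleUrl: string; // which field recording to trigger
  gain: number;      // playback volume, 0..1
  rate: number;      // playback speed, 1 = original pitch
}

interface Score {
  durationSec: number;   // the one length parameter given to the program
  firstSentence: string; // the fixed opening text
  events: ScoreEvent[];
}

// In the real project a learned model would make these choices; here plain
// randomness stands in for it, so every run yields a different score.
function composeScore(
  durationSec: number,
  firstSentence: string,
  sampleUrls: string[],
): Score {
  const events: ScoreEvent[] = [];
  let t = 0;
  while (t < durationSec) {
    events.push({
      time: t,
      sampleUrl: sampleUrls[Math.floor(Math.random() * sampleUrls.length)],
      gain: 0.3 + 0.7 * Math.random(),
      rate: 0.5 + Math.random(),
    });
    t += 1 + Math.random() * 4; // irregular spacing between events
  }
  return { durationSec, firstSentence, events };
}

// The emitted JSON is what the generated browser program would consume.
const score = composeScore(120, "Robots writing music for robots.", [
  "recordings/harbour.wav",
  "recordings/train-station.wav",
]);
console.log(JSON.stringify(score, null, 2));
```

Since only the duration and the first sentence are fixed inputs, two runs of such a composer produce two different scores within the same field of possibilities.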

So far the program only works in a browser, more specifically in Chrome on a reasonably powerful computer; not on my phone or my iPad. But I expect the program to mature and become more efficient, so that it can run in any modern browser on most platforms.

I have no clear knowledge of what the machine learning program does while it is working: how it decides on rhythm, timbre, volume, development and so on. I only know what it can do, and, based on that, I can make a qualified guess at what it is doing. I think it takes bits of the text and converts them to audio (using the Google Text-to-Speech API or the meSpeak library). The timbre and structure of this audio then become the structure of the final piece – or at least of some versions of the resulting piece.
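If that guess is roughly right, the browser-side part might look something like the sketch below, which uses the Web Audio API to turn a speech recording's loudness curve into the overall shape of a field-recording piece. Every name, the window length and the gain mapping are assumptions made for illustration only.

```typescript
// A browser-side sketch of that guess, assuming the speech audio acts as a
// hidden "conductor": its loudness curve shapes a looping field recording.
// Not the project's real code.

async function envelopeFromSpeech(
  ctx: AudioContext,
  url: string,
  windowSec: number,
): Promise<number[]> {
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());
  const data = buffer.getChannelData(0);
  const windowSize = Math.floor(buffer.sampleRate * windowSec);
  const envelope: number[] = [];
  // Root-mean-square level per window: a crude loudness curve of the speech.
  for (let i = 0; i < data.length; i += windowSize) {
    const end = Math.min(i + windowSize, data.length);
    let sum = 0;
    for (let j = i; j < end; j++) {
      sum += data[j] * data[j];
    }
    envelope.push(Math.sqrt(sum / (end - i)));
  }
  return envelope;
}

async function playShapedBySpeech(speechUrl: string, fieldUrl: string) {
  const ctx = new AudioContext();
  const windowSec = 0.1;
  const envelope = await envelopeFromSpeech(ctx, speechUrl, windowSec);

  const fieldBuffer = await ctx.decodeAudioData(
    await (await fetch(fieldUrl)).arrayBuffer(),
  );
  const source = ctx.createBufferSource();
  source.buffer = fieldBuffer;
  source.loop = true;

  const gain = ctx.createGain();
  source.connect(gain).connect(ctx.destination);

  // One gain ramp per analysis window: the speech is never heard directly,
  // but its shape becomes the shape of the piece.
  envelope.forEach((level, i) => {
    gain.gain.linearRampToValueAtTime(level, ctx.currentTime + i * windowSec);
  });
  source.start();
}
```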

The program does not really make a single piece of music; it creates a field of possible musical experiences. Every time you hear the music, it will be a new experience. Only two things are given: the length (that is a parameter I give it) and the first sentence.
