Still recovering from Music Hack Day here in Berlin. There are already some great write-ups over at CDM and elsewhere.
This was the second MHD and apparently there is one scheduled for October 21 in Amsterdam and November 21 in Boston. The format is pretty simple: On Saturday morning hackers gather to hear short presentations from API providers. At noon, everybody starts hacking and doesn't stop hacking until noon the next day. A non-stop supply of soda, pizza, Chinese food and shower facilities provided by the hosts (SoundCloud, Ableton, Native Instruments and many others in this case) provide the backdrop for this madness.
One of the recurring themes of this event was "music as software." The idea is that, rather than music being simply a static data file that gets fed into a player, the pieces that make up what we are listening to are (at least partially) generated, composed and mixed on the fly based on a set of rules determined by context - just like software. Music doesn't "play" - it "runs" in non-deterministic real time.
The RjDj presenters call this "reactive music." As an iPhone app, RjDj gives composers access to the phone's mic, clock, location services and accelerometer and connects them up to the Pure Data API. (Personally, I love the potential behind having location services influence what you are listening to.)
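To make the "reactive" idea concrete, here is a minimal sketch of the kind of rule set such a patch might encode - mapping device context to mix parameters. The context keys and parameter names are illustrative inventions, not the RjDj or Pure Data API:

```python
def reactive_mix(context):
    """Toy reactive-music rules: derive mix parameters from device context.

    'context' keys (hour, walking_speed) are hypothetical stand-ins for the
    clock and accelerometer data a phone could supply.
    """
    params = {"tempo": 100, "filter_cutoff": 1000}
    if context["hour"] >= 22 or context["hour"] < 6:
        params["filter_cutoff"] = 400              # darker sound at night
    params["tempo"] += int(context["walking_speed"] * 10)  # walk faster, play faster
    return params

print(reactive_mix({"hour": 23, "walking_speed": 1.5}))
# {'tempo': 115, 'filter_cutoff': 400}
```

The same piece "runs" differently for a listener strolling home at midnight than for one jogging at noon - which is the whole point.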
Ben Lacker from Echo Nest presented what he called "music intelligence" engines that strive to compare music on more than just tempo, key and timing data. Their 'Analyze' API has 20 (!) parameters of timbre they use as points of comparison. They are in the midst of scraping the Web for all the music they can find and running Analyze across everything. Ben's particular specialty is their 'Remix' API, which returns a huge hierarchy of data for each Analyze result, making it a natural for chopping, time stretching and pitch shifting, but perhaps most important and forward-thinking: cataloging samples by timbre. Ben was particularly jazzed about our SamplePool/Query API and you can expect to hear more about how they are using CC/ccM content in the near future.
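The "cataloging samples by timbre" idea boils down to nearest-neighbor search over timbre vectors. This sketch is not the Echo Nest API - the catalog structure and field names are invented - but it shows the kind of query that becomes possible once every segment carries a timbre vector:

```python
import math

def timbre_distance(a, b):
    """Euclidean distance between two timbre vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_segment(query, catalog):
    """Return the catalog entry whose timbre vector is closest to the query."""
    return min(catalog, key=lambda seg: timbre_distance(query, seg["timbre"]))

# Hypothetical catalog; real Analyze data uses 20-dimensional timbre vectors.
catalog = [
    {"id": "kick_01",  "timbre": [0.9, 0.1, 0.3]},
    {"id": "snare_02", "timbre": [0.2, 0.8, 0.5]},
    {"id": "hat_03",   "timbre": [0.1, 0.2, 0.9]},
]

match = nearest_segment([0.85, 0.15, 0.35], catalog)
print(match["id"])  # kick_01
```

"Find me more samples that sound like this one" is exactly this query, run against everything the scraper has analyzed.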
Similar audio-analysis engines were presented by Charlie from Cloud Speakers and Peter from Mufin. These services are also scraping the Web, especially webzines, matching data to MusicBrainz and acting as aggregators for not only the audio, but also reviews and popularity of tracks. Mufin claims to have cataloged over 7 million pieces of music and presents them back as "sound maps" based on attributes such as 'percussive' and axes such as 'dark' vs. 'light.' Mufin is already being used as a Shazam-like service. Several of the resulting hack projects were Web mash-ups of these APIs with services such as Last.fm and Echo Nest.
Stephen from Native Instruments debuted a new MPC-like controller called Maschine that extends the potential of Reaktor and other NI products using MIDI as well as the Open Sound Control network protocol. A popular hack project was a pattern-matching game using Maschine to slice and dice sound samples from Street Fighter.
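Open Sound Control is a simple binary format carried over UDP, which is what makes it so hackable at events like this. As a rough illustration of what's on the wire, here is a minimal encoder for an OSC message with float arguments (the address "/pad/1" and port below are just examples, not anything Maschine-specific):

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Pad an OSC string with NULs to a 4-byte boundary (at least one NUL)."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode())                       # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode())    # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                       # big-endian float32
    return msg

packet = osc_message("/pad/1", 0.8)
# To actually send it, you would fire the bytes at a UDP socket, e.g.:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 57120))
```

Because it's just datagrams, any two hacks in the room can talk to each other with a dozen lines of code - no driver, no handshake.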
One of the hits of the "science fair" presentations was a toy xylophone wired into a multitude of controllers including the Monome (video at both links provided above), an iPhone (pictured above) and a mind-blowing sequencer, iLoveAcid (written in Processing 1.0), by my new hero Jakob Penca. In the context of 'music as software,' the potential of Jakob's tool is nothing short of inspiring, especially when hooked up to a powerful analytical engine such as Echo Nest. On the low end is the ability to find samples based on the timbre of other samples and wire them into iLoveAcid - on the high end, well, it is only limited by one's imagination.