The project is geared towards an implementation of a simulation-based physical modelling sound synthesis framework in a highly parallel form, using the resources at the Edinburgh Parallel Computing Centre (EPCC). The framework allows a user (a composer) to create their own, possibly very complex percussion instrument, consisting of a network of connected objects such as bars and plates, then to "play" it by generating a score of hit times, locations and strengths, and to "listen" to it by taking multichannel output from various locations on the instrument. The environment has already been prototyped in Matlab, and was used by a professional composer to generate a short piece of music, performed this year at the Digital Audio Effects conference in Como, Italy. Sound quality is excellent, but run times are quite slow: on the order of several minutes to generate a second of sound for a reasonably complex instrument configuration. It would be extremely useful to have a fast implementation.
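The cost is dominated by time stepping: each object in the instrument is simulated on its own grid, updated once per sample at the full audio rate. As a purely illustrative sketch (assuming an explicit finite-difference scheme of the kind typically used for such simulations; the single-string setup, names and parameters are mine, not the prototype's), the inner loop looks roughly like this in C:

    #include <stdio.h>
    #include <math.h>
    #include <string.h>

    #define SR     44100    /* audio sample rate (Hz)                     */
    #define STEPS  44100    /* number of time steps: one second of sound  */
    #define N      200      /* number of interior grid points             */
    #define LAMBDA 0.9      /* Courant number, must be <= 1 for stability */

    int main(void)
    {
        static double u_prev[N + 2], u_curr[N + 2], u_next[N + 2];
        const double l2 = LAMBDA * LAMBDA;
        const double pi = 3.14159265358979323846;
        const int hit_pos = N / 3;       /* grid index struck by the "mallet"   */
        const int out_pos = 2 * N / 3;   /* grid index read as the output point */
        const int hit_len = SR / 1000;   /* strike lasts roughly 1 ms           */

        for (int n = 0; n < STEPS; n++) {
            /* explicit update of every interior grid point, once per sample;
               a full instrument repeats this for every bar and plate in the
               network, plus the terms coupling them together */
            for (int i = 1; i <= N; i++)
                u_next[i] = 2.0 * u_curr[i] - u_prev[i]
                          + l2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]);

            /* raised-cosine strike injected at the start of the run */
            if (n < hit_len)
                u_next[hit_pos] += 0.5 * (1.0 - cos(2.0 * pi * n / hit_len));

            printf("%f\n", u_next[out_pos]);   /* one output sample per step */

            memcpy(u_prev, u_curr, sizeof u_curr);   /* rotate time levels */
            memcpy(u_curr, u_next, sizeof u_next);
        }
        return 0;
    }

Even this toy case performs millions of grid-point updates per second of output; scaling up to stiff bars, plates and the connections between them is what pushes run times to minutes per second of audio, and also what makes the problem a natural candidate for parallel hardware.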
Another possibility would be to use GPUs to perform audio synthesis; a colleague of mine at Helsinki University of Technology (Lauri Savioja) is looking into this with Nvidia from the point of view of audio spatialization, but I think a synthesis engine is a possibility as well, and so does he.
A further aim of the project would be to train a composer to use the system and generate a piece of multichannel music, which could potentially be performed in the near future. I have worked with composers in the past, in Matlab, but figuring out a way in which a composer could work with HPC is not obvious: the musician wants to be able to experiment immediately, not wait for jobs to be queued. In the longer term, real-time synthesis suitable for live performance is desirable; this might be an argument for looking at GPUs rather than at EPCC as a solution.