NTNU, Faculty of Humanities, Department of Music
project manager: Professor Øyvind Brandsegg
The project explores cross-adaptive processing as a drastic intervention in the modes of communication between performing musicians. Digital audio analysis and processing techniques are used to let features of one sound inform the processing of another. The actions of one performer can thus directly effect changes in the other performer’s sound, solely by means of the acoustic signal produced through normal musical expression on the instrument. The goal of the project is the development of a new performance practice, documented in audio and video through a series of studio recordings and concerts.
There is a strong focus on performance and new modes of expression within the academic environment of Music Technology at NTNU, with active research into and exploration of new musical forms enabled by custom-made instruments. The development of instruments tailored to explore new modes of performance fertilizes the musical exploration, and the insights gained from practical use inform the development of new instruments. A dedicated ensemble (T-EMP, Trondheim Electroacoustic Music Performance) has been established to deepen this knowledge. The ensemble was key to the research project “Communication and interplay in an electronically based ensemble” under the Project Programme. That project resulted in several artistic productions and crystallized some clear areas of focus for further exploration and research. It also brought technological innovation within instrument development, where convolution and granular techniques were put to new use.
We now seek a more specific intervention in the interplay between two acoustic instrumental performers by means of cross-adaptive processing. In our context, cross-adaptive processing means that the sounds made by one performer directly influence the sonic character of the other, and vice versa: for example, the balance between noise and tone in a saxophone sound might directly control the size and amount of artificial reverberation applied to a singer. In our previous research, live processing was approached as an instrumental and performative technique, involving the electronic/computer performer directly in the realtime dialogue with an acoustic instrumentalist. In the current project, the electronic/computer artist’s role will be focused more on designing a situation and an environment for the two acoustic instrumental performers, one in which the rules and modes of communication between the two have been radically transformed.
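The saxophone-to-reverb example above can be sketched in code. The following is a minimal offline illustration, not the project's actual implementation: the noise/tone balance is estimated as spectral flatness (geometric mean over arithmetic mean of the magnitude spectrum, near 1 for noise and near 0 for a pure tone), and that value is mapped to the wet level and tail length of a crude decaying-noise convolution reverb applied to the other signal. All function names, the frame size, and the reverb itself are assumptions made for the sketch.

```python
import numpy as np

def spectral_flatness(frame):
    # Noise/tone measure: geometric mean / arithmetic mean of the
    # magnitude spectrum. ~1 for broadband noise, ~0 for a pure tone.
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12
    return np.exp(np.mean(np.log(mag))) / np.mean(mag)

def cross_adaptive_reverb(analysis_sig, target_sig, sr=44100, frame=1024):
    """Hypothetical mapping: the noisier the analysis signal (e.g. the
    saxophone), the more reverb is applied to the target signal (e.g.
    the singer). Processes frame by frame, offline."""
    out = np.zeros(len(target_sig) + sr)  # headroom for the reverb tail
    n = min(len(analysis_sig), len(target_sig))
    for start in range(0, n - frame, frame):
        wet = spectral_flatness(analysis_sig[start:start + frame])
        # Crude "reverb": convolve the frame with a short burst of
        # exponentially decaying noise; flatness also stretches the tail.
        ir_len = int(0.05 * sr + wet * 0.5 * sr)
        ir = np.random.randn(ir_len) * np.exp(-np.linspace(0, 8, ir_len))
        ir /= np.sqrt(np.sum(ir ** 2))
        dry = target_sig[start:start + frame]
        tail = np.convolve(dry, ir)
        out[start:start + frame] += (1 - wet) * dry
        out[start:start + len(tail)] += wet * 0.5 * tail
    return out
```

A real-time version would replace the offline loop with block-based analysis feeding a running reverb, but the mapping step (one signal's feature modulating another signal's effect parameter) is the essence of the cross-adaptive idea.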