Real time is particularly challenging in computer music, where the handling of low-level audio streams, corresponding to periodic hard real-time processes, must be synchronized with asynchronous, sporadic control events.
In this context, real-time musical interaction with a musician raises even harder challenges: the management of both events and durations specified in parallel temporal frames that are only partially aligned, such as the musician's own temporal coordinate, the temporal specification given in a score, and physical time.
These issues are at the core of the Antescofo research project. The Antescofo system couples a machine listening module with a synchronous, temporized action language. The listening machine follows the performance of a musician and locates its position in the score in real time. The action language manages the temporal dependencies between the actions triggered by the musical events recognized by the listening module.
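The coupling described above can be illustrated with a minimal sketch: a listener reports recognized score events together with an estimated tempo, and pending actions, whose delays are written in beats, are scheduled in physical time at that tempo. All names and structures here are hypothetical illustrations, not Antescofo's actual language or API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    delay_beats: float   # delay relative to the triggering event, in beats
    message: str         # e.g. a command sent to a sound-synthesis engine

def beats_to_seconds(beats: float, tempo_bpm: float) -> float:
    """Convert a musical duration (beats) into physical time (seconds)."""
    return beats * 60.0 / tempo_bpm

def schedule(actions: list[Action], tempo_bpm: float) -> list[tuple[float, str]]:
    """For one recognized score event, return (seconds, message) pairs
    giving when each attached action should fire at the estimated tempo."""
    return [(beats_to_seconds(a.delay_beats, tempo_bpm), a.message)
            for a in actions]

# Two actions attached to a single score event, at an estimated 120 BPM:
plan = schedule([Action(0.0, "start_harmonizer"),
                 Action(2.0, "stop_harmonizer")], tempo_bpm=120.0)
print(plan)  # [(0.0, 'start_harmonizer'), (1.0, 'stop_harmonizer')]
```

The key point the sketch conveys is that durations are specified in a musical frame (beats) and only mapped to physical time at the last moment, using the tempo currently estimated from the musician's playing.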
The presentation will address the models of time at work in Antescofo and their synchronization, the management of events and durations, the estimation of the musician's own tempo, and the architecture of the system.
The Antescofo approach has been validated through many pieces by composers such as J. Harvey, M. Stroppa, P. Manoury, P. Boulez… Antescofo, inspired by synchronous reactive languages such as Esterel, has opened new research problems by explicitly considering the human in the loop, and new perspectives in real-time musical interaction and mixed music.