Unrealtime improv - Open Cycle Collaborations
- Submitting institution
-
Canterbury Christ Church University
- Unit of assessment
- 33 - Music, Drama, Dance, Performing Arts, Film and Screen Studies
- Output identifier
- U33.010
- Type
- J - Composition
- Month
- -
- Year
- 2020
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
-
- Research group(s)
-
-
- Proposed double-weighted
- Yes
- Double-weighted statement
- The project unfolded over six years (2014-2020), during which international collaborators contributed both musically and conceptually to the creation and design of notation, interpretation, and interaction materials. These creative materials formed stages in an open cycle of variations, snapshots of which were captured in performances and releases. Managing these collaborations, cross-pollinating approaches, and shaping the various outputs into presentable and performable entities constituted a lengthy, complex, and multi-layered process.
- Reserve for an output with double weighting
- No
- Additional information
- Unrealtime is a technologically mediated concept for improvisation and composition, developed and expressed through practice outputs in the form of an album release, a website and a book chapter. The research was produced in collaboration with performers Nick Roth, Pavlos Antoniadis and Luis Tabuenca. Unrealtime grew out of studio-based improvisational practice, combining acoustic and digital means (Max, physical controllers) to invent audio-collages formed of multiple time-resolutions. With a capacity to reveal new potentialities when situated within various modes of interactive improvisation, the concept has been explored in a diverse range of collaborative contexts.
Unrealtime’s distinct approach lies in its ‘out-of-time’ process of music-making, creating and exploring results that lie beyond the scope of an acoustic improviser’s real-time recall of physically stored mechanical gestures, or of a composer’s invention in suspended time. Since the process embodies both the learning of audio gestures and their reformation into new syntactical relationships, can it spur a learning process of embodiment, using notational forms of spatiotemporal indeterminacy, in which the performer models the behaviour of the Unrealtime interface? From the reverse perspective, can a fixed-notation transcription of an Unrealtime audio composition reveal compositional forms that are mutable and externally applicable?
Expanding on current and past practice on gesture-based digital audio improvisation, Unrealtime’s interface design for fast and highly reactive multiple audio-timeline access yields a high diversity of output audio-gestures through a minimum of input performer-gestures. The resulting improvised audio-collages spawn new compositional practices that seek to both model and challenge the performers’ gestural behaviour through a hybrid notational system that combines fixed parts with elements of directed improvisation. Further collaborative possibilities have been explored through networked improvisation.
Unrealtime has received financial and in-kind support from Canterbury Christ Church University; ACE; the Arts Council of Ireland; Iklectik (venue, London); and GREAM (University of Strasbourg). https://panosghikas.com/unrealtime/#UNREALTIME
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -