dc.date.accessioned | 2017-09-24T13:27:47Z | |
dc.date.available | 2017-09-24T13:27:47Z | |
dc.date.created | 2017-09-19T11:20:40Z | |
dc.date.issued | 2017 | |
dc.identifier.citation | Martin, Charles Patrick; Ellefsen, Kai Olav; Tørresen, Jim. Deep Models for Ensemble Touch-Screen Improvisation. Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences. 2017. Association for Computing Machinery (ACM) | |
dc.identifier.uri | http://hdl.handle.net/10852/58517 | |
dc.description.abstract | For many, the pursuit and enjoyment of musical performance go hand-in-hand with collaborative creativity, whether in a choir, jazz combo, orchestra, or rock band. However, few musical interfaces use the affordances of computers to create or enhance ensemble musical experiences. One possibility for such a system would be to use an artificial neural network (ANN) to model the way other musicians respond to a single performer. Some forms of music have well-understood rules for interaction; however, this is not the case for free improvisation with new touch-screen instruments, where styles of interaction may be discovered in each new performance. This paper describes an ANN model of ensemble interactions trained on a corpus of such ensemble touch-screen improvisations. The results show realistic ensemble interactions, and the model has been used to implement a live performance system where a performer is accompanied by the predicted and sonified touch gestures of three virtual players.
This is a chapter from the Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences. © 2017 Association for Computing Machinery (ACM) | |
dc.language | EN | |
dc.publisher | Association for Computing Machinery (ACM) | |
dc.title | Deep Models for Ensemble Touch-Screen Improvisation | |
dc.type | Chapter | |
dc.creator.author | Martin, Charles Patrick | |
dc.creator.author | Ellefsen, Kai Olav | |
dc.creator.author | Tørresen, Jim | |
cristin.unitcode | 185,15,5,42 | |
cristin.unitname | Forskningsgruppe for robotikk og intelligente systemer | |
cristin.ispublished | true | |
cristin.fulltext | postprint | |
dc.identifier.cristin | 1495267 | |
dc.identifier.bibliographiccitation | info:ofi/fmt:kev:mtx:ctx&ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.btitle=Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences&rft.spage=&rft.date=2017 | |
dc.identifier.pagecount | 337 | |
dc.identifier.doi | http://dx.doi.org/10.1145/3123514.3123556 | |
dc.identifier.urn | URN:NBN:no-61228 | |
dc.subject.nvi | VDP::Datateknologi: 551 | |
dc.type.document | Bokkapittel | |
dc.type.peerreviewed | Peer reviewed | |
dc.source.isbn | 978-1-4503-5373-1 | |
dc.identifier.fulltext | Fulltext https://www.duo.uio.no/bitstream/handle/10852/58517/1/AM2017-deep-models-for-ensemble-performance-author-version.pdf | |
dc.type.version | AcceptedVersion | |
cristin.btitle | Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences | |