
dc.date.accessioned: 2017-09-24T13:27:47Z
dc.date.available: 2017-09-24T13:27:47Z
dc.date.created: 2017-09-19T11:20:40Z
dc.date.issued: 2017
dc.identifier.citation: Martin, Charles Patrick; Ellefsen, Kai Olav; Tørresen, Jim. Deep Models for Ensemble Touch-Screen Improvisation. Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences. 2017. Association for Computing Machinery (ACM)
dc.identifier.uri: http://hdl.handle.net/10852/58517
dc.description.abstract: For many, the pursuit and enjoyment of musical performance go hand-in-hand with collaborative creativity, whether in a choir, jazz combo, orchestra, or rock band. However, few musical interfaces use the affordances of computers to create or enhance ensemble musical experiences. One possibility for such a system would be to use an artificial neural network (ANN) to model the way other musicians respond to a single performer. Some forms of music have well-understood rules for interaction; however, this is not the case for free improvisation with new touch-screen instruments where styles of interaction may be discovered in each new performance. This paper describes an ANN model of ensemble interactions trained on a corpus of such ensemble touch-screen improvisations. The results show realistic ensemble interactions, and the model has been used to implement a live performance system where a performer is accompanied by the predicted and sonified touch gestures of three virtual players. This is a chapter from the Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences. © 2017 Association for Computing Machinery (ACM)
dc.language: EN
dc.publisher: Association for Computing Machinery (ACM)
dc.title: Deep Models for Ensemble Touch-Screen Improvisation
dc.type: Chapter
dc.creator.author: Martin, Charles Patrick
dc.creator.author: Ellefsen, Kai Olav
dc.creator.author: Tørresen, Jim
cristin.unitcode: 185,15,5,42
cristin.unitname: Research group for robotics and intelligent systems (Forskningsgruppe for robotikk og intelligente systemer)
cristin.ispublished: true
cristin.fulltext: postprint
dc.identifier.cristin: 1495267
dc.identifier.bibliographiccitation: info:ofi/fmt:kev:mtx:ctx&ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.btitle=Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences&rft.spage=&rft.date=2017
dc.identifier.pagecount: 337
dc.identifier.doi: http://dx.doi.org/10.1145/3123514.3123556
dc.identifier.urn: URN:NBN:no-61228
dc.subject.nvi: VDP::Computer technology (Datateknologi): 551
dc.type.document: Book chapter (Bokkapittel)
dc.type.peerreviewed: Peer reviewed
dc.source.isbn: 978-1-4503-5373-1
dc.identifier.fulltext: Fulltext https://www.duo.uio.no/bitstream/handle/10852/58517/1/AM2017-deep-models-for-ensemble-performance-author-version.pdf
dc.type.version: AcceptedVersion
cristin.btitle: Proceedings of the 12th International Audio Mostly Conference: Augmented and Participatory Sound and Music Experiences
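
The abstract above describes the approach only at a high level: a neural network, trained on a corpus of ensemble touch-screen improvisations, predicts the touch gestures of accompanying virtual players. As a purely illustrative sketch, not the authors' implementation, the following shows what such a gesture-prediction step could look like with a recurrent network in Keras. The gesture-vocabulary size, window length, and layer sizes are invented for illustration.

    # Hypothetical sketch, not the paper's code: an LSTM that maps a lead
    # performer's recent gesture classes to a distribution over the next
    # gesture of one accompanying "virtual player".
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    N_GESTURES = 9   # assumed size of the touch-gesture vocabulary
    SEQ_LEN = 32     # assumed length of the gesture-history window

    model = Sequential([
        Embedding(N_GESTURES, 16),                # gesture IDs -> vectors
        LSTM(64),                                 # summarize recent history
        Dense(N_GESTURES, activation="softmax"),  # next-gesture distribution
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

    # Training would slide windows over the corpus gesture sequences:
    #   model.fit(X, y)  # X: (n, SEQ_LEN) gesture IDs, y: next-gesture IDs

    # In live performance, sample the virtual player's next gesture:
    history = np.random.randint(0, N_GESTURES, size=(1, SEQ_LEN))  # stand-in
    probs = model.predict(history, verbose=0)[0]
    next_gesture = np.random.choice(N_GESTURES, p=probs / probs.sum())
    print(next_gesture)

Sampling from the softmax output, rather than always taking the most likely class, keeps a virtual player from locking into a single response, which matters in free improvisation where interaction styles vary between performances; per the abstract, the actual system predicts gestures for three virtual players and sonifies them.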

