
dc.contributor.author: Askedalen, Benjamin Kløw
dc.date.accessioned: 2022-08-24T22:03:10Z
dc.date.available: 2022-08-24T22:03:10Z
dc.date.issued: 2022
dc.identifier.citation: Askedalen, Benjamin Kløw. Generating Live Interactive Music Accompaniment Using Machine Learning. Master thesis, University of Oslo, 2022
dc.identifier.uri: http://hdl.handle.net/10852/95694
dc.description.abstract: Practicing a musical instrument can be experienced as repetitive and boring, which is often a major barrier to starting to play music. With the addition of digital tools for composing, producing, practicing and sharing music, music-making has become much more accessible, and with the rapid advances in machine learning, it is natural that these techniques are also introduced into musical tools. This project aimed to create the basis for an application that helps musicians practice improvisation and musical interplay by generating live interactive accompaniment to a human player. A deep learning model was developed that uses two Long Short-Term Memory (LSTM) networks to generate polyphonic accompaniment for several instruments from a single input melody. This compound model consists of one network trained to generate a fitting chord progression for the melody, and one network that uses the chord progression together with the melody to generate polyphonic music. The model was tested at a very low tempo with live music input and showed clear signs of adapting to what the user was playing. Because the implemented model was not fast enough to run at full speed, musical analysis was instead performed on accompaniment samples generated by the model for static melodies. The generated music was somewhat monotonous, likely due to data imbalance, but some interesting passages generated by the model are described in this thesis. Additionally, a baseline LSTM model was used to determine whether the proposed solution generated better music than a single, straightforward LSTM. The models performed similarly when evaluated objectively through model accuracy, but musical analysis concluded that the compound model generated more meaningful and functional music.
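The compound architecture the abstract describes — one LSTM mapping melody to a chord progression, a second LSTM mapping melody plus chords to polyphonic output — can be sketched as two chained recurrent networks. The following is a toy illustration only: `TinyLSTM`, the layer sizes, and the one-hot encodings are all made-up assumptions for the sketch, not the thesis's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM with a linear output head (illustration only)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.n_hidden = n_hidden
        # Combined weights for the input, forget, cell, and output gates.
        self.W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
        self.b = np.zeros(4 * n_hidden)
        self.W_out = rng.standard_normal((n_out, n_hidden)) * 0.1

    def forward(self, xs):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        outputs = []
        for x in xs:  # one time step per row of xs
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            outputs.append(self.W_out @ h)
        return np.array(outputs)

# Two-stage compound model: melody -> chords, then (melody + chords) -> accompaniment.
N_PITCH, N_CHORD, HIDDEN = 12, 24, 16            # toy sizes, not the thesis's
chord_net = TinyLSTM(N_PITCH, HIDDEN, N_CHORD)
accomp_net = TinyLSTM(N_PITCH + N_CHORD, HIDDEN, N_PITCH)

melody = np.eye(N_PITCH)[[0, 4, 7, 4]]            # toy one-hot melody (C E G E)
chords = chord_net.forward(melody)                # per-step chord scores, shape (4, 24)
accomp = accomp_net.forward(np.concatenate([melody, chords], axis=1))
print(accomp.shape)                               # one pitch-activation vector per step
```

The key design point mirrored here is the chaining: the second network sees both the melody and the first network's chord output at every time step, so harmonic context conditions the polyphonic generation.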
dc.language.iso: eng
dc.subject:
dc.title: Generating Live Interactive Music Accompaniment Using Machine Learning
dc.type: Master thesis
dc.date.updated: 2022-08-25T22:01:17Z
dc.creator.author: Askedalen, Benjamin Kløw
dc.identifier.urn: URN:NBN:no-98197
dc.type.document: Masteroppgave
dc.identifier.fulltext: Fulltext https://www.duo.uio.no/bitstream/handle/10852/95694/1/UiO_Master_Thesis_benjamas.pdf

