Abstract
Practicing a musical instrument can be experienced as repetitive and boring, and this is often a major barrier for people who want to start playing music. Digital tools for the composition, production, practice and sharing of music have made it far more accessible, and with the rapid advances in machine learning it is natural that these techniques are also introduced into musical tools. The goal of this project was to create the basis for an application that helps musicians practice improvisation and musical interplay by generating live, interactive accompaniment to a human player. A deep learning model was developed that uses two Long Short-Term Memory (LSTM) networks to generate polyphonic accompaniment for several instruments from a single input melody. This compound model consists of one network trained to generate a fitting chord progression for the melody, and one network that uses the chord progression together with the melody to generate polyphonic music. The model was tested at a very low tempo with live music input and showed clear signs of adapting to what the user was playing. Because the implementation was not fast enough to test the application at full speed, musical analysis was instead performed on accompaniment samples generated by the model for static melodies. The generated music was somewhat monotonous, likely due to data imbalance, but some interesting passages produced by the model are described in this thesis. Additionally, a baseline LSTM model was used to determine whether the proposed solution generated better music than a single, straightforward LSTM. The two models performed similarly when evaluated objectively through model accuracy, but musical analysis indicated that the compound model generated more meaningful and functional music.