Abstract
Advanced biophysical neuron models have recently been developed that accurately reproduce the detailed input-output relationships of real neurons. However, these models are computationally expensive, which may limit their applicability to modelling large-scale neuronal systems. This thesis introduces three Multi-Task Learning (MTL) methods to address this computational challenge; the methods were compared using loss and diversity metrics. We found that the Multi-gate Mixture of Experts (MMoE) and the Multi-gate Mixture of Experts with Exclusivity (MMoEEx) best predicted compartmental voltage values from a biophysical Layer 5b Pyramidal Cell (L5b PC) neuron model, whereas the Multi-task Hard-parameter sharing (MH) method performed worse than both. We also incorporated the Loss-Balanced Task Weighting (LBTW) algorithm into our MTL methods to improve prediction of the neuron model's spiking behaviour; however, none of our models predicted spike initiation, likely because the spiking task is too dissimilar from the compartmental voltage tasks for our MTL methods to learn both simultaneously.