Multi-task machine learning trains a single model on multiple tasks at once, seeking better performance and efficiency than multiple single-task models trained individually on each task. When such a multi-task model is trained on multiple unrelated tasks, performance can degrade significantly, since unrelated tasks often produce gradients that vary widely in direction. These conflicting gradients may destructively interfere with one another, causing weights learned while training on some tasks to become unlearned while training on others. This research selects three existing methods for mitigating this problem: Projecting Conflicting Gradients (PCGrad), Modulation Module, and Language-Specific Subnetworks (LaSS). It explores how different combinations of these methods affect the performance of a convolutional neural network on a multi-task image classification problem. The benchmark uses a dataset of 4,503 leaf images to define two separate tasks: the classification of plants and the detection of disease from leaf images. Experimental results on this problem show performance benefits over individual mitigation methods, with a combination of PCGrad and LaSS obtaining a task-averaged F1 score of 0.84686. This combination outperforms the individual mitigation approaches by 0.01870, 0.02682, and 0.02434 in F1 score for PCGrad, Modulation Module, and LaSS, respectively.
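The conflicting-gradient interference described above is what PCGrad targets: when two task gradients have a negative inner product, PCGrad projects one onto the normal plane of the other, removing the conflicting component. A minimal NumPy sketch of that projection step, assuming flattened per-task gradient vectors (function and variable names are illustrative, not taken from this work's implementation):

```python
import numpy as np

def pcgrad(grads, seed=0):
    """Apply PCGrad-style gradient surgery to a list of per-task
    gradient vectors and return their sum.

    Each task's gradient is projected onto the normal plane of any
    other task's gradient it conflicts with (negative dot product).
    """
    projected = [g.astype(float).copy() for g in grads]
    rng = np.random.default_rng(seed)
    for i, g_i in enumerate(projected):
        # Visit the other tasks in random order, as in the PCGrad algorithm.
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)
        for j in others:
            g_j = grads[j]
            dot = g_i @ g_j
            if dot < 0:  # conflicting: remove the component along g_j
                g_i -= dot / (g_j @ g_j) * g_j
    # Combine the de-conflicted gradients into a single update direction.
    return np.sum(projected, axis=0)
```

For example, with the conflicting gradients `[1, 0]` and `[-1, 1]`, each projected gradient becomes orthogonal to the other task's original gradient, so neither task's update pushes directly against the other.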
Copyright © 2024