Comparative Evaluation of Parameter-Efficient Fine-Tuning Strategies for Continual Image Classification Nancy Agarwal; Alok Singh Chauhan; Patrick Bours
Advance Sustainable Science Engineering and Technology Vol. 8 No. 2 (2026): February-April
Publisher: Science and Technology Research Centre, Universitas PGRI Semarang

DOI: 10.26877/asset.v8i2.2787

Abstract

Catastrophic forgetting remains a major challenge in continual transfer learning, where performance on earlier tasks degrades after sequential adaptation. While full fine-tuning updates all parameters and achieves strong performance on new tasks, it is computationally expensive and prone to forgetting. This study compares parameter-efficient fine-tuning (PEFT) methods—adapters, additive learning, side-tuning, LoRA, and zero-initialized layers—against full fine-tuning on CIFAR-100 using a two-stage protocol: task-A (classes 0–49) followed by task-B (classes 50–99), evaluated on ResNet-18 and ResNet-50. Results are reported as mean ± standard deviation over three runs (n = 3), with retention measured using a swap-back recall protocol that isolates true forgetting (Δ). Across both architectures, all PEFT methods fully retain task-A knowledge (Δ = 0.00), while full fine-tuning exhibits forgetting (Δ = 0.31 on ResNet-18; Δ = 0.20 on ResNet-50). PEFT methods achieve competitive task-B performance while updating only 0.22–4.49% of parameters. Notably, LoRA on ResNet-50 achieves the highest task-B accuracy (0.82) with only 0.93% of parameters updated and no forgetting, slightly outperforming full fine-tuning (0.81). These findings highlight PEFT as an efficient and stable alternative for scalable continual transfer learning.
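The LoRA result the abstract reports (high task-B accuracy with under 1% of parameters updated and a frozen backbone) rests on a low-rank additive update to frozen weights. The sketch below illustrates that mechanism; the class name, rank, dimensions, and initialization scale are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

class LoRALinear:
    """Hedged sketch of a LoRA-style layer: a frozen weight W plus a
    trainable low-rank correction B @ A (rank and sizes are assumptions)."""

    def __init__(self, d_in, d_out, rank=4, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.02   # trainable down-projection
        self.B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init
        self.scale = scale

    def forward(self, x):
        # Base output plus low-rank correction; because B starts at zero,
        # the adapted layer initially matches the frozen backbone exactly,
        # so task-A behaviour is preserved at the start of task-B training.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_fraction(self):
        # Fraction of parameters that would be updated (A and B only).
        total = self.W.size + self.A.size + self.B.size
        return (self.A.size + self.B.size) / total

layer = LoRALinear(d_in=512, d_out=100, rank=4)
x = np.ones(512)
# Zero-initialized B: the adapted forward pass equals the frozen one.
assert np.allclose(layer.forward(x), layer.W @ x)
print(f"trainable fraction: {layer.trainable_fraction():.4f}")
```

Only `A` and `B` receive gradients during task-B training, which is why the backbone's task-A representation cannot drift; the sub-1% trainable fractions in the abstract arise because the rank is small relative to the layer dimensions.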