
Found 3 Documents

A Hybrid Deep-Learning and Evolutionary Feature-Selection Framework for Skin Lesion Classification: Application to Monkeypox Detection
Nidhi Chauhan; Alok Singh Chauhan
Advance Sustainable Science Engineering and Technology Vol. 8 No. 1 (2026): November - January
Publisher : Science and Technology Research Centre Universitas PGRI Semarang

DOI: 10.26877/asset.v8i1.2786

Abstract

The recent resurgence of monkeypox has highlighted the urgent need for fast and accurate diagnostic tools. In this paper, we propose a hybrid deep-learning framework that combines DenseNet121 and MobileNetV2 to extract rich, complementary attributes from skin lesion images. Concatenating the feature outputs of these two models yields a representation that is both lightweight and expressive. To refine the feature set, we apply a Genetic Algorithm (GA), which reduces dimensionality and eliminates redundancy. The optimized features are then classified with a Random Forest model, chosen for its strong performance and its capacity to handle high-dimensional data. We evaluated our approach on two publicly accessible datasets, MSID and MSLD, and obtained classification accuracies of 92.71% and 97.77%, respectively. These findings highlight the effectiveness of combining deep learning, evolutionary optimization, and ensemble learning for accurate monkeypox diagnosis from medical images.
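As a rough illustration of the GA-based feature selection the abstract describes (not the authors' code; the synthetic data, population size, and fitness function are all illustrative assumptions, with a nearest-centroid score standing in for the paper's Random Forest), a GA can evolve binary masks over a fused feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for fused DenseNet121 + MobileNetV2 embeddings:
# 200 samples, 40 features, only the first 5 carry class signal.
n, d = 200, 40
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :5] += y[:, None] * 1.5  # informative features

def fitness(mask):
    """Nearest-centroid training accuracy on the selected features
    (a cheap proxy for the Random Forest used in the paper),
    minus a small penalty favoring compact feature subsets."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * mask.sum() / d

# Simple generational GA over binary feature masks.
pop = rng.integers(0, 2, size=(30, d))
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(d) < 0.02                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected feature indices:", np.flatnonzero(best))
```

The penalty term in the fitness is what drives dimensionality reduction: two masks with equal accuracy are ranked by how few features they keep.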
Deep Learning-Based Classification of Cognitive Workload Using Functional Connectivity Features
Vineeta Khemchandani; Alok Singh Chauhan; Shahnaz Fatima; Jalauk Singh Maurya; Abhay Singh Rathaur; Narendra Kumar Sharma; Daya Shankar Srivastava; Vugar Abdullayev
Advance Sustainable Science Engineering and Technology Vol. 8 No. 1 (2026): November - January
Publisher : Science and Technology Research Centre Universitas PGRI Semarang

DOI: 10.26877/asset.v8i1.2833

Abstract

Cognitive workload plays a vital role in tasks that demand dynamic decision-making, especially under high-risk and time-sensitive conditions. Excessive workload can lead to unexpected and disproportionate risks, whereas insufficient workload may cause disengagement, undermining task performance. This underscores the importance of maintaining an optimal level of mental focus in high-pressure situations to ensure successful task execution. This study leverages deep learning methods alongside functional connectivity measures to classify cognitive workload levels. Using the N-back EEG dataset, functional connectivity metrics such as Phase Locking Value (PLV), Phase Lag Index (PLI), and Coherency are extracted after data pre-processing. These metrics, which may be directed or non-directed, enable efficient computational analysis. A convolutional neural network (CNN) classifier is employed to categorize cognitive workload into three levels: low (0-back), medium (2-back), and high (3-back). The CNN-A architecture achieves peak performance with an accuracy of 93.75% using PLV, 87.5% using Coherency, and 68.75% using PLI.
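To make the two phase-based connectivity metrics concrete, here is a minimal sketch (not the paper's pipeline; the signals are synthetic and the phase series are assumed to be already extracted, e.g. via a Hilbert transform in a real EEG workflow). PLV measures how consistent the phase difference between two channels is; PLI measures how consistently one channel leads or lags the other:

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase Locking Value: magnitude of the mean phase-difference vector.
    1.0 = perfectly locked phases, ~0 = no consistent phase relation."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

def pli(phase_a, phase_b):
    """Phase Lag Index: consistency of the *sign* of the phase difference,
    which discounts zero-lag coupling such as volume conduction."""
    return np.abs(np.mean(np.sign(np.sin(phase_a - phase_b))))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
phase1 = 2 * np.pi * 10 * t                        # 10 Hz oscillation
phase2_locked = phase1 + 0.5                       # constant lag -> PLV, PLI ~ 1
phase2_random = rng.uniform(0, 2 * np.pi, t.size)  # no relation -> PLV, PLI ~ 0

print(round(plv(phase1, phase2_locked), 3))  # → 1.0
print(round(plv(phase1, phase2_random), 3))
```

In a real pipeline these scalar values, computed for every channel pair and frequency band, form the connectivity matrices fed to the CNN classifier.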
Comparative Evaluation of Parameter-Efficient Fine-Tuning Strategies for Continual Image Classification
Nancy Agarwal; Alok Singh Chauhan; Patrick Bours
Advance Sustainable Science Engineering and Technology Vol. 8 No. 2 (2026): February-April
Publisher : Science and Technology Research Centre Universitas PGRI Semarang

DOI: 10.26877/asset.v8i2.2787

Abstract

Catastrophic forgetting remains a major challenge in continual transfer learning, where performance on earlier tasks degrades after sequential adaptation. While full fine-tuning updates all parameters and achieves strong performance on new tasks, it is computationally expensive and prone to forgetting. This study compares parameter-efficient fine-tuning (PEFT) methods—adapters, additive learning, side-tuning, LoRA, and zero-initialized layers—against full fine-tuning on CIFAR-100 using a two-stage protocol: task-A (classes 0–49) followed by task-B (classes 50–99), evaluated on ResNet-18 and ResNet-50. Results are reported as mean ± standard deviation over three runs (n = 3), with retention measured using a Swapback-based recall method that distinguishes true forgetting (Δ). Across both architectures, all PEFT methods maintain task-A knowledge (Δ = 0.00), while full fine-tuning exhibits forgetting (Δ = 0.31 on ResNet-18; Δ = 0.20 on ResNet-50). PEFT methods achieve competitive task-B performance while updating only 0.22–4.49% of parameters. Notably, LoRA on ResNet-50 achieves the highest task-B accuracy (0.82) with only 0.93% parameter updates and no forgetting, slightly outperforming full fine-tuning (0.81). These findings highlight PEFT as an efficient and stable alternative for scalable continual transfer learning.
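The LoRA result above rests on a simple mechanism: the pretrained weight stays frozen while a low-rank update is learned beside it. A minimal numpy sketch (illustrative only; the layer shape, rank, and alpha are assumptions, not the paper's settings) shows both the zero-init identity property and why the trainable fraction is so small:

```python
import numpy as np

rng = np.random.default_rng(2)
d_out, d_in, r = 512, 512, 8   # hypothetical layer shape and LoRA rank

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (zero init)

def forward(x, alpha=16):
    # Frozen path plus scaled low-rank update; with B = 0 the
    # adapted layer initially matches the pretrained one exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)    # identity at initialization

trainable = A.size + B.size              # only A and B receive gradients
total = W.size + trainable
print(f"trainable fraction: {100 * trainable / total:.2f}%")
```

Because W is never updated, the original task's function is recoverable by dropping the A/B path, which is the mechanical reason PEFT methods can report zero forgetting (Δ = 0.00) in the protocol above.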