Articles

Comparison of Transfer Learning Performance in Lung and Colon Classification with Knowledge Distillation
Elsa Wulandari; Annastasya Nabila; Yudhistira, Aimar; Purwono; Sharkawy, Abdel-Nasser
Journal of Advanced Health Informatics Research Vol. 2 No. 2 (2024)
Publisher : Peneliti Teknologi Teknik Indonesia

DOI: 10.59247/jahir.v2i2.289

Abstract

This research aims to apply the knowledge distillation method to medical image classification, specifically lung and colon image classification, using various transfer learning models. Knowledge distillation allows the transfer of knowledge from a larger model (teacher) to a smaller model (student), enabling more efficient models to be built without sacrificing accuracy. In this research, the DenseNet169 model is used as the teacher, while the student models use several alternative transfer learning architectures: DenseNet121, MobileNet, ResNet50, InceptionV3, and Xception. The data consist of 25,000 histopathology images that were processed and divided into training, validation, and test sets. Data augmentation was performed to enlarge the dataset from 750 to 25,000 images, which helped improve model performance. Model performance was evaluated by measuring the accuracy and loss of each student model against the teacher model. The results show that the student models produced through the knowledge distillation process performed close to, or in some cases even exceeded, the teacher model, with the Xception model achieving the highest accuracy of 96.95%. In conclusion, knowledge distillation is effective in reducing model complexity without compromising performance, which is particularly beneficial for implementation on resource-constrained devices.
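
The abstract does not spell out the distillation objective, but the standard response-based setup combines a hard-label cross-entropy term with a KL term on temperature-softened logits. The sketch below is a minimal illustration in PyTorch; the placeholder linear models, class count, temperature, and loss weighting are assumptions for demonstration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Response-based KD: blend hard-label CE with a softened-logit KL term."""
    # Hard-label term: ordinary cross-entropy against the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-softened distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude stays comparable across temperatures
    return alpha * ce + (1.0 - alpha) * kl

# Toy usage with stand-in heads (placeholder class count and feature size).
teacher = torch.nn.Linear(128, 3)   # stand-in for the teacher backbone's classifier
student = torch.nn.Linear(128, 3)   # stand-in for the student backbone's classifier
x = torch.randn(8, 128)
y = torch.randint(0, 3, (8,))
with torch.no_grad():
    t_logits = teacher(x)            # teacher is frozen during distillation
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

In practice the teacher (DenseNet169 in the paper) would be pretrained and frozen, and each student architecture (e.g., Xception) trained with a combined loss of this kind.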
Understanding Transformers: A Comprehensive Review
Rahmadhani, Berlina; Purwono, Purwono; Safar Dwi Kurniawan
Journal of Advanced Health Informatics Research Vol. 2 No. 2 (2024)
Publisher : Peneliti Teknologi Teknik Indonesia

DOI: 10.59247/jahir.v2i2.292

Abstract

Transformers have been recognized as one of the most significant innovations in the development of deep learning technology, with widespread application in Natural Language Processing (NLP), Computer Vision (CV), and multimodal data analysis. The self-attention mechanism at the core of this architecture captures global relationships in sequential and spatial data in parallel, enabling more efficient and accurate processing than Recurrent Neural Network (RNN)- and Convolutional Neural Network (CNN)-based approaches. Models such as BERT, GPT, and the Vision Transformer (ViT) have been used for a variety of tasks, including text classification, translation, object detection, and image segmentation. Although the advantages of this architecture are significant, its high computing power requirements and reliance on large datasets remain major challenges. Efforts to overcome these limitations have been made through the development of lightweight variants, such as MobileViT and the Swin Transformer, which are designed to improve efficiency without sacrificing accuracy. Further research is also directed at the application of transformers to multimodal data and specific domains, such as medical image analysis. With their high flexibility and adaptability, transformers continue to be regarded as a key component in the development of more advanced and far-reaching artificial intelligence.
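
As a concrete reference for the self-attention mechanism described above, the following NumPy sketch implements single-head scaled dot-product self-attention. It illustrates the standard formulation only and is not code from any of the reviewed models; the projection sizes and random inputs are arbitrary.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries/keys/values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token-to-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # attention-weighted mix of value vectors

# Toy usage: 4 tokens with model dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): each token's output mixes information from all tokens
```

Each output row is a weighted mixture of all value vectors, which is what lets the architecture capture global relationships across the whole sequence in a single parallel step rather than sequentially as in an RNN.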
A Comprehensive Review of Knowledge Distillation for Lightweight Medical Image Segmentation
Asmat Burhan; Purwono, Purwono
Journal of Advanced Health Informatics Research Vol. 2 No. 2 (2024)
Publisher : Peneliti Teknologi Teknik Indonesia

DOI: 10.59247/jahir.v2i2.294

Abstract

Medical image segmentation plays a crucial role in computer-aided diagnosis by enabling precise identification of anatomical and pathological structures. While deep learning models have significantly improved segmentation accuracy, their high computational complexity limits deployment in resource-constrained environments, such as mobile healthcare and edge computing. Knowledge Distillation (KD) has emerged as an effective model compression technique, allowing a lightweight student model to inherit knowledge from a complex teacher model while maintaining high segmentation performance. This review systematically examines key KD techniques, including Response-Based, Feature-Based, and Relation-Based Distillation, and analyzes their advantages and limitations. Major challenges in KD, such as boundary preservation, domain generalization, and computational trade-offs, are explored in the context of lightweight model development. Additionally, emerging trends, including the integration of KD with Transformers, Federated Learning, and Self-Supervised Learning, are discussed to highlight future directions in efficient medical image segmentation. By providing a comprehensive analysis of KD for lightweight segmentation models, this review aims to guide the development of deep learning solutions that balance accuracy, efficiency, and real-world applicability in medical imaging.
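
To make the distinction between the distillation families concrete, the sketch below shows one possible feature-based distillation term in PyTorch: an intermediate student feature map is projected to the teacher's channel width and matched with an MSE loss. The 1x1 projection, layer choice, and tensor shapes are illustrative assumptions, not a method prescribed by the review.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistiller(nn.Module):
    """Feature-based KD: align a student feature map with a teacher feature map."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # A 1x1 conv bridges the channel mismatch between student and teacher.
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        aligned = self.proj(student_feat)
        # Resize spatially if the two backbones downsample differently.
        if aligned.shape[-2:] != teacher_feat.shape[-2:]:
            aligned = F.interpolate(aligned, size=teacher_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return F.mse_loss(aligned, teacher_feat.detach())

# Toy usage: 32-channel student map vs 64-channel teacher map (shapes are made up).
distiller = FeatureDistiller(student_channels=32, teacher_channels=64)
s_feat = torch.randn(2, 32, 64, 64)
t_feat = torch.randn(2, 64, 32, 32)
loss = distiller(s_feat, t_feat)
loss.backward()
```

By contrast, a response-based term compares the models' softened per-pixel class predictions, while relation-based variants match relationships (e.g., pairwise similarities) between feature positions rather than the features themselves.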
Education on the Dangers of Cyberbullying Among Adolescents to Build Digital Awareness and Resilience at SMKN 1 Kaligondang
Purwono Purwono; Hadi Jayusman; Safa Kiana; Fakhri Zahi Mumtaza
Jurnal Arba - Multidisiplin Pengabdian Masyarakat Vol. 2 No. 1 (2025): February
Publisher : Jurnal Arba - Multidisiplin Pengabdian Masyarakat


Abstract

Cyberbullying has become a major challenge that affects the mental health of adolescents in the digital era. This community service program aims to increase the awareness and digital resilience of SMKN 1 Kaligondang students through education about the dangers of cyberbullying. Using counseling methods, interactive discussions, and case simulations, the program raises students' awareness of the various forms of cyberbullying, including its psychological and social impacts, as well as strategies for preventing and handling it wisely. The results of the activity showed an increase in students' understanding of the risks of cyberbullying and a better grasp of how to report and deal with such situations online. In addition, students showed a greater commitment to maintaining digital ethics and practicing safe social behavior in digital media. This article is expected to serve as a basis for schools in developing similar initiatives to build a generation that is more aware of and resilient to cyberbullying.