
Found 2 Documents

Multiclass Skin Lesion Classification Algorithm using Attention-Based Vision Transformer with Metadata Fusion
Furqan, Mhd.; Katuk, Norliza; Hartama, Dedy
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1017

Abstract

Early and accurate classification of skin lesions is essential for timely diagnosis and treatment of skin cancer. This study presents a novel multiclass classification framework that integrates dermoscopic images with clinical metadata using an attention-based Vision Transformer (ViT) architecture. The proposed model incorporates a mutual-attention fusion mechanism to jointly learn from visual and tabular inputs, augmented by a class-aware metadata encoder and imbalance-sensitive loss function. Training was conducted using the HAM10000 dataset over 30 epochs with a batch size of 32, utilizing the Adam optimizer and a learning rate of 0.0001. The model demonstrated superior performance compared to a ViT Baseline, achieving 93.4% accuracy, 92.2% F1-score, 0.95 AUC, and significant reductions in MAE and RMSE. Additionally, Grad-CAM visualizations confirmed the model’s ability to focus on diagnostically relevant regions, enhancing interpretability. These findings suggest that the integration of structured clinical information with transformer-based visual analysis can significantly improve classification robustness, particularly in underrepresented lesion types. However, the model’s current performance is evaluated only on the HAM10000 dataset, and its generalizability to other clinical or non-dermoscopic image sources remains to be validated. Future studies should therefore explore multi-institutional datasets and real-world deployment scenarios to assess robustness and scalability. The proposed framework offers a practical, interpretable solution for AI-assisted skin lesion diagnosis and demonstrates strong potential for clinical deployment.
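The mutual-attention fusion described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (the exact layer is not given in the abstract): the token counts, embedding dimension, scaled dot-product cross-attention in both directions, and mean-pool-then-concatenate readout are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(img_tokens, meta_tokens):
    """Fuse ViT patch tokens with encoded metadata tokens via
    scaled dot-product cross-attention in both directions, then
    mean-pool and concatenate into one fused feature vector.
    (Illustrative stand-in for the paper's fusion mechanism.)"""
    d = img_tokens.shape[-1]
    # image tokens query the metadata tokens
    img_attends_meta = softmax(img_tokens @ meta_tokens.T / np.sqrt(d)) @ meta_tokens
    # metadata tokens query the image patch tokens
    meta_attends_img = softmax(meta_tokens @ img_tokens.T / np.sqrt(d)) @ img_tokens
    return np.concatenate([img_attends_meta.mean(axis=0),
                           meta_attends_img.mean(axis=0)])  # shape (2*d,)

# toy inputs: 196 ViT patch embeddings and 4 metadata tokens, dim 64
rng = np.random.default_rng(0)
fused = mutual_attention(rng.normal(size=(196, 64)),
                         rng.normal(size=(4, 64)))
```

In a trained model each direction would have learned query/key/value projections; the sketch omits them to show only the bidirectional attention pattern that lets visual and tabular features condition on each other.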
Using Readability Metrics in Estimating the Readability of REpresentational State Transfer Uniform Resource Identifiers Schema
Alshraiedeh, Fuad; Katuk, Norliza; Almahasneh, Hossam
JOIN (Jurnal Online Informatika) Vol 11 No 1 (2026)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v11i1.1653

Abstract

Uniform Resource Identifiers (URIs) directly affect how well the functionality of REpresentational State Transfer (RESTful) services is understood, and thus how easily a suitable RESTful product can be discovered. RESTful Web Services (WSs)/Application Programming Interfaces (APIs) expose data and functionality through resources accessed by dedicated URIs over the HyperText Transfer Protocol (HTTP); the URI schema therefore serves as a direct description of the functions a RESTful WS/API offers. Discovering a suitable RESTful service relies heavily on how easily its URI schema can be understood, yet there is currently no established way to measure that readability. WS/API developers consequently need a means of measuring the readability of RESTful URI schemas before exposing them on the Internet, so that their usability can be estimated. To this end, this research proposes four readability metrics for the stated purpose, namely Flesch-Kincaid (F-K), Flesch Reading Ease (FRES), Simple Measure of Gobbledygook (SMOG), and the Coleman-Liau Index (CLI). The research identifies the variables required to calculate each metric and formulates their equations. Four experts in linguistics were asked to validate the proposed metrics and their identified variables. The metrics were then applied empirically to a dataset of 8 well-known RESTful WSs/APIs comprising 6,952 URI schemas. The average values for the four metrics were 7.41%, 59.63%, 6.73%, and 17.55%, respectively; note that for some metrics a low average signifies easy readability, while for others it signifies hard readability, and vice versa.
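The four metrics named above have standard textbook formulas over sentence, word, syllable, and letter counts. A minimal Python sketch follows, applying them to URI schemas under assumptions the abstract does not specify: each URI is treated as one "sentence", path segments and camelCase parts as "words", and vowel groups as syllables. The paper's own variable definitions may differ.

```python
import math
import re

VOWELS = re.compile(r"[aeiouy]+")

def count_syllables(word):
    """Naive syllable estimate: number of vowel groups, at least 1."""
    return max(1, len(VOWELS.findall(word.lower())))

def uri_words(uri):
    """Split a URI into word-like tokens: break on URI delimiters,
    then split camelCase (an assumed tokenization for illustration)."""
    words = []
    for segment in re.split(r"[/\-_.?=&:]+", uri):
        words += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", segment)
    return words

def uri_readability(uris):
    """Standard F-K, FRES, SMOG, and CLI formulas over a URI schema,
    treating each URI as one 'sentence'."""
    sentences = len(uris)
    words = [w for u in uris for w in uri_words(u)]
    n = len(words)
    syllables = sum(count_syllables(w) for w in words)
    letters = sum(len(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    wps = n / sentences        # words per sentence
    spw = syllables / n        # syllables per word
    L = letters / n * 100      # letters per 100 words
    S = sentences / n * 100    # sentences per 100 words
    return {
        "F-K":  0.39 * wps + 11.8 * spw - 15.59,
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "SMOG": 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291,
        "CLI":  0.0588 * L - 0.296 * S - 15.8,
    }

scores = uri_readability(["/api/v1/getUserById",
                          "/api/v1/orders/itemList"])
```

Note the opposite polarities mentioned in the abstract: F-K, SMOG, and CLI approximate a grade level (lower means easier), whereas FRES is a reading-ease score (higher means easier).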