International Journal of Advances in Artificial Intelligence and Machine Learning
The International Journal of Advances in Artificial Intelligence and Machine Learning (IJAAIML) is a prominent academic journal dedicated to publishing cutting-edge research and developments in the fields of Artificial Intelligence (AI) and Machine Learning (ML). It serves as an essential platform for researchers, practitioners, and professionals worldwide to share innovative ideas, technologies, and empirical studies that contribute to advancing AI and ML. The journal emphasizes both theoretical advancements and practical applications, showcasing how these technologies are shaping various industries, including healthcare, finance, education, robotics, and autonomous systems.

IJAAIML covers a wide range of topics within AI and ML, such as deep learning, neural networks, natural language processing (NLP), computer vision, robotics, data mining, reinforcement learning, and AI ethics. The journal is open to diverse types of scholarly contributions, including original research articles, review papers, case studies, technical notes, and surveys. It encourages submissions that introduce novel algorithms, methodologies, and systems, as well as those addressing challenges and proposing new approaches in AI and ML. This broad scope allows the journal to remain at the forefront of technological innovation, providing valuable insights into the latest trends and developments in the field.

The journal maintains high academic standards through a rigorous peer-review process, ensuring that each published article is of exceptional quality and originality. Submissions are evaluated by experts in relevant fields based on their significance, innovation, methodology, and clarity. This commitment to quality makes IJAAIML a trusted source of information for a diverse audience, including academic researchers, industry professionals, AI practitioners, and students who seek to stay informed about the latest advances in AI and ML.
IJAAIML is committed to global knowledge dissemination, making its publications accessible to researchers and professionals worldwide through its online platform. This approach fosters knowledge exchange and collaboration across borders, enabling the journal to reach a broad international audience. By highlighting state-of-the-art research that addresses real-world problems using AI and ML technologies, IJAAIML plays a significant role in advancing the understanding and application of these technologies. Additionally, the journal explores the ethical, societal, and economic impacts of AI and ML, promoting discussions on responsible AI practices and future directions. By contributing to these conversations, IJAAIML not only advances technological innovation but also encourages the development of AI and ML in a manner that considers broader implications for society. Overall, the International Journal of Advances in Artificial Intelligence and Machine Learning stands as a crucial resource for anyone involved in AI and ML, driving forward the frontiers of these dynamic fields through high-quality, peer-reviewed research.
Articles
35 Documents
Knowledge Distillation for Enhancing Interpretability and Efficiency in Complex Machine Learning Models
Jeong, Jaesik;
Ling Chan, Kit;
Sanmugam, Mageswaran
International Journal of Advances in Artificial Intelligence and Machine Learning Vol. 3 No. 1 (2026): International Journal of Advances in Artificial Intelligence and Machine Learning
Publisher : CV Media Inti Teknologi
DOI: 10.58723/ijaaiml.v3i1.649
Background: Complex machine learning (ML) systems often require substantial computational resources, making them difficult to deploy in real-world environments constrained by hardware limitations, interpretability requirements, and regulatory standards. While knowledge distillation (KD) has traditionally been viewed as a model compression technique, its broader implications for efficiency, interpretability, and regulatory compliance remain underexplored.
Aims: This study reconceptualizes knowledge distillation beyond model compression by framing it as a dual strategy for enhancing both efficiency and interpretability. The paper proposes a structured distillation protocol that integrates predictive performance assessment, computational profiling, and feature attribution alignment within a unified experimental design.
Methods: The proposed distillation protocol employs a temperature-scaled objective function combining supervised cross-entropy loss and Kullback-Leibler divergence to facilitate relational knowledge transfer from teacher to student models. Experiments were conducted across multiple benchmark datasets. Evaluation consisted of three components: (1) predictive performance measurement, (2) computational efficiency profiling, including parameter counts and inference latency, and (3) interpretability analysis using feature attribution similarity and perturbation stability metrics. Statistical analyses were performed to assess performance differences.
Results: Across benchmark datasets, distilled student models achieved between 95% and 98% of teacher-level accuracy. Parameter counts and inference latency were reduced by more than 60%. Interpretability analyses showed improved explanation consistency, smoother decision structures, and higher feature attribution alignment. Statistical testing confirmed that the efficiency and interpretability gains were obtained without significant performance degradation.
Conclusion: The findings support the reconceptualization of knowledge distillation as a dual optimization strategy that enhances both operational efficiency and interpretability while preserving predictive strength. Rather than serving solely as a compression mechanism, KD functions as a scalable and adaptive framework for deployment-ready AI systems that balance performance, computational constraints, and explanation stability.
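As a rough illustration of the temperature-scaled objective described in the Methods section above, the following NumPy sketch combines a supervised cross-entropy term with a Kullback-Leibler term over softened teacher and student distributions. The temperature `T`, the weight `alpha`, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = (logits - logits.max(axis=-1, keepdims=True)) / T
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and temperature-scaled
    KL divergence between teacher and student output distributions.
    T, alpha, and the weighting scheme are illustrative assumptions."""
    # Hard-label branch (evaluated at T = 1).
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))

    # Soft-label branch: KL(teacher || student) at temperature T.
    pt = softmax(teacher_logits, T)
    ps = softmax(student_logits, T)
    kl = np.mean(np.sum(pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12)), axis=-1))

    # T**2 rescales the soft-label gradients to match the hard-label branch.
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

With identical teacher and student logits the KL term vanishes, so the loss reduces to the cross-entropy branch alone.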
Constraint-Aware Machine Learning for Ensuring Feasible Predictions in Operational Data Science
Shukun, Wu;
Kurniawan, Tri Basuki;
Esad Kuloğlu, Muhammet
International Journal of Advances in Artificial Intelligence and Machine Learning Vol. 3 No. 1 (2026): International Journal of Advances in Artificial Intelligence and Machine Learning
Publisher : CV Media Inti Teknologi
DOI: 10.58723/ijaaiml.v3i1.652
Background: Machine learning models deployed in operational environments often demonstrate high predictive accuracy during benchmark evaluation. However, their practical reliability is frequently compromised when predictions violate domain-specific operational constraints.
Aims: This study addresses the problem of infeasible predictions by proposing CALF, a unified framework that integrates operational constraints directly into the learning and inference processes.
Methods: The CALF framework incorporates operational constraints through a dual mechanism consisting of correction-based learning and regularization-based penalty functions. These mechanisms are embedded directly within the training and inference objectives, allowing the model to learn constraint-compliant predictions during optimization. The framework was evaluated by comparing predictive error and operational feasibility against an unconstrained baseline model. A sensitivity analysis was also conducted to examine the stability and flexibility of the constraint penalties under varying operational thresholds.
Results: Experimental results demonstrate that CALF achieved predictive error levels comparable to the unconstrained baseline while maintaining full operational feasibility. The framework reached 100% operational compliance, indicating that all generated predictions satisfied the defined constraints. Sensitivity analysis further showed that the regularization penalties operated within acceptable thresholds, allowing the model to maintain predictive flexibility while enforcing constraint adherence.
Conclusion: The findings highlight the importance of integrating operational constraints directly into machine learning model design. By embedding feasibility constraints within the optimization process, the CALF framework ensures that predictive outputs remain both accurate and operationally compliant. This approach repositions operational constraints as intrinsic components of predictive modeling and contributes to the development of reliable, deployable AI systems in real-world environments.
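The dual mechanism the abstract describes, a training-time penalty plus an inference-time correction, might be sketched as follows for simple interval constraints. The quadratic penalty, the interval form of the constraints, and all names here are assumptions for illustration; the paper's actual formulation may differ.

```python
import numpy as np

def constraint_penalty(preds, lower, upper, weight=10.0):
    # Regularization branch: quadratic penalty on any prediction that
    # falls outside the feasible interval [lower, upper].
    below = np.clip(lower - preds, 0.0, None)
    above = np.clip(preds - upper, 0.0, None)
    return weight * np.mean(below ** 2 + above ** 2)

def correct(preds, lower, upper):
    # Correction branch: project predictions onto the feasible interval,
    # guaranteeing 100% constraint compliance by construction.
    return np.clip(preds, lower, upper)

def constrained_loss(preds, targets, lower, upper, weight=10.0):
    # Training objective: predictive error plus the feasibility penalty.
    mse = np.mean((preds - targets) ** 2)
    return mse + constraint_penalty(preds, lower, upper, weight)
```

The penalty steers the model toward feasible outputs during training, while the projection step enforces feasibility for anything the model still gets wrong at inference time.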
Bias Detection and Mitigation Techniques in Data Science Pipelines: An Empirical Evaluation
Dewi, Deshinta Arrova;
Okengwu, Ugochi;
Rizqi, Zakka Ugih
International Journal of Advances in Artificial Intelligence and Machine Learning Vol. 3 No. 1 (2026): International Journal of Advances in Artificial Intelligence and Machine Learning
Publisher : CV Media Inti Teknologi
DOI: 10.58723/ijaaiml.v3i1.655
Background: Failure to account for algorithmic bias can result in discriminatory outcomes in machine learning systems, particularly when these models operate in high-stakes decision-making environments. Although numerous bias mitigation techniques have been proposed, most studies treat fairness assessment as a post hoc evaluation. This gap highlights the need for a lifecycle-oriented framework that examines interconnected bias and fairness mechanisms.
Aims: This study conducts an empirical investigation of bias propagation across the data science continuum within a structured bias-processing framework.
Methods: The proposed framework was tested on benchmark datasets containing sensitive attributes. Three predictive models were implemented: Logistic Regression, Random Forest, and Gradient Boosting. Fairness was evaluated using Demographic Parity, Equal Opportunity, and Average Odds metrics. Predictive modeling techniques were further employed to interpret fairness outcomes. Bias mitigation strategies were applied at both the data and model levels, including fairness-regularized optimization and hybrid approaches. Sensitivity analysis was conducted to examine the trade-off between fairness constraints and model loss.
Results: The empirical findings indicate that most disparities originate from bias embedded in the data rather than from model architecture. Data-level bias mitigation reduced disparity by 28%, and fairness-regularized optimization reduced it by 35%. The hybrid mitigation strategy achieved a demographic disparity reduction of 40–45%, with an accuracy decrease of no more than 2%. Sensitivity analysis revealed non-linear tensions between fairness constraints and optimization loss, demonstrating that early-stage bias mitigation stabilizes fairness without significantly increasing performance trade-offs.
Conclusion: This study extends both theoretical and practical understanding of lifecycle bias propagation in machine learning systems. The findings emphasize the importance of addressing bias at early stages of the data science pipeline to achieve stable and sustainable fairness outcomes. By integrating fairness engineering throughout the lifecycle, the proposed framework contributes to more robust and ethically aligned AI systems.
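Two of the fairness metrics named in the Methods section have compact standard definitions. The sketch below shows them for a binary sensitive attribute and binary predictions; the group coding (0/1) and helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    # |P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)|:
    # the gap in positive-prediction rates between the two groups.
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_diff(y_pred, y_true, group):
    # True-positive-rate gap between the two groups, computed only
    # over instances whose true label is positive.
    tprs = []
    for g in (0, 1):
        pos = (group == g) & (y_true == 1)
        tprs.append(y_pred[pos].mean())
    return abs(tprs[0] - tprs[1])
```

A value of 0 indicates parity on that metric; mitigation strategies like those in the study aim to drive these gaps down without sacrificing much accuracy.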
Transfer Learning Effectiveness Across Domain Similarity Levels in Data Science Applications
Risdianto, Eko;
Trung Pham, Thai Ky;
Yeoh, William;
Alshammari, Sultan Hammad
International Journal of Advances in Artificial Intelligence and Machine Learning Vol. 3 No. 1 (2026): International Journal of Advances in Artificial Intelligence and Machine Learning
Publisher : CV Media Inti Teknologi
DOI: 10.58723/ijaaiml.v3i1.656
Background: Transfer learning has become increasingly prominent in data science due to the challenges posed by limited labeled data and distribution shifts between training and deployment environments. However, the success of transfer learning depends significantly on the structural compatibility between source and target domains.
Aims: This study investigates the relationship between domain similarity and transfer learning performance using an experimental framework termed Similarity-Aware Transfer Evaluation (SATE).
Methods: Twelve pairs of benchmark datasets were selected to simulate varying levels of domain similarity and were made publicly available. Domain similarity was computed using Maximum Mean Discrepancy (MMD) in the learned representation space. Transfer performance was measured using a predefined Transfer Gain metric under bounded fine-tuning strategies. Correlation analysis and statistical testing were conducted to examine the relationship between similarity scores and transfer effectiveness, while fine-tuning depth was analyzed in relation to similarity magnitude.
Results: The results demonstrate a strong positive correlation between domain similarity and transfer gain (r = 0.83, p < 0.01), indicating that approximately 69% of performance variability can be explained by similarity-based transfer effects. Negative transfer was observed when similarity scores satisfied S ≤ 0.41. Furthermore, higher similarity levels were associated with deeper and more stable fine-tuning, whereas lower similarity resulted in increased instability during adaptation. These findings establish similarity as a structural compatibility constraint in transfer learning.
Conclusion: The study confirms that domain similarity plays a fundamental role in determining transfer learning success. By operationalizing similarity measurement and linking it to performance thresholds, the proposed SATE framework provides a structured method for evaluating transfer feasibility in real-world data science applications.
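MMD, the similarity measure the SATE framework relies on, can be sketched with an RBF kernel as follows. The kernel choice, the bandwidth `gamma`, and the use of the biased estimator are common defaults assumed here for illustration, not necessarily the paper's exact setup.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy (biased estimator) between two
    samples X and Y, e.g. source and target features in a learned
    representation space. Smaller values mean more similar domains."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```

Identical samples yield an MMD of zero, while a large distributional shift between source and target drives the value up, which is the quantity the study correlates with transfer gain.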
Failure Mode Analysis of Machine Learning Models in Realistic Data Deployment Scenarios
Meng Cheng, Lau;
Hassan Adlan, Amel Zulfukar
International Journal of Advances in Artificial Intelligence and Machine Learning Vol. 3 No. 1 (2026): International Journal of Advances in Artificial Intelligence and Machine Learning
Publisher : CV Media Inti Teknologi
DOI: 10.58723/ijaaiml.v3i1.651
Background: Machine learning models frequently demonstrate strong performance under controlled benchmark evaluations. However, such evaluations often fail to capture hidden vulnerabilities that emerge under realistic deployment conditions. In real-world environments, models are exposed to stressors such as label corruption, feature noise, distributional shifts, and operational constraints, including reduced computational precision and increased latency. These conditions can induce performance degradation and structural instability, highlighting the need for a systematic robustness evaluation framework that goes beyond conventional accuracy metrics.
Aims: This paper introduces a formalized Failure Mode Analysis Protocol (FMAP) for evaluating machine learning model robustness under realistic operational stressors. The study reconceptualizes robustness evaluation as a distribution-based process, where model deployment itself generates a new distribution over time.
Methods: The proposed FMAP framework evaluates model behavior under progressively adverse conditions, including symmetric label corruption, additive feature noise, distributional shifts, and operational constraints such as reduced numerical precision and increased inference latency. Experiments were conducted across diverse tabular and image benchmark datasets using representative model architectures, including linear models, ensemble methods, margin-based models, and deep neural networks.
Results: The experiments reveal distinct robustness profiles across model architectures when exposed to escalating stress conditions. Operational constraints and compositional limitations were shown to induce measurable degradation patterns, including instability and output collapse under extreme stress. The findings demonstrate that model failure is not solely a function of predictive accuracy loss but is closely linked to operational constraints and evolving distributional conditions. The distribution-based evaluation framework effectively captures both early-stage degradation and full failure transitions.
Conclusion: This study establishes a structured protocol for analyzing machine learning failure modes under realistic deployment scenarios. By framing robustness evaluation as a distribution-based process, the FMAP approach provides a systematic method for identifying operational risks and structural vulnerabilities.
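Two of the stressors listed in the Methods section, symmetric label corruption and additive feature noise, admit compact reference implementations. The sketch below is a generic illustration of sweeping a model over escalating stress levels; the function names, sweep structure, and parameters are assumptions, not the FMAP specification.

```python
import numpy as np

def corrupt_labels(y, rate, n_classes, rng):
    # Symmetric label corruption: flip a fraction `rate` of labels
    # to a uniformly chosen *different* class.
    y = y.copy()
    flip = rng.random(len(y)) < rate
    y[flip] = (y[flip] + rng.integers(1, n_classes, flip.sum())) % n_classes
    return y

def add_feature_noise(X, sigma, rng):
    # Additive Gaussian feature noise with standard deviation sigma.
    return X + rng.normal(scale=sigma, size=X.shape)

def stress_sweep(evaluate, X, y, noise_levels, rng):
    # Evaluate a model callback under progressively adverse noise levels,
    # producing a degradation profile rather than a single accuracy number.
    return [evaluate(add_feature_noise(X, s, rng), y) for s in noise_levels]
```

Plotting the returned profile against the stress level exposes where a model begins to degrade and where it collapses, which is the kind of failure-transition evidence the protocol is designed to surface.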