Noramli, Nur Athirah Syafiqah
Unknown Affiliation

Published: 1 document
Articles


Optimizing nonlinear autoregressive with exogenous inputs network architecture for agarwood oil quality assessment
Roslan, Muhammad Ikhsan; Ahmad Sabri, Noor Aida Syakira; Noramli, Nur Athirah Syafiqah; Ismail, Nurlaila; Mohd Yusoff, Zakiah; Almisreb, Ali Abd; Tajuddin, Saiful Nizam; Taib, Mohd Nasir
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i5.pp3493-3502

Abstract

Agarwood oil is highly valued in perfumes, incense, and traditional medicine. However, the lack of standardized grading methods poses challenges for consistent quality assessment. This study proposes a data-driven classification approach using the nonlinear autoregressive with exogenous inputs (NARX) model, implemented in MATLAB R2020a with the Levenberg-Marquardt (LM) algorithm. The dataset, sourced from Universiti Malaysia Pahang Al-Sultan Abdullah under the Bio Aromatic Research Centre of Excellence (BARCE) and the Forest Research Institute Malaysia (FRIM), comprises chemical compound data used for model training and validation. To optimize model performance, the number of hidden neurons is systematically adjusted. Model evaluation uses performance metrics such as mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), the coefficient of determination (R²), epochs, accuracy, and model validation. Results show that the NARX model effectively classifies agarwood oil into four quality grades: high, medium-high, medium-low, and low. The best performance is achieved with three hidden neurons, offering a balance between accuracy and computational efficiency. This work demonstrates the potential of automated, standardized agarwood oil quality grading. Future research should explore alternative training algorithms and larger datasets to further enhance model robustness and generalizability.
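The regression metrics named in the abstract (MSE, RMSE, MAE, R²) can be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB implementation; the function name and the sample target/prediction values are invented for demonstration, with the four quality grades assumed to be encoded as integers 1–4.

```python
import math

def evaluation_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, and R^2 for a list of targets and predictions."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(r * r for r in residuals) / n          # mean squared error
    rmse = math.sqrt(mse)                            # root mean squared error
    mae = sum(abs(r) for r in residuals) / n         # mean absolute error
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    ss_res = sum(r * r for r in residuals)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Invented example: grades encoded 1 (low) .. 4 (high)
y_true = [1, 2, 3, 4, 2, 3]
y_pred = [1.1, 1.9, 3.2, 3.8, 2.1, 2.9]
print(evaluation_metrics(y_true, y_pred))
```

In practice, a grade prediction would be taken by rounding the network output to the nearest integer grade, and classification accuracy would be computed alongside these regression metrics.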