Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN: 2442-6571     E-ISSN: 2548-3161     DOI prefix: 10.26555
Core Subject: Science
The International Journal of Advances in Intelligent Informatics (IJAIN), p-ISSN 2442-6571 and e-ISSN 2548-3161, is a peer-reviewed open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers, accepted papers are made available online with free access, and there is no publication fee for authors.
Articles: 330 documents
Analyzing computer vision models for detecting customers: a practical experience in a Mexican retail Fernández Del Carpio, Alvaro
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1112

Abstract

Computer vision has become an important technology for obtaining meaningful data from visual content, providing valuable information for enhancing security controls, marketing, and logistics strategies in diverse industrial and business sectors. The retail sector constitutes an important part of the worldwide economy, and analyzing customer data and shopping behaviors has become essential to deliver the right products to customers, maximize profits, and increase competitiveness. In-person shopping remains a predominant form of retail despite the rise of online outlets; as such, physical retailers are adopting computer vision models to monitor store products and customers. This paper presents the development of a computer vision solution by Lytica Company to detect customers in Steren's physical retail stores in Mexico. Current computer vision models such as SSD MobileNetV2, YOLO-FastestV2, YOLOv5, and YOLOXn were analyzed to find the most accurate system given the conditions and characteristics of the available devices. Challenges addressed during video analysis included obstruction and proximity of customers, lighting conditions, the position and distance of the camera relative to a customer entering the store, image quality, and scalability of the process. Models were evaluated with the F1-score metric: 0.64 for YOLO-FastestV2, 0.74 for SSD MobileNetV2, 0.86 for YOLOv5n, 0.86 for YOLOv5xs, and 0.74 for YOLOXn. Although YOLOv5 achieved the best performance, YOLOXn presented the best balance between performance and FPS (frames per second) rate, considering the limited hardware and computing power available.
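The abstract above ranks detectors by F1-score, the harmonic mean of precision and recall. A minimal sketch of that ranking step, using hypothetical per-model precision/recall values (the abstract reports only the resulting F1 scores):

```python
# Illustrative sketch: ranking detection models by F1-score.
# The precision/recall pairs below are hypothetical, chosen only to
# reproduce F1 values of the same magnitude as those in the abstract.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-model precision/recall on a validation set.
models = {
    "YOLO-FastestV2": (0.62, 0.66),
    "SSD MobileNetV2": (0.76, 0.72),
    "YOLOv5n": (0.88, 0.84),
}

ranking = sorted(models, key=lambda m: f1_score(*models[m]), reverse=True)
for name in ranking:
    p, r = models[name]
    print(f"{name}: F1 = {f1_score(p, r):.2f}")
```

F1 rewards models that balance the two quantities, which is why it is preferred over raw accuracy for detection tasks with imbalanced positives.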
Covid-19 detection using modified xception transfer learning approach from computed tomography images Morani, Kenan; Ayana, Esra Kaya; Unay, Devrim
International Journal of Advances in Intelligent Informatics Vol 9, No 3 (2023): November 2023
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v9i3.1432

Abstract

The significance of efficient and accurate diagnosis amidst the unique challenges posed by the COVID-19 pandemic underscores the urgency for innovative approaches. In response, we propose a transfer learning-based approach using a recently annotated Computed Tomography (CT) image database. While many approaches rely on intensive data preprocessing and/or complex model architectures, our method focuses on offering an efficient solution with minimal manual engineering. Specifically, we investigate the suitability of a modified Xception model for COVID-19 detection. The method adapts a pre-trained Xception model, incorporating both the architecture and the pre-trained ImageNet weights, with the model's output layer designed to make the final diagnosis decisions. Training used a batch size of 128 and 224x224 input images, downsized from the standard 512x512. No further data processing was performed on the input data. Evaluation is conducted on the 'COV19-CT-DB' CT image dataset, containing labeled COVID-19 and non-COVID-19 cases. Results reveal the method's superiority in accuracy, precision, recall, and macro F1 score on the validation subset, outperforming the VGG-16 transfer model and thus offering enhanced precision with fewer parameters. Furthermore, our approach exceeds the baseline and other alternative methods reported for the COV19-CT-DB dataset. Finally, the adaptability of the modified Xception transfer learning-based model to the unique features of the COV19-CT-DB dataset showcases its potential as a robust tool for enhanced COVID-19 diagnosis from CT images.
CMT-CNN: colposcopic multimodal temporal hybrid deep learning model to detect cervical intraepithelial neoplasia Mukku, Lalasa; Thomas, Jyothi
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1527

Abstract

Cervical cancer poses a significant threat to women's health in developing countries, necessitating effective early detection methods. In this study, we introduce the Colposcopic Multimodal Temporal Convolution Neural Network (CMT-CNN), a novel model designed for classifying cervical intraepithelial neoplasia by leveraging sequential colposcope images and integrating the extracted features with clinical data. Our approach incorporates Mask R-CNN for precise cervix region segmentation and deploys the EfficientNet B7 architecture to extract features from saline, iodine, and acetic acid images. The fusion of clinical data at the decision level, coupled with Atrous Spatial Pyramid Pooling-based classification, yields remarkable results: an accuracy of 92.31%, precision of 90.19%, recall of 89.63%, and an F1-score of 90.72%. This achievement not only establishes the superiority of the CMT-CNN model over baselines but also paves the way for future research endeavours aiming to harness heterogeneous data types in the development of deep learning models for cervical cancer screening. The implications of this work are profound, offering a potent tool for early cervical cancer detection that combines multimodal data and clinical insights, potentially saving countless lives.
Job scheduling reservations on cloud resources Pujiyanta, Ardi; Noviyanto, Fiftin
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.1421

Abstract

Research on cloud computing continues to focus on several open problems, one of the main ones being job allocation: jobs are dynamically allocated to server processors, and all cloud virtualized hardware is available to users on demand and can be dynamically upgraded. Resource scheduling is critical in cloud research due to its large execution time and resource costs, and the different scheduling criteria and parameters used give rise to various categories of resource scheduling algorithms. The goal of resource scheduling is to identify the right resources to schedule workloads in a timely manner and to improve the effectiveness of resource utilization; in other words, to minimize workload completion time. Mapping the right workloads to resources results in good scheduling. Another goal of resource scheduling is to identify adequate and appropriate workloads, so that scheduling of multiple workloads can meet various QoS needs in cloud computing. The aim of this research is to determine the waiting time, idle time, and makespan on cloud resources. The proposed method sorts jobs by arrival time and least workload and places them on a virtual view before scheduling them on cloud resources. Experimental results show that the proposed method has an idle time of 25.3%, versus 43.1% for FCFS and 31.5% for backfilling. The average makespan reduction is 16.73% relative to FCFS and 12.87% relative to backfilling, and the average decrease in AWT (average waiting time) is 13.3% relative to FCFS and 12.03% relative to backfilling. The results of this research can be applied to cloud rentals with flexible times.
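The metrics compared above (waiting time, idle time, makespan) can be illustrated with a simplified single-resource scheduler. This is a sketch under assumed job data, not the paper's implementation; it contrasts plain FCFS with ordering jobs by arrival time and then by smallest workload, as the proposed method does:

```python
# Simplified sketch (hypothetical jobs, single resource): computing
# average waiting time, idle time, and makespan for a job ordering.
# Each job is a (arrival_time, run_time) pair.

def schedule(jobs):
    """Run jobs in the given order; return (avg_wait, idle, makespan)."""
    clock, idle, waits = 0, 0, []
    for arrival, run in jobs:
        if clock < arrival:            # resource sits idle until the job arrives
            idle += arrival - clock
            clock = arrival
        waits.append(clock - arrival)  # time the job spent queued
        clock += run                   # job executes to completion
    return sum(waits) / len(jobs), idle, clock

jobs = [(0, 5), (0, 2), (2, 8), (2, 1)]

fcfs = schedule(sorted(jobs, key=lambda j: j[0]))              # arrival order only
proposed = schedule(sorted(jobs, key=lambda j: (j[0], j[1])))  # arrival, then least work
print("FCFS:", fcfs, "proposed:", proposed)
```

On this toy instance the least-workload-first ordering cuts the average waiting time (3.25 vs 5.75) while leaving the makespan unchanged, which mirrors the direction of the AWT improvements reported above.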
Granularity-aware legal question answering: a case study of Indonesian government regulations Faisal, Douglas Raevan; Darari, Fariz; Ryanda, Reynard Adha
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.1105

Abstract

Question answering (QA) technologies are crucial for building conversational AI. Current research on QA for the legal domain lacks focus on the organized structure of laws, which are hierarchically segmented into components at varying levels of detail. To address this gap, we propose a new task of granularity-aware legal QA, which accounts for the underlying granularity levels of law components. Our approach encompasses task formulation, dataset creation, and model development. Under the Indonesian jurisdiction, we consider four law component granularity levels: chapters (bab), articles (pasal), sections (ayat), and letters (huruf). We include 15 government regulations (Peraturan Pemerintah) of Indonesia related to labor affairs and build a legal QA dataset with granularity information. We then design a solution for this task: the first IR system to account for legal component granularity. We implement a customized retriever-reranker pipeline in which the retriever accepts law components of multiple granularities and the reranker is trained for granularity-aware ranking. We leverage BM25 and BERT models as retriever and reranker, respectively, yielding an end-to-end exact match accuracy of 35.68%, a significant improvement (20%) over a strong baseline. The reranker also improves the granularity accuracy from 44.86% to 63.24%. In a practical context, such a solution can help provide more precise answers, not only in legal chatbots, but also in other conversational AI systems that deal with hierarchically structured documents.
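The retrieve-then-rerank control flow described above can be sketched in a few lines. This toy version replaces BM25 and the fine-tuned BERT reranker with simple term-overlap scorers, and the corpus snippets and granularity tie-break heuristic are invented for illustration only:

```python
# Toy sketch of a retriever-reranker pipeline over law components
# tagged with granularity (bab/pasal/ayat/huruf). The paper's BM25
# retriever and BERT reranker are stand-ins here: both stages use
# plain term overlap so the pipeline runs without ML dependencies.

corpus = [  # (granularity, component text) - hypothetical examples
    ("pasal", "minimum wage is set by the governor"),
    ("ayat", "overtime pay is one and a half times the hourly wage"),
    ("huruf", "severance pay for dismissal due to efficiency"),
]

def retrieve(query, k=2):
    """Stage 1: cheap lexical scoring over components of all granularities."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.split())), gran, text)
              for gran, text in corpus]
    return sorted(scored, reverse=True)[:k]

def rerank(query, candidates):
    """Stage 2: stand-in for a granularity-aware neural reranker."""
    q = set(query.lower().split())
    # Toy heuristic: prefer finer-grained components on score ties.
    fineness = {"bab": 0, "pasal": 1, "ayat": 2, "huruf": 3}
    return max(candidates,
               key=lambda c: (len(q & set(c[2].split())), fineness[c[1]]))

best = rerank("how much is overtime pay", retrieve("how much is overtime pay"))
print(best[1], "->", best[2])
```

The key design point mirrored here is that stage 1 is cheap and high-recall over all granularities, while stage 2 spends more effort (in the paper, a trained model) on only the top-k candidates.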
Imputation of missing microclimate data of coffee-pine agroforestry with machine learning Nurwarsito, Heru; Suprayogo, Didik; Sakti, Setyawan Purnomo; Prayogo, Cahyo; Yudistira, Novanto; Fauzi, Muhammad Rifqi; Oakley, Simon; Mahmudy, Wayan Firdaus
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1439

Abstract

This research presents a comprehensive analysis of various imputation methods for addressing missing microclimate data in the context of coffee-pine agroforestry land in UB Forest. Utilizing big data and machine learning methods, the research evaluates the effectiveness of imputing missing microclimate data with Interpolation, Shifted Interpolation, K-Nearest Neighbors (KNN), and Linear Regression methods across multiple time frames: 6-hour, daily, weekly, and monthly. The performance of these methods is assessed using four key evaluation metrics: Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). The results indicate that Linear Regression consistently outperforms the other methods across all time frames, demonstrating the lowest error rates in terms of MAE, MSE, RMSE, and MAPE. This finding underscores the robustness and precision of Linear Regression in handling the variability inherent in microclimate data within agroforestry systems. The research highlights the critical role of accurate data imputation in agroforestry research and points towards the potential of machine learning techniques in advancing environmental data analysis. The insights gained contribute significantly to the field of environmental science, offering a reliable methodological approach for enhancing the accuracy of microclimate models in agroforestry and thereby facilitating informed decision-making for sustainable ecosystem management.
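The evaluation protocol above (hide some values, impute them, score against the held-out truth) can be sketched with two simple baselines from the families compared: linear interpolation and naive mean imputation. The temperature series and gap positions below are assumed for illustration:

```python
# Minimal sketch (hypothetical data, stdlib only): hide two values of
# a microclimate series, impute them two ways, and score with MAE
# against the held-out truth - the same protocol used to compare the
# paper's Interpolation, KNN, and Linear Regression imputers.

true = [21.0, 21.4, 21.9, 22.5, 23.0, 23.2]  # hypothetical temperatures
obs = true[:]
for i in (2, 4):          # indices hidden from the imputers
    obs[i] = None

def interpolate(series):
    """Fill each gap linearly between the nearest observed neighbours."""
    out = series[:]
    for i, v in enumerate(out):
        if v is None:
            lo, hi = i - 1, i + 1
            while out[hi] is None:
                hi += 1
            out[i] = out[lo] + (out[hi] - out[lo]) * (i - lo) / (hi - lo)
    return out

def mean_impute(series):
    """Fill every gap with the mean of all observed values."""
    seen = [v for v in series if v is not None]
    m = sum(seen) / len(seen)
    return [m if v is None else v for v in series]

def mae(pred, idx):
    """Mean absolute error on the held-out indices only."""
    return sum(abs(pred[i] - true[i]) for i in idx) / len(idx)

mae_interp = mae(interpolate(obs), (2, 4))
mae_mean = mae(mean_impute(obs), (2, 4))
print(f"interpolation MAE={mae_interp:.3f}, mean MAE={mae_mean:.3f}")
```

On this smooth toy series interpolation wins easily (MAE 0.1 vs 0.55), which is the kind of gap-by-gap comparison the MAE/MSE/RMSE/MAPE tables in the paper formalize.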
Computation of spatial error model with matrix exponential spatial specification approach Marsono, Marsono; Setiawan, Setiawan; Kuswanto, Heri
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.1506

Abstract

In spatial regression analysis, we consider not only the internal factors of a location but also the spatial factors that may affect the relationship. The model of spatial dependence between regions caused by unknown factors or errors is known as the Spatial Error Model (SEM). When applied to large datasets, SEM suffers from several problems in parameter estimation and computational time. One method to address this is the Matrix Exponential Spatial Specification (MESS). The purpose of this research is to evaluate MESS(0,1) as an alternative to SEM for modeling data containing spatially autocorrelated errors; thanks to the MESS formulation, the MESS(0,1) model is expected to be faster in analytics and computation than SEM when using Maximum Likelihood Estimation (MLE). The evaluation is based on simulation studies and real data analysis. Simulation studies were conducted by generating data from small to large samples and estimating parameters with both the MESS(0,1) and SEM models, comparing their performance in terms of estimation time and root mean square error (RMSE). The models were also applied to real data, namely Gross Regional Domestic Product (GRDP) data for the construction category on Java Island in 2021, in line with the massive infrastructure development under way as a government program. The independent variables considered influential on construction-sector GRDP are domestic investment, foreign investment, labor, and wages. Based on the simulation study results, parameter estimation for MESS(0,1) is faster than for the SEM model, and in terms of accuracy the RMSE indicator shows MESS(0,1) to be more accurate than SEM. In the real-data modeling, all variables have a significant positive effect on GRDP in the construction category.
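The computational advantage claimed above comes from the core MESS idea: replacing the SEM-style matrix inverse with a matrix exponential transform of the data, which can be evaluated as a truncated power series with no inversion. A stdlib-only sketch on a hypothetical two-region weight matrix:

```python
# Sketch of the MESS building block (hypothetical numbers): the
# transform e^{alpha W} is computed as the truncated series
# sum_k (alpha W)^k / k!, avoiding the (I - rho W)^{-1} inverse
# that SEM-style estimation requires.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(W, alpha, terms=20):
    """e^{alpha W} via a truncated Taylor series."""
    n = len(W)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    power = [row[:] for row in result]                              # W^0
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, W)   # W^k
        fact *= k                   # k!
        for i in range(n):
            for j in range(n):
                result[i][j] += (alpha ** k) * power[i][j] / fact
    return result

# Row-standardised spatial weight matrix for two neighbouring regions.
W = [[0.0, 1.0], [1.0, 0.0]]
S = mat_exp(W, alpha=-0.5)
print(S)
```

For this W the exact answer is cosh(0.5) on the diagonal and -sinh(0.5) off it, so the 20-term series is already accurate to machine precision; in practice libraries use scaling-and-squaring rather than a raw Taylor series, but the no-inversion property is the same.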
Big data analytics for relative humidity time series forecasting based on the LSTM network and ELM Kurnianingsih, Kurnianingsih; Wirasatriya, Anindya; Lazuardi, Lutfan; Wibowo, Adi; Enriko, I Ketut Agung; Chin, Wei Hong; Kubota, Naoyuki
International Journal of Advances in Intelligent Informatics Vol 9, No 3 (2023): November 2023
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v9i3.905

Abstract

Accurate and reliable relative humidity forecasting is important when evaluating the impacts of climate change on humans and ecosystems. However, the complex interactions among geophysical parameters are challenging to model and may result in inaccurate forecasts. This study combines long short-term memory (LSTM) networks and extreme learning machines (ELM) into hybrid forecasting models to improve the accuracy of relative humidity prediction. Detailed experiments with univariate and multivariate problems were conducted, and the results show that LSTM-ELM and ELM-LSTM achieve the lowest MAE and RMSE compared to stand-alone LSTM and ELM for the univariate problem. In addition, LSTM-ELM and ELM-LSTM require less computation time than stand-alone LSTM. The experimental results demonstrate that the proposed hybrid models outperform the comparative methods in relative humidity forecasting. We employed the recursive feature elimination (RFE) method and showed that dewpoint temperature, temperature, and wind speed are the factors that most affect relative humidity. A higher dewpoint temperature indicates more air moisture, equating to higher relative humidity, and humidity levels also rise as the temperature rises.
Comparative study of predictive models for hoax and disinformation detection in indonesian news Adiati, Nadia Paramita Retno; Priambodo, Dimas Febriyan; Girinoto, Girinoto; Indarjani, Santi; Rizal, Akhmad; Prayoga, Arga; Beatrix, Yehezikha
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.878

Abstract

As times change, false information spreads easily, including in Indonesia. In Press Release No. 485/HM/KOMINFO/12/2021, the Ministry of Communication and Information reported cutting off access to 565,449 items of negative content and publishing 1,773 clarifications of hoax and disinformation content. Research has been carried out on this matter, but fake news still needs to be classified into disinformation and hoaxes. This study compares our proposed model, an ensemble of shallow learning predictive models (Random Forest, Passive Aggressive Classifier, and Cosine Similarity), with a deep learning model that uses BERT-Indo for classification. Both models are trained on equivalent datasets containing 8,757 news articles: 3,000 valid, 3,000 hoax, and 2,757 disinformation. The articles were obtained from websites such as CNN, Kompas, Detik, Kominfo, Temanggung Mediacenter, Hoaxdb Aceh, Turnback Hoax, and Antara, and were then cleaned of unnecessary elements such as punctuation marks, numbers, Unicode, stopwords, and suffixes using the Sastrawi library. At the benchmarking stage, the shallow learning models are combined through ensemble learning with hard voting to increase accuracy. This yields higher values, with an accuracy of 98.125%, precision of 98.2%, F1-score of 98.1%, and recall of 98.1%, compared to the BERT-Indo model, which achieved 96.918% accuracy, 96.069% precision, 96.937% F1-score, and 96.882% recall. Based on accuracy, the shallow learning ensemble is superior to the deep learning model. This machine learning model is expected to be used to combat the spread of hoaxes and disinformation in Indonesian news. Additionally, with this research, false news can be classified in more detail, as either hoaxes or disinformation.
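Hard voting, the ensemble rule used above, simply takes the majority label across the base classifiers for each sample. A minimal sketch with the three classifiers replaced by stub predictions (the labels below are invented; in the paper they come from the trained Random Forest, Passive Aggressive, and Cosine Similarity models):

```python
# Minimal sketch of a hard-voting ensemble over three-class news
# labels (valid / hoax / disinformation). The per-classifier
# predictions are hypothetical stand-ins for trained models.
from collections import Counter

def hard_vote(predictions):
    """Majority label across classifiers for each sample."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical per-classifier labels for four news items.
rf  = ["valid", "hoax",  "disinformation", "hoax"]
pac = ["valid", "hoax",  "hoax",           "valid"]
cos = ["valid", "valid", "disinformation", "hoax"]

print(hard_vote([rf, pac, cos]))
# -> ['valid', 'hoax', 'disinformation', 'hoax']
```

With an odd number of voters and three classes, ties are still possible (three different labels for one sample); `Counter.most_common` then falls back to first-seen order, so a production ensemble would want an explicit tie-break rule.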
An automated learning method of semantic segmentation for train autonomous driving environment understanding Wang, Yang; Chen, Yihao; Yuan, Hao; Wu, Cheng
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1521

Abstract

One of the major reasons for the explosion of autonomous driving in recent years is the great development of computer vision. As one of the most fundamental and challenging problems in autonomous driving, environment understanding has been widely studied; it directly determines whether the in-vehicle system can effectively identify objects surrounding the vehicle and plan correct paths. Semantic segmentation is the most important means of environment understanding among the many image recognition algorithms used in autonomous driving. However, the success of semantic segmentation models is highly dependent on human expertise in data preparation and hyperparameter optimization, and the tedious training process must be repeated for each new scene. Automated machine learning (AutoML) is a research area that aims to automate the development of end-to-end ML models. In this paper, we propose an automatic learning method for semantic segmentation based on reinforcement learning (RL), which realizes automatic selection of training data and guides automatic training of semantic segmentation. The results show that our scheme converges faster and achieves higher accuracy than semantic segmentation models trained manually by researchers, while requiring no human involvement.