Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN: 2442-6571 | e-ISSN: 2548-3161 | DOI: 10.26555
Core Subject: Science
The International Journal of Advances in Intelligent Informatics (IJAIN), e-ISSN: 2442-6571, is a peer-reviewed open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers; accepted papers are available online (free access), and there is no publication fee for authors.
Articles: 11 Documents
Search results for issue "Vol 10, No 3 (2024): August 2024": 11 Documents
A deep learning model for detection and classification of coffee-leaf diseases using the transfer-learning technique Mansouri, Nabila; Guessmi, Hanene; Alkhalil, Adel
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.1573

Abstract

The early detection and treatment of plant diseases are important processes, as many diseases affecting crops are highly contagious. Recent advancements in deep learning have provided innovative tools that not only assist early detection but also significantly improve the performance and accuracy of Coffee Leaf Disease (CLD) classification and treatment. However, training a deep learning model from scratch can be both resource- and time-consuming. To overcome this challenge, transfer learning can take full advantage of models pre-trained for more general tasks on extensive datasets to improve the performance of a new, related task using few-shot training. This paper proposes a deep learning model, coupled with transfer learning, for CLD detection that aims to provide highly accurate forecasting of diseases that could affect coffee leaves. Our method evaluates 195 different pre-trained deep learning models, including real-time models such as MobileNet and dense ones such as EfficientNet and ResNet, for the detection of four different diseases. The findings suggest that the EfficientNetB0 model, with transfer learning, achieves the highest accuracy (99.99%) and thus offers an effective solution for the classification of coffee leaf diseases. This result could be used to develop applications that help coffee growers improve the productivity and quality of their crops through early and accurate detection of coffee plant leaf diseases. Such an Artificial Intelligence-based application would provide growers with timely control measures, preventing the spread of disease and minimizing crop damage.
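The transfer-learning idea the abstract describes can be sketched in miniature: keep a "pretrained" feature extractor frozen and train only a small classification head on the new task. The sketch below is illustrative only (not the authors' code); the fixed weight matrix stands in for a real pretrained backbone such as EfficientNetB0, and the data are invented.

```python
# Illustrative transfer-learning sketch: the feature extractor's weights W
# are frozen (never updated); only the head weights are trained.

def frozen_features(x, W):
    """Pretrained feature extractor with ReLU activation; W is never updated."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def train_head(samples, labels, W, epochs=50, lr=0.1):
    """Train only the classification head on frozen features (perceptron rule)."""
    head, bias = [0.0] * len(W), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_features(x, W)
            pred = 1 if sum(h * fi for h, fi in zip(head, f)) + bias > 0 else 0
            err = y - pred
            head = [h + lr * err * fi for h, fi in zip(head, f)]
            bias += lr * err
    return head, bias

W = [[1.0, 0.0], [0.0, 1.0]]                    # frozen "pretrained" weights
X = [[0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2]]  # toy "leaf feature" vectors
y = [1, 0, 1, 0]                                 # 1 = diseased, 0 = healthy (toy labels)
head, bias = train_head(X, y, W)
f = frozen_features([0.15, 0.85], W)
print(1 if sum(h * fi for h, fi in zip(head, f)) + bias > 0 else 0)
```

Because only the head is updated, training needs very few examples, which is what makes the few-shot setting described above practical.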
Job scheduling reservations on cloud resources Pujiyanta, Ardi; Noviyanto, Fiftin

DOI: 10.26555/ijain.v10i3.1421

Abstract

Current applications of cloud computing raise a number of research problems. One of the main problems in the cloud is job allocation: jobs are dynamically allocated to server processors, and all cloud virtualized hardware is available to users on demand and can be dynamically upgraded. Resource scheduling is a critical research topic in the cloud due to its large execution time and resource costs. Differences in the scheduling criteria and parameters used give rise to various categories of resource scheduling algorithms. The goal of resource scheduling is to identify the right resources to schedule workloads in a timely manner and to improve the effectiveness of resource utilization, in other words, to minimize workload completion time. Mapping the right workloads to resources results in good scheduling. Another goal of resource scheduling is to identify adequate and appropriate workloads, so that the scheduling of multiple workloads can meet various QoS needs in cloud computing. The aim of this research is to determine the waiting time, idle time, and makespan on cloud resources. The proposed method sorts jobs by arrival time, starting with the least workload, and places the jobs in a virtual view before scheduling them on cloud resources. Experimental results show that the proposed method has an idle time of 25.3%, while FCFS has 43.1% and backfilling has 31.5%. The average makespan reduction is 16.73% relative to FCFS and 12.87% relative to backfilling. The average decrease in AWT is 13.3% relative to FCFS and 12.03% relative to backfilling. The results of this research can be applied to cloud rentals with flexible times.
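The three metrics the abstract reports (waiting time, idle time, makespan) can be computed for any job ordering with a few lines of code. The toy below is a hedged sketch, not the paper's implementation: it compares FCFS order with the paper's idea of taking the least workload first on a single resource, using invented jobs.

```python
# Job = (arrival_time, run_time). Run jobs in the given order on one resource
# and report (average waiting time, makespan, total idle time).

def schedule(jobs):
    clock, total_wait, idle = 0, 0, 0
    for arrival, run in jobs:
        if clock < arrival:            # resource sits idle until the job arrives
            idle += arrival - clock
            clock = arrival
        total_wait += clock - arrival  # time the job spent queued
        clock += run
    return total_wait / len(jobs), clock, idle

jobs = [(0, 8), (1, 2), (2, 4)]
fcfs = schedule(sorted(jobs, key=lambda j: j[0]))         # by arrival time
least = schedule(sorted(jobs, key=lambda j: (j[1], j[0])))  # least workload first
print(fcfs, least)
```

In this toy instance the least-workload ordering cuts the average waiting time versus FCFS; real cloud traces, as in the paper, also require modeling multiple resources and reservations.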
Granularity-aware legal question answering: a case study of Indonesian government regulations Faisal, Douglas Raevan; Darari, Fariz; Ryanda, Reynard Adha

DOI: 10.26555/ijain.v10i3.1105

Abstract

Question answering (QA) technologies are crucial for building conversational AI. Current research on QA for the legal domain lacks focus on the organized structure of laws, which are hierarchically segmented into components at varying levels of detail. To address this gap, we propose a new task of granularity-aware legal QA, which accounts for the underlying granularity levels of law components. Our approach encompasses task formulation, dataset creation, and model development. Under the Indonesian jurisdiction, we consider four law component granularity levels: chapters (bab), articles (pasal), sections (ayat), and letters (huruf). We include 15 government regulations (Peraturan Pemerintah) of Indonesia related to labor affairs and build a legal QA dataset with granularity information. We then design a solution for this task: the first IR system to account for legal component granularity. We implement a customized retriever-reranker pipeline in which the retriever accepts law components of multiple granularities and the reranker is trained for granularity-aware ranking. We leverage BM25 and BERT models as retriever and reranker, respectively, yielding an end-to-end exact match accuracy of 35.68%, a significant improvement (20%) over a strong baseline. The use of the reranker also improves the granularity accuracy from 44.86% to 63.24%. In a practical context, such a solution can help provide more precise answers, not only from legal chatbots but also from other conversational AI that deals with hierarchically structured documents.
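The retriever stage of the pipeline above can be illustrated with a minimal BM25 scorer. This is a hedged sketch: the k1/b parameters are assumed defaults, and the toy "law components" and query are invented, not from the paper's dataset.

```python
import math

# Minimal BM25 (k1=1.5, b=0.75 assumed) over tokenized documents of mixed
# granularity, returning one relevance score per document.

def bm25_scores(query, docs, k1=1.5, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    scores = []
    for d in docs:
        s = 0.0
        for term in query:
            df = sum(1 for doc in docs if term in doc)          # document frequency
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)     # smoothed IDF
            tf = d.count(term)                                   # term frequency
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Invented law components at mixed granularity (article / section / letter).
docs = [
    "pekerja berhak atas upah minimum".split(),
    "upah ditetapkan oleh gubernur".split(),
    "cuti tahunan paling sedikit dua belas hari".split(),
]
query = "upah minimum pekerja".split()
best = max(range(len(docs)), key=lambda i: bm25_scores(query, docs)[i])
print(best)  # index of the top-ranked component
```

In the paper's pipeline, the top-k components from this retriever would then be re-scored by a granularity-aware BERT reranker.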
Computation of spatial error model with matrix exponential spatial specification approach Marsono, Marsono; Setiawan, Setiawan; Kuswanto, Heri

DOI: 10.26555/ijain.v10i3.1506

Abstract

In spatial regression analysis, we consider not only the internal factors of a location but also the spatial factors that may affect the relationship. A model of spatial dependence between regions caused by unknown factors or errors is known as the Spatial Error Model (SEM). When applied to large datasets, SEM suffers from several problems in parameter estimation and computational time. One method to solve this problem is the Matrix Exponential Spatial Specification (MESS). The purpose of this research is to find an alternative for modeling data containing spatially autocorrelated errors as a substitute for SEM; we name this alternative model MESS(0,1). With the advantage of the MESS features, the MESS(0,1) model is expected to be faster analytically and computationally than SEM when using Maximum Likelihood Estimation (MLE). This study evaluates the effectiveness of the MESS(0,1) model as an alternative to SEM under MLE, based on simulation studies and real data analysis. The simulation studies generate data from small to large samples and then estimate parameters with the MESS(0,1) and SEM models; we compare the two models on estimation time and root mean square error (RMSE). In addition, the models are applied to real data, namely Gross Regional Domestic Product (GRDP) data for the construction category on Java Island in 2021, in line with the massive infrastructure development under the government program. The independent variables, considered influential on the GRDP of the construction sector, are domestic investment, foreign investment, labor, and wages. Based on the simulation study results, the computation time for estimating the parameters of MESS(0,1) is faster than for the SEM model, and in terms of accuracy the RMSE indicator shows MESS(0,1) is more accurate than SEM. When the two models are applied to the real data, the results show that all variables have a significant positive effect on GRDP in the construction category.
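The computational appeal of MESS is that the spatial transform is a matrix exponential e^(alpha*W), which can be evaluated by a truncated Taylor series instead of the matrix inverse SEM requires. The sketch below illustrates only that transform on an invented 3x3 row-standardized weight matrix; it is not the paper's estimation code.

```python
# Matrix exponential via truncated Taylor series: e^{aW} = I + aW + (aW)^2/2! + ...
# In MESS, the filtered response e^{alpha W} y replaces SEM's (I - lambda W)^{-1} term.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(W, alpha, terms=20):
    n = len(W)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # running sum, starts at I
    T = [row[:] for row in S]                                   # current term (alpha W)^k / k!
    aW = [[alpha * w for w in row] for row in W]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, aW)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

W = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]  # toy row-standardized weights
E = mat_exp(W, alpha=-0.3)
y = [[2.0], [1.0], [3.0]]
y_star = mat_mul(E, y)   # spatially filtered response used in MESS estimation
print([round(v[0], 3) for v in y_star])
```

No matrix inversion or log-determinant is needed, which is the source of the speed advantage the simulation study reports.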
Comparative study of predictive models for hoax and disinformation detection in Indonesian news Adiati, Nadia Paramita Retno; Priambodo, Dimas Febriyan; Girinoto, Girinoto; Indarjani, Santi; Rizal, Akhmad; Prayoga, Arga; Beatrix, Yehezikha

DOI: 10.26555/ijain.v10i3.878

Abstract

With the times, false information spreads easily, including in Indonesia. In Press Release No. 485/HM/KOMINFO/12/2021, the Ministry of Communication and Information reported that it had cut off access to 565,449 items of negative content and published 1,773 clarifications of hoax and disinformation content. Research has been carried out on this matter, but fake news still needs to be classified into disinformation and hoaxes. This study compares our proposed model, an ensemble of shallow learning predictive models (Random Forest, Passive Aggressive Classifier, and Cosine Similarity), with a deep learning model that uses BERT-Indo for classification. Both models are trained on equivalent datasets containing 8,757 news articles: 3,000 valid, 3,000 hoax, and 2,757 disinformation. The articles were obtained from websites such as CNN, Kompas, Detik, Kominfo, Temanggung Mediacenter, Hoaxdb Aceh, Turnback Hoax, and Antara, and were then cleaned of unnecessary elements such as punctuation marks, numbers, Unicode, stopwords, and suffixes using the Sastrawi library. At the benchmarking stage, the shallow learning models are combined through ensemble learning with hard voting to increase accuracy. This yields higher values, with an accuracy of 98.125%, precision of 98.2%, F1 score of 98.1%, and recall of 98.1%, compared to the BERT-Indo model, which achieved only 96.918% accuracy, 96.069% precision, 96.937% F1 score, and 96.882% recall. Based on accuracy, the shallow learning model is superior to the deep learning model. This machine learning model is expected to be used to combat the spread of hoaxes and disinformation in Indonesian news. Additionally, with this research, false news can be classified in more detail, as either hoaxes or disinformation.
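Hard voting, the ensembling step described above, simply takes the majority label across the base models for each sample. The sketch below is a minimal illustration (not the authors' code) with invented per-model predictions standing in for Random Forest, Passive Aggressive, and Cosine Similarity outputs.

```python
from collections import Counter

# Hard voting: each base model casts one vote per sample; the majority wins.

def hard_vote(predictions_per_model):
    """predictions_per_model: list of per-model label lists, aligned by sample."""
    n_samples = len(predictions_per_model[0])
    return [Counter(model[i] for model in predictions_per_model).most_common(1)[0][0]
            for i in range(n_samples)]

# Toy labels: 0 = valid, 1 = hoax, 2 = disinformation.
rf  = [1, 0, 2, 1]   # stand-in for Random Forest predictions
pac = [1, 0, 0, 1]   # stand-in for Passive Aggressive predictions
cos = [2, 0, 2, 1]   # stand-in for Cosine Similarity predictions
print(hard_vote([rf, pac, cos]))  # prints [1, 0, 2, 1]
```

With an odd number of voters over three classes, ties are still possible; `Counter.most_common` then breaks them by first-seen order, so production code would want an explicit tie rule.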
Computer-aided pulmonary disease diagnosis using lung ultrasound video Bahri, Saeful; Suprijanto, Suprijanto; Juliastuti, Endang

DOI: 10.26555/ijain.v10i3.1397

Abstract

A machine learning-based computer-aided diagnosis (CAD) system for processing lung ultrasound images will greatly assist doctors in diagnosing lung diseases. The learning method of the classifier model used in the CAD system affects the system's diagnostic accuracy, and determining the variables in the classifier and image pre-processing stages requires special attention to obtain a highly accurate classifier model. This study presents the development of a machine learning-based CAD as an add-on tool to classify lung disease based on lung ultrasound (LUS) video. The main steps are capturing the LUS videos and converting them into images, image pre-processing for speckle-noise removal and contrast and brightness enhancement, feature extraction, and classification. Three learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Naïve Bayes (NB), were used to classify images into three categories: healthy, pneumonia, and COVID-19. The performance of the three classifier models is compared to obtain the best one. The experimental results demonstrate the superiority of the suggested strategy using the SVM classifier: based on 2,149 lung images across three classes and 20 texture feature sets, the SVM has an accuracy of 98.1%, the KNN 94.7%, and the Gaussian NB 79.6%. The model with the highest accuracy will be used to develop the CAD system.
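One of the three classifiers compared above, KNN, is simple enough to sketch directly over texture-feature vectors. The values below are invented two-dimensional stand-ins; the real system extracts 20 texture features per ultrasound frame.

```python
import math

# k-nearest-neighbour classification: label a query by the majority label
# of its k closest training vectors (Euclidean distance).

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

train = [
    ([0.10, 0.20], "healthy"),   ([0.15, 0.25], "healthy"),
    ([0.80, 0.70], "pneumonia"), ([0.85, 0.75], "pneumonia"),
    ([0.50, 0.90], "covid19"),   ([0.55, 0.95], "covid19"),
]
print(knn_predict(train, [0.12, 0.22]))  # prints "healthy"
```

The same feature vectors would feed the SVM and Naïve Bayes models for the head-to-head comparison reported in the abstract.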
Dynamic path planning using a modified genetic algorithm Pratomo, Awang Hendrianto; Wahyunggoro, Oyas; Triharminto, Hendri Himawan

DOI: 10.26555/ijain.v10i3.699

Abstract

The genetic algorithm (GA) is a well-known method for finding a feasible path plan, which can be formulated as a global optimization problem. The drawback of GA is its high computational cost due to the random process in each operator. In this research, a new initial population integrated with a new crossover operator strategy is proposed. The parameter is the robot's travel distance. Before the crossover operator is employed, a c-obstacle is generated and used as a filter to remove unnecessary nodes and decrease computation time. The initial population is then determined and divided into two parents: the first parent's chromosome contains the initial and goal positions, and the second parent is filled with nodes from each obstacle. The genes of the chromosome are augmented with c-obstacle nodes. The crossover operator is applied after filtering, once the c-obstacle of possible hops has been determined. The fitness function considers the distance from the last position to the next, and the optimum value is the shortest path that avoids the obstacle ahead. The aim of the proposed method is to reduce the random population and random operations in GA. Using datasets similar to those of previous research, the modified GA reduces the total number of generations and yields an adaptive generation number, meaning that it converges faster than other GA methods.
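Two of the GA building blocks above can be sketched concretely: a path-length fitness (shorter collision-free paths are fitter) and a single-point crossover over waypoint chromosomes. This is a hedged toy, with invented waypoints; the c-obstacle construction and node filtering are omitted.

```python
import math
import random

# Fitness: total Euclidean length of a waypoint path (lower is better).
def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

# Single-point crossover that always preserves the shared start and goal,
# since the cut point is chosen strictly inside the chromosome.
def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

start, goal = (0, 0), (10, 10)
a = [start, (2, 5), (6, 7), goal]   # invented parent chromosomes
b = [start, (3, 2), (8, 4), goal]
child = crossover(a, b)
print(child[0], child[-1], round(path_length(child), 2))
```

In the modified GA, a child like this would only be kept if its segments clear the c-obstacle; the fitness then drives selection toward the shortest feasible path.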
Enhancement of images compression using channel attention and post-filtering based on deep autoencoder Wirabudi, Andri Agustav; Fachrurrozi, Nurwan Reza; Dorand, Pietra; Royhan, Muhamad

DOI: 10.26555/ijain.v10i3.1499

Abstract

Image compression is a crucial research topic in today's information age, especially for balancing compression efficiency against the quality of the resulting image reconstruction. Common image compression methods today are based on deep learning autoencoders. However, these methods are limited in that they consider only residual values in processed images, achieving compression efficiency with less satisfying reconstruction results. To address this issue, we introduce an attention block mechanism to further improve coding efficiency, along with post-filtering methods to enhance the final reconstructed images. Experimental results using two datasets, CLIC for training and KODAK for testing, demonstrate that this method outperforms several previous research methods. With a coding-efficiency improvement of -28.16%, an average PSNR improvement of 34%, and an MS-SSIM improvement of 8%, the model in this study significantly enhances rate-distortion (RD) performance compared to previous approaches.
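PSNR, one of the two quality metrics reported above, is easy to compute directly from pixel values. The sketch below is illustrative only (not the paper's evaluation code) and uses invented 8-bit pixel arrays.

```python
import math

# Peak signal-to-noise ratio for 8-bit images:
# PSNR = 10 * log10(MAX^2 / MSE), higher means a closer reconstruction.

def psnr(original, reconstructed, max_val=255.0):
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

orig  = [52, 55, 61, 59, 79, 61, 76, 61]   # invented pixel values
recon = [50, 56, 60, 60, 78, 62, 75, 63]
print(round(psnr(orig, recon), 2))  # prints 45.7
```

MS-SSIM, the paper's other metric, instead compares local structure at multiple scales, which is why RD comparisons typically report both.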
Predictive optimization in automotive supply chains: a BiLSTM-Attention and reinforcement learning approach Amellal, Asmae; Amellal, Issam; Ech-charrat, Mohammed Rida; Seghiouer, Hamid

DOI: 10.26555/ijain.v10i3.1351

Abstract

Effective supply chain management is pivotal for enhancing customer satisfaction and driving competitiveness and profitability in the automotive service and spare parts distribution sector. Our research introduces an innovative approach, integrating game theory, BiLSTM-Attention deep learning, and Reinforcement Learning (RL) to refine supply and pricing strategies within this domain. Focusing on Moroccan automobile companies, we utilized Enterprise Resource Planning (ERP) system data to forecast customer behavior using a BiLSTM model enhanced with an Attention mechanism. This predictive model achieved a Mean Squared Error (MSE) of 0.0525 and an R² value of 0.896, indicating high accuracy and an ability to explain substantial variance in customer behavior. To further our analysis, we incorporated reinforcement learning, evaluating three algorithms: Q-learning, Deep Q-Networks (DQN), and SARSA. Our findings demonstrate SARSA's superior performance in our context, attributed to its adeptness at navigating the dynamic environment of the automotive supply chain. By synergizing the predictive power of the BiLSTM-Attention model with the strategic optimization capabilities of reinforcement learning, particularly SARSA, our study offers a comprehensive framework for automotive companies to enhance their supply chain strategies, balancing profitability and customer satisfaction effectively in a rapidly evolving industry sector.
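SARSA, the on-policy algorithm the study found best, differs from Q-learning in that its update uses the action actually taken in the next state: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)). The sketch below shows that single update rule; the state and action names are invented stand-ins for the paper's supply and pricing decisions.

```python
from collections import defaultdict

# One SARSA temporal-difference update. Q maps (state, action) -> value.

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
    return Q

Q = defaultdict(float)
# Illustrative transition: in state "low_stock" the agent chose "reorder",
# received reward 1.0, landed in "ok_stock", and then chose "hold".
sarsa_update(Q, "low_stock", "reorder", 1.0, "ok_stock", "hold")
print(round(Q[("low_stock", "reorder")], 3))  # prints 0.1 after one update
```

Because the bootstrapped term is Q(s',a') for the action the policy really took (rather than the max over actions, as in Q-learning), SARSA's estimates track the behavior policy, which suits stochastic, dynamic environments like the one described above.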
Link stability-based optimal routing path for efficient data communication in MANET Salim, Renisha Pulinchuvallil; Ramachandran, Rajesh

DOI: 10.26555/ijain.v10i3.1558

Abstract

The paper delves into the complexities of Mobile Ad hoc Networks (MANETs), which consist of a diverse array of wireless nodes. In such networks, routing packets poses a significant challenge due to their dynamic nature. Despite the variety of techniques available for optimizing routing in MANETs, persistent issues like packet loss, routing overhead, and End-to-End Delay (EED) remain prevalent. In response to these challenges, the paper proposes a novel approach for efficient Data Communication (DC) by introducing a Link Stability (LS)-based optimal routing path. This approach leverages several advanced techniques, including Pearson Correlation Coefficient SWIFFT (PCC-SWIFFT), the Galois-based Digital Signature Algorithm (G-DSA), and the Entropy-based Gannet Optimization Algorithm (E-GOA). The proposed methodology follows a systematic process. Initially, the nodes in the MANET are initialized to establish the network infrastructure. Subsequently, the Canberra-based K-Means (C-K Means) algorithm is employed to identify Neighboring Nodes (NNs), which are pivotal for creating communication links within the network. To ensure secure communication, secret keys (SK) are generated for both the Sender Node (SN) and the Receiver Node (RN) using Galois theory. Following this, PCC-SWIFFT methodologies are utilized to generate hash codes, serving as unique identifiers for data packets or routing information. Signatures are created and verified at the SN and RN using the G-DSA. Verified nodes are then added to the routing entry table, facilitating the establishment of multiple paths within the network. The Optimal Path (OP) is selected using the E-GOA, considering factors such as link stability and network congestion. Finally, data communication is initiated, with LS continuously monitored to ensure optimal routing performance. Comparative analysis with existing methodologies demonstrates the superior performance of the proposed model. In summary, the proposed approach offers a comprehensive solution to enhance routing efficiency in MANETs by addressing critical issues and leveraging advanced algorithms for key generation, signature verification, and path optimization.
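The final path-selection step can be illustrated with a toy stability model. This is a hedged sketch under a common simplification, scoring a multi-hop path by the product of its per-link stability values; the paper's E-GOA optimizer and congestion terms are omitted, and the 4-node network below is invented.

```python
# Score a path by multiplying its link stability values (each in [0, 1]);
# the most stable end-to-end path wins.

def path_stability(path, link_stability):
    score = 1.0
    for a, b in zip(path, path[1:]):
        score *= link_stability[frozenset((a, b))]
    return score

# Invented per-link stability estimates for a tiny MANET: S = sender, R = receiver.
ls = {
    frozenset(("S", "A")): 0.90, frozenset(("A", "R")): 0.80,
    frozenset(("S", "B")): 0.60, frozenset(("B", "R")): 0.95,
}
paths = [["S", "A", "R"], ["S", "B", "R"]]
best = max(paths, key=lambda p: path_stability(p, ls))
print(best)  # prints ['S', 'A', 'R'] (0.72 vs 0.57)
```

Continuously re-estimating the per-link values as nodes move, and re-running this selection, is what lets the scheme react to the dynamic topology the abstract describes.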
