Contact Name
Imam Much Ibnu Subroto
Contact Email
imam@unissula.ac.id
Phone
-
Journal Mail Official
ijai@iaesjournal.com
Editorial Address
-
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN : 2089-4872     EISSN : 2252-8938     DOI : -
IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all areas of artificial intelligence and their applications, including the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (such as genetic algorithms and ant colony optimization); reasoning and evolution; intelligence applications; computer vision and speech understanding; multimedia and cognitive informatics; data mining and machine learning tools; heuristics and AI planning strategies and tools; computational theories of learning; technology and computing (such as particle swarm optimization); intelligent system architectures; knowledge representation; bioinformatics; natural language processing; multi-agent systems; etc.
Arjuna Subject : -
Articles 1,808 Documents
Classification of Tri Pramana learning activities in virtual reality environment using convolutional neural network Sindu, I Gede Partha; Sudarma, Made; Hartati, Rukmi Sari; Gunantara, Nyoman
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp2840-2853

Abstract

Tri Pramana, a form of local wisdom of Balinese society, has now been adopted into the education system. This adaptation results in a learning cycle model that essentially consists of three stages, namely Sabda Pramana (theoretical study), Pratyaksa Pramana (direct observation), and Anumana Pramana (practicum). In learning activities, it is difficult for educators to fully observe each individual to determine the most suitable learning model. Through virtual environment technology, educators can observe students more freely through recordings of students' activities. In practice, however, manual analysis requires large resources. A deep learning approach based on convolutional neural networks (CNNs) can automate this analysis by classifying images of recorded learner activity. To produce a robust CNN model, this research compares four of the most commonly used architectures, namely ResNet-50, MobileNetV2, InceptionV3, and Xception. Each architecture is tuned using combinations of learning rate and batch size. Using a 512×512-resolution dataset split into a 70% training subset (4,541 images), 20% validation (1,296 images), and 10% test (652 images), the best ResNet model is obtained with a learning rate of 1e-3 and a batch size of 64, achieving an accuracy of 99.39%, precision of 99.37%, and recall of 99.42%.
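For context, the reported accuracy, precision, and recall can each be derived from a confusion matrix over the three Tri Pramana classes. Below is a minimal macro-averaged sketch in plain Python; the class counts in `cm` are illustrative placeholders (only the 652-image test-set total matches the abstract):

```python
# Macro-averaged accuracy/precision/recall from a confusion matrix.
# The counts below are illustrative placeholders, not the paper's data.
def metrics(confusion):
    """confusion[i][j] = images of true class i predicted as class j."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(n))
    accuracy = correct / total
    precisions, recalls = [], []
    for c in range(n):
        tp = confusion[c][c]
        predicted_c = sum(confusion[r][c] for r in range(n))  # column sum
        actual_c = sum(confusion[c])                          # row sum
        precisions.append(tp / predicted_c if predicted_c else 0.0)
        recalls.append(tp / actual_c if actual_c else 0.0)
    return accuracy, sum(precisions) / n, sum(recalls) / n

# Three Tri Pramana classes; rows = true class, columns = predicted class.
cm = [[210, 2, 1],
      [3, 215, 1],
      [0, 2, 218]]
acc, prec, rec = metrics(cm)
```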
Deep learning method for lung cancer identification and classification Jamdar, Sahil; Vaddin, Jayashree; B. Nargundkar, Sachidanand; Patil, Shrinivasa
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp1119-1128

Abstract

Lung cancer (LC) is claiming many lives and is becoming a serious cause for concern. Detecting LC at an early stage improves the chances of recovery. The accuracy of early LC detection can be improved with the help of a convolutional neural network (CNN)-based deep learning approach. In this paper, we present two methodologies for lung cancer detection (LCD) applied to the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) datasets. Classification of these LC images is carried out using a support vector machine (SVM) and a deep CNN. The CNN is trained with (i) multiple batches and (ii) a single batch to classify LC images as cancerous or non-cancerous. All these methods are implemented in MATLAB. The classification accuracy obtained by the SVM is 65%, whereas the deep CNN produces detection accuracies of 80% and 100% for multiple-batch and single-batch training, respectively. The novelty of our experimentation is the near-100% classification accuracy obtained by our deep CNN model when tested on 25 lung computed tomography (CT) test images, each of size 512×512 pixels, in fewer than 20 iterations, compared with work by other researchers using cropped LC nodule images.
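The "multiple batches" versus "single batch" training regimes can be illustrated on a toy problem. This is a minimal gradient-descent sketch in plain Python, not the paper's MATLAB CNN; the 1-D least-squares data, learning rate, and epoch count are illustrative assumptions:

```python
# Sketch of two training regimes on a toy 1-D least-squares problem:
# "single batch" uses all samples per update (full-batch gradient descent),
# "multiple batches" splits the data into mini-batches per epoch.
# Toy data and hyperparameters are illustrative, not from the paper.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # data lies exactly on y = 2x

def train(batch_size, epochs=200, lr=0.01):
    w = 0.0
    for _ in range(epochs):
        for start in range(0, len(xs), batch_size):
            bx = xs[start:start + batch_size]
            by = ys[start:start + batch_size]
            # Gradient of mean squared error 0.5*(w*x - y)^2 w.r.t. w.
            grad = sum((w * x - y) * x for x, y in zip(bx, by)) / len(bx)
            w -= lr * grad
    return w

w_single = train(batch_size=len(xs))  # one batch covering all samples
w_mini = train(batch_size=2)          # multiple smaller batches
```

Both regimes converge toward the true slope here; on real image data they differ in gradient noise and convergence speed, which is the contrast the paper exploits.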
A systematic review of non-intrusive human activity recognition in smart homes using deep learning El Ghazi, Mariam; Aknin, Noura
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3188-3202

Abstract

Smart homes are a viable solution for improving the independence and privacy of elderly and dependent people, thanks to IoT sensors. Reliable human activity recognition (HAR) devices are required to enable precise monitoring inside smart homes. Despite various reviews on HAR, there is a lack of comprehensive studies covering a diverse range of approaches, including sensor-based, wearable, ambient, and device-free methods. Considering this research gap, this study systematically reviews HAR studies that apply deep learning as their main solution and use a non-intrusive approach to activity monitoring. Out of 2,171 studies in the IEEE Xplore database, we carefully selected and thoroughly analyzed 37 studies, following the guidelines of the preferred reporting items for systematic reviews and meta-analyses (PRISMA) methodology. In this paper, we explore the various modalities, deep learning approaches, and datasets employed in the context of non-intrusive HAR. This study presents essential data for researchers applying deep learning techniques to HAR in smart home environments. Additionally, it identifies and highlights the main trends, challenges, and future directions.
Photo-realistic photo synthesis using improved conditional generative adversarial networks Mandara Kirimanjeshwara, Raghavendra Shetty; Prasad, Sarappadi Narasimha
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp516-523

Abstract

There is a wide range of potential uses for both the forward direction (generating face sketches from actual photos) and the backward direction (generating photos from synthetic face sketches). However, photo/sketch synthesis remains a difficult problem because of the distinct differences between photos and sketches. Existing frameworks often struggle to learn a strong mapping between the geometry of a drawing and its corresponding photo-realistic picture because of the limited amount of paired sketch-photo training data available. In this study, we treat this as an image-to-image translation problem and investigate the use of the well-known enhanced pix2pix generative adversarial networks (GANs) to generate high-quality photo-realistic pictures from drawings, making use of three distinct datasets. While recent GAN-based approaches have shown promise in image translation, they still struggle to produce high-resolution, photo-realistic pictures. Our technique uses supervised learning to train the generator's hidden layers to produce low-resolution pictures initially, then uses the network's implicit refinement to produce high-resolution images. Extensive tests on three sketch-photo datasets (two publicly accessible and one we produced) are used for evaluation. Our solution outperforms existing image translation techniques, producing more photo-realistic visuals with a peak signal-to-noise ratio of 59.85% and a pixel accuracy of 82.7%.
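Peak signal-to-noise ratio, one of the two evaluation metrics quoted above, is conventionally computed in decibels from the mean squared error against a reference image. A minimal sketch with illustrative pixel values (not the paper's data):

```python
import math

# Standard peak signal-to-noise ratio between a generated image and its
# ground truth (8-bit pixels, so the peak value is 255). The two tiny
# flattened "images" below are illustrative, not the paper's data.
def psnr(reference, generated, peak=255.0):
    n = len(reference)
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / n
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(peak * peak / mse)

ref = [52, 110, 0, 255, 128, 64]
gen = [50, 112, 1, 254, 128, 66]
score = psnr(ref, gen)
```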
Adaptive radio propagation model for maximizing performance efficiency in smart city disaster management application Mangasuli, Sushant; Kaluti, Mahesh
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp1348-1357

Abstract

Climate change poses several environmental threats, such as floods, to urban environments; thus, effective and reliable communication of emergency information is needed during massive breakdowns of network infrastructure. This paper presents a mobile ad hoc network (MANET)-based system for communicating emergency information such as calls, images, and videos that is compatible with current 3GPP and 5G communication networks. To maintain connectivity, information is communicated between MANET nodes in a multi-hop manner. However, designing a radio propagation model is challenging given the high congestion of local emergency requests across different terrains and varying user speeds. Current radio propagation models are designed without considering the effect of line-of-sight between communicating devices and are not adaptive to the different environments found in urban disaster management. This paper develops an adaptive radio propagation (ARP) model for three environments, namely expressway, city, and semi-urban. Then, to reduce congestion and improve network performance efficiency, the work introduces an adaptive medium access control (AMAC) protocol. The AMAC incorporates a dynamic network controller (DNC) to dynamically optimize the contention-window size according to current traffic demands. The AMAC protocol achieves much-improved throughput with lower packet loss compared with the existing MAC (EMAC) model under the different radio propagation models introduced in this work.
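The contention-window adaptation performed by the DNC could, in spirit, look like the following hypothetical sketch; the `adapt_cw` rule, thresholds, and window bounds are assumptions for illustration, not the paper's actual protocol:

```python
# Hypothetical dynamic contention-window controller in the spirit of the
# paper's AMAC/DNC idea: grow the window when observed collisions are
# high (congestion), shrink it when the channel is mostly idle.
# Bounds and thresholds are illustrative assumptions, not the paper's.
CW_MIN, CW_MAX = 16, 1024

def adapt_cw(cw, collision_rate):
    if collision_rate > 0.5:      # heavy contention: back off harder
        return min(cw * 2, CW_MAX)
    if collision_rate < 0.1:      # light load: reclaim channel time
        return max(cw // 2, CW_MIN)
    return cw                     # moderate load: hold steady

# Simulated sequence of observed per-interval collision rates.
cw = CW_MIN
for rate in [0.8, 0.9, 0.7, 0.05, 0.05]:
    cw = adapt_cw(cw, rate)
```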
Efficient fusion of spatio-temporal saliency for frame wise saliency identification Narasimha, Sharada P; Lingareddy, Sanjeev C
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3621-3633

Abstract

Video saliency detection is a rapidly growing subject that has nonetheless seen relatively few contributions. The most common technique used nowadays is frame-by-frame saliency detection. The modified spatio-temporal fusion method presented in this paper offers a novel approach to saliency detection and mapping. It uses frame-wise overall motion-color saliency as well as pixel-based consistent spatio-temporal diffusion for temporal uniformity. Additionally, a variety of techniques is advocated to increase the overall accuracy and precision of the saliency maps. The video is divided into groups of frames, and each frame undergoes temporal diffusion and integration to compute the color saliency mapping, as covered in the proposed method section. Then, with the aid of a permutation matrix, inter-group frames are used to form the pixel-based saliency fusion, after which the features, i.e., the fusion of pixel saliency and color information, direct the diffusion of the spatio-temporal saliency. The result is tested using five publicly accessible global saliency evaluation metrics, and the proposed algorithm is found to outperform numerous saliency detection techniques with an improved accuracy margin. The results demonstrate the method's robustness, dependability, adaptability, and precision.
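The pixel-wise fusion of color and motion saliency into a single frame-level map can be sketched as a weighted blend followed by normalization; the `alpha` weighting below is an illustrative assumption, not the paper's actual fusion rule:

```python
# Illustrative pixel-wise fusion of a color-saliency map and a motion-
# saliency map into one frame-level map, loosely mirroring a
# spatio-temporal fusion step. The weighting scheme is an assumption.
def fuse(color_map, motion_map, alpha=0.6):
    """Weighted per-pixel blend of two maps, then normalized to [0, 1]."""
    fused = [[alpha * c + (1 - alpha) * m
              for c, m in zip(crow, mrow)]
             for crow, mrow in zip(color_map, motion_map)]
    peak = max(max(row) for row in fused)
    if peak == 0:
        return fused  # blank frame: nothing salient to rescale
    return [[v / peak for v in row] for row in fused]

# Tiny 2x2 saliency maps with values already in [0, 1].
color = [[0.2, 0.8], [0.4, 1.0]]
motion = [[0.0, 1.0], [0.5, 0.5]]
sal = fuse(color, motion)
```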
Empowering anomaly detection algorithm: a review Iqbal Basheer, Muhammad Yunus; Mohd Ali, Azliza; Osman, Rozianawaty; Abdul Hamid, Nurzeatul Hamimah; Nordin, Sharifalillah; Mohd Ariffin, Muhammad Azizi; Iglesias Martínez, José Antonio
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp9-22

Abstract

Detecting anomalies in a data stream is a growing field of research relevant to domains like intrusion detection, fraud detection, security in sensor networks, and event detection in internet of things (IoT) environments. Consider, for instance, surveillance cameras installed everywhere, which are usually monitored by human experts: when many cameras are involved, more human expertise is needed, making the approach expensive. Hence, researchers worldwide are trying to develop the best automated algorithms for detecting abnormal behavior in real-time data. Algorithms designed for this purpose may contain gaps that differentiate their suitability for specific domains. Therefore, this study presents a review of anomaly detection algorithms, identifying these gaps together with the advantages and disadvantages of each algorithm. Since many works of literature were examined, this review is expected to aid researchers in closing these gaps in the future.
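As a concrete (and deliberately simple) instance of the automated detection this review surveys, a running z-score check over a sliding window flags outliers in a stream; the window size, threshold, and sensor readings below are illustrative assumptions:

```python
from collections import deque
import math

# Minimal streaming anomaly detector: flag a reading whose z-score
# against the recent window exceeds a threshold. Window size, threshold,
# and readings are illustrative assumptions, not from any reviewed work.
def detect_anomalies(stream, window=5, threshold=3.0):
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(history) == history.maxlen:
            mean = sum(history) / len(history)
            var = sum((h - mean) ** 2 for h in history) / len(history)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)  # index of the anomalous reading
        history.append(x)
    return flagged

readings = [10, 11, 10, 9, 10, 10, 55, 10, 11, 10]
anomalies = detect_anomalies(readings)
```

Note how the spike at index 6 is flagged, while the readings just after it are not: the spike enters the window and inflates the standard deviation, which is exactly the kind of domain-dependent behavior gap the review discusses.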
Chelonia mydas detection and image extraction from field recordings Amir Zakry, Khalif; Syahiran Soria, Mohamad; Hipiny, Irwandi; Ujir, Hamimah; Hassan, Ruhana; Hardi, Richki
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp2354-2363

Abstract

Wildlife videography is an essential data collection method for animal research. Recording an animal such as the Chelonia mydas sea turtle in its habitat requires setting up special cameras or performing complex camera movements while the operator maneuvers over its complicated habitat. The result is hours of footage containing only some good data usable for further animal research, which still requires human input to filter out. This presents a problem that artificial intelligence models can assist with, especially in automating the extraction of good data. This paper proposes the use of machine learning models to crop images of endangered Chelonia mydas turtles, helping prune through thousands of frames from several video recordings. With human supervision, we extracted and curated a dataset of 1,426 good images from our video dataset and used it to perform transfer learning on a you only look once (YOLO)v7 pre-trained model. Our paper shows that the retrained YOLOv7 model, when run on our remaining video dataset with various confidence scores, can crop images from the field video recordings of Chelonia mydas turtles with up to 99.89% of outputs correctly cropped, thus automating the data extraction process.
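The post-detection step of turning detector output into valid crops can be sketched as confidence filtering plus clamping boxes to the frame; the box coordinates and threshold below are hypothetical, not the paper's detections:

```python
# Illustrative post-processing of detector output: keep boxes whose
# confidence clears a threshold and clamp them to the frame so the crop
# region is always valid. Box values and threshold are assumptions.
def crops_from_detections(detections, frame_w, frame_h, min_conf=0.5):
    kept = []
    for x1, y1, x2, y2, conf in detections:
        if conf < min_conf:
            continue  # discard low-confidence detections
        x1, y1 = max(0, x1), max(0, y1)          # clamp to frame origin
        x2, y2 = min(frame_w, x2), min(frame_h, y2)  # clamp to frame edge
        if x2 > x1 and y2 > y1:                  # drop degenerate boxes
            kept.append((x1, y1, x2, y2))
    return kept

dets = [(-5, 10, 120, 200, 0.97),   # spills off the left edge: clipped
        (300, 50, 400, 150, 0.31),  # below threshold: dropped
        (500, 400, 640, 480, 0.88)]
boxes = crops_from_detections(dets, frame_w=640, frame_h=480)
```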
Optimized feature selection approaches for accident classification to enhance road safety Sobhana, Mummaneni; Venkatesh Mendu, Gnana Siva Sai; Vemulapalli, Nihitha; Kumar Chintakayala, Kushal
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3283-3290

Abstract

In the modern era, road accidents have become an increasingly critical global concern, requiring urgent attention and innovative solutions. This investigation compiled an extensive dataset of 10,356 accidents that occurred between 2018 and 2022 in Ernakulam district. By utilizing advanced feature selection methodologies, such as the genetic algorithm and coyote optimization, this research identified pivotal accident determinants. The study harnesses deep learning techniques, encompassing recurrent neural networks (RNN), gated recurrent units (GRU), long short-term memory (LSTM), and multilayer perceptrons (MLP), for classifying accidents by severity (categorized as fatal, grievous, and severe). Eight predictive models are trained on the dataset, and the top two are ensembled. By integrating deep learning and optimization strategies, this research aims to create a robust accident classification system that will help in developing proactive policies to reduce the frequency and severity of accidents in Ernakulam district.
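Genetic-algorithm feature selection, one of the two methods named above, evolves bit masks over the feature set. A toy sketch in plain Python; the per-feature scores and fitness function are illustrative stand-ins (a real fitness would be a classifier's validation accuracy on the masked features):

```python
import random

# Toy genetic-algorithm feature selection: each chromosome is a bit mask
# over features; fitness rewards "useful" features and penalizes mask
# size. Scores and fitness form are illustrative, not the paper's.
SCORES = [0.9, 0.1, 0.8, 0.05, 0.7]   # hypothetical per-feature usefulness

def fitness(mask):
    gain = sum(s for s, bit in zip(SCORES, mask) if bit)
    return gain - 0.2 * sum(mask)     # penalize large feature subsets

def evolve(pop, generations=30, rng=random.Random(0)):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: len(pop) // 2]           # elitism: keep fitter half
        children = []
        for _ in range(len(pop) - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(len(child))
            child[i] ^= rng.random() < 0.1     # occasional bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

population = [[1] * 5, [0] * 5, [1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]
best = evolve(population)
```

With these scores, only features whose usefulness exceeds the 0.2 penalty are worth keeping, so the optimal mask selects features 0, 2, and 4; elitism guarantees the best individual is never lost between generations.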
Early prediction of chronic heart disease with recursive feature elimination and supervised learning techniques Kumar Napa, Komal; Kalyan Kumar, Angati; Murugan, Sangeetha; Mahammad, Kamaluru; Admassu Assegie, Tsehay
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp730-736

Abstract

Chronic heart disease (CHD) is a common complication among patients in the cardiological intensive care unit, often resulting in poor prognosis and high mortality. Early prediction of CHD can reduce mortality by preventing the disease from becoming severe. This study evaluated the efficacy of recursive feature elimination for predicting CHD using supervised learning techniques. The study employed a combined Cleveland and Hungarian CHD dataset of 1,190 records. Different supervised learning techniques (support vector machine, decision tree, k-nearest neighbor, naive Bayes, stochastic gradient descent, adaptive boosting, and multilayer perceptron) were used to study the efficacy of recursive feature elimination. Chest pain type, sex, blood sugar level, angina, depression, and slope were associated with CHD occurrence. The accuracy of the k-nearest neighbor and decision tree models was 89.91% on the feature-selected dataset, indicating good predictive ability. Ultimately, the support vector machine and logistic regression with the selected features exhibited good discriminatory ability for early prediction of CHD. Thus, recursive feature elimination is a good approach for developing a higher-accuracy model to predict CHD.
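Recursive feature elimination can be sketched as repeatedly discarding the lowest-importance feature until the desired count remains; the importance scores and feature names below are hypothetical, not the study's fitted weights:

```python
# Minimal sketch of recursive feature elimination (RFE): repeatedly drop
# the feature with the smallest importance until n_keep remain. The
# importance scores are illustrative stand-ins for model weights (e.g.
# SVM coefficient magnitudes), not values from the study.
def rfe(importances, n_keep):
    remaining = list(range(len(importances)))
    while len(remaining) > n_keep:
        weakest = min(remaining, key=lambda f: importances[f])
        remaining.remove(weakest)   # eliminate the least useful feature
        # A real RFE would refit the model here and refresh importances.
    return sorted(remaining)

# Hypothetical importances for: chest pain, sex, blood sugar, angina,
# depression, slope, age, cholesterol.
scores = [0.91, 0.40, 0.55, 0.62, 0.70, 0.48, 0.12, 0.05]
selected = rfe(scores, n_keep=6)
```

In this toy run the six clinically associated features from the abstract survive while the two weakest-scoring candidates are eliminated; the one-shot ranking here is the main simplification versus true RFE, which re-estimates importances after every elimination.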
