Articles

Found 3 Documents
Journal: IAES International Journal of Artificial Intelligence (IJ-AI)

Fragmented-cuneiform-based convolutional neural network for cuneiform character recognition Prasetiadi, Agi; Saputra, Julian
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp554-562

Abstract

Cuneiform was a widely used writing system during one phase of human history. Although millions of tablets have been excavated to date, only around 100,000 have been read. Translation becomes even more difficult when a tablet has damaged areas, leaving some of its characters fragmented and hard to read. This paper investigates the possibility of reading fragmented cuneiform characters rendered from the Noto Sans Cuneiform font using a convolutional neural network (CNN). The dataset is built from 921 characters extracted from the font. These characters are then intentionally damaged with specific patterns, producing a set of fragmented characters ready for training. The model produced by this training phase is then used to read unseen fragmentation patterns of the cuneiform set, and it is also tested on the undamaged character set. The simulation yields an accuracy of 83.86% on fragmented characters. Interestingly, an accuracy of 96.42% is obtained when the model is tested on normal characters.
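The abstract's key data-construction step, intentionally damaging rendered glyphs to produce fragmented training samples, can be sketched as a simple masking operation. This is an illustrative assumption, not the authors' actual pipeline: the paper does not specify the damage patterns, and the `fragment` helper, the rectangular mask, and the toy 32x32 glyph below are all hypothetical.

```python
import numpy as np

def fragment(glyph: np.ndarray, top: int, left: int, h: int, w: int) -> np.ndarray:
    """Return a copy of a binary glyph image with a rectangular region erased.

    A CNN trained on such damaged copies (with the undamaged character as the
    label) learns to classify partially destroyed characters.
    """
    damaged = glyph.copy()
    damaged[top:top + h, left:left + w] = 0  # erase the "broken" tablet area
    return damaged

# Toy stand-in for one rendered cuneiform glyph: a 32x32 filled bitmap.
glyph = np.ones((32, 32), dtype=np.uint8)
damaged = fragment(glyph, top=8, left=8, h=16, w=16)
removed = int(glyph.sum() - damaged.sum())  # 16*16 = 256 pixels erased
```

In a real setup this would be applied to each of the 921 rendered characters with several damage patterns, and the (damaged image, character label) pairs would feed the CNN.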
Acapella-based music generation with sequential models utilizing discrete cosine transform Saputra, Julian; Prasetiadi, Agi; Kresna, Iqsyahiro
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3371-3380

Abstract

Writing the instrumental accompaniment for a song's vocals depends on mood and on the composer's creativity. Models created by other researchers have restrictions: they are limited to musical instrument digital interface (MIDI) files and rely on recurrent neural networks (RNN) or Transformers for the recursive generation of musical notes. This research offers the world's first model capable of automatically generating instrumental accompaniment for human vocal sounds. The model accepts three types of sound input: short input, combed input, and frequency input based on the discrete cosine transform (DCT). By combining sequential models such as the autoencoder and the gated recurrent unit (GRU), we evaluate the performance of the resulting model in terms of loss and creativity. The best model achieved an average loss of 0.02993620155. In hearing tests, the sound output in the 0-1,600 Hz frequency range is clearly audible and the tones are quite harmonious. The model has the potential for further development in future sound-processing research.
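The DCT-based frequency input mentioned in the abstract can be illustrated with an orthonormal DCT-II transform of a short audio frame. This is a generic sketch of the transform itself, not the paper's model; the frame length, the toy sinusoid, and the matrix formulation are assumptions made for illustration.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (n x n): rows are cosine basis vectors."""
    k = np.arange(n)[:, None]          # frequency index
    m = np.arange(n)[None, :]          # sample index
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)            # scale DC row so C @ C.T == I
    return c

n = 64
C = dct_matrix(n)
frame = np.sin(2 * np.pi * 5 * np.arange(n) / n)  # toy vocal frame
coeffs = C @ frame        # frequency-domain view a sequential model could consume
recovered = C.T @ coeffs  # inverse DCT: orthonormality makes it the transpose
```

Because the matrix is orthonormal, the transform is exactly invertible, which is what lets a generator work in the DCT domain and still produce time-domain audio at the end.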
Dental caries detection using faster region-based convolutional neural network with residual network Lanyak, Andre Citro Febriliyan; Prasetiadi, Agi; Widodo, Haris Budi; Ghani, Muhammad Hisyam; Athallah, Abiyan
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp2027-2035

Abstract

Dental caries was the most prevalent dental disease in the world as of 2022. Caries can be arrested through early detection of patients via efficient screening. Previous methods used to detect caries include the single shot multibox detector (SSD), the faster region-based convolutional neural network (Faster R-CNN), and you only look once (YOLO). This research aims to develop accurate dental caries detection using Faster R-CNN. Starting from a dataset scraped from the internet, an original set of 81 base images was created, augmented to a total of 486 images, and annotated by dental health experts from Jenderal Soedirman University. Transfer learning with pre-trained Faster R-CNN residual network (ResNet)-50 and ResNet-101 models is used to detect and localize dental caries. The Faster R-CNN ResNet-50 model trained with the Adam optimizer produces a mean average precision (mAP) of 0.213, while the momentum optimizer yields a mAP of 0.177. The Faster R-CNN ResNet-101 model trained with the Adam optimizer produces a mAP of 0.192, while the momentum optimizer yields a mAP of 0.004. The models trained on the dataset showed satisfactory results in detecting dental caries, especially ResNet-50 with the Adam optimizer.
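The mAP scores reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that matching criterion, assuming the common (x1, y1, x2, y2) box convention rather than anything specific to the paper:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection typically counts as a true positive for mAP when its IoU with
    a ground-truth box exceeds a threshold such as 0.5.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Partially overlapping boxes: intersection 25, union 175, IoU = 1/7.
overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

Averaging precision over recall levels and over classes at a chosen IoU threshold gives the mAP figures the abstract compares across optimizers and backbones.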