Articles

Found 8 Documents
Tap For Battle: Perancangan Casual Game Pada Smartphone Android (Designing a Casual Game for Android Smartphones)
Chowanda, Andry; Prabowo, Benard H.; Iglesias, Glen; Diansari, Marsella
ComTech: Computer, Mathematics and Engineering Applications Vol 5, No 2 (2014): ComTech
Publisher : Bina Nusantara University

DOI: 10.21512/comtech.v5i2.2187

Abstract

Smartphones have become a necessity. Almost everyone uses a smartphone in a variety of activities; both young and old make use of this technology for purposes ranging from work and schoolwork to entertainment. The purpose of this research is to build a casual-action game with a war theme. The game is built for Android smartphones with multi-touch screen capability. The research methods used are data collection and analysis, including user analysis with a questionnaire. Furthermore, the IMSDD method is applied in the game design and development phase, covering system requirement analysis, system design, system implementation, and finally system evaluation. In this research, we conclude that 83.9% of participants enjoyed the game with the touch screen as the game control.
The Development of Indoor Object Recognition Tool for People with Low Vision and Blindness
Sutoyo, Rhio; Chowanda, Andry
ComTech: Computer, Mathematics and Engineering Applications Vol 8, No 2 (2017): ComTech
Publisher : Bina Nusantara University

DOI: 10.21512/comtech.v8i2.3763

Abstract

The purpose of this research was to develop methods and algorithms that could serve as the underlying base for an object recognition tool. The method implemented in this research comprised initial problem identification, methods and algorithms testing and development, image database modeling, system development, and training and testing. As a result, the system performs with 93.46% accuracy for indoor object recognition. Even though the system achieves relatively high accuracy in recognizing objects, it is still limited to single-object detection and is not able to recognize objects in real time.
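The abstract does not name the recognition algorithm, so the following is only a rough sketch of the single-object pipeline it describes, classifying one image with a pretrained CNN. The ResNet-18 backbone, the class list, and the replaced classification head are illustrative assumptions, not the paper's method.

```python
# A minimal sketch of single-object indoor recognition with a pretrained CNN.
# The backbone, class list, and image path are hypothetical stand-ins.
import torch
from torchvision import models, transforms
from PIL import Image

INDOOR_CLASSES = ["chair", "table", "door", "trash_bin"]  # hypothetical labels

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained backbone with the final layer resized to the indoor
# classes; in practice this head would be fine-tuned on an image database.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(INDOOR_CLASSES))
model.eval()

def recognize(path: str) -> str:
    """Classify one image; mirrors the single-object limitation noted above."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return INDOOR_CLASSES[int(logits.argmax(dim=1))]
```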
Clickbait Classification Model on Online News with Semantic Similarity Calculation Between News Title and Content
Ahmadi, Hero Akbar; Chowanda, Andry
Building of Informatics, Technology and Science (BITS) Vol 4 No 4 (2023): March 2023
Publisher : Forum Kerjasama Pendidikan Tinggi

DOI: 10.47065/bits.v4i4.3030

Abstract

Clickbait is a sensational title that makes us click internet links to an article, image, or video. Online content providers use clickbait to gain user traffic, which increases income from the ads placed on their pages. To attract ever more traffic, online content providers write sensational and hyperbolic titles that can be misleading and fail to tell the whole story. This can give us, the internet consumers, a wrong perspective and half-truths. Nowadays, clickbait titles are worse than ever: modern clickbait titles are often neither obviously hyperbolic nor ambiguous, and can be very hard to identify. This paper aims to classify clickbait titles, to help people identify clickbait and stop sharing online content with clickbait and misleading titles. The model classifies clickbait by calculating the semantic similarity between the article title and a summary of the article content. The article content is summarized by a T5 (Text-to-Text Transfer Transformer) model. IndoBERT is then used to calculate a semantic similarity score between the generated summary and the article title. The article title, content, summary, and semantic similarity score are used for clickbait classification with various algorithms. The results show that adding the article content alongside the article title in the classification process improves the F1-score by 7% when classified with IndoBERT. In future research, this model can be integrated with other applications, such as a Twitter or Telegram bot, to warn users every time they consume online content with a clickbait title. Thus, it can prevent online communities from sharing misleading information caused by clickbait.
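A minimal sketch of the title-vs-summary similarity feature described above, assuming Hugging Face transformers: the T5 checkpoint name is a placeholder, and the IndoBERT checkpoint and mean-pooled embedding are illustrative choices, not necessarily the paper's exact setup.

```python
# Sketch: T5 summarizes the article body, IndoBERT embeds title and summary,
# and cosine similarity becomes a feature for the clickbait classifier.
import torch
from transformers import AutoModel, AutoTokenizer, T5ForConditionalGeneration, T5Tokenizer

T5_NAME = "t5-base"                           # placeholder summarization checkpoint
BERT_NAME = "indobenchmark/indobert-base-p1"  # an IndoBERT checkpoint (assumed choice)

t5_tok = T5Tokenizer.from_pretrained(T5_NAME)
t5 = T5ForConditionalGeneration.from_pretrained(T5_NAME)
bert_tok = AutoTokenizer.from_pretrained(BERT_NAME)
bert = AutoModel.from_pretrained(BERT_NAME)

def summarize(content: str) -> str:
    ids = t5_tok("summarize: " + content, return_tensors="pt",
                 truncation=True, max_length=512).input_ids
    out = t5.generate(ids, max_length=80, num_beams=4)
    return t5_tok.decode(out[0], skip_special_tokens=True)

def embed(text: str) -> torch.Tensor:
    enc = bert_tok(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)        # mean pooling over tokens

def title_summary_similarity(title: str, content: str) -> float:
    summary = summarize(content)
    return torch.cosine_similarity(embed(title), embed(summary), dim=0).item()
```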
Object Detection Model for Web-Based Physical Distancing Detector Using Deep Learning
Chowanda, Andry; Sariputra, Ananda Kevin Refaldo; Prananto, Ricardo Gunawan
CommIT (Communication and Information Technology) Journal Vol. 18 No. 1 (2024): CommIT Journal
Publisher : Bina Nusantara University

DOI: 10.21512/commit.v18i1.8669

Abstract

The pandemic has changed the way people interact with each other in public settings. As a result, social distancing has been implemented in public society to reduce the virus's spread. Automatically detecting social distancing is paramount in reducing menial manual tasks. There are several methods to detect social distance in public, and one is through a surveillance camera. However, detecting social distance through a camera is not an easy task: problems such as lighting, occlusion, and camera resolution can occur during detection. The research aims to develop a physical distancing detector system adjusted to work with Indonesian rules and conditions, especially in Jakarta, using deep learning (i.e., the YOLOv4 architecture with the Darknet framework) and the CrowdHuman dataset. The detection is done by reading the source video, detecting the distance between individuals, and determining crowds of individuals close to each other. To accomplish the detection, training is done with CSPDarknet53 and VGG16 backbones in the YOLOv4 and YOLOv4 Tiny architectures using various hyperparameters. Several explorations are made in the research to find the best combination of architectures and to fine-tune them. The research successfully detects crowds at the 16th training run, with an mAP50 of 71.59% (74.04% AP50) and 16.2 Frames per Second (FPS) displayed on the web. The input size is essential for determining the model's accuracy and speed. The model can be implemented in a web-based application.
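A small sketch of the distance-and-crowd step that follows person detection. The boxes are assumed to come from the YOLOv4 detector, and the pixel threshold is a made-up stand-in for the rule the paper calibrates to Jakarta conditions.

```python
# Group detected people into "crowds" when their pairwise distance falls
# below a threshold (connected components over the too-close relation).
from itertools import combinations

DIST_THRESHOLD = 120.0  # pixels; hypothetical stand-in for the calibrated rule

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def too_close(a, b):
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < DIST_THRESHOLD

def crowd_groups(boxes):
    """Return groups of detections that are mutually connected by closeness."""
    parent = list(range(len(boxes)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(boxes)), 2):
        if too_close(boxes[i], boxes[j]):
            parent[find(i)] = find(j)

    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]

# Example: three people, the first two close together -> prints [[0, 1]]
print(crowd_groups([(0, 0, 50, 100), (60, 0, 110, 100), (500, 0, 550, 100)]))
```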
Modeling Emotion Recognition System from Facial Images Using Convolutional Neural Networks
Kusno, Jasen Wanardi; Chowanda, Andry
CommIT (Communication and Information Technology) Journal Vol. 18 No. 2 (2024): CommIT Journal
Publisher : Bina Nusantara University

DOI: 10.21512/commit.v18i2.8873

Abstract

Emotion classification is the process of identifying human emotions. Applying technology to help people with emotion classification is a relatively popular research field. Until now, most of the work has been done to automate the recognition of facial cues (e.g., expressions) from several modalities (e.g., image, video, audio, and text). Deep learning architectures such as Convolutional Neural Networks (CNN) demonstrate promising results for emotion recognition. The research aims to build a CNN model while improving accuracy and performance. Two models with hyperparameter tuning are proposed in the research and compared against existing architectures on two datasets. The two datasets used are Facial Expression Recognition 2013 (FER2013) and Extended Cohn-Kanade (CK+), both of which are commonly used in FER. In addition, the proposed model is compared with a previous model using the same settings and datasets. The results show that the proposed models gain higher accuracy with the CK+ dataset, while some models trained on the FER2013 dataset have lower accuracy than in previous research. The model trained with the FER2013 dataset has lower accuracy because of overfitting, whereas the model trained with CK+ shows no overfitting problem. The research mainly explores the CNN model due to limited resources and time.
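For concreteness, a minimal CNN of the kind described, sized for FER2013's 48x48 grayscale faces and seven emotion classes; the layer widths and dropout rate are illustrative choices, not the paper's tuned hyperparameters.

```python
# A small facial-expression CNN: three conv/pool stages, then a dropout-
# regularized classifier head over the 7 FER2013 emotion classes.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),  # regularization against the overfitting noted above
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
print(model(torch.randn(1, 1, 48, 48)).shape)  # torch.Size([1, 7])
```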
Hyperparameter tuning for deep learning model used in multimodal emotion recognition data
Widardo, Fernandi; Chowanda, Andry
Bulletin of Electrical Engineering and Informatics Vol 14, No 1: February 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i1.8707

Abstract

This study attempts to address overfitting, a frequent problem with multimodal emotion recognition models. It proposes model optimization using various hyperparameter approaches, namely a dropout layer, L2 kernel regularization, batch normalization, and a learning rate schedule, and identifies which approach has the most impact in protecting the model from overfitting. For the emotion dataset, this research utilizes the interactive emotional dyadic motion capture (IEMOCAP) dataset, using the motion capture and speech audio data modalities. The models used in this experiment are a convolutional neural network (CNN) for the motion capture data and a CNN-bidirectional long short-term memory network (CNN-BiLSTM) for the audio data. This study also applies a smaller batch size in the experiment to accommodate limited computing resources. The experiment shows that optimization through hyperparameter tuning raises the validation accuracy to 73.67% and the F1-score to 73% on the audio and motion capture data, respectively, from this research's base model, and is competitive with models from other research. It is hoped that the optimization results in this study will be useful for future emotion recognition research, especially for those who encounter overfitting problems.
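A sketch showing how the four approaches compared in the study map onto a toy PyTorch model; the concrete values (dropout rate, weight decay, schedule step) are placeholders rather than the paper's tuned settings.

```python
# The four anti-overfitting knobs: dropout, L2 regularization (weight decay),
# batch normalization, and a learning-rate schedule, on a toy model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.BatchNorm1d(64),  # batch normalization
    nn.ReLU(),
    nn.Dropout(0.3),     # dropout layer
    nn.Linear(64, 4),
)

# L2 kernel regularization expressed as weight decay in the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Learning-rate schedule: halve the learning rate every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

# Dummy batch standing in for the audio or motion-capture features.
x, y = torch.randn(32, 128), torch.randint(0, 4, (32,))
loss_fn = nn.CrossEntropyLoss()
for epoch in range(30):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the schedule once per epoch
```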
Efficient object detection for augmented reality based english learning with YOLOv8 optimization
Putra, Arya Krisna; Tambunan, Fiqri Ramadhan; Ndruru, Samson; Chowanda, Andry
Indonesian Journal of Electrical Engineering and Computer Science Vol 39, No 2: August 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v39.i2.pp1189-1197

Abstract

This study develops a mobile-based augmented reality (AR) application with machine learning to help elementary school students learn basic English vocabulary. The application integrates an optimized YOLOv8 object detection model designed to recognize 20 common classroom objects in real time. The model optimization involves replacing standard Conv layers with GhostConv and the C2f block with the C2fCIB block, which significantly improves computational efficiency. Evaluation results show the optimized model reduces the parameters by 22.003% and decreases the file size from 6.2 MB to 4.9 MB. The model's performance improved, achieving a precision of 83.7%, a recall of 73.5%, and a mean Average Precision (mAP) of 81.4%. The model was integrated into the Unity platform via the Barracuda library, enabling real-time detection and interactive display of 3D objects. The application is also complete with English text, translations, example sentences, and audio pronunciation. 3D objects representing classroom vocabulary were specifically created to support AR-based learning. Performance testing on a Samsung A14 showed an improved frame rate of 6–12 FPS compared to the original model's 5–10 FPS. These results demonstrate that the optimized YOLO model effectively integrates with AR technology, creating a more interactive and enjoyable vocabulary learning experience.
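A sketch of how the described swap can be expressed with the ultralytics package, where GhostConv and C2fCIB ship as built-in modules in recent releases; the custom YAML and dataset config names below are hypothetical.

```python
# The Conv -> GhostConv and C2f -> C2fCIB swap is made by editing a model
# YAML; "yolov8n-ghost-c2fcib.yaml" (hypothetical) would be a copy of
# yolov8n.yaml with Conv entries replaced by GhostConv and C2f by C2fCIB.
from ultralytics import YOLO

model = YOLO("yolov8n-ghost-c2fcib.yaml")  # hypothetical custom config

# Compare parameter counts against the stock model
# (the abstract reports roughly 22% fewer parameters).
baseline = YOLO("yolov8n.yaml")
count = lambda m: sum(p.numel() for p in m.model.parameters())
print(f"baseline: {count(baseline):,}  optimized: {count(model):,}")

# Train on the 20 classroom-object classes (dataset config is hypothetical).
model.train(data="classroom20.yaml", epochs=100, imgsz=640)
```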
Real-time recognition of Indonesian sign language SIBI using CNN-SVM model combination
Santika, Satriadi Putra; Benhard, Stefanus; Arifin, Yulyani; Chowanda, Andry
Indonesian Journal of Electrical Engineering and Computer Science Vol 39, No 2: August 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v39.i2.pp1198-1210

Abstract

Real-time Sistem Isyarat Bahasa Indonesia (SIBI) sign language recognition plays a crucial role in improving accessibility for individuals with hearing and speech impairments. Despite advancements in SIBI recognition research, challenges remain in ensuring model stability and accuracy in real-time settings, particularly in handling gesture variations and classification inconsistencies. This study addresses these challenges by developing a combined convolutional neural network-support vector machine (CNN-SVM) model, integrating MediaPipe for hand coordinate extraction, a CNN for feature extraction, and an SVM for classification. To improve generalization and prevent overfitting, data augmentation is applied to expand the dataset. The model's performance is further enhanced through hyperparameter optimization (HPO) and post-processing techniques such as multi-window majority voting (MWMV) and SymSpell. Experimental results show that the CNN-SVM model trained on augmented data with HPO achieves 91% testing accuracy, outperforming both standalone CNN and SVM models. Furthermore, MWMV improves recognition stability, while SymSpell corrects spelling errors, ensuring more meaningful outputs. The system is integrated with OpenCV for real-time recognition, but current deployment remains limited to local execution. Future work will focus on developing lightweight models for web-based and mobile applications, making the system more accessible and scalable.
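A minimal sketch of the MWMV post-processing step: per-frame letter predictions from the CNN-SVM are smoothed by majority vote over a sliding window, damping the frame-to-frame instability the abstract mentions. The window size is an illustrative choice.

```python
# Multi-window majority voting over a stream of per-frame predictions.
from collections import Counter, deque

def mwmv(frame_predictions, window: int = 9):
    """Yield one smoothed label per frame once the window is full."""
    buf = deque(maxlen=window)
    for label in frame_predictions:
        buf.append(label)
        if len(buf) == window:
            yield Counter(buf).most_common(1)[0][0]

# Example: a noisy stream of per-frame letters settles on 'A' then 'B'.
stream = list("AAABAAAAA" "BBABBBBBB")
print("".join(mwmv(stream)))
```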