p-Index (2021 - 2026): 14.203
This author has published in the following journals:
International Journal of Informatics and Communication Technology (IJ-ICT); International Journal of Advances in Applied Sciences; TEKNIK INFORMATIKA; Techno.Com: Jurnal Teknologi Informasi; Pixel: Jurnal Ilmiah Komputer Grafis; Jurnal Teknologi Informasi dan Ilmu Komputer; Jurnal Transformatika; JUITA: Jurnal Informatika; Scientific Journal of Informatics; InfoTekJar: Jurnal Nasional Informatika dan Teknologi Jaringan; Fountain of Informatics Journal; Sinkron: Jurnal dan Penelitian Teknik Informatika; Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi); SemanTIK: Teknik Informasi; RABIT: Jurnal Teknologi dan Sistem Informasi Univrab; INTENSIF: Jurnal Ilmiah Penelitian dan Penerapan Teknologi Sistem Informasi; JURNAL MEDIA INFORMATIKA BUDIDARMA; CogITo Smart Journal; JTERA (Jurnal Teknologi Rekayasa); Indonesian Journal of Artificial Intelligence and Data Mining; INOVTEK Polbeng - Seri Informatika; JITK (Jurnal Ilmu Pengetahuan dan Komputer); JURNAL REKAYASA TEKNOLOGI INFORMASI; JURNAL TEKNIK INFORMATIKA DAN SISTEM INFORMASI; Jurnal Teknoinfo; ILKOM Jurnal Ilmiah; Voice Of Informatics; MATRIK: Jurnal Manajemen, Teknik Informatika, dan Rekayasa Komputer; JURNAL TEKNOLOGI DAN OPEN SOURCE; Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI); Digital Zone: Jurnal Teknologi Informasi dan Komunikasi; JURIKOM (Jurnal Riset Komputer); JURTEKSI; ComTech: Computer, Mathematics and Engineering Applications; CSRID (Computer Science Research and Its Development Journal); JOISIE (Journal Of Information Systems And Informatics Engineering); EDUMATIC: Jurnal Pendidikan Informatika; METIK JURNAL; Jurnal Ilmiah Ilmu Komputer Fakultas Ilmu Komputer Universitas Al Asyariah Mandar; Jurnal Manajemen Informatika dan Sistem Informasi; Jurnal Informatika dan Rekayasa Elektronik; Jurnal Sistem informasi dan informatika (SIMIKA); Zonasi: Jurnal Sistem Informasi; Journal of Applied Engineering and Technological Science (JAETS); JSR: Jaringan Sistem Informasi Robotik; Sains, Aplikasi, Komputasi dan Teknologi Informasi; Grouper: Jurnal Ilmiah Perikanan; JISA (Jurnal Informatika dan Sains); JSES: Journal of Sport and Exercise Science; Aiti: Jurnal Teknologi Informasi; Jurnal Sistem Informasi dan Sistem Komputer; Journal of Applied Data Sciences; Jurnal J-PEMAS; Decode: Jurnal Pendidikan Teknologi Informasi; Ikhtisar: Jurnal Pengetahuan Islam; Jurnal Saintekom: Sains, Teknologi, Komputer dan Manajemen; Formosa Journal of Science and Technology (FJST); Prosiding Seminar Nasional Sisfotek (Sistem Informasi dan Teknologi Informasi); J-COSCIS: Journal of Computer Science Community Service; JAIA - Journal of Artificial Intelligence and Applications; Malcom: Indonesian Journal of Machine Learning and Computer Science; SATIN - Sains dan Teknologi Informasi; Bulletin of Social Informatics Theory and Application; Jurnal Sains, Nalar, dan Aplikasi Teknologi Informasi; Jurnal Masyarakat Berdikari dan Berkarya (MARDIKA); The Indonesian Journal of Computer Science; Advance Sustainable Science, Engineering and Technology (ASSET); Indonesian Journal of Health Research Innovation
Articles

Found 9 Documents
Journal: Journal of Applied Data Sciences

Early Stopping on CNN-LSTM Development to Improve Classification Performance
Anam, M. Khairul; Defit, Sarjon; Haviluddin, Haviluddin; Efrizoni, Lusiana; Firdaus, Muhammad Bambang
Journal of Applied Data Sciences Vol 5, No 3: SEPTEMBER 2024
Publisher : Bright Publisher

DOI: 10.47738/jads.v5i3.312

Abstract

Currently, CNN-LSTM has been widely developed through changes to its architecture and other modifications that improve the performance of this hybrid model. However, some studies pay too little attention to overfitting, even though it must be prevented: an overfitted model can show good accuracy initially but misclassifies when new data arrives. Therefore, extra measures are necessary to prevent overfitting. This research uses dropout together with early stopping. The test dataset is sourced from Twitter, and the research also varies the activation functions within each architecture. The developed architecture consists of CNN, MaxPooling1D, Dropout, LSTM, Dense, Dropout, Dense, and Softmax as the output. Architecture A uses the default activations, ReLU for the CNN and Tanh for the LSTM; in Architecture B all activations are replaced by Tanh, and in Architecture C they are entirely replaced by ReLU. The research also performed hyperparameter tuning over the number of layers, batch size, and learning rate. This study found that dropout and early stopping can increase accuracy to 85% and prevent overfitting. The best architecture uses ReLU throughout, as it demonstrates advantages in computational efficiency, convergence speed, the ability to capture relevant patterns, and resistance to noise.
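The patience-based early stopping described above can be sketched in a few lines. This is an illustrative stand-in for the Keras `EarlyStopping` callback the authors would have used, not their actual configuration; the function name and loss values are invented for the example.

```python
# Minimal sketch of patience-based early stopping: halt training once the
# validation loss has failed to improve for `patience` consecutive epochs.

def early_stopping_index(val_losses, patience=3):
    """Return the epoch index at which training would stop."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch          # new best: reset the patience window
        elif epoch - best_epoch >= patience:
            return epoch                # no improvement for `patience` epochs
    return len(val_losses) - 1          # trained to completion

# Validation loss improves for three epochs, then plateaus.
losses = [0.9, 0.7, 0.6, 0.65, 0.66, 0.67, 0.68]
stop = early_stopping_index(losses, patience=3)
```

With `patience=3`, training stops at epoch 5, three epochs after the best loss at epoch 2.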
Machine Learning Algorithm Optimization using Stacking Technique for Graduation Prediction
Herianto, Herianto; Kurniawan, Bambang; Hartomi, Zupri Henra; Irawan, Yuda; Anam, M Khairul
Journal of Applied Data Sciences Vol 5, No 3: SEPTEMBER 2024
Publisher : Bright Publisher

DOI: 10.47738/jads.v5i3.316

Abstract

Graduating on time is crucial for academic success, impacting time, costs, and education quality. Hang Tuah University Pekanbaru (UHTP) is currently struggling to meet its goal of achieving a 75% on-time graduation rate. This study introduces an innovative approach using machine learning techniques, particularly ensemble learning with Stacking Machine Learning Optuna SMOTE (SMLOS), to address this issue. Our primary objective is to enhance data classification accuracy to predict student graduation timelines effectively. We employ algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (C4.5), Random Forest (RF), and Naive Bayes (NB). These were combined with meta-models, including Logistic Regression (LR), Adaboost, XGBoost, LR+Adaboost, and LR+XGBoost, to create a robust prediction model. To address class imbalance, we applied the Synthetic Minority Over-sampling Technique (SMOTE) and utilized Optuna for hyperparameter tuning. The findings reveal that SMLOS with the Adaboost meta-model achieved the highest accuracy of 95.50%, surpassing previous models' performances, which averaged around 85%. This contribution demonstrates the effectiveness of using SMOTE for class imbalance and Optuna for hyperparameter optimization. Integrating this model into UHTP's academic information system facilitates real-time monitoring and analysis of student data, offering a novel solution for promoting a Smart Campus through more accurate student performance predictions. This technique is not only beneficial for predicting student graduation but can also be applied to various machine learning tasks to improve data classification accuracy and stability.
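The stacking arrangement described above (several base learners whose predictions feed a meta-model) can be sketched with scikit-learn. This is a minimal sketch on a synthetic dataset standing in for the UHTP student records; the SMOTE balancing and Optuna tuning steps from the paper are omitted here.

```python
# Stacking sketch: base learners' predictions are combined by a
# Logistic Regression meta-model, mirroring the SMLOS base/meta layout.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the graduation dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-model
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

In the paper the meta-model is varied (LR, AdaBoost, XGBoost, and combinations); swapping `final_estimator` reproduces that comparison.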
The Development of Stacking Techniques in Machine Learning for Breast Cancer Detection
Van FC, Lucky Lhaura; Anam, M. Khairul; Bukhori, Saiful; Mahamad, Abd Kadir; Saon, Sharifah; Nyoto, Rebecca La Volla
Journal of Applied Data Sciences Vol 6, No 1: JANUARY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i1.416

Abstract

This study addresses the challenges of accurately detecting breast cancer using machine learning (ML) models, particularly when handling imbalanced datasets that often cause model bias toward the majority class. To tackle this, the Synthetic Minority Over-sampling Technique (SMOTE) was applied not only to balance the class distribution but also to improve the model's sensitivity in detecting malignant tumors, which are underrepresented in the dataset. SMOTE was effective in generating synthetic samples for the minority class without introducing overfitting, enhancing the model's generalization on unseen data. Additionally, AdaBoost was employed as the meta model in the stacking framework, chosen for its ability to focus on misclassified instances during training, thereby boosting the overall performance of the combined base models. The study evaluates several models and combinations: K-Nearest Neighbors (KNN) + SMOTE achieved accuracy, precision, recall, and F1-score of 97%, and C4.5 + Hyperparameter Tuning + SMOTE reached 95% on all metrics. The stacking model with Logistic Regression (LR) as the meta model and SMOTE also performed strongly, with accuracy, precision, recall, and F1-score all at 97%. The best result was obtained using the combination of Stacking AdaBoost + Hyperparameter Tuning + SMOTE, reaching an accuracy of 98%. These findings highlight the effectiveness of combining SMOTE with stacking techniques to develop robust predictive models for medical applications. The novelty of this study lies in the integration of SMOTE and advanced stacking methods, particularly using AdaBoost and Logistic Regression, to address the issue of class imbalance in medical datasets. Future work will explore deploying this model in clinical settings for accurate and timely breast cancer detection.
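SMOTE's core mechanism, generating a synthetic minority sample by interpolating between a minority point and one of its minority-class nearest neighbours, can be hand-rolled in a few lines. This sketch only illustrates the mechanism; the study would use a library implementation such as imbalanced-learn's `SMOTE`, and the function name and data here are invented for the example.

```python
# Hand-rolled SMOTE sketch: new minority samples lie on the line segment
# between a random minority point and one of its k nearest minority neighbours.
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate `n_new` synthetic samples from the minority set X_min."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)   # distances to all points
        neighbours = np.argsort(d)[1:k + 1]            # k nearest, excluding self
        j = rng.choice(neighbours)
        gap = rng.random()                             # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_points = smote_oversample(minority, n_new=4, rng=42)
```

Because each synthetic point is a convex combination of two existing minority points, all generated samples stay inside the minority class's region of feature space.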
Improved Performance of Hybrid GRU-BiLSTM for Detection Emotion on Twitter Dataset
Anam, M. Khairul; Munawir, Munawir; Efrizoni, Lusiana; Fadillah, Nurul; Agustin, Wirta; Syahputra, Irwanda; Lestari, Tri Putri; Firdaus, Muhammad Bambang; Lathifah, Lathifah; Sari, Atalya Kurnia
Journal of Applied Data Sciences Vol 6, No 1: JANUARY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i1.459

Abstract

This study addresses emotion detection challenges in tweets, focusing on contextual understanding and class imbalance. A novel hybrid deep learning architecture combining GRU-BiLSTM with SMOTE is proposed to enhance classification performance on an Israel-Palestine conflict dataset. The dataset contains 40,000 tweets labeled with six emotions: anger, disgust, fear, joy, sadness, and surprise. SMOTE effectively balances the dataset, improving model fairness in detecting minority classes. Experimental results show that the GRU-BiLSTM hybrid with an 80:20 data split achieves the highest accuracy of 89%, surpassing BiLSTM alone, which obtained 88%, and other state-of-the-art models. Notably, the proposed model delivers significant improvement in detecting the emotion of joy (recall: 0.87, F1-score: 0.86). In contrast, the surprise category remains challenging (recall: 0.24). Compared to existing research, this study highlights the effectiveness of combining SMOTE and hybrid GRU-BiLSTM, outperforming models such as CNN, GRU, and LSTM on similar datasets. The incorporation of GloVe embeddings enhances contextual word representations, enabling nuanced emotion detection even in sarcastic or ambiguous texts. The novelty lies in addressing class imbalance systematically with SMOTE and leveraging GRU-BiLSTM's complementary strengths, yielding superior performance metrics. This approach contributes to advancing emotion detection tasks, especially in conflict-related social media data, by offering a robust, context-sensitive, and balanced classification method.
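The per-emotion evaluation reported above (e.g. joy at recall 0.87 versus surprise at 0.24) comes from computing recall and F1 per class rather than a single average. A minimal sketch with scikit-learn, using invented toy labels rather than the study's data:

```python
# Per-class recall/F1: average=None returns one score per label, which is how
# strong performance on "joy" can coexist with weak performance on "surprise".
from sklearn.metrics import f1_score, recall_score

labels = ["joy", "anger", "surprise"]
y_true = ["joy", "joy", "anger", "surprise", "joy", "anger", "surprise"]
y_pred = ["joy", "joy", "anger", "joy",      "joy", "anger", "surprise"]

recalls = recall_score(y_true, y_pred, labels=labels, average=None)
f1s = f1_score(y_true, y_pred, labels=labels, average=None)
per_class = dict(zip(labels, recalls))
```

Here "joy" reaches recall 1.0 while "surprise" sits at 0.5, because one surprise tweet was misread as joy, the same failure mode the abstract reports for the hardest class.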
Optimizing Sentiment Analysis on Imbalanced Hotel Review Data Using SMOTE and Ensemble Machine Learning Techniques
Putra, Pandu Pratama; Anam, M. Khairul; Chan, Andi Supriadi; Hadi, Abrar; Hendri, Nofri; Masnur, Alkadri
Journal of Applied Data Sciences Vol 6, No 2: MAY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i2.618

Abstract

This research addresses the challenge of imbalanced sentiment classes in hotel review datasets obtained from Traveloka by integrating SMOTE (Synthetic Minority Oversampling Technique) with ensemble machine learning methods. The study aimed to enhance the classification of Positive, Negative, and Neutral sentiments in customer reviews. Data preprocessing techniques, including tokenization, stemming, and stopword removal, prepared the textual data for analysis. Various machine learning models—CART, KNN, Naive Bayes, and Random Forest—were evaluated individually and in ensemble configurations such as Bagging, Stacking, Soft Voting, and Hard Voting. The Stacking ensemble approach, utilizing Logistic Regression as a meta-classifier, demonstrated superior performance with an accuracy, precision, recall, and F1-score of 88%, outperforming Bagging (86%), Hard Voting (84%), and Soft Voting (81%). The findings highlight the effectiveness of SMOTE in balancing sentiment classes, particularly improving the classification of underrepresented Neutral and Negative categories. The novelty of this study lies in the comprehensive use of ensemble techniques combined with SMOTE, which significantly enhanced prediction stability and accuracy compared to previous approaches. These results provide valuable insights into leveraging advanced machine learning techniques for sentiment analysis, offering practical implications for improving customer experience and service quality in the hospitality industry.
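The soft- versus hard-voting comparison above can be sketched with scikit-learn's `VotingClassifier`: hard voting takes the majority class label, while soft voting averages predicted probabilities. The synthetic data below stands in for the hotel-review features; estimator choices are illustrative.

```python
# Hard voting = majority of predicted labels; soft voting = argmax of the
# averaged class probabilities across the base estimators.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Three-class stand-in for Positive / Negative / Neutral reviews.
X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

estimators = [("lr", LogisticRegression(max_iter=1000)),
              ("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
              ("nb", GaussianNB())]

hard = VotingClassifier(estimators, voting="hard").fit(X_tr, y_tr)
soft = VotingClassifier(estimators, voting="soft").fit(X_tr, y_tr)
acc_hard, acc_soft = hard.score(X_te, y_te), soft.score(X_te, y_te)
```

Stacking (the study's winner) goes one step further than either voting scheme by training a meta-classifier on the base models' outputs instead of combining them with a fixed rule.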
Enhancing the Performance of Machine Learning Algorithm for Intent Sentiment Analysis on Village Fund Topic
Anam, M. Khairul; Putra, Pandu Pratama; Malik, Rio Andika; Karfindo, Karfindo; Putra, Teri Ade; Elva, Yesri; Mahessya, Raja Ayu; Firdaus, Muhammad Bambang; Ikhsan, Ikhsan; Gunawan, Chichi Rizka
Journal of Applied Data Sciences Vol 6, No 2: MAY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i2.637

Abstract

This study explores the implementation of Intent Sentiment Analysis on Twitter data related to the Village Fund program, leveraging Multinomial Naïve Bayes (MNB) and enhancing it with Synthetic Minority Over-sampling Technique (SMOTE) and XGBoost (XGB). The analysis categorizes tweets into six labels: Optimistic, Pessimistic, Advice, Satire, Appreciation, and No Intent. Initially, the MNB model achieved an accuracy of 67% on a 90:10 data split. By applying SMOTE, accuracy improved by 12%, reaching 89%. However, adding Chi-Square feature selection did not increase accuracy further. Incorporating XGB into the MNB+SMOTE model led to a 6% improvement, achieving a final accuracy of 95%. Comprehensive model evaluation revealed that the MNB+SMOTE+XGB model achieved 96% accuracy, 96% precision, 96% recall, and a 96% F1-score, with an AUC of 99%, categorizing it as excellent. These findings demonstrate that the combination of SMOTE for addressing class imbalance and XGBoost for boosting performance significantly enhances the MNB model's classification capabilities. The novelty lies in the integration of these techniques to improve intent sentiment classification for public opinion analysis on the Village Fund program. The results indicate that the majority of tweets labeled as "No Intent" reflect a lack of specific sentiment or actionable intent, providing valuable insights into public perception of the program.
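The Multinomial Naive Bayes baseline for intent classification can be sketched as a scikit-learn text pipeline. The tweets and intent labels below are invented placeholders for the Village Fund data, and the SMOTE and XGBoost enhancements from the paper are omitted.

```python
# MNB text-classification sketch: vectorize the tweets, then fit a
# Multinomial Naive Bayes classifier over the term weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["dana desa sangat membantu pembangunan",   # appreciation
         "semoga dana desa lebih transparan",       # advice
         "dana desa hanya janji kosong",            # pessimistic
         "terima kasih atas program dana desa"]     # appreciation
labels = ["Appreciation", "Advice", "Pessimistic", "Appreciation"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
pred = model.predict(["terima kasih dana desa membantu"])[0]
```

A new tweet sharing vocabulary with the appreciation examples is classified accordingly; in the paper, SMOTE then rebalances the six intent classes before XGBoost is layered on top of this baseline.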
ACLM Model: A CNN-LSTM and Machine Learning Approach for Analyzing Tourist Satisfaction to Improve Priority Tourism Services
Arsyah, Ulya Ilhami; Pratiwi, Mutiana; Fryonanda, Harfeby; Anam, M. Khairul; Munawir, Munawir
Journal of Applied Data Sciences Vol 6, No 4: December 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i4.974

Abstract

Tourist satisfaction is a key proxy for destination service quality, yet automatic sentiment analysis of online reviews still faces class imbalance, overfitting, and limited deployability. This study proposes ACLM, a hybrid sentiment classification pipeline that learns semantic and temporal features with a CNN-LSTM backbone and evaluates three classifier heads (Softmax, Logistic Regression, XGBoost) on a three-class corpus (neutral, satisfied, dissatisfied). The objective is to deliver an accurate and operational model for decision support in tourism services. The idea combines Word2Vec embeddings, a compact CNN for local patterns, an LSTM for sequence dependencies, and a training workflow with text cleaning, SMOTE-based balancing, and regularization to curb overfitting; outputs are exposed through a simple Streamlit interface. Results show that CNN-LSTM with a Softmax head attains accuracy 0.89, macro precision 0.89, macro recall 0.84, and macro F1 0.86, outperforming Logistic Regression (accuracy 0.87, macro precision 0.84, macro recall 0.82, macro F1 0.82) and XGBoost (accuracy 0.85, macro precision 0.80, macro recall 0.82, macro F1 0.80). The findings indicate that deep sequence features paired with a simple Softmax head provide the best tradeoff between accuracy and stability for three-way sentiment classification. The contribution is a reusable, end-to-end blueprint from preprocessing and balanced training to quantitative evaluation and an inference GUI, and the novelty lies in testing interchangeable classifier heads on a single CNN-LSTM feature extractor while explicitly addressing data imbalance and deployment constraints. The GUI is implemented using the highest-accuracy model, namely CNN-LSTM with Softmax.
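The "one feature extractor, interchangeable classifier heads" design can be sketched without a deep backbone: below, a PCA stage stands in for the CNN-LSTM feature extractor, and Logistic Regression and Gradient Boosting stand in for two of the evaluated heads. Everything here is an illustrative substitution, not the authors' implementation.

```python
# Fixed feature stage, swappable heads: each head is evaluated on the same
# extracted features, mirroring ACLM's Softmax/LR/XGBoost comparison.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Three-class stand-in for neutral / satisfied / dissatisfied reviews.
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=2)

heads = {"logreg": LogisticRegression(max_iter=1000),
         "gboost": GradientBoostingClassifier(random_state=2)}

scores = {}
for name, head in heads.items():
    pipe = make_pipeline(PCA(n_components=8), head)  # same extractor, new head
    scores[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
```

Keeping the extractor fixed isolates the head as the only varying component, which is what makes the paper's head-to-head accuracy comparison meaningful.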
MYCD: Integration of YOLO-CNN and DenseNet for Real-Time Road Damage Detection Based on Field Images
Yenni, Helda; Muzawi, Rometdo; Karpen, Karpen; Anam, M. Khairul; Kasaf, Michel; Hadi, Tjut Rizqi Maysyarah; Wahyuni, Dewi Sari
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1040

Abstract

Road damage such as cracks, potholes, and uneven surfaces poses serious risks to transportation safety, logistics efficiency, and maintenance budgeting in Indonesia. Manual inspection is time-consuming, labor-intensive, and prone to error, motivating the use of reliable computer vision solutions. This study proposes MYCD, a hybrid and mobile-ready architecture that combines the fast detection ability of YOLO with the dense feature reuse of DenseNet, enhanced by the Convolutional Block Attention Module (CBAM) for spatial and channel focus and Spatial Pyramid Pooling (SPP) for multi-scale context understanding. The system detects and classifies the severity of road damage into minor, moderate, and severe categories using images captured by standard cameras. MYCD was trained and validated on 1,120 field images using an 80/20 split to simulate realistic deployment. Validation achieved 64% accuracy, with the highest per-class precision of 0.72 for minor damage and mAP@0.5 = 0.677. The confusion matrix showed that most errors occurred in the moderate category because of visual similarity with minor and severe damage. Unlike earlier studies that extended YOLO with heavy backbones such as ResNet or EfficientNet, MYCD focuses on feature propagation (DenseNet), attention precision (CBAM), and multi-scale fusion (SPP) optimized for real-time operation on standard hardware. Efficiency profiling confirmed its deployability. After compression, the model size is 46.8 MB and it requires 3.7 GFLOPs per inference at 640×640 resolution. On a mid-range Android device (Snapdragon 778G, 8 GB RAM), MYCD runs at 19 frames per second with 1.2 GB peak memory. Compared with YOLOv8 WD (68 MB; 5.2 GFLOPs), MYCD reduces computation by 31% while maintaining similar accuracy. Overall, MYCD achieves a practical balance of speed, accuracy, and efficiency, providing a deployable and reproducible framework for real-time road damage detection in resource-limited settings.
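The mAP@0.5 metric reported above counts a detection as correct when its bounding box overlaps the ground truth with Intersection-over-Union (IoU) of at least 0.5. A minimal IoU sketch, with boxes encoded as `[x1, y1, x2, y2]` (the encoding here is an illustrative convention):

```python
# IoU of two axis-aligned boxes: area of overlap divided by area of union.

def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)       # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by 5 in x: intersection 50, union 150, IoU = 1/3.
score = iou([0, 0, 10, 10], [5, 0, 15, 10])
```

At the 0.5 threshold, this detection (IoU of 1/3) would be scored as a miss even though the boxes partially overlap.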
Robust Predictive Model for Heart Disease Diagnosis Using Advanced Machine Learning Techniques
Sovia, Rini; Anam, M. Khairul; Wisky, Irzal Arief; Permana, Randy; Rahmi, Nadya Alinda; Zain, Ruri Hartika
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1092

Abstract

This study presents a hybrid ensemble learning framework designed to enhance the predictive accuracy, robustness, and generalizability of heart disease classification models. The framework integrates three base classifiers: Decision Tree (DT), Gaussian Naive Bayes (GNB), and K-Nearest Neighbor (KNN), which are combined using a stacking ensemble method with Logistic Regression (LR) as the meta-learner. Each classifier contributes a distinct analytical perspective: DT models nonlinear relationships, GNB provides probabilistic reasoning, and KNN captures similarity-based patterns. Logistic Regression aggregates their outputs to produce a unified predictive decision. To mitigate class imbalance commonly observed in clinical datasets, the Synthetic Minority Oversampling Technique (SMOTE) is applied to generate synthetic samples of the minority class, improving the model's ability to recognize underrepresented cases. Hyperparameter optimization is performed using the Optuna framework, which efficiently explores candidate parameter configurations. The proposed model was evaluated on a publicly available heart disease dataset and achieved an accuracy of 99.61%, precision of 99.62%, recall of 99.59%, F1-score of 99.60%, and specificity of 99.58%, corresponding to a false positive rate of only 0.42%. These results demonstrate the framework's strong ability to accurately identify heart disease cases while minimizing misclassification. The integration of SMOTE, stacking, and Optuna optimization contributes to its superior performance and robustness. Consequently, this approach shows strong potential for integration into clinical decision support systems to assist healthcare professionals in reliable and timely diagnosis.
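The specificity and false positive rate reported above both fall directly out of the binary confusion matrix (and always satisfy FPR = 1 - specificity, which is why 99.58% specificity implies a 0.42% FPR). A minimal sketch with illustrative counts, not the study's results:

```python
# Confusion-matrix-derived metrics for a binary (disease / no disease) task.

def binary_metrics(tp, fp, tn, fn):
    """Specificity, false positive rate, and recall from confusion counts."""
    specificity = tn / (tn + fp)     # true negative rate
    fpr = fp / (fp + tn)             # complement: fpr == 1 - specificity
    recall = tp / (tp + fn)          # sensitivity / true positive rate
    return {"specificity": specificity, "fpr": fpr, "recall": recall}

# Illustrative counts: 95 true positives, 2 false positives,
# 98 true negatives, 5 false negatives.
m = binary_metrics(tp=95, fp=2, tn=98, fn=5)
```

Reporting specificity alongside recall matters in clinical settings: recall captures missed disease cases, while specificity captures healthy patients wrongly flagged as ill.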
Co-Authors -, Tashid Abrar Hadi Ade Riyanda Putra Agus Tri Nurhuda Agustin Agustin Agustin Agustin Agusviyanda Agusviyanda Ahmad Ihsan Ahmad Zamsuri Ahmad Zamsuri, Ahmad Aisum Aliyah Sari Akram, Rizalul Al Amin Fadillah Sani Alkadri Masnur Ambiyar, Ambiyar Andesa, Khusaeri Andhika, Imam Andi Supriadi Chan, Andi Supriadi Anwar, Reksi Aprillian Kartino Arba, Muhammad Hendra Arda Yunianta Arda Yunianta Arief Hidayat Arita Fitri, Triyani Arsyah, Ulya Ilhami Atalya Kurnia Sari Atmaja, Teuku Hadi Wibowo Bambang Kurniawan Br.Situmorang, Elisabet Sinta Romaito Budiman, Edy Budiman, Edy Bunga Nanti Pikir Bunga Nanti Pikir Chatarina Umbul Wahyuni Cut, Banta Damar Sanggara Habibie Darma, Adi Surya Daryanto, Diki Dea Safitri Dedy Irfan Devi Yuliana Dewi Sari Wahyuni Dewi, Nina Nurmalia Didik Sudyana Didik Sudyana Diki Daryanto Diky Daryanto Dona Wahyuning Laily Eddy Kurniawan Pradana Efrizoni, Lusiana Elangga Sony Widiharsono Elva, Yesri Emerlada, Esi Tri Erlin Erlin Erlinda, Susi Ersan Fadrial, Yogi Esi Tri Emerlada Fadli Suandi Fahrul Yamani Faisol Mas’ud Fajar Arifandi Fajrizal Fatdha, T.Sy. Eiva Faza Alameka Fernando Elda Pati Fika Felanda Ardelia Firdaus, Muhammad Bambang Fransiskus Zoromi Fransiskus Zoromi Fransiskus Zoromi Fransiskus Zoromi, Fransiskus Fryonanda, Harfebi Fuquh Rahmat Shaleh Gendhy Dwi Harlyan Gubtha Mahendra Putra Gunadi Gunanti Mahasri Gunawan, Chichi Rizka Habibi Ulayya Hadi Asnal, Hadi Hairah, Ummul Halim, Muhammad Yusuf Hamdani Hamdani - Hamdani . Hamdani Hamdani Hamdani Hamdani Hamdani Hamdani Handayani, Nadya Satya Hanif Aulia Happy Yugo Prasetiya Haris Kurniawan, Haris Hartomi, Zupri Henra Hasan J. 
Alyamani Haviluddin Haviluddin Hazira, Nadila Helda Yeni Helda Yenni, Helda Hendra Saputra Hendrawan, Riki hendri, nofri Herianto Herianto Herwin Herwin Ika Purnamasari Ike Yunia Pasa Ikhsan Ikhsan Indah Mukhlis Tamara Indra Prayogo Indra Prayogo Indri Febrianti Irfan Putra Pratama Irfansyah Irfansyah Irfansyah Irfansyah Irsyad, Akhmad Irwanda Syahputra Irwanda Syahputra Irzal Arief Wisky Istianah Istianah Jamaris, Muhamad Jamaris, Muhammad Jasmarizal Junadhi Junadhi Junadhi Junadhi Junadhi, Junadhi Kadek Mirnawati Karfindo, Karfindo Karpen Kartina Diah K. W. Kharisma Rahayu Khusaeri Andesa Khusaeri Andesa Kresnapati, I Nyoman Bagus Aji Kudadiri, Parlindungan Lathifah Lathifah Lathifah Lathifah Lathifah Lathifah Lathifah Lathifah Lathifah, Lathifah Latifah Lia Oktavia Ika Putri Lilis Cahaya Septiana Liza Fitria Lucky Lhaura Van FC Lucky Lhaura Van FC, Lucky Lhaura Lusiana Lusiana Efrizoni Lusiana Lusiana M Syauqi Hafizh Machdalena Mahamad, Abd Kadir Mahendra, Muhammad Ihza Mahessya, Raja Ayu Mardainis Mardainis Mardainis Martilinda Panjaitan Mega Susanti Mega Susanti Melda Royani Michal Dennis Michel Kasaf Mi`rajul Rifqi Mohamad, Nur Ikhwan Bin Muhaimin, Abdi Muhamad Jamaris Muhamad Sadar Muhamad Sadar, Muhamad Muhammad Bambang F Muhammad Bambang Firdaus Muhammad Bambang Firdaus Muhammad Budi Saputra muhammad Fuad Muhammad Nur Ihwan Muhammad Wisdan Pratama Putra Munawir Munawir Munawir N.A, Randi Nadila Rahmadhani Nadya Alinda Rahmi Nanda, Novianda Nanda Nariza Wanti Wulan Sari Nasrul Sani Neci Nirwanda Nisa, Aida Nora Lizarti Novi Yona Sidratul Munti Nu'man, Nu'man Nurjayadi Nurjayadi Nurjayadi Nurjayadi Nurjayadi Nurjayadi Nurkholifah Dwi Rahayu Nurul fadillah, Nurul Nurul Indriani Nurwijayanti Pandu Pratama Putra, Pandu Pratama Paradila, Dinda Permana, Randy Pradipta , Rahman Pranata, Angga Pratiwi, Mutiana Purwanto Putra, Ryanda Satria Rahmaddeni Rahmaddeni Rahmaddeni Rahmaddeni Rahmi, Nadya Alinda Rahmiati Rahmiati Rahmiati Rebecca La Volla Nyoto Refni Wahyuni 
Reksi Anwar Rini Yanti Rini Yanti Rini Yanti Rinno Hendika Putra Rio Andika Malik Rivaldi Dwi Andhika Rohana Yola Parastika Hutasoit Rohmat Romadhoni Rometdo Muzawi, Rometdo Ruri Hartika Zain Saiful Bukhori Salman Aldo Alfaresi Salsabila Rabbani Salsabila Rabbani Saon, Sharifah Saputra, Eko Ikhwan Sari Irma Yani Sitorus Sari, Atalya Kurnia Sarjon Defit Silvyana Dwi Putri Sofiansyah Fadli Sofiansyah Fadli Soni Sovia, Rini suaidah suaidah Sumijan Sumijan Susandri, Susandri Susanti Susanti Susanti Susanti Susanti Susanti Susanti, Mega Susanti, Susanti Susi Erlinda Susi Erlinda SUSI ERLINDA Syam, Salmaini Safitri Syamsiar, Syamsiar T. Sy. Eiva Fatdha Taruk, Medi Tashid Tashid Tashid Tatang Hidayat Tejawati, Andi Tengku Alvin Firdaus Teri Ade Putra Tjut Rizqi Maysyarah Hadi Torkis Nasution Tri Putri Lestari Tri Putri Lestari Tri Putri Lestari Tri Putri Lestari, Tri Putri Triyani Arita Fitri Ulfah, Aniq Noviciate Wahyudianto, Mochamad Rizky Wahyuni, Dewi Sari Waksito, Alan Zulfikar Waskita, Ghozi Indra Wifra, Rizki Wirta Agustin Wirta Agustin Woro Hastuti Setyantini Yaakub, Saleh Yansyah Saputra Wijaya Yesaya Twin Situmorang Yogi Ersan Fadrial Yogi Yunefri, Yogi Yoyon Efendi Yuda Irawan Yudhistira, Dewangga Yumami, Eva Zainal Arifin Zeki Kurniadi zeki Kurniadi