Articles

Found 13 Documents

Prediksi Magnitudo Gempa Menggunakan Random Forest, Support Vector Regression, XGBoost, LightGBM, dan Multi-Layer Perceptron Berdasarkan Data Kedalaman dan Geolokasi Maulita, Ika; Wahid, Arif Mu'amar
Jurnal Pendidikan dan Teknologi Indonesia Vol 4 No 5 (2024): JPTI - Mei 2024
Publisher : CV Infinite Corporation

DOI: 10.52436/1.jpti.470

Abstract

This study compares the performance of five machine-learning algorithms, namely Random Forest, Support Vector Regression, XGBoost, LightGBM, and Multi-Layer Perceptron, in predicting earthquake magnitude from depth and geolocation data. The motivating problem is the need for more accurate magnitude prediction to improve disaster-risk mitigation, especially in earthquake-prone regions. The data cover the depth, latitude, and longitude of earthquake events over a given period. The method involves splitting the data into training and test sets and evaluating model performance with the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and R² metrics. The results show that LightGBM performed best, with an MAE of 0.4688, an RMSE of 0.6284, and an R² of 0.2458. Random Forest followed with an MAE of 0.4750, an RMSE of 0.6312, and an R² of 0.2391. XGBoost was competitive, with an MAE of 0.4932, an RMSE of 0.6471, and an R² of 0.2003. By contrast, Support Vector Regression recorded an MAE of 0.5136, an RMSE of 0.6987, and an R² of 0.0677, while Multi-Layer Perceptron performed worst, with an MAE of 0.5190, an RMSE of 0.7152, and an R² of 0.0231. The findings matter for the development of earthquake early-warning systems and for improving the accuracy of magnitude prediction. The study confirms that choosing the right model can contribute to disaster-risk mitigation by providing more accurate information about the magnitude of potential earthquakes, and shows that machine-learning algorithms, particularly LightGBM and Random Forest, can be effective tools in seismological analysis and earthquake-prediction applications.
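The comparison above rests on three standard regression metrics. A minimal sketch of how MAE, RMSE, and R² are computed; the magnitude values below are made up for illustration, not the study's data:

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error: average absolute deviation
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Squared Error: penalises large errors more heavily
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    # Coefficient of determination: share of variance explained by the model
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Illustrative magnitudes only, not the study's data
y_true = [4.5, 5.1, 4.8, 5.6, 4.9]
y_pred = [4.7, 5.0, 4.6, 5.2, 5.0]
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```

An R² around 0.25, as reported for LightGBM, means the model explains roughly a quarter of the variance in magnitude, so even the best model here leaves most of the variability unaccounted for.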
Analisis Komparatif Linear Regression, Random Forest, dan Gradient Boosting untuk Prediksi Banjir Maulita, Ika; Widiawati, Chyntia Raras Ajeng; Wahid, Arif Mu'amar
Jurnal Pendidikan dan Teknologi Indonesia Vol 4 No 8 (2024): JPTI - Agustus 2024
Publisher : CV Infinite Corporation

DOI: 10.52436/1.jpti.599

Abstract

This study evaluates three machine-learning models (Linear Regression, Random Forest Regressor, and Gradient Boosting Regressor) for predicting flood probability in India, with the aim of improving prediction accuracy and supporting flood-risk mitigation strategies. Model performance was evaluated using the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and R² metrics. The results show that Linear Regression and Gradient Boosting Regressor performed almost identically, with competitive MAE and RMSE, although Linear Regression was slightly better at explaining the variability of flood probability as measured by R². By contrast, Random Forest Regressor underperformed, most likely due to overfitting or suboptimal parameter tuning. The study contributes to improving the accuracy of early-warning systems and to data-driven flood-risk management. By analysing the main factors that influence flood probability, it offers insights that can support more effective interventions, such as better river management and adaptive urban spatial planning. Suggestions for future work include exploring additional algorithms, including deep-learning approaches, applying advanced feature engineering, and optimising models with Automated Machine Learning (AutoML) tools. These findings contribute to more accurate and efficient flood-prediction methods and strengthen future flood-risk mitigation efforts.
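That a plain linear model can edge out tree ensembles is easiest to see from the baseline itself. A minimal single-feature ordinary-least-squares fit; the rainfall numbers are made up for the sketch, not taken from the Indian flood dataset, and the study presumably used multivariate inputs:

```python
def fit_simple_linear(x, y):
    # Closed-form OLS for one feature: y ~ alpha + beta * x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    alpha = my - beta * mx
    return alpha, beta

# Illustrative rainfall (mm) vs flood probability, invented for this sketch
rain = [10.0, 20.0, 30.0, 40.0]
prob = [0.1, 0.3, 0.5, 0.7]
alpha, beta = fit_simple_linear(rain, prob)
```

When the target really is close to linear in the features, such a baseline leaves the ensembles little non-linear signal to exploit, which is consistent with the R² ordering the abstract reports.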
Optimasi Logistic Regression dan Random Forest untuk Deteksi Berita Hoax Berbasis TF-IDF Wahid, Arif Mu'amar; Turino, Turino; Nugroho, Khabib Adi; Maharani, Titi Safitri; Darmono, Darmono; Utomo, Fandy Setyo
Jurnal Pendidikan dan Teknologi Indonesia Vol 4 No 8 (2024): JPTI - Agustus 2024
Publisher : CV Infinite Corporation

DOI: 10.52436/1.jpti.602

Abstract

The spread of hoax news in the digital era is a serious challenge that calls for technology-based solutions to identify it and minimise its impact. This study evaluates the performance of Logistic Regression (LR) and Random Forest (RF) in detecting hoax news using Term Frequency-Inverse Document Frequency (TF-IDF) text representation. Hyperparameter tuning was applied to both algorithms to improve accuracy, precision, recall, and F1-score. The dataset consists of hoax and valid news articles in Indonesian, preprocessed through text cleaning, stopword removal, and stemming. The evaluation shows that Logistic Regression, after tuning, reached an accuracy of 95.20%, precision of 95.71%, recall of 94.48%, and an F1-score of 95.09%. Random Forest achieved an accuracy of 92.39%, precision of 94.39%, recall of 89.87%, and an F1-score of 92.08%. Logistic Regression was stronger in balancing precision and recall, while Random Forest stood out in precision and in handling more complex data patterns. TF-IDF proved effective at weighting relevant words, helping the classifiers recognise patterns in text data. The study provides a practical foundation for hoax-detection systems in NLP-based applications, for both academic use and industrial deployment, and contributes to the development of Natural Language Processing (NLP)-based hoax detection, particularly for Indonesian. For further development, the authors suggest expanding the dataset with more diverse news sources and exploring deep-learning algorithms such as LSTM or Transformer. Scientifically, the study makes an important contribution by testing the effectiveness of hyperparameter tuning in improving the accuracy of hoax-detection models. The results are expected to serve as a reference for building more accurate and reliable hoax-detection systems.
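The TF-IDF weighting at the core of the pipeline can be sketched in a few lines. This is the textbook form; scikit-learn's TfidfVectorizer, commonly used in such pipelines, applies a smoothed IDF and vector normalisation on top, and the tokens below are invented examples:

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: list of tokenised documents (lists of words)
    n = len(docs)
    df = Counter()                      # document frequency per word
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        # weight = term frequency * log inverse document frequency
        weighted.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return weighted

docs = [["berita", "hoax", "viral"], ["berita", "valid", "resmi"]]
weights = tfidf(docs)
```

Words that appear in every document get zero weight, while distinctive words are boosted, which is why the classifiers can separate hoax from valid articles on the weighted vectors.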
Optimizing Higher Education Performance Through Data Integration Using the Zachman Framework: A Case Study on LAM Infokom Accreditation Criteria Jahir, Abdul; Wahid, Arif Mu'amar; Sufranto, Tri Titis
Jurnal Nasional Teknologi dan Sistem Informasi Vol 10 No 3 (2024): Desember 2024
Publisher : Departemen Sistem Informasi, Fakultas Teknologi Informasi, Universitas Andalas

DOI: 10.25077/TEKNOSI.v10i3.2024.201-215

Abstract

This study explores the application of the Zachman Framework to enhance data integration in higher education, specifically targeting the LAM Infokom accreditation criteria. The research addresses the challenges faced by educational institutions in managing fragmented data systems, which hinder their ability to meet comprehensive accreditation standards. Utilizing a multi-phase methodology, the research incorporates a literature review, case analysis, and prototype development to develop a cohesive data integration model aligned with accreditation requirements. The Zachman Framework provides a structured approach to system integration, covering perspectives such as data types, processes, storage locations, personnel, timelines, and objectives. The proposed integration strategy emphasizes the use of Application Programming Interfaces (APIs), middleware solutions, and a centralized data warehouse to unify disparate data sources. These integration methods facilitate seamless data exchange across academic, financial, and administrative systems, promoting data consistency and accessibility. Additionally, a phased implementation plan is recommended, outlining specific tasks, resource allocation, and monitoring measures to ensure systematic system improvement. Key performance indicators and evaluation metrics are established to monitor the effectiveness of the integrated system in meeting accreditation requirements. The study highlights the importance of a robust data governance framework and the role of stakeholder engagement in overcoming technical and resource-related challenges. Ultimately, this research contributes a practical data integration blueprint for higher education institutions, offering a replicable model for achieving and maintaining accreditation compliance through structured data management and governance practices.
ENHANCING COLLABORATION DATA MANAGEMENT THROUGH DATA WAREHOUSE DESIGN: MEETING BAN-PT ACCREDITATION AND KERMA REPORTING REQUIREMENTS IN HIGHER EDUCATION Wahid, Arif Mu'amar; Afuan, Lasmedi; Utomo, Fandy Setyo
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 6 (2024): JUTIF Volume 5, Number 6, Desember 2024
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.6.1747

Abstract

In higher education institutions, effective management of collaboration data is crucial for academic reporting and strategic planning. This study addresses the challenges of managing diverse data types and the necessity for streamlined data management to meet BAN-PT accreditation and Kerma reporting requirements. It aims to design and implement a data warehouse utilizing the star schema for improved accessibility and decision-making. Highlighting the development process, special emphasis is placed on the Extract, Transform, Load (ETL) process with Pentaho to assure data integrity and quality. The methodology involves a systematic approach to constructing the data warehouse, aimed at resolving identified challenges through efficient data organization and quality management. Results demonstrate significant enhancements in data accessibility, reporting efficiency, and quality, leading to reduced administrative efforts and improved decision-making. The research also considers the wider implications of such data management systems in academic administration, suggesting the potential of data warehouses in higher education as benchmarks for similar institutional challenges. Future research directions are recommended for optimizing data warehouse designs and adapting to evolving academic standards, underlining the critical role of advanced data management in meeting stringent accreditation and reporting needs, thus providing a model for technology-driven solutions in educational data management.
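A star schema keeps one fact table of measurable events surrounded by descriptive dimension tables, so reports reduce to a single aggregate query. A minimal sketch in SQLite; all table and column names here are hypothetical, not the paper's schema, and the paper's ETL runs through Pentaho rather than raw SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical star schema: one fact table referencing two dimension tables
cur.executescript("""
CREATE TABLE dim_partner (partner_id INTEGER PRIMARY KEY, name TEXT, level TEXT);
CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_collaboration (
    fact_id INTEGER PRIMARY KEY,
    partner_id INTEGER REFERENCES dim_partner(partner_id),
    time_id INTEGER REFERENCES dim_time(time_id),
    activity_count INTEGER
);
INSERT INTO dim_partner VALUES (1, 'Univ A', 'international'), (2, 'Company B', 'national');
INSERT INTO dim_time VALUES (1, 2023), (2, 2024);
INSERT INTO fact_collaboration VALUES (1, 1, 1, 3), (2, 1, 2, 5), (3, 2, 2, 2);
""")
# An accreditation-style report: collaborations by partner level and year
rows = cur.execute("""
SELECT p.level, t.year, SUM(f.activity_count)
FROM fact_collaboration f
JOIN dim_partner p ON p.partner_id = f.partner_id
JOIN dim_time t ON t.time_id = f.time_id
GROUP BY p.level, t.year
ORDER BY p.level, t.year
""").fetchall()
```

A BAN-PT or Kerma report then becomes one aggregate over the fact table instead of ad-hoc joins across operational systems, which is where the reported gains in reporting efficiency come from.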
Optimization of Recommender Systems for Image-Based Website Themes Using Transfer Learning Wahid, Arif Mu'amar; Hariguna, Taqwa; Karyono, Giat
Journal of Applied Data Sciences Vol 6, No 2: MAY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i2.671

Abstract

Recommender systems play a crucial role in personalizing user experiences in e-commerce, digital media, and web design. However, traditional methods such as Collaborative Filtering and Content-Based Filtering struggle to account for visual preferences, limiting their effectiveness in domains where aesthetics influence decision-making, such as website theme recommendations. These systems face challenges such as data sparsity, cold-start problems, and an inability to capture intricate visual features. To address these limitations, this study integrates Convolutional Neural Networks (CNNs) with advanced recommendation models, including Inception V3, DeepStyle, and Visual Neural Personalized Ranking (VNPR), to enhance the accuracy and personalization of visually-aware recommender systems. A quantitative research approach was employed, using controlled experiments to evaluate different combinations of feature extractors and recommendation models. Data was sourced from ThemeForest, a widely used platform for website themes, and underwent preprocessing to ensure consistency. The models were evaluated using precision, recall, F1 score, Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG) to measure recommendation quality. The results indicate that Inception V3 + VNPR outperforms other model combinations, achieving the highest accuracy in personalized theme recommendations. The integration of transfer learning further improved feature extraction and performance, even with limited training data. These findings underscore the importance of combining deep learning-based feature extraction with recommendation models to improve visually-driven recommendations. This study provides a comparative analysis of CNN-based recommender systems and contributes insights for optimizing recommendations in visually complex domains. Despite improvements, challenges such as dataset diversity remain a limitation, affecting generalizability.
Future research could explore alternative CNN architectures, such as ResNet and DenseNet, and incorporate user feedback mechanisms to further enhance recommendation accuracy and adaptability.
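The core of a visually-aware recommender is ranking catalogue items by similarity between extracted feature vectors. A minimal nearest-neighbour stand-in; the paper's actual models (DeepStyle, VNPR) learn personalised rankings, and the toy 2-D vectors below merely stand in for Inception V3 embeddings:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(query_vec, catalog, k=2):
    # Rank catalogue themes by visual similarity to the query embedding
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]), reverse=True)
    return ranked[:k]

# Hypothetical theme embeddings (in practice, CNN feature vectors)
catalog = {"minimal": [1.0, 0.1], "colorful": [0.1, 1.0], "mixed": [0.7, 0.7]}
top = recommend([1.0, 0.0], catalog)
```

Transfer learning matters here because a pretrained CNN already produces embeddings in which visually similar themes sit close together, so useful rankings are possible even with limited training data.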
Transformasi Media Pembelajaran Melalui Pelatihan AI untuk Meningkatkan Kompetensi Digital Guru di SD Negeri 2 Gandatapa Abdul Azis; Wahid, Arif Mu'amar; Widyaningsih, Dyah Ayu; Putri, Khanivia Yuniska
JURPIKAT (Jurnal Pengabdian Kepada Masyarakat) Vol. 6 No. 3 (2025)
Publisher : Politeknik Piksi Ganesha Indonesia

DOI: 10.37339/jurpikat.v6i3.2531

Abstract

This community-service programme addresses the low uptake of technology among teachers at SD Negeri 2 Gandatapa, which had made lessons monotonous and lowered student motivation. Its goal was to improve teachers' competence in creating interactive, AI-based learning media. The method was a three-day participatory training held in September 2024, including hands-on practice with the ChatGPT, Wordwall, and Gamma platforms. The programme's success was measured with a pre-test and post-test. The results show a significant improvement in competence, with every teacher producing a teaching-media product. The training led to more dynamic teaching, with the potential to increase student engagement and motivation to learn.
Penguatan Keterampilan Menulis Ilmiah Dosen Universitas Amikom Purwokerto pada Bidang Data Science untuk Publikasi Internasional Hariguna, Taqwa; Sarmini, Sarmini; Wahid, Arif Mu'amar; Pratama, Satrya Fajri; Yi, Ding
Jurnal Abdi Masyarakat Indonesia Vol 5 No 5 (2025): JAMSI - September 2025
Publisher : CV Firmos

DOI: 10.54082/jamsi.2082

Abstract

Scientific writing is an essential skill for lecturers in data science who want to raise the productivity and quality of their publications in reputable international journals. The main problem faced by lecturers in the Faculty of Computer Science at Universitas Amikom Purwokerto was a lack of skill in writing scientific articles that meet international journal standards, which limited their publication output. As a solution, this community-service activity ran training and mentoring in scientific-article writing, aimed at increasing the lecturers' capacity to write systematically and to international journal standards. The interactive workshop was attended by 25 lecturers from four study programmes and included theory sessions, hands-on practice, and peer review. The evaluation showed significant improvement in the participants' abilities: writing structure rose from 60% to 80%, scientific language from 58% to 82%, and understanding of journal standards from 52% to 76%. In addition, 8 participants produced article drafts ready to submit to international journals. The activity also fostered a community of scientific writers, a first step towards building a sustainable, collaborative academic culture in the Faculty of Computer Science.
Empirical Analysis of Social Media Interaction Metrics and Their Impact on Startup Engagement Wahid, Arif Mu'amar; Maulita, Ika
International Journal of Informatics and Information Systems Vol 8, No 3: September 2025
Publisher : International Journal of Informatics and Information Systems

DOI: 10.47738/ijiis.v8i3.272

Abstract

In the digital economy, social media serves as a crucial platform for startups to build relationships with audiences and strengthen brand presence. However, the specific effects of different types of user interactions—likes, comments, and shares—on startup engagement remain insufficiently quantified. This study provides an empirical analysis of how social media interaction metrics influence engagement using secondary data from the publicly available Social Media Engagement Metrics dataset on Kaggle. Employing a quantitative design, the study integrates descriptive statistics, Pearson correlation, Random Forest, and multiple linear regression to examine both linear and non-linear relationships. Results show that likes, comments, and shares collectively affect engagement rates, with Random Forest identifying likes as the most influential feature. However, regression results indicate that shares exert a statistically significant but negative effect on engagement, suggesting complex behavioral patterns behind user interactions. Visual analyses—including histograms, boxplots, and heatmaps—support data normality and highlight variation in post performance. The findings emphasize the importance of visually engaging content and interactive captions to enhance user engagement. This study contributes to digital marketing research by combining methodological rigor with actionable insights, offering data-driven recommendations for startups aiming to optimize their social media strategies.
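The linear half of that analysis rests on Pearson correlation between each interaction metric and the engagement rate. A minimal sketch; the per-post numbers below are illustrative, not drawn from the Kaggle dataset, and the Random Forest feature-importance side is omitted:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative per-post interaction counts vs engagement rate (%)
likes      = [120, 340, 90, 410, 260]
engagement = [3.1, 6.8, 2.4, 7.9, 5.2]
r = pearson(likes, engagement)
```

Correlation alone cannot separate the metrics' individual effects, which is why the study also fits a multiple regression; that step is where a coefficient such as the negative effect of shares can surface even when raw correlations are positive.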