PREDIKSI VOLUME LALU LINTAS ANGKUTAN LEBARAN PADA WILAYAH JAWA TENGAH DENGAN METODE K-MEANS CLUSTERING UNTUK ADAPTIVE NEURO FUZZY INFERENCE SYSTEM (ANFIS) Evanita Evanita; Edi Noersasongko; Ricardus Anggi Pramunendar
Simetris: Jurnal Teknik Mesin, Elektro dan Ilmu Komputer Vol 7, No 1 (2016): JURNAL SIMETRIS VOLUME 7 NO 1 TAHUN 2016
Publisher : Universitas Muria Kudus

DOI: 10.24176/simet.v7i1.505

Abstract

In Indonesia, traffic congestion occurs during commuting hours, long holidays, and national holidays, especially during Idul Fitri (Lebaran). Mudik, the homeward exodus ahead of Lebaran, has become a long-awaited tradition for Indonesians, who travel en masse back to their hometowns to meet and gather with family. This annual routine is carried out especially by residents of large cities such as Jakarta, the capital of the Republic of Indonesia and a common destination for people from rural areas seeking better employment. Vehicle volume increases every year from 7 days before until 7 days after Lebaran, particularly on routes into and out of Central Java (Jawa Tengah), a major mudik destination. This ever-increasing vehicle volume during the mudik flow is studied further with the ANFIS method, so that it can serve as an alternative basis for deciding what steps to take in the following year to improve traffic services and reduce long congestion and accident rates. Using ANFIS input parameters of up to 5 clusters, 100 epochs, and an error goal of 0, the best ANFIS performance with K-Means clustering was obtained with 3 clusters and a best epoch of 20, giving a best training RMSE of 0.1198, a best testing RMSE of 0.0282, and the shortest processing time of 0.0695. The prediction results are expected to serve as an alternative basis for deciding what steps to take in the following year to further improve traffic services.
Keywords: Lebaran transport, Jawa Tengah, ANFIS.
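For readers who want a concrete starting point for the clustering step described above, the following is a minimal Python sketch. The variable daily_volume and its contents are hypothetical placeholders, not the authors' dataset, and only the K-Means grouping into 3 clusters plus an RMSE computation are shown; the ANFIS training itself is not reproduced here.

```python
# Minimal sketch of the K-Means step that could seed the fuzzy clusters (hypothetical data/names).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mean_squared_error

# Hypothetical H-7 .. H+7 daily vehicle counts for several years (rows = years).
daily_volume = np.random.default_rng(0).integers(20_000, 80_000, size=(5, 15)).astype(float)

# Normalise, then cluster the daily values into 3 groups, as in the best-performing setup.
X = (daily_volume - daily_volume.min()) / (daily_volume.max() - daily_volume.min())
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X.reshape(-1, 1))

# The cluster centres would serve as initial membership-function centres for ANFIS;
# the ANFIS training stage is outside this sketch.
print("cluster centres:", kmeans.cluster_centers_.ravel())

# RMSE between the last year's values and their assigned cluster centres (illustrative only).
y_true = X[-1].ravel()
y_pred = kmeans.cluster_centers_[kmeans.predict(X[-1].reshape(-1, 1))].ravel()
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
```

Swapping the random placeholder for the real Lebaran traffic counts, and feeding the cluster structure into an ANFIS implementation, would be the next step.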
Pengaruh Peringkas Dokumen Otomatis Dengan Penggabungan Metode Fitur Dan Latent Semantic Analysis (LSA) Pada Proses Clustering Dokumen Teks Berbahasa Indonesia Muhammad Jamhari; Edi Noersasongko; Hendro Subagyo
Jurnal Pseudocode Vol 1, No 2 (2014)
Publisher : Universitas Bengkulu

DOI: 10.33369/pseudocode.1.2.72-82

Abstract

Summarization is the process of collecting the most important parts of a source document to produce a shorter version. The methods considered most suitable for summarization are the feature-based method and LSA (Latent Semantic Analysis). Clustering is the process of grouping documents that share the same topic. The method most often used is LSA, in which SVD (Singular Value Decomposition) is used to relate the semantics between terms and sentences as well as documents. SVD also reduces the large dimensionality of the term-document matrix and, together with the feature selection method, performs feature reduction. This thesis examines the effect of combining the feature-based method and the LSA method for summarization on a dataset whose results are then clustered based on LSA, where SVD is performed together with the feature selection method. Experiments on 150 documents from 5 topics, with several combinations of the feature-based method, the LSA method, and both methods combined at the summarization stage, integrated with an LSA-based clustering stage with k = 12 and a guided theme-selection theme-contribution method, show that the combined method at the summarization stage has a large effect, achieving an accuracy of 93.33% and a relatively fast computation time of around 57 seconds, with the following combination proportion: LSA summary + 50% feature summary + 20% feature selection + LSA clustering.
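As a hedged illustration of the LSA pipeline (TF-IDF, SVD-based dimensionality reduction, then clustering), the following Python sketch uses a tiny hypothetical corpus; the number of components and clusters are illustrative and much smaller than the k = 12, 150-document setup described above.

```python
# Minimal sketch of LSA-based document clustering (hypothetical corpus).
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "harga saham naik di bursa efek",             # finance-like topic
    "bank menurunkan suku bunga kredit",          # finance-like topic
    "tim sepak bola memenangkan liga",            # sports-like topic
    "pemain bola mencetak dua gol",               # sports-like topic
    "pemerintah mengesahkan undang-undang baru",  # politics-like topic
]

tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # SVD reduces the dimensionality (k = 12 in the paper)
reduced = lsa.fit_transform(tfidf)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print(labels)
```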
Improving the Accuracy of C4.5 Algorithm with Chi-Square Method on Pure Tea Classification Using Electronic Nose Mula Agung Barata; Edi Noersasongko; Purwanto; Moch Arief Soeleman
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 7 No 2 (2023): April 2023
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v7i2.4687

Abstract

Tea is one of the plantation products under the Ministry of Agriculture of the Republic of Indonesia and plays an essential role as a mainstay commodity that boosts the Indonesian economy. Each type of tea has different properties, and the aroma of each type can be used to assess its quality. The human sense of smell is still very limited in classifying pure types of tea, so a device such as an electronic nose is needed to help measure the aroma of tea. The device, fitted with several gas sensors, helps collect data from the smell of pure tea and computes a value for each type of tea so that the dataset can be tested with data mining algorithms. This study uses the C4.5 algorithm as the classification method because of its advantages in handling noisy data, missing values, and both discrete and continuous variables. Meanwhile, Chi-square is used for attribute selection in the data pre-processing stage to increase the accuracy of dataset testing. Testing a pure tea dataset with four attributes, namely CO2, CO, H2, and CH4, using the C4.5 algorithm resulted in an accuracy of 93.65%, which increased to 94.27% when the dataset was tested using Chi-square feature selection with the two highest-scoring attributes.
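A minimal sketch of the pipeline described above, using scikit-learn's DecisionTreeClassifier with the entropy criterion as a stand-in for C4.5 and SelectKBest with the chi-square score for feature selection; the sensor readings and labels below are randomly generated placeholders, not the electronic-nose dataset.

```python
# Minimal sketch: chi-square feature selection followed by a C4.5-style decision tree.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))      # placeholder columns: CO2, CO, H2, CH4 (non-negative, as chi2 requires)
y = rng.integers(0, 3, 200)   # hypothetical pure-tea classes

pipe = make_pipeline(
    SelectKBest(chi2, k=2),                                       # keep the two highest-scoring attributes
    DecisionTreeClassifier(criterion="entropy", random_state=0),  # entropy-based tree as a C4.5 stand-in
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```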
Gaussian Based-SMOTE Method for Handling Imbalanced Small Datasets Muhammad Misdram; Edi Noersasongko; Purwanto Purwanto; Muljono Muljono; Fandi Yulian Pamuji
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 4 (2023): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i4.26881

Abstract

The problem of dataset imbalance needs special handling because it often hinders the classification process; a very important problem in classification is overcoming the resulting drop in classification performance. Much research has been published on handling dataset imbalance, but the results are still unsatisfactory, as shown by average accuracy gains that are not yet significant. Several common methods can be used to deal with dataset imbalance, for example oversampling, undersampling, the Synthetic Minority Oversampling Technique (SMOTE), Borderline-SMOTE, Adasyn, and Cluster-SMOTE, yet in testing, the average classification accuracy of these methods is still relatively low. In this research the selected dataset is a medical dataset, classified as a small dataset of fewer than 200 records. The proposed method is Gaussian Based-SMOTE, which is expected to work on a normal distribution and to determine the excess samples needed for the minority class. The Gaussian Based-SMOTE method is the contribution of this research and produces better accuracy than previous work. Gaussian Based-SMOTE works by first determining random locations for the synthesis candidates and then determining the Gaussian distribution; the results of these two steps are combined to produce the synthetic values. The generated synthetic values are combined with SMOTE sampling of the majority data from the training data to produce balanced data. Classification trials on the balanced data produced by Gaussian Based-SMOTE show a significant increase in accuracy of 3% on average.
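To make the idea concrete, here is a minimal sketch of Gaussian-based oversampling for a minority class. It illustrates the general principle (synthetic points drawn from a Gaussian around existing minority samples) rather than the authors' exact Gaussian Based-SMOTE algorithm; the function name and data are hypothetical.

```python
# Minimal sketch of Gaussian-based oversampling for an imbalanced dataset.
import numpy as np

def gaussian_oversample(X_min, n_new, scale=0.1, seed=0):
    """Draw n_new synthetic samples around randomly chosen minority samples."""
    rng = np.random.default_rng(seed)
    base = X_min[rng.integers(0, len(X_min), n_new)]            # random synthesis candidates
    noise = rng.normal(loc=0.0, scale=scale, size=base.shape)   # Gaussian perturbation
    return base + noise

rng = np.random.default_rng(0)
X_majority = rng.normal(size=(150, 4))            # hypothetical majority class
X_minority = rng.normal(loc=2.0, size=(20, 4))    # hypothetical minority class

X_synthetic = gaussian_oversample(X_minority, n_new=len(X_majority) - len(X_minority))
X_balanced = np.vstack([X_majority, X_minority, X_synthetic])
y_balanced = np.array([0] * len(X_majority) + [1] * (len(X_minority) + len(X_synthetic)))
print(X_balanced.shape, np.bincount(y_balanced))  # classes are now balanced
```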
Optimasi Centroid Awal Algoritma K-Medoids Menggunakan Particle Swarm Optimization Untuk Segmentasi Customer Wijaya, Danang Bagus; Noersasongko, Edi; Purwanto, Purwanto
Techno.Com Vol. 23 No. 1 (2024): Februari 2024
Publisher : LPPM Universitas Dian Nuswantoro

DOI: 10.62411/tc.v23i1.9516

Abstract

Customer segmentation is an important strategy for a company; it supports good customer relationships, which in turn increase profits. Several algorithms can be used to group customers in data mining, but K-Medoids is a good choice because it reduces sensitivity to noise and outliers. However, its cluster centers are still selected at random, which affects the clustering results, so the K-Medoids algorithm needs to be improved so that the resulting cluster quality can be optimal. Particle Swarm Optimization (PSO) is a widely used optimization algorithm that has been proven to improve clustering results. In this case, PSO is applied to the selection of the initial cluster centers of the K-Medoids algorithm so that the clustering results can be optimal. The results of the study show Davies-Bouldin Index (DBI) values for standard K-Medoids of 0.379 (K = 2), 0.283 (K = 3), and 0.593 (K = 4), while the DBI values for PSO + K-Medoids are 0.088 (K = 2), 0.226 (K = 3), and 0.363 (K = 4). These DBI values show that using PSO to determine the initial centroids of K-Medoids demonstrably improves the clustering results compared with standard K-Medoids.
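The sketch below illustrates the overall idea under stated simplifications: a basic K-Medoids implementation whose initial medoids are chosen by a small population-based search scored with the Davies-Bouldin Index. The search is a heavily simplified stand-in for PSO (no velocity or inertia terms), and the data are synthetic blobs, not the customer dataset.

```python
# Minimal sketch: K-Medoids with a searched initial medoid set, scored by the Davies-Bouldin Index (DBI).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, pairwise_distances

def k_medoids(X, medoid_idx, n_iter=20):
    """Basic alternating K-Medoids given initial medoid indices."""
    D = pairwise_distances(X)
    medoids = np.array(medoid_idx)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)          # assign points to nearest medoid
        new = np.array([
            np.where(labels == k)[0][
                np.argmin(D[np.ix_(labels == k, labels == k)].sum(axis=1))
            ]                                              # best medoid within each cluster
            for k in range(len(medoids))
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return np.argmin(D[:, medoids], axis=1)

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
rng = np.random.default_rng(0)

# Baseline: a single random initialisation.
base_labels = k_medoids(X, rng.choice(len(X), 3, replace=False))

# "Swarm" of candidate initial medoid sets; keep the one with the best (lowest) DBI.
candidates = [rng.choice(len(X), 3, replace=False) for _ in range(20)]
best_labels = min((k_medoids(X, c) for c in candidates), key=lambda l: davies_bouldin_score(X, l))

print("DBI, random init  :", davies_bouldin_score(X, base_labels))
print("DBI, searched init:", davies_bouldin_score(X, best_labels))
```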
Analysis Kernel and Feature: Impact on Classification Performance on Speech Emotion Using Machine Learning Jutono Gondohanindijo; Edi Noersasongko; Pujiono Pujiono; Muljono Muljono
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 3 (2024): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i3.29022

Abstract

The main objective of this study is to test the selection of the machine learning kernel against the characteristics of the dataset used, so as to obtain good classification performance. The goal of speech emotion recognition is to improve computers' ability to detect and process human emotions and thereby to improve their responses in human-computer interaction. It can be applied to feedback on talks with sentimental or emotional content, as well as to the detection of human mental health. Speech Emotion Recognition is one field of data mining work. Important steps in data mining research include selecting the classifier kernel, knowing the characteristics of the datasets, performing feature engineering, and combining features and corpus datasets to obtain high accuracy. The research uses analysis and comparison methods with private and public datasets to detect speech emotions. Experimental analysis covered the characteristics of the datasets, the selection of kernel classifiers, pre-processing, and the fusion of features and corpus datasets. The contributions of this study to improving the accuracy of speech emotion classification are an understanding of how to select a classifier kernel that matches the characteristics of the dataset, feature engineering, and the merging of features and datasets. For models whose kernel matches the characteristics of the dataset, this study gives an accuracy increase of 12.30% for the private dataset and 14.80% for the public dataset, with accuracies of 100.00% and 74.80% respectively. Combining features and public datasets gives an accuracy increase of 33.62%, reaching an accuracy of 73.95%.
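A minimal sketch of the kernel-comparison step: several SVC kernels are cross-validated on a stand-in feature matrix. The features and labels are random placeholders for pre-extracted acoustic features (e.g. MFCCs) and emotion classes; the corpora, feature engineering, and fusion steps from the study are not reproduced.

```python
# Minimal sketch of comparing SVC kernels for speech-emotion classification (placeholder data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))   # hypothetical 40-dimensional acoustic features
y = rng.integers(0, 4, 300)      # hypothetical emotion labels

for kernel in ("linear", "rbf", "poly", "sigmoid"):
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{kernel:8s} accuracy: {score:.3f}")
```

With real features, the kernel whose cross-validated score best matches the dataset characteristics would be the one carried forward, which is the selection step the abstract describes.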
Effectiveness of Individual Performance Dialogue on Employee Performance (Case Studies on Civil Servants of the Ministry of Finance in City of Semarang) Sudiarti, Ni Made Watimena; Noersasongko, Edi; Pramudi, Yuventius Tyas Catur
Jurnal Penelitian Ekonomi dan Bisnis Vol. 6 No. 1 (2021): March 2021
Publisher : Universitas Dian Nuswantoro Semarang

DOI: 10.33633/jpeb.v6i1.4485

Abstract

Bureaucratic reform in human resources began with the birth of Law No. 5 of 2014 on the State Civil Apparatus (ASN). The new paradigm in the ASN Law makes ASN employees a profession obliged to pursue self-development, to take responsibility for performance, and to apply merit principles in the implementation of ASN management. The Ministry of Finance followed up the bureaucratic reform in human resources with performance management policies. Performance management within the Ministry of Finance consists of three main stages, namely planning, monitoring, and the determination and evaluation of performance results. Performance evaluation results determine the performance value of employees and organizations, which affects employee careers and compensation. The Ministry of Finance's solution is to implement management practices called the Organizational Performance Dialogue (DKO) and the Individual Performance Dialogue (DKI), whose main purpose is to improve the performance of employees and the organization. This research is explanatory; the research method used is Partial Least Squares (PLS), with SmartPLS as the analysis software. Data were collected with questionnaires and structured interviews, and the respondents are employees of the Ministry of Finance in the Semarang City area. This research aims to determine the effectiveness of the performance dialogue on employee performance, so that the results can serve as a reference for management practices in organizational units within the Ministry of Finance in an effort to improve the performance of employees and organizations. Keywords: Effectiveness, Individual factors, Leadership factors, Performance dialogue, Employee performance
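As a rough, hedged illustration only: the study uses PLS-SEM in SmartPLS, which models paths between latent variables, whereas the sketch below uses scikit-learn's PLSRegression as a simple stand-in on hypothetical Likert-scale data. It should not be read as the authors' analysis.

```python
# Minimal sketch of a PLS-based analysis on hypothetical survey data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(120, 6)).astype(float)           # Likert-scale items (individual and leadership factors)
y = X[:, :3].mean(axis=1) + rng.normal(scale=0.5, size=120)   # hypothetical employee-performance score

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2:", pls.score(X, y))
print("component loadings:\n", pls.x_loadings_)
```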
Data Pre-Processing And Feature Selection Techniques Backward Elimination For Naïve Bayes Classification On Heart Disease Detection Angkasa, Julius Warih; Noersasongko, Edi; Purwanto, Purwanto
Jurnal Ekonomi Teknologi dan Bisnis (JETBIS) Vol. 2 No. 4 (2023): Jurnal Ekonomi, Teknologi dan Bisnis
Publisher : Al-Makki Publisher

DOI: 10.57185/jetbis.v2i4.48

Abstract

According to a study published in the International Journal of Cardiology titled "Heart failure across Asia: Same healthcare burden but differences in organization of care," the mortality rate due to heart failure in Indonesia is relatively high; the findings indicate that approximately 5% of the total population of Indonesia suffers from heart failure. Heart disease is a condition that occurs when the heart experiences disruptions, whether due to infections or congenital abnormalities, and it deserves attention in order to reduce the mortality rate. However, heart disease is sometimes identified inaccurately, so it is necessary to perform calculations using a predictive approach based on data mining techniques. One of the data mining methods used is the Naïve Bayes (NB) algorithm, which serves as a classification technique. In addition, before performing the classification, issues with the data content are often encountered, such as the presence of missing values. This problem can interfere with the classification process; therefore, a special pre-processing technique is needed to remove missing values, which supports obtaining accurate prediction results. Furthermore, to support the classification, this study applies feature selection using the Backward Elimination (BE) method to enhance accuracy. Through the implementation of data pre-processing and feature selection, the accuracy rate was successfully improved to 98.31%.
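A minimal sketch of the pipeline described above, assuming hypothetical data: missing values are imputed during pre-processing, backward elimination is approximated with scikit-learn's SequentialFeatureSelector in backward mode, and Gaussian Naïve Bayes performs the classification. The heart-disease dataset itself is not reproduced here.

```python
# Minimal sketch: missing-value pre-processing, backward feature elimination, and Naive Bayes classification.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))              # hypothetical clinical attributes
X[rng.random(X.shape) < 0.05] = np.nan     # inject some missing values
y = rng.integers(0, 2, 300)                # hypothetical heart-disease labels

pipe = make_pipeline(
    SimpleImputer(strategy="mean"),        # pre-processing: handle missing values
    SequentialFeatureSelector(GaussianNB(), n_features_to_select=4, direction="backward"),
    GaussianNB(),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```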