Articles


Implementasi Steganografi Menggunakan Metode End of File (EOF) untuk Menyisipkan File Detail Drawing Engineering dalam Gambar Effendi, Muhammad Makmun; Sen, Tjong Wan; Zy, Ahmad Turmudi; Isarianto, Isarianto
Techno.Com Vol. 24 No. 3 (2025): Agustus 2025
Publisher : LPPM Universitas Dian Nuswantoro

DOI: 10.62411/tc.v24i3.13227

Abstract

This study aims to develop a web-based steganography system that applies the End of File (EOF) approach to embed a PDF file containing engineering detail drawings into digital image files (.jpg and .png) in a manufacturing industry scenario. Protecting confidential technical information is essential to prevent unauthorized access during digital transmission over various communication channels. The EOF approach allows a file to be embedded invisibly without altering the original structure of the image medium, so visual quality is not degraded. The system was built with HTML, PHP, JavaScript, and MySQL as the backend and frontend foundation. Testing covered file format validation, the performance of the encryption-decryption process, and the effectiveness of file distribution via WhatsApp, email, and physical storage media. The results show that the EOF method embeds and extracts files accurately while preserving the visual quality of the images. The resulting system proves to be an effective, flexible, and practical data protection solution for industrial needs. Keywords: steganography; End of File; data security; PDF file embedding; image steganography
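
As a rough illustration of the EOF idea described in this abstract, the sketch below appends a payload file after an image's own bytes and recovers it by splitting on a separator; the marker value and file names are assumptions for illustration, not details taken from the paper's PHP implementation.

```python
# Minimal EOF steganography sketch: the payload is written after the cover
# image's data, so viewers still render the cover unchanged.
MARKER = b"--EOF-STEGO--"  # hypothetical separator between image and payload

def embed(cover_path: str, payload_path: str, out_path: str) -> None:
    """Append the payload bytes after the cover image bytes."""
    with open(cover_path, "rb") as cover, open(payload_path, "rb") as payload:
        data = cover.read() + MARKER + payload.read()
    with open(out_path, "wb") as out:
        out.write(data)

def extract(stego_path: str, out_path: str) -> None:
    """Recover the payload by splitting on the marker."""
    with open(stego_path, "rb") as stego:
        blob = stego.read()
    _, _, payload = blob.partition(MARKER)
    with open(out_path, "wb") as out:
        out.write(payload)

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    embed("drawing_cover.png", "detail_drawing.pdf", "stego.png")
    extract("stego.png", "recovered.pdf")
```
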
The Use of K-Means Algorithm Clustering in Grouping Life Expectancy (Case Study: Provinces in Indonesia) Nugraha, Dimas Reza; Zy, Ahmad Turmudi; Sunge, Aswan Supriyadi
Journal of Computer Networks, Architecture and High Performance Computing Vol. 6 No. 3 (2024): Articles Research Volume 6 Issue 3, July 2024
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v6i3.4171

Abstract

Life expectancy is an indicator that describes the expected age at death of a population and gives a general picture of conditions in a region. If the infant mortality rate is high, life expectancy in the area is low; conversely, if the infant mortality rate is low, life expectancy is high. Life expectancy is also a benchmark for government action in improving public welfare and the human development index. For this reason, life expectancy data need to be grouped to make it easier to determine which provinces have high, middle, and low life expectancy. Cluster testing with the silhouette score method showed that two provinces, East Java and Gorontalo, had low silhouette scores, which made their cluster assignments less than optimal. The clustering results divided the provinces into three clusters. Cluster 1, with a high level of life expectancy, consists of 10 provinces, namely East Java, Riau, North Sulawesi, Bali, North Kalimantan, DKI Jakarta, West Java, Central Java, East Kalimantan, and the Special Region of Yogyakarta. Cluster 2, with a middle level of life expectancy, consists of 18 provinces, namely Gorontalo, North Maluku, Central Sulawesi, South Kalimantan, North Sumatra, Bengkulu, West Sumatra, Central Kalimantan, Aceh, South Sumatra, Banten, Kep. Riau, South Sulawesi, Kep. Bangka Belitung, Lampung, West Kalimantan, Southeast Sulawesi, and Jambi. Cluster 3, with a low level of life expectancy, consists of 6 provinces, namely West Sulawesi, Papua, Maluku, West Papua, West Nusa Tenggara, and East Nusa Tenggara.
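
A minimal scikit-learn sketch of the K-Means clustering and silhouette evaluation described above; the CSV file and column names are illustrative assumptions, not the paper's actual dataset.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

df = pd.read_csv("life_expectancy_by_province.csv")  # hypothetical file
X = df[["life_expectancy"]].values                    # assumed column name

# Three clusters, matching the high / middle / low grouping in the abstract.
km = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
df["cluster"] = km.labels_
df["silhouette"] = silhouette_samples(X, km.labels_)

print("mean silhouette:", silhouette_score(X, km.labels_))
print(df.sort_values("silhouette").head())  # provinces with the weakest cluster fit
```
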
Analysis of Manual and Automated Methods Effectiveness in Website Penetration Testing for Identifying SQL Injection Vulnerabilities Anaoval, Abdul Aziz; Zy, Ahmad Turmudi; S, Suherman
Journal of Computer Networks, Architecture and High Performance Computing Vol. 6 No. 3 (2024): Articles Research Volume 6 Issue 3, July 2024
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v6i3.4249

Abstract

This research aims to identify vulnerabilities to SQL Injection attacks on websites through penetration testing using quantitative and descriptive methods. In the current digital era, data and information security has become a crucial aspect. One of the most frequent threats is the SQL Injection attack, in which attackers insert malicious SQL commands into queries executed by web applications. This study uses tools such as Burp Suite to identify and exploit vulnerabilities in a login form created by the researchers. The research process begins with the Pre-Engagement Interactions phase, which includes information gathering and setting the testing scope. Vulnerability Testing is then conducted to evaluate existing weaknesses. Exploitation is performed using the ' OR '1'='1 payload, which demonstrates that the website is vulnerable to SQL Injection. The results indicate that the login form is susceptible to SQL Injection because of insufficient input validation and the use of dynamic SQL queries without prepared statements. Implementing stricter input validation and using prepared statements proved effective in enhancing the website's security. This research contributes to the field of information system security, particularly the prevention of SQL Injection attacks; its results can serve as a practical guide for web developers in improving application security and provide a deeper understanding of SQL Injection threats and mitigation techniques.
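
The contrast the abstract draws between dynamic queries and prepared statements can be shown with a small, self-contained example; the table, credentials, and login functions below are hypothetical and use SQLite rather than the website tested in the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

def login_vulnerable(username: str, password: str) -> bool:
    # Dynamic query built by string concatenation: the ' OR '1'='1 payload
    # mentioned in the abstract turns the WHERE clause into a tautology.
    q = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
    return conn.execute(q).fetchone() is not None

def login_safe(username: str, password: str) -> bool:
    # Prepared statement with bound parameters: the payload is treated as plain data.
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (username, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_vulnerable(payload, payload))  # True  -> authentication bypassed
print(login_safe(payload, payload))        # False -> payload rejected
```
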
The Analysis of Product Sales in the Application of Data Mining with Naive Bayes Classification Zahri, M. Hannata; Sunge, Aswan S.; Zy, Ahmad Turmudi
Journal of Computer Networks, Architecture and High Performance Computing Vol. 6 No. 3 (2024): Articles Research Volume 6 Issue 3, July 2024
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v6i3.4255

Abstract

H&F Shoe Store is a privately owned micro, small, and medium enterprise (MSME) retail store that sells merchandise. The owner serves customers directly and also acts as the cashier. The owner is not well informed about which types or categories of products are most in demand by customers, which makes sales operations less than optimal. To address this, a data mining approach is needed to extract information related to the sales problem; in this case the author uses the classification method with the Naive Bayes algorithm. This study uses secondary data obtained from sales notebooks and re-entered into Microsoft Excel according to the research needs. The collected data consist of 121 records with 10 attributes, namely “Nama Produk”, “Size Produk”, “Kategori Produk”, “Jenis Produk”, “Gender Produk”, “Merek Produk”, “Stok Awal”, “Stok Terjual”, “Stok Sisa”, and “Penjualan”. The Naive Bayes classifier produced good results in classifying sales by product type and category, yielding a product sales analysis and model evaluation values. Evaluation with a confusion matrix gave an accuracy of 86.11%, a recall of 84.62%, and a precision of 84.62%.
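
A minimal sketch of Naive Bayes classification over categorical sales attributes in the spirit of the abstract; the CSV file, train/test split, and class labels are assumptions, while the predictor names follow the attributes listed above.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score

df = pd.read_csv("penjualan_hnf.csv")  # hypothetical export of the Excel data
# One-hot encode the categorical attributes used as predictors.
X = pd.get_dummies(df[["Kategori Produk", "Jenis Produk", "Gender Produk", "Merek Produk"]])
y = df["Penjualan"]  # assumed class label column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MultinomialNB().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_te, pred, average="macro", zero_division=0))
```
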
Sentiment Analysis of Dune: Part Two Movie Reviews Using the Naive Bayes Method Maheswari, Diyan Arum; Zy, Ahmad Turmudi; Afriantoro, Irfan
Journal of Computer Networks, Architecture and High Performance Computing Vol. 6 No. 4 (2024): Articles Research October 2024
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v6i4.4604

Abstract

Research on films is fascinating because of the profound changes that the development of information and communication technology has brought about in our interactions with and consumption of media content. This study performs sentiment analysis on "Dune: Part Two" movie reviews using the Naïve Bayes method. Review data was collected from IMDb and then processed through several stages such as preprocessing, feature selection with TF-IDF, data splitting, and data mining and evaluation. Naïve Bayes was chosen for its simplicity and ability to handle large datasets effectively. The test results showed a high accuracy rate of 95%, indicating that this model can identify positive, negative, and neutral sentiments well. The use of TF-IDF in feature selection allowed the model to focus on important words, enhancing its sentiment classification ability. This research can provide insights into audience perceptions of the film "Dune: Part Two," which is beneficial for the film industry.
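
A minimal sketch of the TF-IDF plus Multinomial Naive Bayes pipeline described above; the review file and its column names are illustrative assumptions, not the paper's actual IMDb dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, classification_report

df = pd.read_csv("dune_part_two_reviews.csv")  # hypothetical: columns "review", "sentiment"
X_tr, X_te, y_tr, y_te = train_test_split(df["review"], df["sentiment"],
                                          test_size=0.2, random_state=42)

# TF-IDF feature selection feeding a Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(stop_words="english", max_features=5000),
                    MultinomialNB())
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
print(classification_report(y_te, pred))  # per-class results for positive/negative/neutral
```
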
Comparative Analysis of Earthquake Prediction with SVM, Naïve Bayes, and K-Means Models Muttaqin, Ahmad Fadhiil; Sunge, Aswan Supriyadi; Zy, Ahmad Turmudi
Journal of Computer Networks, Architecture and High Performance Computing Vol. 7 No. 1 (2025): Article Research January 2025
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v7i1.5085

Abstract

Earthquakes are natural disasters with significant impacts on people and the environment, so effective prediction methods are needed to improve preparedness and risk mitigation. This study analyzes the performance of three algorithms, Support Vector Machine (SVM), Naïve Bayes, and K-Means, in predicting earthquakes in Indonesia using a dataset of 4,645 historical records from BMKG, processed through preprocessing, data separation, analysis, and performance evaluation with the RapidMiner tool. The results show that SVM has the best performance, with 99.87% accuracy, 99.83% precision, and 95.61% recall, making it highly relevant for earthquake prediction. Naïve Bayes achieved 90.31% accuracy and 95.08% recall, but its low precision (57.24%) shows the limitations of this model. K-Means clusters the earthquakes into two categories, small (3,661 records) and large (55 records), with a Davies-Bouldin Index of 0.579, reflecting good clustering quality. Based on these results, SVM is recommended as the superior earthquake prediction model, while Naïve Bayes and K-Means are more suitable for supplementary analysis. This approach confirms the potential of machine learning algorithms in supporting future earthquake risk mitigation.
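
The paper runs its comparison in RapidMiner; the sketch below is a rough scikit-learn equivalent of the same three-model setup, with a hypothetical BMKG-style CSV whose column names and label are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.cluster import KMeans
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             davies_bouldin_score)

df = pd.read_csv("bmkg_earthquakes.csv")  # hypothetical file
X = StandardScaler().fit_transform(df[["magnitude", "depth", "latitude", "longitude"]])
y = df["large_quake"]                     # assumed class label (small vs large event)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("SVM", SVC()), ("Naive Bayes", GaussianNB())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "acc:", accuracy_score(y_te, pred),
          "prec:", precision_score(y_te, pred, average="macro", zero_division=0),
          "rec:", recall_score(y_te, pred, average="macro", zero_division=0))

# Unsupervised grouping into two clusters, as in the abstract, with its
# quality summarized by the Davies-Bouldin Index (lower is better).
km = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)
print("Davies-Bouldin:", davies_bouldin_score(X, km.labels_))
```
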
Optimalisasi Load Balancing Menggunakan Metode NDLC untuk Meningkatkan Kualitas Layanan Jaringan Internet Isro, Aditya Bani; Zy, Ahmad Turmudi; Andika, Sophian
Journal of Information System Research (JOSH) Vol 5 No 4 (2024): Juli 2024
Publisher : Forum Kerjasama Pendidikan Tinggi (FKPT)

DOI: 10.47065/josh.v5i4.5484

Abstract

This research focuses on the main problems of internet network instability, with frequent disconnections, and the low quality of network services at SMK Al-Manar Islamic School, characterized by slow access speeds, high latency, and fluctuations in network performance. This instability disrupts teaching and learning, for example through slow access to educational websites, difficulty accessing online materials, and problems using internet-based applications. The purpose of this research is to improve the quality of network services by implementing load balancing on a Mikrotik router. The method used is the Network Development Life Cycle (NDLC), which includes the stages of analysis, design, simulation, implementation, monitoring, and management. Data were collected through literature study, field study, and observation using Wireshark to measure QoS parameters such as throughput, packet loss, delay, and jitter before and after the implementation of load balancing. The results show that load balancing increased throughput from 14,215.591 kbps to 46,460.8675 kbps, reduced delay from 135 ms to 7 ms, and decreased jitter from 135.615 ms to 73.293 ms, while packet loss remained 0% both before and after implementation. In conclusion, implementing load balancing on a Mikrotik router improved the quality of internet network services at SMK Al-Manar Islamic School, making the network faster, more stable, and more efficient in accordance with TIPHON standards. It is recommended that the school continue to monitor and manage the network regularly to maintain optimal performance.
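
The QoS figures above come from Wireshark captures; the sketch below shows, under simplified assumptions, how throughput, delay, jitter, and packet loss of the kind reported here can be computed from exported packet timestamps and sizes. The sample packet list is made up for illustration.

```python
def qos_metrics(packets, sent_count):
    """packets: list of (arrival_time_s, size_bytes) for packets that arrived."""
    times = [t for t, _ in packets]
    duration = times[-1] - times[0]
    # Throughput: total received bits over the capture duration, in kbps.
    throughput_kbps = sum(size for _, size in packets) * 8 / duration / 1000
    # Delay here is the average inter-arrival gap between consecutive packets.
    gaps = [b - a for a, b in zip(times, times[1:])]
    avg_delay_ms = sum(gaps) / len(gaps) * 1000
    # Jitter as the mean absolute deviation of those gaps (a simplified definition).
    jitter_ms = sum(abs(g * 1000 - avg_delay_ms) for g in gaps) / len(gaps)
    # Packet loss: share of sent packets that never arrived.
    packet_loss_pct = (sent_count - len(packets)) / sent_count * 100
    return throughput_kbps, avg_delay_ms, jitter_ms, packet_loss_pct

# Hypothetical capture: four packets received out of four sent.
sample = [(0.000, 1500), (0.007, 1500), (0.015, 1500), (0.022, 1500)]
print(qos_metrics(sample, sent_count=4))
```
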
Implementasi Metode Decision Tree pada Sistem Prediksi Status Kualitas Produk Minuman A Anshor, Abdul Halim; Zy, Ahmad Turmudi
Jurnal Ilmiah Informatika Global Vol. 15 No. 1: April 2024
Publisher : UNIVERSITAS INDO GLOBAL MANDIRI

DOI: 10.36982/jiig.v15i1.3778

Abstract

The quality of a beverage product is one of the important things that beverage entrepreneurs must pay attention to, since good-quality products affect consumers' health. UMKM Buah Sabar is an MSME located in Bekasi Regency that produces beverage product A. When these beverages are distributed, workers in the delivery section encounter situations where the product runs out of stock or is left over, and resellers must be able to tell whether the remaining product is still of good quality or has spoiled. This is important because the cooling conditions of each reseller vary in temperature and are sometimes affected by blackouts and unstable electricity voltage, which can cause the quality of product A to decline. The large number of resellers and shipped products makes it difficult for MSME workers to check the quality of beverage product A. To overcome this problem, the researchers apply a machine learning method, the decision tree, to predict the quality status of product A. The data used are 500 samples of beverage product A from the production period November 2023 to February 2024, with parameters including temperature, color, taste, aroma, and the quality status class of product A. The results show an accuracy of 99.59% in classifying the quality of product A, indicating that the decision tree algorithm performs very well in this classification task.
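
A minimal sketch of a decision-tree classifier over the parameters the abstract lists (temperature, color, taste, aroma, and the quality status class); the file name, column encodings, and class labels are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("minuman_a_samples.csv")  # hypothetical: 500 production samples
# The sensory attributes are assumed to be categorical and are ordinal-encoded.
X = OrdinalEncoder().fit_transform(df[["temperature", "color", "taste", "aroma"]])
y = df["quality_status"]                   # assumed class label column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```
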