Articles

Internet Network Analysis with Hierarchy Token Bucket Method at Dhyana Pura University Trywanto Rina; Kadek Yota Ernanda Aryanto; I Made Gede Sunarya
Paradigma Vol. 25 No. 2 (2023): September 2023 Period
Publisher : LPPM Universitas Bina Sarana Informatika

DOI: 10.31294/p.v25i2.2354

Abstract

Bandwidth management is indispensable in computer networks, both to serve the needs of each individual user and to keep data traffic running smoothly. Dhyana Pura University is a private university that relies on information technology to achieve optimal performance. Observations of throughput, delay, packet loss, and jitter show that bandwidth had not been managed properly. Bandwidth management was implemented on a MikroTik Cloud Core Router and on a PC router running Ubuntu Server 16.04. Managing bandwidth is one way to reduce performance degradation, and good bandwidth management is expected to provide the right Quality of Service (QoS) for each internet service. The Hierarchy Token Bucket (HTB) method, a queuing discipline that regulates the bandwidth allocated to each internet user, produced more optimal results and was easier to tailor to the desired needs: bandwidth is divided evenly and no single user can consume an excessive share, which increases employee satisfaction with the internet service. Measuring employee satisfaction with the Customer Satisfaction Index (CSI) method, the HTB implementation achieved a total satisfaction index of 66.154% (very satisfied category), with 65.32% for throughput, 67.14% for delay, 66.50% for packet loss, and 65.40% for jitter. Thus, implementing the Hierarchy Token Bucket (HTB) method on the internet network at Dhyana Pura University is feasible, with a satisfied predicate.
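HTB builds on the classic token bucket: each class accumulates byte tokens at its configured rate, and a packet may pass only if enough tokens are available. A single-bucket sketch in Python (hypothetical rate and burst values, not the university's actual router configuration):

```python
# Minimal token-bucket rate limiter sketch. The rate and burst values are
# hypothetical and only illustrate the mechanism HTB generalizes hierarchically.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.capacity = burst_bytes   # maximum bucket size (burst allowance)
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # packet conforms: send immediately
        return False      # packet exceeds the allowance: queue or drop

bucket = TokenBucket(rate_bps=125_000, burst_bytes=10_000)  # ~1 Mbit/s
print(bucket.allow(1500, now=0.0))  # True: the burst allowance covers it
```

In real HTB, child classes can additionally borrow unused tokens from their parent class, which is what lets bandwidth be shared evenly yet capped per user.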
Assessment of IT Infrastructure Governance and Management of Bank BPD XYZ Using COBIT 2019 I Wayan Budiana; Kadek Yota Ernanda Aryanto; I Made Gede Sunarya
MALCOM: Indonesian Journal of Machine Learning and Computer Science Vol. 4 No. 1 (2024): MALCOM January 2024
Publisher : Institut Riset dan Publikasi Indonesia

DOI: 10.57152/malcom.v4i1.1043

Abstract

Bank BPD XYZ operates 144 applications managed in-house in its Data Center and Disaster Recovery Center. Monitoring and audit reports from the regulator identified several problems related to IT infrastructure management. An assessment of IT infrastructure governance and management using the COBIT 2019 framework was therefore needed to measure the current capability and maturity levels and to analyze the gap against the expected levels, as a basis for improvement recommendations. Based on the design-factor forms completed by top-level management, the objective domains with an importance level ≥ 70 were EDM03 Ensured Risk Optimization, APO12 Managed Risk, APO13 Managed Security, and MEA03 Managed Compliance with External Requirements. Analysis of the questionnaires for the selected objectives yielded scores of 3.44 (68.87%, largely achieved) for EDM03, 3.45 (68.93%, largely achieved) for APO12, 3.63 (72.50%, largely achieved) for APO13, and 3.63 (72.50%, largely achieved) for MEA03. Each selected objective domain shows a gap: 0.57 for EDM03, 0.55 for APO12, 0.37 for APO13, and 0.38 for MEA03. These results describe how Bank BPD XYZ currently governs and manages its IT infrastructure.
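The reported gaps are the difference between the expected and current capability scores. A sketch of that calculation, assuming a target level of 4.0 (inferred from the reported gaps, not stated in the abstract; small rounding differences remain):

```python
# Capability-gap sketch for the selected COBIT 2019 objectives.
# Assumption: target capability level 4.0 (inferred, not stated in the source).
current = {"EDM03": 3.44, "APO12": 3.45, "APO13": 3.63, "MEA03": 3.63}
TARGET = 4.0

# gap = expected level - current level, rounded to two decimals
gaps = {obj: round(TARGET - score, 2) for obj, score in current.items()}
print(gaps)  # matches the reported gaps up to rounding (e.g. APO12 -> 0.55)
```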
Application of the Learning Vector Quantization Algorithm for Classification of Students with the Potential to Drop Out I Gusti Made Wahyu Krisna Widiantara; Kadek Yota Ernanda Aryanto; I Made Gede Sunarya
Brilliance: Research of Artificial Intelligence Vol. 3 No. 2 (2023): Brilliance: Research of Artificial Intelligence, Article Research November 2023
Publisher : Yayasan Cita Cendekiawan Al Khwarizmi

DOI: 10.47709/brilliance.v3i2.3155

Abstract

Universities, as providers of academic education services, are required to deliver an optimal educational process so that students become quality human resources. Student learning success reflects a university's success in implementing the higher-education process. One problem universities face in maintaining educational quality is student dropout. A high dropout rate can affect accreditation assessments and, in turn, the level of public trust. The number of dropouts can be minimized early by analyzing the factors that cause them, using data on students who graduated and students who dropped out. This data can be used to discover dropout patterns through classification with the learning vector quantization (LVQ) artificial neural network approach. The dataset comprised 4,053 records: 3,840 graduates and 213 dropouts. Such an imbalanced dataset can mislead the model, which tends to classify the majority class well while paying little attention to the minority class, so an oversampling technique was applied to address the problem. The results show that LVQ on the imbalanced data achieved an accuracy of 95.53%, a precision of 100%, a recall of 15.02%, and an F1-score of 0.26, while LVQ on the resampled data achieved an accuracy of 94.66%, a precision of 92.22%, a recall of 97.55%, and an F1-score of 0.95. The LVQ method can therefore classify dropout students with excellent results.
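The LVQ approach trains class prototypes with an attract/repel rule: the nearest prototype moves toward a sample it labels correctly and away from one it mislabels. A minimal LVQ1 sketch on toy two-class data (learning rate, epochs, and prototypes are illustrative, not the study's configuration):

```python
# Minimal LVQ1 sketch on toy data standing in for graduate (0) vs. dropout (1).
def nearest(protos, x):
    # Index of the closest prototype by squared Euclidean distance.
    return min(range(len(protos)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))

def train_lvq(data, labels, protos, proto_labels, lr=0.1, epochs=20):
    for _ in range(epochs):
        for x, y in zip(data, labels):
            i = nearest(protos, x)
            sign = 1.0 if proto_labels[i] == y else -1.0  # attract or repel
            protos[i] = [p + sign * lr * (a - p)
                         for p, a in zip(protos[i], x)]
    return protos

data = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
labels = [0, 0, 1, 1]
protos = train_lvq(data, labels, [[0.3, 0.3], [0.7, 0.7]], [0, 1])
print(protos)  # each prototype has moved toward its class cluster
```

On imbalanced data the minority class gets few attracting updates, which is exactly why the oversampling step above improves recall so dramatically.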
Lung Nodule Detection in CT Images with Pseudo Nearest Neighbour Rule Classification Jaya, I Nyoman Surya; Aryanto, Kadek Yota Ernanda; Divayana, Dewa Gede Hendra
JTIM : Jurnal Teknologi Informasi dan Multimedia Vol 5 No 4 (2024): February
Publisher : Puslitbang Sekawan Institute Nusa Tenggara

DOI: 10.35746/jtim.v5i4.463

Abstract

This research evaluates the classification performance of the Pseudo Nearest Neighbor Rule (PNNR) algorithm in detecting lung nodules in CT scan images. The PNNR classifier is used to reduce the influence of noise and outliers in the classification process, so that false positives (objects that are not nodules being predicted as nodules) can be reduced. The dataset consists of 200 patients from the public data of the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), evaluated with 4-fold cross-validation. In the preprocessing stage, Otsu segmentation is applied and the two largest blobs in the result are taken to determine the lung area to be analyzed. Feature extraction of the nodule candidates (white foreground pixels) is then obtained from a second Otsu segmentation, whose results are used to compute the shape features of each candidate (area, eccentricity, equivalent diameter, major axis length, minor axis length, and perimeter), producing the feature set used as training and test data for the PNNR classification. The proposed PNNR classification obtained an accuracy at the excellent classification level, although with a lower sensitivity, that is, recognition of true positives. In further research, the classification can be optimized by selecting the feature set used.
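The preprocessing described above hinges on Otsu thresholding, which picks the gray level that maximizes between-class variance. A pure-Python sketch on a toy pixel list (the real pipeline runs on CT slices):

```python
# Otsu thresholding sketch for a flat list of 8-bit pixel values.
def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, wb, sum_b = 0, -1.0, 0, 0.0
    for t in range(256):
        wb += hist[t]                 # background pixel count (<= t)
        if wb == 0 or wb == total:
            continue
        sum_b += t * hist[t]
        wf = total - wb               # foreground pixel count
        mb, mf = sum_b / wb, (sum_all - sum_b) / wf
        var_between = wb * wf * (mb - mf) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background vs. bright nodule-like pixels.
img = [10] * 50 + [12] * 30 + [200] * 15 + [210] * 5
t = otsu_threshold(img)
print(t)  # 12: the threshold separates the two modes
```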
Identification of Little Tuna Species Using Convolutional Neural Networks (CNN) Method and ResNet-50 Architecture Pusparani, Diah Ayu; Kesiman, Made Windu Antara; Aryanto, Kadek Yota Ernanda
Indonesian Journal of Artificial Intelligence and Data Mining Vol 8, No 1 (2025): March 2025
Publisher : Universitas Islam Negeri Sultan Syarif Kasim Riau

DOI: 10.24014/ijaidm.v8i1.31620

Abstract

Indonesia is home to a vast array of biodiversity, including various species of little tuna. However, identifying little tuna species is still challenging because of their diversity. The Indonesian Society and Fisheries Foundation (MDPI), which collects fisheries data manually, is prone to significant identification errors. The author therefore proposes Deep Learning, a Machine Learning method chosen for its ability to model complex data such as images and sounds, to facilitate the identification of little tuna. In this research, the ResNet-50 architecture is used in the modelling process with an original dataset of 500 images, and several test scenarios are applied. The best results are a global accuracy of 91% and a matrix accuracy of 95%, obtained using an augmented dataset with some parameter adjustments to the model. With identification this accurate, the MDPI Foundation is expected to manage fisheries data better and use it to support sustainable fisheries management.
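The abstract quotes both a global accuracy (91%) and a matrix accuracy (95%); one common way such a pair arises is overall accuracy versus mean per-class accuracy computed from the confusion matrix. A sketch with a made-up three-species matrix (the paper's actual matrix is not given):

```python
# Global vs. mean per-class accuracy from a confusion matrix
# (rows = true class, columns = predicted class; values are illustrative).
def accuracies(cm):
    total = sum(sum(row) for row in cm)
    global_acc = sum(cm[i][i] for i in range(len(cm))) / total
    per_class = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]
    return global_acc, sum(per_class) / len(per_class)

cm = [[95, 3, 2],   # hypothetical counts for three little tuna species
      [5, 14, 1],
      [2, 1, 27]]
g, m = accuracies(cm)
print(round(g, 2), round(m, 2))  # 0.91 0.85: the two metrics differ
```

With imbalanced class sizes the two figures diverge, which is why reporting both gives a fuller picture than either alone.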
MAPPING STORE CONSUMER ACTIVITY USING THE BACKGROUND SUBTRACTION METHOD Listartha, I Made Edy; Indrawan, Gede; Aryanto, Kadek Yota Ernanda
International Journal of Natural Science and Engineering Vol. 1 No. 2 (2017): July
Publisher : Lembaga Penelitian dan Pengabdian kepada Masyarakat

DOI: 10.23887/ijnse.v1i2.12468

Abstract

This study aims to build a heat map of consumer movement using background-subtraction techniques. The map is built from the coordinates of consumer locations detected in video, where consumer objects are separated from the background by background subtraction. Tests were performed on eleven videos of consumer activity with different characteristics, created using the Microsoft PowerPoint application. The simulated activities include walking straight, staying in place, walking back along a path already passed, pacing, disturbance from another object, the influence of color, and consumers meeting and walking alongside other consumers. Testing yielded an accuracy of 96.07% for detecting consumer movement; the missed detections occur because no technique is used to recognize the characteristics of consumer objects. The mapping closely follows the number of coordinates generated by motion detection, but inaccurate detection in the entrance and exit areas inflates the coordinate counts there. Filtering with a Region of Interest (ROI) in the survey area eliminates disturbances around the doors and in areas containing objects that produce non-consumer movement.
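The core loop described above (separate the consumer from the background, then accumulate detections into a heat map) can be sketched on toy frames; the threshold and frame size here are illustrative, not the study's parameters:

```python
# Background subtraction: mark pixels whose difference from the background
# frame exceeds a threshold, then accumulate the masks into a visit-count map.
def subtract(frame, background, thresh=30):
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def accumulate(heat, mask):
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            heat[y][x] += v
    return heat

background = [[10] * 4 for _ in range(4)]     # empty-store reference frame
heat = [[0] * 4 for _ in range(4)]

f1 = [row[:] for row in background]; f1[1][1] = 200  # "consumer" at (1,1)
f2 = [row[:] for row in background]; f2[1][2] = 200  # then moves to (1,2)
for f in (f1, f2):
    accumulate(heat, subtract(f, background))
print(heat[1])  # [0, 1, 1, 0]: the two visited cells
```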
USABILITY ANALYSIS OF THE SEKOLAH TINGGI PARIWISATA (STIPAR) TRIATMA JAYA WEBSITE USING THE USABILITY TESTING METHOD Indriyani, Ni Luh Putu Ratih; Dantes, Gede Rasben; Aryanto, Kadek Yota Ernanda
International Journal of Natural Science and Engineering Vol. 1 No. 2 (2017): July
Publisher : Lembaga Penelitian dan Pengabdian kepada Masyarakat

DOI: 10.23887/ijnse.v1i2.12469

Abstract

This research aims to determine the results of a usability analysis of the Sekolah Tinggi Pariwisata (STIPAR) Triatma Jaya website from the user's perspective, and to derive recommendations for improving the website from a usability standpoint. The methods used are Usability Testing with Performance Measurement and Retrospective Think Aloud (RTA) techniques, together with a System Usability Scale (SUS) questionnaire. The results show that the website is still not effective, as seen from the errors made by lecturer and student users while performing the tasks. Statistically, the website is efficient for lecturers but not for students: for lecturers, 6 of 10 tasks show no significant time difference, while for students only 4 of 10 tasks do. In terms of satisfaction, both lecturers and students remain less than satisfied, as shown by SUS scores of 63.28 for lecturers and 58.44 for students. It can be concluded that the website has not fulfilled the criteria of a product with good usability, because none of the three aspects (effectiveness, efficiency, and user satisfaction) has been met. The improvement recommendations therefore focus on adjusting the layout, changing language and terms, adding features, adjusting menu names, menu structure, and menu placement, adding content, and simplifying menus, delivered as wireframe recommendation pages for the website.
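The SUS scores quoted (63.28 and 58.44) come from the standard SUS formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range. A sketch with hypothetical responses (the paper's raw questionnaires are not given):

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    # responses: ten 1-5 Likert answers, item 1 first.
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # positively-worded items
    even = sum(5 - r for r in responses[1::2])  # negatively-worded items
    return (odd + even) * 2.5                   # scale 0-40 up to 0-100

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Scores around 68 are conventionally read as average usability, which is consistent with the paper treating 63.28 and 58.44 as "less than satisfied".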
IMPLEMENTATION OF ADABOOST-BASED C4.5 AND NAIVE BAYES METHODS TO PREDICT CREDITWORTHINESS Nugraha, Putu Gede Surya Cipta; Dantes, Gede Rasben; Aryanto, Kadek Yota Ernanda
International Journal of Natural Science and Engineering Vol. 1 No. 2 (2017): July
Publisher : Lembaga Penelitian dan Pengabdian kepada Masyarakat

DOI: 10.23887/ijnse.v1i2.12470

Abstract

At PT. BPR XYZ, credit is a vital issue: when many debtors are delinquent in their payments, the bank's non-performing loan (NPL) ratio rises, and an NPL value above 5% indicates that the bank is not healthy. This study therefore implements data mining methods to determine the accuracy of creditworthiness predictions at PT. BPR XYZ, so that future credit problems can be mitigated. The methods used in the prediction process are C4.5 and Naïve Bayes; both are implemented and their accuracies compared to see which predicts creditworthiness better. Both methods are also combined with the AdaBoost method, with the aim of increasing prediction accuracy. The comparison shows that C4.5 is the more accurate method, at 90.00% accuracy with a precision of 86.67%, against 70.00% accuracy with a precision of 79.71% for Naïve Bayes. Adding AdaBoost raises the accuracy further, to 91.54% for C4.5 and 78.13% for Naïve Bayes. Thus, applying AdaBoost to C4.5 and Naïve Bayes improves the accuracy of creditworthiness prediction at PT. BPR XYZ, and the AdaBoost-based C4.5 method can be recommended for PT. BPR XYZ's future creditworthiness predictions.
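AdaBoost's boost to both classifiers comes from its re-weighting step: samples the current weak learner (here a stand-in for C4.5 or Naïve Bayes) misclassifies gain weight, so the next round focuses on them. A one-round sketch with toy labels:

```python
import math

# One AdaBoost round: compute the learner's vote weight (alpha) from its
# weighted error, then re-weight the samples. Data here are illustrative.
def adaboost_round(weights, y_true, y_pred):
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    alpha = 0.5 * math.log((1 - err) / err)   # learner's vote weight
    new_w = [w * math.exp(-alpha if t == p else alpha)
             for w, t, p in zip(weights, y_true, y_pred)]
    z = sum(new_w)                            # normalize to a distribution
    return [w / z for w in new_w], alpha

weights = [0.25] * 4
y_true = [1, 1, -1, -1]                       # 1 = creditworthy, -1 = not
y_pred = [1, 1, 1, -1]                        # one mistake on the third sample
weights, alpha = adaboost_round(weights, y_true, y_pred)
print(weights)  # the misclassified sample now carries half the total weight
```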
CIGARETTE OBJECT DETECTION IN VIDEO BASED ON IMAGE PROCESSING USING THE HAAR CASCADE CLASSIFIER METHOD Sanjaya, Kadek Oki; Indrawan, Gede; Aryanto, Kadek Yota Ernanda
International Journal of Natural Science and Engineering Vol. 1 No. 3 (2017): October
Publisher : Lembaga Penelitian dan Pengabdian kepada Masyarakat

DOI: 10.23887/ijnse.v1i3.12938

Abstract

Object detection is a topic widely studied as a specialty within image processing. Although applications have been deployed, the technology is not yet mature, and further research is needed to obtain the desired results. The aim of the present study is to detect cigarette objects in video using the Viola-Jones method (Haar Cascade Classifier). This method is known for its speed and high accuracy because it combines several concepts (Haar features, the integral image, AdaBoost, and the cascade classifier) into a single detection method. In this research, cigarette detection was tested on sample videos at resolutions of 160x120, 320x240, and 640x480 pixels, with either one or two cigarette objects in view. The results show the highest average accuracy, 93.3% with one cigarette object and 86.7% with two, on the 640x480-pixel video, while the lowest accuracy, 90% with one object and 81.7% with two, occurred at the lowest resolution of 160x120 pixels. The average error rate was inversely related to the accuracy. For the detection system to recognize cigarette objects better, the number of samples in the database should be increased so that it represents various types of cigarettes under various conditions, and new parameters related to the cigarette object could be added.
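Of the concepts the method combines, the integral image is what makes Haar feature evaluation fast: each entry stores the sum of all pixels above and to the left, so any rectangle sum costs just four lookups. A small sketch (toy 3x3 image, not video data):

```python
# Integral image with a zero border row/column, plus constant-time rectangle sums.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, top, left, height, width):
    # Sum of img[top:top+height][left:left+width] from four corner lookups.
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 28 = 5 + 6 + 8 + 9
```

A Haar feature is just the difference of two or three such rectangle sums, which is why the cascade can scan every window position at video rates.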
INTEGRATION OF REMOTE SENSING TECHNOLOGY AND MACHINE LEARNING IN WEB GIS FOR FLOOD POTENTIAL MAPPING Ony Andewi, Putu; Seputra, Ketut Agus; Aryanto, Kadek Yota Ernanda; Dewi, Luh Joni Erawati
Jurnal Pendidikan Teknologi dan Kejuruan Vol. 22 No. 1 (2025): Edisi Januari 2025
Publisher : Universitas Pendidikan Ganesha

DOI: 10.23887/jptkundiksha.v22i1.87455

Abstract

Flooding is a natural phenomenon that frequently poses significant challenges across Indonesia, driven by factors such as rainfall, river conditions, upstream landscapes, land-use patterns, and sea-level rise. These events often lead to severe consequences, including the spread of waterborne diseases, destruction of infrastructure, depletion of natural resources, and economic disruption. One proactive mitigation measure is mapping potential flood-risk areas. This study used Landsat 8 Level-2, Collection-2, Tier-1 satellite imagery processed on the Google Earth Engine (GEE) platform to derive inputs such as the Digital Elevation Model (DEM), Topographic Position Index (TPI), Normalized Difference Vegetation Index (NDVI), and Normalized Difference Water Index (NDWI). These served as input variables for a Random Forest model classifying areas into high, medium, and low flood-risk categories. The model achieved 86% accuracy when evaluated with a confusion matrix, with precision, recall, and F1-score metrics validating its performance. The model was integrated into a WebGIS service through Flask, exposing an API that supports real-time retrieval of flood-risk data by third-party applications. The front-end interface, built with LeafletJS, provides an interactive, user-friendly map visualization of flood-risk levels. The results demonstrate that the Random Forest model classifies flood risk effectively, while the WebGIS service offers a practical tool for visualizing and disseminating flood-risk information, with the potential to support disaster management efforts and enhance community preparedness against flooding.
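The NDVI and NDWI inputs mentioned above are simple normalized band ratios; a sketch using the Landsat 8 band roles (NIR = B5, Red = B4, Green = B3) with made-up reflectance values, since the study's actual pixel data are not given:

```python
# Normalized difference indices used as Random Forest inputs.
def ndvi(nir, red):
    # Vegetation reflects strongly in NIR and absorbs red light.
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    # Open water reflects green light and absorbs NIR.
    return (green - nir) / (green + nir)

print(round(ndvi(nir=0.45, red=0.08), 2))   # high NDVI -> vegetated pixel
print(round(ndwi(green=0.10, nir=0.02), 2)) # high NDWI -> open-water pixel
```

Per-pixel index values like these, stacked with DEM and TPI, form the feature vectors the Random Forest classifies into the three flood-risk categories.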