Articles

Point of Interest (POI) Recommendation System using Implicit Feedback Based on K-Means+ Clustering and User-Based Collaborative Filtering
Sulis Setiowati; Teguh Bharata Adji; Igi Ardiyanto
Computer Engineering and Applications Journal Vol 11 No 2 (2022)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v11i2.399

Abstract

Recommendation systems always involve huge volumes of data, which causes scalability issues that not only increase processing time but also reduce accuracy. In addition, the type of data used greatly affects the quality of the recommendations. Recommendation systems commonly use two types of data, namely implicit (binary) ratings and explicit (scalar) ratings. Binary ratings produce lower accuracy when they are not handled properly. Therefore, this research proposes optimized K-Means+ clustering combined with user-based collaborative filtering. The K-Means clustering is optimized by selecting the K value with the Davies-Bouldin Index (DBI) method. The experimental results show that selecting K in this way produces better clustering than the Elbow Method. The combination of K-Means+ and User-Based Collaborative Filtering (UBCF) yields a precision of 8.6% and an f-measure of 7.2%, respectively. Compared with the DBSCAN algorithm combined with UBCF, the proposed method achieves better accuracy, with a 1% increase in precision. These results show that K-Means+ with UBCF can handle implicit feedback datasets and improve precision.
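For readers who want a concrete picture of the K-selection step, a minimal sketch follows, assuming a scikit-learn environment and a random placeholder interaction matrix in place of the real implicit-feedback data; it is not the authors' implementation.

```python
# Hedged sketch: choose K for K-Means by minimising the Davies-Bouldin Index (DBI),
# as the abstract describes. The binary user-item matrix below is placeholder data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(42)
user_item = (rng.random((200, 50)) > 0.8).astype(float)  # implicit (binary) ratings

best_k, best_dbi = None, np.inf
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(user_item)
    dbi = davies_bouldin_score(user_item, labels)  # lower DBI means better-separated clusters
    if dbi < best_dbi:
        best_k, best_dbi = k, dbi

print(f"selected K = {best_k} (DBI = {best_dbi:.3f})")
```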
The Role of Contour and Slope in Signature Authenticity Recognition Using Dynamic Time Warping and Polar Fourier Transform
Ignatia Dhian Estu Karisma Ratri; Hanung Adi Nugroho; Teguh Bharata Adji
Jurnal Informatika Vol 12, No 2 (2016): Jurnal Teknologi Komputer dan Informatika
Publisher : Universitas Kristen Duta Wacana

DOI: 10.21460/inf.2016.122.495

Abstract

Signatures have so far been validated only manually, which opens the possibility of building a system for handwritten signature recognition. The objective of this research is to improve handwritten signature recognition by combining methods with different characteristics. Contour and slope are used as the main features in this research; both are extracted from the image and matched using Dynamic Time Warping (DTW). Another feature extraction method used is the Polar Fourier Transform (PFT). The classification method employed is the Support Vector Machine (SVM). The results show that the combination of DTW and PFT with SVM classification gives better results in verifying authentic handwritten signatures, with an accuracy of 93.23%. It is expected that these results can be applied to the verification of authentic handwritten signatures in everyday use in the near future.
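As an illustration of the matching step named in the abstract, here is a minimal Dynamic Time Warping sketch over two 1-D sequences such as contour or slope profiles; it is illustrative only, not the authors' code.

```python
# Hedged sketch: classic DTW distance between two 1-D feature sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Two slope profiles of different lengths can still be aligned and compared.
print(dtw_distance([0.1, 0.5, 0.9, 0.4], [0.1, 0.4, 0.5, 0.9, 0.4]))
```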
Performance Improvement Using CNN for Sentiment Analysis
Moch. Ari Nasichuddin; Teguh Bharata Adji; Widyawan Widyawan
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 2, No 1 (2018): March 2018
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.36642

Abstract

Deep Learning approaches provide great results in various fields, especially in sentiment analysis. One Deep Learning method is the CNN, which has achieved high accuracy in several previous studies. However, some parts of the training process can still be improved to raise the accuracy and shorten the training time. In this paper, we try to improve the accuracy and processing time of sentiment analysis using a CNN model. By tuning the filter size, framework, and pre-training, the results show that using a smaller filter size and word2vec pre-training provides greater accuracy than some previous studies.
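For context, a minimal text CNN of the kind the abstract discusses might look as follows; the hyperparameters (vocabulary size, embedding dimension, filter size) are assumptions, and the embedding weights could be initialised from pre-trained word2vec vectors rather than learned from scratch.

```python
# Hedged sketch of a small text CNN for binary sentiment classification (Keras).
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim = 10_000, 100           # assumed hyperparameters

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, embed_dim),  # weights could be set from word2vec (pre-training)
    layers.Conv1D(filters=128, kernel_size=3, activation="relu"),  # small filter size
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # positive / negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```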
Study of Undersampling Method: Instance Hardness Threshold with Various Estimators for Hate Speech Classification
Naufal Azmi Verdikha; Teguh Bharata Adji; Adhistya Erna Permanasari
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 2, No 2 (2018): June 2018
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.42152

Abstract

A text classification system is needed to address the problem of hate speech in social media. However, hate speech texts are hard to find in social media, which makes the distribution of training data unbalanced (imbalanced data). Classification with imbalanced data leads to poor performance. There are several methods to solve this problem, one of which is undersampling with the Instance Hardness Threshold (IHT) method. IHT balances the dataset by eliminating data that are frequently misclassified; to find such data, IHT requires an estimator, which is a classifier. This research compares estimators for the IHT method to solve the imbalanced data problem in hate speech classification using TF-IDF weighting. The class ratio of the dataset after undersampling, the duration of the undersampling process, and the Index of Balanced Accuracy (IBA) evaluation are used to determine the best IHT configuration. The results show that IHT using Logistic Regression (IHT(LR)) has the fastest undersampling process (1.91 s), perfectly balances the dataset with a 1:1 class ratio, and has the best IBA evaluation in all estimation processes. This makes IHT(LR) the best method for solving the imbalanced data problem in hate speech classification.
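The IHT(LR) configuration can be sketched with imbalanced-learn roughly as follows; the tiny corpus and labels are placeholders, not data from the study.

```python
# Hedged sketch: Instance Hardness Threshold undersampling with a Logistic Regression
# estimator over TF-IDF features, as named in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from imblearn.under_sampling import InstanceHardnessThreshold

texts = [f"hateful slur example {i}" for i in range(20)] + \
        [f"ordinary friendly post {i}" for i in range(80)]   # placeholder, imbalanced corpus
labels = [1] * 20 + [0] * 80                                  # 1 = hate speech, 0 = not

X = TfidfVectorizer().fit_transform(texts)                    # TF-IDF weighting
iht = InstanceHardnessThreshold(estimator=LogisticRegression(max_iter=1000), random_state=42)
X_res, y_res = iht.fit_resample(X, labels)                    # removes frequently misclassified majority samples
print(X.shape[0], "->", X_res.shape[0], "samples after undersampling")
```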
Relational into Non-Relational Database Migration with Multiple-Nested Schema Methods on Academic Data
Teguh Bharata Adji; Dwi Retno Puspita Sari; Noor Akhmad Setiawan
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 3, No 1 (2019): March 2019
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.46503

Abstract

The rapid development of internet technology has increased the need for data storage and processing technology. One application is managing academic data records at educational institutions. With the massive growth of information, a decline in traditional database performance is inevitable, so many organisations choose to migrate to NoSQL, a technology able to overcome the shortcomings of traditional databases. However, existing SQL-to-NoSQL migration tools have not been able to represent SQL data relations in NoSQL without limiting query performance. In this paper, a transformation system that migrates a relational MySQL database into the non-relational database MongoDB was developed, using the Multiple Nested Schema method for academic databases. The development began with the design of a transformation scheme, which was then implemented in the migration process using PDI/Kettle. Testing covered three aspects, namely query response time, data integrity, and storage requirements. The results showed that the developed system successfully represented SQL data relations in NoSQL: complex queries were 13.32 times faster on the migrated database, basic queries involving SQL transaction tables were 28.6 times faster on the migrated database, and basic queries not involving SQL transaction tables were 3.91 times faster on the migration source. This confirms that the Multiple Nested Schema method overcomes the poor performance of queries involving many JOIN operations. The system was also proven to maintain data integrity in all tested queries. The storage test showed that the migrated database required 10.53 times more storage than the source database, due to the large amount of data redundancy produced by the transformation process. However, storage is currently not the top priority in data processing technology, so the larger storage requirement is accepted as the cost of obtaining efficient query performance, which is still considered the first priority.
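To make the nested-schema idea concrete, here is a hedged sketch (with hypothetical academic tables, not the authors' PDI/Kettle job) of how related SQL rows can be embedded as one MongoDB document, trading storage redundancy for JOIN-free reads.

```python
# Hedged sketch: embed rows from related SQL tables as nested documents in MongoDB.
from pymongo import MongoClient

# Rows as they might come from relational tables (students, courses, enrollments).
student = {"student_id": 1, "name": "Alice"}
enrollments = [
    {"student_id": 1, "course_id": "EE101", "grade": "A"},
    {"student_id": 1, "course_id": "EE102", "grade": "B"},
]
courses = {"EE101": {"title": "Circuits"}, "EE102": {"title": "Signals"}}

# Embed course details inside each enrollment, and enrollments inside the student.
doc = dict(student)
doc["enrollments"] = [{**e, "course": courses[e["course_id"]]} for e in enrollments]

client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB instance
client.academic.students.insert_one(doc)           # one query now returns the whole record
```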
Design of Web-Based Cashier and Spare Part Warehouse Application Display (Case Study at Surya Motor Shop)
Muhammad Esa Permana Putra; Teguh Bharata Adji; Adhistya Erna Permanasari
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 4, No 2 (2020): June 2020
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.53512

Abstract

A cashier and spare-parts warehouse application is an information system that facilitates financial reporting and item inventory. Such systems have become a necessity in almost all large and small-scale businesses. The existing information system at the Surya Motor Shop does not have a display that helps users operate the company's financial and transaction systems according to its needs. The information system uses Bootstrap with HTML, CSS, and JavaScript. In this paper, an interactive display able to accommodate web users' responses was developed by building a prototype with Bootstrap for the Surya Motor Shop. This was done to digitise the transaction system, making it easier to report the company's item inventory and finances. The prototype was developed using the Elements of User Experience method, a user-centered design process. After the prototype was developed, a test was carried out to determine the quality of the user experience using the User Experience Questionnaire (UEQ) method. The UEQ testing showed that the developed prototype interface had a positive level of user experience; compared with the benchmarks set by UEQ, the test results were above the benchmark mean, except for the pull factor, which was still below the benchmark average.
Serendipity Identification Using Distance-Based Approach
Widhi Hartanto; Noor Akhmad Setiawan; Teguh Bharata Adji
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 5, No 1 (2021): March 2021
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.62344

Abstract

The recommendation system is a method for helping consumers find products that fit their preferences. However, recommendations based merely on user preference are no longer satisfactory; consumers expect recommendations that are novel, unexpected, and relevant. This requires developing a serendipity recommendation system that matches the character of serendipitous data. However, researchers still debate a common definition of serendipity. Therefore, our study identifies the character of serendipitous data by directly using the serendipity ground truth from the well-known MovieLens dataset. The identification is based on a distance-based approach using collaborative filtering and the k-means clustering algorithm: collaborative filtering is used to calculate similarity values between data, while k-means is used to cluster the collaborative filtering results. The resulting clusters are used to determine the position of the serendipity cluster. The results show that the average distance between the recommended movie cluster and the serendipity movie cluster is 0.85 units, which is neither the closest nor the farthest cluster from the recommended movie cluster.
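A minimal sketch of the distance-based idea, using assumed placeholder item vectors rather than the study's MovieLens features, could look like this.

```python
# Hedged sketch: cluster item vectors with k-means and measure how far the cluster
# containing serendipitous movies sits from the cluster of recommended movies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
item_vectors = rng.random((300, 20))  # placeholder for item profiles from collaborative filtering

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(item_vectors)
centroids = km.cluster_centers_

recommended_cluster = 0               # hypothetical: cluster holding the recommended movies
serendipity_cluster = 5               # hypothetical: cluster holding the serendipitous movies
dist = np.linalg.norm(centroids[recommended_cluster] - centroids[serendipity_cluster])
print(f"centroid distance between the two clusters: {dist:.2f}")
```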
User Curiosity Factor in Determining Serendipity of Recommender System
Arseto Satriyo Nugroho; Igi Ardiyanto; Teguh Bharata Adji
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 5, No 3 (2021): September 2021
Publisher : Department of Electrical Engineering and Information Technology,Faculty of Engineering UGM

DOI: 10.22146/ijitee.67553

Abstract

A recommender system (RS) is created to recommend, from a huge selection of items, those that will be useful to e-commerce users, and it thereby prevents users from being flooded with information that is irrelevant to them. Unlike information retrieval (IR) systems, an RS aims to present information to users that is both accurate and preferably useful. Too much focus on accuracy in RS may lead to an overspecialization problem, which decreases its effectiveness. Therefore, the trend in RS research is moving beyond accuracy towards measures such as serendipity. Serendipity can be described as an unexpected discovery that is useful. Since the concept of a recommendation system is still evolving, formalizing the definition of serendipity in a recommendation system is very challenging. One known subjective factor of serendipity is curiosity. While some researchers have already addressed the curiosity factor, the relationships between the various serendipity components as perceived by users and their curiosity levels have yet to be researched. In this paper, a method to determine a user curiosity model by considering the variation of rated items is presented, and its relation to serendipity components is validated using existing user feedback data. The findings show that the curiosity model is related to some, but not all, user-perceived values of serendipity. Moreover, it also has a positive effect on broadening user preference.
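One plausible, heavily hedged reading of "variation of rated items" is a diversity score over the genres of a user's rated movies; the sketch below illustrates that reading only and is not the paper's actual model.

```python
# Hedged sketch: curiosity scored as the Shannon entropy of the genre distribution
# of the items a user has rated (more varied rated items -> higher score).
from collections import Counter
import math

def curiosity_score(rated_genres):
    counts = Counter(rated_genres)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(curiosity_score(["action", "action", "drama", "comedy", "sci-fi"]))
```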
Shape analysis for classification of breast nodules on digital ultrasound images
Hanung Adi Nugroho; Hesti Khuzaimah Nurul Yusufiyah; Teguh Bharata Adji; Widhia K.Z Oktoeberza
Indonesian Journal of Electrical Engineering and Computer Science Vol 13, No 2: February 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v13.i2.pp837-844

Abstract

One of the imaging modalities for early detection of breast cancer malignancy is ultrasonography (USG). Malignancy can be analysed from the characteristics of the nodule shape. This study aims to develop a method for classifying breast nodule shapes into two classes, namely regular and irregular. The input image is pre-processed using a combination of an adaptive median filter and speckle reduction bilateral filtering (SRBF) to reduce speckle noise and to eliminate the image label. The filtered image is then segmented based on active contours, followed by a feature extraction process. Nine extracted features, i.e. roundness, slimness, and the seven invariant moments, are used to classify the nodule shape using a multi-layer perceptron (MLP). The performance of the proposed method is evaluated on 105 breast nodule images comprising 57 regular and 48 irregular nodules. The classification achieves an accuracy, sensitivity, and specificity of 96.20%, 97.90%, and 94.70%, respectively. These results indicate that the proposed method successfully classifies breast nodule images based on shape analysis.
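For illustration, the nine shape features named in the abstract can be computed roughly as follows with OpenCV on a synthetic binary mask; the slimness definition here is one common aspect-ratio variant and is an assumption, not necessarily the authors' formula.

```python
# Hedged sketch: roundness, a slimness-style aspect ratio, and the seven Hu invariant
# moments of a segmented nodule contour, computed on a synthetic binary mask.
import cv2
import numpy as np

mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (60, 30), 0, 0, 360, 255, -1)  # stand-in for a segmented nodule

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

area = cv2.contourArea(cnt)
perimeter = cv2.arcLength(cnt, True)
roundness = 4 * np.pi * area / (perimeter ** 2)              # 1.0 for a perfect circle

x, y, w, h = cv2.boundingRect(cnt)
slimness = min(w, h) / max(w, h)                             # assumed aspect-ratio definition

hu = cv2.HuMoments(cv2.moments(cnt)).flatten()               # seven invariant moments
features = np.concatenate(([roundness, slimness], hu))       # nine features fed to an MLP classifier
print(features)
```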
Performance of Load Balancing and Fault Tolerance Methods on Chat Application Servers
Sampurna Dadi Riskiono; Selo Sulistyo; Teguh Bharata Adji
Retii Prosiding Seminar Nasional ReTII ke-11 2016
Publisher : Institut Teknologi Nasional Yogyakarta


Abstract

Social network applications have grown very quickly, with various applications supported by smart devices as the platform to run them. One widely used social network application is the chat application. When the number of users accessing the chat service increases and the server cannot cope, the service stops because of the excessive load received by a single server. Research is therefore needed to design a server system that can handle the large number of incoming service requests so that the load on the chat server can be managed, with the aim of improving service for every request sent by users. One solution to this problem is the use of multiple servers. A method is needed to distribute the load evenly across the servers, namely a load balancing method that manages this distribution; evaluations are carried out before and after load balancing is applied. High availability, in turn, is obtained when the server is able to fail over, i.e. switch to another server, when a failure occurs. Thus, applying load balancing and fault tolerance can improve the performance of the chat application service and reduce the errors that occur. Keywords: load balancer, fault tolerance, chat server system, overload.
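As a rough illustration of the two ideas in this abstract, the sketch below shows round-robin distribution over a pool of chat servers with failover to the next healthy server; the hostnames, port, and TCP health check are assumptions, not the paper's actual configuration.

```python
# Hedged sketch: round-robin load balancing with a simple TCP health check for failover.
import itertools
import socket

SERVERS = [("chat1.example.local", 5222), ("chat2.example.local", 5222)]
_rr = itertools.cycle(SERVERS)

def is_alive(host, port, timeout=1.0):
    """Return True if a TCP connection to the server succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server():
    """Round-robin over the pool, skipping servers that fail the health check (failover)."""
    for _ in range(len(SERVERS)):
        host, port = next(_rr)
        if is_alive(host, port):
            return host, port
    raise RuntimeError("no chat server available")
```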