Articles

Perbandingan Performa Relational, Document-Oriented dan Graph Database Pada Struktur Data Directed Acyclic Graph Setialana, Pradana; Adji, Teguh Bharata; Ardiyanto, Igi
Jurnal Buana Informatika Vol 8, No 2 (2017): Jurnal Buana Informatika Volume 8 Nomor 2 April 2017
Publisher : Universitas Atma Jaya Yogyakarta

DOI: 10.24002/jbi.v8i2.1079

Abstract

A Directed Acyclic Graph (DAG) is a directed graph without cycles, commonly found in social networks and genealogy data. Because each database type performs differently depending on the data structure it handles, a suitable database type for DAG data should be evaluated and chosen. A performance comparison among a relational database (PostgreSQL), a document-oriented database (MongoDB), and a graph database (Neo4j) on a DAG dataset is therefore conducted to identify the most appropriate database type. The performance test runs on Node.js under Windows 10 and uses a dataset of 3910 nodes in single write synchronous (SWS) and single read (SR) operations. PostgreSQL takes 0.64 ms on SWS and 0.32 ms on SR; MongoDB takes 0.64 ms on SWS and 4.59 ms on SR; Neo4j takes 9.92 ms on SWS and 8.92 ms on SR. Hence, the relational database (PostgreSQL) performs better in SWS and SR operations than the document-oriented database (MongoDB) and the graph database (Neo4j).
Keywords: database performance, directed acyclic graph, relational database, document-oriented database, graph database
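The paper's benchmark was run in Node.js against real database servers; as a rough illustration of the measurement pattern only, the sketch below times a single-write and a single-read operation against an in-memory dict standing in for a database (all names here are hypothetical, not from the paper):

```python
import time

def time_op(op, runs=100):
    """Average wall-clock latency of one operation, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        op()
    return (time.perf_counter() - start) / runs * 1000.0

# In-memory dict standing in for a table of DAG edges.
store = {0: {"parent": None, "child": 0}}

def single_write():
    # One synchronous insert (SWS in the paper's terms).
    store[len(store)] = {"parent": 0, "child": len(store)}

def single_read():
    # One lookup by key (SR in the paper's terms).
    store.get(0)

sws_ms = time_op(single_write)
sr_ms = time_op(single_read)
print(f"SWS: {sws_ms:.4f} ms, SR: {sr_ms:.4f} ms")
```

Against a real backend, `single_write` and `single_read` would issue one query each through that database's Node.js driver; the averaging harness stays the same.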
ANALISIS SENTIMEN DATA PRESIDEN JOKOWI DENGAN PREPROCESSING NORMALISASI DAN STEMMING MENGGUNAKAN METODE NAIVE BAYES DAN SVM Saputra, Nurirwan; Adji, Teguh Bharata; Permanasari, Adhistya Erna
Dinamika Informatika Vol 5, No 1 (2015): Jurnal Dinamika Informatika
Publisher : Universitas PGRI Yogyakarta


Abstract

Jokowi is a public figure whose career has advanced very quickly, and he has not escaped public opinion, whether positive, neutral, or negative. Data about Jokowi containing positive, neutral, and negative comments from social media and political blogs are needed to determine the steps Jokowi should take to gain public trust. The collected data also need to be evaluated to show the urgency of applying preprocessing, namely normalization and stemming. Sentiment analysis is the study of analyzing a person's opinions, sentiments, evaluations, attitudes, and emotions as expressed in written language. This study uses search techniques for data collection so that retrieval is effective and efficient; specifically, Boolean searching with the "AND" operator. The collected data are labeled positive, neutral, or negative by the authors and then corrected by a linguist. Preprocessing is then performed: converting non-standard words into standard ones (normalization) using a dictionary, and finding root words (stemming) with the Sastrawi Master application. N-gram tokenization (unigram, bigram, and trigram) is applied to the sentences, followed by removal of common words that carry no valuable information in context (stopword removal), while emoticons are retained because they are symbols expressing a person's feelings in writing. The best accuracy in this study, 89.2655%, is obtained with both normalization and stemming using the SVM method, followed by normalization alone at 88.7006%, also using SVM.
No experiment was run on data with stemming alone, because stemming requires that normalization be applied to the data first.
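The n-gram tokenization plus SVM step described above can be sketched with scikit-learn. The toy training texts and parameters below are illustrative assumptions, not the paper's actual data or settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny hypothetical training set of normalized, stemmed comments.
texts = ["kerja bagus presiden", "program sukses rakyat senang",
         "kecewa janji gagal", "buruk lambat gagal total",
         "biasa saja netral", "tidak tahu netral saja"]
labels = ["positive", "positive", "negative", "negative",
          "neutral", "neutral"]

# Unigram-to-trigram features feeding a linear SVM, mirroring the
# paper's N-gram (unigram/bigram/trigram) tokenization before SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(texts, labels)

pred = model.predict(["program bagus rakyat senang"])[0]
print(pred)
```

In the actual study the input documents would be the normalized and stemmed comments, with stopwords removed and emoticons retained as tokens.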
The Social Engagement to Agricultural Issues using Social Network Analysis Widiyanti, Tanty Yanuar; Adji, Teguh Bharata; Hidayah, Indriana
IJID (International Journal on Informatics for Development) Vol. 10 No. 1 (2021): IJID June
Publisher : Faculty of Science and Technology, UIN Sunan Kalijaga Yogyakarta

DOI: 10.14421/ijid.2021.2185

Abstract

Twitter is a micro-blogging social medium that emphasizes speed of communication. In the 4.0 era, the government also promotes the distribution of information through social media to reach the community across various lines. In previous research, Social Network Analysis (SNA) was used to examine relationships between actors in a work environment, or as a basis for identifying technology adoption in decision making, whereas no study has used SNA to observe trends in people's responses to agricultural information. This study aims to observe the extent to which information about agriculture reaches the community, as well as the community's response to taking part in agricultural development. This article also identifies the actors who took part in disseminating the information. Data were taken from November 13 to 20, 2020 from Drone Emprit Academic, limited to 3000 nodes. The SNA measurements are represented by the values of Degree Centrality, Betweenness Centrality, Closeness Centrality, and Eigenvector Centrality. @AdrianiLaksmi has the highest Eigenvector Centrality and Degree Centrality: this account plays the greatest role in disseminating information and has many followers among the other accounts spreading the same information. The @RamliRizal account ranks highest in Betweenness Centrality, holding the most frequently referenced information, while the highest Closeness Centrality belongs to the @baigmac account, which was the fastest to retweet the initial information.
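The four centrality measures above can be computed with networkx. The toy graph below is a hypothetical retweet network, not the Drone Emprit data:

```python
import networkx as nx

# Hypothetical retweet network: an edge means two accounts interacted.
G = nx.Graph()
G.add_edges_from([
    ("user1", "hub"), ("user2", "hub"), ("user3", "hub"),
    ("hub", "bridge"), ("bridge", "source"), ("user4", "bridge"),
])

degree = nx.degree_centrality(G)            # share of direct connections
betweenness = nx.betweenness_centrality(G)  # lies on many shortest paths
closeness = nx.closeness_centrality(G)      # average distance to others
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)  # connected to
                                                           # central nodes

top_degree = max(degree, key=degree.get)
print(top_degree)
```

An account like `hub` here, with the most direct ties, plays the @AdrianiLaksmi role; a cut-point like `bridge` scores highest in betweenness, like @RamliRizal.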
EVALUASI METODE LOAD BALANCING Dan FAULT TOLERANCE PADA SISTEM DATABASE SERVER APLIKASI CHAT Hamka, Cakra Aminuddin; Adji, Teguh Bharata; Sulistyo, Selo
Edu Komputika Journal Vol 5 No 1 (2018): Edu Komputika Journal
Publisher : Jurusan Teknik Elektro Universitas Negeri Semarang

DOI: 10.15294/edukomputika.v5i1.23029

Abstract

Abstract: Social network applications have grown rapidly, with various applications supported by smart multiplatform devices. Among the applications most used are chat and social media apps. However, a problem arises when the number of users accessing a chat service is large: communication to the database server can stall and data can be lost, caused by the excessive load received by a single database server. Therefore, this study designs a database-server architecture that distributes communication requests and storage across more than one database server, which is important for increasing the availability of service for each user request. The study distributes communication service requests among the database servers using load balancing with HAProxy and two scheduling algorithms: round robin and least connection. Both algorithms were evaluated, compared, and applied across multiple database servers. The results show that the least connection algorithm has an average response time of 32.421 ms, smaller than the round robin algorithm's 35.813 ms. On throughput, the least connection algorithm achieves 211.267 Kb/s against 210.298 Kb/s for round robin. The least connection algorithm thus handles more packets and performs better than round robin, and a load-balancing implementation that distributes communication across multiple database servers is able to handle a large volume of communication services. Keywords: load balancing, chat database server, least connection, round robin, response time, throughput
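The two scheduling algorithms compared above are simple to state in code. This is a minimal sketch of the selection logic only (HAProxy implements both natively; the server names and connection counts are invented):

```python
from itertools import cycle

class RoundRobin:
    """Hand requests to backends in a fixed rotation, ignoring load."""
    def __init__(self, servers):
        self._next = cycle(servers)

    def pick(self, active):
        return next(self._next)

class LeastConnection:
    """Hand each request to the backend with the fewest active connections."""
    def pick(self, active):
        return min(active, key=active.get)

# Hypothetical active-connection counts per database backend.
active = {"db1": 5, "db2": 1, "db3": 3}

rr = RoundRobin(["db1", "db2", "db3"])
rr_order = [rr.pick(active) for _ in range(4)]   # rotation wraps around
lc_choice = LeastConnection().pick(active)        # lightest backend wins
print(rr_order, lc_choice)
```

The difference measured in the paper follows from this logic: least connection adapts to uneven request durations, while round robin keeps sending work to already-busy backends.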
Rekomendasi Berdasarkan Nilai Pretest Mahasiswa Menggunakan Metode Collaborative Filtering dan Bayesian Ranking Stefani, Brillian; Adji, Teguh Bharata; Kusumawardani, Sri Suning; Hidayah, Indriana
Edu Komputika Journal Vol 5 No 1 (2018): Edu Komputika Journal
Publisher : Jurusan Teknik Elektro Universitas Negeri Semarang

DOI: 10.15294/edukomputika.v5i1.23077

Abstract

Abstract: Self-Regulated Learning (SRL) skill can be improved by improving students' cognitive and metacognitive abilities. To improve metacognitive abilities, metacognitive support needs to be included in the e-learning process. One example is assisting students by giving feedback once they have finished specific activities. The purpose of this study was to develop a pedagogical agent able to give students feedback, particularly recommendations for the order of lesson sub-materials. Recommendations were given by considering students' pretest scores (prior knowledge) and computed with Collaborative Filtering and Bayesian Ranking methods. Based on MAP (Mean Average Precision) testing, the Item-based method achieved the highest MAP score, which was 1. Computation time for each method was measured to estimate its runtime complexity: Bayesian Ranking was shortest at 0.002 seconds, followed by Item-based at 0.006 seconds and User-based at 0.226 seconds, while Hybrid was longest at 0.236 seconds. Keywords: self-regulated learning, metacognitive, metacognitive support, feedback, pretest (prior knowledge), Collaborative Filtering, Bayesian Ranking, Mean Average Precision, runtime complexity
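The MAP metric used to compare the methods above is standard and short to define. A minimal sketch (the sub-material names and rankings are invented examples, not the study's data):

```python
def average_precision(recommended, relevant):
    """AP: average of precision values at each rank holding a relevant item."""
    hits, precisions = 0, []
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings):
    """MAP: mean of AP over all users (here, students)."""
    return sum(average_precision(rec, rel) for rec, rel in rankings) / len(rankings)

# Hypothetical recommended sub-material orders vs. the relevant sets.
rankings = [
    (["m1", "m2", "m3"], {"m1", "m2", "m3"}),  # perfect ranking, AP = 1.0
    (["m2", "m1", "m4"], {"m1", "m4"}),        # partial hit pattern
]
print(mean_average_precision(rankings))
```

A MAP score of 1, as reported for the Item-based method, means every student received a ranking in which all relevant sub-materials appeared before any irrelevant one.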
Stemming Influence on Similarity Detection of Abstract Written in Indonesia Tari Mardiana; Teguh Bharata Adji; Indriana Hidayah
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 14, No 1: March 2016
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v14i1.1926

Abstract

This paper discusses the effect of stemming with the Nazief and Adriani algorithm on similarity detection for abstracts written in Indonesian. Similarity detection on the contents of publication abstracts can serve as an early indication of whether plagiarism has occurred in a piece of writing. Text processing commonly adds a pre-processing step, one of which is stemming: reducing words to their root forms to maximize the search process. The result of the stemming process is converted into a set of word n-grams, and a similarity analysis using Fingerprint Matching is then applied to match texts against each other. Based on the F1-score, which balances precision and recall, detection that applies both stemming and stopword removal yields better similarity results, with an average of 42%. This is higher than similarity detection using the stemming process alone (31%) or without any text pre-processing (34%) when applying bigrams.
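The n-gram comparison step above can be illustrated with a simplified stand-in: word bigram sets compared by Jaccard overlap (the paper's Fingerprint Matching selects and hashes n-grams rather than comparing full sets, so treat this only as the underlying idea; the example texts are invented):

```python
def word_ngrams(text, n=2):
    """Set of word n-grams (bigrams by default) from a stemmed text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two n-gram sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "sistem deteksi kemiripan dokumen abstrak"
doc2 = "sistem deteksi kemiripan teks abstrak"
sim = jaccard(word_ngrams(doc1), word_ngrams(doc2))
print(f"{sim:.2f}")
```

Stemming raises these scores for genuinely similar texts because inflected variants of the same root collapse to identical n-grams before matching.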
Analisis Perbandingan Komputasi GPU dengan CUDA dan Komputasi CPU untuk Image dan Video Processing Bagus Kurniawan; Teguh Bharata Adji; Noor Akhmad Setiawan
Seminar Nasional Aplikasi Teknologi Informasi (SNATI) 2015
Publisher : Jurusan Teknik Informatika, Fakultas Teknologi Industri, Universitas Islam Indonesia


Abstract

Advancing technology drives various computing paradigms to keep developing, including the image processing and video processing techniques that people need to manipulate images for information purposes. Graphics Processing Unit (GPU) computing is a parallel-computing alternative that offers faster performance than Central Processing Unit (CPU) computing by exploiting the graphics card. This study analyzes the GPU parallel-computing technique with the Compute Unified Device Architecture (CUDA) and compares its performance against sequential CPU computing with OpenCV, analyzed using an experimental method. The experiments implement image and video processing for grayscale, negative, and edge detection operations. The results show that for the grayscale and negative image processing operations, GPU parallel computing is faster by 0.2 to 2 seconds, while for edge detection the GPU is faster by up to 14 seconds, or 2.8 times faster than CPU computing. For video processing, CPU computing outperforms GPU computing by a margin of 1-2 frames per second.
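The grayscale and negative operations benchmarked above are per-pixel transforms, which is what makes them easy to parallelize on a GPU. A minimal CPU-side sketch with NumPy (the actual study used CUDA kernels and OpenCV; the weights below are the common luminosity coefficients, assumed rather than taken from the paper):

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity grayscale: weighted sum of the R, G, B channels."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

def to_negative(img):
    """Negative: invert every 8-bit intensity."""
    return 255 - img

# Random 4x4 RGB image standing in for a real input frame.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

gray = to_grayscale(rgb.astype(np.float64))
neg = to_negative(gray)
print(gray.shape, neg.shape)
```

Each output pixel depends only on the corresponding input pixel, so a CUDA kernel can assign one thread per pixel; edge detection additionally reads a neighborhood, which is where the paper saw the largest GPU speedup.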
Appropriate Data mining Technique and Algorithm for Using in Analysis of Customer Relationship Management (CRM) in Bank Industry Maghfirah Maghfirah Maghfirah; Teguh Bharata Adji; Noor Akhmad Setiawan
Seminar Nasional Aplikasi Teknologi Informasi (SNATI) 2015
Publisher : Jurusan Teknik Informatika, Fakultas Teknologi Industri, Universitas Islam Indonesia


Abstract

Abstract: Customer Relationship Management (CRM) is an idea of increasing importance as a success factor for future business. CRM is the process of managing interactions between a company and its customers. Initially, this involved market segmentation to identify customers with high profit potential, with well-designed marketing strategies to influence the behavior of customers in those segments. In modern society, customers have become an important asset for companies, and efficient customer relationship management is the method needed to increase company profit. In the banking industry, for example, the CRM concept is applied in particular through one marketing-strategy model, Customer Segmentation, which aims to help the bank divide the market into distinguishable groups of customers with different needs, characteristics, or behaviors that may require separate products or marketing mixes. Customer Segmentation can be carried out with the help of data mining techniques, so that a segmentation matching the bank's needs can be produced, improving both the quality of service and the bank's revenue. Applying data mining to a banking CRM system should use the appropriate technique and algorithm; this paper therefore discusses how to determine the appropriate data mining technique and algorithm for a CRM system in banking. Keywords: Customer Relationship Management (CRM), Data Mining, Bank Customer Segmentation
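Customer segmentation as described above is commonly done with clustering. As one concrete instance of the general technique the paper surveys (the paper compares techniques rather than prescribing this one; the customer features and values below are invented), a k-means sketch with scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical bank customers: [account balance, monthly transactions].
customers = np.array([
    [200.0, 2], [350.0, 3], [150.0, 1],          # low-activity profile
    [9000.0, 40], [11000.0, 55], [9500.0, 48],   # high-activity profile
])

# Standardize so both features carry equal weight, then cluster.
X = StandardScaler().fit_transform(customers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Each cluster label then defines one customer segment, to which the bank can target a separate product or marketing mix.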
Texture Analysis for Skin Classification in Pornography Content Filtering Based on Support Vector Machine Hanung Adi Nugroho; Fauziazzuhry Rahadian; Teguh Bharata Adji; Ratna Lestari Budiani Buana
Journal of Engineering and Technological Sciences Vol. 48 No. 5 (2016)
Publisher : Institute for Research and Community Services, Institut Teknologi Bandung

DOI: 10.5614/j.eng.technol.sci.2016.48.5.6

Abstract

Nowadays, the Internet is one of the most important things in human life. Unlimited access to information gives people the potential to gather any data related to their needs. However, this sophisticated technology also has a bad side, for instance negative content, which can come in the form of images containing pornography. This paper presents the development of a skin classification scheme as part of a negative content filtering system. The data are trained on grey-level co-occurrence matrix (GLCM) texture features and then used to classify skin color with a support vector machine (SVM). Tests on skin classification in the skin and non-skin categories achieved accuracies of 100% and 97.03%, respectively. These results indicate that the proposed scheme has the potential to be implemented as part of a negative content filtering system.
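The GLCM features used above count how often pairs of grey levels co-occur at a fixed pixel offset. A minimal NumPy sketch of one GLCM and its contrast feature (libraries such as scikit-image provide this ready-made; the 4-level toy patch is invented, and in the paper the resulting features would feed the SVM classifier):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset, normalized to sum 1."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """GLCM contrast: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

# Toy 4-level image patch with blocky texture.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
M = glcm(patch)
print(round(contrast(M), 4))
```

Other GLCM statistics (energy, homogeneity, correlation) are computed from the same matrix, and together they form the texture feature vector for each image region.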
Social-Child-Case Document Clustering based on Topic Modeling using Latent Dirichlet Allocation Nur Annisa Tresnasari; Teguh Bharata Adji; Adhistya Erna Permanasari
IJCCS (Indonesian Journal of Computing and Cybernetics Systems) Vol 14, No 2 (2020): April
Publisher : IndoCEISS in colaboration with Universitas Gadjah Mada, Indonesia.

DOI: 10.22146/ijccs.54507

Abstract

Children are the future of the nation; all the treatment and learning they receive will affect their future. Nowadays, there are various kinds of social problems related to children. To ensure the right solution to a child's problem, social workers usually refer to social-child-case (SCC) documents to find similar cases from the past and adapt the solutions used there. Nevertheless, reading through a stack of documents to find similar cases is a tedious task and takes much time. Hence, this work categorizes those documents into several groups according to case type. We use topic modeling with the Latent Dirichlet Allocation (LDA) approach to extract topics from the documents and cluster them based on their similarities. The coherence score and the perplexity graph are used to determine the best model. The result is a model with 5 topics that match the targeted case types. This supports the reuse of knowledge about SCC handling and eases the search for documents with similar cases.
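The LDA step above can be sketched with scikit-learn. The snippets below are invented, anonymized stand-ins for SCC documents, and the topic count is fixed at 2 for the toy corpus, whereas the paper selected 5 via coherence and perplexity:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical document snippets covering two broad case types.
docs = [
    "child neglect parent custody home",
    "neglect custody parent welfare home",
    "school dropout education bullying teacher",
    "bullying school teacher education dropout",
]

# Bag-of-words counts feed LDA; each document gets a topic distribution.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
print(doc_topics.shape)
```

Each row of `doc_topics` is one document's mixture over topics; assigning each document to its highest-weight topic yields the case-type groups the social workers can then browse.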