Articles

Found 36 Documents
Design of Digital Evidence Collection Framework in Social Media Using SNI 27037: 2014 Adi Setya; Abba Suganda
JUITA : Jurnal Informatika JUITA Vol. 10 No. 1, May 2022
Publisher : Department of Informatics Engineering, Universitas Muhammadiyah Purwokerto

DOI: 10.30595/juita.v10i1.13149

Abstract

Social media is a place people use to socialize, but certain people also use it as a medium for crime. In the evidentiary process, law enforcement has a duty to present the evidence the suspect used to commit the crime. The method used to collect digital evidence from social media must rest on a clear scientific basis and guidelines; if the method is not recognized as a theory or method in digital forensics, it will undermine all expert testimony and evidence presented in court. Creating a framework recognized by all participants in the judicial process (judges, public prosecutors, defense counsel, witnesses, and defendants) provides a standard that keeps the evidentiary process sound. The framework created in this research updates a previous framework. Its design uses the Composite Logic method, which combines existing Digital Forensics Investigation Models frameworks to produce a new framework. Based on the available data and facts, this research has produced a framework that performs better than the previous one.
Tableau Business Intelligence Using the 9 Steps of Kimball’s Data Warehouse & Extract Transform Loading of the Pentaho Data Integration Process Approach in Higher Education Indrabudhi Lokaadinugroho; Abba Suganda Girsang; Burhanudin Burhanudin
Engineering, MAthematics and Computer Science (EMACS) Journal Vol. 3 No. 1 (2021): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v3i1.6816

Abstract

This paper discusses how to build a data warehouse (DW) for business intelligence (BI) for a typical marketing division in a university. The study uses a descriptive method that describes the object or subject under study as it is, aiming to systematically and precisely describe its facts and characteristics. The methodology comprises four phases: identification and source data collection, analysis, design, and results, each detailed in accordance with the nine steps of Kimball's data warehouse methodology and Pentaho Data Integration (PDI). Tableau, as a BI tool, does not ship with complete ETL tooling; combining PDI for ETL with the DW as a data source therefore makes Tableau more useful for presenting data, reducing the time needed to obtain strategic data from 2-3 weeks to 77 minutes.
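To make the ETL idea above concrete, here is a minimal Python (pandas) sketch of an extract-transform-load step into a star-schema table, analogous in spirit to a PDI transformation. It is an illustration only: the file, table, and column names (registered_at, program_code, fact_registration) are hypothetical, not the paper's actual jobs.

```python
# Minimal ETL sketch: extract raw data, build a date dimension, load a fact
# table. All table/column names are hypothetical illustrations.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw registrations exported from the operational system.
    return pd.read_csv(path, parse_dates=["registered_at"])

def transform(raw: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    # Transform: declare the grain (one row per registration, per Kimball)
    # and derive a conformed date dimension plus the fact table.
    raw = raw.dropna(subset=["applicant_id", "registered_at"])
    raw = raw.assign(date_key=raw.registered_at.dt.strftime("%Y%m%d").astype(int))
    dim_date = (raw[["date_key", "registered_at"]]
                .assign(year=lambda d: d.registered_at.dt.year,
                        month=lambda d: d.registered_at.dt.month)
                .drop(columns="registered_at")
                .drop_duplicates("date_key"))
    fact = raw[["applicant_id", "date_key", "program_code"]]
    return dim_date, fact

def load(dim_date: pd.DataFrame, fact: pd.DataFrame, conn) -> None:
    # Load: append into warehouse tables (conn: any SQLAlchemy connection).
    dim_date.to_sql("dim_date", conn, if_exists="append", index=False)
    fact.to_sql("fact_registration", conn, if_exists="append", index=False)
```

Tableau would then connect to the loaded fact and dimension tables directly as its data source.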
Design of a Data Warehouse and Dashboard for PT XYZ I Dewa Bagus Gde Khrisna Jayanta Nugraha; Agus Susanto; Abba Suganda Girsang
Jurnal Teknoif Teknik Informatika Institut Teknologi Padang Vol 10 No 1 (2022): TEKNOIF APRIL 2022
Publisher : ITP Press

DOI: 10.21063/jtif.2022.V10.1.17-24

Abstract

A contractor is a company that contracts with another person, government, or company to supply goods or complete certain services. PT XYZ holds a large amount of project contract data that has not yet been processed into information. A data warehouse is needed to help PT XYZ employees analyze and evaluate past and future projects. The purpose of this research is to design and analyze the data warehouse needed to provide information about project contract data at PT XYZ. The design follows the Four-Step Methodology proposed by Ralph Kimball for designing a data warehouse system. The final result is the design of a data warehouse and a dashboard: a visualization that presents significant information about PT XYZ's project contract data from different perspectives and helps related parties make decisions.
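As a sketch of what Kimball's four steps might yield here, the snippet below defines a hypothetical project-contract star schema in SQLite from Python. Every table and column name is an assumption for illustration; the paper's actual schema is not shown in the abstract.

```python
# Hypothetical star schema following Kimball's four steps:
# (1) business process = contract management, (2) grain = one row per
# contract, (3) dimensions = client/project/date, (4) facts = measures.
import sqlite3

DDL = """
CREATE TABLE dim_client  (client_key  INTEGER PRIMARY KEY, client_name TEXT);
CREATE TABLE dim_project (project_key INTEGER PRIMARY KEY, project_name TEXT,
                          project_type TEXT);
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER,
                          month INTEGER);
CREATE TABLE fact_contract (
    contract_id    TEXT,
    client_key     INTEGER REFERENCES dim_client(client_key),
    project_key    INTEGER REFERENCES dim_project(project_key),
    date_key       INTEGER REFERENCES dim_date(date_key),
    contract_value REAL,      -- additive fact
    duration_days  INTEGER    -- additive fact
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)  # the dashboard would query this schema by dimension
```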
Detection of traffic congestion based on twitter using convolutional neural network model Rifqi Ramadhani Almassar; Abba Suganda Girsang
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 11, No 4: December 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v11.i4.pp%p

Abstract

Microblogging is a form of communication in which users socialize by describing events in real time, and Twitter is a microblogging platform. Indonesia is among the countries with the most Twitter users, so people there can readily share information about traffic jams. This research aims to detect traffic jams by extracting tweets as vectors, feeding them into a convolutional neural network (CNN) model, and selecting the best model among CNN+Word2Vec, CNN+FastText, and a support vector machine (SVM). Data retrieval was conducted with the RapidMiner application. The context of the tweets was then checked, yielding 2,777 items: 1,426 labeled as congested roads and 1,351 as clear roads, collected from coordinate points around Jakarta, Indonesia. After preprocessing, the tweets were converted to vectors with the Word2Vec and FastText methods and fed into the CNN model, and the CNN+Word2Vec and CNN+FastText results were compared to the SVM method. The evaluation was done manually against actual traffic conditions. On the test data, CNN+FastText achieved the highest result at 86.33%, while CNN+Word2Vec achieved 85.79% and SVM 67.62%.
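A minimal sketch of the CNN+Word2Vec variant of this pipeline follows, using gensim and Keras. The toy tweets, padding length, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: Word2Vec embeddings feeding a 1-D CNN binary classifier
# (1 = congested road, 0 = clear road). Hyperparameters are illustrative.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models

tweets = [["jalan", "macet", "total"], ["lalu", "lintas", "lancar"]]  # toy tokens
labels = np.array([1, 0])

w2v = Word2Vec(sentences=tweets, vector_size=100, window=5, min_count=1)
MAX_LEN = 20

def vectorize(tokens):
    # Map each token to its Word2Vec vector, pad/truncate to MAX_LEN.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv][:MAX_LEN]
    pad = [np.zeros(100)] * (MAX_LEN - len(vecs))
    return np.array(vecs + pad, dtype="float32")

X = np.stack([vectorize(t) for t in tweets])

model = models.Sequential([
    layers.Input(shape=(MAX_LEN, 100)),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3)  # the real study trained on ~2,777 labeled tweets
```

Swapping the Word2Vec embeddings for FastText vectors (gensim's FastText class has the same interface) gives the CNN+FastText variant.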
Fast Ant Colony Optimization for Clustering Abba Suganda Girsang; Tjeng Wawan Cenggoro; Ko-Wei Huang
Indonesian Journal of Electrical Engineering and Computer Science Vol 12, No 1: October 2018
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v12.i1.pp78-86

Abstract

Data clustering is a popular data analysis approach used to organize data into sensible clusters based on a similarity measure, where data within a cluster are similar to each other but dissimilar to data in other clusters. The clustering problem has been proven NP-hard, so it can be addressed with meta-heuristic algorithms such as particle swarm optimization (PSO), genetic algorithms (GA), and ant colony optimization (ACO). This paper proposes an algorithm called Fast Ant Colony Optimization for Clustering (FACOC) to reduce the computation time of ACO on clustering problems. FACOC is motivated by the observation that redundant computation occurs in ACO for clustering; cutting this redundancy reduces the computation time. The proposed FACOC algorithm was verified on five well-known benchmarks. Experimental results show that removing the redundant computation reduces computation time by about 28% while suffering only a small degradation in quality.
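The sketch below illustrates the general idea of ACO-based clustering with a fitness cache that skips redundant evaluations, which is the kind of redundancy-cutting FACOC exploits. It is a simplified conceptual sketch, not the paper's exact algorithm; the within-cluster SSE fitness and parameter values are assumptions.

```python
# Simplified ACO clustering with a cache that avoids re-evaluating fitness
# for previously seen assignments -- a sketch of the speedup idea, not FACOC.
import numpy as np

def aco_cluster(X, k=3, n_ants=10, n_iters=50, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    pheromone = np.ones((n, k))          # desirability of point i in cluster j
    cache = {}                           # fitness cache keyed by assignment
    best, best_fit = None, np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            probs = pheromone / pheromone.sum(axis=1, keepdims=True)
            assign = np.array([rng.choice(k, p=probs[i]) for i in range(n)])
            key = assign.tobytes()
            if key in cache:             # redundant solution: reuse its fitness
                fit = cache[key]
            else:                        # compute within-cluster SSE once
                cents = np.array([X[assign == j].mean(axis=0)
                                  if (assign == j).any() else X.mean(axis=0)
                                  for j in range(k)])
                fit = ((X - cents[assign]) ** 2).sum()
                cache[key] = fit
            if fit < best_fit:
                best, best_fit = assign, fit
        pheromone *= (1 - rho)                                   # evaporation
        pheromone[np.arange(n), best] += 1.0 / (1.0 + best_fit)  # reinforce best
    return best, best_fit
```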
Prediction measuring local coffee production and marketing relationships coffee with big data analysis support Anita Sindar Ros Maryana Sinaga; Ricky Eka Putra; Abba Suganda Girsang
Bulletin of Electrical Engineering and Informatics Vol 11, No 5: October 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v11i5.4082

Abstract

Following the growing enthusiasm of the coffee market in Indonesia, a machine learning model is developed to study the relationship between coffee producers, consumers, production, and the market. The machine learning workflow is constructed in stages: exploring, developing, and validating the models. In this research, the model predicts coffee production and the market based on labeled and unlabeled variables. Among the trained machine learning algorithms, a decision tree achieved 85.7% accuracy while a support vector machine (SVM) and k-nearest neighbors each achieved 82.9%, producing three categories: high production (2 provinces), medium production (21 provinces), and low production (11 provinces). The classification accuracy is supported by the AUC values obtained for the high, medium, and low classes. In addition, local coffee marketing was modeled with logistic regression, reaching an accuracy of 88.9% in classifying coffee preference between arabica and robusta coffee. The AUC value of the logistic regression is about 0.94 for arabica coffee and 0.92 for robusta. The analysis of the classification modeling results shows a high level of accuracy of 93.0%.
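A minimal scikit-learn sketch of this model comparison follows. The synthetic data stands in for the province-level coffee features, which are not available from the abstract; every dataset detail here is an assumption.

```python
# Sketch: compare tree/SVM/kNN classifiers by accuracy, then a logistic
# regression scored by AUC, mirroring the comparison described above.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.datasets import make_classification

# Stand-in for province-level features (3 classes: high/medium/low production).
X, y = make_classification(n_samples=340, n_classes=3, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("tree", DecisionTreeClassifier()),
                  ("SVM", SVC()),
                  ("kNN", KNeighborsClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# Binary preference task (arabica vs. robusta) evaluated with AUC.
Xb, yb = make_classification(n_samples=340, n_classes=2, random_state=1)
Xb_tr, Xb_te, yb_tr, yb_te = train_test_split(Xb, yb, test_size=0.3,
                                              random_state=1)
lr = LogisticRegression(max_iter=1000).fit(Xb_tr, yb_tr)
print("AUC:", roc_auc_score(yb_te, lr.predict_proba(Xb_te)[:, 1]))
```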
Hybrid model: IndoBERT and long short-term memory for detecting Indonesian hoax news Yefferson, Danny Yongky; Lawijaya, Viriyaputra; Girsang, Abba Suganda
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp1913-1924

Abstract

The world has entered an era of far-reaching technological development, and as a result information spreads easily. However, not all information spread through social media is factual. Responding to this social phenomenon, we set out to create a hoax detection system using a combination of Indo bidirectional encoder representations from transformers (IndoBERT) and long short-term memory (LSTM). The dataset used in this study was obtained by scraping the turnbackhoax.id site and cable news network (CNN) Indonesia, totaling 5,876 items composed of 1,998 factual news articles and 3,878 hoaxes. We chose the IndoBERT-LSTM method, with IndoBERT as the feature extractor and LSTM as the classification layer, because of its advantages in handling and understanding the Indonesian language. The results show that the IndoBERT-LSTM model achieved an accuracy of 93.2%, precision of 92%, recall of 89.7%, and F1-score of 90.8%. The hoax detection system using IndoBERT-LSTM is a promising approach for detecting hoaxes accurately and efficiently, and the model has the potential to make a significant impact in the fight against the spread of hoaxes.
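A minimal PyTorch sketch of the hybrid architecture follows: IndoBERT produces token embeddings that an LSTM consumes, with the final LSTM state driving a binary hoax/factual classifier. The checkpoint name (indobenchmark/indobert-base-p1), frozen encoder, and hidden size are assumptions for illustration.

```python
# Sketch: IndoBERT as feature extractor, LSTM as classification layer.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class IndoBertLSTM(nn.Module):
    def __init__(self, bert_name="indobenchmark/indobert-base-p1", hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)  # feature extractor
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 2)                  # hoax vs. factual

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                             # keep IndoBERT frozen
            feats = self.bert(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                         # logits per class

tok = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
batch = tok(["Contoh judul berita"], return_tensors="pt",
            padding=True, truncation=True)
logits = IndoBertLSTM()(batch["input_ids"], batch["attention_mask"])
```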
Indonesian generative chatbot model for student services using GPT Priccilia, Shania; Girsang, Abba Suganda
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 1: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i1.pp50-56

Abstract

The accessibility of academic information greatly impacts the satisfaction and loyalty of university students. However, limited university resources often prevent students from conveniently accessing information services. To address this challenge, this research proposes digitizing the question-answering process between students and student service staff through a generative chatbot, which can give students human-like responses to academic inquiries at their convenience. The research developed a generative chatbot using the pre-trained GPT-2 architecture in three different sizes, designed specifically for practicum-related questions at a private university in Indonesia. The experiment utilized 1,288 question-answer pairs in Indonesian and achieved a best BLEU score of 0.753, indicating good accuracy in generating text despite the dataset's limitations.
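A short sketch of serving such a chatbot follows: a student question becomes the prompt and the fine-tuned model generates the answer. The checkpoint path and prompt format are placeholders; the paper's actual fine-tuning setup is not described in the abstract.

```python
# Sketch: answering a student question with a fine-tuned Indonesian GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

CKPT = "path/to/finetuned-indonesian-gpt2"  # placeholder checkpoint path
tok = GPT2Tokenizer.from_pretrained(CKPT)
model = GPT2LMHeadModel.from_pretrained(CKPT)

prompt = "Pertanyaan: Bagaimana cara mendaftar praktikum? Jawaban:"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```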
Traffic accident classification using IndoBERT Naufal, Muhammad Alwan; Girsang, Abba Suganda
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 1: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i1.pp42-49

Abstract

Traffic accidents are a widespread global concern, causing loss of life, injuries, and economic burdens. Efficiently classifying accident types is crucial for effective accident management and prevention. This study proposes a practical approach to traffic accident classification using IndoBERT, a language model trained specifically for Indonesian. The classification task sorts accidents into four classes: car accidents, motorcycle accidents, bus accidents, and others. The proposed model achieves 94% accuracy in categorizing these accidents. To assess its performance, we compared IndoBERT with two traditional methods, random forest (RF) and support vector machine (SVM), which achieved accuracy scores of 85% and 87%, respectively. The IndoBERT-based model demonstrates its effectiveness in handling the complexities of the Indonesian language, providing a useful tool for traffic accident classification and contributing to improved accident management and prevention strategies.
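Unlike the feature-extractor setup in the hoax paper above, four-class classification like this is typically done by fine-tuning the encoder with a sequence-classification head. A minimal sketch, assuming the indobenchmark/indobert-base-p1 checkpoint and illustrative labels:

```python
# Sketch: four-class accident classification with an IndoBERT
# sequence-classification head (inference after fine-tuning).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["car", "motorcycle", "bus", "other"]
NAME = "indobenchmark/indobert-base-p1"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    NAME, num_labels=len(LABELS))

# After fine-tuning on labeled accident reports, inference looks like this:
text = "Sebuah bus terguling di tol Cipularang"  # example report (Indonesian)
inputs = tok(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(LABELS[pred])
```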
Autism detection based on autism spectrum quotient using weighted average ensemble method Lawysen, Lawysen; Anggara, Nelsen; Girsang, Abba Suganda
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp188-196

Abstract

Autism spectrum disorder (ASD) is a condition accompanied by various symptoms, such as difficulty socializing with others. Early detection of ASD can help mitigate the symptoms it causes. This research focuses on automating the diagnosis of ASD in an individual from the results of the autism spectrum quotient (AQ) questionnaire using a weighted average ensemble method. The dataset is first preprocessed to ensure optimal model performance: missing values are filled and features are selected based on p-values. The model then combines three machine learning classification algorithms with the weighted average ensemble method. Eight classification algorithms were tested to identify the three with the best performance, namely Gaussian naïve Bayes (NB), logistic regression (LR), and random forest (RF). In testing, the model constructed with the weighted average ensemble method exhibits higher performance than any model built on a single classification algorithm. The performance metric used is the area under the receiver operating characteristic curve (AUC/ROC), with the developed model achieving an AUC/ROC value of 0.912.
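A weighted average ensemble of this kind can be sketched with scikit-learn's soft-voting classifier, which averages the three models' predicted probabilities with per-model weights. The synthetic data stands in for the AQ questionnaire answers, and the weights are assumptions, not the paper's tuned values.

```python
# Sketch: weighted-average (soft-voting) ensemble of Gaussian NB, logistic
# regression, and random forest, scored by AUC/ROC as in the paper.
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=700, random_state=0)  # stand-in for AQ data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier())],
    voting="soft",        # average class probabilities...
    weights=[1, 2, 2],    # ...weighted per model (weights are assumptions)
)
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```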