Scientific Journal of Informatics
Scientific Journal of Informatics, published by the Department of Computer Science, Universitas Negeri Semarang (Semarang State University), is a scientific journal of Information Systems and Information Technology. It publishes scholarly writing on pure and applied research in information systems and information technology, as well as reviews of developments in related theory, methods, and applied sciences.
Articles
Search results for issue "Vol 5, No 1 (2018): May 2018": 20 Documents
Implementation of Decision Tree and Dempster Shafer on Expert System for Lung Disease Diagnosis
Alfatah, Abdul Muis;
Arifudin, Riza;
Muslim, Much Aziz
Scientific Journal of Informatics Vol 5, No 1 (2018): May 2018
Publisher : Universitas Negeri Semarang
DOI: 10.15294/sji.v5i1.13440
The expert system is a computer system that contains set of rules to solve problems like an expert. The lungs are one of the vulnerable respiratory organs. The purpose of this research is to implement decision tree and dempster shafer method on lung disease diagnosis and measure the accuracy of the system. The symptom was searched using forward chaining decision tree and the diagnosis was calculated using dempster shafer method. Dempster Shafer method calculates the possibility of a lung disease based on the density of probability value that possessed by each symptom. This research used 65 data obtained from medical record of Puskesmas Tegowanu Grobogan Regency. General symptoms and types of disease are used as a variable. Based on the results of the study, it can be concluded that the results of the diagnosis using dempster shafer method has an 83.08% accuracy.
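The core of the approach above is Dempster's rule of combination, which fuses the belief mass contributed by each symptom. A minimal sketch, assuming mass functions are represented as dicts from frozensets of disease hypotheses to mass values (the diseases and masses below are illustrative, not the paper's data):

```python
# Dempster's rule of combination for two mass functions.
# m1, m2: dicts mapping frozensets of hypotheses to belief mass.
def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    norm = 1.0 - conflict            # normalize by non-conflicting mass
    return {h: v / norm for h, v in combined.items()}

# Two symptoms supporting overlapping disease hypotheses (hypothetical masses)
m1 = {frozenset({"TB", "Bronchitis"}): 0.8,
      frozenset({"TB", "Bronchitis", "Asthma"}): 0.2}
m2 = {frozenset({"TB"}): 0.6,
      frozenset({"TB", "Bronchitis", "Asthma"}): 0.4}
result = combine(m1, m2)   # belief concentrates on {"TB"}
```

Combining evidence symptom by symptom in this way yields the final belief per disease, from which the diagnosis with the highest support is reported.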
Comparison Between SAW and TOPSIS Methods in Selection of Broiler Chicken Meat Quality
Adi, Pungky Tri Kisworo;
Sugiharti, Endang;
Alamsyah, Alamsyah
DOI: 10.15294/sji.v5i1.14416
A decision support system assists semi-structured and unstructured decision making, in which no one knows exactly how decisions should be made. Broiler chicken production is growing rapidly along with increasing market demand, since broilers grow quickly in a relatively short time. The purpose of this research is to select chicken meat quality by comparing the SAW and TOPSIS methods. The variables used are age, ration (feed) conversion, chicken weight, and water consumption. The system was built with the PHP framework CodeIgniter and a MySQL database, following the waterfall method: analyzing user needs, designing the database, then coding and testing whether the system meets expectations. The result is an application comparing SAW and TOPSIS, each with five criteria, which can help breeders choose good-quality broiler meat. For breeder 1, SAW gave the largest value at V2 = 0.341, so alternative A2 was chosen as a good alternative; for breeder 2, the largest value was V3 = 0.033, so A3 was chosen as a fairly good alternative; for breeder 3, the largest value was V1 = 0.005, so A1 was chosen as an excellent alternative. Using TOPSIS, breeder 1's largest value was V2 = 9.98, so A2 was chosen as a good alternative; breeder 2's largest value was V3 = 0.372, so A3 was chosen as a fairly good alternative; breeder 3's largest value was V3 = 0.982, so A3 was chosen as a fairly good alternative. This system uses only five criteria; adding other criteria that support the selection of broiler meat quality would improve it.
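The SAW half of the comparison can be sketched as follows: each criterion column is normalized (benefit criteria as x/max, cost criteria as min/x) and the weighted sum ranks the alternatives. The criterion values, weights, and benefit/cost flags below are hypothetical, not the paper's data:

```python
# Simple Additive Weighting (SAW) over a decision matrix.
def saw(matrix, weights, benefit):
    cols = list(zip(*matrix))   # one tuple per criterion column
    scores = []
    for row in matrix:
        norm = [(x / max(col)) if b else (min(col) / x)
                for x, col, b in zip(row, cols, benefit)]
        scores.append(sum(w * x for w, x in zip(weights, norm)))
    return scores

# alternatives: [age (days), ration conversion, weight (kg), water (ml/day)]
alts = [[35, 1.6, 1.9, 250],
        [33, 1.5, 2.0, 240],
        [36, 1.7, 1.8, 260]]
weights = [0.25, 0.25, 0.3, 0.2]
benefit = [False, False, True, False]   # only weight is a benefit criterion
scores = saw(alts, weights, benefit)
best = scores.index(max(scores))        # index of the recommended alternative
```

TOPSIS differs in ranking alternatives by their relative distance to ideal and anti-ideal solutions, but consumes the same weighted, normalized matrix.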
Genetic Algorithm for Relational Database Optimization in Reducing Query Execution Time
Hidayat, Kukuh Triyuliarno;
Arifudin, Riza;
Alamsyah, Alamsyah
DOI: 10.15294/sji.v5i1.12720
A relational database is a database built by connecting tables, each of which holds a collection of information. The information is processed with queries, such as data retrieval, data storage, and data conversion. When a table holds a large amount of data, query processing becomes slow. In this paper, a genetic algorithm is used to process queries in order to optimize and reduce query execution time. The genetic algorithm processes the query by changing the structure of the relations and rearranging them; the highest fitness value found across the experiments is taken as the best solution. In these experiments, the database used is the MySQL sample database named employees, which has over 3,000,000 rows in 6 tables. Queries were designed using 5 relations in the form of a left-deep tree. The original query execution time was 8.14247 seconds, and the execution time after genetic algorithm optimization was 6.08535 seconds with a fitness value of 0.90509, a reduction of 25.3%. This shows that a genetic algorithm can reduce query execution time by optimizing the relational part of a query, so query optimization with a genetic algorithm can be an alternative solution for maximizing query performance.
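The idea of evolving a join order can be sketched with a toy genetic algorithm: individuals are permutations of the tables, survivors are the cheapest orderings, and swap mutations explore rearrangements. The `cost` function below is a stand-in for a real query cost model, and all numbers are illustrative assumptions, not the paper's setup:

```python
import random

# Hypothetical cost model: pretend adjacent tables with close ids are
# cheaper to join, so the cost is the sum of adjacent differences.
def cost(order):
    return sum(abs(a - b) for a, b in zip(order, order[1:]))

def evolve(n_tables=5, pop_size=20, generations=50, seed=42):
    rng = random.Random(seed)
    # initial population: random join orders (permutations of table ids)
    pop = [rng.sample(range(n_tables), n_tables) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                     # cheapest orders first
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n_tables), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, 1.0 / (1.0 + cost(best))      # fitness grows as cost shrinks

order, fitness = evolve()
```

A production optimizer would replace `cost` with estimated I/O and intermediate-result sizes, and typically add crossover between parent orderings.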
A Hybrid Security Algorithm AES and Blowfish for Authentication in Mobile Applications
Purwinarko, Aji;
Hardyanto, Wahyu
DOI: 10.15294/sji.v5i1.8151
Nowadays everything is within our grasp, and mobile phones make it easier. Their use is not limited to calls and SMS; they have become tools for business transactions, banking, and academic data through mobile applications. Thus, the security of authentication in mobile applications needs to be improved to avoid hacker attacks. This article presents authentication from a mobile application to the server using a hybrid of the Advanced Encryption Standard (AES) and Blowfish cryptographic algorithms. AES and Blowfish are symmetric-key algorithms that are very fast and powerful. By utilizing AES's large block size and using Blowfish to encrypt the keys, the security of AES becomes much more robust and complicated to attack, making it difficult for hackers to perform Man-in-the-Middle (MitM) attacks.
K-Nearest Neighbor and Naive Bayes Classifier Algorithms in Determining the Classification of Healthy Indonesia Card Distribution to the Poor
Safri, Yofi Firdan;
Arifudin, Riza;
Muslim, Much Aziz
DOI: 10.15294/sji.v5i1.12057
Health is a human right and one of the elements of welfare that must be realized through various health efforts for all the people of Indonesia. Poverty in Indonesia has become a national problem, and the government seeks to alleviate it; poor families, for example, have relatively low levels of livelihood and health. One of the government's new policies, the "Sakti Card" program, comprises three cards: the Indonesia Smart Card (KIP), the Healthy Indonesia Card (KIS), and the Prosperous Family Card (KKS). Determining eligibility for the Healthy Indonesia Card (KIS) requires a method of optimal accuracy. The data used in this study are 200 KIS records with 15 eligibility determinants from 2017, taken from the Social Service of Pekalongan Regency. The data were processed using the K-Nearest Neighbor algorithm alone and a combination of the K-Nearest Neighbor and Naive Bayes Classifier algorithms. The eligibility-determination accuracy of K-Nearest Neighbor alone was 64%, while the combined algorithm reached 96%, an improvement of 32 percentage points, making the combination the optimal algorithm for determining eligible KIS recipients. This study shows that the combined K-Nearest Neighbor and Naive Bayes Classifier approach determines eligibility more accurately than K-Nearest Neighbor alone.
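One plausible way such a hybrid works (a sketch under stated assumptions; the paper does not spell out the exact combination): KNN first narrows the training set to the k most similar records, then Naive Bayes classifies within that neighborhood. The toy categorical attributes and labels below are hypothetical:

```python
# KNN + Naive Bayes hybrid on categorical data.
def knn_nb(train, labels, query, k=3):
    # KNN step: rank training rows by Hamming distance to the query
    order = sorted(range(len(train)),
                   key=lambda i: sum(a != b for a, b in zip(train[i], query)))
    idx = order[:k]
    sub = [train[i] for i in idx]
    sub_y = [labels[i] for i in idx]
    # Naive Bayes step, restricted to the k neighbors, Laplace-smoothed
    scores = {}
    for c in set(sub_y):
        rows = [x for x, y in zip(sub, sub_y) if y == c]
        prior = len(rows) / len(sub)
        lik = 1.0
        for j, v in enumerate(query):
            vals = {x[j] for x in sub}          # observed values of attribute j
            lik *= (sum(x[j] == v for x in rows) + 1) / (len(rows) + len(vals))
        scores[c] = prior * lik
    return max(scores, key=scores.get)

# hypothetical (income level, housing condition) -> eligibility label
train = [("low", "bad"), ("low", "bad"), ("high", "good"), ("high", "bad")]
labels = ["eligible", "eligible", "not", "not"]
pred = knn_nb(train, labels, ("low", "bad"))
```

Restricting Naive Bayes to a local neighborhood lets the class priors and likelihoods adapt to the region around each query, which is one way a hybrid can beat plain KNN.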
Poverty Data Model as Decision Tools in Planning Policy Development
Mirza, Ahmad Haidar
DOI: 10.15294/sji.v5i1.14022
Poverty is the main problem in any country, from developing to developed nations, whether the poverty is structural, cultural, or natural. Poverty is no longer seen merely as a measure of the government's failure to protect and fulfill the fundamental rights of its citizens, but as the nation's challenge to realize a fair, prosperous, dignified, and sovereign society. Various government policy measures have been attempted to overcome poverty, among them surveys to assess the poor. These surveys, carried out by various organizations, have produced multiple versions of poverty databases by area or location, and the information generated from them covers only recapitulations of the poor per area. One step forward is to process the poverty data through Knowledge Discovery in Databases (KDD) to build a poverty data-mining model. Data mining is a logical combination of knowledge of data and statistical analysis developed in business knowledge, or a process that uses statistical, mathematical, artificial intelligence, and machine-learning techniques to extract and identify useful information and relevant knowledge from large databases.
Comparison of Dynamic Programming Algorithm and Greedy Algorithm on Integer Knapsack Problem in Freight Transportation
Sampurno, Global Ilham;
Sugiharti, Endang;
Alamsyah, Alamsyah
DOI: 10.15294/sji.v5i1.13360
Goods delivery services have become familiar because they greatly facilitate customers, and PT Post Indonesia is one such provider. In delivery we often encounter the problem of selecting which goods enter the transport first and which are held back from delivery; this selection is a Knapsack problem requiring an optimal solution. A knapsack is a container used to store or carry objects. The purpose of this research is to determine how to obtain the optimal solution of the Integer Knapsack problem in freight transportation using the Dynamic Programming and Greedy algorithms at PT Post Indonesia Semarang, and to compare the results of the two algorithms for goods-transport selection by applying them in a mobile application. The Dynamic Programming algorithm yielded a total delivery weight of 5,022 kg in 7 days, while the Greedy algorithm yielded 4,496 kg in 7 days. It can be concluded that over 7 days the Dynamic Programming result is 526 kg greater than the Greedy result.
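The gap between the two methods comes from the classic 0/1 knapsack trade-off: dynamic programming is exact, while a value-density greedy heuristic can get stuck with leftover capacity. A minimal sketch with hypothetical parcel weights and values (not the paper's shipment data):

```python
# Exact 0/1 knapsack via dynamic programming (1-D table, reverse scan).
def knapsack_dp(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # reverse so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Greedy heuristic: take items by value density while they fit.
def knapsack_greedy(weights, values, capacity):
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = used = 0
    for w, v in items:
        if used + w <= capacity:
            used += w
            total += v
    return total

weights = [10, 20, 30]      # kg, hypothetical parcels
values  = [60, 100, 120]
dp_best = knapsack_dp(weights, values, 50)        # exact optimum
greedy_best = knapsack_greedy(weights, values, 50)  # density-first heuristic
```

On this instance the greedy pick (items 1 and 2) leaves 20 kg of capacity unused, while DP finds the heavier, more valuable pairing, mirroring the 526 kg gap the paper reports.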
Uncertainty Ontology for Rule-Module Formation in Waterwheel Control
Azmi, Zulfian;
Nasution, Mahyuddin K. M.;
Mawengkang, Herman;
Zarlis, M
DOI: 10.15294/sji.v5i1.14188
Implementations of the uncertainty model have not yet given maximal results in forming rules for inference in a case. Testing determines whether water quality is high, medium, or low, using temperature, pH, salinity, and dissolved oxygen as input variables. Testing is done by observing water-turbidity changes in the shrimp pond to determine water quality, which in turn drives the waterwheel-rotation control module: the waterwheel spins quickly if pond water quality is low, slowly if it is medium, and stays still if it is good. Rules are formed with an ontology-based knowledge approach that determines the relations among the variables (temperature, pH, dissolved oxygen, and salinity). Each variable is assigned a certainty value in the form of a fuzzy value, and the relation of the four variables is then determined to form the rules.
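Assigning each variable a fuzzy certainty value is commonly done with membership functions. A minimal sketch using a triangular membership function; the temperature breakpoints below are illustrative assumptions, not the paper's calibrated values:

```python
# Triangular fuzzy membership: rises from a to a peak at b, falls to c.
def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g. degree to which a pond temperature (deg C) counts as "ideal"
mu_ideal = triangular(29.0, 26.0, 29.0, 32.0)   # peak membership at 29 deg C
```

A rule such as "IF temperature is ideal AND dissolved oxygen is high THEN water quality is good" would then combine these membership degrees (typically via min for AND) before driving the waterwheel-speed decision.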
High Performance of Local Binary Pattern in Javanese Character Classification
Susanto, Ajib;
Sinaga, Daurat;
Sari, Christy Atika;
Rachmawanto, Eko Hari;
Setiadi, De Rosal Ignatius Moses
DOI: 10.15294/sji.v5i1.14017
The classification of Javanese character images aims to recognize each character. The chosen classifier is K-Nearest Neighbor (KNN) at K = 1, 3, 5, 7, and 9. To improve KNN performance on Javanese characters written by hand, and to prove that feature extraction is needed in the image-classification process, Local Binary Pattern (LBP) was selected for feature extraction because the objects have a certain degree of slant. The LBP parameters used were [16 16], [32 32], [64 64], [128 128], and [256 256]. Experiments were performed on 80 training images and 40 test images. The best KNN accuracy after combination with LBP feature extraction was 82.5%, at K = 3 with LBP parameters [64 64].
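The basic 3x3 LBP operator underlying this feature extraction thresholds each of the 8 neighbors against the center pixel and packs the results into an 8-bit code; histograms of these codes over image cells form the feature vector. A minimal sketch with a hypothetical grayscale patch:

```python
# Basic 3x3 Local Binary Pattern code for the center pixel of a patch.
def lbp_code(patch):
    c = patch[1][1]
    # 8 neighbors read clockwise from the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(coords):
        if patch[i][j] >= c:     # neighbor at least as bright as center -> 1
            code |= 1 << bit
    return code

patch = [[ 90, 120,  60],
         [110, 100,  95],
         [130, 105,  80]]
code = lbp_code(patch)   # an integer in 0..255
```

Because the code depends only on relative brightness around each pixel, it is robust to monotonic lighting changes, which helps with unevenly scanned handwritten characters.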
Autocomplete and Spell Checking with the Levenshtein Distance Algorithm to Obtain Text Suggestions for Erroneous Data Searches in a Library
Yulianto, Muhamad Maulana;
Arifudin, Riza;
Alamsyah, Alamsyah
DOI: 10.15294/sji.v5i1.14148
Nowadays internet technology provide more convenience for searching information on a daily. Users are allowed to find and publish their resources on the internet using search engine. Search engine is a computer program designed to facilitate a user to find the information or data that they need. Search engines generally find the data based on keywords they entered, therefore a lot of case when the user canât find the data that they need because there are an error while entering a keyword. Thats why a search engine with the ability to detect the entered words is required so the error can be avoided while we search the data. The feature that used to provide the text suggestion is autocomplete and spell checking using Levenshtein distance algorithm. The purpose of this research is to apply the autocomplete feature and spell checking with Levenshtein distance algorithm to get text suggestion in an error data searching in library and determine the level of accuracy on data search trials. This research using 1155 data obtained from UNNES Library. The variables are the input process and the classification of books. The accuracy of Levenshtein algorithm is 86% based on 1055 source case and 100 target case.