Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion and a concise conclusion. As part of our commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Workflow Scheduling in Cloud Environment Using Firefly Optimization Algorithm Shahin Ghasemi; Asra Kheyrolahi; Abdusalam Abdulla Shaltooki
JOIV : International Journal on Informatics Visualization Vol 3, No 3 (2019)
Publisher : Society of Visual Informatics

Full PDF (1125.759 KB) | DOI: 10.30630/joiv.3.3.266

Abstract

One of the issues in cloud computing is workflow scheduling. A workflow models the process of executing an application as a set of steps, and its objective is to simplify the complexity of application management. Workflow scheduling maps each task to a proper resource and orders the tasks on each resource to meet efficiency measures such as processing and transmission costs, load balancing, and quality of service. Task scheduling is an NP-complete problem. In this study, the meta-heuristic firefly algorithm (FA) is used to devise a workflow scheduling algorithm. The purpose of the proposed algorithm is to find schedules that minimize the processing and transmission costs of the whole workflow while balancing the load among the processing stations. The proposed algorithm is implemented in MATLAB and its efficiency is compared with the cat swarm optimization (CSO) algorithm. The evaluations show that the proposed algorithm outperforms CSO in finding better solutions.
Achieving Lightweight Verifiable Privacy Preserving Search Over Encrypted Data Selasi Kwame Ocansey; Charles Fynn Oduro
JOIV : International Journal on Informatics Visualization Vol 3, No 3 (2019)
Publisher : Society of Visual Informatics

Full PDF (1359.697 KB) | DOI: 10.30630/joiv.3.3.267

Abstract

When cloud clients outsource their database to the cloud, they entrust management operations to a cloud service provider who is expected to answer the client's queries on the cloud where the database is located. Efficient techniques can ensure the critical requirements of integrity and authenticity for outsourced data. We propose a lightweight, privacy-preserving, verifiable scheme for securely outsourcing databases: data are encrypted before outsourcing, and returned query results are verified for correctness and completeness. Our scheme builds on a lightweight homomorphic encryption technique and a Bloom filter, which are efficiently authenticated to guarantee the outsourced database's integrity, authenticity, and confidentiality. An ordering-challenge technique is proposed for verifying top-k query results. We conclude with an analysis of the security proofs, privacy, verifiability, and performance efficiency of our scheme. The proof and evaluation analysis show that the proposed scheme is secure and efficient enough for practical deployment. We also evaluate its performance over two UCI data sets.
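The Bloom filter the scheme relies on is a standard structure worth recalling: k hash positions are set per item, so membership tests may yield false positives but never false negatives, which is what makes it usable for completeness checks. A generic sketch (not the paper's exact construction; sizes and hash choice are assumptions):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed bit positions per item.
    Lookups may return false positives, never false negatives."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all((self.bits >> p) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("record-17")
print("record-17" in bf)   # True: inserted items are always found
print("record-99" in bf)   # almost certainly False for this tiny filter
```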
Neural Network Techniques for Time Series Prediction: A Review Muhammad Faheem Mushtaq; Urooj Akram; Muhammad Aamir; Haseeb Ali; Muhammad Zulqarnain
JOIV : International Journal on Informatics Visualization Vol 3, No 3 (2019)
Publisher : Society of Visual Informatics

Full PDF (902.684 KB) | DOI: 10.30630/joiv.3.3.281

Abstract

Time series prediction is important because many prediction problems, such as health prediction, climate-change prediction and weather prediction, include a time component. To solve the time series prediction problem, various techniques have been developed over many years to enhance forecasting accuracy. This paper presents a review of physical time series prediction using neural network models. Neural Networks (NN) have emerged as an effective tool for time series forecasting. Moreover, to address problems in time series data there is a need for a network with a single layer of trainable weights, the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Researchers have therefore focused on HONNs, which have recently been considered as a way to broadly expand the input representation space. The HONN model has a functional-mapping ability, demonstrated on several time series problems, and shows more benefits than conventional Artificial Neural Networks (ANN). The goal of this review is to make the reader aware of HONNs for physical time series prediction and to highlight some of their benefits and challenges.
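The single-trainable-layer idea behind HONNs can be illustrated with a functional-link sketch: inputs are expanded with higher-order (product) terms, after which a plain delta rule suffices even for a nonlinear mapping such as XOR. Everything below (second-order expansion, learning rate, the XOR task) is an illustrative assumption, not taken from the reviewed papers:

```python
import itertools
import random

def expand(x):
    """Second-order input expansion: bias, x_i, and pairwise products
    x_i * x_j -- the 'higher-order' terms that let one trainable layer
    capture a nonlinear input-output mapping."""
    terms = [1.0] + list(x)
    terms += [a * b for a, b in itertools.combinations(x, 2)]
    return terms

def train_honn(samples, epochs=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    n = len(expand(samples[0][0]))
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(epochs):
        for x, y in samples:
            z = expand(x)
            err = y - sum(wi * zi for wi, zi in zip(w, z))
            w = [wi + lr * err * zi for wi, zi in zip(w, z)]  # delta rule
    return w

# XOR is not linearly separable, but it is linear in the expanded space.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = train_honn(xor)
preds = [round(sum(wi * zi for wi, zi in zip(w, expand(x)))) for x, _ in xor]
print(preds)  # [0, 1, 1, 0]
```

The same expand-then-linear-train pattern is what lets a HONN fit nonlinear time series with a single layer of weights.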
Advanced Extremely Efficient Detection of Replica Nodes in Mobile Wireless Sensor Networks Mehdi Safari; Elham Bahmani; Mojtaba Jamshidi; Abdusalam Shaltooki
JOIV : International Journal on Informatics Visualization Vol 3, No 4 (2019)
Publisher : Society of Visual Informatics

Full PDF (900.726 KB) | DOI: 10.30630/joiv.3.4.254

Abstract

Today, wireless sensor networks (WSNs) are widely used in many applications, including environmental, military, and exploration scenarios. One of the most dangerous attacks against these networks is node replication. In this attack, the adversary captures a legal node of the network, generates several copies of it (called replica nodes) and injects them into the network. Various algorithms have been proposed to handle replica nodes in stationary and mobile WSNs. One of the best-known algorithms for handling this attack in mobile WSNs is eXtremely Efficient Detection (XED). The main idea of XED is to generate and exchange random numbers among neighboring nodes. XED has some drawbacks, including high communication and memory overheads and slow detection of replica nodes. In this paper, an algorithm is presented to improve XED. The proposed algorithm, called Advanced XED (AXED), has each node observe only a small number of other nodes, and whenever two nodes meet, a new random number is generated and exchanged. The efficiency of the proposed algorithm is evaluated in terms of memory and communication overheads, and the results are compared with existing algorithms. The comparison shows that the proposed algorithm imposes lower overheads on the nodes. In addition, simulation results show that the proposed algorithm detects replica nodes faster than XED.
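The exchange-and-verify idea behind XED/AXED can be shown in a toy sketch: at every meeting a node checks the random number it issued at the previous meeting, then issues a fresh one. A replica of a captured node does not hold the numbers the original received, so it fails the check. This is a simplified illustration, not the authors' protocol:

```python
import random

class Node:
    """Toy XED-style node: remembers the random numbers it issued to
    and received from each neighbor at their last meeting."""
    def __init__(self, node_id):
        self.id = node_id
        self.issued = {}  # neighbor id -> number we gave them last time
        self.held = {}    # neighbor id -> number they gave us last time

    def meet(self, other):
        # Verify the number we issued to `other` (trivially ok on first meeting).
        ok = self.issued.get(other.id) is None or \
             other.held.get(self.id) == self.issued[other.id]
        # Exchange a fresh random number for the next meeting.
        fresh = random.getrandbits(32)
        self.issued[other.id] = fresh
        other.held[self.id] = fresh
        return ok

a, b = Node("a"), Node("b")
assert a.meet(b) and b.meet(a)   # first meetings always pass
assert a.meet(b)                 # b presents the number a issued earlier

clone = Node("b")                # replica of b, without b's held numbers
ok = a.meet(clone)
print(ok)                        # False: the replica fails the check
```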
Big Data Environment for Realtime Earthquake Data Acquisition and Visualization Louis Nashih Uluwan Arif; Ali Ridho Barakbah; Amang Sudarsono; Renovita Edelani
JOIV : International Journal on Informatics Visualization Vol 3, No 4 (2019)
Publisher : Politeknik Negeri Padang

Full PDF (3512.439 KB) | DOI: 10.30630/joiv.3.4.320

Abstract

Indonesia is a country with one of the highest levels of earthquake risk in the world. In the past 10 years, approximately 90,000 earthquake events have been recorded, and the number keeps growing as new earthquake data arrive continuously. Collecting and analyzing earthquake data therefore requires considerable effort and long computation times. In this paper, we propose a new system to acquire, store, manage and process earthquake data in Indonesia in a real-time, fast and dynamic way by utilizing features of a Big Data environment. The system improves computational performance in managing and analyzing earthquake data in Indonesia by combining and integrating earthquake data from several providers into one complete earthquake dataset. An additional function is an API (Application Programming Interface) embedded in the system that provides access to the results of earthquake data analysis, such as density, probability density functions and seismic associations between provinces in Indonesia. Processing in this system is carried out in parallel, which improves computing performance: preprocessing on a single-core master node requires 55.6 minutes, whereas distributed processing over 15 cores completes in only 4.82 minutes.
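The reported timings imply a parallel speedup that is easy to check (the 55.6 and 4.82 minute figures come from the abstract; the speedup/efficiency framing is a standard calculation, not the paper's):

```python
# Reported preprocessing times: 55.6 min on a single core vs 4.82 min
# distributed over 15 cores.
serial, parallel, cores = 55.6, 4.82, 15
speedup = serial / parallel          # ~11.5x faster
efficiency = speedup / cores         # ~77% of ideal linear scaling
print(f"speedup ~ {speedup:.1f}x, parallel efficiency ~ {efficiency:.0%}")
```

An efficiency below 100% is expected, since the serial fraction of the pipeline and data-shuffling overheads cap the achievable speedup (Amdahl's law).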
Incremental Associative Mining based Risk-Mapping System for Earthquake Analysis in Indonesia Renovita Edelani; Ali Ridho Barakbah; Tri Harsono; Louis Nashih Uluwan Arif
JOIV : International Journal on Informatics Visualization Vol 3, No 4 (2019)
Publisher : Politeknik Negeri Padang

Full PDF (1760.156 KB) | DOI: 10.30630/joiv.3.4.319

Abstract

Indonesia is one of the largest archipelagic countries in the world and has one of the highest earthquake risks. The major causes of earthquakes in the country are plate movements and volcanic activity. Earthquake occurrences in Indonesia exhibit cause-and-effect relationships between provinces. These disasters cause severe damage, killing and injuring many people and destroying money and property. The impact of earthquakes should be minimized by building an earthquake risk mapping. Seismic risk in Indonesia can vary from year to year, so how the risk changes with each addition of earthquake data needs to be analyzed. This paper proposes an earthquake risk mapping system based on associative mining over incremental earthquake data, using the highest confidence rates of the seismic associations between provinces in Indonesia. The system uses the incremental association rule method to observe the trend in confidence values as earthquake data are added every 5 years. The system comprises 3 main features: (1) data retrieval and preprocessing, (2) association rule mining, and (3) incremental-associative-mining-based risk mapping. For the experimental study, the system used data from 1963-2018. The results show that the provinces of Maluku, North Maluku, Nusa Tenggara Timur, North Sulawesi, and Papua have an incremental association risk of earthquakes.
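The confidence measure at the core of association rule mining, and its incremental recomputation as 5-year batches arrive, can be sketched briefly. The "transactions" below (provinces shaken within the same time window) are hypothetical toy data for illustration only, not the paper's results:

```python
def confidence(transactions, antecedent, consequent):
    """confidence(A -> B) = support(A and B) / support(A)."""
    a = sum(1 for t in transactions if antecedent <= t)
    ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
    return ab / a if a else 0.0

# Hypothetical 5-year batches of co-occurring shaken provinces.
batch_1963_1968 = [{"Maluku", "Papua"}, {"Maluku"}, {"Papua"}]
batch_1969_1974 = [{"Maluku", "Papua"}, {"Maluku", "Papua"}]

rule = ({"Maluku"}, {"Papua"})
c1 = confidence(batch_1963_1968, *rule)          # 0.5 on the first batch
c2 = confidence(batch_1963_1968 + batch_1969_1974, *rule)  # 0.75 cumulatively
print(c1, c2)  # confidence rises as new 5-year batches are appended
```

Tracking how such per-rule confidence values drift with each batch is what the incremental risk mapping visualizes province by province.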
Predicting Diabetes by adopting Classification Approach in Data Mining Rapinder Kaur
JOIV : International Journal on Informatics Visualization Vol 3, No 2-2 (2019): Internet of Things and Smart Environments
Publisher : Politeknik Negeri Padang

Full PDF (1135.75 KB) | DOI: 10.30630/joiv.3.2-2.229

Abstract

As the world grows rapidly, things, lifestyles, people's perceptions and resources are all transforming, and this elevation in technology has become a challenge as ideas and innovations multiply. One of the biggest outcomes of this technological advancement is “Big Data”, in which a massive amount of information is hidden. To refine and process this data and uncover the insights within it, many techniques and algorithms have evolved, one of which is data mining. Data mining is the procedure of extracting profitable and useful knowledge, reports and facts from rough or impure data. Prediction analysis is a data mining approach that forecasts future outcomes using classification techniques. This work addresses diabetes prediction using a classification approach. In the existing approach, an SVM classifier is applied for prediction analysis; to increase accuracy, a KNN classifier is applied instead. Both the proposed and existing methods are implemented in Python. The simulation results show that the accuracy of KNN is higher and its execution time is lower.
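The KNN classification step the paper favors is easy to sketch: a query point is labeled by majority vote among its k nearest training points. The (glucose, BMI) readings and labels below are hypothetical toy values, not from the study's dataset:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (glucose, BMI) readings, diabetic (1) / healthy (0).
train = [((85, 22.0), 0), ((90, 24.5), 0), ((100, 26.0), 0),
         ((150, 33.0), 1), ((165, 35.5), 1), ((170, 31.0), 1)]
print(knn_predict(train, (160, 34.0)))  # 1
print(knn_predict(train, (88, 23.0)))   # 0
```

In practice features on different scales should be normalized before computing distances, otherwise the largest-valued feature dominates the vote.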
A Multi-Criteria Ranking Algorithm Based on the VIKOR Method for Meta-Search Engines Mojtaba Jamshidi; Mastoreh Haji; Mohamad Reza Kamankesh; Mahya Daghineh; Abdusalam Abdulla Shaltooki
JOIV : International Journal on Informatics Visualization Vol 3, No 3 (2019)
Publisher : Society of Visual Informatics

Full PDF (1062.593 KB) | DOI: 10.30630/joiv.3.3.269

Abstract

Ranking of web pages is one of the most important parts of a search engine and is, in fact, the process through which the search engine estimates the quality of a page. In this study, a ranking algorithm based on the VIKOR multi-criteria decision-making method is proposed for Meta-Search Engines (MSEs). The considered MSE first retrieves the pages suggested for the search term by eight search engines: Teoma, Google, Yahoo!, AlltheWeb, AltaVista, Wisenut, ODP, and MSN. At most the first 10 results are selected from each search engine, creating an initial dataset of 80 web pages. The proposed parser is then run on these pages, and eight criteria are extracted from each: the rank of the page in the originating search engine, access time, number of repetitions of the search terms, positions of the search terms in the page, number of media items in the page, number of imports in the page, number of incoming links, and number of outgoing links. Finally, using the VIKOR method and these extracted criteria, the web pages are ranked and the top 10 results are presented to the user. The proposed method is implemented in Java and MATLAB. In the experiments, the proposed method is run for a query and its ranking results are compared, in terms of accuracy, with three well-known search engines: Google, Yahoo, and MSN. The comparisons show that the proposed method offers higher accuracy.
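The standard VIKOR steps (normalize each criterion against its best/worst value, compute the group utility S and individual regret R, combine them into the compromise score Q, rank by Q ascending) can be sketched directly. The three example pages, weights and benefit/cost flags below are made-up illustrations, not the paper's criteria values:

```python
def vikor_rank(matrix, weights, benefit, v=0.5):
    """VIKOR: rank alternatives by compromise score Q (lower = better).
    `benefit[j]` is True when criterion j should be maximized."""
    m = len(matrix[0])
    best = [max(r[j] for r in matrix) if benefit[j] else min(r[j] for r in matrix)
            for j in range(m)]
    worst = [min(r[j] for r in matrix) if benefit[j] else max(r[j] for r in matrix)
             for j in range(m)]
    S, R = [], []
    for row in matrix:
        # Weighted normalized distance of each criterion from its best value.
        d = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
             if best[j] != worst[j] else 0.0
             for j in range(m)]
        S.append(sum(d))   # group utility
        R.append(max(d))   # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [v * (S[i] - s_min) / ((s_max - s_min) or 1) +
         (1 - v) * (R[i] - r_min) / ((r_max - r_min) or 1)
         for i in range(len(matrix))]
    return sorted(range(len(matrix)), key=lambda i: Q[i])

# Three hypothetical pages scored on (rank-in-engine, access time, term hits):
pages = [[1, 0.2, 12], [5, 0.1, 30], [3, 0.4, 8]]
weights = [0.5, 0.2, 0.3]
benefit = [False, False, True]  # lower rank & time better; more hits better
print(vikor_rank(pages, weights, benefit))  # page indices, best first
```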
Performance Evaluation of TCP Vegas over TCP Reno and TCP NewReno over TCP Reno Tanjia Chowdhury; Mohammad Jahangir Alam
JOIV : International Journal on Informatics Visualization Vol 3, No 3 (2019)
Publisher : Politeknik Negeri Padang

Full PDF (1771.52 KB) | DOI: 10.30630/joiv.3.3.270

Abstract

In the transport layer, two Internet protocols operate: the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP provides a connection-oriented service and handles congestion control, flow control, and error detection, whereas UDP provides none of these services. TCP has several congestion control variants, such as TCP Reno, TCP Vegas, TCP NewReno, and TCP Tahoe. In this paper, we focus on the behavior of TCP Reno versus TCP Vegas, and of TCP NewReno versus TCP Reno, when they share the same bottleneck link at a router. To investigate this situation, we used the drop-tail and RED queue management algorithms at the router and the NS-2 simulator for the simulations. From the simulation results, we observed that the performance of TCP Reno and TCP Vegas differs in the two cases. With the drop-tail algorithm, TCP Reno achieves better performance and throughput and acts more aggressively than Vegas. With the Random Early Detection (RED) algorithm, both congestion control mechanisms provide fairer service when they coexist on the same link. TCP NewReno provides better performance than TCP Reno.
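The "aggressive" loss-driven behavior of Reno-family TCP comes from its AIMD window dynamics, which a textbook-style toy model makes concrete (this is a simplification for intuition, not an NS-2 model; the loss round and thresholds are arbitrary):

```python
def reno_cwnd(losses, rounds=20, ssthresh=8):
    """Toy AIMD sketch of TCP Reno's congestion window: slow start
    doubles cwnd up to ssthresh, congestion avoidance adds 1 per RTT,
    and a loss halves the window (fast recovery, simplified)."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in losses:                 # multiplicative decrease on loss
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:             # slow start: exponential growth
            cwnd *= 2
        else:                             # congestion avoidance: +1 per RTT
            cwnd += 1
    return trace

# Window grows until a loss at RTT 10, then halves and re-probes.
print(reno_cwnd(losses={10}, rounds=15))
```

Vegas, by contrast, reacts to rising RTT before losses occur, which is why it loses bandwidth to Reno's sawtooth under a drop-tail queue.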
Thermostats: an Open Source Shiny App for Your Open Data Repository Dasapta Erwin Irawan; Muhammad Aswan Syahputra; Prana Ugi; Deny Juanda Puradimaja
JOIV : International Journal on Informatics Visualization Vol 3, No 2-2 (2019): Internet of Things and Smart Environments
Publisher : Society of Visual Informatics

Full PDF (1146.77 KB) | DOI: 10.30630/joiv.3.2-2.282

Abstract

Hydrochemical analysis has emerged as a powerful methodology for geothermal system profiling. Indonesia, with more than 100 active volcanoes, is a capital of geothermal energy, so an analytical, data-driven, and user-focused online application for geothermal water quality is needed. We proudly introduce Thermostats (https://aswansyahputra.shinyapps.io/thermostats/). We collected water quality data from 416 geothermal sites across Indonesia. The three main objectives are to provide an open, free-to-use online data repository, to visualize the dataset to suit users' needs, and to help users understand the geothermal system of each particular site. Ultimately, we hope users will like the system and donate their own datasets to make it better for future users. We designed the online app using Shiny because it is open source, lightweight and portable, and it is very intuitive for presenting our descriptive, bivariate and multivariate statistics. We selected Principal Component Analysis and Cluster Analysis as two robust statistical methods for water sample classification. Users can add their own dataset by making a pull request on GitHub (https://github.com/dasaptaerwin/thermostats) or sending it to us by email, to make it visible in the application and included in the visualization. We made the application portable, so it can be installed on a local computer or a server, enabling an easy and fluid way of sharing data between collaborators.
