Articles

Found 3 Documents
Journal: Sriwijaya Journal of Informatics and Applications

Automatic Clustering and Fuzzy Logical Relationship to Predict the Volume of Indonesia Natural Rubber Export
Widya Aprilini; Dian Palupi Rini; Hadipurnawan Satria
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.51

Abstract

Natural rubber is one of the pillars of Indonesia's export commodities. However, over the last few years, the export value of natural rubber has decreased due to an oversupply of this commodity in the global market. Predicting the volume of Indonesia's natural rubber exports can help address this problem, and the predicted values can also help the government compile market intelligence for natural rubber commodities periodically. In this study, the export volume of natural rubber was predicted using Automatic Clustering to generate the intervals of a Fuzzy Time Series, a combination known as Automatic Clustering and Fuzzy Logical Relationship (ACFLR). The data consist of 51 annual observations from 1970 to 2020. The purpose of this study is to predict the volume of Indonesia's natural rubber exports and to compare the prediction results of ACFLR against Chen's Fuzzy Time Series. The results showed a significant difference between the two methods: ACFLR achieved a MAPE of 0.5316%, while Chen's Fuzzy Time Series model achieved 8.009%. This shows that the ACFLR method performs better than the pure Fuzzy Time Series in predicting the volume of Indonesia's natural rubber exports.
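The error metric the abstract compares the two models on, Mean Absolute Percentage Error (MAPE), can be sketched as follows. This is a minimal illustration with hypothetical export-volume values, not the paper's data or models:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, expressed as a percentage."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and of equal length")
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical yearly export volumes and model predictions (illustrative only).
actual = [2400.0, 2500.0, 2600.0]
predicted = [2388.0, 2512.0, 2587.0]
print(round(mape(actual, predicted), 4))  # → 0.4933
```

A lower MAPE means the predictions deviate less, in relative terms, from the observed series, which is why the 0.5316% reported for ACFLR indicates a much closer fit than the 8.009% of Chen's model.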
Automatic Data Extraction Utilizing Structural Similarity From A Set of Portable Document Format (PDF) Files
Hadipurnawan Satria; Anggina Primanita
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.89

Abstract

Instead of storing data in databases, office workers often keep work-related data in document or report files that they can conveniently access with popular off-the-shelf software, such as Portable Document Format (PDF) files. Their workplaces may use databases, but the workers usually have neither the privileges nor the proficiency to use them directly. Such workplaces typically provide front-end systems, such as a Management Information System (MIS), from which workers obtain reports or documents containing their data. These documents are meant for immediate or presentational use, but workers often keep the files because the data inside may become useful later. This way, they can manipulate and combine data from one or more report files to suit their work needs on occasions when the MIS cannot fulfill those needs. To do so, workers need to extract data from the report files. However, the files also contain formatting and other content such as organization banners and signature placeholders, so extracting the data is not easy, and workers are often forced to perform repeated copy-and-paste actions. This is not only tedious but also time-consuming and prone to errors. Automatic data extraction is not new; many existing solutions are available, but they typically require human guidance before the extraction can become truly automatic, and they may require expertise that makes workers hesitant to use them in the first place. A particular MIS function can produce many report files, each containing distinct data but all structurally similar. In this paper we demonstrate that, by exploiting the similarity among PDF files that come from the same source, it is possible to create a fully automatic data extraction system that requires no human guidance.
First, a model is generated by analyzing a small sample of PDFs; the model is then used to extract data from all PDF files in the set. Our experiments show that the system can quickly achieve a 100% accuracy rate with very few sample files. In occasional cases where the data inside the PDFs are not sufficiently distinct from one another, accuracy falls below 100%, but this can be easily detected and fixed with slight human intervention. In such cases, eliminating human intervention entirely may not be possible, but the amount needed can be significantly reduced.
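The core idea of model-then-extract can be sketched in plain Python: positions whose text is identical across all sample documents are treated as static template content, while positions that vary are treated as data. This is a simplified, line-based illustration with hypothetical report contents; the paper itself works on the layout of real PDF files:

```python
def build_model(samples):
    """Mark each line position as 'static' (identical in every sample) or 'data'."""
    rows = zip(*(s.splitlines() for s in samples))
    return ["static" if len(set(row)) == 1 else "data" for row in rows]

def extract(model, report):
    """Return only the lines the model marked as data."""
    return [line for tag, line in zip(model, report.splitlines()) if tag == "data"]

# Hypothetical reports produced by the same template.
samples = [
    "ACME Corp Monthly Report\nName: Alice\nTotal: 120\nAuthorized signature",
    "ACME Corp Monthly Report\nName: Bob\nTotal: 95\nAuthorized signature",
]
model = build_model(samples)
print(extract(model, "ACME Corp Monthly Report\nName: Carol\nTotal: 300\nAuthorized signature"))
# → ['Name: Carol', 'Total: 300']
```

This also illustrates the failure mode the abstract mentions: if a data field happens to hold the same value in every sample, it is indistinguishable from static template text, and a larger sample or slight human intervention is needed.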
Predictive Modeling of Air Quality Index Using Ensemble Learning and Multivariate Analysis
Primanita, Anggina; Satria, Hadipurnawan
Sriwijaya Journal of Informatics and Applications Vol 5, No 2 (2024)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v5i2.121

Abstract

Breathing polluted air can cause multiple health problems, so it is important to understand and predict the air quality in the environment. The Air Quality Index (AQI) is a measure of air pollutant levels; in Indonesia, this value is measured and published regularly by the Meteorological, Climatological, and Geophysical Agency. In this research, four commonly used regression algorithms were applied to AQI data: Random Forest, Decision Tree, K-Nearest Neighbors, and AdaBoost. All models were developed to analyze 1096 AQI records, and the Mean Squared Error of each model was computed as a measure of comparison. Random Forest was found to be the best-performing algorithm; it generalizes well without overfitting the data set.
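The comparison metric named above, Mean Squared Error, can be sketched as follows. The AQI values and model outputs here are hypothetical placeholders, not the study's data, and the point is only how two regressors would be ranked by MSE:

```python
def mse(actual, predicted):
    """Mean Squared Error between observed and predicted AQI values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical AQI observations and predictions from two models.
actual  = [52.0, 61.0, 48.0, 75.0]
model_a = [50.0, 63.0, 47.0, 73.0]   # e.g. an ensemble such as Random Forest
model_b = [45.0, 70.0, 55.0, 80.0]   # e.g. a weaker single learner

# The model with the lower MSE fits this sample better.
print(mse(actual, model_a) < mse(actual, model_b))  # → True
```

Because MSE squares each residual, large prediction errors are penalized disproportionately, which is one reason it is a common yardstick for comparing regression models on the same data set.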