Contact Name
Much Aziz Muslim
Contact Email
a212muslim@yahoo.com
Phone
+628164243462
Journal Mail Official
shmpublisher@gmail.com
Editorial Address
J. Karanglo No. 64 Semarang
Location
Kota Semarang,
Jawa Tengah
INDONESIA
Journal of Soft Computing Exploration
Published by SHM Publisher
ISSN : 2746-7686     EISSN : 2746-0991     DOI : -
Core Subject : Science
Journal of Soft Computing Exploration publishes scientific research papers related to soft computing. The scope covers theory, scientific applications, and novel insights in related fields. Soft computing topics include artificial intelligence, applied algebra, neuro computing, fuzzy logic, rough sets, probabilistic techniques, machine learning, metaheuristics, and many other soft-computing approaches. Areas of application include data mining, text mining, pattern recognition, image processing, medical science, mechanical engineering, electronic and electrical engineering, supply chain management, resource management, strategic planning, scheduling, transportation, operational research, and robotics.
Articles: 146 Documents
Flood early warning system at Jakarta dam using internet of things (IoT)-based real-time fishbone method to support industrial revolution 4.0 Farabi, Muhammad Rizqi Al; Sintawati, Andini
Journal of Soft Computing Exploration Vol. 5 No. 2 (2024): June 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i2.293

Abstract

This research aims to develop an effective flood early warning system that provides timely information to the public and supports the government in disaster management. A Raspberry Pi minicomputer functions as the central controller, collecting data from a water level sensor to measure water height, an ultrasonic sensor for further monitoring, a DHT11 sensor to monitor temperature and humidity, and a buzzer as an audible warning device. The research method involves a literature review and data acquisition from related journals. These data are used to design an Internet of Things (IoT)-based flood detection tool with the Raspberry Pi minicomputer as the main controller. The system can be deployed in vulnerable locations such as reservoirs, sluice gates, and rivers as part of the Smart City and Smart Environment concepts. The test results indicate that the developed early warning system, integrating the Raspberry Pi minicomputer, the water level sensor, the ultrasonic sensor, the DHT11 sensor, and the buzzer, performs close to flawlessly. Real-time information is transmitted through the Twitter social media platform, which proves more effective than manual notifications. The system can provide accurate early warnings, reduce flood-related damage, and contribute positively to flood prevention and disaster management efforts. This research is expected to make a significant contribution to improving community and government preparedness for future flood disasters.
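The abstract names the hardware but not the wiring, thresholds, or alerting code, so the following is only a rough sketch of how such a monitoring loop might look on a Raspberry Pi, assuming gpiozero for the ultrasonic sensor, a digital water-level switch, and the buzzer, and tweepy for the Twitter alerts. All pins, thresholds, and credentials are placeholders, not the authors' values.

```python
import time
from gpiozero import DistanceSensor, DigitalInputDevice, Buzzer
import tweepy

ultrasonic = DistanceSensor(echo=24, trigger=23)   # assumed wiring
water_switch = DigitalInputDevice(17)              # assumed: goes high when water reaches the probe
buzzer = Buzzer(18)

# Hypothetical credentials; real keys come from the Twitter developer portal.
twitter = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

ALERT_DISTANCE_M = 0.30  # assumed threshold: water surface within 30 cm of the sensor
alerted = False

while True:
    distance = ultrasonic.distance  # metres (gpiozero caps at max_distance, 1 m by default)
    flooding = water_switch.value or distance < ALERT_DISTANCE_M
    if flooding:
        buzzer.on()
        if not alerted:  # tweet once per flood event instead of every sample
            twitter.create_tweet(text=f"Flood warning: water level high ({distance:.2f} m to sensor)")
            alerted = True
    else:
        buzzer.off()
        alerted = False
    time.sleep(60)  # sample once per minute
```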
An optimum hyperparameters of ResNet-50 for orchid classification based on convolutional neural network Alvian Ideastari, Nukat; Atika Sari, Christy; Faisal, Edi; Arifin, Zaenal; Danang Krismawan, Andi; Muslih, Muslih
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.297

Abstract

There are many types of orchids in Indonesia, such as Phalaenopsis amabilis (Moon Orchid), Cattleya, and others. Because the shapes and colors of different orchid flowers look similar, a system is needed that can classify them automatically. In this research, a Convolutional Neural Network with the ResNet50 architecture is used to classify orchid species. Four types of orchids are used, namely Moon Orchids, xDoritaenopsis Orchids, Cattleya Orchids, and Coelogyne pandurata Orchids, with 1,000 images for each type. The aim of this research is to implement deep learning using the Convolutional Neural Network method combined with the ResNet50 architecture, to identify the types of orchid flowers, and to calculate the classification accuracy. The research uses 4,000 orchid images with an 80:20 data split, so each type contributes 800 training images and 200 test images. The ResNet50 model is evaluated with confusion matrix metrics, namely accuracy, precision, recall, specificity, and F1-score, at 10, 20, 30, and 40 epochs. The experiments produce the highest test accuracy, 98.87%, at 30 epochs and the lowest test accuracy, 97.75%, at 10 epochs.
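As a rough illustration of the setup described above (not the authors' pipeline), the sketch below builds a ResNet50-based classifier for four orchid classes with an 80:20 split of an image folder. The dataset path, image size, optimizer, and the use of ImageNet weights are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import ResNet50

IMG_SIZE, NUM_CLASSES = (224, 224), 4

# Hypothetical folder layout: orchids/<class name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "orchids/", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "orchids/", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

base = ResNet50(include_top=False, weights="imagenet",
                input_shape=IMG_SIZE + (3,), pooling="avg")

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = layers.Rescaling(1.0 / 255)(inputs)   # simple rescaling; ResNet50's canonical preprocess_input could be used instead
x = base(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=30)  # the abstract reports its best accuracy at 30 epochs
```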
Light sensor optimization based on finger blood estimation and IoT-integrated Fathurrahman, Haris Imam Karim; Robi'in, Bambang; Saputro, Sigit Suryo; Sudaryanti, Sudaryanti
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.298

Abstract

Diabetes mellitus is a prevalent disease in society. The condition results from various causes, such as lifestyle choices or genetic predisposition. To prevent diabetes mellitus, blood glucose levels must be monitored periodically and dietary intake must be managed. Blood glucose monitoring still relies on incision-based or minimally invasive approaches, which pose a risk of infection and tissue damage. This study devised a method to optimize a light sensor for measuring blood glucose levels, combining sensor optimization with integrated Internet of Things (IoT) technology. The research findings demonstrate that the optimization strategy leads to more consistent sensor values, which can then be transmitted wirelessly through the IoT network.
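The abstract does not detail the optimization strategy, so the sketch below only illustrates one plausible reading of it: smoothing repeated light-sensor readings with a moving average before sending them to an IoT endpoint. The sensor read function and the endpoint URL are hypothetical placeholders, not the authors' implementation.

```python
from collections import deque
import random
import requests

WINDOW = 20
ENDPOINT = "https://iot.example.org/api/glucose"  # hypothetical IoT endpoint

def read_light_sensor() -> float:
    # Stand-in for the actual ADC read of the finger light sensor.
    return 512 + random.gauss(0, 20)

readings = deque(maxlen=WINDOW)
for _ in range(100):
    readings.append(read_light_sensor())
    smoothed = sum(readings) / len(readings)  # moving average damps noisy raw values
    requests.post(ENDPOINT, json={"light": round(smoothed, 2)}, timeout=5)
```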
Measuring the usability effectiveness of using card menus and tree menus in school web applications Hadiq, Hadiq; Solehatin, Solehatin; Djuniharto, Djuniharto; Muslim, Much Aziz
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.299

Abstract

The aim of this research is to measure the usability and effectiveness of a web application that uses card menus and tree menus, using user-friendliness criteria and access speed as indicated by the number of clicks made by the user. The method used is the Task-centered User Interface method, which allows the interface layout to be planned and evaluated according to user needs. The method has four phases: user identification through needs analysis, user interface design, implementation of the card menu and tree menu designs, and testing of the usability and effectiveness requirements. The results show that card menus are more effective than tree menus because the menu can be brought up and accessed directly. Card menus also have a higher usability index than tree menus. In the usability measurements carried out through direct observation and questionnaires, the percentage of user understanding, ease, and speed was 87% for the card menu display and 60% for the tree menu, so the card menu display was better accepted by users. The novelty of this research lies in recommendations that web application developers can use to choose the appropriate menu type when building web-based applications with specifications similar to the school finance application studied here.
Enhancing soccer pass receiver prediction in broadcast images through advanced deep learning techniques: A comprehensive study on model optimization and performance evaluation Paneru, Biplov; Paneru, Bishwash; Poudyal, Ramhari; Poudyal, Khem
Journal of Soft Computing Exploration Vol. 5 No. 2 (2024): June 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i2.301

Abstract

A graph neural network (GNN) model specifically designed for football pass receiver prediction in broadcast images is presented in this study. Important node properties, including ball-possession indicators, one-hot-encoded team values, and normalized ground positions, are incorporated into the model, along with edge weights that account for player distances. The architecture consists of a linear layer, several GNN message-passing layers, a softmax activation, and binary cross-entropy (BCE) loss for training, with a weighted BCE loss used to overcome class imbalance. Of 206 examples, 101 valid predictions were made, indicating a predictive accuracy of 0.50 on the evaluation data. Comparative analyses show that GATv2 (0.85) and GAT (0.63) perform better in terms of optimization and accuracy, respectively. The paper demonstrates the effectiveness of this approach in recognizing football pass receivers, highlighting developments in computer vision applications for sports analytics.
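To make the described architecture concrete, here is a minimal sketch of a per-player receiver classifier, assuming PyTorch Geometric (not necessarily the authors' framework): a linear encoder, GATv2 message-passing layers with player distance as an edge feature, and a weighted binary cross-entropy loss. Feature layout, layer sizes, and the class weight are assumptions.

```python
import torch
from torch import nn
from torch_geometric.nn import GATv2Conv

class ReceiverGNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.conv1 = GATv2Conv(hidden, hidden, edge_dim=1)  # edge feature = inter-player distance
        self.conv2 = GATv2Conv(hidden, hidden, edge_dim=1)
        self.head = nn.Linear(hidden, 1)                    # one receiver logit per player node

    def forward(self, x, edge_index, edge_attr):
        h = torch.relu(self.encode(x))
        h = torch.relu(self.conv1(h, edge_index, edge_attr))
        h = torch.relu(self.conv2(h, edge_index, edge_attr))
        return self.head(h).squeeze(-1)

# Assumed node features: normalized pitch position, one-hot team, ball-possession flag.
model = ReceiverGNN(in_dim=6)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([21.0]))  # roughly 1 receiver among 22 players
```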
Using genetic algorithm feature selection to optimize XGBoost performance in Australian credit Pertiwi, Dwika Ananda Agustina; Ahmad, Kamilah; Salahudin, Shahrul Nizam; Annegrat, Ahmed Mohamed; Muslim, Much Aziz
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.302

Abstract

To reduce credit risk, credit institutions need to implement credit risk management practices so that lending institutions can survive in the long term. Data mining is one of the techniques used for credit risk management: it can discover information patterns in large datasets using classification techniques with a measurable level of accuracy. This research aims to increase the accuracy of classification algorithms in predicting credit risk by applying a genetic algorithm as the feature selection method, so that only the most important features are used to extract credit risk information. The research applies the XGBoost classifier to the Australian credit dataset and evaluates it by measuring accuracy and AUC. The results show an increase in accuracy of 2.24%, reaching 89.93% after optimization with the genetic algorithm. Genetic algorithm feature selection can therefore improve the accuracy of the XGBoost algorithm on the Australian credit dataset.
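As one way to picture the approach (the paper does not publish its GA settings, so population size, generations, and mutation rate below are assumptions), a genetic algorithm can search binary feature masks and score each mask by cross-validated XGBoost accuracy:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    clf = XGBClassifier(eval_metric="logloss")
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def ga_select(X, y, pop_size=20, generations=10, mut_rate=0.1, rng=np.random.default_rng(0)):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            flip = rng.random(n) < mut_rate                      # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)

# Usage, with X, y being the Australian credit features and labels:
# best_mask = ga_select(X, y)
# best_score = fitness(best_mask, X, y)
```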
A new CNN model integrated in onion and garlic sorting robot to improve classification accuracy Lestari, Apri Dwi; Khan, Atta Ullah; Pertiwi, Dwika Ananda Agustina; Muslim, Much Aziz
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.304

Abstract

The vegetable market holds a sizeable profit share in the agricultural industry, so the ability to classify vegetable types quickly and accurately is needed. Some vegetables have similar shapes, such as onions and garlic, which can lead to misidentification. Through computer vision and machine learning, vegetables, especially onions, can be classified based on characteristics of shape, size, and color. To classify shallot and garlic images, a CNN model was developed with 4 convolutional layers, each using a 2x2 kernel, for a total of 914,242 trainable parameters. The convolutional layers use the ReLU activation function, and the output layer uses softmax. Model accuracy on the training data is 0.9833 with a loss value of 0.762.
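A minimal Keras sketch matching the stated description (4 convolutional layers with 2x2 kernels, ReLU activations, softmax output) is shown below; the filter counts, input size, pooling, and dense layer are assumptions, so the parameter count will not match 914,242 exactly.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (2, 2), activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, (2, 2), activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (2, 2), activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, (2, 2), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # shallot vs garlic
])
model.build(input_shape=(None, 128, 128, 3))  # assumed image size
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()  # prints the trainable parameter count for comparison
```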
Comparison of gridsearchcv and bayesian hyperparameter optimization in random forest algorithm for diabetes prediction Muzayanah, Rini; Pertiwi, Dwika Ananda Agustina; Ali, Muazam; Muslim, Much Aziz
Journal of Soft Computing Exploration Vol. 5 No. 1 (2024): March 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i1.308

Abstract

Diabetes Mellitus (DM) is a chronic disease whose complications have a significant impact on patients and the wider community. In its early stages, diabetes mellitus usually does not cause significant symptoms, but if it is detected too late and not handled properly, it can cause serious health problems. Early detection of diabetes is one solution to this problem. In this research, diabetes detection was carried out using Random Forest with GridSearchCV and Bayesian hyperparameter optimization. The research was carried out through literature study, model development using a Kaggle Notebook, model testing, and results analysis. The study aims to compare GridSearchCV and Bayesian hyperparameter optimization and to analyze the advantages and disadvantages of each when applied to diabetes prediction with the Random Forest algorithm. The results show that each approach has its own advantages and disadvantages. GridSearchCV excels in accuracy, reaching 0.74, although it takes longer, at 338.416 seconds. Bayesian optimization has a slightly lower accuracy of 0.73, a difference of 0.01, but requires less time, at 177.085 seconds.
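The comparison can be sketched as follows, assuming scikit-learn's GridSearchCV and scikit-optimize's BayesSearchCV over an illustrative search space (the paper's actual grid, dataset loading, and iteration budget are not given, so a synthetic stand-in dataset is used here):

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from skopt import BayesSearchCV  # scikit-optimize

# Synthetic stand-in for the diabetes dataset used in the paper.
X, y = make_classification(n_samples=768, n_features=8, random_state=0)

param_space = {
    "n_estimators": [100, 200, 400],
    "max_depth": [4, 8, 16, 32],
    "min_samples_split": [2, 5, 10],
}

searches = [
    ("GridSearchCV", GridSearchCV(RandomForestClassifier(random_state=0), param_space, cv=5)),
    ("BayesSearchCV", BayesSearchCV(RandomForestClassifier(random_state=0), param_space,
                                    n_iter=25, cv=5, random_state=0)),
]
for name, search in searches:
    start = time.time()
    search.fit(X, y)
    print(f"{name}: best CV accuracy {search.best_score_:.3f} in {time.time() - start:.1f}s")
```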
Comparison of the performance of naive bayes and support vector machine in sirekap sentiment analysis with the lexicon-based approach Setiyawan, Ramadhana; Mustofa, Zaenal
Journal of Soft Computing Exploration Vol. 5 No. 2 (2024): June 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i2.367

Abstract

The general public often uses the SiRekap application to follow the progress of elections and to voice criticism. Government policies have both good and bad outcomes, and users leave reviews and ratings on the Google Play Store, where the app can be downloaded. These reviews can be collected and processed into useful information through sentiment analysis using the Naïve Bayes and Support Vector Machine methods. The two methods behave differently during training and evaluation, although the results across the scenarios tested do not differ greatly. During training, the Support Vector Machine model processes lexicon-labeled comment data with about 10% higher accuracy than the Naïve Bayes model. During evaluation, the two models produce the same accuracy of 72%. Although both models reach the same accuracy, there are differences in precision, recall, and F1-score: the Support Vector Machine model is 5% better in precision, 8% in recall, and 3% in F1-score compared to the Naïve Bayes model. This research is limited to assessing the performance of two machine learning models, Naïve Bayes and SVM, with lexicon-based labeling. The evaluation results could be improved by adding data, applying text data augmentation, or involving language-sentiment experts in constructing the lexicon.
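As a small illustration of the comparison (not the authors' pipeline), the sketch below trains Naïve Bayes and a linear SVM on TF-IDF features of lexicon-labeled review texts. The review strings and labels are tiny illustrative placeholders, and the preprocessing (cleaning, stemming, the lexicon itself) is assumed to have been done beforehand.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["aplikasi membantu sekali", "sering gagal memuat data",
         "tampilan bagus dan cepat", "hasil tidak akurat"]      # illustrative placeholders only
labels = ["positive", "negative", "positive", "negative"]       # lexicon-assigned in the study

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF features feed each classifier
    pipe.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, pipe.predict(X_test), zero_division=0))
```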
Optimizing the implementation of the BFS and DFS algorithms using the web crawler method on the kumparan site Mustaqim, Amirul; Dinova, Dony Benaya; Fadhilah, Muhammad Syafiq; Seivany, Ravenia; Prasetiyo, Budi; Muslim, Much Aziz
Journal of Soft Computing Exploration Vol. 5 No. 2 (2024): June 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i2.309

Abstract

Efficient access to timely information is critical in today's digital era. Web crawlers, automated programs that navigate the Internet, play an important role in collecting data from websites such as Kumparan, a leading news site in Indonesia. This research examines the effectiveness of the Breadth-First Search (BFS) and Depth-First Search (DFS) algorithms in indexing Kumparan content. The results show that BFS consistently indexes more files and more comprehensively, but with longer execution times, while DFS provides faster initial results with fewer files. For example, at depth 4, BFS indexed 949 files in 886.94 seconds, while DFS indexed 470 files in 233.02 seconds. These findings highlight the trade-off between thoroughness and speed when selecting a crawling algorithm for a particular website. The research provides insights into optimizing web crawler technology for complex websites such as Kumparan and suggests avenues for further research to improve crawling efficiency and adaptability across a variety of crawling scenarios.
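The BFS/DFS distinction comes down to how the crawl frontier is consumed: a queue (FIFO) gives breadth-first traversal, a stack (LIFO) gives depth-first. The sketch below shows a depth-limited crawler with requests and BeautifulSoup; the start URL, link filter, and depth limit are assumptions, not the authors' configuration.

```python
from collections import deque
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def get_links(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)
            if urljoin(url, a["href"]).startswith("https://kumparan.com")}

def crawl(start, max_depth=2, strategy="bfs"):
    frontier = deque([(start, 0)])
    visited = set()
    while frontier:
        # popleft() -> FIFO queue (BFS); pop() -> LIFO stack (DFS)
        url, depth = frontier.popleft() if strategy == "bfs" else frontier.pop()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        for link in get_links(url):
            frontier.append((link, depth + 1))
    return visited

# pages_bfs = crawl("https://kumparan.com", strategy="bfs")
# pages_dfs = crawl("https://kumparan.com", strategy="dfs")
```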

Page 10 of 15 | Total Records: 146