Articles

Found 14 Documents

OPTIMIZATION OF PORTFOLIO USING FUZZY SELECTION Wardani, Rahmania Ayu; Surono, Sugiyarto; Wen, Goh Kang
BAREKENG: Jurnal Ilmu Matematika dan Terapan Vol 16 No 4 (2022): BAREKENG: Journal of Mathematics and Its Applications
Publisher : PATTIMURA UNIVERSITY

DOI: 10.30598/barekengvol16iss4pp1325-1336

Abstract

The problem of portfolio optimization concerns allocating an investor's wealth among several security alternatives so that the maximum profit can be obtained. One method for addressing this problem is Fuzzy Portfolio Selection, which separates the objective function of return from the objective function of risk in order to determine the bounds of the membership functions to be used. The goal of this study is to apply the Fuzzy Portfolio Selection method to a set of chosen shares in a portfolio optimization problem, to evaluate the resulting return and risk, and to determine the budget proportion allocated to each share. The subjects of this study are the shares of 20 companies listed on the Bursa Efek Indonesia (Indonesia Stock Exchange) from 1 January 2021 to 1 January 2022. The results show that, of the 20 shares, 10 are suitable for forming the optimal portfolio: ADRO (0%), ANTM (43.3%), ASII (0%), BBCA (0%), BBRI (0%), BBTN (0%), BRPT (0%), BSDE (0%), ERAA (16%), and INCO (40.7%). The expected return of the portfolio is 0.0878895207 (about 8.8%), with a risk of 0.0226022117 (about 2.3%).
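As a rough illustration of the Zimmermann-style formulation that underlies fuzzy portfolio selection, the sketch below maximizes a common satisfaction level lambda subject to linear membership functions for the return goal and the risk goal. The asset data, the number of assets, and the tolerance bounds are illustrative assumptions, not the paper's data or its exact model.

```python
# A minimal sketch of fuzzy portfolio selection with linear membership functions
# for return and risk; all numbers below are assumed for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(250, 4))       # hypothetical daily returns, 4 assets
mu = returns.mean(axis=0)                               # expected return per asset
cov = np.cov(returns, rowvar=False)                     # covariance (risk) matrix

# Aspiration/tolerance levels for the two fuzzy goals (assumed values)
r_min, r_max = mu.min(), mu.max()                       # return goal bounds
v_min, v_max = np.diag(cov).min(), np.diag(cov).max()   # risk goal bounds

def neg_lambda(x):
    return -x[-1]                                       # maximise the common satisfaction level

def goal_memberships(x):
    w, lam = x[:-1], x[-1]
    mu_ret = (w @ mu - r_min) / (r_max - r_min)         # membership of the return goal
    mu_risk = (v_max - w @ cov @ w) / (v_max - v_min)   # membership of the risk goal
    return [mu_ret - lam, mu_risk - lam]                # both must stay >= 0

cons = [{"type": "ineq", "fun": lambda x, i=i: goal_memberships(x)[i]} for i in range(2)]
cons.append({"type": "eq", "fun": lambda x: x[:-1].sum() - 1.0})  # budget: weights sum to 1
bounds = [(0, 1)] * 4 + [(0, 1)]                        # long-only weights, lambda in [0, 1]

res = minimize(neg_lambda, x0=np.r_[np.full(4, 0.25), 0.5],
               bounds=bounds, constraints=cons)
print("weights:", res.x[:-1].round(3), "satisfaction level:", round(res.x[-1], 3))
```

The optimal weights here play the role of the budget proportions reported in the abstract; shares whose weight drops to zero are excluded from the portfolio.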
FUZZY TIME SERIES BASED ON THE HYBRID OF FCM WITH CMBO OPTIMIZATION TECHNIQUE FOR HIGH WATER PREDICTION Irsalinda, Nursyiva; Laely, Dera Kurnia; Surono, Sugiyarto
BAREKENG: Jurnal Ilmu Matematika dan Terapan Vol 17 No 3 (2023): BAREKENG: Journal of Mathematics and Its Applications
Publisher : PATTIMURA UNIVERSITY

DOI: 10.30598/barekengvol17iss3pp1245-1256

Abstract

Time series data represent measurements taken over a specific period and are often employed for forecasting. The typical forecasting approach involves analyzing relationships among the estimated variables. In this study, we apply a Fuzzy Time Series (FTS) model to water level data collected every 10 minutes at the Irish Achill Island observation station. The FTS, which is based on Fuzzy C-Means (FCM), is hybridized with the Cat and Mouse Based Optimizer (CMBO). This hybridization aims to address a known weakness of FTS, namely the determination of interval lengths, with the goal of enhancing prediction accuracy. Before forecasting, the FCM-CMBO process is executed to determine the optimal centroids used to define the interval lengths within the FTS framework. The study uses a dataset of 52,562 data points obtained from the official Kaggle website. Forecasting accuracy is assessed with the Mean Absolute Percentage Error (MAPE), where a smaller percentage indicates better performance. The proposed methodology effectively mitigates the limitations of interval length determination and improves forecasting accuracy: the MAPE of FTS-FCM before optimization is 20.180%, while that of FCM-CMBO is notably lower at 18.265%. These results highlight the superior performance of the FCM-CMBO hybrid approach, which achieves a forecasting accuracy of 81.735% against the actual data.
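The sketch below illustrates only the FCM part of the hybrid: clustering a one-dimensional series with fuzzy c-means and using the sorted centroids as interval centres for a fuzzy time series. The CMBO search over the centroids and the FTS forecasting rules are omitted, and the toy data and number of clusters are assumptions for illustration.

```python
# A minimal fuzzy c-means step for deriving FTS interval centres from a 1-D series;
# the CMBO refinement of these centroids is not shown here.
import numpy as np

def fuzzy_c_means(x, c=7, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Plain fuzzy c-means on a 1-D array x; returns sorted cluster centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)               # random initial fuzzy memberships
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)       # membership-weighted centroid update
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)))       # standard FCM membership update
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return np.sort(centers)

# Toy stand-in for the 10-minute water-level series (not the Kaggle dataset)
levels = np.cumsum(np.random.default_rng(1).normal(0, 0.05, 2000)) + 2.0
centers = fuzzy_c_means(levels)
bounds = (centers[:-1] + centers[1:]) / 2           # interval boundaries: midpoints between centres
print("interval centres:", centers.round(3))
print("interval boundaries:", bounds.round(3))
```

In the paper's hybrid, CMBO would further search around these centroids to minimize the forecasting error (MAPE) before the FTS rules are built.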
Chi-Square Feature Selection with Pseudo-Labelling in Natural Language Processing Afriyani, Sintia; Surono, Sugiyarto; Solihin, Iwan Mahmud
JTAM (Jurnal Teori dan Aplikasi Matematika) Vol 8, No 3 (2024): July
Publisher : Universitas Muhammadiyah Mataram

DOI: 10.31764/jtam.v8i3.22751

Abstract

This study evaluates the effectiveness of the Chi-Square feature selection method in improving the classification accuracy of linear Support Vector Machine, K-Nearest Neighbors, and Random Forest classifiers in natural language processing, and introduces a Pseudo-Labelling technique to improve semi-supervised classification performance. This is important in the context of NLP because accurate feature selection can significantly improve model performance by reducing data noise and focusing on the most relevant information, while Pseudo-Labelling helps exploit unlabelled data, which is particularly useful when labelled data are scarce. The methodology involves collecting relevant datasets, applying the Chi-Square method to filter out significant features, and applying Pseudo-Labelling to train semi-supervised models. The dataset used in this research consists of public comments related to the 2024 Presidential General Election, obtained by scraping Twitter. It contains a variety of public comments and opinions about the presidential candidates, including political views, support, and criticism. The experimental results show a significant improvement in classification accuracy, reaching 0.9200, with a precision of 0.8893, a recall of 0.9200, and an F1-score of 0.8828. The integration of Pseudo-Labelling markedly improves semi-supervised classification performance, suggesting that the combination of Chi-Square feature selection and Pseudo-Labelling can strengthen classification systems in various natural language processing applications. This opens up opportunities to develop more efficient methodologies for improving classification accuracy and effectiveness in natural language processing tasks, especially with linear Support Vector Machine, K-Nearest Neighbors, and Random Forest as well as semi-supervised learning.
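A minimal sketch of the pipeline described above, using scikit-learn: TF-IDF features, Chi-Square feature selection, a linear SVM, and a simple confidence-based pseudo-labelling step. The toy texts, the number of selected features, and the confidence threshold are assumptions; the study's actual tweet corpus and settings are not reproduced here.

```python
# Chi-square feature selection followed by pseudo-labelling with a linear SVM.
# All texts, labels, and thresholds below are illustrative assumptions.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

labeled_texts = ["good candidate program", "bad policy plan",
                 "strong leadership vision", "weak economic promise"]
labels = np.array([1, 0, 1, 0])                     # 1 = support, 0 = criticism
unlabeled_texts = ["great vision for economy", "terrible plan and policy"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_texts)
X_u = vec.transform(unlabeled_texts)

selector = SelectKBest(chi2, k=5)                   # keep the 5 most class-dependent features
X_sel = selector.fit_transform(X, labels)
X_u_sel = selector.transform(X_u)

clf = LinearSVC().fit(X_sel, labels)                # initial supervised model

# Pseudo-labelling: keep unlabeled samples the model is confident about
scores = clf.decision_function(X_u_sel)
confident = np.abs(scores) > 0.2                    # assumed confidence threshold
pseudo_labels = (scores[confident] > 0).astype(int)

# Retrain on labelled + confidently pseudo-labelled data
X_aug = vstack([X_sel, X_u_sel[confident]])
y_aug = np.concatenate([labels, pseudo_labels])
clf_final = LinearSVC().fit(X_aug, y_aug)
print("pseudo-labelled:", pseudo_labels, "final classes:", clf_final.classes_)
```

The same selection and pseudo-labelling steps would wrap K-Nearest Neighbors or Random Forest in place of the linear SVM.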
Dynamic Weighted Particle Swarm Optimization - Support Vector Machine Optimization in Recursive Feature Elimination Feature Selection: Optimization in Recursive Feature Elimination Sya'idah, Irma Binti; Surono, Sugiyarto; Khang Wen, Goh
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.3963

Abstract

Feature selection is a crucial step in data preprocessing to enhance machine learning efficiency, reduce computational complexity, and improve classification accuracy. The main challenge in feature selection for classification is identifying the most relevant and informative subset of features to improve prediction accuracy. Previous studies often produced suboptimal subsets, leading to poor model performance and low accuracy. This research aims to enhance classification accuracy by combining Recursive Feature Elimination (RFE) with the Dynamic Weighted Particle Swarm Optimization (DWPSO) and Support Vector Machine (SVM) algorithms. The method uses 12 datasets from the University of California, Irvine (UCI) repository; features are selected via RFE and then passed to the DWPSO-SVM algorithm. RFE iteratively removes the weakest features, constructing a model with the most relevant ones to enhance accuracy. The findings indicate that DWPSO-SVM with RFE significantly improves classification accuracy: for example, accuracy on the Breast Cancer dataset increased from 58% to 76%, and on the Heart dataset from 80% to 97%. The highest accuracy achieved was 100%, on the Iris dataset. The conclusion from these findings is that RFE in DWPSO-SVM offers consistent and balanced results in True Positive Rate (TPR) and True Negative Rate (TNR), providing reliable and accurate predictions for various applications.
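A compact sketch of the idea: RFE selects features using a linear SVM ranking, and a particle swarm with a linearly decreasing (dynamic) inertia weight then searches the SVM's C and gamma by cross-validated accuracy. The swarm parameters, search ranges, and the Breast Cancer dataset loaded from scikit-learn are assumptions for illustration, not the paper's exact DWPSO-SVM setup.

```python
# RFE feature selection feeding an RBF SVM whose C and gamma are tuned by a
# particle swarm with a linearly decreasing inertia weight (assumed settings).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC, LinearSVC

X, y = load_breast_cancer(return_X_y=True)

# Step 1: RFE iteratively drops the weakest features using a linear SVM ranking
selector = RFE(LinearSVC(max_iter=5000), n_features_to_select=10).fit(X, y)
X_sel = X[:, selector.support_]

# Step 2: PSO over (log10 C, log10 gamma) with dynamic inertia weight w_max -> w_min
rng = np.random.default_rng(0)
n_particles, n_iter = 10, 20
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
pos = rng.uniform([-2, -5], [3, 0], size=(n_particles, 2))
vel = np.zeros_like(pos)

def fitness(p):
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X_sel, y, cv=3).mean()   # cross-validated accuracy

pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for t in range(n_iter):
    w = w_max - (w_max - w_min) * t / n_iter              # dynamic (decreasing) weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -5], [3, 0])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best log10(C), log10(gamma):", gbest.round(2),
      "CV accuracy:", pbest_fit.max().round(3))
```

The decreasing inertia weight shifts the swarm from broad exploration early on to local refinement near the best hyperparameters in later iterations.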