Found 3 Documents
Journal: Journal of Applied Data Sciences

Volatility Analysis of Cryptocurrencies using Statistical Approach and GARCH Model: A Case Study on Daily Percentage Change
Sarmini, Sarmini; Widiawati, Chyntia Raras Ajeng; Febrianti, Diah Ratna; Yuliana, Dwi
Journal of Applied Data Sciences Vol 5, No 3: SEPTEMBER 2024
Publisher : Bright Publisher

DOI: 10.47738/jads.v5i3.261

Abstract

Cryptocurrency has become a significant subject in the global financial market, attracting investors and traders with its high volatility and profit potential. This study analyzes the daily volatility and GARCH volatility of six major cryptocurrencies: Bitcoin (BTC), Ethereum (ETH), Litecoin (LTC), USD Coin (USDC), Tether (USDT), and Ripple (XRP). Daily percentage change data and GARCH volatility are analyzed over specific time periods. The analysis reveals that Bitcoin (BTC) has an average daily percentage change of 0.366%, while Ethereum (ETH) has 0.376%. Litecoin (LTC) shows a daily percentage change of 0.166%, whereas USD Coin (USDC) and Tether (USDT) have very low daily percentage changes, nearly approaching zero. In terms of GARCH volatility, Ethereum (ETH) stands out with a volatility of 0.198, followed by Bitcoin (BTC) with a volatility of 0.121. The study's results indicate that cryptocurrencies are vulnerable to extreme price fluctuations, evidenced by the asymmetry and heavy kurtosis of their return distributions. Volatility correlation analysis reveals significant relationships that matter for risk management and portfolio diversification. These findings contribute to understanding cryptocurrency volatility characteristics and aid stakeholders in making informed investment decisions.
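The two quantities the abstract reports, daily percentage change and GARCH conditional volatility, can be sketched as follows. This is a minimal illustration, not the paper's code: the GARCH(1,1) parameters `omega`, `alpha`, and `beta` are placeholder values (the paper's fitted coefficients are not given in the abstract), and the price series here is synthetic.

```python
import numpy as np

def daily_pct_change(prices):
    """Daily percentage change: 100 * (P_t - P_{t-1}) / P_{t-1}."""
    prices = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(prices) / prices[:-1]

def garch11_volatility(returns, omega=0.05, alpha=0.10, beta=0.85):
    """Conditional volatility from the GARCH(1,1) recursion
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    seeded with the sample variance. Parameters are illustrative."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

# Demo on a synthetic random-walk price series.
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 500)))
r = daily_pct_change(prices)
vol = garch11_volatility(r)
print("mean daily % change:", round(float(r.mean()), 3))
print("mean GARCH volatility:", round(float(vol.mean()), 3))
```

In practice one would estimate `omega`, `alpha`, and `beta` by maximum likelihood (for example with a dedicated econometrics package) rather than fixing them by hand.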
Predicting Network Performance Degradation in Wireless and Ethernet Connections Using Gradient Boosting, Logistic Regression, and Multi-Layer Perceptron Models
Widiawati, Chyntia Raras Ajeng; Sarmini, Sarmini; Yuliana, Dwi
Journal of Applied Data Sciences Vol 6, No 1: JANUARY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i1.519

Abstract

This study explores predicting network performance degradation in wireless and Ethernet connections using three machine learning algorithms: XGBoost, Logistic Regression, and Multi-Layer Perceptron (MLP). Key metrics, including accuracy, precision, recall, F1-score, and AUC-ROC, were employed to evaluate model performance. The MLP classifier achieved the highest accuracy (98.7%) and AUC-ROC (0.9998), with a precision of 1.0000 and recall of 0.8622, resulting in an F1-score of 0.9260. Logistic Regression provided a reasonable baseline, with an accuracy of 93.67%, AUC-ROC of 0.9565, and an F1-score of 0.5992, but struggled with non-linear dependencies. XGBoost showed limited utility in detecting degradation events, achieving an F1-score of 0 despite a perfect AUC-ROC (1.0), indicating sensitivity to imbalanced data. Through hyperparameter tuning, MLP demonstrated robustness in capturing complex patterns in network latency metrics (local_avg and remote_avg), with remote_avg emerging as the most predictive feature for identifying degradation across both network types. Visualizations of latency dynamics confirm this: spikes in remote_avg are closely associated with degradation events in both network types. The findings underscore the effectiveness of using latency metrics and machine learning to anticipate network issues, suggesting that MLP is particularly well-suited for real-time, predictive network monitoring. Integrating such models could enhance network reliability by enabling proactive intervention, crucial for sectors reliant on continuous connectivity. Future work could expand on feature sets, explore adaptive thresholding, and implement these predictive models in live network environments for real-time monitoring and automated response.
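The five evaluation metrics named in the abstract can be computed from predictions and scores as below. This is a plain-NumPy sketch for reference, not the paper's evaluation code (which presumably used a standard ML library); the toy labels and scores are invented for the demo. Note how the AUC-ROC, computed from score rankings, can be high even when the thresholded F1-score is poor, the pattern the abstract reports for XGBoost.

```python
import numpy as np

def classification_metrics(y_true, y_pred, y_score):
    """Accuracy, precision, recall, F1, and AUC-ROC for a binary task.
    AUC-ROC uses the pairwise-ranking (Mann-Whitney) formulation."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = float(np.mean(y_true == y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    scores = np.asarray(y_score, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    auc = float(np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg]))
    return {"accuracy": acc, "precision": float(prec),
            "recall": float(rec), "f1": float(f1), "auc_roc": auc}

# Demo: scores rank perfectly (AUC = 1.0) but a too-high threshold
# predicts no positives, so precision/recall/F1 collapse to 0.
y_true = [1, 1, 0, 0, 0, 0]
y_score = [0.49, 0.45, 0.30, 0.20, 0.10, 0.05]
y_pred = [int(s >= 0.5) for s in y_score]
print(classification_metrics(y_true, y_pred, y_score))
```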
GAN-Enhanced Radial Basis Function Networks for Improved Landslide Susceptibility Mapping
Widiawati, Chyntia Raras Ajeng; Maulita, Ika; Purwati, Yuli; Wahid, Arif Mu'amar
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1035

Abstract

Landslide susceptibility modeling is a critical task for disaster mitigation, yet it is frequently undermined by a severe class imbalance inherent in landslide datasets, where non-landslide instances vastly outnumber actual landslide events. This imbalance leads to biased machine learning models with poor predictive power for the minority (landslide) class, resulting in unreliable hazard maps. This study, focusing on the high-risk area of Malang Regency, Indonesia, addresses this challenge by proposing an innovative framework that integrates a Generative Adversarial Network (GAN) for synthetic data augmentation with a Radial Basis Function Network (RBFN) for classification. A highly imbalanced dataset with a 1:10 ratio of landslide to non-landslide points was constructed to establish a realistic baseline. On this data, the RBFN model, while theoretically powerful for capturing non-linear relationships, failed completely, achieving a Recall of 0.00 for the landslide class. The novelty of this research lies in the specific application of a GAN, trained for 15,000 epochs, to generate high-fidelity synthetic landslide data, thereby creating a perfectly balanced training set. After retraining on this augmented data and undergoing a systematic hyperparameter tuning process, the RBFN’s performance was dramatically transformed. The optimized model achieved an F1-Score of 0.9333 and a Recall of 0.8750, elevating its performance from total failure to a level competitive with the robust Random Forest benchmark. This work validates that the integrated GAN-RBFN approach is a highly effective methodology for overcoming the data imbalance problem in geospatial hazard modeling. By turning a previously unreliable classifier into a powerful predictive tool, this method has significant practical implications for developing more accurate landslide susceptibility maps, which are crucial for informed spatial planning and enhancing early warning systems.
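The pipeline the abstract describes, balance a 1:10 dataset with synthetic minority samples, then train an RBFN, can be sketched as follows. Everything here is illustrative: the jittered duplication in `oversample_minority` is a crude stand-in for the paper's GAN generator (a full GAN is omitted), and the RBFN uses one common formulation (data-sampled Gaussian centers with least-squares output weights), which is not necessarily the architecture the authors trained.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF hidden layer: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class RBFN:
    """Minimal RBF network: centers sampled from the training data,
    linear output weights fit by least squares, 0.5 decision threshold."""
    def __init__(self, n_centers=12, gamma=0.5, seed=0):
        self.n_centers, self.gamma = n_centers, gamma
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        idx = self.rng.choice(len(X), size=self.n_centers, replace=False)
        self.centers = X[idx]
        phi = rbf_features(X, self.centers, self.gamma)
        self.w, *_ = np.linalg.lstsq(phi, y.astype(float), rcond=None)
        return self

    def predict(self, X):
        out = rbf_features(X, self.centers, self.gamma) @ self.w
        return (out >= 0.5).astype(int)

def oversample_minority(X, y, seed=0):
    """Add jittered copies of the minority class until classes balance,
    a crude stand-in for the paper's GAN-generated synthetic samples."""
    rng = np.random.default_rng(seed)
    minority = int(y.sum() < len(y) - y.sum())  # label with fewer samples
    Xm = X[y == minority]
    need = len(X) - 2 * len(Xm)
    picks = rng.choice(len(Xm), size=need, replace=True)
    synth = Xm[picks] + rng.normal(0.0, 0.05, (need, X.shape[1]))
    return np.vstack([X, synth]), np.concatenate([y, np.full(need, minority)])

# Demo on a synthetic 1:10 dataset (180 non-landslide vs 20 landslide points).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.4, (180, 2)), rng.normal(4.0, 0.4, (20, 2))])
y = np.concatenate([np.zeros(180, dtype=int), np.ones(20, dtype=int)])
Xb, yb = oversample_minority(X, y)
model = RBFN().fit(Xb, yb)
print("class counts after balancing:", np.bincount(yb))
```

Without the balancing step, a least-squares fit on the 1:10 data can minimize error by largely ignoring the minority class, mirroring the Recall of 0.00 the abstract reports for the un-augmented RBFN.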