
Found 11 Documents

Learning management systems with emphasis on the Moodle at UniSA Ghosh, Anusua; Nafalski, Andrew; Nedic, Zorica; Wibawa, Aji Prasetya
Bulletin of Social Informatics Theory and Application Vol. 3 No. 1 (2019)
Publisher : Association for Scientific Computing Electrical and Engineering

DOI: 10.31763/businta.v3i1.160

Abstract

With recent advances in technology and the Internet, the concepts of teaching and learning have evolved significantly. Conventional face-to-face teaching is becoming a thing of the past, as knowledge is everywhere and accessible from anywhere, so the shift to e-learning is gaining momentum. Educational institutions, companies, individuals, and training organizations are embracing new technology and creating shared online platforms to facilitate learning, referred to as Learning Management Systems (LMS). An LMS is software that provides an online portal for seamless collaboration in teaching and learning, making both more productive and engaging. This paper reviews the top ten LMSs, both cloud-based and open source, with regard to their compatibility, usefulness, security, accessibility, scalability, stability/reliability, and general design, with emphasis on the recent development of Moodle and NetLab at the University of South Australia (UniSA). UniSA has adopted the open-source online learning platform Moodle to provide educators with a space for collaborative learning using optimized tools for creating activities. Moreover, the NetLab online remote laboratory developed at UniSA provides a platform for academic staff to teach and give demonstrations during lectures and for students to conduct practical experiments remotely on real laboratory equipment.
A decade evolution of virtual and remote laboratories Andini, Nurul Fajriah; Dewi, Popy Maulida; Marida, Tyas Agung Cahyaning; Wibawa, Aji Prasetya; Nafalski, Andrew
Bulletin of Social Informatics Theory and Application Vol. 7 No. 1 (2023)
Publisher : Association for Scientific Computing Electrical and Engineering

DOI: 10.31763/businta.v7i1.203

Abstract

The conventional experimental laboratory suffers from problems of time, expense, risk, distance, and space. A potential solution to these problems is the virtual and remote laboratory, which uses web or mobile applications over the Internet for virtual learning. This paper discusses the development of virtual and remote laboratories over the last decade. An in-depth literature review is performed to establish the state of virtual and remote laboratory development. The results of this review may guide the future development of virtual and remote laboratories.
Detecting emotions using a combination of bidirectional encoder representations from transformers embedding and bidirectional long short-term memory Wibawa, Aji Prasetya; Cahyani, Denis Eka; Prasetya, Didik Dwi; Gumilar, Langlang; Nafalski, Andrew
International Journal of Electrical and Computer Engineering (IJECE) Vol 13, No 6: December 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i6.pp7137-7146

Abstract

Emotion detection in text is one of the most difficult topics in natural language understanding (NLU), because human emotions are hard to infer without facial expressions. Since the structure of Indonesian differs from that of other languages, this study focuses on emotion detection in Indonesian text. Its nine experimental scenarios combine word embeddings (bidirectional encoder representations from transformers (BERT), Word2Vec, and GloVe) with emotion detection models (bidirectional long short-term memory (BiLSTM), LSTM, and convolutional neural network (CNN)). BERT-BiLSTM generates the highest accuracy, with values of 88.28%, 88.42%, and 89.20% on the Commuter Line, Transjakarta, and Commuter Line+Transjakarta data, respectively. In general, BiLSTM produces the highest accuracy, followed by LSTM and then CNN; among the word embeddings, BERT outperforms Word2Vec and GloVe. In addition, BERT-BiLSTM yields the highest precision, recall, and F1-measure in every data scenario. These results indicate that BERT-BiLSTM can enhance classification performance compared to previous studies that used only BERT or only BiLSTM for emotion detection in Indonesian texts.
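The per-scenario precision, recall, and F1-measure the abstract reports are presumably macro-averaged over the emotion labels; a minimal sketch of that computation in plain Python follows (the function name and the handling of absent labels are illustrative assumptions, not the paper's exact evaluation code):

```python
def macro_scores(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 over all labels seen in either list."""
    labels = sorted(set(y_true) | set(y_pred))
    ps, rs, fs = [], [], []
    for lbl in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p == lbl)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lbl and p == lbl)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p != lbl)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

Macro averaging weights each emotion class equally, which matters when class frequencies in the tweet data are imbalanced.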
Mean-Median Smoothing Backpropagation Neural Network to Forecast Unique Visitors Time Series of Electronic Journal Wibawa, Aji Prasetya; Utama, Agung Bella Putra; Lestari, Widya; Saputra, Irzan Tri; Izdihar, Zahra Nabila; Pujianto, Utomo; Haviluddin, Haviluddin; Nafalski, Andrew
Journal of Applied Data Sciences Vol 4, No 3: SEPTEMBER 2023
Publisher : Bright Publisher

DOI: 10.47738/jads.v4i3.97

Abstract

Sessions, or unique visitors, is the number of visitors from one IP address who access a journal portal for the first time within a certain period. A large number of unique daily visits to an electronic journal's pages indicates that the periodical is in high demand; the number of unique visitors is therefore an important indicator of an electronic journal's success and a measure of dissemination that supports journal accreditation. Numerous methods can be used for forecasting, one of which is the backpropagation neural network (BPNN). Data quality is crucial when building a good BPNN model, because the success of BPNN modeling depends heavily on the input data. One way to improve data quality is to smooth the data. In this study, time-series forecasting of unique visitors to electronic journals employed three models: BPNN, BPNN with mean smoothing, and BPNN with median smoothing. The smallest error was obtained by BPNN with mean smoothing (MSE 0.00129, RMSE 0.03518) at a learning rate of 0.4 on a 1-2-1 architecture, which can be used to forecast unique visitors of electronic journals.
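The mean and median smoothing applied before training could be sketched as a centered moving window in plain Python; the window size and the choice to leave the series edges unsmoothed are illustrative assumptions, not the paper's exact configuration:

```python
from statistics import mean, median

def smooth(series, window=3, method="mean"):
    """Centered moving-window smoothing of a numeric series.
    Edge values, where a full window does not fit, are kept as-is."""
    half = window // 2
    agg = mean if method == "mean" else median
    out = list(series)
    for i in range(half, len(series) - half):
        out[i] = agg(series[i - half:i + half + 1])
    return out
```

Median smoothing suppresses single-day spikes (e.g. a crawler hitting the portal) that mean smoothing only dilutes, which is why the two variants can give the BPNN noticeably different inputs.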
PSO based Hyperparameter tuning of CNN Multivariate Time-Series Analysis Putra Utama, Agung Bella; Wibawa, Aji Prasetya; Muladi, Muladi; Nafalski, Andrew
JOIN (Jurnal Online Informatika) Vol 7 No 2 (2022)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v7i2.858

Abstract

Convolutional Neural Network (CNN) is an effective Deep Learning (DL) algorithm for various image identification problems, and its use for time-series analysis is emerging. CNN learns filters, representations of repeated patterns in the series, and uses them to forecast future values. Network performance depends on hyperparameter settings. This study optimizes the CNN architecture through hyperparameter tuning with Particle Swarm Optimization (PSO), termed PSO-CNN. The proposed method was evaluated on multivariate time-series data from an electronic journal visitor dataset. CNN operates the same way on time series as on images; the difference lies in the input given to the model, which here is numeric sequences rather than pixels. The proposed method produced the lowest RMSE (1.386) with 178 neurons in the fully connected layer and 2 hidden layers. The experimental results show that PSO-CNN generates an architecture with better performance than an ordinary CNN.
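The PSO loop behind such tuning can be illustrated in plain Python; here it minimizes a toy objective rather than a CNN's validation RMSE, and the inertia and acceleration coefficients are common textbook defaults, not the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize objective over box bounds [(lo, hi), ...] with particle swarm
    optimization: each particle tracks its personal best and the swarm's best."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For hyperparameter tuning, each particle position would encode candidate settings (e.g. neuron and layer counts, rounded to integers) and the objective would train a CNN and return its validation RMSE.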
Modelling Naïve Bayes for Tembang Macapat Classification Wibawa, Aji Prasetya; Ningtyas, Yana; Atmaja, Nimas Hadi; Zaeni, Ilham Ari Elbaith; Utama, Agung Bella Putra; Dwiyanto, Felix Andika; Nafalski, Andrew
Harmonia: Journal of Arts Research and Education Vol 22, No 1 (2022): June 2022
Publisher : Department of Drama, Dance and Music, FBS, Universitas Negeri Semarang

DOI: 10.15294/harmonia.v22i1.34776

Abstract

Tembang macapat can be classified using the cultural concepts of guru lagu, guru wilangan, and guru gatra, yet people may find it difficult to recognize certain songs from the established rules. This study builds classification models of tembang macapat using the simple yet powerful Naïve Bayes classifier, which can generate high accuracy from sparse data. The study modifies the concept of guru lagu by retrieving the last vowel of each line, while the guru wilangan guideline is amended by counting all characters (Model 2) rather than syllables (Model 1). The data source is Serat Wulangreh, covering 11 types of tembang macapat: maskumambang, mijil, sinom, durma, asmaradana, kinanthi, pucung, gambuh, pangkur, dandhanggula, and megatruh. K-fold cross-validation is used to evaluate performance on 88 data samples. The results show that the proposed Model 1 performs better than Model 2 in macapat classification. This promising method opens the potential of using a data mining classification engine as a medium for cultural teaching and preservation.
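The feature extraction the abstract describes (last vowel of each line for guru lagu, a per-line character count as Model 2's amended guru wilangan) might be sketched as below; the function name and the decision to count only alphabetic characters are illustrative assumptions:

```python
def macapat_features(stanza):
    """Per line of a stanza, return (last vowel, letter count):
    the last vowel stands in for guru lagu, the letter count for
    the character-based variant of guru wilangan (Model 2)."""
    vowels = "aeiou"
    feats = []
    for line in stanza.lower().splitlines():
        line = line.strip()
        last_vowel = next((ch for ch in reversed(line) if ch in vowels), None)
        n_chars = sum(1 for ch in line if ch.isalpha())
        feats.append((last_vowel, n_chars))
    return feats
```

These per-line tuples, plus the line count (guru gatra), would then form the attribute vector handed to the Naïve Bayes classifier.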
Congestion Predictive Modelling on Network Dataset Using Ensemble Deep Learning Purnawansyah, Purnawansyah; Wibawa, Aji Prasetya; Widiyaningtyas, Triyanna; Haviluddin, Haviluddin; Raja, Roesman Ridwan; Darwis, Herdianti; Nafalski, Andrew
Journal of Applied Data Sciences Vol 5, No 4: DECEMBER 2024
Publisher : Bright Publisher

DOI: 10.47738/jads.v5i4.333

Abstract

Network congestion arises from factors such as bandwidth misallocation and increased node density, leading to reduced packet delivery ratios and energy efficiency, increased packet loss and delay, and diminished Quality of Service (QoS) and Quality of Experience. This study highlights the potential of deep learning and ensemble learning for network congestion analysis, which has been less explored than packet-loss-based, delay-based, hybrid, and machine learning approaches. That gap offers opportunities for advancement through parameter tuning, data labeling, architecture simulation, and activation function experiments, despite the scarcity of labeled data caused by the high cost, time, computational resources, and human effort that labeling requires. In this paper, we investigate network congestion prediction using deep learning models evaluated individually and as ensembles combined by majority voting, on data that we recorded and clustered using K-Means. We apply BPNN, CNN, LSTM, and hybrid LSTM-CNN architectures to 12 scenarios formed from combinations of level datasets, normalization techniques, and numbers of recommended clusters. The results reveal that ensemble methods, particularly those integrating LSTM and CNN models (LSTM-CNN), consistently outperform individual deep learning models, demonstrating higher accuracy and stability across diverse datasets. We further recommend using the QoS level dataset with 3 clusters, which gave the most consistent evaluation results across configurations and normalization strategies. The ensemble evaluation shows consistently high performance across metrics, with accuracy, Matthews Correlation Coefficient, and Cohen's Kappa values nearing 100%, indicating excellent predictive capability and agreement. Hamming Loss remains minimal, highlighting low misclassification rates. Notably, this study advances predictive modeling in network management, offering strategies to enhance network efficiency and reliability amid escalating traffic demands for more sustainable network operations.
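The majority-voting step that combines the individual models' outputs can be sketched in a few lines; the tie-breaking rule (first label encountered wins) is an illustrative assumption, not necessarily the paper's:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model prediction lists into one ensemble prediction.
    predictions: list of equal-length label lists, one per model.
    Returns the most common label for each sample position."""
    return [Counter(sample).most_common(1)[0][0] for sample in zip(*predictions)]
```

With an odd number of base models (e.g. BPNN, CNN, LSTM), ties on a binary congestion label cannot occur, which is one practical reason to ensemble an odd count.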
Comparative Performance of Transformer Models for Cultural Heritage in NLP Tasks Suryanto, Tri Lathif Mardi; Wibawa, Aji Prasetya; Hariyono, Hariyono; Nafalski, Andrew
Advance Sustainable Science Engineering and Technology Vol. 7 No. 1 (2025): November-January
Publisher : Science and Technology Research Centre Universitas PGRI Semarang

DOI: 10.26877/asset.v7i1.1211

Abstract

AI and machine learning are crucial in advancing technology, especially for processing large, complex datasets. The transformer model, a primary approach in natural language processing (NLP), enables applications such as translation, text summarization, and question-answering (QA) systems. This study compares two popular transformer models, FlanT5 and mT5, which are widely used yet often struggle to capture the specific context of a reference text. Using a unique Goddess Durga QA dataset containing specialized cultural knowledge about Indonesia, the research tests how effectively each model handles culturally specific QA tasks. The study involved data preparation, initial model training, ROUGE metric evaluation (ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum), and result analysis. Findings show that FlanT5 outperforms mT5 on multiple metrics, making it better at preserving cultural context. These results matter for NLP applications that rely on cultural insight, such as cultural-preservation QA systems and context-based educational platforms.
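ROUGE-1, the simplest of the metrics listed, scores unigram overlap between a model answer and the reference; a stripped-down sketch follows (it omits the stemming and tokenization details of the official ROUGE packages, so scores would differ slightly from a library implementation):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall,
    computed on clipped token-count overlap."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped counts: min per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 replaces unigrams with bigrams, and ROUGE-L scores the longest common subsequence instead of n-gram overlap.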
Journal Classification Using Cosine Similarity Method on Title and Abstract with Frequency-Based Stopword Removal  Nurfadila, Piska Dwi; Wibawa, Aji Prasetya; Zaeni, Ilham Ari Elbaith; Nafalski, Andrew
International Journal of Artificial Intelligence Research Vol 3, No 2 (2019): December 2019
Publisher : Universitas Dharma Wacana

DOI: 10.29099/ijair.v3i2.99

Abstract

Classification of economic journal articles has been performed using the Vector Space Model (VSM) approach and the Cosine Similarity method. The results of previous studies were considered suboptimal because Stopword Removal relied on a dictionary of basic words (tuning); the omitted words were therefore limited to basic words only. This study shows the improved accuracy of the Cosine Similarity method when using frequency-based Stopword Removal, on the assumption that a term with a certain frequency is an insignificant word that yields less relevant results. Performance of the Cosine Similarity method augmented with frequency-based Stopword Removal was tested using K-fold Cross Validation. The method achieved an accuracy of 64.28%, a precision of 64.76%, and a recall of 65.26%. The execution time after pre-processing was 0.05033 seconds.
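The pipeline of frequency-based stopword removal followed by cosine similarity on term-frequency vectors might be sketched as below; the document-frequency threshold (`max_df`) is an illustrative assumption standing in for whatever frequency cutoff the paper used:

```python
import math
from collections import Counter

def remove_frequent_terms(docs_tokens, max_df=0.8):
    """Drop terms that appear in more than max_df of the documents;
    such high-frequency terms are treated as stopwords."""
    n = len(docs_tokens)
    df = Counter(t for doc in docs_tokens for t in set(doc))
    stop = {t for t, c in df.items() if c / n > max_df}
    return [[t for t in doc if t not in stop] for doc in docs_tokens]

def cosine(a_tokens, b_tokens):
    """Cosine similarity of two documents as term-frequency vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Classification then assigns a new article's title-and-abstract vector to the category whose documents it is most similar to, so removing ubiquitous terms sharpens the distinctions the cosine score can express.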
Journal Unique Visitors Forecasting Based on Multivariate Attributes Using CNN Dewandra, Aderyan Reynaldi Fahrezza; Wibawa, Aji Prasetya; Pujianto, Utomo; Utama, Agung Bella Putra; Nafalski, Andrew
International Journal of Artificial Intelligence Research Vol 6, No 2 (2022): December 2022
Publisher : Universitas Dharma Wacana

DOI: 10.29099/ijair.v6i1.274

Abstract

Forecasting is needed for various problems, one of which is forecasting an electronic journal's unique visitors. Although forecasting cannot produce perfectly accurate predictions, using the proper method can reduce forecasting errors. This research uses a Deep Learning method commonly applied to two-dimensional data, the convolutional neural network (CNN); the one-dimensional CNN provides 1D feature extraction suitable for forecasting 1D time-series problems. The study aims to determine the best architecture and the effect of increasing the number of hidden layers and neurons on CNN forecasting results. Across various architectural scenarios, CNN performance was measured using the root mean squared error (RMSE). The best result was an RMSE of 2.314, obtained with an architecture of 2 hidden layers and 64 neurons in Model 1. A significant effect of increasing the number of hidden layers on the RMSE value was found only in Model 1 with 64 or 256 neurons.
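Before a 1D CNN can forecast, the visitor series must be framed as supervised (input window, next value) pairs; a minimal sketch follows, with the lag count an illustrative assumption rather than the paper's chosen window:

```python
def make_windows(series, n_lags):
    """Frame a univariate series as supervised learning pairs:
    each input is a window of n_lags consecutive values and the
    target is the value immediately after the window."""
    X, y = [], []
    for i in range(len(series) - n_lags):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags])
    return X, y
```

The 1D convolution then slides its learned filters along each window, so the filter width, window length, and hidden-layer sizes together define the architectures compared in the study.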