Articles

Forecasting Kunjungan Wisatawan Dengan Long Short Term Memory (LSTM) [Forecasting Tourist Visits with Long Short-Term Memory (LSTM)] Sugiartawan, Putu; Jiwa Permana, Agus Aan; Prakoso, Paholo Iman
Jurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI) Vol 1 No 1 (2018): September
Publisher : INFOTEKS (Information Technology, Computer and Sciences)

Full PDF (965.51 KB) | DOI: 10.33173/jsikti.5

Abstract

Bali is one of the most popular tourist destinations in Indonesia: around 4 million foreign tourists visited Bali during 2015 (Dispar Bali). These visits are spread across the island's various regions and attractions. Although the overall volume of visits is large, they are not evenly distributed, and arrivals fluctuate significantly. Forecasting techniques can uncover the pattern of tourist visits: by modeling patterns in past data, the next values in the series can be predicted. This study uses a recurrent neural network (RNN) to predict the level of tourist visits; the specific RNN variant employed is Long Short-Term Memory (LSTM), which performs better than a simple RNN. The data used are records of tourist visits to one of the attractions in Bali. The LSTM model achieved an error value of 15.962, measured with the MAPE technique. The LSTM architecture used consists of 16 neurons in the hidden layer, a learning rate of 0.01, a window size of 3, and a single hidden layer.
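As a minimal illustration of the setup the abstract describes (sliding windows of size 3 over a univariate visit series, evaluated with MAPE), the following pure-Python sketch shows the data preparation and error metric. The visit counts and function names here are hypothetical, not the paper's data, and the LSTM itself is only indicated, not implemented.

```python
def make_windows(series, window=3):
    """Slide a fixed-size window over a univariate series to build
    (inputs, target) pairs for windowed time-series forecasting."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)

# Hypothetical monthly visit counts (not the paper's dataset).
visits = [120, 150, 130, 170, 160, 180, 175]
pairs = make_windows(visits, window=3)
# First pair: inputs [120, 150, 130] predict the next value, 170.
# An LSTM would be trained on these pairs and scored with mape().
```

Each window of three past values becomes one training input whose target is the following value, which is what a window size of 3 means in the architecture above.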
Comparison of CNN and CNN-LSTM Performance in Facial Expression Classification Based on FER2013 Dataset Savitri, Putu Ananda Adi; Permana, Agus Aan Jiwa; Puspa Dewi, Ni Putu Novita
Asian Journal of Science, Technology, Engineering, and Art Vol 4 No 1 (2026): Asian Journal of Science, Technology, Engineering, and Art
Publisher : Darul Yasin Al Sys

DOI: 10.58578/ajstea.v4i1.8252

Abstract

Although facial expression recognition (FER) using deep learning has received increasing attention in prior studies, research specifically addressing the comparative effectiveness of sequential modeling on static image data remains limited. This study aims to evaluate and compare the performance of a pure Convolutional Neural Network (CNN) model and a hybrid CNN–Long Short-Term Memory (CNN-LSTM) model in classifying seven basic facial expressions using the static FER2013 dataset. A quantitative experimental approach with a comparative study design was employed, utilizing the publicly available FER2013 dataset and two custom deep learning architectures. Data were obtained from FER2013 and model performance was evaluated using accuracy, precision, recall, F1-score, and AUC-ROC metrics. The findings indicate that the pure CNN model significantly outperformed the CNN-LSTM model, achieving a testing accuracy of 63.25% compared to 46.82% for the hybrid model; the CNN provided strong discrimination for visually distinct classes but continued to struggle with visually similar expressions. These results contribute to the theoretical development of deep learning architecture selection and expand understanding of the application of sequence models to static data. The study concludes that data characteristics (static versus temporal) play a crucial role in determining model effectiveness, and that for static datasets such as FER2013, a pure CNN constitutes the more appropriate choice. The implications of this research include theoretical contributions to the growing literature on deep learning-based FER and practical recommendations for developers to prioritize CNN architectures for non-temporal image classification tasks, while also highlighting opportunities for future research on transfer learning and attention mechanisms to better capture subtle expression nuances.
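The abstract compares the two models with accuracy and per-class precision, recall, and F1. As a small illustrative sketch of how such multi-class metrics are computed (with made-up labels standing in for expression classes, not the FER2013 results), macro-averaged versions can be written in plain Python:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over observed classes."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy 3-class example; FER2013 would use 7 expression classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
```

Macro averaging weights each class equally, which matters on FER2013 because its expression classes are imbalanced.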
IMPLEMENTASI MODEL LLAMA VISION DENGAN IN-CONTEXT LEARNING UNTUK PEMBUATAN CAPTION OTOMATIS [Implementation of the LLaMA Vision Model with In-Context Learning for Automatic Caption Generation] Vandyta, Joe Aqilla; Seputra, Ketut Agus; Permana, Agus Aan Jiwa
JUTIM (Jurnal Teknik Informatika Musirawas) Vol 11 No 1 (2026): JUTIM (Jurnal Teknik Informatika Musirawas) Maret
Publisher : LPPM UNIVERSITAS BINA INSAN

DOI: 10.32767/jutim.v11i1.2860

Abstract

This study aims to design and build an Android application named Descripix that uses the LLaMA Vision model to generate image captions automatically. The study is motivated by the many content creators, photographers, and social-media users who experience creative block when writing captions, which disrupts the consistency of their content publishing. The development method used is Waterfall, with Black Box testing. The system integrates image metadata such as author, capture date, and location as additional input to the captioning process. Applying In-Context Learning (ICL) in prompting produces captions that are more consistent, more contextual, and better aligned with the expected linguistic patterns. A comparison of captions generated with and without ICL shows that applying ICL yields output that is more accurate, consistent, and contextual by eliminating irrelevant elements. The application has two user modes: guests can upload images and generate captions, while authenticated users can save, edit, and manage their caption history. Black Box testing of 12 scenarios showed a 100% success rate for all main functions, validating that every main feature works as expected. The application can therefore be an effective solution for helping users stay active on social media during creative block and for increasing productivity in content creation.
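As a hedged sketch of how an In-Context Learning prompt with image metadata might be assembled, the snippet below builds a few-shot prompt from worked metadata/caption examples before asking for a new caption. The `build_icl_prompt` helper, the example captions, and the metadata values are all illustrative assumptions, not the Descripix implementation, which would also pass the image itself to the LLaMA Vision model.

```python
def build_icl_prompt(examples, metadata, instruction):
    """Assemble a few-shot (In-Context Learning) prompt: the task
    instruction, then worked examples, then the new image's metadata."""
    parts = [instruction, ""]
    for ex in examples:
        parts.append(f"Metadata: {ex['metadata']}")
        parts.append(f"Caption: {ex['caption']}")
        parts.append("")
    parts.append(f"Metadata: {metadata}")
    parts.append("Caption:")  # the model completes from here
    return "\n".join(parts)

# Illustrative few-shot example (invented for this sketch).
examples = [
    {"metadata": "author=Ayu, date=2024-06-01, location=Kuta Beach",
     "caption": "Golden hour at Kuta Beach, one frame at a time."},
]
prompt = build_icl_prompt(
    examples,
    "author=Joe, date=2024-07-10, location=Ubud",
    "Write a short, engaging social-media caption for the image.",
)
```

The in-context examples anchor the model to the desired caption style and length, which is how ICL steers the output toward consistent linguistic patterns without fine-tuning.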
Co-Authors: A. A. Gede Yudhi Paramartha Agus Halid, Agus Agus Seputra I Ketut Alkautsar, Yoga Rizky Arditya, I Putu Dion Artha, I Kadek Bayu Danu Artha, I Komang Windra Baskara Nugraha, I Gusti Bagus Darmayasa, Ngakan Nyoman DIATMIKA, KETUT TUTUR Elly Herliyani Erma Susanti Gede Aditra Pradnyana Gede Arya Ardivan Pratama Saputra Gede Nanda Ageng Nugraha Gede Saindra Santyadiputra Gede Wahyu Purnama Gunawan, I Gede Made Deny Surya I Gd Ny Werdyana Guna Mertha I Gusti Agung Putu Bagus Satria Wicaksana I Gusti Ayu Purnamawati I Gusti Ngurah Wikranta Arsa, I Gusti Ngurah I Kadek Nicko Ananda I Kadek Suranata I Ketut Gading I Ketut Purnamawan I Made Ardwi Pradnyana I Made Pageh I Made Putrama I Made Sukarsa I Nyoman Laba Jayanta I Nyoman Saputra Wahyu Wijaya Ida Bagus Sebali Mahesa Yogi Ifdil Ifdil Ika Arfiani Kadek Wirahyuni Komang Setemen Kusuma, I Komang Arya Adi Kusumadewi, Ni Putu Ari Made Sudarma Mahagangga, Komang Adi Satya Marta Dinata, Kadek Prima Giant Naitboho, Okthen Orlanda Naswin, Ahmad Ni Ketut Kertiasih Ni Luh Ita Purnami Ni Putu Dwi Sucita Dartini Ni Putu Novita Puspa Dewi Ni Wayan Marti Octavia, I Gusti Ayu Adiani pande sindu Pande, Satria Imawan Adi Putra Pande Pracasitaram, Gede Made Surya Bumi Pracasitaram, I Gede Made Surya Bumi Prakoso, Paholo Iman Pramudya, Dewa Gede Bhaskara Pranadi Sudhana, I G P Fajar Puridiasta, I Gede Deindra Dwija Puspa Dewi, Ni Putu Novita Putrama, Made Putu Ony Andewi PUTU SUGIARTAWAN Rezania Agramanisti Azdy, Rezania Agramanisti Rukmi Sari Hartati Saputra Wahyu Wijaya Savitri, Putu Ananda Adi Siami, M. Ikbal Sindu, I Gede Partha Sumiyatun Sunia Raharja, I Made Swari, Gusti Putu Ayu Mas Meita Pradnya Tarigan, Thomas Edyson Vandyta, Joe Aqilla Widodo Prijodiprodjo Wijaya, I Gede Saputra Wahyu Winata, I Gede Arya Wirayani, Made Padmi Witjaksana, Putu Gede Dimas Yoga Rizky Alkautsar Yoga Sucipta, Gede Yudhantara, Kadek Prasta