Articles

Found 13 Documents
Implementasi Metode Min-Max Stock Pada Sistem Informasi Persediaan Berbasis Android Purwita Sari; Ahmad Fali Oklilas; Iman Saladin B. A.
Jurnal Nasional Teknologi dan Sistem Informasi Vol 8, No 1 (2022): April 2022
Publisher : Jurusan Sistem Informasi, Fakultas Teknologi Informasi, Universitas Andalas

DOI: 10.25077/TEKNOSI.v8i1.2022.17-24

Abstract

Inventory recording in government agencies currently uses a stand-alone inventory application, but this application only facilitates the distribution of goods and control through stock cards; it does not yet meet the need for an effective and efficient procurement control process for inventory items. Control is essential in the distribution process to keep goods available at all times, so a system is needed that can monitor stock conditions and determine procurement needs precisely and accurately. For this reason, an Inventory Control Information System needs to be developed on an Android platform using the FAST (Framework for the Application of Systems Thinking) method, so that government agencies can minimize both excess stock, which causes waste, and stock shortages, which can hamper operational activities. The system also applies the Min-Max Stock method, so that the flow of incoming and outgoing goods becomes more orderly and ordering is better planned. The result of this research is an inventory information system that helps speed up the processing of goods data, reduces errors from duplicated data, minimizes delays in submitting monthly reports, and accelerates the distribution of required goods within government agencies.
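As an illustration of the Min-Max Stock idea the abstract refers to, the sketch below computes minimum and maximum stock levels and a reorder quantity; the safety-stock formula, function name, and sample figures are assumptions for demonstration, not values taken from the paper.

# Minimal sketch of a Min-Max Stock calculation (illustrative values only).

def min_max_stock(avg_daily_demand, lead_time_days, max_daily_demand, current_stock):
    # Safety stock: buffer for the gap between peak and average demand over the lead time.
    safety_stock = (max_daily_demand - avg_daily_demand) * lead_time_days
    # Minimum stock: expected demand during the lead time plus the safety buffer.
    min_stock = avg_daily_demand * lead_time_days + safety_stock
    # Maximum stock: a common rule of thumb, twice the lead-time demand plus safety stock.
    max_stock = 2 * (avg_daily_demand * lead_time_days) + safety_stock
    # Order quantity: replenish up to the maximum level.
    order_qty = max(0, max_stock - current_stock)
    return min_stock, max_stock, order_qty

# Hypothetical item: 10 units/day on average, 25 at peak, 3-day lead time, 20 units on hand.
print(min_max_stock(avg_daily_demand=10, lead_time_days=3, max_daily_demand=25, current_stock=20))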
Klasifikasi Teks Multilabel pada Artikel Berita Menggunakan Long Short-Term Memory dengan Word2Vec Winda Kurnia Sari; Dian Palupi Rini; Reza Firsandaya Malik; Iman Saladin B. Azhar
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 2 (2020): April 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v4i2.1655

Abstract

Multilabel text classification is the task of categorizing text into one or more categories. As in other machine learning tasks, multilabel classification performance is limited by small amounts of labeled data, which makes it difficult to capture semantic relationships. This work requires a multilabel text classification technique that can assign four labels to news articles. Deep learning is proposed to address these problems. Deep learning methods used for text classification include Convolutional Neural Networks, Autoencoders, Deep Belief Networks, and Recurrent Neural Networks (RNN). RNN is one of the most popular architectures in natural language processing (NLP) because its recurrent structure is well suited to processing variable-length text. The deep learning method proposed in this study is an RNN with the Long Short-Term Memory (LSTM) architecture. The models are trained through trial-and-error experiments using LSTM and 300-dimensional word embedding features from Word2Vec. By tuning the parameters and comparing eight proposed LSTM models on a large-scale dataset, we show that LSTM with Word2Vec features can achieve good performance in text classification. The results show that text classification using LSTM with Word2Vec reaches its highest accuracy, 95.38%, in the fifth model, with average precision, recall, and F1-score of 95%. The seventh and eighth models also show learning curves that are close to a good fit.
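A minimal Keras sketch of the kind of model the abstract describes: an LSTM over 300-dimensional word embeddings with a sigmoid output for four independent labels. The layer sizes, vocabulary size, and sequence length are placeholders, and the embedding here is randomly initialized rather than loaded from Word2Vec as in the paper.

# Minimal sketch: multilabel text classification with an LSTM over 300-d embeddings (Keras).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMBED_DIM, NUM_LABELS = 20000, 200, 300, 4  # placeholder sizes

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    # In the paper the embedding weights come from Word2Vec; here they are random.
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    layers.LSTM(128),
    # Sigmoid + binary cross-entropy: each label is predicted independently (multilabel).
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data just to show the expected input/output shapes.
x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32, NUM_LABELS))
model.fit(x, y, epochs=1, verbose=0)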
Penerapan Data Mining Dan Tekonologi Machine Learning Pada Klasifikasi Penyakit Jantung Iman Saladin B. Azhar; Winda Kurnia Sari
Jurnal Sistem Informasi Vol 14, No 1 (2022)
Publisher : Universitas Sriwijaya

DOI: 10.36706/jsi.v14i1.16140

Abstract

In healthcare today, data analysis can be used to detect and diagnose disease. With advances in technology, the role of data mining, and the need for such studies, these problems can be addressed. We therefore classify heart disease using four machine learning techniques, Logistic Regression, K-Nearest Neighbors, Random Forest, and Tuned K-Nearest Neighbors, implemented in the Python programming language. The dataset used in this study has 13 features, 1 label variable, and 303 samples, of which 138 suffer from cardiovascular disease and 165 are healthy. The metrics used to compare the performance of the data mining techniques are accuracy, precision, recall, and F-measure. The results show that Logistic Regression is the best-performing technique, with the highest accuracy of 88.52%.
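A minimal scikit-learn sketch of the comparison described above, using synthetic data in place of the 303-row heart-disease dataset; the sample and feature counts mirror the abstract, while the train/test split and the grid-searched "tuned" KNN are assumptions for illustration.

# Minimal sketch: comparing classifiers on a heart-disease-style dataset (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: 303 samples, 13 features, binary label (as in the abstract).
X, y = make_classification(n_samples=303, n_features=13, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=42),
    # One plausible reading of the "tuned" KNN: a grid search over the number of neighbors.
    "Tuned KNN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": range(1, 21)}, cv=5),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name, accuracy_score(y_test, pred), precision_score(y_test, pred),
          recall_score(y_test, pred), f1_score(y_test, pred))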
Perancangan Data Warehouse untuk Mendukung Sistem Pengelolaan Dokumen Digital dan Tugas Akhir Mahasiswa di Perpustakaan Universitas Sriwijaya Ali Bardadi; Iman Saladin B. Azhar; Muhammad Hidayat; Nurul Afifah
Jurnal Sistem Informasi Vol 14, No 1 (2022)
Publisher : Universitas Sriwijaya

DOI: 10.36706/jsi.v14i1.17191

Abstract

Universitas Sriwijaya's Library currently manages digital documents, but the information it presents is still not comprehensive. A large data store is therefore needed that can present this information quickly, and building a data warehouse allows the library to deliver richer information. The design of the data warehouse focuses on presenting the final project documents of Universitas Sriwijaya. The design method used is Kimball's Nine-Step Methodology, which can present data in accordance with the information needs for digital document and final project data. Based on a star schema data model and data analysis using OLAP, the data warehouse is able to integrate data, making it easier to access various kinds of information quickly and precisely. The implementation of the data warehouse is expected to be a solution for monitoring academic activity around the digital documents and final project documents of Universitas Sriwijaya.
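To illustrate the star-schema and OLAP idea in the abstract, the pandas sketch below joins a small fact table of final-project documents to two dimension tables and rolls the counts up by faculty and year; all table and column names are hypothetical, not the library's actual warehouse schema.

# Minimal sketch: star-schema style fact/dimension join and an OLAP-like roll-up (pandas).
import pandas as pd

# Hypothetical dimension tables.
dim_faculty = pd.DataFrame({"faculty_id": [1, 2], "faculty": ["Computer Science", "Engineering"]})
dim_time = pd.DataFrame({"time_id": [1, 2], "year": [2021, 2022]})

# Hypothetical fact table: one row per uploaded final-project document.
fact_documents = pd.DataFrame({
    "faculty_id": [1, 1, 2, 2, 1],
    "time_id":    [1, 2, 1, 2, 2],
    "downloads":  [10, 4, 7, 3, 9],
})

# Join facts to dimensions, then aggregate: a roll-up by faculty and year.
cube = (fact_documents
        .merge(dim_faculty, on="faculty_id")
        .merge(dim_time, on="time_id")
        .pivot_table(index="faculty", columns="year",
                     values="downloads", aggfunc=["count", "sum"]))
print(cube)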
Simulasi RFID dari Supply Chain Management Menggunakan Blockchain Ahmad Fali Oklilas; Arif Tumpal Leonardo Sianturi; Huda Ubaya; Rossi Passarella; Iman Saladin B. Azhar
JUPITER (Jurnal Penelitian Ilmu dan Teknologi Komputer) Vol 15 No 1d (2023): Jupiter Edisi April 2023
Publisher : Teknik Komputer Politeknik Negeri Sriwijaya

DOI: 10.5281./6502/15.jupiter.2023.04

Abstract

Blockchain is a collection of blocks arranged in sequential order that can hold data on past transactions. The blockchain has various qualities and benefits, including the fact that it is decentralized and immutable. Because of these advantages, blockchain is widely employed in different sectors; one example is supply chain management. In this study, supply chain management is simulated with two RFID scenarios, in which RFID antennas act as supply chain management nodes arranged to establish a flow of goods-delivery trips, and products then travel through the chain with RFID tags as product IDs. This study produces a simulation tool that can record supply chain management data on the blockchain, providing transparency, traceability, and data security.

Keywords: Blockchain, Supply Chain Management, RFID, Data Security, Transparency, Traceability, Immutable.
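A minimal sketch of the core idea under a simplified model: each RFID read event becomes a block whose hash links to the previous block, which is what gives the supply-chain log its immutability. The class of block, field names, and node names are illustrative assumptions, not the simulation tool's actual code.

# Minimal sketch: chaining RFID read events into hash-linked blocks (illustrative only).
import hashlib
import json
import time

def make_block(prev_hash, rfid_tag, node):
    # Record that an RFID tag was read at a supply-chain node, linked to the previous block.
    payload = {"prev_hash": prev_hash, "tag": rfid_tag, "node": node, "ts": time.time()}
    payload["hash"] = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return payload

def verify(chain):
    # Recompute each hash and check the links; any tampering breaks the chain.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A tag travelling through three hypothetical nodes (supplier -> warehouse -> retailer).
chain = [make_block("0" * 64, "TAG-001", "supplier")]
for node in ("warehouse", "retailer"):
    chain.append(make_block(chain[-1]["hash"], "TAG-001", node))
print(verify(chain))  # True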
Perbandingan Kinerja Neural Network dengan Metode Klasifikasi Tradisional dalam Mendiagnosis Penyakit Jantung: Sebuah Studi Komparatif Winda Kurnia Sari; Iman Saladin B Azhar
Jurnal Sistem Informasi Vol 15, No 1 (2023)
Publisher : Universitas Sriwijaya

DOI: 10.36706/jsi.v15i1.20875

Abstract

In medicine, heart disease is one of the leading causes of death. A system is therefore needed that can assist in the detection and diagnosis of heart disease. In this study, we use a neural network, trained and tested on collected data, to help detect heart disease. The data consist of various clinical features and risk factors collected from patients with heart disease. Results from earlier work that diagnoses heart disease with traditional classification methods show the following accuracies: Logistic Regression 88.52%, K-Nearest Neighbors 78.69%, Random Forest Classifier 86.89%, and Tuned K-Nearest Neighbors 85.25%. In contrast, the neural network model developed here classifies patients by their heart condition with an accuracy of 91%. The training process uses the RMSprop optimization algorithm, with cross-validation and parameter tuning performed to reach the best results. The model processes inputs quickly and produces accurate classifications. Neural networks can thus support the early diagnosis of heart disease by medical staff, but improving accuracy and reliability requires further research with larger datasets and more diverse clinical features. With further optimization of this model, the handling of heart disease is expected to become more effective and efficient.
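A minimal Keras sketch of the kind of network the abstract describes, trained with the RMSprop optimizer it mentions; the layer sizes, split, and synthetic stand-in data are assumptions for illustration, not the paper's tuned configuration.

# Minimal sketch: dense neural network for heart-disease classification with RMSprop (Keras).
import numpy as np
from tensorflow.keras import layers, models

# Synthetic stand-in for 13 clinical features and a binary label (303 samples).
X = np.random.rand(303, 13).astype("float32")
y = np.random.randint(0, 2, size=(303, 1))

model = models.Sequential([
    layers.Input(shape=(13,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of heart disease
])
# RMSprop optimizer as mentioned in the abstract; other hyperparameters are placeholders.
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=5, verbose=0)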
Deteksi Objek Serupa Menggunakan You Only Look Once (YOLO3.0) Sri Desy Siswanti; Kharisma Kharisma; Ahmad Fali Oklilas; Huda Ubaya; Iman Saladin; Ghufron Mubaroq
Jurnal Sistem Informasi Vol 15, No 2 (2023)
Publisher : Universitas Sriwijaya

DOI: 10.36706/jsi.v15i2.21694

Abstract

Object detection combined with AI systems is now widely used to detect objects in images. It has been applied in everyday life, for example in defense, city surveillance systems, and vehicle features that help minimize accidents. This paper focuses on object detection using YOLOv3. The objects detected are trucks and buses, both four-wheeled, roughly rectangular vehicles that are commonly found on the road. The two vehicles have similar shapes and are sometimes misidentified as each other. The system starts by building a dataset of truck and bus images, split into training and testing data. Feature extraction then uses Darknet-53, and detection uses a Feature Pyramid Network (FPN); when an object is recognized, it is given a bounding box. The aim of this process is to measure detection accuracy on objects with similar shapes and to find the factors that influence that accuracy. The results show that object detection with YOLOv3 can improve accuracy on similarly shaped objects, although some weaknesses remain. The main factor influencing accuracy is the threshold value, which strongly affects the ability to distinguish one object's shape from another's, especially when the objects look alike.
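A minimal OpenCV sketch of running a pretrained YOLOv3 detector, in the spirit of the pipeline described above (Darknet backbone, detection outputs, bounding boxes for recognized objects). The config, weights, and image paths are placeholders, and the confidence threshold shown is the kind of parameter the abstract identifies as driving accuracy on similar-looking objects.

# Minimal sketch: YOLOv3 inference with OpenCV's DNN module (file paths are placeholders).
import cv2
import numpy as np

CONF_THRESHOLD = 0.5  # the threshold the abstract identifies as critical for similar shapes

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # hypothetical local files
image = cv2.imread("road_scene.jpg")                              # hypothetical test image
h, w = image.shape[:2]

blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for output in outputs:
    for detection in output:            # [cx, cy, bw, bh, objectness, class scores...]
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > CONF_THRESHOLD:
            cx, cy, bw, bh = detection[:4] * np.array([w, h, w, h])
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(image, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)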
Mapping the Distribution of Covid-19 Information using a Web-based Information System Ali Ibrahim; Ahmad Fali Oklilas; Iman Saladin B. Azhar; Yadi Utama; Ahmad Hafizh Zahran
Sistemasi: Jurnal Sistem Informasi Vol 13, No 2 (2024): Sistemasi: Jurnal Sistem Informasi
Publisher : Program Studi Sistem Informasi Fakultas Teknik dan Ilmu Komputer

DOI: 10.32520/stmsi.v13i2.3773

Abstract

Based on information from several official government websites and national television media, at the beginning of November 2021 there was a decrease in the spread of the COVID-19 virus. At the end of November 2022, a new variant of SARS-CoV-2, the Omicron variant, was discovered, resulting in additional cases. The researchers therefore set out to provide information about the spread of the virus with a mapping system down to the RT (neighborhood) level, so that the public gets detailed, real-time information about each area based on the mapping. The urgency of this research is that its results help the public take better care of themselves and build self-awareness regarding health protocols, with access to detailed information and maps of the virus's spread. The research adapts the design science research methodology proposed by Hevner. The result is an information system for mapping the distribution of COVID-19 cases and their danger level, with five scoring categories: class I (range 66–80, very high danger), class II (51–65, high), class III (36–50, medium), class IV (36–50, low), and class V (5–20, very low).
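To make the scoring categories concrete, the small sketch below maps a computed danger score to the classes listed above. The ranges are taken directly from the abstract (classes III and IV share the same printed interval, so a score in that band matches both), and the function and constant names are illustrative.

# Minimal sketch: mapping a danger score to the classes listed in the abstract.
CLASS_RANGES = {
    "Class I (very high)": (66, 80),
    "Class II (high)":     (51, 65),
    "Class III (medium)":  (36, 50),
    "Class IV (low)":      (36, 50),  # same printed range as class III in the abstract
    "Class V (very low)":  (5, 20),
}

def danger_classes(score):
    # Return every class whose score range contains the given score.
    return [name for name, (low, high) in CLASS_RANGES.items() if low <= score <= high]

print(danger_classes(58))  # ['Class II (high)']
print(danger_classes(40))  # ['Class III (medium)', 'Class IV (low)']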
Fake News Detection Using Optimized Convolutional Neural Network and Bidirectional Long Short-Term Memory Sari, Winda Kurnia; Azhar, Iman Saladin B.; Yamani, Zaqqi; Florensia, Yesinta
Computer Engineering and Applications Journal Vol 13 No 03 (2024)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v13i03.492

Abstract

The spread of fake news in the digital age threatens the integrity of online information, influences public opinion, and creates confusion. This study developed and tested a fake news detection model using an enhanced CNN-BiLSTM architecture with GloVe word embedding techniques. The WELFake dataset comprising 72,000 samples was used, with training and testing data ratios of 90:10, 80:20, and 70:30. Preprocessing involved GloVe 100-dimensional word embedding, tokenization, and stopword removal. The CNN-BiLSTM model was optimized with hyperparameter tuning, achieving an accuracy of 96%. A larger training data ratio demonstrated better performance. Results indicate the effectiveness of this model in distinguishing fake news from real news. This study shows that the CNN-BiLSTM architecture with GloVe embedding can achieve high accuracy in fake news detection, with recommendations for further research to explore preprocessing techniques and alternative model architectures for further improvement.
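A minimal Keras sketch of a CNN-BiLSTM architecture over 100-dimensional embeddings, as described above; in the paper the embedding matrix is initialized from GloVe and the hyperparameters are tuned, while here the embedding is random and all layer sizes are placeholders.

# Minimal sketch: CNN-BiLSTM for binary fake-news classification (Keras).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 300, 100  # placeholder sizes; 100-d as with GloVe-100

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),        # GloVe weights in the paper; random here
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),           # context in both directions
    layers.Dense(1, activation="sigmoid"),            # fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy batch just to show the expected shapes.
x = np.random.randint(0, VOCAB_SIZE, size=(16, MAX_LEN))
y = np.random.randint(0, 2, size=(16, 1))
model.fit(x, y, epochs=1, verbose=0)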
A Comparative Study of Deep Learning’s Performance Methods for News Article using Word Representations Azhar, Iman Saladin B.; Sari, Winda Kurnia; Gumay, Naretha Kawadha Pasemah
SISTEMASI Vol 14, No 2 (2025): Sistemasi: Jurnal Sistem Informasi
Publisher : Program Studi Sistem Informasi Fakultas Teknik dan Ilmu Komputer

DOI: 10.32520/stmsi.v14i2.5090

Abstract

In natural language processing (NLP), text classification is a crucial task that involves analyzing textual data, which often has high dimensionality. A good word representation is essential to address this challenge, and GloVe is one of the popular methods, providing pre-trained word representations as high-dimensional vectors. This research evaluates the effectiveness of three deep learning techniques, Convolutional Neural Network (CNN), Deep Neural Network (DNN), and Long Short-Term Memory (LSTM), for online news classification using 300-dimensional GloVe word representations. The CNN model uses convolutional and pooling layers to extract local features, the DNN relies on dense layers to learn abstract representations, and the LSTM excels at capturing long-term dependencies between words. The results show that the LSTM model achieved the best accuracy at 93.45%, followed by CNN at 91.24% and DNN at 90.67%. The superiority of LSTM is attributed to its ability to capture temporal relationships and context effectively, while CNN offers efficiency with faster training times. Although DNN produced solid performance, it is less suited to understanding word order. These findings indicate that LSTM outperforms the other models in online news text classification tasks.
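To make the comparison concrete, the sketch below builds the three alternative classifiers (CNN, DNN, LSTM) over the same 300-dimensional embedding layer; the class count and layer sizes are placeholders, and in the paper the embeddings come from pretrained GloVe vectors rather than random initialization.

# Minimal sketch: three alternative classifiers over the same embedding layer (Keras).
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMBED_DIM, NUM_CLASSES = 20000, 200, 300, 5  # placeholder sizes

def build(head):
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),  # GloVe-initialized in the paper
    ])
    if head == "cnn":                       # local features via convolution + pooling
        model.add(layers.Conv1D(128, 5, activation="relu"))
        model.add(layers.GlobalMaxPooling1D())
    elif head == "dnn":                     # dense layers over flattened embeddings
        model.add(layers.Flatten())
        model.add(layers.Dense(128, activation="relu"))
    elif head == "lstm":                    # long-term dependencies between words
        model.add(layers.LSTM(128))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

for head in ("cnn", "dnn", "lstm"):
    build(head).summary()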