Articles

Found 2 Documents
Journal: SMARTICS Journal

A Comparison of Transposition Optimization Performance for the Hill Cipher and Vigenere Cipher on Digital Images
Bayu Firmanto; Devita Putri Kusuma Ningrum; Arief Bramanto Wicaksono Putra
SMARTICS Journal Vol 7 No 2 (October 2021)
Publisher : Universitas PGRI Kanjuruhan Malang

DOI: 10.21067/smartics.v7i2.5931

Abstract

The security system is one of the crucial factors in sending data. Cryptography is a security system that works by converting data into ciphertext that is difficult to understand. This study compares the results of transposition optimization performance on two classic algorithms, the Hill Cipher and the Vigenere Cipher. The research aims to determine which algorithm is best suited to the transposition technique and to improve image encryption results. Optimization performance is assessed visually and tested using the Discrete Cosine Transform (DCT) method; the encrypted image is then tested using the Mean Squared Error (MSE) method. The study used three image samples in PNG format measuring 200 × 200 and 220 × 220 pixels. In the first image sample, the average values for the original Hill Cipher algorithm and for its transposition optimization are 24.79% and 24.77%, respectively. For the original Vigenere Cipher algorithm and its transposition optimization, the averages are 26.54% and 26.75%, respectively. This shows that transposition optimization is better suited to the Vigenere Cipher algorithm because it increases the randomness of the result.
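
As an illustration of the techniques the abstract names, here is a minimal Python sketch (not the authors' code) of a byte-wise Vigenere cipher applied to grayscale pixel values, a columnar transposition pass, and the MSE measurement. The key, the column permutation, and the random test image are assumptions for demonstration only.

```python
# Minimal sketch of the abstract's pipeline: Vigenere cipher on pixel
# bytes, a columnar transposition pass, then MSE between the original
# and the encrypted image. Key and permutation are hypothetical.
import numpy as np

def vigenere_encrypt(img: np.ndarray, key: bytes) -> np.ndarray:
    """Shift each pixel by the repeating key, modulo 256."""
    flat = img.astype(np.int64).ravel()
    shifts = np.resize(np.frombuffer(key, dtype=np.uint8), flat.shape)
    return ((flat + shifts) % 256).astype(np.uint8).reshape(img.shape)

def transpose_columns(img: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Reorder image columns by permutation `perm` (the transposition step)."""
    return img[:, perm]

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for one of the paper's 200 x 200 grayscale samples.
    original = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)
    key = b"SMARTICS"                          # hypothetical cipher key
    perm = rng.permutation(original.shape[1])  # hypothetical column permutation
    encrypted = transpose_columns(vigenere_encrypt(original, key), perm)
    print(f"MSE(original, encrypted) = {mse(original, encrypted):.2f}")
```

A larger MSE between the original and the encrypted image indicates that the encrypted result diverges more strongly from the source, which is the sense in which the abstract reports that added transposition increases randomness.
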
A Review of the Development of Transformer-Based Artificial Intelligence
Bayu Firmanto; As'ad Shidqy Aziz; Jendra Sesoca
SMARTICS Journal Vol 10 No 1 (April 2024)
Publisher : Universitas PGRI Kanjuruhan Malang

DOI: 10.21067/smartics.v10i1.8351

Abstract

Artificial Intelligence, especially machine learning techniques built on the transformer architecture, has experienced rapid progress. The transformer architecture was first introduced in 2017 and laid the foundation for the development of larger and more accurate models in NLP, among them BERT and GPT. This review examines five studies that have made significant contributions to the development of the transformer architecture, including research by Vaswani, Devlin, Brown, and Dai. The results of this review show that the transformer architecture is capable of improving training efficiency, accuracy, and long-context understanding across various NLP tasks. However, some issues with this technology still need to be addressed.
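
As an illustration of the core mechanism this review centers on, below is a minimal Python sketch of the scaled dot-product attention from Vaswani et al. (2017). The toy dimensions and the self-attention usage are illustrative assumptions, not code from any of the reviewed papers.

```python
# Minimal sketch of scaled dot-product attention, the building block of
# the transformer architecture: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # pairwise similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ V                              # weighted sum of value vectors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                         # toy dimensions
    x = rng.normal(size=(seq_len, d_model))
    out = scaled_dot_product_attention(x, x, x)     # self-attention on a toy sequence
    print(out.shape)                                # (4, 8)
```

Because every position attends to every other position in one step, this mechanism is what gives transformers the long-context understanding the review highlights, at the cost of quadratic work in sequence length.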