Contact Name
Nurul Khairina
Contact Email
nurulkhairina27@gmail.com
Phone
+6282167350925
Journal Mail Official
nurul@itscience.org
Editorial Address
Jl. Setia Luhur Lk V No 18 A Medan Helvetia Tel / fax : +62 822-5158-3783 / +62 822-5158-3783
Location
Kota Medan,
Sumatera Utara
INDONESIA
Journal of Computer Networks, Architecture and High Performance Computing
ISSN : 2655-9102     EISSN : 2655-9102     DOI : 10.47709
Core Subject : Science, Education
Journal of Computer Networks, Architecture and High Performance Computing is a scientific journal that publishes research results by lecturers and researchers, particularly in the fields of computer networks, computer architecture, and computing. The journal is published by the Information Technology and Science (ITScience) Research Institute, a joint organization of researchers and lecturers, and is issued 2 (two) times a year, in January and July. E-ISSN (LIPI): 2655-9102.
Aims and Scope:
Indonesia Cyber Defense Framework
Next-Generation Networking
Wireless Sensor Network
Odor Source Localization, Swarm Robot
Traffic Signal Control System
Autonomous Telecommunication Networks
Smart Cardio Device
Smart Ultrasonography for Telehealth Monitoring System
Swarm Quadcopter based on Semantic Ontology for Forest Surveillance
Smart Home System based on Context Awareness
Grid/High-Performance Computing to Support Drug Design Processes Involving Indonesian Medicinal Plants
Cloud Computing for Distance Learning
Internet of Things (IoT)
Cluster, Grid, Peer-to-Peer, GPU, Multi/Many-Core, and Cloud Computing
Quantum Computing Technologies and Applications
Large-Scale Workflow and Virtualization Technologies
Blockchain
Cybersecurity and Cryptography
Machine Learning, Deep Learning, and Artificial Intelligence
Autonomic Computing; Data Management/Distributed Data Systems
Energy-Efficient Computing Infrastructure
Big Data Infrastructure, Storage and Computation Management
Advanced Next-Generation Networking Technologies
Parallel and Distributed Computing, Language, and Algorithms
Programming Environments and Tools, Scheduling and Load Balancing
Operating System Support, I/O, Memory Issues
Problem-Solving, Performance Modeling/Evaluation
Articles: 795 Documents
Implementation of Fuzzy Logic for Chili Irrigation Integrated with Internet of Things Angga Prasetyo; Arief Rahman Yusuf; Yovi Litanianda; Sugianti; Fauzan Masykur
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 2 (2023): Article Research Volume 5 Issue 2, July 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i2.2518

Abstract

Chili, mustard greens, and tomatoes have always been farmers' favored crops despite their high water and labor demands. Farmers adapt to these demands with smart agriculture system (SAS) techniques such as automatic irrigation, but existing controllers regulate watering on a fixed routine regardless of land conditions. During the transitional season, this type of control can lead to root rot and fusarium disease in chili plants. As a solution, this study proposes an embedded system with Internet of Things (IoT) monitoring that incorporates artificial intelligence in the form of fuzzy logic. The fuzzy logic regulates irrigation based on the land's humidity and temperature using computational mathematics. It begins with a fuzzification stage that maps the sensor's temperature and humidity readings to membership values. An inference engine on the NodeMCU 8266 microcontroller then interprets the fuzzy rules as an aggregation of minimum conditions with the AND operator, and the rule outputs are combined into a single crisp value between 0 and 1 to produce the appropriate actuator response. After the entire system was prototyped, testing was conducted to measure how responsively the fuzzy program code reacts to changes in a simulated cultivation-land ecosystem. The study found that the fuzzy logic code embedded in the NodeMCU 8266 microcontroller effectively controls the spraying duration of the pump in response to various simulated environmental conditions within 3.6 seconds.
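As a rough illustration of the fuzzification and minimum-aggregation (AND) steps described above, the Python sketch below evaluates two hypothetical rules over triangular membership functions and combines them into a spray duration. The membership ranges, rule set, and output durations are assumptions for illustration, not the paper's actual parameters.

```python
# Minimal sketch of fuzzification + min (AND) rule aggregation for irrigation,
# assuming triangular membership functions and illustrative ranges.

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(temp_c, humidity_pct):
    # Assumed linguistic terms; the real system's ranges may differ.
    temp = {"cool": tri(temp_c, 15, 22, 29), "hot": tri(temp_c, 27, 35, 43)}
    hum = {"dry": tri(humidity_pct, 0, 25, 50), "wet": tri(humidity_pct, 45, 75, 100)}
    return temp, hum

def infer(temp, hum):
    # Each rule: min() implements the AND operator; output is a spray duration (s).
    rules = [
        (min(temp["hot"], hum["dry"]), 10.0),   # hot AND dry  -> long spray
        (min(temp["cool"], hum["wet"]), 0.0),   # cool AND wet -> no spray
    ]
    num = sum(strength * duration for strength, duration in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0            # weighted-average defuzzification

if __name__ == "__main__":
    t, h = fuzzify(temp_c=33.0, humidity_pct=30.0)
    print(f"spray duration ~ {infer(t, h):.1f} s")
```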
Design and Build a Quality Assurance Document Archiving Application Using the Rapid Application Development Danang Rudy Purnomo; Inung Diah Kurniawati; Latjuba Sofyana STT
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 2 (2023): Article Research Volume 5 Issue 2, July 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i2.2521

Abstract

Higher education quality assurance is a planned and sustainable process for systematically improving the quality of higher education. One of its functions is to support accreditation: in study programs, accreditation preparation is an activity that demands considerable time, energy, and attention from the academic community. A recurring problem in quality assurance activities is that the supporting data and information required for completeness have not been well documented. As time goes by, the number of documents grows and searching for them takes longer, because every folder, each holding no small number of documents, has to be opened and checked. The purpose of this research is to overcome these problems by providing good archive management, grouping documents by year or criterion, and improving time efficiency when recapitulating all quality assurance documents. The system was developed with the Rapid Application Development (RAD) method, which enables faster system development. The website-based Document Archiving Information System for Quality Assurance in the Informatics Engineering Study Program, University of PGRI Madiun, was built with the Laravel framework and a MySQL database. The tools used were XAMPP for the database server and Visual Studio Code as the text editor. System testing used black box testing, which only tests software functionality; the tests showed valid results with no errors or bugs.
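To illustrate the kind of black-box functional check described above, here is a minimal Python sketch that exercises a document-archiving service purely through its HTTP interface, with no knowledge of the internal code. The base URL, endpoint path, and payload fields are hypothetical stand-ins, not the application's real API.

```python
# Minimal black-box functional test sketch: only inputs and observable outputs
# are checked. The URL, route, and field names below are hypothetical.
import requests

BASE_URL = "http://localhost:8000"  # assumed local development server

def test_upload_archive_document():
    payload = {"title": "Accreditation Form 2023", "year": 2023, "criterion": "C1"}
    files = {"document": ("form.pdf", b"%PDF-1.4 dummy content", "application/pdf")}
    resp = requests.post(f"{BASE_URL}/api/archives", data=payload, files=files)
    assert resp.status_code in (200, 201), f"unexpected status {resp.status_code}"

def test_search_by_year_returns_list():
    resp = requests.get(f"{BASE_URL}/api/archives", params={"year": 2023})
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)

if __name__ == "__main__":
    test_upload_archive_document()
    test_search_by_year_returns_list()
    print("black-box checks passed")
```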
Risk Analysis of Information Security in Balikpapan International Airport Service Desk Plus (SDP) Using The Octave Allegro Method Novi Indrayani; Norma Amalia
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 2 (2023): Article Research Volume 5 Issue 2, July 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i2.2524

Abstract

Indonesia, as a developing country, has not been left untouched by advances in information and communication technology. These advances, however, can also bring negative impacts, such as a growing threat of misuse. SDP (Service Desk Plus) is a system that serves as an IT service management tool, facilitating employees from various departments in requesting services and reporting ICT (Information and Communication Technology) incidents. SDP has faced challenges and obstacles that hinder its optimal use, such as IT services experiencing downtime, inaccessible ICT services, and SDP users frequently sharing usernames and passwords. Given these threats, a further analysis of the information security risks of operating SDP centrally was conducted using the OCTAVE Allegro method. OCTAVE Allegro is a framework that applies the OCTAVE approach with a primary focus on information assets and is designed to deliver faster results without requiring in-depth knowledge of risk assessment. The analysis identified three risks to be mitigated, namely user data password errors with a relative risk score of 27, internet downtime with a relative risk score of 31, and file intrusion with a relative risk score of 38, taking the likelihood of each threat into account. In addition, one risk was accepted, namely incident data input errors, with a relative risk score of 19.
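For readers unfamiliar with how OCTAVE Allegro arrives at numbers like 27 or 38, the Python sketch below shows one common way the relative risk score is computed: the sum of each impact area's priority rank multiplied by its impact value (low = 1, moderate = 2, high = 3). The ranks and ratings in the example are illustrative assumptions, not the values from this study's worksheets.

```python
# Sketch of an OCTAVE Allegro relative risk score: sum(rank * impact_value)
# over the impact areas. Ranks and ratings below are illustrative only.

IMPACT_VALUE = {"low": 1, "moderate": 2, "high": 3}

# Assumed organizational ranking of impact areas (5 = most important).
IMPACT_AREA_RANK = {
    "reputation_and_confidence": 5,
    "financial": 4,
    "productivity": 3,
    "safety_and_health": 2,
    "fines_and_legal": 1,
}

def relative_risk_score(ratings):
    """ratings maps each impact area to 'low', 'moderate', or 'high'."""
    return sum(IMPACT_AREA_RANK[area] * IMPACT_VALUE[rating]
               for area, rating in ratings.items())

if __name__ == "__main__":
    # Hypothetical consequence ratings for an 'internet downtime' risk.
    downtime = {
        "reputation_and_confidence": "moderate",
        "financial": "moderate",
        "productivity": "high",
        "safety_and_health": "low",
        "fines_and_legal": "low",
    }
    print("relative risk score:", relative_risk_score(downtime))  # 5*2+4*2+3*3+2*1+1*1 = 30
```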
Implementation Of The Data Mining Cart Algorithm In The Characteristic Pattern Of New Student Admissions Ahmad Syahban Rifandy Siregar; Yunita Sari Siregar; Mufida Khairani
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.1975

Abstract

University of Harapan Medan is a private university in North Sumatra with an Informatics Engineering Study Program holding a "Good" accreditation. With better accreditation, the number of students who register keeps increasing. During new student admissions, the committee faces a huge pile of data, making it difficult to decide whether each applicant passes or fails. Therefore, this study implements data mining with the CART (Classification And Regression Tree) algorithm. Data mining is a technique for discovering characteristic patterns in variables or criteria within large amounts of data. In the CART method, the data is first converted into testing data, which is then used to build a classification tree by calculating information gain, the Gini index, and the goodness of split. From these results, terminal nodes are determined, class labels are assigned, and the classification tree is finally pruned to produce the decision tree. This study used 75 testing records with 3 criteria, namely the average report card score, the CAT test score, and the interview score. Testing with RapidMiner 5.3 produced 23 characteristic pattern rules, where node 1 is the CAT test score, the level 1 branch node is the interview score, and the level 2 branch node is the average report card score.
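As a rough illustration of the Gini index and goodness-of-split computation mentioned above, the Python sketch below scores one candidate split of labeled admission records using the Gini impurity reduction (one common formulation of the goodness of split). The tiny data set and the CAT-score threshold are made up for illustration and are not the study's data.

```python
# Sketch of CART's Gini impurity and goodness of split for one candidate split.
# The records and the CAT-score threshold below are illustrative only.
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_i^2) over class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def goodness_of_split(records, labels, predicate):
    """Impurity reduction: gini(parent) - weighted gini of the two child nodes."""
    left = [y for x, y in zip(records, labels) if predicate(x)]
    right = [y for x, y in zip(records, labels) if not predicate(x)]
    n = len(labels)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - weighted

if __name__ == "__main__":
    # (report_avg, cat_score, interview) tuples with pass/fail labels -- hypothetical.
    records = [(82, 75, 80), (70, 55, 60), (88, 80, 85), (65, 50, 55), (78, 68, 72)]
    labels = ["pass", "fail", "pass", "fail", "pass"]
    split = lambda r: r[1] >= 65          # candidate split on the CAT score
    print("goodness of split:", round(goodness_of_split(records, labels, split), 3))
```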
Base-Delta Dynamic Block Length and Optimization on File Compression Tommy; Ferdy Riza; Rosyidah Siregar; Manovri Yeni; Andi Marwan Elhanafi; Ruswan Nurmadi
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.1993

Abstract

Delta compression uses a previous block of bytes as a reference when compressing subsequent blocks. This approach is increasingly ineffective because duplicated byte sequences are less common in modern files. Another delta compression model instead uses the numerical differences between the bytes contained in a file: storing the difference values requires fewer representation bits than storing the original values. Base + Delta is a compression model in which the deltas are the numerical differences within fixed-size blocks. Because it was developed to compress memory blocks, this model uses fixed-size blocks and has no special mechanism for general file compression. This study proposes a compression model that develops the Base + Delta encoding concept so that it can be applied to all file types. The modifications adopt a dynamic block size using a sliding window and optimize the block header for both compressed and uncompressed blocks. The test results are promising: almost all tested file formats can be compressed with a ratio that is modest but consistent across formats, ranging from 0.04 to 12.3. The developed model can also fail to compress files with a high proportion of uncompressible blocks, where the overhead of the additional uncompressed-block information makes the file larger, with a negative ratio of -0.39 to -0.48, which is still relatively small and acceptable.
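To make the base-plus-delta idea concrete, the Python sketch below encodes one block of bytes as a base value plus small deltas and falls back to storing the block uncompressed when the deltas do not fit. The block size and the 1-byte signed delta width are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of base + delta encoding for one block of bytes.
# Block size and the 1-byte signed delta width are illustrative choices.

def encode_block(block: bytes):
    """Return ('delta', base, deltas) if all deltas fit in a signed byte,
    otherwise ('raw', block) to signal an uncompressed block."""
    base = block[0]
    deltas = [b - base for b in block[1:]]
    if all(-128 <= d <= 127 for d in deltas):
        return ("delta", base, bytes(d & 0xFF for d in deltas))
    return ("raw", block)

def decode_block(encoded):
    if encoded[0] == "raw":
        return encoded[1]
    _, base, deltas = encoded
    # Reinterpret each stored byte as a signed delta and add the base back.
    signed = [(d - 256) if d > 127 else d for d in deltas]
    return bytes([base] + [base + d for d in signed])

if __name__ == "__main__":
    data = bytes([100, 101, 99, 104, 100, 98, 103, 102])
    enc = encode_block(data)
    assert decode_block(enc) == data
    print("encoding kind:", enc[0])
```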
Analysis Of Decision Support Systems Edas Method In New Student Admission Selection Yunita Sari Siregar; Ahmad Zakir; Nenna Irsa Syahputri; Herlina Harahap; Divi Handoko
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.2057

Abstract

University of Harapan Medan is a private tertiary institution in North Sumatra with an Informatics Engineering study program. The Informatics Engineering study program attracts many applicants and graduates more than 200 students every year. To produce graduates with potential, reliability, and competence in technology and information, a selection must be made at the start, namely during new student admissions. Five criteria are used in the selection process: the average report card score, the basic ability test, the computer ability test, the psychological test, and the interview. Each criterion has 5 weight levels, namely very high, high, medium, low, and very low. The selection of new Informatics Engineering students is supported by a decision support system using the EDAS (Evaluation Based on Distance from Average Solution) method. The stages of this method are normalizing the decision matrix and finding the average of the alternatives, then calculating the positive distance from average (PDA) and negative distance from average (NDA), computing the weighted sums SPi and SNi from the attribute weights, and finally normalizing the positive and negative distance weights to determine the ranking score. From the analysis using the EDAS method on a sample of 10 prospective students, the candidate in 6th position had the highest score of 0.519 and the candidate in 7th position had the lowest score of 0.14. The accuracy of the EDAS method in this new student admission selection is therefore around 20%; of course, this accuracy value will change with larger data samples.
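For reference, the Python sketch below walks through the EDAS steps named above (average solution, PDA, NDA, weighted sums SP and SN, normalization, and the appraisal score) for benefit criteria. The small decision matrix and equal criterion weights are illustrative assumptions, not the study's actual data.

```python
# Sketch of the EDAS ranking steps for benefit criteria.
# The decision matrix and weights below are illustrative only.
import numpy as np

def edas_scores(X, w):
    """X: alternatives x criteria matrix (benefit criteria), w: criterion weights."""
    av = X.mean(axis=0)                                # average solution per criterion
    pda = np.maximum(0, (X - av) / av)                 # positive distance from average
    nda = np.maximum(0, (av - X) / av)                 # negative distance from average
    sp = (pda * w).sum(axis=1)                         # weighted sum of PDA (SPi)
    sn = (nda * w).sum(axis=1)                         # weighted sum of NDA (SNi)
    nsp = sp / sp.max()                                # normalized SP
    nsn = 1 - sn / sn.max()                            # normalized SN
    return (nsp + nsn) / 2                             # appraisal score, higher is better

if __name__ == "__main__":
    # 4 hypothetical applicants x 5 criteria (report avg, basic, computer, psych, interview)
    X = np.array([[85, 70, 75, 80, 78],
                  [78, 82, 70, 75, 80],
                  [90, 88, 85, 82, 86],
                  [70, 65, 68, 72, 70]], dtype=float)
    w = np.full(5, 0.2)                                # assumed equal weights
    scores = edas_scores(X, w)
    print("appraisal scores:", np.round(scores, 3))
    print("ranking (best first):", np.argsort(-scores) + 1)
```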
Information Technology Governance Audit Using COBIT 5 Framework in the Natural Resources Conservation Office I Gede Eka Artha Putra; Luh Joni Erawati Dewi; I Made Gede Sunarya
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.2079

Abstract

One of the organizations that requires good information technology governance is the natural resources conservation office. The audit was conducted to determine the capability level of its IT processes based on the COBIT 5 standard and to determine the gaps present at the office. The audit began by observing the agency's environment, including activity data and its implementation, then mapping that data to business goals according to COBIT 5, followed by mapping the business goals to IT-related goals to obtain the IT processes. The resulting IT processes were then filtered to the processes considered important by agency officials. The selected IT processes were assessed using the Guttman method. The capability level results show that EDM01, EDM02, and APO09 are at level 3 (established). The gaps found require a corrective strategy to reach the capability level expected by the institution, namely level 4 (predictable process), in the form of recommendations on the steps needed to achieve the expected capability value. The recommendations and improvements use the ISO/IEC 15504-2 and ISO 27002 standards, obtained by mapping them to the IT processes of COBIT 5.
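To give an idea of how a Guttman-style questionnaire can be turned into a COBIT 5 capability level, the Python sketch below scores yes/no answers per process attribute and maps the achievement percentage onto the ISO/IEC 15504 N-P-L-F rating scale. The answer data and the simplified level-assignment rule are illustrative assumptions, not the worksheets used in this audit.

```python
# Sketch: Guttman-style yes/no scoring per process attribute, mapped onto the
# ISO/IEC 15504 rating scale (N/P/L/F). The answer data below is illustrative.

def attribute_rating(answers):
    """Percentage of 'yes' (1) answers mapped to N, P, L, or F."""
    pct = 100.0 * sum(answers) / len(answers)
    if pct <= 15:
        return "N"        # Not achieved
    if pct <= 50:
        return "P"        # Partially achieved
    if pct <= 85:
        return "L"        # Largely achieved
    return "F"            # Fully achieved

def capability_level(attrs_by_level):
    """Simplified rule: a level is reached when its attributes are at least
    Largely achieved and every lower level's attributes are Fully achieved."""
    reached, lower_all_fully = 0, True
    for level in sorted(attrs_by_level):
        ratings = [attribute_rating(a) for a in attrs_by_level[level]]
        if lower_all_fully and all(r in ("L", "F") for r in ratings):
            reached = level
        else:
            break
        lower_all_fully = lower_all_fully and all(r == "F" for r in ratings)
    return reached

if __name__ == "__main__":
    # Hypothetical yes(1)/no(0) questionnaire answers for one process, e.g. APO09.
    apo09 = {
        1: [[1, 1, 1, 1, 1]],                 # PA 1.1
        2: [[1, 1, 1, 1], [1, 1, 1, 1]],      # PA 2.1, PA 2.2
        3: [[1, 1, 0, 1], [1, 0, 1, 1]],      # PA 3.1, PA 3.2
        4: [[1, 0, 0, 1], [0, 1, 0, 0]],      # PA 4.1, PA 4.2
    }
    print("capability level:", capability_level(apo09))  # -> 3 (established)
```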
Sentiment Analysis of Twitter Cases of Riots at Kanjuruhan Stadium Using the Naive Bayes Method Bryan Jerremia Katiandhago; Akhmad Mustolih; Wachyu Dwi Susanto; Pungkas Subarkah; Chendri Irawan Satrio Nugroho
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.2196

Abstract

Sentiment analysis is a process for analyzing opinions, sentiments, judgments, and emotions, here applied to the riot case at Kanjuruhan Stadium. The purpose of this research is to find out public opinion about the tragedy at Kanjuruhan Stadium. The data was obtained from the social media platform Twitter using the Twitter API and then analyzed, and the results of the analysis were classified using the Naive Bayes method. The classification process is divided into 7 (seven) stages, namely crawling, cleansing, pre-processing, labeling, classification, data training, and data testing. In the labeling process, data is classified into 2 (two) classes, namely the positive class and the negative class. Before preprocessing, 1,963 tweets were obtained; after preprocessing, 1,001 tweets remained. The data was trained and tested using the Naive Bayes classification method. The classification results gave precision values of 82% for negative data and 65% for positive data, recall values of 74% for negative data and 75% for positive data, F1-score values of 78% for negative data and 70% for positive data, and an accuracy of 74%.
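As a minimal sketch of the training, testing, and evaluation steps described above, the Python code below fits a multinomial Naive Bayes classifier on a handful of labeled tweets and prints precision, recall, and F1 per class. The example tweets, the train/test split, and the use of scikit-learn are illustrative assumptions, not the study's exact pipeline.

```python
# Minimal Naive Bayes sentiment-classification sketch with scikit-learn.
# The labeled tweets below are invented placeholders, not the study's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

tweets = [
    "justice for the victims of the tragedy",
    "heartbreaking news from the stadium",
    "thank you volunteers for helping the injured",
    "praying for all the families affected",
    "angry at how the situation was handled",
    "proud of the community coming together",
    "this should never have happened",
    "support and solidarity for everyone involved",
]
labels = ["negative", "negative", "positive", "positive",
          "negative", "positive", "negative", "positive"]

# Bag-of-words features; a real pipeline would add cleansing and stemming first.
X = CountVectorizer().fit_transform(tweets)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42, stratify=labels)

model = MultinomialNB()
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```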
Forensic Web Analysis on The Latest Version of Whatsapp Browser Dicky Satrio Ikhsan Utomo; Yudi Prayudi; Erika Ramadhani
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 1 (2023): Article Research Volume 5 Issue 1, January 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i1.2286

Abstract

With the rapid growth of technology and the increasing number of smartphone users, social media applications have proliferated. Among them, WhatsApp has emerged as the most widely used application, with over a quarter of the world's population using it since 2009. To meet growing customer demands, WhatsApp has introduced a browser version, which has undergone continuous updates and improvements. The latest version of WhatsApp exhibits significant differences in features and settings compared to its predecessors, particularly in conversations, images, video recordings, and other aspects. Consequently, this research focuses on analyzing artifacts that can aid forensic investigations. The study aims to extract artifacts related to conversation sessions as well as media data such as audio files, contact numbers, photos, and videos. To achieve this, various forensic tools are employed to assist the artifact search within the WhatsApp browser version. The research adopts the NIST framework and uses forensic tools such as Autopsy and FTK Imager to read encrypted backup database files, which contain valuable information such as deleted conversations, phone logs, photos, videos, and other data of interest. Analyzing the artifacts of the WhatsApp browser version contributes to forensic practice by showing what evidence can be obtained from conversations and media files; with these tools and techniques, practitioners can effectively retrieve and analyze data from the encrypted backup database files. In summary, this research explores the artifacts of the WhatsApp browser version, highlights its distinct features, and presents a forensic approach that applies the NIST framework with tools such as Autopsy and FTK Imager to examine encrypted backup database files containing crucial deleted data, conversations, and media files.
Revolutionizing Healthcare: How Deep Learning is poised to Change the Landscape of Medical Diagnosis and Treatment Ahsan Ahmad; Aftab Tariq; Hafiz Khawar Hussain; Ahmad Yousaf Gill
Journal of Computer Networks, Architecture and High Performance Computing Vol. 5 No. 2 (2023): Article Research Volume 5 Issue 2, July 2023
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v5i2.2350

Abstract

Deep learning has become a significant tool in the healthcare industry, with the potential to change the way care is provided and enhance patient outcomes. With a focus on personalised medicine, ethical issues and problems, future directions and opportunities, real-world case studies, and data privacy and security, this review article investigates the existing and potential applications of deep learning in healthcare. Deep learning in personalised medicine holds enormous promise for improving patient care by enabling more precise diagnoses and individualised treatment approaches. However, it is important to take into account ethical issues such as data privacy and the possibility of bias in algorithms. Deep learning in healthcare will likely be used more in the future to manage population health, prevent disease, and improve access to care for underprivileged groups of people. Case studies give specific examples of how deep learning is already changing the healthcare industry, from discovering rare diseases to forecasting patient outcomes. To fully realize the potential of deep learning in healthcare, however, issues including data quality, interpretability, and legal barriers must be resolved. Remote monitoring and telemedicine are two promising areas where deep learning is lowering healthcare expenses and enhancing access to care. Deep learning algorithms can be used to analyse patient data in real time, warning medical professionals of possible problems before they worsen and allowing for online consultations with experts. Finally, when applying deep learning to healthcare, the importance of data security and privacy cannot be overstated. To protect patient data and guarantee its responsible use, appropriate safeguards and rules must be implemented. Deep learning has the ability to transform the healthcare industry by delivering more individualised, practical, and efficient care. However, in order to fully realize its promise, ethical issues, difficulties, and regulatory barriers must be addressed. With the right safeguards and ongoing innovation, deep learning can contribute significantly to enhancing patient outcomes and lowering healthcare costs.