Contact Name: -
Contact Email: -
Phone: -
Journal Mail Official: -
Editorial Address: -
Location: Kota Yogyakarta, Daerah Istimewa Yogyakarta, Indonesia
International Journal of Informatics and Communication Technology (IJ-ICT)
ISSN: 2252-8776     EISSN: 2722-2616     DOI: -
Core Subject: Science
International Journal of Informatics and Communication Technology (IJ-ICT) is a common platform for publishing quality research papers and other intellectual outputs. The journal is published by the Institute of Advanced Engineering and Science (IAES), whose aim is to promote the dissemination of scientific knowledge and technology in the information and communication technology areas to an international audience of the scientific community, to encourage the progress and innovation of technology for human life, and to serve as a platform for the proliferation of ideas and thought for all scientists, regardless of location or nationality. The journal covers all areas of informatics and communication technology (ICT), focusing on the integration of hardware and software solutions for the storage, retrieval, sharing, management, analysis, visualization, and interpretation of data and their applications to human services programs and practices, publishing refereed original research articles and technical notes. It is designed to serve researchers, developers, managers, strategic planners, graduate students, and others interested in state-of-the-art research activities in ICT.
Arjuna Subject: -
Articles: 22 documents
Search results for issue "Vol 13, No 2: August 2024": 22 documents
Comparative analysis of heart failure prediction using machine learning models Kanakala, Srinivas; Prashanthi, Vempaty
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp297-305

Abstract

Heart failure is a critical health problem worldwide, and its prediction is a major challenge in medical science. Machine learning has shown great potential in predicting heart failure by analyzing large amounts of medical data. Heart failure prediction with classification algorithms involves models such as decision trees, logistic regression, and support vector machines that identify and analyze potential risk factors for heart failure. By analyzing large datasets containing medical and lifestyle-related variables, these models can accurately predict the likelihood of heart failure in individuals. In our research, heart failure prediction and comparison are carried out using logistic regression, k-nearest neighbors (KNN), support vector machine (SVM), decision tree, and random forest classifiers. Accurate identification of high-risk individuals enables early intervention and better management of heart failure, reducing the risk of mortality and morbidity associated with this condition. Overall, machine learning algorithms play a major role in improving the accuracy of heart failure risk assessment, allowing for more personalized and effective prevention and treatment strategies.
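A minimal sketch of the comparison described above, assuming a scikit-learn workflow; the CSV file name, target column, and train/test split are placeholders, not the paper's actual dataset or preprocessing.

```python
# Hedged sketch: comparing the classifiers named in the abstract with scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart_failure.csv")                         # hypothetical file name
X, y = df.drop(columns=["DEATH_EVENT"]), df["DEATH_EVENT"]    # assumed target column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    pipe.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, pipe.predict(X_te)):.3f}")
```

Scaling inside the pipeline keeps distance- and margin-based models such as KNN and SVM from being dominated by features with large numeric ranges.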
Improving 4G LTE network quality using the automatic cell planning Yuhanef, Afrizal; Yusnita, Sri; Khairani, Redha Anadia
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp231-238

Abstract

The growing demand for network services leads to an increase in traffic load on eNodeBs, resulting in decreased network quality and performance and necessitating optimisation. This research analyses the results of optimising the 4G reference signal received power (RSRP), signal to interference noise ratio (SINR), and throughput parameters using the automatic cell planning (ACP) method. ACP has been shown to significantly improve the performance and quality of 4G LTE networks compared to traditional cell planning methods. Measured against the standard RSRP parameter, coverage improved after ACP optimisation: values were dominant in the range of -100 dBm to -85 dBm, with an average of -98.59 dBm, which falls in the good category. The average SINR increased by 18.23 dB, placing it in the good category. Throughput was dominant in the 14,000 kbps range, with an average value of 50,241.08 kbps, in the excellent category. The ACP method significantly enhances 4G LTE network performance, coverage, and user experience, potentially addressing operators' issues with unstable network quality due to poor coverage. This research is important for both users and the telecommunications industry.
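As a rough illustration of how the reported KPI values map to quality categories, the sketch below buckets sample RSRP measurements into the bands the abstract refers to; the thresholds follow common 4G planning practice and the sample values are invented.

```python
# Hedged sketch: bucketing drive-test RSRP samples into quality bands and
# reporting the average, mirroring how the abstract summarises its results.
import numpy as np

def rsrp_category(rsrp_dbm: float) -> str:
    if rsrp_dbm >= -85:
        return "excellent"
    if rsrp_dbm >= -100:
        return "good"        # the band the abstract reports as dominant
    if rsrp_dbm >= -110:
        return "fair"
    return "poor"

samples = np.array([-92.1, -97.4, -101.3, -88.0, -105.6])  # illustrative RSRP values
print("average RSRP:", round(samples.mean(), 2), "dBm")
for s in samples:
    print(s, "->", rsrp_category(s))
```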
Solana blockchain technology: a review Mishra, Debani Prasad; Behera, Sandip Ranjan; Behera, Subhashis Satyabrata; Patro, Aditya Ranjan; Salkuti, Surender Reddy
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp197-205

Abstract

The introduction to a review of the Solana blockchain is critical to setting the stage for the arguments and evidence that follow; it provides context by discussing the current state of blockchain technology and introducing Solana as a potential solution. Blockchain technology has the potential for countless applications, ranging from financial transactions to secure data storage. However, existing blockchain systems suffer from scalability issues, where confirmation times and network congestion limit transaction volumes. This review of the Solana blockchain is valuable for those seeking an in-depth understanding of its design and efficacy. Given the increasing number of blockchain technologies available in the market, potential adopters face the challenge of selecting the most suitable blockchain network for their specific use case. A well-constructed review provides the necessary information on the functioning of the technology, including its strengths and limitations. It also enables readers to compare various blockchain technologies and judge their suitability for their specific needs. Therefore, reviews like this one play a crucial role in helping to advance blockchain technology by driving the adoption of superior blockchain networks.
Enhancing PI controller performance in grid-connected hybrid power systems Pavan, Gollapudi; Babu, A. Ramesh
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp264-271

Abstract

This paper addresses the optimal operation of a microgrid built up of both uncontrollable (solar, wind) and controllable (batteries, diesel generators) electrical energy sources. Variations in the fluctuating power supply caused by load changes are managed by replacing the controllers. The objective of this research is to optimize the controller gain settings for effective use of electrical energy. In this paper, the integral time square error (ITSE) criterion is combined with the cuckoo search algorithm (CSA) and the particle swarm algorithm (PSA) to obtain accurate, precise, and appropriate results. This enhances the microgrid's steady-state response compared with trial-and-error tuning, assuring a stable supply of electricity to the load.
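A minimal sketch of swarm-based PI tuning against an ITSE cost, in the spirit of the approach above: the first-order plant, gain bounds, and particle swarm settings are illustrative assumptions, and the paper's cuckoo search variant is not reproduced.

```python
# Hedged sketch: tuning PI gains (Kp, Ki) by minimising the integral
# time-square error (ITSE) with a minimal particle swarm optimiser.
import numpy as np

def itse_cost(gains, tau=0.5, dt=0.01, t_end=5.0):
    """Simulate a unit-step response of a PI-controlled first-order plant
    (tau*dy/dt = u - y, an assumed stand-in for the microgrid) and return ITSE."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                # error against a unit step reference
        integ += e * dt
        u = kp * e + ki * integ    # PI control law
        y += dt * (u - y) / tau    # Euler step of the plant
        cost += t * e * e * dt     # ITSE accumulation
    return cost

def pso(n_particles=20, n_iter=60, bounds=(0.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([itse_cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([itse_cost(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

if __name__ == "__main__":
    (kp, ki), best = pso()
    print(f"Kp={kp:.3f}, Ki={ki:.3f}, ITSE={best:.4f}")
```

A cuckoo search variant would replace the velocity update with Lévy-flight steps and abandonment of the worst candidate solutions, while keeping the same ITSE cost function.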
Design of an efficient Transformer-XL model for enhanced pseudo code to Python code conversion Kuche, Snehal H.; Gaikwad, Amit K.; Deshmukh, Meghna
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp223-230

Abstract

The landscape of programming has long been challenged by the task of transforming pseudo code into executable Python code, a process traditionally marred by its labor-intensive nature and the necessity for a deep understanding of both logical frameworks and programming languages. Existing methodologies often grapple with limitations in handling variable-length sequences and maintaining context over extended textual data. Addressing these challenges, this study introduces an innovative approach utilizing the Transformer-XL model, a significant advancement in the domain of deep learning. The Transformer-XL architecture, an evolution of the standard Transformer, adeptly processes variable-length sequences and captures extensive contextual dependencies, thereby surpassing its predecessors in handling natural language processing (NLP) and code synthesis tasks. The proposed model employs a comprehensive process involving data preprocessing, model input encoding, a self-attention mechanism, contextual encoding, language modeling, and a meticulous decoding process, followed by post-processing. The implications of this work are far-reaching, offering a substantial leap in the automation of code conversion. As the field of NLP and deep learning continues to evolve, the Transformer-XL based model is poised to become an indispensable tool in the realm of programming, setting a new benchmark for automated code synthesis.
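The sketch below illustrates the Transformer-XL ingredient the abstract leans on, segment-level recurrence for long inputs, using the generic transfo-xl-wt103 checkpoint from Hugging Face rather than the authors' fine-tuned pseudo-code-to-Python model; it assumes a transformers release that still ships the (now deprecated) TransfoXL classes.

```python
# Hedged sketch: feeding pseudo code segment by segment through Transformer-XL,
# carrying the cached memories so later segments attend to earlier context.
import torch
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103").eval()

pseudo_code = "read n . set total to zero . for i from 1 to n add i to total . print total"
segments = [s.strip() for s in pseudo_code.split(".") if s.strip()]

mems = None  # hidden-state cache carried across segments (segment-level recurrence)
with torch.no_grad():
    for seg in segments:
        inputs = tokenizer(seg, return_tensors="pt")
        outputs = model(inputs["input_ids"], mems=mems)
        mems = outputs.mems  # later segments attend to earlier ones via this cache
print(f"cached memory tensors per layer: {len(mems)}")
```

A production converter would fine-tune such a backbone on paired pseudo code and Python, then decode with beam search and post-process the output, as the abstract outlines.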
One time pad for enhanced steganographic security using least significant bit with spiral pattern Rihartanto, Rihartanto; Budi Utomo, Didi Susilo; Rizal, Ansar; Diartono, Dwi Agus; Februariyanti, Herny
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp168-177

Abstract

Data is an important commodity in today’s digital era. Therefore, data needs adequate security to prevent misuse. A common data security practice in the transmission of information is cryptography. Another approach is steganography, which hides secret messages in other media that are not confidential and can be accessed by the public. In this study, a spiral pattern is used for data placement with the least significant bit (LSB) method. Modifications were made to the 2-bit LSB to increase the data capacity that can be hidden. To increase security, the data is first converted into a datastream using random numbers as a one-time pad (OTP). An exclusive-OR (XOR) operation is performed on the datastream and the OTP to obtain the encrypted data to be hidden. The results showed that the image quality of the steganographic output at a capacity close to 100% was still fairly good, as indicated by a peak signal-to-noise ratio (PSNR) value greater than 46 dB. Visually, the steganographic image does not look different from the original. Likewise, the use of random numbers as an OTP succeeded in changing the hidden data significantly, as indicated by an avalanche effect value above 50%.
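A minimal sketch of the OTP-plus-2-bit-LSB idea, assuming a grayscale cover image held as a NumPy array; for brevity the pixels are visited in raster order rather than the paper's spiral pattern, which changes only the traversal, not the embedding.

```python
# Hedged sketch: one-time-pad XOR encryption followed by 2-bit LSB embedding.
import numpy as np

def embed(cover: np.ndarray, message: bytes, seed: int = 7):
    """Hide `message` in the two least significant bits of `cover` (uint8)."""
    rng = np.random.default_rng(seed)
    otp = rng.integers(0, 256, size=len(message), dtype=np.uint8)  # one-time pad
    cipher = np.frombuffer(message, dtype=np.uint8) ^ otp          # XOR encryption
    chunks = []                              # split each byte into four 2-bit chunks
    for byte in cipher:
        for shift in (6, 4, 2, 0):
            chunks.append((byte >> shift) & 0b11)
    flat = cover.flatten().copy()
    if len(chunks) > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:len(chunks)] = (flat[:len(chunks)] & 0b11111100) | np.array(chunks, dtype=np.uint8)
    return flat.reshape(cover.shape), otp

def extract(stego: np.ndarray, otp: np.ndarray) -> bytes:
    """Recover the message: read 2-bit chunks, rebuild bytes, XOR with the pad."""
    flat = stego.flatten()
    chunks = flat[:len(otp) * 4] & 0b11
    cipher = np.zeros(len(otp), dtype=np.uint8)
    for i, shift in enumerate((6, 4, 2, 0)):
        cipher |= chunks[i::4].astype(np.uint8) << shift
    return (cipher ^ otp).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego, pad = embed(cover, b"secret message")
assert extract(stego, pad) == b"secret message"
```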
Blockchain and ML in land registries: a transformative alliance Shukla, Vishnu; Raipurkar, Abhijeet Ramesh; Chandak, Manoj B.
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp239-247

Abstract

This study presents a novel method for merging blockchain security and machine learning (ML) valuation to update land register systems. The system offers a safe, open, and effective framework for documenting and managing land ownership, addressing issues with conventional land registry procedures. Blockchain technology creates a tamper-proof record by cryptographically combining transactions and time-stamped entries to provide an immutable and decentralized ledger. In addition to building a solid foundation for the land registry system, this strengthens trust. Simultaneously, ML algorithms examine variables such as amenities and location to remove inflated pricing, providing accurate assessments and encouraging openness in the real estate sector. The system has been put into practice and verified in small-scale applications. Its features include enhanced data security, expedited ownership transfers, and accurate asset appraisals. Collaboration between governments, regulatory agencies, and technology suppliers is necessary for widespread deployment. Land registration procedures will change as a result of the revolutionary partnership between blockchain and ML technology, which offers a more effective, safe, and future-ready environment. Accepting this ground-breaking technique establishes a new benchmark for the updating of land ownership data and is a major step toward a more sophisticated and dependable method in the industry.
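The tamper-evidence claim above rests on hash-linking time-stamped entries; the toy ledger below illustrates that mechanism only and is not the paper's land-registry implementation.

```python
# Hedged sketch: a minimal hash-chained ledger showing why altering a past
# land-registry entry is detectable.
import hashlib, json, time

def make_block(record: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block is broken
    return True

chain = [make_block({"parcel": "A-101", "owner": "Alice"}, prev_hash="0" * 64)]
chain.append(make_block({"parcel": "A-101", "owner": "Bob"}, prev_hash=chain[-1]["hash"]))
print(verify(chain))                        # True
chain[0]["record"]["owner"] = "Mallory"     # tampering with an old entry
print(verify(chain))                        # False: the chain no longer verifies
```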
Alzheimer’s disease diagnosis using convolutional neural networks model Samanvi, Potnuru; Agrawal, Shruti; Mallick, Soubhagya Ranjan; Lenka, Rakesh Kumar; Palei, Shantilata; Mishra, Debani Prasad; Salkuti, Surender Reddy
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp206-213

Abstract

The global healthcare system and related fields are experiencing extensive transformations, taking inspiration from past trends to plan for a technologically advanced society. Neurodegenerative diseases are among the illnesses that are hardest to treat. Alzheimer’s disease is one of these conditions and is one of the leading causes of dementia. Due to the lack of a permanent treatment and the complexity of managing symptoms as severity grows, it is crucial to catch Alzheimer’s disease early. The objective of this study was to develop a convolutional neural network (CNN)-based model to diagnose early-stage Alzheimer’s disease more accurately and with less data loss than previously reported methods. A CNN is adept at processing and recognising images and has been employed in various diagnostic tools and research in the healthcare sector, showing great potential. Convolutional, pooling, and fully connected layers are the common layers that make up a CNN. In this paper, five CNN models (ResNet, DenseNet, MobileNet, Inception, and Xception) were selected and trained. ResNet performed the best and was chosen to undergo additional modifications, improving accuracy to 95.5%. This result makes us hopeful about the performance of the model on larger datasets as well as in the detection of other diseases.
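A minimal transfer-learning sketch in the spirit of the study, using a Keras ResNet50 backbone with a small classification head; the class count, input size, and data pipeline are assumptions, and the authors' additional ResNet modifications are not reproduced.

```python
# Hedged sketch: transfer learning with a pretrained ResNet backbone for
# image-based dementia-stage classification.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 4  # e.g. non / very mild / mild / moderate dementia (assumed)

base = ResNet50(weights="imagenet", include_top=False, input_shape=(176, 176, 3))
base.trainable = False  # freeze the pretrained backbone first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

The same head can be swapped onto DenseNet, MobileNet, Inception, or Xception backbones to reproduce the five-model comparison described in the abstract.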
Improved inception-V3 model for apple leaf disease classification Sirait, Dheo Ronaldo; Sutikno, Sutikno; Sasongko, Priyo Sidik
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp161-167

Abstract

The apple, a nutrient-rich fruit belonging to the genus Malus, is recognized for its fiber, vitamins, and antioxidants, offering health benefits such as improved digestion and reduced cardiovascular disease risk. In Indonesia, the soil and climate create favorable conditions for apple cultivation. However, it is essential to prioritize the health of the plants. Biotic factors, such as fungal infections like apple scab and pests, alongside abiotic factors like temperature and soil moisture, impact the health of apple plants. Computer vision, specifically the convolutional neural network (CNN) Inception-V3, proves effective in helping farmers identify these diseases. The output layers in Inception-V3 are essential, generating predictions from the input data. For this reason, in this paper we add output layers to the Inception-V3 architecture to increase the accuracy of apple leaf disease classification. The added output layers are dense, dropout, and batch normalization. Adding a dense layer after flattening consolidates the extracted features into a more compact representation. Dropout helps prevent overfitting by randomly deactivating some units during training. Batch normalization normalizes activations across batches, speeding up training and providing stability to the model. Test results show that the proposed method produced an accuracy of 99.27%, an increase of 1.85% over the baseline Inception-V3. These enhancements showcase the potential of leveraging computer vision for precise disease diagnosis in apple crops.
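A hedged Keras sketch of the modified head described above: an Inception-V3 backbone followed by the added dense, batch-normalization, and dropout layers; the layer sizes and the four-class output are assumptions.

```python
# Hedged sketch: Inception-V3 backbone with an extended classification head
# (flatten -> dense -> batch norm -> dropout -> softmax), as the abstract describes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 4  # e.g. scab, rust, multiple diseases, healthy (assumed)

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained feature extractor frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),   # added dense layer after flattening
    layers.BatchNormalization(),            # added batch normalization
    layers.Dropout(0.5),                    # added dropout against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```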
Mobile forensics tools and techniques for digital crime investigation: a comprehensive review Sutikno, Tole
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp321-332

Abstract

Extracting and analyzing data from smartphones, IoT devices, and drones is crucial for conducting digital crime investigations. Effective cyberattack mitigation necessitates the use of advanced Android mobile forensics techniques. The investigation necessitates proficiency in manual, logical, hex dump, chip-off, and microread methodologies. This paper provides a comprehensive overview of Android mobile forensics tools and techniques for digital crime investigation, as well as their use in gathering and analyzing evidence. Forensic software tools like Cellebrite UFED, Oxygen Forensic Detective, XRY by MSAB, Magnet AXIOM, SPF Pro by SalvationDATA, MOBILedit Forensic Express, and EnCase Forensic employ both physical and logical techniques to retrieve data from mobile devices. These advanced tools offer a structured approach to tackling digital crimes effectively. We compare dependability, speed, compatibility, data recovery accuracy, and reporting. Mobile-network forensics ensures data acquisition, decryption, and analysis success. Conclusions show that Android mobile forensics tools for digital crime investigations are diverse and have different capabilities. Mobile forensics software offers complete solutions, but new data storage and encryption methods require constant development. The continuous evolution of forensic software tools and a comprehensive tool classification system could further enhance digital crime investigation capabilities.

Page 1 of 3 | Total records: 22