cover
Contact Name
Tommy
Contact Email
lpkdgeneration2022@gmail.com
Phone
+6285695565558
Journal Mail Official
tommy@admi.or.id
Editorial Address
Perumahan Bumi Dirgantara Permai Blok CL NO 5, Jl. Durian, Jati Asih, Bekasi, Provinsi Jawa Barat, 17421
Location
Kab. bekasi,
Jawa barat
INDONESIA
International Journal Science and Technology (IJST)
ISSN : 28287223     EISSN : 28287045     DOI : https://doi.org/10.56127/ijst.v1i2
International Journal Science and Technology (IJST) is a scientific journal that presents original articles about research knowledge and information or the latest research and development applications in the field of technology. The scope of the IJST Journal covers the fields of Informatics, Mechanical Engineering, Electrical Engineering, Information Systems and Industrial Engineering. This journal is a means of publication and a place to share research and development work in the field of technology.
Articles 93 Documents
COMPARISON OF PRE-TRAINED BERT-BASED TRANSFORMER MODELS FOR REGIONAL LANGUAGE TEXT SENTIMENT ANALYSIS IN INDONESIA Taufiq Dwi Purnomo; Joko Sutopo
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i3.1739

Abstract

This study compared the performance of eight pre-trained BERT-based models for sentiment analysis across ten regional languages in Indonesia. The objective was to identify the most effective model for analyzing sentiment in low-resource Indonesian languages, given the increasing need for automated sentiment analysis tools. The study utilized the NusaX dataset and evaluated the performance of IndoBERT (IndoNLU), IndoBERT (IndoLEM), Multilingual BERT, and NusaBERT, each in both base and large variants. Model performance was assessed using the F1-score metric. The results indicated that models pre-trained on Indonesian data, specifically IndoBERT (IndoNLU) and NusaBERT, generally outperformed the multilingual BERT and IndoBERT (IndoLEM) models. IndoBERT-large (IndoNLU) achieved the highest overall F1-score of 0.9353. Performance varied across the different regional languages. Javanese, Minangkabau, and Banjar consistently showed high F1 scores, while Batak Toba proved more challenging for all models. Notably, NusaBERT-base underperformed compared to IndoBERT-base (IndoNLU) across all languages, despite being retrained on Indonesian regional languages. This research provides valuable insights into the suitability of different pre-trained BERT models for sentiment analysis in Indonesian regional languages.
ANALYSIS OF PARENT-CHILD INTERNET ADDICTION TEST IN SDIT AL IMAN BINTARA Tissa Maharani
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i3.1770

Abstract

The development of information and communication technology and the implementation of distance learning during the COVID-19 pandemic have made everyone use gadgets, including children and toddlers. In addition to having a positive impact, of course, there are many negative impacts, one of which is gadget addiction. Gadget addiction can be detected using the PARENT-CHILD INTERNET ADDICTION TEST (PCIAT) developed by Dr. Kimberly Young based on Internet Addiction Test (IAT). The purpose of this research is to detect and analyze the use of gadgets and the internet using PCIAT in SDIT Al Iman Bintara students through respondent, namely their parents, and increase the awareness of parents about the dangers of gadget and internet addiction. The results of this research are, from 516 students, as many as 472 student guardians filled out the PCIAT questionnaire. A total of 288 respondent children (67.4%) showed NO SYMPTOMS of gadget addiction, 124 respondent children (29%) showed MILD symptoms, and 15 respondent children (3.5%) showed moderate symptoms. Cooperation between parents and the school is needed in regulating the use of children's gadgets, and consistency in the implementation of the rules.
Adversarial AI: Threats, Defenses, and the Role of Explainability in Building Trustworthy Systems Deepak Kejriwal; Pujari, Tejaskumar Dattatray
International Journal Science and Technology Vol. 2 No. 2 (2023): July: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v2i2.1955

Abstract

Artificial Intelligence has made possible the latest revolutions in the industry. Nevertheless, adversarial AI turns out to be a serious challenge because of its tendency to exploit the vulnerabilities of machine learning models, breach their security, and eventually lead them to fail, mostly unless very few. Adversarial attacks can be evasion and poisoning, model inversion, and so forth; they indeed say how fragile an AI system is and also suggest a proper immediate call for solid defensive structures. Several adversarial defense mechanisms have been proposed―from adversarial training to defensive distillation and certified defenses―yet they remain vulnerable to high-level attacks. This included the emergence of explainable artificial intelligence (XAI) as one of the significant components in AI security, whereby capturing interpretability and transparency can lead to better threat detection and user trust. This work encompasses a literature review of adversarial AIs, current developments in adversarial defenses, and the role played by XAI in reducing threats from such adversarial systems. In effect, the paper presents an integrated framework with techniques of explainability for the building of resilient, transparent, and trustworthy AI systems.
Balancing Innovation and Privacy: A Red Teaming Approach to Evaluating Phone-Based Large Language Models under AI Privacy Regulations Mangesh Pujari; Anil Kumar Pakina; Anshul Goel
International Journal Science and Technology Vol. 2 No. 3 (2023): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v2i3.1956

Abstract

The rapid deployment of large language models (LLMs) on mobile devices has introduced significant privacy concerns, particularly regarding data collection, user profiling, and compliance with evolving AI regulations such as the GDPR and the AI Act. While these on-device LLMs promise improved latency and user experience, their potential to inadvertently leak sensitive information remains understudied. This paper proposes a red teaming framework to systematically assess the privacy risks of phone-based LLMs, simulating adversarial attacks to identify vulnerabilities in model behavior, data storage, and inference processes. We evaluate popular mobile LLMs under scenarios such as prompt injection, side-channel exploitation, and unintended memorization, measuring their compliance with strict privacy-by-design principles. Our findings reveal critical gaps in current safeguards, including susceptibility to context-aware deanonymization and insufficient data minimization. We further discuss regulatory implications, advocating for adaptive red teaming as a mandatory evaluation step in AI governance. By integrating adversarial testing into the development lifecycle, stakeholders can preemptively align phone-based AI systems with legal and ethical privacy standards while maintaining functional utility.
Enhancing Cybersecurity in Edge AI through Model Distillation and Quantization: A Robust and Efficient Approach Mangesh Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology Vol. 1 No. 3 (2022): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v1i3.1957

Abstract

The rapid proliferation of Edge AI has introduced significant cybersecurity challenges, including adversarial attacks, model theft, and data privacy concerns. Traditional deep learning models deployed on edge devices often suffer from high computational complexity and memory requirements, making them vulnerable to exploitation. This paper explores the integration of model distillation and quantization techniques to enhance the security and efficiency of Edge AI systems. Model distillation reduces model complexity by transferring knowledge from a large, cumbersome model (teacher) to a compact, efficient one (student), thereby improving resilience against adversarial manipulations. Quantization further optimizes the student model by reducing bit precision, minimizing attack surfaces while maintaining performance. We present a comprehensive analysis of how these techniques mitigate cybersecurity threats such as model inversion, membership inference, and evasion attacks. Additionally, we evaluate trade-offs between model accuracy, latency, and robustness in resource-constrained edge environments. Experimental results on benchmark datasets demonstrate that distilled and quantized models achieve comparable accuracy to their full-precision counterparts while significantly reducing vulnerability to cyber threats. Our findings highlight the potential of distillation and quantization as key enablers for secure, lightweight, and high-performance Edge AI deployments.
Efficient TinyML Architectures for On-Device Small Language Models: Privacy-Preserving Inference at the Edge Mangesh Pujari; Anshul Goel; Anil Kumar Pakina
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i3.1958

Abstract

Deploying small language models (SLMs) on ultra-low-power edge devices requires careful optimization to meet strict memory, latency, and energy constraints while preserving privacy. This paper presents a systematic approach to adapting SLMs for Tiny ML, focusing on model compression, hardware-aware quantization, and lightweight privacy mechanisms. We introduce a sparse ternary quantization technique that reduces model size by 5.8× with minimal accuracy loss and an efficient federated fine-tuning method for edge deployment. To address privacy concerns, we implement on-device differential noise injection during text preprocessing, adding negligible computational overhead. Evaluations on constrained devices (Cortex-M7 and ESP32) show our optimized models achieve 92% of the accuracy of full-precision baselines while operating within 256KB RAM and reducing inference latency by 4.3×. The proposed techniques enable new applications for SLMs in always-on edge scenarios where both efficiency and data protection are critical.
AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks Anil Kumar Pakina; Ashwin Sharma; Deepak Kejriwal
International Journal Science and Technology Vol. 4 No. 1 (2025): March: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1960

Abstract

The rapid rise of generative AI has fueled more sophisticated disinformation campaigns, particularly on encrypted messaging platforms like WhatsApp, Signal, and Telegram. While these platforms protect user privacy through end-to-end encryption, they pose significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate undetectable synthetic propaganda, influencing public opinion and destabilizing democratic processes without leaving a trace. This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy. Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors. Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.
Neuro- Symbolic Compliance Architectures: Real-Time Detection of Evolving Financial Crimes Using Hybrid AI Anil Kumar Pakina; Mangesh Pujari
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1961

Abstract

This paper proposes NeuroSym-AML, a new neuro-symbolic AI framework explicitly designed for the real-time detection of evolving financial crimes with a special focus on cross-border transactions. By combining Graph Neural Networks (GNNs) with interpretable rule-based reasoning, our system dynamically adapts to emerging money laundering patterns while ensuring strict compliance with FATF/OFAC regulations. In contrast to static rule-based systems, NeuroSym-AML shows better performance-an 83.6% detection accuracy to identify financial criminals, which demonstrated a 31% higher uplift compared with conventional systems-produced by utilizing datasets from 14 million SWIFT transactions. Furthermore, it is continuously learning new criminal typologies, providing decision trails that are available to regulatory audit in real-time. Key innovations include: (1) the continuous self-updating of detection heuristics, (2) automatic natural language processing of the latest regulatory updates, and (3) adversarial robustness against evasion techniques. This hybrid architecture bridges the scalability of machine learning with interpretability of symbolic AI, which can address crucial gaps for financial crime prevention, therefore delivering a solution for satisfying both adaptive fraud detection and transparency in decision-making in high-stakes financial environments.
Ethical and Responsible AI: Governance Frameworks and Policy Implications for Multi-Agent Systems Tejaskumar Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology Vol. 3 No. 1 (2024): March: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i1.1962

Abstract

Semi-autonomous, augmented- Artificial Intelligence has become increasingly relevant as collective activities are practiced by two or more autonomic entities. MAS and AI at the intersection have fostered very new waves of socioeconomic exchange, necessitating technological governance and, the most challenging element of them all, ethical governance. These autonomous systems involve a network of decision-making agents working in a decentralized environment, entailing very high accountability, transparency, explanability, ethical alignment, and practically everything in between. The escalated societal functioning of these systems necessitates massive social governance policy interventions and an interdisciplinary governance framework. As an overarching look of multispecialty fields, the research aimed to underscore and pinpoint technology like responsible AI, normative governance frameworks, and multi-agent coordination. This paper unravels insofar as the ethical dilemmas in MAS, picking up loose threads from such international governance configurations and proposing a more adaptive regulatory ethic from an awareness of what it means to coordinate intelligent agents. Bringing together thoughts from ethics, law, computer science, and policy studies, the paper essentially sketches out a path for establishing an AI environment that is sustainable, trustworthy, and ethically grounded.
Ethical and Responsible AI in the Age of Adversarial Diffusion Models: Challenges, Risks, and Mitigation Strategies Tejaskumar Pujari; Anshul Goel; Deepak Kejriwal
International Journal Science and Technology Vol. 1 No. 3 (2022): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v1i3.1963

Abstract

The rapid pace of diffusion models in generative AI has completely restructured many fields, particularly with respect to image synthesis, video generation, and creative data enhancement. However, promising developments remain tinged with ethical questions in view of diffusion-based model dual-use. By misusing these models, purveyors could think up deepfaked videos, unpredictable forms of misinformation, instead outing cyber warfare-related attacks over the Internet, therefore aggravating societal vulnerabilities. This paper explores and analyzes these potential ethical risks and adversarial threats of diffusion-based artificial intelligence technologies. We lay out the basis for good AI-the notion of fair, accountable, transparent, and robust (FATR) systems-discussing efforts underway to mitigate these ethical risks through watermarking, model alignment, and regulatory mechanisms. Thus, from the dialogue with ethical viewpoints, also touching upon cybersecurity, military policy, or governance, we present a conceptual model to encapsulate probable ethical considerations in the development and deployment of diffusion models. Human-centered values need to be advanced by a proactive convergent bonding among researchers, decision-makers, and civil society players during the strengthening of a tributary of generative AI's power.

Page 9 of 10 | Total Record : 93