Found 2 Documents
Sustainable Supply Chain Operations Through Artificial Intelligence: Pathways to Eco-Efficient Logistics
Sheikh, Abdullah; Rinvee, Tajbiha Mehonaj; Sheikh, Md Shakil
International Journal of Supply Chain Management (IJSCM), Vol 14, No 5 (2025)
Publisher : ExcelingTech

DOI: 10.59160/ijscm.v14i5.6349

Abstract

Global supply chains face unprecedented volatility and environmental pressure. This paper proposes a conceptual framework that leverages artificial intelligence (AI) to achieve eco-efficient logistics, aligning competitive performance with environmental goals. The framework explains how AI-driven demand forecasting, dynamic routing, warehouse optimization, reverse logistics, and supply chain transparency lead directly to quantifiable results such as fuel savings, carbon emissions reduction, and waste reduction. The findings demonstrate that AI-based analytics can optimize critical functions like dynamic routing and predictive forecasting, reducing fuel consumption, lowering carbon emissions, and enhancing operational resilience. The methodology involved extracting common themes and applications from real-world industry examples to validate and ground the proposed conceptual model, ensuring it is both theoretically sound and practically applicable. The paper concludes that AI is not only a technical tool but also a strategic path essential to a competitive and truly sustainable future for global logistics.
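To make the demand-forecasting pillar of the framework concrete, here is a minimal sketch using simple exponential smoothing. This is an illustrative stand-in, not the paper's actual models; the function name and sample data are invented for the example.

```python
# Illustrative only: a one-step-ahead demand forecast via simple
# exponential smoothing, standing in for the richer AI forecasting
# tools surveyed in the paper.

def smooth_forecast(history, alpha=0.3):
    """Forecast the next period's demand from a list of past observations.

    alpha: smoothing factor in (0, 1]; higher values weight recent
    demand more heavily, reacting faster to shifts.
    """
    if not history:
        raise ValueError("need at least one observation")
    level = history[0]
    for demand in history[1:]:
        # Blend the newest observation with the running smoothed level.
        level = alpha * demand + (1 - alpha) * level
    return level

# Example: six weeks of pallet demand with an upward drift;
# the forecast tracks the recent rise without overreacting to noise.
demand = [100, 104, 98, 110, 115, 120]
print(round(smooth_forecast(demand), 1))
```

In a logistics setting, a forecast like this would feed downstream routing and warehouse-slotting decisions; the production systems the paper discusses would replace this with learned models, but the input/output shape is the same.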
Optimizing neural networks: a comparative study of activation functions in deep learning
Mobarki, Ahmed; Sheikh, Abdullah
International Journal of Electrical and Computer Engineering (IJECE) Vol 16, No 2: April 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v16i2.pp945-963

Abstract

Activation functions play a pivotal role in deep learning (DL) models, shaping their learning capabilities, convergence behavior, and generalization performance. However, in many applications activation functions are selected without systematic evaluation, limiting model performance. Inappropriate activation functions may cause gradients to vanish or explode during backpropagation, hindering effective learning. To address this problem, this paper provides a comprehensive empirical investigation of nine activation functions, including traditional functions like rectified linear unit (ReLU), Sigmoid, Tanh, and ELU, and modern nonlinearities like Swish, Mish, GELU, and SMU. In the proposed methodology, these nine activation functions are evaluated within two prominent neural network architectures, namely convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs), across the benchmark datasets CIFAR-10, CIFAR-100, and MNIST. The evaluation criteria include validation accuracy, loss, training time, and gradient stability. Experimental results showed that the GELU activation function improved MLP accuracy to 98.03% and CNN accuracy to 93.82% while maintaining stable gradients and low loss values of 0.088 and 0.221, respectively. These findings provide practical guidelines for selecting activation functions suited to specific task complexities and model depths, contributing to the design of more efficient and accurate DL systems.