Found 2 Documents

Vision-Based Soft Mobile Robot Inspired by Silkworm Body and Movement Behavior
Abed, Ali A.; Al-Ibadi, Alaa; Abed, Issa A.
Journal of Robotics and Control (JRC) Vol 4, No 3 (2023)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v4i3.16622

Abstract

Designing an inexpensive, low-noise mobile robot that is safe for individuals and equipped with an efficient vision system is a challenge. This paper proposes a soft mobile robot inspired by the body structure and moving behavior of the silkworm. Two identical pneumatic artificial muscles (PAMs), sewn together longitudinally, form the body of the robot. The robot steps forward, left, or right depending on the relative contraction ratio of the two actuators: because the muscles are joined, applying different air pressures to each PAM steers the robot. A camera (eye) integrated into the soft robot enables it to control its motion and direction, so the robot can detect a specific object and track it continuously. The vision system performs automatic tracking using a deep learning model fed by a real-time IR camera. The object detection platform YOLOv3 is used to address the challenge of detecting small, fast-moving objects such as tennis balls, with the model trained on a dataset of tennis-ball images. The system was first simulated in Google Colab and then tested in real time on an embedded device with an onboard GPU, the Jetson Nano developer kit. The presented object-follower robot is cheap, fast at tracking, and environmentally friendly, reaching a 99% accuracy rate during training and testing. Validation results are recorded to demonstrate the effectiveness of this novel silkworm soft robot. The research contribution is the design and implementation of a soft mobile robot with an effective vision system.
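The detection pipeline summarized above relies on YOLOv3's standard post-processing. As an illustration only (not the authors' code), the following is a minimal sketch of the IoU-based non-max suppression step that single-shot detectors such as YOLOv3 apply to their raw box predictions; the function names and the 0.5 threshold are assumptions.

```python
# Illustrative sketch of IoU computation and greedy non-max suppression (NMS),
# the standard post-processing in single-shot detectors such as YOLOv3.
# Not the paper's implementation; names and thresholds are assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For a single tracked tennis ball, the robot would typically take only the top surviving detection per frame and steer toward its box center.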
Improved DeepFake Image Generation Using StyleGAN2-ADA with Real-Time Personal Image Projection
Abed, Ali A.; Talib, Doaa Alaa; Sharkawy, Abdel-Nasser
Buletin Ilmiah Sarjana Teknik Elektro Vol. 7 No. 4 (2025): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/biste.v7i4.14659

Abstract

This paper presents an improved approach to DeepFake image generation using the StyleGAN2-ADA framework. The system is designed to generate high-quality synthetic facial images in real time from a limited dataset of personal photos. By leveraging the Adaptive Discriminator Augmentation (ADA) mechanism, the training process is stabilized without modifying the network architecture, enabling robust image generation even with small-scale datasets. Real-time image capture and projection techniques are integrated to enhance personalization and identity consistency. The experimental results demonstrate that the proposed method achieves high generation quality, significantly outperforming the baseline StyleGAN2 model: the StyleGAN2-ADA system reaches 99.1% identity similarity, a low Fréchet Inception Distance (FID) of 8.4, and under 40 ms of latency per generated frame. This approach provides a practical solution for dataset augmentation and supports ethical applications in animation, digital avatars, and AI-driven simulations.
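The ADA mechanism the abstract refers to works as a feedback loop: an overfitting heuristic is computed from the discriminator's outputs on real images, and the augmentation probability p is nudged up or down to hold that heuristic near a target. The sketch below illustrates this control loop in isolation; the target value, step size, and function names are assumptions, not the paper's code.

```python
# Illustrative sketch of the ADA feedback loop (StyleGAN2-ADA): the
# augmentation probability p rises when the discriminator overfits
# (heuristic r_t above a target) and falls otherwise.
# Constants and names are assumptions, not the paper's implementation.

def overfit_heuristic(d_real_logits):
    """r_t = mean sign of discriminator logits on real images.

    Near +1 means the discriminator confidently separates reals,
    i.e. it is likely overfitting the small training set.
    """
    signs = (1 if x > 0 else -1 if x < 0 else 0 for x in d_real_logits)
    return sum(signs) / len(d_real_logits)

def update_p(p, r_t, target=0.6, step=0.01):
    """Nudge augmentation probability toward the overfitting target."""
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)  # clamp to the valid range [0, 1]
```

Because p adapts automatically, the same training recipe works across dataset sizes, which is what makes the approach practical for the small personal-photo datasets described above.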