A Novel Tourniquet with an Alarm System, Replaceable Components, and The Ability to Adjust Pressure and Detect Body Temperature for Medical Applications Bameri, Mohammad Mahdi; Abolhassani, Moussa; Bameri, Mohammad Hasan; Shafaghi, Shadi; Ghorbani, Fariba; Shafaghi, Masoud
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 5 No 4 (2023): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v5i4.329

Abstract

A tourniquet is a practical medical device used in medical centers to draw blood and to prevent bleeding. There is a strong need for a durable tourniquet that offers leveled pressure adjustment, indicates elapsed time, can be used both for blood collection and for controlling intense bleeding, and has replaceable components in case of damage; developing a tourniquet with these features is therefore highly beneficial. The designed tourniquet consists of four main parts (end hook, strap, main clip, and pin), with the other three parts mounted on the strap. It has a unique strap design, a sturdy lock and fastener, a body-temperature sensor, a pressure adjustment section with four levels, a timer with an alarm, buttons (pins) to open the strap, a linen pad, and a special end hook with an end clamp. The proposed time-warning tourniquet with replaceable components can adjust pressure, detect body temperature, and create local pressure, and can therefore improve the performance of medical staff during blood sampling and bleeding control procedures.
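The abstract describes the device's behaviour (four pressure levels, a timer with an alarm, and a body-temperature sensor) but no firmware details. The following is a minimal, simulation-only Python sketch of that control logic; the pressure values, the alarm interval, the temperature threshold, and the sensor stub are illustrative assumptions, not details from the paper.

```python
# Simulation-only sketch of the timer/alarm and four-level pressure logic.
# All numeric values and the sensor stub are assumptions for illustration.
import time

PRESSURE_LEVELS_MMHG = (40, 80, 120, 160)  # assumed targets for the four levels
ALARM_AFTER_S = 60                         # assumed elapsed-time warning
TEMPERATURE_WARNING_C = 38.0               # assumed skin-temperature threshold


def read_body_temperature() -> float:
    """Stand-in for the temperature sensor mounted on the strap (simulated)."""
    return 36.8


def run(level: int) -> None:
    target = PRESSURE_LEVELS_MMHG[level - 1]
    print(f"Pressure level {level}: target {target} mmHg")
    start = time.monotonic()
    while True:
        if read_body_temperature() >= TEMPERATURE_WARNING_C:
            print("WARNING: elevated body temperature detected")
        if time.monotonic() - start >= ALARM_AFTER_S:
            print("ALARM: time limit reached - release or reassess the strap")
            break
        time.sleep(1)


if __name__ == "__main__":
    run(level=2)
```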
AN EXAMINATION OF THE IMPLEMENTATION OF INTERNET OF THINGS IN HEALTHCARE UTILIZING SMARTWATCHES Boshrabadi, Fatemeh Sadeghi; Abolhassani, Moussa; Shafaghi, Shadi; Ghorbani, Fariba; Shafaghi, Masoud
Indonesian Journal of Health Administration (Jurnal Administrasi Kesehatan Indonesia) Vol. 12 No. 2 (2024): December
Publisher : Universitas Airlangga

DOI: 10.20473/jaki.v12i2.2024.292-304

Abstract

Background: Smartwatches can use sensors to collect data and send it to medical teams and family members through an Internet of Things (IoT) platform. The data are first analysed on the platform, and the final results are used by the medical team. Aims: This paper reviews and categorises studies on smartwatch-based Internet of Things applications in healthcare. Methods: The covered papers were published over the 13 years from 2010 to 2022. The search yielded 227 papers, of which 43 were reviewed after screening. The search keywords were "wearables, internet of things, smartwatches, smart bracelet, healthcare, and disease", and the search covered databases including PubMed, ScienceDirect, and IEEE. Results: Smartwatches are used in healthcare areas including palliative care, speech therapy, diagnosis, disease prevention, rehabilitation, and health improvement. Conclusion: Smartwatches are not free of drawbacks and have not received the attention they deserve in the healthcare field; given their potential, however, they can be useful in the health sector. Keywords: Disease, IoT, Smartwatch, Wearable
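As context for the data flow the abstract describes (smartwatch sensors pushing readings to an IoT platform that the medical team consumes), the sketch below publishes simulated heart-rate readings over MQTT using paho-mqtt. The broker address, topic, payload schema, and the simulated sensor are illustrative assumptions; the reviewed studies use a variety of devices and platforms.

```python
# Sketch of a smartwatch-style sensor publishing vitals to an IoT broker.
# Broker, topic, and payload schema are hypothetical.
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "broker.example.org"        # hypothetical MQTT broker
TOPIC = "clinic/patient-42/vitals"   # hypothetical per-patient topic


def read_heart_rate() -> int:
    """Stand-in for the smartwatch heart-rate sensor (simulated)."""
    return random.randint(60, 100)


def main() -> None:
    # paho-mqtt >= 2.0 requires the callback API version; drop it on 1.x.
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_start()
    try:
        while True:
            payload = {"heart_rate_bpm": read_heart_rate(), "ts": time.time()}
            # QoS 1 so readings are retried after brief connection drops.
            client.publish(TOPIC, json.dumps(payload), qos=1)
            time.sleep(5)
    finally:
        client.loop_stop()
        client.disconnect()


if __name__ == "__main__":
    main()
```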
UAV With the Ability to Control with Sign Language and Hand by Image Processing Hojaji, Hediyeh; Delisnav, Alireza; Ghafouri Moghaddam, Mohammad Hossein; Ghorbani, Fariba; Shafaghi, Shadi; Shafaghi, Masoud
JITCE (Journal of Information Technology and Computer Engineering) Vol. 8 No. 2 (2024)
Publisher : Universitas Andalas

DOI: 10.25077/jitce.8.2.49-57.2024

Abstract

Automatic recognition of sign language from hand gesture images is crucial for enhancing human-robot interaction, especially in critical scenarios such as rescue operations. In this study, we employed a DJI Tello drone equipped with advanced machine vision capabilities to recognize and classify sign language gestures accurately. We developed an experimental setup in which the drone, integrated with state-of-the-art radio control systems and machine vision techniques, navigated through simulated disaster environments to interact with human subjects using sign language. Data collection involved capturing a range of hand gestures under varied environmental conditions to train and validate our recognition algorithms, implemented with YOLOv5 and Python libraries including OpenCV. This setup enabled precise hand and body detection, allowing the drone to navigate and interact effectively. We assessed the system's performance by its ability to accurately recognize gestures against both controlled and complex, cluttered backgrounds. Additionally, we developed robust, debris- and damage-resistant shielding to safeguard the drone's integrity. Our drone fleet also established a resilient communication network via Wi-Fi, ensuring uninterrupted data transmission even with connectivity disruptions. These findings underscore the potential of AI-driven drones to engage in natural conversational interactions with humans, providing vital information to assist decision-making during emergencies. In conclusion, our approach promises to substantially improve the efficacy of rescue operations by facilitating rapid and accurate communication of critical information to rescue teams.
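The abstract names the main building blocks (a DJI Tello, YOLOv5, Python, and OpenCV). The sketch below wires them together in one plausible way: frames from the drone feed a YOLOv5 model, and recognised gestures are mapped to flight commands. The djitellopy library, the custom weight file gestures.pt, the gesture class names, and the command mapping are assumptions for illustration, not the authors' implementation.

```python
# Sketch: Tello video frames -> YOLOv5 gesture detection -> flight commands.
# djitellopy, "gestures.pt", class names, and the mapping are hypothetical.
import cv2
import torch
from djitellopy import Tello  # pip install djitellopy

# Hypothetical YOLOv5 weights fine-tuned on hand-gesture classes.
model = torch.hub.load("ultralytics/yolov5", "custom", path="gestures.pt")

GESTURE_TO_COMMAND = {                 # assumed gesture -> action mapping
    "palm_up": lambda t: t.move_up(30),
    "palm_down": lambda t: t.move_down(30),
}
LAND_GESTURE = "fist"                  # assumed "stop and land" gesture

tello = Tello()
tello.connect()
tello.streamon()
tello.takeoff()

try:
    while True:
        frame = tello.get_frame_read().frame
        if frame is None:
            continue
        # YOLOv5's AutoShape expects RGB; adjust if your djitellopy version
        # already delivers RGB frames.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = model(rgb).pandas().xyxy[0]
        names = set(detections[detections["confidence"] > 0.6]["name"])
        if LAND_GESTURE in names:
            break
        for name in names:
            action = GESTURE_TO_COMMAND.get(name)
            if action:
                action(tello)   # a real system would debounce repeated gestures
finally:
    tello.land()
    tello.streamoff()
```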