Articles

Found 2 Documents

Load Carrying Assistance Device Pogo Suit
Sangram Redkar
IAES International Journal of Robotics and Automation (IJRA) Vol 5, No 3: September 2016
Publisher : Institute of Advanced Engineering and Science

Full PDF (1096.557 KB) | DOI: 10.11591/ijra.v5i3.pp161-175

Abstract

Wearable robots, including exoskeletons, powered prosthetics, and powered orthotics, must add energy to the person at an appropriate time to enhance, augment, or supplement human performance. Adding energy out of sync with the user can dramatically hurt performance. Many human tasks such as walking, running, and hopping are repeating or cyclic tasks, and a robot can add energy in sync with the repeating pattern to provide assistance. A method has been developed to add energy to the repeating limit cycle at the appropriate time based on a phase oscillator. The phase oscillator eliminates time from the forcing function, which is instead based purely on the motion of the user. This approach has been simulated, implemented, and tested in a robotic backpack that facilitates carrying heavy loads. The device oscillates the load of the backpack, based on the motion of the user, in order to add energy at the correct time and thus reduce the amount of energy required for walking with a heavy load. Models were developed in Working Model 2-D, a dynamics simulation package, in conjunction with MATLAB to verify the theory and test control methods. The control system developed is robust and has operated successfully on different users, each with a distinct gait. The results of experimental testing validated the corresponding models.
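
Below is a minimal Python sketch of the phase-oscillator idea described in the abstract: the phase is estimated purely from the load's position and velocity, and the assistive force is a function of that phase rather than of time. The mass, stiffness, gains, and the exact form of the phase estimator are illustrative assumptions; the paper's own models were built in Working Model 2-D and MATLAB and may differ in detail.

import numpy as np

def phase_from_state(x, x_dot, omega):
    """Estimate the limit-cycle phase from position and velocity.

    Replacing explicit time with a phase angle computed from the user's
    motion keeps the forcing synchronized with the gait cycle even when
    its period drifts. (Assumed form of the estimator.)
    """
    return np.arctan2(-x_dot / omega, x)

def assistive_force(x, x_dot, omega, gain, phase_lead):
    """Force command in sync with the estimated phase.

    gain       -- assistance amplitude (assumed parameter)
    phase_lead -- offset chosen so energy is added while the load moves
                  with, not against, the actuator
    """
    phi = phase_from_state(x, x_dot, omega)
    return gain * np.cos(phi + phase_lead)

# Toy mass-spring "bouncing load" driven by the phase-based forcing
# (explicit Euler integration, hypothetical parameters).
m, k, c = 10.0, 4000.0, 20.0      # load mass (kg), stiffness (N/m), damping (N s/m)
omega = np.sqrt(k / m)            # natural frequency used by the phase estimator
x, x_dot = 0.02, 0.0              # initial displacement (m) and velocity (m/s)
dt = 0.001
for _ in range(5000):
    f = assistive_force(x, x_dot, omega, gain=50.0, phase_lead=np.pi / 2)
    x_ddot = (-k * x - c * x_dot + f) / m
    x_dot += x_ddot * dt
    x += x_dot * dt

With the phase lead of pi/2 the commanded force tracks the direction of the load's velocity, so the actuator does positive work on the load regardless of how the user's gait period drifts, which is the synchronization property the abstract attributes to the phase oscillator.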
Using Deep Learning for Human Computer Interface via Electroencephalography
Sangram Redkar
IAES International Journal of Robotics and Automation (IJRA) Vol 4, No 4: December 2015
Publisher : Institute of Advanced Engineering and Science

Full PDF (1129.416 KB) | DOI: 10.11591/ijra.v4i4.pp292-310

Abstract

In this paper, several techniques for EEG signal pre-processing, feature extraction, and signal classification are discussed, implemented, validated, and verified, and efficient supervised and unsupervised machine learning models for EEG motor imagery classification are identified. Brain Computer Interfaces are becoming the next-generation controllers not only in medical devices for disabled individuals but also in the gaming and entertainment industries. In order to build an effective Brain Computer Interface, it is important to have robust signal processing and machine learning modules that operate on the EEG signals and estimate the current thought or intent of the user. Motor imagery (imagined hand and leg movement) signals are acquired using the Emotiv EEG headset. The signals are extracted and supplied to the machine learning (ML) stage, wherein several ML techniques are applied and validated. The performances of the various ML techniques are compared and some important observations are reported. Further, deep learning techniques such as autoencoding are used to perform unsupervised feature learning, and the reliability of the learned features is analyzed by performing classification with the same ML techniques. It is shown that hand-engineered ‘ad-hoc’ feature extraction techniques are less reliable than automated (‘deep learning’) feature learning techniques. The findings in this research can be used by the BCI research community to build motor imagery based BCI applications such as gaming, robot control, and autonomous vehicles.
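
As a rough illustration of the pipeline the abstract describes, the sketch below band-pass filters synthetic "EEG" trials, computes hand-engineered log band-power features, learns an alternative feature set with a small autoencoder, and compares the two feature sets with the same classifier. Everything here (the data shapes, the scikit-learn autoencoder stand-in, band limits, and hyperparameters) is an assumption for illustration, not the paper's actual implementation, which used Emotiv recordings and its own models.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for Emotiv recordings: n_trials x n_channels x n_samples
# (hypothetical shapes; real motor-imagery data would replace this).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, fs = 200, 14, 256, 128
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)   # 0 = imagined left hand, 1 = imagined right hand

# Pre-processing: band-pass 8-30 Hz (mu and beta bands commonly used
# for motor imagery).
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
X_filt = filtfilt(b, a, X_raw, axis=-1)

# Hand-engineered features: log band power per channel.
X_power = np.log(np.var(X_filt, axis=-1))

# Unsupervised feature learning: a single-hidden-layer autoencoder trained
# to reconstruct the band-power vectors; its hidden activations become the
# learned features.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(X_power, X_power)
hidden = np.maximum(0.0, X_power @ ae.coefs_[0] + ae.intercepts_[0])

# Compare the two feature sets with the same supervised classifier.
for name, feats in [("hand-engineered", X_power), ("autoencoder", hidden)]:
    X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3,
                                              random_state=0, stratify=y)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

On real motor-imagery recordings the comparison reported in the paper favoured the learned features; with the random data above both classifiers should hover around chance, which is the expected sanity check for this toy version.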