Articles
Search results for issue "Vol 25, No 2: February 2022": 64 Documents
Topology network effects for DSSH circuit on vibration energy harvesting using piezoelectric materials
Youssef El Hmamsy;
Chouaib Ennawaoui;
Abdelowahed Hajjaji
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp721-731
Energy extraction takes place using several different technologies, depending on the type of energy and how it is used. The objective of this paper is to study the influence of topology in a smart network based on piezoelectric materials using double synchronized switch harvesting (DSSH). This work presents network topologies for the DSSH circuit: standard DSSH, independent DSSH, parallel DSSH, mono DSSH, and series DSSH. Simulations of a structure with embedded piezoelectric harvesters are used to compare the different DSSH circuit topologies, in order to determine how DSSH circuits should be connected together and how this circuit strategy should be implemented to maximize the total output power. The DSSH network topologies extract more power at the resonant frequency than the standard network topology. The simulation results show that, with the same input parameters, the parallel DSSH topology produces 120% more energy than the series DSSH topology. In addition, mono DSSH harvests 650% more energy than series DSSH and exceeds independent DSSH by 240%.
Low power architecture of logic gates using adiabatic techniques
Minakshi Sanadhya;
Devendra Kumar Sharma
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp805-813
The growing significance of portable systems, together with the need to limit power consumption in very high-density ultra-large-scale-integration chips, has recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of basic gates, one can improve the performance of the whole system. In this paper, proposed low-power architectures of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches, and their results are analyzed for power dissipation, delay, power-delay-product, and rise time, then compared with other adiabatic techniques and the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that designs using the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with percentage improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
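The power-delay-product (PDP) figure of merit used in the comparison above is simply average power multiplied by propagation delay. A minimal sketch with hypothetical numbers, not the paper's measured values:

```python
def power_delay_product(avg_power_w, delay_s):
    # PDP = average power x propagation delay (energy per switching event)
    return avg_power_w * delay_s

def improvement_pct(baseline, improved):
    # Percentage reduction of the improved design relative to the baseline
    return 100.0 * (baseline - improved) / baseline

# Hypothetical figures for two NOR-gate designs (illustrative only)
pdp_cmos = power_delay_product(2.0e-6, 1.0e-9)   # conventional CMOS
pdp_pfal = power_delay_product(0.7e-6, 1.2e-9)   # adiabatic design
print(improvement_pct(pdp_cmos, pdp_pfal))
```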
An efficient and robust parallel scheduler for bioinformatics applications in a public cloud: A bigdata approach
Leena Ammanna;
Jagadeeshgowda Jagadeeshgowda;
Jagadeesh Pujari
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp1078-1086
In bioinformatics, genomic sequence alignment is a simple method for handling and analysing data, and it is one of the most important applications in determining the structure and function of protein sequences and nucleic acids. The basic local alignment search tool (BLAST) algorithm, one of the most frequently used local sequence alignment algorithms, is covered in detail here. Currently, the NCBI's standalone BLAST algorithm is unable to handle biological data at the terabyte scale. To address this problem, a variety of schedulers have been proposed. Existing sequencing approaches are based on the Hadoop MapReduce (MR) framework, which supports a diverse set of applications but employs a serial execution strategy that takes a long time and consumes a lot of computing resources. The authors improve BLAST by building on the bulk synchronous parallel MapReduce model. To address the issue with Hadoop's MapReduce framework, a customised MapReduce framework is developed on the Azure cloud platform. The experimental findings indicate that the proposed bulk synchronous parallel MapReduce-basic local alignment search tool (BSPMR-BLAST) algorithm matches bioinformatics genomic sequences more quickly than the existing Hadoop-BLAST method, and that the proposed customised scheduler is highly stable and scalable.
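The seeding stage of BLAST maps naturally onto a map/reduce decomposition, which is what makes the parallelisation above possible. As a rough illustration only (not the paper's BSPMR-BLAST implementation), the sketch below indexes a database sequence's k-mers in a map phase, groups them in a reduce phase, and looks up query k-mers as seed hits; the sequences and the value of k are hypothetical.

```python
from collections import defaultdict

def map_phase(sequence, k=3):
    # Map: emit (k-mer, position) pairs from the database sequence
    return [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]

def reduce_phase(pairs):
    # Reduce: group positions by k-mer, building a seed index
    index = defaultdict(list)
    for kmer, pos in pairs:
        index[kmer].append(pos)
    return index

def seed_hits(query, index, k=3):
    # Look up each query k-mer in the index (the "seeding" step of BLAST)
    hits = []
    for i in range(len(query) - k + 1):
        for pos in index.get(query[i:i + k], []):
            hits.append((i, pos))
    return hits

index = reduce_phase(map_phase("ACGTACGTGA"))
print(seed_hits("TACG", index))  # -> [(0, 3), (1, 0), (1, 4)]
```

In a real deployment the map and reduce phases would run on separate workers, with the index partitioned across nodes; here they run in one process for clarity.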
Classify arrhythmia by using 2D spectral images and deep neural network
Tran Anh Vu;
Hoang Quang Huy;
Pham Duy Khanh;
Nguyen Thi Minh Huyen;
Trinh Thi Thu Uyen;
Pham Thi Viet Huong
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp931-940
Electrocardiogram (ECG) is the most common method for monitoring the working of the heart. The ECG signal is the basis for determining normal or abnormal rhythm, thereby helping to accurately diagnose cardiovascular diseases. Therefore, an automatic algorithm to detect and diagnose abnormal heart rhythms is essential. There are many methods of classifying arrhythmias using machine learning algorithms, such as k-nearest neighbors (KNN) and support vector machines (SVM), based on features extracted from the recorded ECG signal. Deep learning algorithms, meanwhile, are evolving and are highly effective in image analysis and processing. In this research, a dense neural network model is proposed to classify normal and abnormal beats. The input ECG signal, a time series, is converted into a 2-D spectral image by applying a wavelet transform. Our research is evaluated on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. The accuracy of the classification algorithm we employ is 99.8%, demonstrating the model's validity when compared with other reports' findings. This is the foundation for demonstrating that our algorithm can be utilized as an efficient model for categorizing arrhythmias using ECG signals.
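The conversion from a 1-D ECG time series to a 2-D time-scale image can be illustrated with a continuous wavelet transform. The sketch below is a minimal pure-Python version using a Ricker ("Mexican hat") wavelet; it is not the authors' exact transform, and the signal, wavelet choice, and scale widths are assumptions.

```python
import math

def ricker(points, width):
    # Ricker wavelet sampled at `points` positions around its center
    out = []
    for i in range(points):
        t = i - (points - 1) / 2.0
        norm = 2.0 / (math.sqrt(3 * width) * math.pi ** 0.25)
        out.append(norm * (1 - (t / width) ** 2)
                   * math.exp(-(t ** 2) / (2 * width ** 2)))
    return out

def cwt_image(signal, widths, points=16):
    # One row per wavelet width: convolve the signal with the scaled wavelet.
    # Stacking the rows yields the 2-D time-scale image fed to the network.
    image = []
    for w in widths:
        kernel = ricker(points, w)
        half = points // 2
        row = []
        for n in range(len(signal)):
            acc = 0.0
            for k, c in enumerate(kernel):
                j = n - (k - half)
                if 0 <= j < len(signal):
                    acc += signal[j] * c
            row.append(acc)
        image.append(row)
    return image

image = cwt_image([0.0, 1.0, 0.0, -1.0] * 8, widths=[1, 2, 4])
print(len(image), len(image[0]))  # 3 rows (scales) x 32 columns (time)
```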
A fast and non-trainable facial recognition system for schools
Kazeem Oyebode;
Kingsley Chiwuike Ukaoha
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp989-994
Deep learning models have been at the forefront of facial recognition because they deliver improved classification accuracy over traditional models. Nevertheless, deep learning models require an extensive dataset for training. To cut down significantly on training time and dataset volume, pretrained models have been used, although they are still required to undergo the usual training process for custom facial recognition tasks. This research focuses on an improved facial recognition system that removes the training and retraining requirements. The system uses an existing deep learning feature extraction model. First, a user stands before a camera-enabled system. The user then supplies a unique identification number to fetch a corresponding face image from the database. This process generates two face feature vectors: one from the camera and one from the image retrieved from the database. The cosine distance function determines the similarity of these vectors. When the cosine distance falls below a set threshold, the face is recognized and access is granted. If the cosine distance of the two vectors is above this threshold, access is denied. The proposed model performs satisfactorily on publicly available datasets.
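The match decision described above reduces to a cosine-distance comparison against a threshold. A minimal sketch, with hypothetical feature vectors and a hypothetical threshold value (the paper does not state its threshold):

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two face feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def grant_access(live_vec, stored_vec, threshold=0.4):
    # Below the threshold, the live face matches the enrolled face
    return cosine_distance(live_vec, stored_vec) < threshold

print(grant_access([1.0, 0.0, 1.0], [0.9, 0.1, 1.1]))  # similar vectors -> True
```

Real feature vectors from a deep extractor would have hundreds of dimensions; the three-element vectors here are for illustration only.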
Soft computing techniques for early diabetes prediction
Sabah Anwer Abdulkareem;
Hussein Y. Radhi;
Yousra Ahmed Fadil;
Hussain Mahdi
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp1167-1176
Diabetes mellitus is a chronic, life-threatening, and complicated condition. Around 1.5 million deaths due to diabetes have been documented, according to a World Health Organization (WHO) estimate from 2019. In the world of medicine, predicting diabetes risk is a difficult and time-consuming task. Many past studies have investigated and clarified diabetes symptoms and variables; to resolve the remaining issues, however, more critical clinical criteria must be considered. A comparative analysis of three soft computing strategies for diabetes prediction has been carried out in this work. The computational intelligence methods used in this study are the fuzzy analytical hierarchy process (FAHP), support vector machines (SVM), and artificial neural networks (ANNs). According to an experimental analysis and assessment conducted on 520 participants using a publicly available dataset, these techniques show promising performance in predicting diabetes reliably and effectively across several classification evaluation metrics.
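The classification evaluation metrics mentioned above can be computed from confusion-matrix counts. A minimal sketch for a binary diabetic/non-diabetic label; the sample predictions are hypothetical, not the paper's results:

```python
def classification_metrics(y_true, y_pred):
    # Confusion-matrix counts for a binary diabetes label (1 = diabetic)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

metrics = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(metrics["accuracy"])  # 0.6
```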
Social cyber-criminal, towards automatic real time recognition of malicious posts on Twitter
Yasser Ibrahim;
Mohammed Abdel Razek;
Nasser El-Sherbeny
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp1199-1207
Easy access to the internet throughout the world has completely reshaped the use of social media platforms such as Facebook, Twitter, and LinkedIn, which have become part of our lives. Accordingly, cybercrime has become a critical problem, especially in developing countries. The ability to disseminate information with little risk of being discovered has led to an increase in cybercrime. Meanwhile, the huge amount of data continuously produced on Twitter makes discovering cyber-criminals a tough assignment. This research contributes by building comparable vectors for the positive and negative classes and then classifying incoming tweets to predict their class (positive or negative). The proposed routine starts by constructing super comparable vectors (SCV) for the positive and negative classes, then constructs a vector for the incoming tweet, calculates its similarity with both SCVs, and compares the calculated similarities to predict the class of the incoming tweet. We used common techniques for calculating term weights in tweets to construct the SCVs. To verify the successful operation of the proposed system, we performed a pilot analysis on a real example. The approach achieves precision, recall, and F1 values of 87%, 59%, and 69.99%, respectively.
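The SCV pipeline described above, building one aggregate term vector per class, vectorising the incoming tweet, and assigning the class with the higher similarity, can be sketched as follows. The term weighting here is raw term frequency with cosine similarity, a simplification of the paper's weighting schemes, and the example tweets are hypothetical.

```python
from collections import Counter
import math

def term_vector(texts):
    # Aggregate term frequencies over a class's tweets into one vector
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(u, v):
    # Cosine similarity between two sparse term-frequency vectors
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(tweet, scv_pos, scv_neg):
    # Compare the incoming tweet against both super comparable vectors
    vec = term_vector([tweet])
    return "malicious" if cosine(vec, scv_neg) > cosine(vec, scv_pos) else "benign"

scv_neg = term_vector(["send your password now", "click this scam link now"])
scv_pos = term_vector(["lovely weather today", "great match last night"])
print(classify("click the scam link", scv_pos, scv_neg))  # malicious
```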
Particle swarm optimization based interval type 2 fuzzy logic control for motor rotor position control of artificial heart pump
Raghda Saad Raheem;
Mohammed Y. Hassan;
Saleem Khalefa Kdahim
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp814-824
An artificial heart pump (AHP) is employed to replace a damaged native heart and perform its functions. Bearingless brushless DC (BBLDC) motors are used in the implementation of the AHP. The BBLDC motor is a highly nonlinear system with uncertainties, and its mathematical model is hard to identify accurately. In this paper, a BBLDC motor is simulated. A proportional plus integral (PI) controller is proposed to control the rotor suspension current. Furthermore, a type 2 proportional plus integral plus derivative-like fuzzy logic controller (T2 PID-like FLC) is proposed to control the motor rotor (x, y) positions. The particle swarm optimization (PSO) technique is employed to find the best controller scaling factors and to optimize the distribution of the controller's input membership functions within their universe of discourse. Simulation results showed enhanced levitation of the rotor to the required position when using the T2 PID-like FLC compared with a type 1 PID-like fuzzy logic controller. The enhancement, measured using the integral of absolute error (IAE) as a cost function, reaches 64.18% and 81.81% in the x and y axes, respectively. The performance of the motor is enhanced by 20%, which decreases rotor oscillation and increases the ability to withstand system disturbances and nonlinearity.
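The IAE cost function used above to quantify the enhancement can be approximated from sampled data. A minimal sketch with a hypothetical rotor-position response and sampling period, not the paper's simulation data:

```python
def integral_absolute_error(reference, measured, dt):
    # IAE approximated as the sum of |e(t)| over the samples, scaled by dt
    return sum(abs(r - m) for r, m in zip(reference, measured)) * dt

# Rotor x-position tracking a 1.0 setpoint (hypothetical samples)
setpoint = [1.0] * 5
response = [0.0, 0.6, 0.9, 1.0, 1.0]
print(integral_absolute_error(setpoint, response, dt=0.01))
```

A lower IAE indicates the rotor reaches and holds the commanded position with less accumulated error, which is how the T2 PID-like FLC is compared against the type 1 controller.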
Efficient wireless power transmission to remote the sensor in restenosis coronary artery
Mokhalad Alghrairi;
Nasri Sulaiman;
Wan Zuha Wan Hasan;
Haslina Jaafar;
Saad Mutashar
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp771-779
In this study, the researchers propose an alternative technique for designing an asymmetric four-coil resonance coupling module based on a series-to-parallel topology at the 27 MHz industrial, scientific, and medical (ISM) band, chosen to avoid tissue damage, for the continuous monitoring of in-stent restenosis in the coronary artery. The design consists of two components: an external part comprising three planar coils placed outside the body, and an internal helical coil (stent) implanted into the coronary artery in human tissue. The technique considers the output power and transfer efficiency of the overall system, as well as coil geometry such as the number of turns per coil and the coil size. The results indicate that the design achieves 82% efficiency in air when the transmission distance is maintained at 20 mm, which allows the wireless power supply system to monitor the pressure within the coronary artery when the implanted load resistance is 400 Ω.
Different analytical frameworks and bigdata model for Internet of Things
Ayushi Chahal;
Preeti Gulia;
Nasib Singh Gill
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 2: February 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijeecs.v25.i2.pp1159-1166
Sensor devices used in an internet of things (IoT) enabled environment produce a large amount of data, and this data plays a major role in the bigdata landscape. In recent years, the correlation between bigdata and IoT and their joint implementation have received growing attention, and predictive analytics is now attracting many researchers to big IoT data analytics. This paper summarizes different IoT analytical platforms that offer in-built features for machine learning, MATLAB integration, and data security. It emphasizes the different machine learning algorithms that play an important role in big IoT data analytics. Besides surveying analytical frameworks, this paper presents a proposed model for bigdata in the IoT domain and elaborates different forms of data analytical methods. The proposed model comprises four phases: data storing, data cleaning, data analytics, and data visualization. These phases cover the basic characteristics of the bigdata V's model, and the most important phase is data analytics, or big IoT analytics. The model is implemented on an IoT dataset, and results for different machine learning techniques are presented in graphical and tabular form. This study enhances researchers' knowledge of various IoT analytical platforms and their usability in respective problem domains.
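The four phases of the proposed model can be illustrated as a toy pipeline over a hypothetical temperature-sensor stream; the dataset, field names, and validity range are assumptions, and the visualization phase is omitted here.

```python
def clean(records):
    # Data cleaning: drop readings with missing or out-of-range values
    return [r for r in records
            if r.get("temp") is not None and -40.0 <= r["temp"] <= 85.0]

def analyze(records):
    # Data analytics: a simple aggregate over the cleaned sensor stream
    temps = [r["temp"] for r in records]
    return {"count": len(temps), "mean_temp": sum(temps) / len(temps)}

raw = [{"temp": 21.5}, {"temp": None}, {"temp": 300.0}, {"temp": 22.5}]
stored = list(raw)  # data storing (stand-in for a database write)
summary = analyze(clean(stored))
print(summary)  # {'count': 2, 'mean_temp': 22.0}
```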