Basappa Vijaya, Ajay Prakash
Unknown Affiliation

Published: 2 Documents
Articles


HybridTransferNet: soil image classification through comprehensive evaluation for crop suggestion
Raju, Chetan; Davanageri Virupakshappa, Ashoka; Basappa Vijaya, Ajay Prakash
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp1702-1710

Abstract

Soil image classification is a critical task in agricultural and environmental applications. In recent years, the integration of deep learning has sparked significant interest in image-based soil classification. Transfer learning, a well-established technique in image classification, involves fine-tuning a pre-trained model on a specific dataset. However, conventional transfer learning methods typically fine-tune only the final layer of the pre-trained model, which may not suffice to attain high performance on a new task. This paper proposes HybridTransferNet, a hybrid transfer learning approach designed for image-based soil classification. HybridTransferNet goes beyond the conventional approach by fine-tuning not only the final layer but also a select number of earlier layers in a pre-trained ResNet50 model. This extension yields substantially better classification performance than standard transfer learning. Our evaluation of HybridTransferNet on a soil classification dataset reports standard performance indicators: accuracy, precision, recall, and F1 score. Our experimental findings highlight HybridTransferNet's advantages over conventional transfer learning strategies, establishing it as a state-of-the-art solution for soil classification.
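The hybrid fine-tuning idea in the abstract (unfreezing the final layer plus a select number of the preceding backbone blocks, rather than the final layer alone) can be sketched as follows. The block names mirror ResNet50's top-level modules; the `select_trainable` helper and its parameter are hypothetical illustrations under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of hybrid transfer learning: instead of training only
# the final classifier of a pre-trained model, a chosen number of the later
# backbone blocks are unfrozen as well. Block names mimic ResNet50's
# top-level modules; the function below is a hypothetical illustration.

RESNET50_BLOCKS = ["conv1", "layer1", "layer2", "layer3", "layer4", "fc"]

def select_trainable(blocks, n_extra_unfrozen):
    """Map each block name to True (trainable) or False (frozen).

    The final classifier ("fc") is always trainable; n_extra_unfrozen of
    the blocks immediately preceding it are unfrozen as well.
    """
    trainable = {b: False for b in blocks}
    trainable[blocks[-1]] = True  # conventional transfer learning: head only
    for b in blocks[-1 - n_extra_unfrozen:-1]:  # extend fine-tuning backwards
        trainable[b] = True
    return trainable

# Conventional transfer learning: only the classifier head is trained.
print(select_trainable(RESNET50_BLOCKS, 0))
# Hybrid variant: the last two backbone blocks are also fine-tuned.
print(select_trainable(RESNET50_BLOCKS, 2))
```

In a real framework the same selection would be applied by toggling each parameter group's gradient flag (e.g. `requires_grad` in PyTorch), usually paired with a smaller learning rate for the unfrozen backbone blocks than for the fresh classifier head.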
Optimized multi-layer self-attention network for feature-level data fusion in emotion recognition
Umesh Patil, Basamma; Davanageri Virupakshappa, Ashoka; Basappa Vijaya, Ajay Prakash
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i4.pp4435-4444

Abstract

Understanding human emotions across diverse data sources presents challenges in applications including healthcare, human-machine interaction, security, marketing, and gaming. Prior research has explored fusion techniques to address multimodal data heterogeneity, yet often overlooks discriminative unimodal information and the potential complementarity among fusion strategies. Recognizing emotions from video and audio data poses challenges such as interpreting non-verbal cues, variation in expression, contextual ambiguity, and the need for nuanced feature extraction that accurately captures subtle emotional cues. Tackling these issues requires efficient emotion representation and multimodal fusion techniques, as these tasks are central to multimodal recognition research. This study introduces a novel approach, the optimized multi-layer self-attention network for emotion recognition (OMSN-ER), focusing on feature-level data fusion. OMSN-ER assesses emotional states precisely by merging facial and voice data, using a multi-layer progressive dense residual fusion network and a self-attention mountain gazelle convolutional neural network. Implemented in Python on the RAVDESS dataset, the method achieves an accuracy of 0.9908, surpassing benchmarks and demonstrating efficacy in multimodal emotion recognition. This research represents a promising advance in the intricate field of emotion recognition.
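The feature-level fusion described above can be illustrated with a minimal sketch: facial and voice feature vectors are treated as a two-token sequence, re-weighted by scaled dot-product self-attention, and pooled into one fused representation. All names, dimensions, and values here are hypothetical assumptions for illustration; the paper's actual network (multi-layer progressive dense residual fusion with a mountain gazelle optimized CNN) is considerably more elaborate.

```python
import math

# Minimal sketch of feature-level fusion via self-attention: each modality
# contributes one feature vector ("token"); scaled dot-product self-attention
# (with identity Q = K = V projections) re-weights the tokens against each
# other, and the attended tokens are mean-pooled into one fused vector.
# Feature values and dimensions below are hypothetical.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention_fuse(tokens):
    """Attend each modality token over all tokens, then mean-pool."""
    d = len(tokens[0])
    attended = []
    for q in tokens:
        weights = softmax([dot(q, k) / math.sqrt(d) for k in tokens])
        attended.append([sum(w * v[i] for w, v in zip(weights, tokens))
                         for i in range(d)])
    # feature-level fusion: average the attended modality representations
    return [sum(tok[i] for tok in attended) / len(attended) for i in range(d)]

face_feat = [0.2, 0.9, 0.1, 0.4]   # hypothetical facial features
voice_feat = [0.7, 0.3, 0.5, 0.1]  # hypothetical audio features
fused = self_attention_fuse([face_feat, voice_feat])
print(fused)  # one 4-dimensional fused representation
```

Because the attention weights are a softmax, each attended vector is a convex combination of the modality tokens, so every component of the fused vector stays within the per-dimension range spanned by the inputs; a downstream classifier would then operate on this single fused representation.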