Borde, Santosh
Unknown Affiliation

Published: 2 Documents
Articles

Found 2 Documents

Transfer learning based leaf disease detection model using convolution neural network
Raut, Rahul; Bidve, Vijaykumar; Sarasu, Pakiriswamy; Kakade, Kiran Shrimant; Shaikh, Ashfaq; Kediya, Shailesh; Borde, Santosh; Pakle, Ganesh
Indonesian Journal of Electrical Engineering and Computer Science Vol 36, No 3: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v36.i3.pp1857-1865

Abstract

Plants are attacked by various micro-organisms, bacterial diseases, and pests. Symptoms are normally identified by inspecting the leaves, stem, or fruit. Diseases that appear on vegetation generally start from the leaves and cause major damage if not managed in the early stages. To prevent this damage and control the spread of disease, this work implements a software system. The research uses a deep neural network (DNN) for image classification, learning to recognize probable leaf diseases in their early phases so they can be stopped early. The work focuses on a neural network model for leaf disease detection, trained on a commonly available plant-leaves dataset organized into disease classes. The VGG16, ResNet50, Inception V3, and Inception-ResNetV2 architectures are implemented and their results compared on precision, accuracy, recall, and F1-score. The results lead to the conclusion that the convolutional neural network (CNN) is the more effective technique for detecting and predicting plant diseases.
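As a rough illustration of the transfer-learning approach this abstract describes, the sketch below builds a frozen, ImageNet-pretrained VGG16 backbone with a new classification head in tf.keras. The class count, image size, layer sizes, and dataset loading are illustrative assumptions, not details taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 38           # assumed number of leaf-disease classes (PlantVillage-style dataset)
IMG_SHAPE = (224, 224, 3)  # VGG16's standard input resolution

# Load the convolutional base pretrained on ImageNet and freeze it, so only
# the newly added classification head is trained (transfer learning).
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# train_ds / val_ds would come from the leaf-image dataset, e.g.
#   tf.keras.utils.image_dataset_from_directory("plant_leaves/", image_size=(224, 224), label_mode="categorical")
# model.fit(train_ds, validation_data=val_ds, epochs=10)

Swapping VGG16 for ResNet50, InceptionV3, or InceptionResNetV2 changes only the imported base model, which is how the four architectures mentioned above could be compared on precision, accuracy, recall, and F1-score.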
Use of explainable AI to interpret the results of NLP models for sentimental analysis
Bidve, Vijaykumar; Shafi, Pathan Mohd; Sarasu, Pakiriswamy; Pavate, Aruna; Shaikh, Ashfaq; Borde, Santosh; Pratap Singh, Veer Bhadra; Raut, Rahul
Indonesian Journal of Electrical Engineering and Computer Science Vol 35, No 1: July 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v35.i1.pp511-519

Abstract

The use of artificial intelligence (AI) systems has increased significantly over the past few years. An AI system is expected to provide accurate predictions, but it is also crucial that its decisions are humanly interpretable, i.e. anyone should be able to understand and comprehend the results it produces. AI systems are deployed even for simple decision support and are easily accessible to the common person at their fingertips. This growth in usage has come with its own limitation: interpretability. This work contributes towards the use of explainability methods such as local interpretable model-agnostic explanations (LIME) to interpret the results of various black-box models. The conclusion is that the bidirectional long short-term memory (LSTM) model is superior for sentiment analysis. The work also examines the operation of a random forest classifier, a black-box model, using explainable artificial intelligence (XAI) techniques such as LIME; the explanations reveal that the features the random forest relies on for classification are not entirely correct, an insight made possible by LIME. The proposed model can be used to enhance performance, which raises the trustworthiness and legitimacy of AI systems.
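To make the LIME workflow concrete, here is a minimal sketch, assuming a TF-IDF plus random forest pipeline from scikit-learn and the lime package; the sample texts and labels are invented for illustration and are not the paper's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment data (illustrative only).
texts = ["the movie was wonderful", "terrible plot and bad acting",
         "an enjoyable experience overall", "a boring waste of time"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

# Black-box model: TF-IDF features fed into a random forest classifier.
black_box = make_pipeline(TfidfVectorizer(),
                          RandomForestClassifier(n_estimators=100, random_state=0))
black_box.fit(texts, labels)

# LIME perturbs the input text (dropping words) and fits a local linear
# surrogate around the prediction, yielding per-word contributions.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the acting was wonderful but the plot was boring",
    black_box.predict_proba,
    num_features=5,
)
print(explanation.as_list())   # word-weight pairs showing which words drove the prediction

Inspecting these word-weight pairs is how one can check whether the classifier relies on sensible features, which is the kind of validation the abstract describes.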