This author has published in the following journals: IJOT, IJHCS.
Articles

Found 3 Documents

Cartographic Map Abstraction Using R Programming (Literacy Rate, HDI and Poverty Data Interpolation of Nepal) Yagyanath Rimal; Sakuntala Pageni
International Journal on Orange Technologies Vol. 1 No. 2 (2019): IJOT
Publisher : Research Parks Publishing LLC


Abstract

R programming with interactive map data is an emerging research area for data scientists, driven by the growing volume of data published through electronic record web applications across the federal administrative structure of Nepal. However, the interactive presentation of maps on the web together with data interpretation has received little attention from data scientists in website design around the world. Central, provincial, and local administrative government bodies frequently publish data of public concern, and presenting those dynamic records on interactive maps with local boundaries is a new concept in R programming. Here the researcher develops the local administrative map from a GIS shapefile, and local-level records, typically gathered and stored in MS Excel, are integrated into this template automatically, so that local administrative agencies can easily update a website using R programming (i.e., RPubs) without registering a domain or acquiring web application design expertise. This kind of interactive map with integrated data would be highly applicable to local governance in Nepal, where a large variety of data and records is produced sequentially for public concern. The interactive VDC-, district-, and province-level views highlight data such as education rates and HDI information for any location, which can then be published easily. The developed model is available at http://rpubs.com/yagyarimal/556607 as interactive web pages built quickly with R Markdown and a flexdashboard design template structure.
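A minimal sketch of the workflow this abstract describes, assuming the sf, readxl, dplyr, ggplot2, and plotly packages: a district boundary shapefile is read, indicator records from an Excel sheet are joined onto it, and the result is rendered as an interactive choropleth. The file names, the DISTRICT join key, and the indicator columns are illustrative assumptions, not the published RPubs code.

```r
library(sf)        # read GIS shapefiles
library(readxl)    # read MS Excel records
library(dplyr)     # join map and attribute tables
library(ggplot2)   # draw the choropleth
library(plotly)    # add hover/zoom interactivity

# Hypothetical inputs: a district boundary shapefile and an Excel sheet of indicators.
nepal_map  <- st_read("nepal_districts.shp")
indicators <- read_excel("district_indicators.xlsx")   # e.g. DISTRICT, LITERACY, HDI, POVERTY

# Attach the indicator columns to each district polygon (the join key is an assumption).
map_data <- left_join(nepal_map, indicators, by = "DISTRICT")

p <- ggplot(map_data) +
  geom_sf(aes(fill = HDI, text = paste0(DISTRICT, ": HDI ", HDI))) +
  scale_fill_viridis_c() +
  labs(title = "HDI by district", fill = "HDI")

ggplotly(p, tooltip = "text")   # interactive map for an R Markdown / flexdashboard page
```

In an R Markdown flexdashboard, printing the ggplotly() object inside a chunk is enough for it to appear as an interactive panel when the page is published, for example to RPubs.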
Naïve Bayes Machine Learning Classification with R Programming: A case study of binary data sets Yagyanath Rimal
International Journal on Orange Technologies Vol. 1 No. 2 (2019): IJOT
Publisher : Research Parks Publishing LLC


Abstract

This analytical review paper explains the Naïve Bayes machine learning technique, a simple probabilistic classifier based on Bayes' theorem with an assumption of independence between features, using R programming. There is still a large gap in guidance on which algorithm is suitable when the value to be predicted in research data is a largely categorical variable. For the Naïve Bayes classification, the model is trained on the training data set and then used to make predictions on the test data set. The distinctive feature of the technique is that it takes in new information and tries to make a better forecast by weighing the new evidence; when the input variables are largely categorical, this is quite similar to how the human mind, much like a neural network in the brain, selects a proper judgement from various alternative choices, and it can be applied using R programming. Here the researcher uses the binary.csv data set of 400 observations with four attributes from an educational context. Admit is the dependent variable, determined by the GRE score, GPA, and rank of the previous grade, which together indicate whether a student will be admitted to the next program or not. Initially, the gre and gpa variables show a significance of 0.36 in their association with the categorical rank variable. The box plot and density plot demonstrate the overlap between the admitted and not-admitted groups. The Naïve Bayes classification model classifies 0.68 of the data as not admitted and 0.31 as admitted. The confusion matrix and predictions give an accuracy of 0.68 with a 95 percent confidence interval. Similarly, the training accuracy increases from 29 percent to 32 percent when the Naïve Bayes algorithm is run with usekernel = TRUE, which ultimately decreases misclassification errors on the binary data set.
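A minimal sketch of the classification workflow described above, assuming the naivebayes package and a local copy of binary.csv (admit, gre, gpa, rank); the 70/30 split and random seed are illustrative choices, not necessarily the paper's exact setup.

```r
library(naivebayes)

# Hypothetical local copy of the admissions data: admit (0/1), gre, gpa, rank (1-4).
data <- read.csv("binary.csv")
data$admit <- factor(data$admit, labels = c("not admitted", "admitted"))
data$rank  <- factor(data$rank)

# Illustrative 70/30 train/test split.
set.seed(1234)
idx   <- sample(nrow(data), floor(0.7 * nrow(data)))
train <- data[idx, ]
test  <- data[-idx, ]

# usekernel = TRUE swaps the Gaussian density for a kernel density estimate on gre/gpa,
# the option the abstract credits with reducing misclassification errors.
model <- naive_bayes(admit ~ gre + gpa + rank, data = train, usekernel = TRUE)

pred <- predict(model, newdata = test[, c("gre", "gpa", "rank")])
(conf <- table(Predicted = pred, Actual = test$admit))   # confusion matrix
sum(diag(conf)) / sum(conf)                              # overall accuracy
```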
Multivariate imputation for missing data handling a case study on small and large data sets Yagyanath Rimal
International Journal of Human Computing Studies Vol. 2 No. 1 (2020): IJHCS
Publisher : Research Parks Publishing LLC

DOI: 10.31149/ijhcs.v2i1.352

Abstract

The absence of records, generally termed missing data, should be treated properly before analysis proceeds. Many researchers have undoubtedly misled their research findings by failing to treat missing data properly; this review therefore tries to explain the best ways of handling missing data using R programming. Generally, many researchers apply mean or median imputation, but this sometimes introduces bias, so the researcher also explains some basic associations among the other research variables while treating the missing data in R. The imputation process generates five alternative replacement values for each missing entry automatically, which are substituted easily during data cleaning and data preparation. Here the researcher explains two sample data sets for missing-data treatment and presents several ways of interpreting them graphically. The first data set, with 12 observations, describes the easiest way of replacing missing values; for the second, a vehicle failure data set of 1,624 records taken from the internet, the missing-data pattern is calculated and the missing values are replaced in the respective data set before analysis.
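A minimal sketch of this kind of multivariate imputation, assuming the mice and VIM packages: mice generates five imputed data sets by default (m = 5), which matches the five alternative replacements mentioned in the abstract. The file name and columns are placeholders, not the paper's actual vehicle failure data.

```r
library(mice)   # multivariate imputation by chained equations
library(VIM)    # visualise missing-data patterns

# Hypothetical local copy of the vehicle failure records (1,624 rows in the paper).
vehicle <- read.csv("vehicle_failure.csv")

md.pattern(vehicle)                 # tabulate which combinations of variables are missing
aggr(vehicle, numbers = TRUE)       # plot the proportion and pattern of missingness

# Generate five alternative completed data sets, letting mice pick a default
# imputation method per column type.
imp <- mice(vehicle, m = 5, seed = 500)
summary(imp)

completed <- complete(imp, 1)       # take the first completed data set for analysis
colSums(is.na(completed))           # confirm no missing values remain
```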