Lingareddy, Sanjeev C.
Unknown Affiliation

Published: 3 Documents

Video saliency detection using modified high efficiency video coding and background modelling
Narasimha, Sharada P.; Lingareddy, Sanjeev C.
International Journal of Reconfigurable and Embedded Systems (IJRES) Vol 13, No 2: July 2024
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijres.v13.i2.pp431-440

Abstract

Video saliency detection has a profound practical impact through its effects on compression efficiency and precision. Considerable research has addressed image saliency, but far less has addressed video saliency. This paper proposes a modified high efficiency video coding (HEVC) algorithm that combines background modelling with a classification applied to coding blocks. The solution first employs the G-picture in the fourth frame as a long-term reference, which is then quantized by an algorithm that segregates frames using the background features of the image. Coding blocks are then introduced to decrease the complexity of the HEVC code, reduce time consumption, and speed up the overall saliency process. The solution is evaluated on the dynamic human fixation 1K (DHF1K) dataset and compared with several state-of-the-art saliency methods to demonstrate its reliability and efficiency.
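The background-modelling idea in the abstract can be approximated with a simple per-pixel temporal statistic: a long-term reference frame (akin to the G-picture) is built from several frames, and deviation from it flags salient foreground. This is only a minimal sketch under assumed inputs, not the paper's HEVC-integrated method; the median-based model, frame shapes, and thresholds are all illustrative assumptions.

```python
import numpy as np

def background_model(frames: np.ndarray) -> np.ndarray:
    """Estimate a long-term background reference (loosely analogous to a
    G-picture) as the per-pixel temporal median over the input frames."""
    return np.median(frames, axis=0)

def saliency_map(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Crude saliency: normalized absolute deviation from the background."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff / (diff.max() + 1e-9)

# Toy example: a static noisy background with one transient bright block.
rng = np.random.default_rng(0)
frames = rng.integers(0, 10, size=(8, 32, 32)).astype(np.float64)
frames[4, 10:14, 10:14] += 200.0  # transient foreground object in frame 4

bg = background_model(frames)
sal = saliency_map(frames[4], bg)
print(sal[12, 12] > 0.5, sal[0, 0] < 0.2)  # → True True
```

The median is robust to the transient object, so the foreground block stands out sharply while background pixels score near zero; the actual paper operates inside the HEVC coding loop rather than on raw pixels.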
Task level energy and performance assurance workload scheduling model in distributed computing environment
Bakka, Jagadevi; Lingareddy, Sanjeev C.
International Journal of Reconfigurable and Embedded Systems (IJRES) Vol 13, No 1: March 2024
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijres.v13.i1.pp210-216

Abstract

Executing scientific workloads on distributed computing platforms such as cloud environments is time-intensive and expensive. A scientific workload has task dependencies with different service level agreement (SLA) prerequisites at different levels. Existing workload scheduling (WS) designs are not efficient at assuring SLAs at the task level; they also induce higher cost, because most scheduling mechanisms reduce either time or energy but not both. To reduce cost, energy and makespan must be optimized together when allocating resources. No prior work has considered optimizing energy and processing time jointly while meeting task-level SLA requirements. This paper presents a task level energy and performance assurance (TLEPA) WS algorithm for distributed computing environments. TLEPA-WS guarantees energy minimization while meeting the performance requirements of parallel applications in a distributed computational environment. Experimental results show a significant reduction in energy use and makespan, and thereby in the cost of workload execution, in comparison with several standard workload execution models.
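The joint energy–makespan objective described above can be illustrated with a simple greedy heuristic that places each task on the machine minimizing a weighted sum of the resulting makespan and cumulative energy. This is purely an illustrative sketch, not the TLEPA-WS algorithm; the machine speed/power ratings, task lengths, and the trade-off weight are invented for the example.

```python
# Greedy heuristic: place each task on the machine that minimizes a
# weighted sum of the resulting makespan and total energy consumed.
# (Illustrative only -- not the paper's TLEPA-WS algorithm.)

ALPHA = 0.5  # assumed trade-off weight between makespan and energy

def schedule(tasks, machines):
    """tasks: list of task runtimes at unit speed.
    machines: list of (speed, power) tuples.
    Returns per-machine finish times and total energy used."""
    finish = [0.0] * len(machines)
    energy = 0.0
    for t in sorted(tasks, reverse=True):  # longest task first
        best, best_cost = None, None
        for m, (speed, power) in enumerate(machines):
            run = t / speed
            new_makespan = max(max(finish), finish[m] + run)
            cost = ALPHA * new_makespan + (1 - ALPHA) * (energy + run * power)
            if best_cost is None or cost < best_cost:
                best, best_cost = m, cost
        speed, power = machines[best]
        finish[best] += t / speed
        energy += (t / speed) * power
    return finish, energy

# Two machines: slow/frugal (speed 1, power 10) and fast (speed 2, power 12).
finish, energy = schedule([4, 3, 2, 2], [(1.0, 10.0), (2.0, 12.0)])
print(max(finish), energy)  # → 5.5 66.0
```

With this weighting the fast machine's energy premium is small enough that the heuristic keeps every task there; changing ALPHA shifts the balance toward energy frugality or speed, which is the trade-off the abstract argues must be optimized jointly.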
Transliteration and translation of Hindi language using integrated domain-based Auto-encoder
K, Vathsala M; Lingareddy, Sanjeev C.
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i4.pp4906-4914

Abstract

The main objective of translation is to convey a word's meaning from one language to another; transliteration, in contrast, carries no contextual meaning between languages and considers only the individual letters that make up each word. This paper develops an integrated deep neural network transliteration and translation (NNTT) model based on an autoencoder. The model is split into a transliteration model and a translation model. The transliteration model, which converts text from one script to another, is evaluated on the Dakshina dataset and, as is typical for Hindi, uses a sequence-to-sequence model with an attention mechanism. The translation model is trained to translate text from one language to another; it likewise uses a sequence-to-sequence model with an attention mechanism, similar to the one in the transliteration model, and is evaluated on the Workshop on Asian Translation (WAT) 2021 dataset. The proposed NNTT model merges the in-domain and out-of-domain frameworks into a single training framework so that information is transferred between domains. The evaluated results show that the proposed model performs effectively for the Hindi language in comparison with the existing system.
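The attention mechanism shared by the two sequence-to-sequence models can be sketched as standard scaled dot-product attention: the decoder's query is scored against every encoder position, and a softmax over those scores weights the encoder states. This is a generic NumPy illustration, not the paper's trained model; the one-hot encoder states and all dimensions are toy assumptions.

```python
import numpy as np

def scaled_dot_product_attention(query, keys, values):
    """Return an attention-weighted mixture of `values`.

    query:  (d,)      decoder state
    keys:   (T, d)    encoder states
    values: (T, d_v)  encoder states (often identical to keys)
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)               # (T,) alignment scores
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time steps
    return weights @ values, weights

keys = np.eye(5, 8)                    # 5 toy one-hot encoder states, dim 8
values = np.arange(40.0).reshape(5, 8) # distinct payload per position
query = 4.0 * keys[2]                  # query aligned with position 2

context, weights = scaled_dot_product_attention(query, keys, values)
print(weights.argmax())  # → 2: attention concentrates on position 2
```

Because the query matches encoder position 2, the softmax places the largest weight there and the context vector is dominated by that position's values; in the NNTT setting this is what lets the decoder focus on the relevant source characters (transliteration) or words (translation).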