Articles
Managers' Perceptions towards the Success of E-performance Reporting System
A'ang Subiyakto;
Ditha Septiandani;
Evy Nurmiati;
Yusuf Durachman;
Mira Kartiwi;
Abd. Rahman Ahlan
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 15, No 3: September 2017
Publisher : Universitas Ahmad Dahlan
DOI: 10.12928/telkomnika.v15i3.5133
Managers are the key informants in information system (IS) success measurements. In practice, however, these determinant agents are rarely involved in the assessments; most measurements are instead performed by the technical stakeholders of the systems, so the results may be questionable. This study was carried out to explain the factors that influence the success of an e-performance reporting system in an Indonesian university, involving approximately 70% of the managers (n=66) in the sampled institution. The DeLone and McLean model was adopted and adapted here following the suggestions of previous meta-analysis studies. The collected data were analyzed using partial least squares structural equation modelling (PLS-SEM) to examine the four hypotheses. Although the findings supported all four hypotheses, the weak explanatory power of the user satisfaction variable towards the net benefit variable was the highlighted point. Together with the study limitations, this point may serve as a practical and theoretical consideration for future studies, especially IS success studies in Indonesia.
Privacy and Personal Data Protection in Electronic Voting: Factors and Measures
Muharman Lubis;
Mira Kartiwi;
Sonny Zulhuda
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 15, No 1: March 2017
Publisher : Universitas Ahmad Dahlan
DOI: 10.12928/telkomnika.v15i1.3804
In general, electronic voting as a technological advancement offers opportunities to reduce the time and budget of implementation, presenting greater advantages than the traditional approach. This study seeks to establish a privacy framework in the context of electronic voting that aligns with a mutual comprehension of the relevant factors and measures. The results show that privacy concern and perceived benefit significantly influence personal data protection. The success or failure of electronic voting implementation depends on fulfilling voters' needs for privacy and personal data protection.
A Coherent Framework for Understanding the Success of an Information System Project
Syopiansyah Jaya Putra;
A'ang Subiyakto;
Abd. Rahman Ahlan;
Mira Kartiwi
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 14, No 1: March 2016
Publisher : Universitas Ahmad Dahlan
DOI: 10.12928/telkomnika.v14i1.2711
This paper elucidates the sequential revisions of an information system (IS) project framework across the development of the research model and its examinations. The authors adopted, adapted, and combined five concepts from the project management discipline and information processing theory to revise the framework. Besides using this multi-dimensional perspective, the authors also succeeded in presenting the interrelation between the framework and the examined model within a coherent representation. This was one of the essential points of the model development study, in particular for presenting the research focus. It may be a trivial issue for experts in the field, but a coherent illustration is one of the critical issues in validating a model, and inexpert researchers may need a guideline for representing the interrelationship. These points constitute the main contribution of this study in filling a gap in the literature, particularly the lack of comprehensive detail on research model development.
On the use of voice activity detection in speech emotion recognition
Muhammad Fahreza Alghifari;
Teddy Surya Gunawan;
Mimi Aminah binti Wan Nordin;
Syed Asif Ahmad Qadri;
Mira Kartiwi;
Zuriati Janin
Bulletin of Electrical Engineering and Informatics Vol 8, No 4: December 2019
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/eei.v8i4.1646
Emotion recognition through speech has many potential applications; the challenge, however, lies in achieving high recognition accuracy with limited resources or under interference such as noise. In this paper we explore the possibility of improving speech emotion recognition by utilizing the voice activity detection (VAD) concept. The emotional voice data from the Berlin Emotion Database (EMO-DB) and a custom-made database, the LQ Audio Dataset, are first preprocessed by VAD before feature extraction. The features are then passed to a deep neural network for classification. We chose MFCC as the sole feature. Comparing the results obtained with and without VAD, we found that VAD improved the recognition rate for 5 emotions (happy, angry, sad, fear, and neutral) by 3.7% on clean signals, while using VAD when training the network with both clean and noisy signals improved our previous results by 50%.
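The preprocessing step this abstract describes (discarding silent regions with VAD before feature extraction) can be illustrated with a minimal energy-based sketch. The paper does not specify its VAD algorithm, frame sizes, or threshold, so the function and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def energy_vad(signal, frame_len=400, hop=160, threshold_db=-30.0):
    """Keep only frames whose short-time energy exceeds a threshold
    relative to the loudest frame (a simple stand-in for VAD)."""
    frames = np.array([signal[s:s + frame_len]
                       for s in range(0, len(signal) - frame_len + 1, hop)])
    energy = np.sum(frames ** 2, axis=1)
    ref = energy.max() + 1e-12
    energy_db = 10.0 * np.log10(energy / ref + 1e-12)
    return frames[energy_db > threshold_db]

# toy signal: 1 s silence, 1 s of a 220 Hz tone, 1 s silence (16 kHz)
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
signal = np.concatenate([np.zeros(sr), tone, np.zeros(sr)])

kept = energy_vad(signal)
total = (len(signal) - 400) // 160 + 1
print(kept.shape[0], total)  # far fewer frames survive than were framed
```

The surviving frames would then go to MFCC extraction and the classifier; only the voiced region is kept, which is the effect the paper attributes to its VAD front end.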
A critical insight into multi-languages speech emotion databases
Syed Asif Ahmad Qadri;
Teddy Surya Gunawan;
Muhammad Fahreza Alghifari;
Hasmah Mansor;
Mira Kartiwi;
Zuriati Janin
Bulletin of Electrical Engineering and Informatics Vol 8, No 4: December 2019
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/eei.v8i4.1645
With increased interest in human-computer and human-human interaction, systems that deduce and identify the emotional aspects of a speech signal have emerged as a hot research topic. Recent research is directed towards the automated and intelligent analysis of human utterances. Although considerable work has gone into designing systems, algorithms, and classifiers in this field, the area is still far from standardization. Considerable uncertainty remains over aspects such as the most influential features, the better-performing algorithms, and the number of emotion classes. Among the influencing factors, the differences between speech databases, such as the data collection method, are accepted as significant by the research community. A speech emotion database is essentially a repository of varied human speech samples collected and sampled using a specified method. This paper reviews 34 speech emotion databases for their characteristics and specifications, and also highlights critical insight into their limitations.
Development of Face Recognition on Raspberry Pi for Security Enhancement of Smart Home System
Teddy Surya Gunawan;
Muhammad Hamdan Hasan Gani;
Farah Diyana Abdul Rahman;
Mira Kartiwi
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 5, No 4: December 2017
Publisher : IAES Indonesian Section
DOI: 10.52549/ijeei.v5i4.361
Nowadays, there is growing interest in smart home systems using the Internet of Things. One of the important aspects of a smart home system is the security capability, which can simply lock and unlock the door or the gate. In this paper, we propose a face recognition security system using a Raspberry Pi that can be connected to the smart home system. Eigenface was used for feature extraction, while Principal Component Analysis (PCA) was used as the classifier. The output of the face recognition algorithm is then connected to a relay circuit, which locks or unlocks the magnetic lock placed at the door. Results showed the effectiveness of the proposed system, with around 90% face recognition accuracy. We also propose a hierarchical image processing approach to reduce training and testing time while improving recognition accuracy.
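The eigenface-plus-nearest-match scheme the abstract outlines can be sketched as follows. The vectors below are random toy stand-ins for flattened face images, and the image size, noise level, and number of components are illustrative assumptions; the paper's actual dataset, preprocessing, and classifier settings are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "faces": two base vectors standing in for two enrolled people,
# each with five noisy flattened training images of 64 pixels
base_a = rng.normal(size=64)
base_b = rng.normal(size=64)
train = np.array([base_a + 0.1 * rng.normal(size=64) for _ in range(5)] +
                 [base_b + 0.1 * rng.normal(size=64) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)

# eigenfaces: principal components of the mean-centred training set (via SVD)
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
eigenfaces = vt[:4]  # keep the top 4 components

def project(img):
    """Project a flattened image into the eigenface subspace."""
    return eigenfaces @ (img - mean)

def recognise(img):
    """Nearest neighbour in the eigenface subspace."""
    dists = [np.linalg.norm(project(img) - project(t)) for t in train]
    return labels[int(np.argmin(dists))]

probe = base_b + 0.1 * rng.normal(size=64)
print(recognise(probe))  # expected to match person B (label 1)
```

In the paper's system, the recognised identity would then drive the relay circuit that releases the magnetic door lock.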