Chanintorn Jittawiriyanukoon
Assumption University

Published: 11 Documents

Articles

Performance evaluation of wireless local area network with congested fading channels
Chanintorn Jittawiriyanukoon; Vilasinee Srisarkun
International Journal of Electrical and Computer Engineering (IJECE) Vol 12, No 1: February 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v12i1.pp411-417

Abstract

The IEEE 802.11ay wireless communication standard allows devices to connect in the millimeter-wave (mm-Wave) 60 GHz band with up to 100 Gbps of bandwidth. The development of such high-bandwidth communication networks is necessary because the QoS, throughput, and error-rate requirements of bandwidth-intensive applications such as merged reality (MR), artificial intelligence (AI) related apps, and demanding wireless communication exceed the scope of the earlier 802.11 standard established in 2012. The IEEE 802.11ay task group has therefore amended the physical (PHY) and medium access control (MAC) designs to guarantee technical targets, especially link delay on multipath fading channels (MPFC). However, due to congestion from extremely bandwidth-intensive applications such as IoT and big data, we propose diversifying the propagation delay toward a practical range. This article therefore focuses on a real-world situation and on how the performance of mm-Wave propagation affects the IEEE 802.11ay design. Specifically, we randomize the unstable MPFC link capacity by taking the divergence of congested network parameters into account. The efficiency of the congested MPFC-based wireless network is simulated and verified against the advancements described in the standard.
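As a rough illustration of how randomized link capacity on a fading channel can be simulated, the sketch below draws Rayleigh multipath gains and a random congestion factor per time slot and reports the resulting Shannon-capacity throughput. The bandwidth, mean SNR, congestion range, and slot count are illustrative assumptions, not parameters from the paper or from the 802.11ay specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not from the paper) nominal link parameters.
BANDWIDTH_HZ = 2.16e9          # one 60 GHz channel, assumed
MEAN_SNR_DB = 15.0             # assumed average received SNR
N_SLOTS = 10_000               # simulated time slots

# Rayleigh multipath fading: channel power gain is exponentially distributed.
gain = rng.exponential(scale=1.0, size=N_SLOTS)

# Hypothetical congestion model: each slot keeps only a random share of capacity.
congestion = rng.uniform(0.3, 1.0, size=N_SLOTS)

snr_linear = 10 ** (MEAN_SNR_DB / 10) * gain
capacity_bps = congestion * BANDWIDTH_HZ * np.log2(1 + snr_linear)

print(f"mean throughput : {capacity_bps.mean() / 1e9:.2f} Gbps")
print(f"5th percentile  : {np.percentile(capacity_bps, 5) / 1e9:.2f} Gbps")
```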
Estimation of regression-based model with bulk noisy data
Chanintorn Jittawiriyanukoon
International Journal of Electrical and Computer Engineering (IJECE) Vol 9, No 5: October 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v9i5.pp3649-3656

Abstract

Bulk noise corrupts contributed data when a communication network operates with a tremendously low signal-to-noise ratio. A well-regarded method for revising massive noise across individual records through information theory is widely discussed. One practical application of this approach to bulk-noise estimation is analyzed using intelligent automation and machine learning tools, covering the cases in which bulk noise is present and absent. A regression-based model is employed for the investigation and experiments, and an estimation method for the practical case of bulk noisy datasets is proposed. The proposed method applies a slice-and-dice technique that partitions a body of data into smaller portions so that the estimation can be carried out. The average error, correlation, absolute error, and mean squared error are computed to validate the estimation, and results from massive online analysis are verified against data collected in the following period. In many cases, the prediction on bulk noisy data through MOA simulation reveals that Random Imputation minimizes the average error.
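As a minimal sketch of the slice-and-dice idea and of the validation metrics named above, the snippet below partitions a synthetic noisy dataset into smaller slices, fits an ordinary least-squares regression per slice, and reports the average error, MAE, MSE, and correlation. The synthetic data, the number of slices, and the per-slice OLS fit are assumptions for illustration; the paper itself works with MOA streams and Random Imputation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a bulk-noisy dataset.
n = 5_000
x = rng.uniform(0, 10, size=(n, 1))
y = 3.0 * x[:, 0] + 2.0 + rng.normal(0, 5.0, size=n)   # heavy noise

def fit_predict(x_tr, y_tr, x_te):
    # Ordinary least squares with an intercept column.
    A = np.c_[x_tr, np.ones(len(x_tr))]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.c_[x_te, np.ones(len(x_te))] @ coef

# "Slice and dice": split the data into smaller portions and model each slice.
preds = np.empty(n)
for idx in np.array_split(np.arange(n), 10):
    preds[idx] = fit_predict(x[idx], y[idx], x[idx])

err = y - preds
print("average error :", err.mean())
print("MAE           :", np.abs(err).mean())
print("MSE           :", (err ** 2).mean())
print("correlation   :", np.corrcoef(y, preds)[0, 1])
```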
Granularity analysis of classification and estimation for complex datasets with MOA
Chanintorn Jittawiriyanukoon
International Journal of Electrical and Computer Engineering (IJECE) Vol 9, No 1: February 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v9i1.pp409-416

Abstract

Dispersed and unstructured datasets are substantial factors in determining the exact amount of space required. Depending on the size and the data distribution, and especially when the classes are strongly associated, the level of granularity needed for a precise classification of the datasets increases. Data complexity is one of the major attributes governing the proper value of the granularity, as it has a direct impact on performance. Dataset classification is a vital step in complex data analytics, designed to ensure that a dataset is ready to be scrutinized efficiently. Data collections always contain missing, noisy, and out-of-range values, and data analytics that has not been carefully classified for such problems can produce unreliable outcomes. Hence, classification of complex data sources helps machine learning algorithms preserve the accuracy of the gathered datasets. Dataset complexity and pre-processing time reflect the effectiveness of each algorithm. Once the complexity of the datasets is characterized, the comparatively simpler datasets can be investigated further with a parallel approach. Speedup is measured by executing the MOA simulation. Our proposed classification approach outperforms existing ones and improves the granularity level of complex datasets.
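The sketch below illustrates only the parallelism-and-speedup measurement mentioned above: a dataset is split into a chosen number of portions (the granularity) and a deliberately CPU-heavy stand-in classifier is run serially and in parallel. The stand-in classifier, data sizes, and worker count are assumptions; the observed speedup depends entirely on how costly each portion is relative to process start-up and data-transfer overhead.

```python
import time
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def classify_chunk(chunk):
    # Hypothetical CPU-heavy stand-in for classifying one portion of the data.
    w = np.linspace(-1.0, 1.0, chunk.shape[1])
    score = np.zeros(len(chunk))
    for _ in range(300):
        score += np.tanh(chunk * w).sum(axis=1)
    return (score > 0).astype(int)

def run(chunks, workers):
    start = time.perf_counter()
    if workers == 1:
        [classify_chunk(c) for c in chunks]
    else:
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(classify_chunk, chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = rng.normal(size=(200_000, 8))
    chunks = np.array_split(data, 16)            # granularity level = 16 portions
    serial, parallel = run(chunks, 1), run(chunks, 4)
    print(f"serial {serial:.2f}s, parallel {parallel:.2f}s, speedup {serial / parallel:.2f}x")
```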
Evaluation of a Multiple Regression Model for Noisy and Missing Data
Chanintorn Jittawiriyanukoon
International Journal of Electrical and Computer Engineering (IJECE) Vol 8, No 4: August 2018
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v8i4.pp2220-2229

Abstract

Standard data collection problems may involve noiseless data, but large organizations commonly experience noisy and missing data, particularly in data collected from individuals. Because noisy and missing data become significantly troublesome in large-scale data collection, an investigation of different filtering techniques for the big data environment is worthwhile. A multiple regression model in which big data is employed for the experiments is presented, and an approximation for datasets with noisy and missing data is proposed. The root mean squared error (RMSE) together with the correlation coefficient (COEF) is analyzed to establish the accuracy of the estimators. Finally, results predicted by massive online analysis (MOA) are compared with real data collected over the following period. The theoretical predictions obtained by simulation with the noisy and missing data estimation are shown to be consistent with the real data. The deletion mechanism (DEL) outperforms the alternatives with the lowest average percentage of error.
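A minimal sketch of the deletion mechanism and of the two reported accuracy measures, under assumed synthetic data: rows containing missing predictors are dropped, a multiple regression is fitted by least squares, and the RMSE and correlation coefficient (COEF) are computed. Nothing here reproduces the paper's MOA experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a noisy dataset with missing entries.
n = 2_000
X = rng.uniform(0, 1, size=(n, 3))
y = X @ np.array([4.0, -2.0, 1.5]) + rng.normal(0, 0.3, size=n)
X[rng.random(size=X.shape) < 0.1] = np.nan      # roughly 10% missing predictors

# Deletion mechanism (DEL): drop every row that contains a missing value.
keep = ~np.isnan(X).any(axis=1)
Xc, yc = X[keep], y[keep]

# Multiple regression via least squares with an intercept column.
A = np.c_[Xc, np.ones(len(Xc))]
coef, *_ = np.linalg.lstsq(A, yc, rcond=None)
pred = A @ coef

rmse = np.sqrt(((yc - pred) ** 2).mean())
coef_corr = np.corrcoef(yc, pred)[0, 1]
print(f"rows kept: {keep.sum()}/{n}, RMSE: {rmse:.3f}, COEF: {coef_corr:.3f}")
```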
Simulation for predictive maintenance using weighted training algorithms in machine learning
Chanintorn Jittawiriyanukoon; Vilasinee Srisarkun
International Journal of Electrical and Computer Engineering (IJECE) Vol 12, No 3: June 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v12i3.pp2839-2846

Abstract

In production, the efficient employment of machines is recognized as a source of industrial competitiveness and strategic planning. Manufacturing industries harvest data silos that need to be monitored and deployed as an operational tool that supports the right decisions for minimizing maintenance cost. However, it is complex to prioritize and decide between several results. This article uses synthetic data from a factory, mines the data to filter out insight, and applies a machine learning (ML) tool from artificial intelligence (AI) to support decision-making and schedule a maintenance plan. The data include machinery, category, usage statistics, acquisition, owner's unit, location, classification, and downtime. An open-source ML software tool is used to make up for the lack of maintenance planning and scheduling. After data mining, three promising training algorithms are applied to the insightful data and their accuracy figures are obtained. These accuracies are then proposed as weighting factors for forecasting the priority of the maintenance schedule. The analysis helps monitor the anticipated behavior of new machines in order to improve the mean time between failures (MTBF), promote continuous manufacturing, and achieve production safety.
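A hedged sketch of the accuracy-weighting idea, using scikit-learn stand-ins rather than the paper's data or tool: three classifiers are trained on synthetic "maintenance" data, their test accuracies become weights, and the weighted failure probability serves as a maintenance-priority score. The dataset, the choice of algorithms, and the scoring rule are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the factory dataset (label 1 = "needs maintenance").
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Three training algorithms; their test accuracies become the weights.
models = [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier()]
weights, probs = [], []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(accuracy_score(y_te, m.predict(X_te)))
    probs.append(m.predict_proba(X_te)[:, 1])

# Accuracy-weighted score: a ranking signal for maintenance priority.
priority = np.average(np.vstack(probs), axis=0, weights=weights)
print("weights:", [round(w, 3) for w in weights])
print("top-5 highest-priority test items:", np.argsort(priority)[::-1][:5])
```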
Proposed classification for eLearning data analytics with MOA
Chanintorn Jittawiriyanukoon
International Journal of Electrical and Computer Engineering (IJECE) Vol 9, No 5: October 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v9i5.pp3569-3575

Abstract

eLearning has become a crucial factor in educational organizations. With declining student numbers, eLearning has to offer more cross-departmental and multi-disciplinary courses tailored to individual needs, moving beyond the "one-size-fits-all" traditional model. eLearning data analytics that has not been properly classified cannot produce reliable results; classification of eLearning data helps preserve the accuracy of the outcomes and reduces pre-processing time. This research proposes a practical model for individual learning and personality. The proposed model, based on data from the LMS, classifies both student preferences and personalities. The model helps design future curricula to suit student personalities, which in turn helps students become more efficient in their study practice. The performance of the proposed classification is evaluated with the MOA software; it outperforms alternatives and improves the accuracy on complex eLearning datasets. In addition, the results indicate an improvement in students' study time after applying the association-rule model to the eLearning data.
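The association-rule step mentioned at the end can be illustrated with a few made-up LMS transactions; the snippet below counts item pairs and prints the single-antecedent rules that pass assumed support and confidence thresholds. The transactions, thresholds, and resource names are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Hypothetical LMS activity transactions (resource names are made up).
transactions = [
    {"video", "quiz", "forum"},
    {"video", "quiz"},
    {"quiz", "forum"},
    {"video", "quiz", "forum"},
    {"video", "slides"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.7
n = len(transactions)

# Count single items and item pairs across all transactions.
item_counts = Counter(i for t in transactions for i in t)
pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))

# Rules of the form {a} -> {b} that meet the support and confidence thresholds.
for (a, b), c in pair_counts.items():
    support = c / n
    if support < MIN_SUPPORT:
        continue
    for lhs, rhs in ((a, b), (b, a)):
        confidence = c / item_counts[lhs]
        if confidence >= MIN_CONFIDENCE:
            print(f"{{{lhs}}} -> {{{rhs}}}  support={support:.2f}  confidence={confidence:.2f}")
```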
Evaluation of graphic effects embedded image compression
Chanintorn Jittawiriyanukoon; Vilasinee Srisarkun
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 6: December 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i6.pp6606-6617

Abstract

A fundamental factor in digital image compression is the conversion process. The intention of this process is to understand the shape of an image and to convert the digital image to a grayscale configuration on which the encoding of the compression technique operates. This article investigates compression algorithms for images with artistic effects. A key issue in image compression is how to effectively preserve the original quality of images. Image compression condenses images by reducing their redundant data so that they can be transferred cost-effectively. The common techniques include the discrete cosine transform (DCT), the fast Fourier transform (FFT), and the shifted FFT (SFFT). Experimental results report and compare the compression ratio between the original RGB images and the grayscale images. The algorithm that best improves shape comprehension for images with graphic effects is the SFFT technique.
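As a rough illustration of the grayscale-conversion and frequency-domain compression steps described above, the sketch below converts a synthetic RGB image to luminance, keeps only the largest FFT coefficients, and reports the coefficient-level compression ratio and reconstruction error. The test image, the 2% retention rule, and the use of a plain FFT (rather than the paper's DCT/FFT/SFFT comparison) are assumptions.

```python
import numpy as np

# Synthetic stand-in for an RGB image with a smooth graphic effect.
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
r = 0.5 + 0.5 * np.sin(xx / 16.0)
g = 0.5 + 0.5 * np.cos(yy / 16.0)
b = 0.5 + 0.5 * np.sin((xx + yy) / 32.0)
rgb = np.stack([r, g, b], axis=-1)

# Luminance-weighted grayscale conversion.
gray = rgb @ np.array([0.299, 0.587, 0.114])

# FFT-based coefficient thresholding: keep only the largest 2% of coefficients.
F = np.fft.fft2(gray)
threshold = np.quantile(np.abs(F), 0.98)
F_kept = np.where(np.abs(F) >= threshold, F, 0.0)
reconstructed = np.fft.ifft2(F_kept).real

kept = np.count_nonzero(F_kept)
print(f"kept {kept}/{F.size} coefficients (about {F.size / kept:.0f}:1)")
print(f"reconstruction MSE: {((gray - reconstructed) ** 2).mean():.6f}")
```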
Proposed algorithm for image classification using regression-based pre-processing and recognition models
Chanintorn Jittawiriyanukoon
International Journal of Electrical and Computer Engineering (IJECE) Vol 9, No 2: April 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v9i2.pp1021-1027

Abstract

Image classification algorithms can categorise pixels according to image attributes after pre-processing the learner's training samples. Precision and classification accuracy are difficult to compute because of the variable number of pixels (different image widths and heights) and the numerous characteristics of each image. This research proposes an image classification algorithm based on regression-based pre-processing and recognition models. The proposed algorithm focuses on optimizing pre-processing results such as accuracy and precision. For evaluation and validation, a recognition model is mapped in order to cluster the digital images, which gives rise to a multidimensional state-space problem. Simulation results show that, compared with existing algorithms, the proposed method performs best, with optimal precision and accuracy in classification, and yields a higher matching percentage based on image analytics.
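The sketch below is one assumed reading of "regression-based pre-processing followed by a recognition model": a least-squares fit of labels on pixels provides weights that rescale the features, and a nearest-centroid recognizer then classifies the images, with precision and accuracy reported. The toy images, the rescaling rule, and the nearest-centroid recognizer are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in: two classes of tiny 8x8 "images", flattened to 64 features.
def make_images(mean, n):
    return np.clip(rng.normal(mean, 0.3, size=(n, 64)), 0, 1)

X = np.vstack([make_images(0.3, 200), make_images(0.7, 200)])
y = np.array([0] * 200 + [1] * 200)

# Regression-based pre-processing (an assumed stand-in for the paper's step):
# least-squares weights of labels on pixels are used to rescale the features.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
X_pre = X * np.abs(w[:-1])               # emphasise informative pixels

# Recognition model: nearest class centroid on the pre-processed features.
centroids = np.array([X_pre[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X_pre[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)

tp = np.sum((pred == 1) & (y == 1))
precision = tp / max(np.sum(pred == 1), 1)
accuracy = np.mean(pred == y)
print(f"precision: {precision:.3f}, accuracy: {accuracy:.3f}")
```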
Performance evaluation of listwise deletion for impaired datasets in multiple regression-based prediction
Chanintorn Jittawiriyanukoon
Indonesian Journal of Electrical Engineering and Computer Science Vol 15, No 2: August 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v15.i2.pp1009-1018

Abstract

Multiple regression-based prediction (MRBP) is an emerging calculation and analysis technique that copes with the future by compiling historical data. MRBP approximates the associations between physical observations and predictions; as a predictive model, it is an important source of knowledge about trends worth following in the future. However, the MRBP dataset may be impaired: each form of missing or noisy data causes errors and makes further analysis impossible. To overcome this so that the data analytics can move on, two treatment approaches are introduced. First, the given dataset is denoised; next, listwise deletion (LD) is proposed to handle the missing data. The performance of the proposed technique is investigated on datasets that otherwise could not be processed. The proposed model is evaluated with the massive online analysis (MOA) software and the results are summarized. Performance metrics such as the mean squared error (MSE), correlation coefficient (COEF), mean absolute error (MAE), root mean squared error (RMSE), and average error percentage are used to validate the proposed mechanism, and the LD projection is confirmed against actual values. The proposed LD outperforms the other treatments as it requires less state space, which implies a lower computation cost, and it proves capable of overcoming the limitation on analysis.
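A small sketch of the two treatments and the validation metrics listed above, on assumed synthetic data: the series is denoised with a rolling median, rows still missing are removed by listwise deletion (LD), a least-squares regression is fitted, and the MSE, MAE, RMSE, COEF, and average error percentage are printed. The denoising window, the data, and the single-predictor regression are simplifications, not the paper's MOA setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic impaired series: a linear trend with spike noise and missing values.
n = 500
t = np.arange(n, dtype=float)
y = 0.5 * t + rng.normal(0, 2.0, size=n)
y[rng.integers(0, n, 25)] += rng.normal(0, 40.0, size=25)   # spikes (noise)
y[rng.integers(0, n, 25)] = np.nan                          # missing values

# Treatment 1: denoise with a NaN-aware rolling median (window of 5).
padded = np.pad(y, 2, mode="edge")
den = np.array([np.nanmedian(padded[i:i + 5]) for i in range(n)])

# Treatment 2: listwise deletion (LD) -- drop observations that are still missing.
keep = ~np.isnan(den)
t_k, y_k = t[keep], den[keep]

# Multiple-regression step reduced to a single-predictor least-squares fit here.
coef, *_ = np.linalg.lstsq(np.c_[t_k, np.ones(len(t_k))], y_k, rcond=None)
pred = np.c_[t_k, np.ones(len(t_k))] @ coef

err = y_k - pred
print("MSE  :", (err ** 2).mean())
print("MAE  :", np.abs(err).mean())
print("RMSE :", np.sqrt((err ** 2).mean()))
print("COEF :", np.corrcoef(y_k, pred)[0, 1])
print("avg error % :", 100 * np.abs(err).mean() / np.abs(y_k).mean())
```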
Evaluation of computer network security using attack undirected geography
Chanintorn Jittawiriyanukoon
Indonesian Journal of Electrical Engineering and Computer Science Vol 16, No 3: December 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v16.i3.pp1508-1514

Abstract

Securing the wealth of data traversing the computer network at your fingertips is compulsory. But when attacks arise at various parts of the network it is difficult to protect them, especially when each incident is investigated separately. Geography is a necessary construct in computer networks, and the analysis of geography algorithms and metrics to curate insight from a security problem is a critical method of analysis for computer systems. A geography-based representation is employed to highlight aspects of a security problem at the local and global level, namely the eigenvalue, eccentricity, clustering coefficient, and cliques. A network security model based on attack undirected geography (AUG) is introduced. First, an analysis based on association rules is presented; the attack threshold value is then set from the AUG. The probability of an individual attack edge and of the associated network nodes is computed in order to quantify the security threat. Simulation is used to validate that the results are effective.
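The graph metrics named in the abstract (eigenvalues, eccentricity, clustering coefficient, cliques) can be computed on an undirected attack graph with networkx, as in the sketch below. The toy edge list, the degree-based edge probability, and the mean-value threshold are assumptions standing in for the paper's AUG model and association-rule analysis.

```python
import networkx as nx
import numpy as np

# Hypothetical undirected attack graph: nodes are hosts, edges are observed attack paths.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4), (4, 5), (5, 6), (6, 4)]
G = nx.Graph(edges)

# Global and local metrics named in the abstract.
eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))     # spectrum of the adjacency matrix
eccentricity = nx.eccentricity(G)                          # max distance from each node
clustering = nx.clustering(G)                              # local clustering coefficients
cliques = list(nx.find_cliques(G))                         # maximal cliques

print("largest eigenvalue :", round(float(eigenvalues.max()), 3))
print("eccentricity       :", eccentricity)
print("clustering coeff.  :", clustering)
print("maximal cliques    :", cliques)

# Illustrative attack-edge probability: weight each edge by the degrees of its endpoints.
deg = dict(G.degree())
total = sum(deg[u] + deg[v] for u, v in G.edges())
p_edge = {(u, v): (deg[u] + deg[v]) / total for u, v in G.edges()}
threshold = np.mean(list(p_edge.values()))                 # assumed threshold rule
risky = [e for e, p in p_edge.items() if p > threshold]
print("edges above threshold:", risky)
```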