Contact Name
Suwanto Sanjaya
Contact Email
suwantosanjaya@uin-suska.ac.id
Phone
-
Journal Mail Official
coreit@uin-suska.ac.id
Editorial Address
-
Location
Kab. Kampar,
Riau
INDONESIA
Jurnal CoreIT
ISSN : 2460-738X     EISSN : 2599-3321     DOI : -
Core Subject : Science
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi is published by the Informatics Engineering Department, Universitas Islam Negeri Sultan Syarif Kasim Riau, with Registration Number: Print ISSN 2460-738X | Online ISSN 2599-3321. The journal is published twice a year (June and December) and contains the results of research on Computer Science and Information Technology.
Arjuna Subject : -
Articles 172 Documents
Membandingkan Tingkat Efisiensi Metode Tsukamoto dan Sugeno untuk kasus Pneumonia Wahyuni, Elyza Gustri
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 7, No 2 (2021): Desember 2021
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (530.218 KB) | DOI: 10.24014/coreit.v7i2.15085

Abstract

There are three methods that can be used to implement a Fuzzy Inference System (FIS), namely Tsukamoto, Mamdani, and Sugeno. Each of the three methods has its own characteristics and advantages, and several previous studies have used them to compare efficiency across different cases. This study aims to determine the more efficient method by comparing two FIS methods, Tsukamoto and Sugeno, on a medical case from a previous study whose results were validated by pulmonary specialists. The data are the same as in that previous study, namely data on the diagnosis of pneumonia. The analytical method used is the Mean Absolute Percentage Error (MAPE), which measures how close a predicted result is to the actual value. For the cases tested, the Sugeno method produced a smaller MAPE than Tsukamoto, namely 3.15%, which means that the Sugeno results are closer to the actual pneumonia score.
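The MAPE measure described in this abstract can be sketched in a few lines of Python; the severity scores below are hypothetical illustrations, not the paper's data:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error: mean of |actual - predicted| / |actual|,
    expressed as a percentage of the actual values."""
    if len(actual) != len(predicted):
        raise ValueError("series must have the same length")
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical pneumonia severity scores: expert values vs. FIS output
actual = [70.0, 55.0, 80.0]
fis_output = [68.0, 56.0, 79.0]
print(round(mape(actual, fis_output), 2))
```

A lower MAPE means the fuzzy system's scores track the specialists' scores more closely, which is the sense in which the abstract calls the Sugeno method more efficient.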
Determination of Discounts Using K-Means Clustering with RFM Models in Retail Business Zebua, Rina Sisca; Heroza, Rahmat Izwan; Adrian, Monterico; Atrinawati, Lovinta Happy
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 1 (2022): June 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (835.8 KB) | DOI: 10.24014/coreit.v8i1.14695

Abstract

Intense competition in the business sector motivates every company to manage services to regular consumers as well as possible. Customer loyalty can be increased by grouping customers into several groups and determining an appropriate and effective marketing strategy for each group. This study aims to propose the right targeting of discounts to increase customer loyalty in the retail business. Customer grouping uses data mining techniques with the Cross-Industry Standard Process for Data Mining (CRISP-DM) method, which is divided into six phases: business understanding, data understanding, data preparation, modelling, evaluation, and deployment. Clusters are formed using the k-means clustering method based on RFM (recency, frequency, monetary) analysis. From the results of the silhouette test on 2,734 transaction records from 210 customers of PT. XYZ from October 2019 to March 2020, three customer clusters were formed. Of these three clusters, the one with the best frequency and monetary values was chosen as the group most worth giving a discount in order to maintain its loyalty.
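The RFM step that precedes the clustering in this abstract can be sketched as follows; the customer IDs, dates, and amounts are invented for illustration and are not the PT. XYZ data:

```python
from datetime import date
from collections import defaultdict

def rfm_table(transactions, snapshot):
    """Build per-customer RFM values from (customer_id, date, amount) rows.
    recency = days since the last purchase as of `snapshot`,
    frequency = number of transactions, monetary = total amount spent."""
    last = {}
    freq = defaultdict(int)
    spend = defaultdict(float)
    for cust, when, amount in transactions:
        last[cust] = max(last.get(cust, when), when)
        freq[cust] += 1
        spend[cust] += amount
    return {c: ((snapshot - last[c]).days, freq[c], spend[c]) for c in last}

# Hypothetical mini-ledger in the spirit of the retail data set
rows = [
    ("A", date(2020, 3, 1), 120.0),
    ("A", date(2020, 3, 20), 80.0),
    ("B", date(2019, 12, 5), 40.0),
]
print(rfm_table(rows, date(2020, 3, 31)))
```

The resulting per-customer (recency, frequency, monetary) triples, typically after scaling, are what a k-means implementation would then partition into clusters.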
Computer Vision for Identifying and Classifying Green Coffee Beans: A Review Ligar, Bonang Waspadadi
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 1 (2022): June 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (323.141 KB) | DOI: 10.24014/coreit.v8i1.17450

Abstract

Coffee is widely consumed around the world and is considered one of the most important beverages today. Factors such as color, texture, size, and aroma contribute to the quality of coffee beans, as do processes along the production chain such as planting, roasting, and grinding. Those processes are worthless if the quality of the coffee beans is low, so it is important to use only the best-quality beans. The challenge, therefore, is to develop a system that uses computer vision either to identify high-quality beans or to classify them by species, easing the effort required of all actors in the supply chain. Providing information to end customers is a defining factor in pushing the coffee industry forward. This paper reviews the literature on using computer vision for coffee beans. Across the selected studies, computer vision techniques were used for two main purposes: identification and classification. Research on this topic is still limited; hence, there is still plenty of room for further study. This review also aims to provide research material for future researchers.
Implementation of Google Translate Application Programming Interface (API) as a Text and Audio Translator Nabila, Zhara; Ayu, Humairoh Ratu; Surtono, Arif
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 1 (2022): June 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (600.517 KB) | DOI: 10.24014/coreit.v8i1.15629

Abstract

In this research, a text and audio translator program was created using the Google Translate Application Programming Interface (API). The program combines deep learning, machine translation, and text-to-speech, designed using the high-level programming language Python. The system is predictive: it tries to produce output that matches the user's wishes. The method used is a classification method that maps input attributes to class attributes, generating automatic output through three stages, namely Machine Learning, Natural Language Processing, and Speech. The results showed that 90.38% of videos were successfully translated into text and audio, 9.62% of videos failed to be translated because the owner limited public interaction, and synchronization between text and audio reached 89%-97%. It is hoped that this translator program can help the general public understand foreign-language videos, be useful in education and technology, and help persons with disabilities to communicate.
Web-Based General Affair Information System Using Prototyping Method Amrulloh, Arif; Saintika, Yudha
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 1 (2022): June 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (517.505 KB) | DOI: 10.24014/coreit.v8i1.17029

Abstract

Operations are one of the main activities in a company; almost all companies have a General Affairs (GA) section that takes care of all household and operational matters. The bigger the company, the more complex the problems it faces, and technology is needed to overcome them. In today's digital era, many technologies can be used to assist companies in administrative activities, one of which is the website-based application. Website-based applications have an advantage over desktop applications because users only need a browser to access them. This research builds a web-based general affairs administration system using the prototyping method. Prototyping requires interaction between the developer and the client, which helps to overcome mismatches between the system developer and the client. Testing is carried out using black-box techniques focused on checking system functionality. The results of tests conducted by 26 respondents show that the system built is 100% feasible and meets expectations.
Application of Predictive Analytics To Improve The Hiring Process In A Telecommunications Company Jayanti, Luh Putu Saraswati Devia; Wasesa, Meditya
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 1 (2022): June 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (341.211 KB) | DOI: 10.24014/coreit.v8i1.16915

Abstract

Industry 4.0 refers to the increasing tendency towards automation and data exchange through technologies like Big Data and AI. To adapt to these technologies, telecommunication companies need great people so that they can continue to survive. The problem companies often face in hiring great people is that recruitment costs a lot and takes a long time. Predictive analytics can assist in identifying system issues and solutions. This study aims to develop predictive analytics that improve recruitment screening based on CVs and to find the best predictive model for the company to reduce costs and long recruitment cycles. The authors built the predictive model in four stages: data collection, data preprocessing, model building, and model evaluation, using the Random Forest and Naive Bayes classification algorithms. Both models predicted most of the data set correctly, with 70% accuracy, 70% precision, and a recall rate above 80%. Comparing the two techniques, Random Forest outperforms Naive Bayes for this predictive model. Predictive analytics for hiring is widely discussed, but there are few data mining frameworks that help find rules based on the CVs of people who have worked for companies before.
Keywords: Recruitment, Human Resource, HR Analytic, Predictive Analytic, Random Forest, Naïve Bayes
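The accuracy, precision, and recall figures reported in this abstract can be computed as below; the labels are a made-up toy example, not the company's CV data:

```python
def evaluate(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary classifier predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy labels: 1 = candidate passed screening, 0 = did not
y_true = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
y_pred = [1, 1, 1, 1, 0, 1, 0, 1, 1, 1]
print(evaluate(y_true, y_pred))  # (0.8, 0.75, 1.0)
```

High recall with lower precision, as reported in the study, means the screening model misses few suitable candidates at the cost of passing some unsuitable ones to the next stage.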
Analysis and Design of Information Systems for Lecturer Performance Reports at Jambi Muhammadiyah University Kurniawansyah, Kevin; Marthiawati. H, Noneng; Sari, Anita Puspita
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 2 (2022): December 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (1996.673 KB) | DOI: 10.24014/coreit.v8i2.19874

Abstract

In a tertiary institution, lecturers are needed to carry out the tridharma activities of higher education, which are the responsibility of the lecturers themselves. The Higher Education Tridharma covers Education, Research, and Service. The researchers observed several problems related to lecturer performance, especially at the University of Muhammadiyah Jambi. So far, lecturer performance data there has been processed conventionally: the admin on duty inputs data into Microsoft Excel, and supporting files submitted by each lecturer are simply stored on a computer without a data center or centralized data storage. The purpose of this study is to analyze and design a lecturer performance report information system that simplifies business processes, produces lecturer performance reports according to needs, and processes lecturer performance data, including identity data, lecturer tridharma data, and other supporting data, more effectively and efficiently. Using the system prototyping method, the study produces a web-based lecturer performance report information system at the Muhammadiyah University of Jambi to overcome the existing problems.
Academic Information Service Chatbot Using HMM and AIML Affandes, Muhammad; Pizaini, Pizaini
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 2 (2022): December 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (501.229 KB) | DOI: 10.24014/coreit.v8i2.19638

Abstract

The growth of the UIN Suska Riau campus has led to an escalating amount of data and information that must be maintained, including academic information. UIN Suska Riau is responsible for managing and providing academic information to students and the rest of the academic community. Academic questions can be asked through the Customer Care Center (C3) in the Academic System or by coming directly to the PTIPD UIN Suska Riau office. There are still limitations in serving the questions submitted through C3, because officers can only serve during working hours, both online and offline. A chatbot can support the work of C3 officers in answering the questions asked. The system is built on Named Entity Recognition (NER) using the Artificial Intelligence Markup Language (AIML), with NER analysis performed using a Hidden Markov Model (HMM). This study uses the contents of the academic manual as its knowledge base, with 150 categories of questions and 30 answers, producing an accuracy of 55%.
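HMM-based sequence labelling of the kind described here is typically decoded with the Viterbi algorithm; a minimal sketch follows, where the toy states, words, and probabilities are illustrative assumptions, not the paper's trained model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence under an HMM,
    as used for sequence-labelling tasks such as NER. Unknown words get a
    tiny floor probability instead of zero."""
    best = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (best[-1][prev][0] * trans_p[prev][s] * emit_p[s].get(o, 1e-9),
                 best[-1][prev][1] + [s])
                for prev in states)
            layer[s] = (prob, path)
        best.append(layer)
    return max(best[-1].values())[1]

# Toy two-state tagger over Indonesian question words (made-up probabilities)
states = ["ENTITY", "OTHER"]
start_p = {"ENTITY": 0.5, "OTHER": 0.5}
trans_p = {s: {t: 0.5 for t in states} for s in states}
emit_p = {"ENTITY": {"riau": 0.9}, "OTHER": {"kapan": 0.5, "wisuda": 0.5}}
print(viterbi(["kapan", "wisuda", "riau"], states, start_p, trans_p, emit_p))
```

The tags recovered this way mark which tokens are named entities; a chatbot can then match the recognized entities against AIML categories to select an answer.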
Application of Triple Exponential Smoothing Method to Predict LQ45 Saham Stock Price Nurdin, Nurdin
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 2 (2022): December 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (857.033 KB) | DOI: 10.24014/coreit.v8i2.14935

Abstract

The capital market is an investment model that is currently growing rapidly because more and more digital investment platforms can be accessed from mobile smartphones. This surge of interest in investing means that many people experience losses because they do not understand the investment risks. For this reason, the ability to perform technical analysis based on historical data is necessary. The objects of this research are the LQ45 shares of three companies: Indofood Sukses Makmur Tbk (INDF), Unilever Indonesia Tbk (UNVR), and Aneka Tambang Tbk (ANTM). The method used is Triple Exponential Smoothing, a prediction method based on statistical analysis. The variables used are the historical Open, High, Low, and Close prices. The stages are the collection of 125 historical records from the Google Finance financial database, followed by the Triple Exponential Smoothing calculation; the data is stored in a database and presented as graphs and tables. Using smoothing parameter values of 0.13 and 0.87, the method produces mean error margins of -0.10681% for the Open price, -1.1156% for the High price, 1.4616% for the Low price, and -0.2504% for the Close price, meaning the margin of error lies between -0.1% and 1%. Triple Exponential Smoothing can thus be applied to predict stock prices, helping investors analyze stock price movements.
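Triple exponential smoothing (additive Holt-Winters) can be sketched as below; the price series, the seasonal period, and the beta value are illustrative assumptions, since the abstract reports only two of its smoothing parameter values (0.13 and 0.87):

```python
def triple_exponential_smoothing(series, period, alpha, beta, gamma, horizon):
    """Additive Holt-Winters forecast: smooths a level, a trend, and a
    seasonal component, then projects `horizon` steps ahead."""
    # initialise level, trend, and seasonal offsets from the first cycles
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    season = [series[i] - level for i in range(period)]
    for i, x in enumerate(series):
        s = season[i % period]
        prev_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[i % period] = gamma * (x - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(series) + h) % period]
            for h in range(horizon)]

# Illustrative closing prices with a period-4 wobble (not INDF/UNVR/ANTM data)
prices = [100, 102, 101, 103, 104, 106, 105, 107, 108, 110, 109, 111]
print(triple_exponential_smoothing(prices, 4, 0.13, 0.1, 0.87, 4))
```

Comparing such forecasts against the held-out actual prices yields the per-variable error margins the study reports for the Open, High, Low, and Close series.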
Data Warehouse Design For Sales Transactions on CV. Sumber Tirta Anugerah Syaputra, Muhammad Dwiky; Nazir, Alwis; Gusti, Siska Kurnia; Sanjaya, Suwanto; Syafria, Fadhilah
Jurnal CoreIT: Jurnal Hasil Penelitian Ilmu Komputer dan Teknologi Informasi Vol 8, No 2 (2022): December 2022
Publisher : Fakultas Sains dan Teknologi, Universitas Islam Negeri Sultan Syarif Kasim Riau

Show Abstract | Download Original | Original Source | Check in Google Scholar | Full PDF (644.133 KB) | DOI: 10.24014/coreit.v8i2.19800

Abstract

Data warehouses are widely implemented in retail companies, but CV. Sumber Tirta Anugerah, a paint-product retailer, has not yet implemented one. Over time, its sales transaction data has become increasingly difficult to process because it is still stored in Microsoft Excel. This is a serious obstacle to using historical data to support decision making, and storing the sales data is difficult because its volume is large. Based on these problems, a data warehouse design is needed for the sales transaction data. The design uses Kimball's nine-step method and a star schema. The ETL process (extract, transform, and load) is performed using the Pentaho software, and the Tableau software is used to visualize the processed data as graph and dashboard reports. The result of this research is a data warehouse design using the nine steps and a star schema, with a transformation response time of 4,048 ms.

Page 11 of 18 | Total Records : 172