Ardhana, Naufal Reky
Unknown Affiliation

Published: 1 document
Articles

Found 1 document

Information Retrieval Related to Information Regarding Covid-19 Using Transformers Architecture
Wiktasari, Wiktasari; Prayitno, Prayitno; Kartika, Vinda Setya; Lavindi, Eri Eli; Ardhana, Naufal Reky; Nariswana, Rucirasatti
Jurnal Teknik Informatika (Jutif) Vol. 6 No. 2, April 2025
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2025.6.2.2606

Abstract

The spread of the COVID-19 virus occurred exponentially, creating a need for advanced search technologies that provide accurate information. The primary challenge in retrieving COVID-19 related information lies in the diversity and rapid change of the data, as well as the need to understand specific medical contexts. Unstructured sources, such as research articles, news reports, and social media discussions, further complicate the retrieval of relevant and up-to-date information. As the volume of data related to the COVID-19 pandemic grows, effective and accurate information retrieval systems become essential. The Transformer architecture, known for its capabilities in natural language processing and in handling complex contexts, offers great potential for improving search quality in the healthcare domain. BERT, a deep learning model, is used to perform searches for a given query, with results ranked by relevance. The ranking process uses the BERT architecture to compare two transformer-encoder configurations: bi-encoders and cross-encoders. A bi-encoder uses two separate encoders to process two different inputs, such as a query and a document, independently. In contrast, a cross-encoder processes both texts simultaneously with a single encoder, allowing the model to capture contextual interactions between them. The results indicate that cross-encoder performance is significantly better than bi-encoder performance on relatively small datasets. Evaluation shows an NDCG score of 0.89 for the bi-encoder and 0.90 for the cross-encoder; an mAP of 0.70 for the bi-encoder and 0.89 for the cross-encoder; and an MRR of 1.0 for both.
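The bi-encoder vs. cross-encoder distinction described in the abstract can be sketched with toy scorers. This is a minimal illustration, not the paper's implementation: a bag-of-words cosine similarity stands in for the bi-encoder's independent BERT embeddings, and a joint Jaccard overlap stands in for the cross-encoder's single forward pass over the concatenated pair. The corpus, query, and relevance labels are invented for demonstration; an MRR helper shows how one of the reported ranking metrics is computed.

```python
import math
from collections import Counter

# Toy corpus; bag-of-words counts stand in for BERT embeddings.
docs = [
    "covid-19 vaccine trial results and efficacy",
    "stock market reacts to interest rate news",
    "transmission of the covid-19 virus in schools",
]

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bi_encoder_rank(query, docs):
    # Bi-encoder: documents are embedded independently of the query,
    # so document vectors can be precomputed offline; ranking at
    # query time only needs a cheap similarity function.
    doc_vecs = [embed(d) for d in docs]
    q_vec = embed(query)
    scores = [cosine(q_vec, d) for d in doc_vecs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

def cross_encoder_score(query, doc):
    # Cross-encoder stand-in: the scorer sees both texts jointly
    # (here, Jaccard word overlap), so every (query, doc) pair needs
    # a fresh pass -- costlier, but interaction-aware.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

def cross_encoder_rank(query, docs):
    scores = [cross_encoder_score(query, d) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

def mrr(rankings, relevant):
    # Mean Reciprocal Rank: average of 1/position of the first
    # relevant document over all queries (1.0 = always ranked first).
    total = 0.0
    for ranking, rel in zip(rankings, relevant):
        for pos, idx in enumerate(ranking, start=1):
            if idx in rel:
                total += 1.0 / pos
                break
    return total / len(rankings)

query = "covid-19 virus transmission"
print(bi_encoder_rank(query, docs))     # [2, 0, 1]
print(cross_encoder_rank(query, docs))  # [2, 0, 1]
print(mrr([bi_encoder_rank(query, docs)], [{2}]))  # 1.0
```

On this tiny example both rankings agree; the paper's point is that with real BERT encoders the cross-encoder's joint attention over query and document captures interactions the bi-encoder's independent embeddings miss, which is where its NDCG and mAP gains come from.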