RES: An Interpretable Replicability Estimation System for Research Publications (Academic Article)

abstract

  • Reliable and faithful research is the cornerstone of breakthrough advances and disruptive innovation. Assessing the credibility of scientific findings and claims in research publications has long been a time-consuming and challenging task for researchers and decision-makers. In this paper, we introduce RES, an intelligent system that helps humans analyze the credibility of scientific findings and claims in publications in the social and behavioral sciences by estimating their replicability. The RES pipeline consists of four major modules that perform feature extraction, replicability estimation, result explanation, and sentiment analysis, respectively. Our evaluation against human experts' assessments suggests that RES achieves adequate performance. RES also provides a Graphical User Interface (GUI) that is publicly accessible at https://tamu-infolab.github.io/RES/.

published proceedings

  • THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE

author list (cited authors)

  • Wang, Z., Feng, Q., Chatterjee, M., Zhao, X., Liu, Y., Li, Y., ... Caverlee, J.

citation count

  • 0

complete list of authors

  • Wang, Zhuoer||Feng, Qizhang||Chatterjee, Mohinish||Zhao, Xing||Liu, Yezi||Li, Yuening||Singh, Abhay Kumar||Shipman, Frank M||Hu, Xia||Caverlee, James

publication date

  • 2022