Background: Peer evaluation is a cornerstone of scientific assessment. In this paper, we analyze whether a specific form of peer evaluation, the pre-publication selection of the best papers at Computer Science (CS) conferences, performs better than random selection when judged by the citations the papers later receive.
Methods: For 12 CS conferences, each covering several editions (years), we collected citation counts from Scopus for both the best papers and the non-best papers.