Using social media tools to rank research papers

There’s a new way to evaluate the relevance of research papers: social bookmarking systems.

The details of this new approach are described in the latest issue of the International Journal of Internet Technology and Secured Transactions.

A very large number of people use social bookmarking systems; indeed, few don’t use at least one of them, whether it’s Delicious, Connotea, Trunk.ly, Reddit or any of the many others. For scholars and researchers, CiteULike is one of the more popular options and has been around since November 2004. It lets users bookmark references, but it also embeds traditional bibliographic management.

People who use these systems usually learn that the only way to make them useful for others is to tag references carefully, but selectively.

In general, social bookmarking is very useful, but it could be more useful still if it had a better ranking system.

Researchers in Thailand have now proposed “CiteRank”, which combines a similarity ranking with a static ranking.

“Similarity ranking measures the match between a query and a research paper index,” they explain. “While a static ranking, or a query-independent ranking, measures the quality of a research paper.”

Siripun Sanguansintukul of Chulalongkorn University in Bangkok and coworkers use a set of factors, comprising the number of groups citing the posted paper, the year of publication, the paper’s post date and its priority rating, to determine a static ranking score. This score is then combined with the query-dependent similarity measure to give the CiteRank.
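
The article doesn’t give the exact scoring formula, but the idea behind the static score can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, assuming each of the four factors is normalised to [0, 1] and averaged with equal weight; the function name, the time windows and the caps (max_priority, max_groups) are illustrative assumptions, not the paper’s actual parameters.

```python
from datetime import datetime

def static_score(year_published, post_date, priority, group_count,
                 max_priority=5, max_groups=50):
    """Hypothetical static (query-independent) score for a posted paper.

    Each of the four factors named in the article -- publication year,
    post date, priority rating, and number of groups containing the
    paper -- is normalised to [0, 1] and averaged. The real CiteRank
    formula and factor weights are not specified in this article.
    """
    now = datetime.now()
    # Newer publications score higher (20-year window, clamped to [0, 1]).
    recency = max(0.0, 1.0 - (now.year - year_published) / 20.0)
    # Recently posted papers score higher (1-year window, clamped).
    post_age_days = (now - post_date).days
    freshness = max(0.0, 1.0 - post_age_days / 365.0)
    # User-assigned priority rating, scaled by its maximum value.
    priority_norm = priority / max_priority
    # Papers posted in many groups score higher (capped at max_groups).
    popularity = min(group_count, max_groups) / max_groups
    return (recency + freshness + priority_norm + popularity) / 4.0

# Example with hypothetical values: a 2010 paper, posted in early 2011,
# given priority 4 and appearing in 12 groups.
print(static_score(2010, datetime(2011, 1, 15), priority=4, group_count=12))
```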

The team tested their new ranking algorithm by having literature researchers rate the results it produced when ranking research papers retrieved from the search engine, based on an index that uses TTA (tag-title-abstract). CiteRank weights the two components 80:20: a combination of 80% similarity ranking and 20% static ranking was found to be most effective, as the sketch below shows.
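
As a rough illustration of that 80:20 split, here is a minimal sketch, assuming both component scores are already normalised to [0, 1]. The 0.8/0.2 weights come from the article; everything else, including the function name, is illustrative.

```python
def cite_rank(similarity, static, w_similarity=0.8, w_static=0.2):
    """Weighted combination of the query-dependent similarity score and
    the query-independent static score, using the 80:20 split the study
    found most effective. A sketch, not the paper's exact formula."""
    return w_similarity * similarity + w_static * static

# Example: a paper with a strong query match but a modest static score.
print(cite_rank(similarity=0.9, static=0.4))  # 0.8*0.9 + 0.2*0.4 = 0.80
```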

They discovered that many literature researchers favored recent or just-posted papers, but they also rated classic papers that emerged in the results very highly if those papers had been posted in several different user groups or communities. Users located good papers based on their priority ratings, but TTA remained important.

“CiteRank combines static ranking with similarity ranking to enhance the effectiveness of the ranking order,” explains Sanguansintukul. “Similarity ranking measures the similarity of the text (query) with the document. Static ranking employed the factors posted on paper. Four factors used are: year of publication, posted time, priority rating and number of groups that contained the posted paper.”

“Improving indexing not only enhances the performance of academic paper searches, but also all document searches in general. Future research in the area consists of extending the personalization; creating user profiling and recommender system on research paper searching,” the team says.

The experimental results from the study can help researchers customize the algorithm to tweak its rankings and further improve search results.
