Abstract
Evaluating journal quality and finding high-quality articles in the biomedical literature are challenging information retrieval tasks. The most widely used method for journal evaluation is the impact factor, while newer approaches for finding articles include PubMed's clinical query filters and machine learning-based filter models. The related literature has focused on the average behavior of these methods across all topics. The present study evaluates the variability of these approaches across different topics. We find that the impact factor and clinical query filters are unstable across topics, while a topic-specific impact factor and machine learning-based filter models appear more robust. Thus, when applying the less stable methods to a specific topic, researchers should be aware that performance may diverge from the expected average. Better yet, the more stable methods should be preferred whenever applicable.
Original language | English (US)
---|---
Title of host publication | MEDINFO 2007 - Proceedings of the 12th World Congress on Health (Medical) Informatics
Subtitle of host publication | Building Sustainable Health Systems
Pages | 716-720
Number of pages | 5
Volume | 129
State | Published - Dec 1 2007
Event | 12th World Congress on Medical Informatics, MEDINFO 2007 - Brisbane, QLD, Australia; Duration: Aug 20 2007 → Aug 24 2007
Other

Other | 12th World Congress on Medical Informatics, MEDINFO 2007
---|---
Country/Territory | Australia
City | Brisbane, QLD
Period | 8/20/07 → 8/24/07