The use of top-tier journal publications and books with respected university presses is a good starting point, but why not the number of citations? Citations are a crude indicator of how much a given piece of research is being read and applied. There is a bit of magic in this (certain fields become suddenly ‘hot’ independent of research quality), but it would be another worthwhile set of data to try to track.
Thank you for your comment. We will be submitting all comments to the Research Community Committee for consideration.
Citation metrics can be very useful tools for gauging research quality and impact. Raw citation counts may be crude indicators, but there are a number of more sophisticated metrics, such as the h-index, the contemporary h-index, the Eigenfactor, and Article Influence scores. There are also citation analysis tools, such as Web of Science and Publish or Perish, that can help researchers evaluate their impact.
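For readers unfamiliar with how the h-index mentioned above is defined: an author has index h if h of their papers have each been cited at least h times. A minimal sketch (the function name and sample citation counts are illustrative, not from any real dataset):

```python
def h_index(citations):
    """Return the largest h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; rank i qualifies while the i-th paper
    # still has at least i citations.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with >= 4 citations each)
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

Note how a single highly cited paper (the 25 in the second example) barely moves the index; that insensitivity to outliers is part of why the h-index is preferred over raw counts.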
In addition to citation metrics, other performance indicators might include the number of internationally coauthored articles, books and other works; according to the NSF, coauthored articles grew from 40% of the world’s total science & engineering articles in 1988 to 64% in 2008. The number of invention disclosures and patent applications might also be measured; universities are starting to recognize faculty for obtaining patents and the NSF counts both articles and patents as research outputs.
Thanks very much for your message, and for participating in this process. I will forward your comments appropriately for distribution to the Research Community Committee.
Global impact, citations, publications (conferences and literature), and educational impact would be good to consider. I would caution against equating the value of research with the size of grant funding or with specific granting agencies; this would serve to limit academic freedom and discovery.
Thank you for your comment. We will be submitting all comments to the Research Community Committee for consideration.
I agree with Joy Wee that the university cannot equate research success with funding and grants, although that has been the general trend. There are academic areas where what one needs is time, not equipment, teams of research assistants, etc.
I guess it would be important to compare like with like, i.e. compare sociology citations with sociology citations. Citation patterns differ widely by field, hence it wouldn’t be helpful to compare a philosopher’s impact with that of a clinician. The same goes for productivity (i.e. the quantity of peer-reviewed papers), and for impact factors (which gauge journal impact, as opposed to that elusive quality). You have to compare the impact of journals within the same basket of journals, not impact as such. Comparing the impact of theology journals with the impact of physics journals would be pointless; comparing theology journals amongst each other seems fair game. Having said that, I do think journal impact factors are one essential tool with which to gauge the relevance of an author’s contribution to academic discourse in his or her field.
Office of the Vice-Principal (Research)
Senate Academic Planning Task Force