B. Alternatives to Citation Analysis
Unlike citation analysis, most other measures of scholarly prestige and influence tend to be vulnerable to personal bias. One of the most commonly used methods is peer review. In this method, a researcher may survey directors or chairs of criminology departments, or survey members of a scholarly society, such as the American Society of Criminology or the Academy of Criminal Justice Sciences, and ask them to rank academic journals, books, or PhD programs in criminology and criminal justice. However, the results of this type of survey may clearly be affected by the respondents’ personal opinions. For example, scholars who have served as an editor or member of the editorial board of a particular journal may be more likely to give that journal a higher ranking because of their familiarity with it. Similarly, when ranking PhD programs, scholars may be inclined to give the program from which they graduated or where they currently are employed a higher ranking, or to give a rival department a lower ranking. In essence, regardless of the respondent’s desire to be objective, personal preferences or knowledge may creep in and affect his or her responses to this type of survey.
A related method is to consider which individuals receive scholarly prizes or are elected to major offices in scholarly societies. This method is similar to peer review, because in most cases, recipients are chosen or elected by members of the field. However, these methods all tend to identify the same individuals. They are also equally vulnerable to bias, because it is obviously easy to be influenced by personal likes or dislikes of the individual, department, or scholarly work under review.
Another method that is used to measure prestige and influence is to count the number of journal publications of an individual criminologist or of the entire faculty of a criminology department. This method clearly is far more quantitative and objective than that of peer review; however, it measures only productivity and does not give a clear indication of influence. Just because an article is published in a journal does not mean that it will be read and/or cited or that other scholars will view the article as being important in any way. In addition, many of these studies attempt to weight the publications in some way, such as by the prestige of the journal. This often reduces the objectivity of the method, because journal prestige usually is determined by peer review. Finally, the rapidly increasing number of journals in the field of criminology is creating many additional outlets for publication and may serve to inflate publication rates, thus reducing the validity of this measure.
Overall, it appears that none of these methods provides as straightforward, objective, and quantitative a measure of scholarly influence and prestige as citation analysis. Although citation analysis has its shortcomings, it appears to be more valid and reliable than any other method. “The overwhelming body of evidence clearly supports the use of citation analysis as a measure of scholarly eminence, influence, and prestige” (Cohn, Farrington, & Wright, 1998, p. 4).
C. Advantages of Citation Analysis
One of the most notable advantages of citation analysis is that, unlike other measures of scholarly influence or prestige, such as peer rankings, citation analysis is objective and quantitative and is not affected by any personal bias. Whether the data are obtained from a citation index such as SSCI or from the reference lists of journals and other scholarly publications, they are readily and publicly available and cannot be affected by any personal bias, even that of the researcher.
The question of the reliability and validity of citation counts has been examined by a variety of researchers. Research in a wide variety of academic disciplines supports the relationship between citation counts and other measures of scholarly influence, intellectual reputation, professional prestige, and scientific quality. Citation counts have been found to be highly correlated with scholarly productivity, peer ratings of professional eminence, scholarly recognition (e.g., election to the National Academies of Science), and the receipt of scholarly prizes (e.g., the Nobel Prize in physics). There also appears to be a strong correlation between citation counts and ratings of the prestige of university departments and doctoral programs. Researchers also have found citation counts to be correlated with peer rankings and journal publications. Rushton (1984) stated, “It is fair to say that citation measures meet all the psychometric criteria for reliability,” and concluded that “citation counts are highly valid indices of ‘quality’” (p. 34).
Another concern frequently raised by those who oppose the use of this method is that it focuses on quantity rather than quality of citations. However, this appears to be an untenable position, given that citation counts are highly correlated with other measures of prestige and influence. Although it has been suggested that a high citation count may indicate a past contribution to the field, rather than a current or ongoing one, research suggests that, in general, scholars tend to cite more recent works rather than older ones. Researchers such as Cohn et al. (1998) have suggested that the influence of scholarly works tends to decay over time as they are supplanted by more recent work. One recent study estimated that social science research works have a half-life for citations of only about 6 years (Cohn & Farrington, 2008).
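The half-life figure can be read through a simple exponential-decay model. The sketch below is purely illustrative (it is not Cohn and Farrington's actual method, and the exponential form is an assumption): under a 6-year half-life, a work's annual citation rate falls to half its initial level after 6 years and to a quarter after 12.

```python
import math

# Assumed illustrative model: citation rates decline exponentially.
# A 6-year half-life implies a decay constant of ln(2) / 6 per year.
HALF_LIFE_YEARS = 6.0
DECAY_RATE = math.log(2) / HALF_LIFE_YEARS

def fraction_remaining(years: float) -> float:
    """Fraction of a work's initial citation rate remaining after `years`."""
    return math.exp(-DECAY_RATE * years)

print(round(fraction_remaining(6), 2))   # -> 0.5 (one half-life)
print(round(fraction_remaining(12), 2))  # -> 0.25 (two half-lives)
```

On this reading, most of a typical work's citation impact is concentrated in roughly its first decade, which is consistent with the observation above that scholars tend to cite recent rather than older works.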