Are You Happy with Your h-Index?

Grant and recruitment committees are increasingly showing interest in the h-index of individual researchers as a measure of the impact of their research. As a media researcher, should you care about the H-word?

While going through grant applications the other day, I was struck by how many researchers now announce their h-index. Are we media and communication researchers increasingly inclined to play the index game, encouraged by the platformisation of academic online presence, where different metrics play an ever more visible role?

Recently, an engineering professor painted a scary vision of scholarly ranking on Twitter by publishing a picture of a conference programme that lined up plenary speakers according to their h-index. Even though the conference turned out to be a predatory one, the tweet aroused strong reactions: over 220 retweets and 3,200 likes.

Even though many experts in altmetrics, bibliometrics and scientometrics have argued that the h-index is an obsolete and in many respects inappropriate metric, many academic platforms, such as those run by Google, Elsevier and Clarivate, continue to maintain its centrality.

Introduced by the American physics professor Jorge E. Hirsch in 2005, the h-index is intended to measure author-level impact (see Hirsch, 2005a). The h-index, also referred to as the Hirsch number, is the largest number h such that the author has published h papers that have each been cited at least h times.

In plain English: a researcher with an h-index of 16 has published 16 papers that have each been cited at least 16 times.
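
For readers who like to see the arithmetic, here is a minimal sketch, not taken from Hirsch's paper, of how the number can be computed from a list of per-paper citation counts; the citation figures are invented for illustration.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

# Six hypothetical papers: three of them have at least 3 citations each,
# but there are not four papers with at least 4 citations, so h = 3.
print(h_index([10, 8, 5, 3, 1, 0]))  # -> 3
```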

The issue is complicated by the fact that Google Scholar, Scopus and Web of Science each cover a different set of publications, so each yields a different h-index for the same person. You are likely to have a higher score on Google Scholar than on the two other platforms.

Regardless of the platform, the result depends strongly on academic age and discipline. The longer the academic career, the more citations accumulate and the higher the score climbs, particularly in the natural sciences. The index leans on the natural sciences' understanding of research and is, consequently, most frequently used as a metric in the hard sciences. Media researchers with such an orientation fare better, while media researchers in the humanities, who publish relatively many monographs in their national languages, end up at the bottom.

There are also other indices, essentially refinements of the h-index or attempts to correct for the disciplinary and age differences. But, honestly, do you know what your g-index is and how it is calculated?
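
For the record, and as my own illustration rather than anything drawn from the studies cited here: Leo Egghe's g-index, one of the best-known refinements, gives more weight to highly cited papers. It is the largest number g such that the g most-cited papers have, in total, at least g² citations. A sketch, again with invented citation counts:

```python
def g_index(citations: list[int]) -> int:
    """Largest g such that the g most-cited papers have at least g*g citations in total."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Same six hypothetical papers as above. Cumulative citations run
# 10, 18, 23, 26, 27, 27 against thresholds 1, 4, 9, 16, 25, 36,
# so the g-index is 5 while the h-index is only 3.
print(g_index([10, 8, 5, 3, 1, 0]))  # -> 5
```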

What is a good h-index?

When discussing impact with emerging and junior scholars, I am often asked what a good h-index is, or what is normal or typical, or at least enough not to be ashamed of.

This is always the point where I feel tempted to divert the focus to the meaningfulness of the work, which I will return to at the end of this text.

Nevertheless, if comparisons must be made, they are best made between researchers within the same discipline. A recent study in the field of clinical pathology (Schreiber and Giustini, 2019) found that assistant professors had an h-index of 2–5, associate professors 6–10, and full professors 12–24 (see also Rad et al., 2010).

“A typical h-index for full professors ranges from 12 to 24.”

Or, as Hirsch put it, 84 per cent of Nobel prize winners in physics had an h-index of at least 30, which is outstanding (Hirsch, 2005b). Indeed, the most prolific Nordic journalism and media professors have h-indices of almost 70 on Google Scholar and almost 40 on Web of Science, earning them places on global individual rankings such as Clarivate's list of highly cited researchers. Stephen Hawking, for comparison, has an h-index of 130 on Google Scholar and 82 on Web of Science.

However, an interesting feature of social networks is the so-called friendship paradox (see Benevenuto et al., 2016): on average, a user's connections have more connections than the user does; your friends seem to have more friends than you do. Applied to the h-index, comparisons may well leave researchers feeling that they rank below their co-authors.
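
A toy network, my own example rather than one from Benevenuto et al., makes the paradox concrete: the average number of friends that people's friends have exceeds the average number of friends, because well-connected people are counted disproportionately often.

```python
# Toy undirected friendship network: A is well connected, the others less so.
friends = {
    "A": ["B", "C", "D"],
    "B": ["A"],
    "C": ["A", "D"],
    "D": ["A", "C"],
}

# Average number of friends per person.
mean_degree = sum(len(f) for f in friends.values()) / len(friends)

# For each person, the average number of friends that their friends have.
mean_fof = sum(
    sum(len(friends[f]) for f in fs) / len(fs) for fs in friends.values()
) / len(friends)

print(mean_degree)  # 2.0
print(mean_fof)     # ~2.42 -> on average, your friends have more friends than you do
```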

Furthermore, the field of media and communication research is broad and diverse, ranging from the natural and formal sciences to the humanities, and even covering branches of artistic research. There are thus major field-internal differences, and it may be more appropriate in some subfields than in others to take part in the h-index competition.

Some scholars argue for the h-index because it helps us distinguish between a one-hit wonder and an enduring performer (see, e.g., Cronin and Meho, 2006). Those who want to get nerdy may dissect their scholarly career using Harzing's Publish or Perish software, which analyses a scholar's entire output by tracing a range of metrics across time. It works even if the researcher has very few citations.

From quantification to qualification

Impact, coupled with the universities' third mission and driven by funding and ranking endeavours, is, of course, important to address in measurable terms. Obviously, the possibility of capturing both the quantitative and qualitative output of a researcher in a single number fascinates many. Scholars, and especially funders, whose decisions strongly shape scholars' behaviour, should nevertheless not get obsessed with the h-index. It should be treated as what it is: an indicative measure.

Moreover, the European Commission, which has set open science as a policy priority, has formed an expert group on altmetrics to reform research measurement. The Commission has stated that new indicators must be developed to complement the conventional indicators of research quality and impact, and to do justice to open science practices.

Instead of focusing on rankings, I would rather ask: What is enough, and, more specifically, what is enough for you to feel satisfied with your performance? What are the qualitative dimensions of your scholarly work that make you happy? That may be the best h-index.

After all, it sometimes suffices that one single person finds your research results useful and valuable.

The author is happy with her h-index and addressed the topic recently in the webinar “Increasing Research Impact through Promotion” organised by the Journal of Creative Communications (JOCC) at Sage.

Some Recent Studies on h-Index

Agarwal, A. et al. (2016). Bibliometrics: Tracking research impact by selecting the appropriate metrics. Asian Journal of Andrology, 18(2), 296–309.

Ali, M. J. (2021). Forewarned is forearmed: The h-index as a scientometric. Seminars in Ophthalmology, 36(1–2).

Ayaz, S., Masood, N., & Islam, M. A. (2018). Predicting scientific impact based on h-index. Scientometrics, 114, 993–1010.

Koltun, V., & Hafner, D. (2021). The h-index is no longer an effective correlate of scientific reputation. PLoS ONE, 16(6), e0253397.

Norris, M., & Oppenheim, C. (2010). The h-index: A broad review of a new bibliometric indicator. Journal of Documentation, 66(5).

Photo: Asal Lofti/Unsplash