
COJ Reviews & Research

Impact Factor: Journal Competition, Scientific Excellence or a Fool’s Game in the Publishing Industry?

Wycliffe Wanzala*

Department of Biomedical Sciences, School of Science and Information Sciences, Kenya

*Corresponding author: Wycliffe Wanzala, Department of Biomedical Sciences, School of Science and Information Sciences, Kenya

Submission: June 08, 2018; Published: June 21, 2018

DOI: 10.31031/COJRR.2018.01.000508

ISSN 2639-0590
Volume 1 Issue 2

Abstract

In today’s world of competition for economic survival, it is not easy to give a convincing answer to the question posed in the title of this communication. As authors, researchers, academicians, leaders of research programmes and scholars, we cannot afford to ignore the subject of the Impact Factor, because it directly and indirectly affects our livelihoods at all levels of society: decisions are now made on its basis to evaluate our performance and that of companies, departments and institutions. Much has been said about Impact Factors in many spheres of human life and, of course, everybody considers his or her own arguments to be right. What best defines us as individual authors, researchers, academicians and scholars should be prioritized above the commercial elegance now attached to the metric. Careful and serious consideration is needed in order to avoid jeopardizing productive and developmental research. Knowing the malpractices some Editors-in-Chief employ to attain a high Impact Factor for their respective journals, is it worthwhile to maintain the Impact Factor as a proxy measure of the quality of research and academia in society? Considering the origin and evolution of the Impact Factor as an index metric for research journals, and the human propensity for malpractice, it is not prudent for the Impact Factor to be used to assess the quality and capability of individual authors, researchers, academicians, research programmes and scholars, or of institutions and companies.

Keywords: Citation analysis; Science Citation Index; Journals; Publishing industry; Research and academia

Abbreviations: IF: Impact Factor; JIF: Journal Impact Factor; JCR: Journal Citation Reports; SCI: Science Citation Index; ISI: Institute for Scientific Information; CIA: Citation Indexing and Analysis; ICSU: International Council for Science; DORA: Declaration on Research Assessment

Introduction

Background of impact factor

The Impact Factor (IF), or Journal Impact Factor (JIF), has been variously defined as a measure of the average number of citations to recent articles published in a journal in a particular year, used to indicate the relative significance or rank of that journal within its field. The IF of a given journal for a given year is calculated as the number of citations received in that year by articles published in the journal during the two preceding years, divided by the total number of articles published in the journal during those two preceding years. Impact Factors have been calculated yearly since 1975 for journals listed in the Journal Citation Reports (JCR).
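Expressed as a formula, and taking 2018 as the year under consideration, the calculation reads:

    IF(2018) = C / P

where C is the number of citations received in 2018 by articles the journal published in 2016 and 2017, and P is the total number of articles the journal published in 2016 and 2017. As a purely hypothetical worked example, a journal that published 200 articles across 2016-2017, whose articles attracted 500 citations in 2018, would have a 2018 IF of 500/200 = 2.5.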

The IF was devised by Eugene Garfield following the development of the Science Citation Index (SCI) at the Institute for Scientific Information (ISI), work that started in 1955; ISI was later sold to the Thomson Corporation (Thomson Reuters) in 1992 [1,2]. The intention of Garfield’s [1] ideas on Citation Indexing and Analysis (CIA) was to allow authors, researchers, academicians and scholars to expedite their research process, evaluate the impact of their work, spot scientific trends, and trace the history of modern scientific thought. Garfield’s [1] commercial elegance in turning what was, at least at the time, a difficult-to-understand, specialist metric into a highly profitable business has been noted [3]. The commercial value attached to the IF is currently compromising the initial intention of identifying and rating quality research and academia, as Editors-in-Chief of various journals have increasingly adopted insidious editorial tactics (e.g. coercive citation) that allow them to inflate the Impact Factor of their respective journals unfairly [4,5]. The definition of the IF reflects the overall quality of a journal rather than the quality of the individual articles published in it, thus painting a picture of marketability, commercial value and, hence, fame, rather than of quality research and academia. This has consistently lured more people to publish in journals with relatively high IFs, even though the bait is purely a proxy and not based, academically or scientifically, on the quality of the content of individual manuscripts as was initially intended. Moreover, citation itself is relative and depends on a number of varied factors that are not always purely “qualitative” in nature [4,7]. The integrity of citation analysis as a measure of quality therefore falls into disrepute and may not be as indisputably valid as the majority thought, putting the world of measuring research quality, bibliometrics and scientometrics into a dilemma [7-9].

Why is the impact factor (IF) held in high esteem, but with dissenting voices?

It is claimed that the IF has a large, but controversial, influence on the way published research and academia are perceived and evaluated in society [10]. A myriad of reasons exists for its prominence in the world. More and more researchers are valuing the IF as CiteFactor launches a “Real Time Impact Factor” to help increase the visibility and ease of use of open access scientific and scholarly journals. The IF is used as a yardstick to select candidates for positions as PhD students, postdocs and academic staff, to promote professors, and to select and renew grant proposals for funding. The IF is also used to distribute internal grants, resources and infrastructure in universities; to establish scientific collaborations in the context of international networks; to select reviewers and editors for journals; to select speakers at scientific conferences; to select members of scientific commissions, e.g. to evaluate grant proposals or select new staff members; and to determine scientific output in university rankings. However, some funding organizations worldwide have started reducing the influence of the IF on their strategies for funding excellent science. Above all stands the commercial value attached to it. In all these circumstances, young scientists with their good, productive and developmental science are disadvantaged in many ways, as they are rarely considered because of their association with journals with poor or very low IFs. Research and academia are becoming the losers in this game [6] as the IF is used to assess individual researchers and/or institutions [11]. This increasingly common criterion of measuring research output is valueless and baseless, and it is quite unfair to subject people to such conditions, since the Impact Factor does not measure what an individual, institution or journal is worth [12,13]. This explains why leading scientific organizations worldwide, such as the European Association of Science Editors (EASE), the International Council for Science (ICSU), the Deutsche Forschungsgemeinschaft (German Research Foundation), the National Science Foundation (in the US), the Research Assessment Exercise (in the UK), the American Society for Cell Biology-led San Francisco Declaration on Research Assessment (DORA) and the League of European Research Universities, have rejected the use of the Impact Factor in evaluating scientific outputs and scientists themselves [7,12-15].

The h metrics system in Google Scholar Citations

The h-metrics indices were suggested as an alternative to the Impact Factor, but they too have their own disadvantages and do not fully merit adoption. The h-index, h-core and h-median metrics in Google Scholar Citations [16] are author-level metrics, which measure the bibliometric impact of individual authors, researchers, academicians and scholars. The h-metrics indices, proposed by Jorge Eduardo Hirsch [17], are also based on citation analysis as a bibliometric method [18,19]; the h-index focuses on the set of the scientist’s most cited papers and the number of citations that they have received in other publications [20]. These metrics provide a simple way for authors to keep track of citations to their articles and to quickly gauge the visibility and influence of recent articles in scholarly publications. However, current discussions in the fields of academia and research indicate that there are a number of situations in which the h-index, h-core and h-median metrics may provide misleading information about the output of individual authors, researchers, academicians and scholars [21]. On the other hand, players in the fields of research, academia and the publishing industry are determined to maintain citation analyses such as the h metrics by developing alternatives and modifications to them [22-26], but so far with no imminent solution. These alternatives and modifications include the i10-index in Google Scholar; the e-index, s-index and c-index; the inclusion of a measure of the Erdős number; the g-index; three additional proposed h(2) metrics, h(2) lower, h(2) center and h(2) upper, to give a more accurate representation of the distribution shape; a successive Hirsch-type index i for institutions; the o-index; the m-index (m-quotient); cited half-life; the immediacy index; and an individual h-index normalized by the number of authors, hI = h²/Na, with Na being the number of authors considered in the h papers. The dynamism with which solutions to qualitative analysis in the research and academia industries are being sought is an indicator of the commercial value attached to the issue at hand, more than of the initial, quality-driven value of research per se. How these two issues should be separated, and independently pursued, remains a mystery!
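To make the basic calculation concrete, the following minimal Python sketch (using purely hypothetical citation counts, not drawn from any real author profile) computes an h-index as the largest h such that the author has h papers with at least h citations each:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)  # most cited first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # this paper still supports an h of this size
            else:
                break
        return h

    # Hypothetical example: five papers with these citation counts.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have at least 4 citations each

The h-core is then simply the h most cited papers (here, those with 10, 8, 5 and 4 citations), and the h-median is the median citation count within that core.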

Further, it should be noted that citation analysis is not a new phenomenon on the market. For instance, the Science Citation Index (SCI), which started in 1961 and was officially launched in 1964, is currently owned by Thomson Reuters in the United States of America and covers more than 6,500 notable and significant journals across 150 disciplines, from 1900 to the present [1,27]. This implies that the SCI has been in use for some 55 years, and stopping or changing it is therefore an uphill task, since by nature humans do not easily embrace change of any kind once a given process and/or trend has been accepted as the norm in society. With the commercial value attached to this citation analysis process, change may only become feasible and acceptable with a sustainable alternative, one which would probably have to enhance the commercial value accruing to the concerned stakeholders.

Conclusion

It has also been noted that there are many ways in which the owners of any given journal can manipulate it in favour of a high Impact Factor. For instance, McPeek [6] summarised six tactics that journals mischievously use to increase their Impact Factors. If the Impact Factor is what scientists, the publishing industry and all stakeholders have been hooked into believing is the current yardstick, and they henceforth use it to classify quality, then, as McPeek [6] asked in his presentation, “does Impact Factor measure the quality and importance of the science being produced and published in a journal, given the scientific malpractices being witnessed?” Some of this manipulation is incidental, some is innocuous, and some is immoral. Impact Factors are now playing scientists for fools, and we seem to be willing participants in this “fool’s game”. McPeek [6] further observed that, the world over, research and academia are becoming the losers in this “fool’s game”. As some institutions calculate false IFs [10], is it worthwhile to maintain the IF as a proxy measure of quality in the world of research and academia in our society?

References

  1. Garfield E (1955) Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. Science 122(3159): 108-111.
  2. Garfield E (2006) The History and Meaning of The Journal Impact Factor. JAMA 295(1): 90-93.
  3. Editorial (2009) J Biol Phys Chem 9(4): 139-140.
  4. Arnold DN, Fowler KK (2011) Nefarious Numbers. Notices Am Math Soc 58(3): 434-437.
  5. Wilhite AW, Fong EA (2012) Coercive Citation in Academic Publishing. Science 335(6068): 542-543.
  6. McPeek M (2012) Want to increase your Impact Factor? Mind Games 2.0: Blogging about science and life.
  7. Callaway E (2016) Beat It, Impact Factor! Publishing Elite Turns Against Controversial Metric. Nature 535(7611): 210-211.
  8. Harnad S (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics Sci Environ Polit 8(11): 103-107.
  9. Harnad S (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1): 147-156.
  10. Garfield E (1998) The Impact Factor and Using It Correctly. Der Unfallchirurg 48(2): 413-414.
  11. Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. BMJ 314(7079): 498-502.
  12. Moustafa K (2015) The Disaster of the Impact Factor. Sci Eng Ethics 21(1): 139-142.
  13. van Wesel M (2016) Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications. Sci Eng Ethics 22(1): 199-225.
  14. Rossner M, Van Epps H, Hill E (2007) Show me the data. J Cell Biol 179(6): 1091-1092.
  15. Cabello F, Rascon MT (2015) The Index and the Moon: Mortgaging Scientific Evaluation. International Journal of Communication 9.
  16. Suzuki H (2012) Google Scholar Metrics for Publications.
  17. Hirsch JE (2005) An index to quantify an individual’s scientific research output. PNAS 102(46): 16569-16572.
  18. Pilkington A (2009) Bibliometrics at Royal Holloway.
  19. Pritchard A (1969) Statistical Bibliography or Bibliometrics? J Doc 25(4): 348-349.
  20. Jones T, Huggett S, Kamalski J (2011) Finding a Way Through the Scientific Literature: Indexes and Measures. World Neurosurg 76(1-2): 36-38.
  21. Wendl M (2007) H-Index: However Ranked, Citations Need Context. Nature 449(7161): 403.
  22. Jayant SV (2005) V-index: A fairer index to quantify an individual’s research output capacity. BMJ 331(7528): 1339-1340.
  23. Batista PD, Campiteli MG, Kinouchi O, Martinez AS (2006) Is it possible to compare researchers with different scientific interests? Scientometrics 68(1): 179-189.
  24. Sidiropoulos A, Katsaros D, Manolopoulos Y (2007) Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72(2): 253-280.
  25. Anderson TR, Hankin RKS, Killworth PD (2008) Beyond the Durfee square: Enhancing the h-index to score total publication output. Scientometrics 76(3): 577-588.
  26. Baldock C, Ma RMS, Orton CG (2009) The h index is the best measure of a scientist’s research productivity. Med Phys 36(4): 1043-1045.
  27. Garfield E (2007) The evolution of the Science Citation Index. Int Microbiol 10(1): 65-69.

© 2018 Wycliffe Wanzala. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.