COJ Reviews & Research

Is Quantitative Measurement a Reliable Instrument to Judge the Quality of Academics in Research and Publications?

Nurdiana Gaus*

Department of Policy and Management, STIKS Tamalanrea Makassar, Indonesia

*Corresponding author: Nurdiana Gaus, Department of Policy and Management, STIKS Tamalanrea Makassar, Indonesia

Submission: July 04, 2018; Published: August 15, 2018

DOI: 10.31031/COJRR.2018.01.000515

ISSN: 2639-0590
Volume 1 Issue 3

Abstract

New Public Management, with its quantifiable and tangible measurements, has been used to measure the research and publication productivity of academics in universities around the world. However, such measurements have created much debate regarding their effectiveness, credibility, and accuracy in measuring what should be measured to determine the productivity of academics and, thus, the quality of academics and their institutions. This trend is growing in Indonesia as well, where such measurements have triggered tensions and contestations that feature prominently in public media and tend to polarise the opinions of Indonesian academics. A number of academics cogently contend that measures foregrounding numbers and figures undermine the basic meaning and underlying values embodied in the process of conducting research, and wish to opt out of them in their academic work. Others agree that publications and their citations (the h-index) are reliable tools for judging the quality and capacity of researchers. Consequently, the h-index may haphazardly divide academics into top-ranking and lowest-ranking academics, as can be seen in the Science and Technology Index (SINTA) set up by the Indonesian government.

Keywords: Quantitative measurements; Quality; Ranking; H-index; Higher education

Introduction

New Public Management, with its quantifiable and tangible measurements, has been used to measure the research and publication productivity of academics in universities around the world. However, such measurements have created much debate regarding their effectiveness, credibility, and accuracy in measuring what should be measured to determine the productivity of academics and, thus, the quality of academics and their institutions (Elton [1]). This trend is growing in Indonesia as well, where such measurements have triggered tensions and contestations that feature prominently in public media and tend to polarise the opinions of Indonesian academics. A number of academics cogently contend that measures foregrounding numbers and figures undermine the basic meaning and underlying values embodied in the process of conducting research, and wish to opt out of them in their academic work. Others agree that publications and their citations (the h-index) are reliable tools for judging the quality and capacity of researchers. Consequently, the h-index may haphazardly divide academics into top-ranking and lowest-ranking academics, as can be seen in the Science and Technology Index (SINTA) set up by the Indonesian government.

SINTA provides a platform for the government to assess and monitor the productivity of Indonesian academics, particularly those holding the academic rank of 'lektor kepala' (associate professor) or professor. As advocated by the principles of New Public Management (hereafter NPM), sanction and reward mechanisms have been applied to ensure compliance from Indonesian academics and to enhance their research and publication productivity. The government's strong desire to effectuate this policy rests on its political-economic agenda of increasing Indonesia's economic competitiveness at the international level (Gaus et al. [2-4]), which can only be achieved through the empowerment of Indonesian universities. Given that Indonesian universities were rooted in the prolonged patrimonial polity of the New Order regime, in which the then ruling party seized control of the universities, applying market-driven NPM principles requires shifting this condition. To change it, a set of 'steering at a distance' instruments has been put into effect, including an assessment of research and publication productivity that relies heavily on publication rates and the h-index.

However, for some academics, this practice has tended to create a gap among, and discrimination against, academics themselves, dividing them into top-ranking and low-ranking academic researchers. This phenomenon has been addressed in the international literature (Billot [5]). Nevertheless, Fox [6] asserted that research and publication are two important aspects of the life of scientists as well. Publication of research acts as a social process in the world of scientists: it is a way for them to communicate and exchange their ideas and findings. It is a medium for obtaining recognition of the reliability of the information they provide, for gaining a feeling of importance from having contributed valuable empirical knowledge, and for acquiring critical feedback on their work. In addition, through publication scientists can enjoy professional recognition and satisfaction; esteem; promotion; advancement; and funding for their future research (Fox [6]). Further, publication can indicate productivity in the work of scientists, in the sense that work only becomes 'a work' when it takes a conventional, physical (published) form by which it 'can be received, assessed, and acknowledged by the scientific community' [6].

However, publications and the h-index are indeed problematic, as they rely simply on numbers, neglecting or obscuring other aspects of the articles, such as their content, the frequency with which academics produce and publish articles, and the differences in paradigm across disciplines [7]. This measurement therefore tends to be unfair and thus disadvantages some academics.

Why are publication rates and the h-index problematic as tools for ranking academics?

To understand these issues clearly, let us take Indonesia's online database system for ranking academics in research, SINTA (Science and Technology Index), as an exemplar. SINTA works by identifying, justifying, scoring, and ranking Indonesian academics' publication rates and h-indexes based on two databases, Scopus and Google Scholar. Academics register with the system by providing their Scopus and Google Scholar IDs. Once they have done so, their publication records are automatically traced by SINTA via its operator, who verifies and scores the academics' work. Academics with high publication rates and high h-indexes in both databases (irrespective of whether they publish every year) are placed in the top positions, while academics who publish regularly in Scopus-indexed journals but receive few or no citations are automatically placed in the lower rankings. Do these items count as quality? An array of figures on a sheet of paper? Who has the right to declare these a valid tool for determining what counts as quality in research?
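
To make the described mechanism concrete, here is a minimal Python sketch of a ranking that, like the one criticised above, orders academics by citation impact alone. The record fields, the additive combination of the two h-indexes, and all numbers are illustrative assumptions; the actual SINTA scoring formula is not given in this article.

```python
from dataclasses import dataclass

@dataclass
class AcademicRecord:
    name: str
    scopus_h: int      # h-index reported by Scopus (assumed field)
    gscholar_h: int    # h-index reported by Google Scholar (assumed field)
    annual_pubs: int   # publications per year; note it plays no role below

def rank_academics(records):
    # Order purely by combined citation impact, as the essay describes:
    # regularity of publishing never enters the sort key, so a steadily
    # productive but little-cited academic sinks to the bottom.
    return sorted(records, key=lambda r: r.scopus_h + r.gscholar_h, reverse=True)

ranked = rank_academics([
    AcademicRecord("A", scopus_h=12, gscholar_h=15, annual_pubs=1),
    AcademicRecord("B", scopus_h=2, gscholar_h=3, annual_pubs=6),
])
print([r.name for r in ranked])  # ['A', 'B'] despite B's higher yearly output
```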

On the surface, there is nothing wrong with this system. Examined scrupulously, however, it raises an array of questions about its trustworthiness. If the h-index becomes the ground for ranking academics regardless of how regularly they publish, the system tends to be unjust and thus disadvantages academics whose many publications are indexed in Scopus rather than in Google Scholar but which, unfortunately, lack citations. The h-index indicates only numbers and, therefore, needs a greater extent of justification. Is this valid and fair? What about the effort academics have put into publishing annually in credible journals when they are not fortunate enough to get cited? Is productivity a meaningless and valueless aspect of the research process?

It has been argued that highly cited academics are, by that fact, high-quality academics. This again raises a question: do we, as researchers, cite articles simply because they are of high quality? My experience as a novice researcher in higher education taught me that I cited certain articles simply because they were relevant to my research topic and, therefore, supported my argument. Bearing this in mind, is it fair for an academic who has, let us say, two published articles that happen to be highly cited to be placed in a top position, compared with another academic who has many published articles, say 13, that are less cited? What is more, there is no line of demarcation between the hard and soft sciences, which have different epistemic traditions about knowledge and different writing styles; journal articles in the hard sciences tend to be shorter than those in the soft sciences.
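
The comparison can be made concrete with the standard definition of the h-index: the largest h such that an author has h papers each cited at least h times. The sketch below uses invented citation counts for the two hypothetical academics just described.

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical academic with 2 heavily cited articles.
print(h_index([60, 45]))   # h = 2
# Hypothetical academic with 13 articles, each cited once.
print(h_index([1] * 13))   # h = 1, ranked below the first
```

Thirteen articles cited once each yield a lower h-index than two highly cited ones, which is precisely the ranking outcome the question above challenges.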

This condition tends to dupe not only academics but also the other stakeholders involved in higher education. Another problematic issue with the quantitative measurement used in SINTA is the way universities are ranked based on the number of research documents identified in the two databases. For instance, university A is ranked 7th among 100 universities simply because it has, let us say, 900 research documents. The quantitative measure looks only at this figure, so university A is placed 7th without regard to the fact that the university has 900 researchers. Seen in that light, 900 documents indicates the unproductiveness of university A: if 900 researchers can produce only 900 published documents, the figure fails to support any claim about the quality and productivity of its academics. Can we still trust this measurement and the system? Ironically, the system has blindly included h-indexes and publications powered by Google Scholar. As a search engine, Google Scholar works on the notion that 'whatever is published online is indexed'. This leads to the indexation of any article the engine identifies, including articles published in unaccredited, non-credible journals. This condition may advantage academics who publish in such journals, devaluing the scientific and academic standards and ethics of writing articles for journal publication. This issue has unfortunately gone unnoticed by the Indonesian government, risking misleading practice in the assessment and evaluation of academic performance in research and in the selection of those who deserve research funding.
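
A back-of-the-envelope calculation (all figures invented for illustration, following the 900-document example above) shows how a raw document count conceals per-capita productivity:

```python
# University A: high raw output, large staff (the example above).
docs_a, staff_a = 900, 900
# Hypothetical university B: lower raw output, much smaller staff.
docs_b, staff_b = 300, 100

print(docs_a / staff_a)  # 1.0 document per researcher
print(docs_b / staff_b)  # 3.0 documents per researcher
# A count-based ranking places A far above B, yet B is three times
# as productive per researcher.
```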

Conclusion

It is ironic that academics in Indonesian higher education have been subjected to a game of quality and rankings harnessed to publication and h-index indicators. This practice raises many problematic issues regarding its reliability in measuring what counts as quality and, on that basis, ranking academics and universities.

The figures and numbers that quantifiable measures entail have not been shown to be reliable determinants of quality, and thus risk misleading practice in defining quality and in ranking academics and universities.

References

  1. Elton L (2004) Goodhart's law and performance indicators in higher education. Evaluation and Research in Education 18(1): 120-128.
  2. Gaus N, Hall D (2015a) Neoliberal governance in Indonesian universities: The impact upon academic identity. International Journal of Sociology and Social Policy 35(9/10): 666-682.
  3. Gaus N, Hall D (2015b) Weapon of the weak: The hidden transcripts of academics' resistance to policy imperatives in Indonesian universities. International Journal of Sociology and Social Policy 35(9/10): 683-698.
  4. Gaus N, Hall D (2016) Performance indicators in Indonesian universities: The perception of academics. Higher Education Quarterly 70(2): 127-144.
  5. Billot J (2010) The imagined and the real: Identifying the tensions for academic identity. Higher Education Research and Development 29(6): 709-721.
  6. Fox MF (1983) Publication productivity among scientists: A critical review. Social Studies of Science 13(2): 285-305.
  7. Biglan A (1973) The characteristics of subject matter in different academic areas. Journal of Applied Psychology 57(3): 195-203.

© 2018 Nurdiana Gaus. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.