  • Publication
    When data replace norms: Platformisation of knowledge production
    (2021-08-13)
    Little attention has been paid to the research infrastructure that traces, tracks, monitors, and benchmarks the performance of individuals and groups, or to its implications for epistemic cultures and knowledge production. This paper discusses how the use of evaluative metrics and the dominance of data analytics can lead to the platformisation of knowledge production by examining the normative view of science and epistemic cultures alongside current developments in research infrastructure, such as the vertical integration of research products. This paper argues that the dominance of commercial platformisation can erode the negotiating power of those who produce and review scientific outputs, because researchers are acculturated to chase funding, metrics, and data-driven economic and societal impacts. The objective of this paper is to open up critical examination of the platformisation of knowledge production.
  • Publication
    Evaluation complacency or evaluation inertia? A study of evaluative metrics and research practices in Irish universities
    (Oxford University Press (OUP), 2019-07)
    Evaluative metrics have been used for research assessment in most universities and funding agencies on the assumption that more publications and higher citation counts imply increased productivity and better-quality research. This study investigates the understanding and perceptions of metrics, as well as the influences and implications of the use of evaluative metrics on research practices, including the choice of research topics and publication channels, citation behavior, and scholarly communication in Irish universities. Semi-structured, in-depth interviews were conducted with researchers from the humanities, the social sciences, and the sciences at various career stages. Our findings show conflicting attitudes toward evaluative metrics in principle and in practice. The phenomenon is explained by two concepts: evaluation complacency and evaluation inertia. We conclude that evaluative metrics should not be standardized and institutionalized without a thorough examination of their validity and reliability, and without investigating their influence on academic life, research practices, and knowledge production. We also suggest that an open and public discourse on evaluative metrics should be supported within the academic community.