Panel Paper: Applying Machine Learning to Evaluate and Compare Research Grant Programmes: Are Funders Better at Selecting Researchers or Research Areas?

Saturday, November 10, 2018
8224 - Lobby Level (Marriott Wardman Park)


Giovanni Ko (1), Walter Theseira (2) and Michael Khor (1), (1) Nanyang Technological University, (2) Singapore University of Social Sciences


Evaluating and comparing the performance of research funding programmes is challenging because programmes differ in how they (a) select grants and (b) select research areas for funding. Do programmes perform well because they are good at selecting research projects, or because they concentrate funding in high-output research areas? These mechanisms can be distinguished if we can identify and control for a set of research projects in common scientific areas. Previous approaches have relied on costly and arbitrary manual classification of research projects, or on coarse research groupings defined by bibliometric services. We propose a new solution: apply machine learning to map research projects funded by one agency into the funding structure of a different agency. This allows us to identify and control for common areas of research, and to separate the effects of grant selection from those of research area composition. We apply our method to compare three high-risk, high-impact research programmes funding early-career life scientists: the U.S. National Institutes of Health’s New Innovator Award (NIH-NIA), the European Research Council Starting Grant (ERC-StG), and the Singapore National Research Foundation Fellowship (NRFF). We show that the NIH-NIA and NRFF concentrate research funding in selective portions of the life sciences, whereas the ERC-StG by design distributes funding evenly. Within common research areas, NIH-NIA and NRFF researchers exhibit faster growth in citations, and to a lesser extent in publications, than comparable ERC-StG researchers. This suggests that the NIH-NIA and NRFF are able to select researchers who deliver superior research outcomes.
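The abstract does not specify the classification method, so the Python sketch below is only an illustration, using hypothetical names and toy data, of the mapping step it describes: a text classifier trained on one agency's grant abstracts and that agency's own area labels, then applied to a second agency's grants so both portfolios share a common research-area structure.

# Illustrative sketch only (not the authors' actual pipeline): map one
# agency's projects into another agency's funding structure with a simple
# supervised text classifier. All data below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for grant abstracts from "agency A", labelled with that
# agency's own funding areas.
agency_a_abstracts = [
    "CRISPR screening of tumour suppressor genes in mouse models",
    "Single-cell RNA sequencing of immune cell differentiation",
    "Deep-sea microbial metagenomics and nitrogen cycling",
    "Coral reef ecosystem responses to ocean acidification",
]
agency_a_areas = ["genetics", "genetics", "ecology", "ecology"]

# Toy stand-ins for "agency B" grants, which carry no comparable labels.
agency_b_abstracts = [
    "Genome-wide association study of cardiac disease risk variants",
    "Biodiversity loss in tropical freshwater ecosystems",
]

# TF-IDF features plus logistic regression as a minimal mapping model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(agency_a_abstracts, agency_a_areas)

# Agency B's projects expressed in agency A's area structure; the shared
# labels can then serve as controls when comparing grant outcomes.
print(dict(zip(agency_b_abstracts, classifier.predict(agency_b_abstracts))))

In practice the labelled training set would be the funding agency's full portfolio of project descriptions and programme categories, and the predicted areas would enter the outcome comparison as control variables; the classifier choice here is an assumption for illustration only.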

JEL Codes: O3; C8; H5; I2