Megaphone or Mute Button? An Experiment to Test the Equity Implications of Representative Bureaucracy for Machine Learning
Friday, November 8, 2019
Plaza Building: Concourse Level, Plaza Court 6 (Sheraton Denver Downtown)
This article uses experimental evidence to investigate the potential for representative bureaucracies to moderate the consequences of adopting artificial intelligence, in the form of machine learning (ML) systems, to automate public sector decision making. ML systems have proven to be extremely susceptible to artifacts in their training data that introduce bias and lead to suboptimal decision output (Buolamwini and Gebru 2018). Public sector organizations have a responsibility to avoid making biased decisions, both because of their mandate to protect individual rights, and because of the power that they wield over people’s lives. Previous theoretical work on representative bureaucracy and empirical work on active representation in particular suggest that when bureaucratic decision makers share characteristics with the population that they serve, they are more likely to treat individuals fairly (Meier 2019). The implication for public sector use of ML is that, all else equal, administrative data generated under conditions of active representation should contain less embedded bias, and thus may improve ML performance when used as training data.
We test this hypothesis with an experimental research design using a deep learning ML architecture trained on administrative data previously employed to identify and estimate the effect of active representation in the context of education policy (Nicholson-Crotty et al. 2016). The results will provide public managers, policymakers, and researchers contextually relevant evidence on whether past active representation can affect future automated decision making. The findings also contribute a novel empirical test of the effect of active bureaucratic representation in public organizations.
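The hypothesis above, that training data generated under active representation embeds less bias, implies a measurable disparity gap between models trained on different data. A minimal sketch of how such a gap could be quantified, using hypothetical predictions and the demographic parity difference as an illustrative fairness metric (the abstract does not specify which metric or data the study uses):

```python
# Illustrative sketch only: all group labels, predictions, and the choice of
# demographic parity difference as the metric are assumptions, not details
# taken from the study described above.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels ("a" or "b")
    """
    rates = {}
    for label in ("a", "b"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return abs(rates["a"] - rates["b"])

groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Hypothetical outputs of a model trained on data generated under active
# representation: equal positive rates across groups (0.5 vs 0.5).
preds_representative = [1, 0, 1, 0, 1, 0, 1, 0]

# Hypothetical outputs of a model trained on less representative data:
# unequal positive rates across groups (0.75 vs 0.25).
preds_baseline = [1, 1, 1, 0, 1, 0, 0, 0]

print(demographic_parity_difference(preds_representative, groups))  # 0.0
print(demographic_parity_difference(preds_baseline, groups))        # 0.5
```

A smaller disparity for the model trained under conditions of active representation would be consistent with the hypothesized mechanism; the actual study's outcome measures may differ.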