Poster Paper: Pre-Trial Algorithmic Risk Assessments: Value Conflicts, Inherent Limitations, and Harm-Mitigation By Design

Friday, April 12, 2019
Continuing Education Building - Room 2070 - 2090 (University of California, Irvine)


Marc Faddoul, Henriette Ruhrmann and Joyce S Lee, University of California, Berkeley


Pretrial detention is a major driver of incarceration in the U.S. criminal justice system: 21.6% of the U.S. prison population are pretrial detainees. Many jurisdictions have thus moved toward risk assessment tools to reduce the pretrial detainee population, address judicial bias in detention decisions, or remedy inequities created by the monetary release conditions of the cash bail system. Following a brief background on the use of risk assessment tools in California, we perform a risk assessment of the Public Safety Assessment (PSA), software used in San Francisco and other jurisdictions to assist judges in deciding whether defendants need to be detained before their trial. Informed by pre-existing literature and interviews with a diverse range of stakeholders, including previously incarcerated individuals, we leverage the Handoff Model, a new theoretical framework, to analyze the value implications of delegating decision-making to technology.

Having identified both benefits and limitations of the algorithmic PSA solution, we conclude that while the PSA is a well-intentioned improvement on existing tools, there are multiple areas in which the tool could better support the values inherent to human-based decision-making. These include addressing inherent disparities, making the model more transparent and less reductionist, and exercising caution when building in automatic overrides. Given the very high error rates that pretrial risk assessments entail, we believe the software’s recommendations must be weighed with more critical judgment than they currently are. Significant choices were made during the PSA’s design to extrapolate variable weights from the training data; we argue that these choices are not sufficiently available to the public and lack the translation of technical language into value trade-offs that relevant stakeholders could more easily understand.

We thus offer mitigations that may improve the PSA’s implementation, as well as an alternative design, the Handoff Tree. This model offers a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design. The model pairs every prediction with an associated error rate and, as a paradigm shift, declines to predict when uncertainty is too high, fully delegating the decision to the judge. By explicitly stating error rates, the Handoff Tree aims both to limit the impact of predictive disparity across factors such as race and gender and to prompt judges to be more critical of detention recommendations, given the high rate of false positives they often entail. Considerations about error rates are made an integral part of the prediction reports, which provide nuance and interpretability about what a “high risk” recommendation means and help directly mitigate predictive discrimination. Precision and robustness can also be increased by extending the tree to a forest, though at the cost of accountability and interpretability. The design of such a model involves intricate trade-offs, which could lead one to question the value of such an alternative. However, such tussles are inherent to data science, and the way they are addressed is what makes a model accurate and fair.
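
To illustrate the handoff mechanism described above, the sketch below is a minimal, hypothetical rendering of the idea, not the authors’ actual model: each leaf of a decision tree is annotated with an empirical error rate, and the classifier abstains (hands the decision to the judge) whenever that rate exceeds a threshold. The class name, the max_error threshold, and the use of scikit-learn are all assumptions made for illustration.

```python
# Hypothetical sketch of a "handoff tree": a decision tree whose leaves
# carry an empirical error rate and which defers to the judge when that
# rate exceeds a chosen threshold. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class HandoffTree:
    def __init__(self, max_error=0.3, **tree_kwargs):
        self.max_error = max_error            # assumed uncertainty threshold for handoff
        self.tree = DecisionTreeClassifier(**tree_kwargs)

    def fit(self, X, y):
        y = np.asarray(y)
        self.tree.fit(X, y)
        # Estimate each leaf's error rate; in practice this should be done
        # on held-out data rather than the training set used here.
        leaves = self.tree.apply(X)
        preds = self.tree.predict(X)
        self.leaf_error_ = {
            leaf: float(np.mean(preds[leaves == leaf] != y[leaves == leaf]))
            for leaf in np.unique(leaves)
        }
        return self

    def predict_with_handoff(self, X):
        """Return (label, error_rate) per row, or ("handoff_to_judge", error_rate)
        when the leaf's error rate exceeds max_error."""
        leaves = self.tree.apply(X)
        preds = self.tree.predict(X)
        results = []
        for label, leaf in zip(preds, leaves):
            err = self.leaf_error_[leaf]
            results.append(("handoff_to_judge", err) if err > self.max_error
                           else (label, err))
        return results

# Illustrative usage with made-up pretrial features (e.g., age, prior failures to appear):
# model = HandoffTree(max_error=0.3, max_depth=4).fit(X_train, y_train)
# decisions = model.predict_with_handoff(X_new)
```

Reporting the per-leaf error rate alongside every recommendation, rather than a bare “high risk” label, is what allows the report to convey how uncertain a given prediction is and when the model has stepped aside entirely.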