Poster Paper:
Pre-Trial Algorithmic Risk Assessments: Value Conflicts, Inherent Limitations, and Harm-Mitigation By Design
Having identified both benefits and limitations of the algorithmic PSA solution, we conclude that while the PSA is a well-intentioned improvement on existing tools, there are multiple areas in which it could better support the values inherent to human decision making. These include addressing inherent disparities, making the model more transparent and less reductionist, and exercising caution when building in automatic overrides. Given the very high error rates that pretrial risk assessments entail, we believe the software’s recommendations must be treated with more critical judgement than they currently receive. Significant design choices were made in extrapolating the PSA’s variable weights from the training data; we argue that these choices are not sufficiently documented for the public and lack a translation of technical language into value trade-offs that relevant stakeholders can more readily understand.
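To make concrete how such weight choices can hide value trade-offs, consider a minimal sketch of a points-based instrument of this general kind, reduced to a weighted sum over risk factors. All factor names and weights below are hypothetical illustrations, not the PSA’s published values.

```python
# Hypothetical, illustrative weights (not the PSA's published values).
# Each weight encodes a value judgment, e.g. how heavily a prior
# failure to appear should count, that is invisible in the final score.
WEIGHTS = {
    "age_under_23": 2,
    "pending_charge": 1,
    "prior_violent_conviction": 2,
    "prior_failure_to_appear": 2,
}

def risk_score(defendant: dict) -> int:
    """Sum the weights of the risk factors present for this defendant."""
    return sum(w for factor, w in WEIGHTS.items() if defendant.get(factor))

# A two-point swing from a single weight choice can move a defendant
# across a release/detain cut-off: a value trade-off, not a purely
# technical one.
print(risk_score({"age_under_23": True, "prior_failure_to_appear": True}))  # 4
```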
We thus offer mitigations that may improve the PSA’s implementation, as well as an alternative design, the Handoff Tree. This model offers a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design. The model pairs every prediction with an associated error rate and, when uncertainty is too high, shifts the paradigm by fully delegating the decision to the judge. By explicitly stating error rates, the Handoff Tree aims both to limit the impact of predictive disparities across factors such as race and gender, and to prompt judges to be more critical of detention recommendations, given the high rate of false positives they often entail. Error-rate considerations thus become an integral part of the prediction reports, adding nuance and interpretability to what a “high risk” recommendation means while directly attempting to mitigate predictive discrimination. Precision and robustness can also be increased by extending the tree to a forest, though at the cost of accountability and interpretability. The design of such a model involves intricate trade-offs, which could lead one to question the value of such an alternative. However, such tussles are inherent to data science, and how they are addressed is what makes a model accurate and fair.
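A minimal sketch of the handoff mechanism as described above: each leaf carries the empirical error rate of the training samples that reached it, and the tree abstains in favor of the judge when that rate exceeds a threshold. The names Node, Leaf, and HANDOFF_THRESHOLD, and the threshold value itself, are illustrative assumptions rather than the published design.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    prediction: str    # e.g. "high risk" or "low risk"
    error_rate: float  # empirical misclassification rate of training
                       # samples that reached this leaf

@dataclass
class Node:
    feature: str
    threshold: float
    left: object   # Node or Leaf
    right: object  # Node or Leaf

# Hypothetical: hand off to the judge when the leaf's error rate
# exceeds this value.
HANDOFF_THRESHOLD = 0.3

def assess(node, defendant: dict) -> dict:
    """Walk the tree; return a prediction paired with its error rate,
    or a handoff (no recommendation) when uncertainty is too high."""
    while isinstance(node, Node):
        node = node.left if defendant[node.feature] <= node.threshold else node.right
    if node.error_rate > HANDOFF_THRESHOLD:
        # Uncertainty too high: issue no recommendation and fully
        # delegate the decision to the judge.
        return {"recommendation": None, "handoff": True,
                "error_rate": node.error_rate}
    return {"recommendation": node.prediction, "handoff": False,
            "error_rate": node.error_rate}

# Example: a one-split tree with one confident and one uncertain leaf.
tree = Node("prior_fta_count", 1,
            Leaf("low risk", 0.12),
            Leaf("high risk", 0.38))
print(assess(tree, {"prior_fta_count": 3}))
# -> {'recommendation': None, 'handoff': True, 'error_rate': 0.38}
```

Abstaining rather than forcing a low-confidence label is what distinguishes this design from a standard decision tree, and it is the property that any forest extension would have to preserve.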