Lady Blind Justice (Photo: Marc Treble via flickr)
Overview

Algorithms play an increasingly prominent role in societal decision-making in a variety of settings. Online streaming services use them to recommend new music, movies, or television shows; criminal justice courts use them, controversially, to predict the future behavior of someone accused or convicted of a crime. Their proponents claim that they are objective and accurate, and they are often presented as sophisticated and mysterious. But they’re not infallible: Even the most carefully designed algorithms may produce biased outcomes, and blind trust in those programs can cause, perpetuate, or even amplify societal problems.

We want to demystify algorithms and help everyone understand how they work in the real world. We are researchers from the Santa Fe Institute and the University of New Mexico with backgrounds in computer science, political science, mathematics, and law. We are available to provide expertise and guidance to policymakers to help them understand algorithms and their policy implications, and help them decide whether and under what circumstances algorithms should be employed.

Our work centers on the need for transparency. We believe stakeholders should know an algorithm’s strengths and weaknesses, as well as its best uses and limitations, to make the best decisions. What data was used to design and train it? Does this data mean what we think it does? How will we know if it works in practice, and how will we measure its performance? Can it be independently audited for accuracy and fairness? What kind of explanation or appeal is available to those affected by it? Will its use create unexpected feedback loops in human behavior?

Our first project focuses on the national issue of access to housing. Lenders, landlords, and brokers often use algorithms to decide whether to approve or deny loan and rental applications, but the historical and geographic data used to train those algorithms can give rise to bias against certain socioeconomic or racial groups. The Department of Housing and Urban Development (HUD), the government agency charged with improving access to home ownership, recently proposed amendments that would effectively allow lenders to circumvent anti-discrimination lawsuits and avoid liability by blaming the algorithm itself rather than how it was applied.
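
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch. The data, zip codes, and decision rule are all invented for the example rather than drawn from any real lender or from our own work; it simply shows how a model trained on historically biased approval decisions can reproduce that bias through a seemingly neutral feature like zip code:

```python
# Purely illustrative sketch: all data and names here are hypothetical.
# It shows how a model trained on historically biased approval decisions
# can reproduce that bias through a seemingly neutral feature (zip code).
from collections import defaultdict

# Hypothetical historical lending records: (zip_code, was_approved).
# Suppose past discrimination depressed approvals in zip code 87102.
historical_records = [
    ("87101", True), ("87101", True), ("87101", True), ("87101", False),
    ("87102", False), ("87102", False), ("87102", False), ("87102", True),
]

# "Training" step: learn the historical approval rate for each zip code.
counts = defaultdict(lambda: [0, 0])  # zip code -> [approvals, total]
for zip_code, approved in historical_records:
    counts[zip_code][0] += int(approved)
    counts[zip_code][1] += 1
approval_rate = {z: a / n for z, (a, n) in counts.items()}

def predict(zip_code: str) -> bool:
    # The learned rule never mentions race or income, yet if the historical
    # decisions were discriminatory, it faithfully reproduces them.
    return approval_rate.get(zip_code, 0.0) > 0.5

for z in ("87101", "87102"):
    decision = "approve" if predict(z) else "deny"
    print(f"{z} -> {decision} (historical rate: {approval_rate[z]:.0%})")
```

The rule never sees any protected attribute, yet because zip code correlates with the historical pattern of denials, it reproduces that pattern exactly. This is precisely the kind of subtlety that transparency and auditing are meant to surface.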

HUD’s proposed changes fail to account for the subtleties of evaluating algorithms and recognizing their unintended consequences, and they relieve lenders and other defendants of responsibility. We have summarized and submitted our concerns to the Federal Register, focusing on four key arguments for why understanding the use of algorithms in these decisions matters, along with recommendations for best practices.

Future projects will focus on the spectrum of ways that governments, corporations, and institutions are increasingly relying on algorithms, with the constant goal of boosting transparency. Some people see algorithms as miraculous crystal balls; others see them as malevolent attempts to control our lives. For the most part, neither of these extremes is true, but only by demanding transparency can we find the most beneficial ways to use these powerful tools.

Our members include Elizabeth Bradley, G. Matthew Fricke, Mirta Galesic, Joshua Garland, Cristopher Moore, Alfred Mathewson, Melanie Moses, Kathy Powers, Sonia M. Gipson Rankin, and Gabriel R. Sanchez.