Are programs better than people at predicting reoffending?

IN AMERICA, computers have been used to assist bail and sentencing decisions for many years. Their proponents argue that the rigorous logic of an algorithm, trained with a vast amount of data, can make judgments about whether a convict will reoffend that are unclouded by human bias. Two researchers have now put one such program, COMPAS, to the test. According to their study, published in Science Advances, COMPAS did neither better nor worse than people with no special expertise.