Published November 13, 2023
Seventy Hall, MSW ’20, a PhD candidate in the University at Buffalo School of Social Work, recently led a project to examine how researchers addressed equity, ethics and other issues in their use of algorithms to predict child welfare-related outcomes.
The paper, “A Systematic Review of Sophisticated Predictive and Prescriptive Analytics in Child Welfare: Accuracy, Equity and Bias,” was published in the Child and Adolescent Social Work Journal.
Hall's co-authors were Melanie Sage, PhD, owner and founder, Sage Training and Consulting; Carol F. Scott, PhD ’19, senior technical writer, Michigan Institute for Clinical & Health Research; and Kenneth Joseph, PhD, assistant professor in UB's Department of Computer Science and Engineering.
Child welfare agencies increasingly use machine learning models to predict outcomes and inform decisions. These tools are intended to increase accuracy and fairness but can also amplify bias.
This systematic review explores how researchers addressed ethics, equity, bias and model performance in their design and evaluation of predictive and prescriptive algorithms in child welfare. We searched EBSCO databases, Google Scholar and reference lists for journal articles, conference papers, dissertations and book chapters published between January 2010 and March 2020. To be included, sources had to report on the use of algorithms to predict child welfare-related outcomes and either suggest prescriptive responses or apply their models to decision-making contexts. We calculated descriptive statistics and conducted Mann-Whitney U tests and Spearman's rank correlations to summarize and synthesize findings.
Of the 15 included articles, fewer than half considered ethics, equity or bias, or engaged participatory design principles as part of model development or evaluation. Only one-third involved cross-disciplinary teams. Model performance was positively associated with the number of algorithms tested and with sample size. No other statistical tests were significant.
Interest in algorithmic decision-making in child welfare is growing, yet there remains no gold standard for ameliorating bias, inequity and other ethics concerns. Our review demonstrates that these efforts are not being reported consistently in the literature and that a uniform reporting protocol may be needed to guide research. In the meantime, computer scientists should collaborate with content experts and stakeholders to ensure their models account for the practical implications of using algorithms in child welfare settings.