Data and research have tremendous potential to inform policymaking, allowing us to identify population trends and to assess the effectiveness of programs. Unfortunately, the increasing importance placed on these tools has resulted in their frequent misuse. One recent article in the Chronicle of Social Change, a major online child welfare publication, exemplifies the kinds of errors often made by public officials and accepted uncritically by the media.
The article is called "The Program New York Says Helped Cut Newborn Removals to Foster Care." In it, Ahmed Jallow reports that the number of infants removed into foster care in New York State has "plummeted" while the same indicator has been increasing in the majority of states. Jallow quotes unnamed "state officials" as saying that a home visiting program called Healthy Families New York (HFNY) is "the primary reason for this reduction in infant removals," and he devotes most of the article to explaining and supporting this assertion. Unfortunately, the officials Jallow quotes simply don't have the evidence to substantiate their claims. Rather than make this clear, Jallow reports these unbacked claims without qualification and even adds misleading information to bolster them. These issues can be grouped into several categories.
Attributing causality without evidence. The centerpiece of the article is the claim by New York State officials that the HFNY home visiting program is the primary reason for the reduction in infant removals in New York. HFNY is New York's version of one of the most popular home visiting models, Healthy Families America (HFA). The difficulty of proving causality is well known to social scientists, and journalists who write about policy should know enough to caution against accepting such blanket statements. To reduce child removals, a home visiting program would first have to reduce child maltreatment, and that reduction would then have to translate into a lower removal rate. There are many factors that could more directly affect the number of infant removals, such as a shift in policy to prioritize keeping families together while accepting higher risks to children. And indeed, in New York City, by far the largest jurisdiction in the state, the Commissioner of the Administration for Children's Services has attributed the decline in its foster care rolls to his agency's "focus on keeping families together wherever we can."
Making factual errors. Jallow states that "evaluations of HFNY show a significant impact in preventing further maltreatment incidents for parents involved with child protective services." Actually, evaluations do not show a significant impact of the HFA model on child maltreatment. In fact, the respected California Evidence-Based Clearinghouse for Child Welfare (CEBC) gave HFA a rating of "4" for prevention of child abuse and neglect, which means that studies have failed to find that it has any effect on child maltreatment. (The only worse rating is "5," which indicates that a program may be harmful to participants.) The only evaluation that Jallow cites is an interim report from an ongoing evaluation of HFNY suggesting that the program might reduce subsequent reports among women who had a previous substantiation for abuse or neglect. However, that study was never published in a peer-reviewed journal and therefore was not included in CEBC's review.
Misusing evidence-based practice compilations. The CEBC and other clearinghouses of evidence-based practices can be very helpful to lay audiences by digesting and translating the results of methodologically complex studies and rating programs by the strength of their evidence. But users must be careful to read and understand the reports they are using. Jallow states that the HFA home visiting model (of which HFNY is an example) "has the highest rating of effectiveness on the California Evidence-Based Clearinghouse." But he was reading the wrong report. As mentioned above, CEBC found that HFA failed to demonstrate any effect on child abuse and neglect. It is in a separate report on home visiting programs for child well-being that CEBC gave HFA its top rating ("well supported by research evidence"), because of its impact on outcomes other than child abuse and neglect.
Overgeneralizing. "In terms of documented proof, home visiting is the one that we know absolutely works," Timothy Hathaway, executive director of Prevent Child Abuse New York, told Jallow. Unfortunately, Mr. Hathaway was overgeneralizing. There are many different home visiting programs, which vary in the nature of the provider, the content and goals of the program, and other factors. The effects of most home visiting programs on child abuse and neglect have been disappointing. The only home visiting program that CEBC has found to be well supported by evidence of an impact on child abuse and neglect is the Nurse-Family Partnership, which is expensive and difficult to implement and can be used only with certain populations, such as first-time mothers. It is not surprising that many jurisdictions have opted to implement HFA instead.
Disregarding recent data. In addition to all the problems cited above, Jallow and his New York State informants chose to disregard the most recent data on foster care entries in New York. Jallow reports, accurately, that the decline in infant foster care placement between 2012 and 2016 was part of an overall decline in the number of New York children entering foster care. And as Jallow states, this decline occurred while entries into foster care increased on the national level. But the pattern was reversed in 2017: nationally, foster care entries decreased slightly, while New York's foster care entries increased. We don't yet have the 2017 data for infants, but it seems likely that the trend in infant removals also reversed. Could it be that New York is starting to see the same kind of increase in removals that occurred earlier in many other states? Perhaps a growing opioid crisis in western New York is contributing, or perhaps the increase in child removals stems from concern that the focus on family preservation is endangering children. Indeed, an increase in child removals in New York City over the past 18 months has been attributed to an increase in hotline reports and a more aggressive response to those reports by investigative staff in the wake of the highly publicized deaths of two children who were known to the system but not removed. Disregarding the most recent year of data certainly makes for a cleaner picture, but it may be a less accurate one.
Jallow's article illustrates how a flawed understanding of research and data can lead to faulty conclusions. A grandiose claim that one program is responsible for large changes in an indicator like child removals deserves initial skepticism and rigorous vetting. Uncritical acceptance of such claims can lead to misguided policy decisions, such as directing more funding to an unproven program. The press should scrutinize such claims assiduously rather than accepting them credulously, presenting them without qualification, or adding flawed arguments in their favor.