Roundtable: Variation in Human Services Systematic Reviews: How Much, Why, and What’s Next?
(Methods and Tools of Analysis)

Thursday, November 7, 2019: 10:15 AM-11:45 AM
Plaza Building: Concourse Level, Governor's Square 10 (Sheraton Denver Downtown)

*Names in bold indicate Presenter

Organizer:  Emily Sama-Miller, Mathematica
Moderator:  Emily Sama-Miller, Mathematica
Speakers:  Emily Schmitt, U.S. Department of Health and Human Services; Megan E. Lizik, U.S. Department of Labor; and Erika Liliedahl, U.S. Office of Management and Budget

Systematic evidence reviews gather, critique, and summarize research evidence. They may take different forms depending on the question they address, the breadth of research about it, and the time horizon for completing the review. Despite this variation, they share at least four components: (1) specifying the topic of the review, (2) identifying a literature search process, (3) establishing and applying a rubric to assess individual studies, and (4) summarizing findings across the studies that are reviewed. These reviews have proliferated in the health sciences field over the past four decades and, in the past decade, in the human services research field through a series of initiatives funded by the U.S. government; in both fields, transparently communicating methods is essential.

This roundtable will convene four experts, including federal staff familiar with reviews that summarize research on human services interventions, to discuss four key questions. First, what are the goals and mandates that guide some exemplar federally-funded human services reviews? Second, how do the purposes of each review lead to differences in their approaches, including the extent to which these reviews involve each component typically present in a systematic review and other components that are becoming common in health sciences reviews? Third, do differences across human services reviews create challenges for policymakers who use the reviews? Finally, would adopting some approaches from the health sciences improve the credibility of human services reviews?

In the health field, scientists have cultivated an expectation that credible reviews will involve at least the four main systematic review components. Other expectations include: (1) prospectively registering systematic reviews with an international database, (2) specifying a research question using the PICOTS (problem/population, intervention, comparison, outcome, timeframe, and setting) framework, (3) relying on the definitions specified by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group to describe the certainty of evidence and the strength of recommendations in commonly understood terms, and (4) publishing methods and results, generally relying on a meta-analytic approach, in peer-reviewed journals. To support transparency and replicability, health sciences systematic reviewers are expected to report a minimum set of items (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, or PRISMA).

Federally-funded human services reviews that aim to inform policymaking and service delivery have many commonalities. Each involves at least some aspects of the four common systematic review components. Some use a PICOTS-like process to specify research questions, assess both evidence certainty and strength of recommendations (somewhat similar to GRADE), and report many elements of the PRISMA checklist, but they do not use any shared lexicon (including the one from the health sciences field) to define their approach. Federally-funded human services systematic reviews also differ meaningfully from one another. The complexity of human services interventions and settings, the goal or mandate of the review, and the policy or program area of focus explain some of these differences. Other differences may be due to “siloing” of content and systematic review expertise.

