Violence Prevention: Moving from Evidence to Implementation

By Katrina Baum, Katherine M. Blakeslee, Jacqueline Lloyd, Anthony Petrosino
October 15, 2013 | Discussion Paper

 

The purpose of this discussion paper is twofold: to identify progress in the use of evidence-based violence prevention programs and selected resources, and to discuss the critical gap between the evidence and its translation into demonstrably effective community-based programs. In January 2013, the Institute of Medicine (IOM) convened a 2-day workshop on the evidence base for violence prevention. The IOM Forum on Global Violence Prevention assembled experts to discuss what works to prevent violence, where to find evidence, and the challenges faced by practitioners, communities, and policy makers attempting to make use of the existing evidence. (A detailed summary of the workshop is available at: http://nam.edu/wp-content/uploads/2015/06/TheEvidence-for-Violence-Prevention-Across-the-Lifespan-and-Around-the-World) This discussion paper highlights several implementation challenges in using such evidence, specifically in the context of the United States.

 

Background

Violence is intentional harm caused to another person through the use of threats or physical assault. Although some demographic groups are more vulnerable than others, violence causes death or emotional or physical harm to men, women, and children of all ages, races, ethnicities, and religions across communities and cultures. Violence prevention necessarily includes stakeholders from multiple disciplines and sectors.

According to data from the Bureau of Justice Statistics, an estimated 3.8 million nonfatal, violent victimizations occurred in 2010 alone, based on a national survey of persons age 12 or older (Truman, 2011). Homicide was ranked as the second leading cause of death for 15- to 24-year-olds according to the National Vital Statistics System (CDC, 2010). The Department of Health and Human Services estimated that 1.6 million referrals were screened for child abuse or neglect (HHS, 2012). Despite encouraging reports of reductions in violence in the United States, these data demonstrate that millions of individuals, including children, are subjected to violence. The magnitude of the problem underscores the need to implement programs with demonstrated effectiveness in preventing violence. Fortunately, the evidence for strategies, programs, and interventions to prevent violence has increased during the last few decades, and the ability to access this evidence has been transformed by technological developments.

Since the 1990s, significant effort has been made to identify evidence-based programs and to make information about these programs accessible to those who can use it. The U.S. government has made several efforts to identify and promote evidence for violence prevention. In 1997, in response to then-Attorney General Janet Reno’s interest in the application of science to crime prevention, University of Maryland researchers published a report to Congress, Preventing Crime: What Works, What Doesn’t, and What’s Promising (Sherman et al., 1997). In 1999, the U.S. Surgeon General convened an effort to identify effective violence prevention programs, especially for youth, which resulted in Youth Violence: A Report of the Surgeon General (HHS, 2001). The National Research Council (NRC) and the IOM published a report in 2009, titled Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities. Such initiatives reflect policy makers’ recognition of the need to identify effective violence prevention strategies.

 

The Evidence

The implementation of evidence-based programs has been facilitated during the past two decades by methodological and technological developments. Systematic reviews and evidence-based registries have made it easier to identify evidence-based violence prevention programs. Meta-analyses and systematic reviews apply stringent research standards to synthesizing separate but similar studies. The same rules for rigor and transparency that are applied to survey or experimental research are now commonly applied to the conduct of research syntheses. These include carefully specified questions, detailed descriptions of search strategies and eligibility criteria, reliable data extraction or coding, appropriate analytic strategies, and detailed reporting. Such reviews are often convincing to policy makers because they provide comprehensive assessments of the body of evidence responding to a key question, permit reconciliation of conflicting studies when possible, set the context for proposed studies, and identify new areas for funding and implementation.
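To make the synthesis step concrete, the following minimal sketch (illustrative only, and not drawn from any review cited in this paper) shows the standard fixed-effect, inverse-variance pooling that underlies many meta-analyses: each study's effect estimate is weighted by the inverse of its variance, and the weighted estimates are combined into a single summary effect with a confidence interval.

    import math

    # Hypothetical effect estimates (e.g., standardized mean differences) and
    # standard errors from three separate but similar evaluation studies.
    studies = [
        {"effect": -0.30, "se": 0.10},
        {"effect": -0.15, "se": 0.08},
        {"effect": -0.25, "se": 0.12},
    ]

    # Fixed-effect (inverse-variance) pooling: weight each study by 1 / variance.
    weights = [1.0 / (s["se"] ** 2) for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # Approximate 95% confidence interval for the pooled effect.
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")

Real reviews add steps this sketch omits, such as heterogeneity tests and random-effects models, but the weighting logic above is the core of quantitative synthesis.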

A number of organizations have been created to prepare such reviews. In 1993, the Cochrane Collaboration was established to create systematic reviews of research in health care; the Cochrane Developmental, Psychosocial and Learning Disorders Group manages the production of reviews in areas relevant to social programs, including violence prevention. In 2000, inspired by the production and success of Cochrane, the Campbell Collaboration was instituted to produce reviews in the areas of social and educational intervention. The Campbell Crime and Justice Group oversees reviews relevant to violence, although there is some overlap with the Campbell Social Welfare Group. The U.S. Centers for Disease Control and Prevention conducts a number of systematic reviews through its Community Guide to Preventive Services. Several of these are relevant to violence prevention, including reviews on early childhood home visitation, school-based violence prevention programs, and an intervention known as Therapeutic Foster Care. These are a few of the organizations that have undertaken systematic reviews to provide rigorous evidence on violence prevention.

The evidence-based registry is another methodological development that has facilitated sharing evidence with those who can use it. Registries share some characteristics with systematic reviews, particularly the setting of explicit eligibility criteria and the screening of evidence against those criteria. One distinction is usually the scope of the work: registries tend to provide evidence on very specific and fine-grained interventions, whereas systematic reviews are usually broader. Some registries solicit nominations of evidence-based programs from the field and ask for evaluation reports on those programs. Almost all registries are designed to “rate” the strength of the evidence about program impact, with designations based on the rigor of the relevant studies, the number of studies providing evidence, and the magnitude of the program impact. Blueprints for Violence and Substance Abuse Prevention, the U.S. Department of Justice’s CrimeSolutions.gov, the World Health Organization’s Effective Violence Prevention database, and the Substance Abuse and Mental Health Services Administration’s National Registry of Effective Programs and Practices are among the registries relevant to violence prevention programs and practices.
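As a purely hypothetical sketch of the rating logic described above (the labels and thresholds below are invented for illustration and do not reproduce the criteria of any registry named in this paper), a registry-style designation combines the rigor of the evaluations, the number of studies, and the magnitude of impact:

    from dataclasses import dataclass

    @dataclass
    class Evaluation:
        randomized: bool     # was the evaluation a randomized controlled trial?
        effect_size: float   # standardized effect on a violence-related outcome
        significant: bool    # statistically significant at the registry's threshold

    def rate_program(evaluations):
        """Invented, illustrative rating rule: rigor + number of studies + magnitude."""
        rigorous = [e for e in evaluations if e.randomized and e.significant]
        if len(rigorous) >= 2 and all(e.effect_size >= 0.2 for e in rigorous):
            return "effective"
        if len(rigorous) >= 1:
            return "promising"
        return "insufficient evidence"

    # Example: two significant randomized trials with modest positive effects
    # would be rated "effective" under this made-up rule.
    print(rate_program([
        Evaluation(randomized=True, effect_size=0.35, significant=True),
        Evaluation(randomized=True, effect_size=0.28, significant=True),
    ]))

Actual registries differ in exactly these choices, which is one reason the same program can receive different designations across registries.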

Electronic publication through the Internet is a technological development that has made it possible to put good evidence in the hands of decision makers within seconds. With the exception of the Cochrane Library, all of these resources are freely available and accessible on the Internet. Cochrane does allow free access to the abstracts of its reviews, and the organization has also made strides in working with governments to ensure that the Cochrane Library is freely accessible in some parts of the world, e.g., to health care professionals in Brazil and to all citizens in Ireland. Electronic publication not only means that dissemination is worldwide and instantaneous, but also that updates and important corrections can be made quickly to reflect the dynamic nature of research.

The advent of systematic reviews and evidence-based registries, in concert with electronic publication, has meant that carefully vetted evidence is immediately available worldwide to decision makers concerned with implementing violence prevention programs. Unfortunately, these innovations do not remove all the challenges to using evidence to implement violence prevention. Despite the promise that reviews would reconcile conflicting studies, syntheses of research can also come to somewhat different conclusions. A classic case is Multisystemic Therapy (MST), a comprehensive treatment approach used primarily with young, violent offenders. A systematic review published by Curtis et al. (2004) indicated that MST reduced subsequent re-offending among serious, and sometimes violent, juvenile offenders. Littell et al. (2005) published a Campbell Collaboration review indicating that the findings for MST largely dissipated when more rigorous synthesis methods were applied. Although subsequent research seems to land on the side of average positive impacts for MST, the conflicting reviews do mean that program managers and decision makers will need to be at least armchair “connoisseurs of evidence.”

Other challenges remain in using and making sense of the evidence. These include the sheer number of systematic reviews and registries relevant to violence prevention; the varied definitions of “evidence” and criteria for program “success” used by registries, such as the criterion of a “statistically significant finding at 6 months on an outcome of interest”; and the level of sophistication required to decipher the reviews and registries as the methods used to produce evidence become more statistically advanced.

These advances have made rigorous evidence for violence prevention more accessible to those who can use it. Based on the sources described above, there are a number of strategies that have been demonstrated, in at least one prior setting, to be effective in reducing violence.

 

Implementation

Despite the increase in the evidence base for violence prevention programs and advances in the accessibility of the evidence, major challenges remain in transferring effective programs to different real-world settings. Two key questions are how to get programs that are known to be effective into wider use and, equally important, how to halt the use of programs that have demonstrated no discernible positive effect or that have had harmful or toxic effects. Implementation rests on obtaining and using evidence-based interventions in real-world settings, including how programs are adopted, sustained, and taken to scale (NRC and IOM, 2009). Fixsen and Blase (2009) refer to implementation as the link between science and practice. The challenge of moving proven programs into practice is not specific to the field of violence prevention. In the public health field, there is often a lag between the time evidence that a prevention program, policy, or practice improves health becomes “known” and the eventual successful adoption of that program or practice in real-world settings (Walker et al., 2003). Violence prevention efforts face a number of challenges to implementation that may hamper the ability to reduce violence on a wider scale.

A further challenge is balancing faithful implementation of an evidence-based program with adaptation or modification of the program to meet the needs of a specific population or community. Most programs require some adaptation to the community context in which they will be implemented. The challenge is how to do this while preserving the core program components. The 2009 NRC and IOM report describes three alternative implementation approaches:

  1. Direct adoption of specific evidence-based prevention programs involves delivering the program with fidelity to increase the likelihood of obtaining an impact in the new setting, similar to what was observed in original studies. This approach involves limited adaptation of the program to the particular community or context.
  2. Adaptation of an existing program to meet community needs focuses on selecting an evidence-based program that matches the community’s needs and characteristics and modifying the program to be relevant to the community. This approach involves researchers and community leaders working collaboratively to adapt programs in ways that are meaningful and relevant to the recipients.
  3. Community-driven implementation involves decision making by community leaders in collaboration with researchers. The focus is on relevance to the community and program sustainability. The process relies on community-based participatory research driven by an agenda developed by community members and key constituents. This partnership between researchers and community members is important for successful implementation.

 

The most appropriate approach for any given community depends on a number of specific contextual factors, including needs, resources, and availability of existing proven programs suited to meeting those needs. All three approaches emphasize the value of evaluation in understanding how and why a particular approach works or does not work in a given situation (NRC and IOM, 2009).

Recognition of the challenges of implementation has led to the development of models to assist communities in developing an infrastructure for identifying, selecting, and implementing evidence-based programs. Two tested models are Communities That Care (CTC), a prevention system designed to reduce adolescent delinquency and substance use, and Promoting School-Community-University Partnerships to Enhance Resilience (PROSPER), a system devised for broad implementation of evidence-based programs that support youth development and reduce early substance use in rural communities. (More information on the programs is available in Chapter 11 of Preventing Mental, Emotional, and Behavioral Disorders Among Young People, pp. 300-301, available at http://www.nap.edu/catalog.php?record_id=12480.)

The important common elements of the CTC and PROSPER models are community mobilization, partnerships, capacity building, utilization of evidence-based approaches, and fidelity of implementation. These models provide a process to address some of the key barriers to implementation of evidence-based programs and are designed to help communities achieve sustained and high-fidelity implementation and to reduce community-level violence and other risk behaviors (Spoth et al., 2013). Community-based delivery system models, such as CTC and PROSPER, hold promise for widespread implementation of evidence-based programs to prevent violence and other risk behaviors.

 

Discussion

Evidence on the effectiveness of interventions for violence prevention has increased in the past two decades, facilitated by technology and methodological advances. However, effective implementation of such interventions has not increased proportionately. Systematic reviews and meta-analyses provide a good basis for new research, funding proposals, and program development, but the gap between evidence and its use in implementation remains significant. Effective interventions may be scattered geographically and relatively small in scale, making replication and scaling up more challenging. In many cases, interventions that have been demonstrated to be ineffective continue to be implemented. The following are strategies that are critical to closing this gap between evidence and implementation.

 

Finding the Evidence and Proven Approaches

Fortunately, the growth of systematic reviews and meta-analyses has made evidence on proven approaches more available and accessible. One of the challenges is getting this evidence to the people who are in a position to use it. Linkages and two-way channels through which data and evidence can flow to practitioners and key decision makers in a way that addresses their questions and needs, and through which feedback can return to researchers, are critical.

 

Making the Links Between Evidence and Implementation

Real or virtual forums within which practitioners and policy makers can communicate the needs of real-world settings more easily and frequently to researchers could help build these linkages. Existing structures and resources could be used to support two-way linkages and communication channels between researchers and those on the ground. The jargon used by researchers and academicians may not translate easily for practitioners, policy makers, and community members, making it important to communicate research findings in plain language that is understandable by all stakeholders.

 

Adapting Evidence-Based Approaches to New or Different Contexts

Implementation should be based on science; however, the intervention also must be flexible enough to fit the particular needs and capacities of the local community it will serve. The effectiveness and sustainability of an intervention are situation-specific and depend on the commitment and capacity of the local community and the availability of resources. Even an intervention with strong supporting evidence will not be adopted if it is culturally unacceptable. The involvement and buy-in of community leaders who reflect the demographic composition of the community are critical.

 

Significance of the Policy Context

Policy, whether at the local or national level, can affect whether an intervention will be implemented and how effectively. Resource and policy priorities may define when and where program interventions receive attention and support. Reaching policy makers with evidence for effective interventions can be a key step in the process of securing adoption.

 

Sustainability

Planning for sustainability must be done at the start of the activity. Sustainability will depend not only on the effectiveness of the intervention, but also on the degree to which the local community feels ownership of the activity. External support cannot be relied on indefinitely. The capacity and support for sustaining the activity need to be included in the plan early on, not as an afterthought or when external support is ending.

 

Resources for Research and Implementation

Violence prevention interventions that are based on demonstrably effective approaches depend on the availability of resources for research and implementation. Carrying out research and evaluation on the implementation of programs is not possible without resources. Innovation and technological advances can sometimes facilitate data collection and analysis, cutting the cost of research, but there is always a cost. Local communities are rarely resource-rich, so sustaining intervention programs without some type of external support is challenging. However, the cost of not basing on-the-ground interventions on evidence is too great for the links between research and implementation to be ignored.

 


References

  1. CDC (Centers for Disease Control and Prevention). 2010. Ten leading causes of death by age groups United States—2010. National Vital Statistics System, National Center for Health Statistics, CDC. Available at: http://www.cdc.gov/injury/wisqars/pdf/10LCID_All_Deaths_By_Age_Group_2010-a.pdf (accessed August 6, 2013).
  2. Curtis, N. M., K. R. Ronan, and C. M. Borduin. 2004. Multisystemic treatment: A meta-analysis of outcome studies. Journal of Family Psychology 18(3):411-419. https://doi.org/10.1037/0893-3200.18.3.411
  3. Fixsen, D. L., and K. A. Blase. 2009. Implementation: The missing link between research and practice. NIRN Implementation Brief #1. Chapel Hill, NC: The University of North Carolina, FPG, NIRN, January 2009. Available at: https://files.eric.ed.gov/fulltext/ED507422.pdf (accessed May 25, 2020).
  4. HHS (U.S. Department of Health and Human Services). 2012. Child maltreatment 2011. Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau. Available at: http://www.acf.hhs.gov/programs/cb/research-data-technology/statisticsresearch/child-maltreatment (accessed on August 6, 2013).
  5. Littell, J., M. Popa, and B. Forsythe. 2005. Multisystemic therapy for social, emotional, and behavioral problems in youth aged 10-17. Campbell Systematic Reviews 1. https://doi.org/10.1002/14651858.CD004797.pub4
  6. National Research Council and Institute of Medicine. 2009. Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities. Washington, DC: The National Academies Press. https://doi.org/10.17226/12480.
  7. Office of the Surgeon General, National Center for Injury Prevention and Control, National Institute of Mental Health, and the Center for Mental Health Services. 2001. Youth violence: A report of the surgeon general. Rockville, MD: HHS. Available at: https://pubmed.ncbi.nlm.nih.gov/20669522/ (accessed May 25, 2020).
  8. Sherman, L. W., D. Gottfredson, D. MacKenzie, J. Eck, P. Reuter, and S. Bushway. 1997. Preventing crime: What works, what doesn’t, what’s promising: A report to the United States Congress. Available at: https://www.ncjrs.gov/pdffiles/171676.PDF (accessed on August 6, 2013).
  9. Spoth, R., L. A. Rohrbach, M. Greenberg, P. Leaf, C. H. Brown, A. Fagan, R. F. Catalano, M. A. Pentz, Z. Sloboda, and J. D. Hawkins. 2013. Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci Impact) framework. Prevention Science 14(4):319-351. https://doi.org/10.1007/s11121-012-0362-6
  10. Truman, J. L. 2011. Criminal victimization, 2010. Bureau of Justice Statistics, NCJ 235508. Washington, DC: U.S. Department of Justice. Available at: https://www.bjs.gov/content/pub/pdf/cv10.pdf (accessed May 25, 2020).
  11. Walker, A. E., J. Grimshaw, M. Johnston, N. Pitts, N. Steen, and M. Eccles. 2003. PRIME—PRocess modeling in ImpleMEntation research: Selecting a theoretical basis for interventions to change clinical practice. BMC Health Services Research 3(22). https://doi.org/10.1186/1472-6963-3-22

 

DOI

https://doi.org/10.31478/201310b

Suggested Citation

Baum, K., K. M. Blakeslee, J. Lloyd, and A. Petrosino. 2013. Violence Prevention: Moving from Evidence to Implementation. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201310b

Acknowledgments

The authors acknowledge Dean Fixsen for comments on an earlier draft of this paper.

Disclaimer

The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

