FASD and TRC Call to Action 34.4: A Consideration of Evaluation Methods

Types of Evaluation

The following section examines evaluation methodologies in light of the limited information available about FASD and justice programs that have been evaluated. These approaches are not exhaustive; they are a starting point for thinking about evaluation design. The authors have also included tips for modifying methodologies so that they are FASD-informed. Basic methodologies for research or evaluation fall into two broad categories: qualitative and quantitative.

Qualitative approaches gather information from individuals, typically in a written, visual, or oral format. Rather than focusing on statistical analysis, they emphasize narrative detail captured through methods that centre on story, image, and depiction, such as interviews, focus groups, and arts-based practices. What follows are some considerations for gathering such information, along with modifications that could be made to support persons with FASD during the evaluation process:

Note: When working with Indigenous peoples or communities, be aware of cultural protocols or norms that might be associated with asking individuals to share their story (whether through an interview or an expressive arts practice). For example, if something were to come up related to a cultural story, it should be understood that in many Indigenous communities there are stories that are only told during a specific season or in a particular location, or that are not shared with those outside the community. Appropriate protocols should be understood and respected. Working collaboratively with community can include employing community research associates who understand these norms and are also positioned to translate and analyze visual and oral material. The evaluation team needs to be trauma-informed and to understand the broader contexts and histories within which the project and evaluation are being conducted. The process of sharing stories can be triggering, and the types of questions asked should be designed with a trauma lens to minimize re-traumatizing people who are involved in the project.

Quantitative approaches typically gather information from many people for the purpose of comparison, capturing responses as numbers that can be compiled into a database. Such information is most commonly gathered through structured instruments such as surveys and questionnaires.

In practice, the evaluation team must consider not only what they wish to evaluate, but also who they need to consult to gather the appropriate information. If, for example, an organization wanted to evaluate the effectiveness of a program in supporting individuals with cognitive disabilities through the criminal justice process, this could be done in a number of ways. First, is the evaluation local or broad? Is the effectiveness of program workers in only one city being evaluated, or in an entire province, or across all of Canada? If the evaluation is at the local level, there might be a small enough number of workers that individual interviews or focus groups would work best and would gather the most detailed information. If the evaluation is national, it would likely be logistically and financially impossible to interview every worker; in this case, a well-designed survey would be preferable.

The evaluator must also recognize that program effectiveness cannot be evaluated solely on the basis of information provided by the workers themselves. Other points of view should be collected, including those of judges, lawyers, and the individuals receiving the service, in order to achieve a holistic understanding of the program’s effectiveness. By evaluating only one group in any given context, the evaluation risks producing a fragmented understanding.

The “Best Practices for FASD Service Delivery: Guide and Evaluation Toolkit” (Pei et al. n.d.), developed collaboratively in Alberta, identifies four interconnected aspirational principles for evaluation. The toolkit was an interdisciplinary endeavor to meet the needs of people with FASD, and such interdisciplinarity is itself an established best practice; making the toolkit freely available online is another. The toolkit’s guiding principles can be used directly in program design and evaluation, or revised to guide local practices. These principles are consistency, collaboration, interdependence, and proactivity. The guide presents these aspirational principles and outlines the level of evidence supporting best practices in each area. The Toolkit also includes template surveys for use in evaluation. While not created specifically for the criminal justice context, this resource may assist agencies trying to move towards FASD-informed and best practices. As the document notes, much of what guides FASD programming and practice could be described as collective wisdom. Collective wisdom, however, does not always align with what the literature tells us. Accordingly, a balance must be struck between grounding projects in collective wisdom and lived experience and identifying the ways in which empirical evidence can support that wisdom and experience.

The following excerpt from the “Best Practices for FASD Service Delivery: Guide and Evaluation Toolkit” describes the core principles:

  1. Consistency – in placement, relationship and approach. This includes stable living conditions, long term relationships, and support structures that are the same between settings. Consistency in all of these aspects promotes a system in which responses are structured and dependable.
  2. Collaboration – truly integrated systems of responding are needed from the grass roots to the policy development level. This requires organizational support, including time allotments for meetings and intentional strategy planning between types of services and levels of service delivery. All points of care should be educated on FASD in order to promote common goals, and a consistent message and approach.
  3. Interdependence – the delicate dance between dependency and complete independence, in which expectations are managed based on each client’s individual situation. This includes anticipation of transition periods and clear planning to navigate change in proactive ways. Programs should harness the development of individuals’ competencies in a supportive environment that recognizes the need for a lifelong supportive role.
  4. Proactivity – learning to anticipate rather than respond. This approach fosters control and promotes a success focused trajectory rather than the use of problem avoidance strategies. Early interventions are key to developing change oriented behaviours and preventing secondary disabilities (Pei et al. n.d., 6).

The document then offers a summary of best practices ranked by the level of evidence supporting each practice. This can be helpful for program design and evaluation, as it offers FASD-informed best practices from which measurable goals can be established.

The Best Practices Guide offers the following advice in the context of evaluation:

  1. Consistency: Programs developed to support individuals could focus on consistency, which could be measured by relative stability throughout the criminal justice experience (e.g., fewer breaches while on conditions, or fewer charges while in custody, when needs are met). This measure would be quantitative in nature and involve case file management and review.
  2. Collaboration: Programs could place an emphasis on breaking down silos and facilitating collaboration, and these collaborations could increase understanding and awareness. While a simple survey or questionnaire can measure increased awareness, it does not necessarily reveal whether increased awareness changes the actions of workers; more elaborate interviews, or pre/post-surveys asking about awareness, could address this.
  3. Interdependence: Programs that place an emphasis on fostering interdependence might consider a series of interviews with their clients over the course of months or even years. This would be time-consuming but valuable. Alternatively, an arts-based method could yield nuanced data.
  4. Proactivity: Most programs aspire to be proactive. The question is: proactive about what? One possible method is to examine the program design and identify which elements are understood to be proactive, and then interview staff on a semi-regular basis about their understanding of the issues they must be proactive about. This could be combined with case file reviews to identify issues that arose that could have been addressed with more proactive measures.

Adding to the Best Practices Guide, and returning to the research questions focused on being culturally-responsive and non-stigmatizing: before an agency can engage in evaluative work that is non-stigmatizing and culturally-responsive, one could argue, it would need to embed these aspirations in its program design. A number of academic and online resources can assist agencies in better understanding how to develop ground-up collaborations to support the co-design of programs and evaluations; some of these are listed in the Annex to this paper. “Indigenous Approaches to Program Evaluation” (National Collaborating Centre for Aboriginal Health 2013) offers an overview of different types of program evaluation, including needs assessment, logic models, and how to assess impact. Embedded in this document is the importance of stakeholder engagement, as well as of participatory methods in producing evaluation results. Also included is a discussion of protocols and respectful engagement, embracing Kirkness and Barnhardt’s (2001) four Rs of working with Indigenous communities: Respect, Relevance, Reciprocity, and Responsibility. The authors of this report advocate strongly for on-the-ground collaboration in co-designing programs and evaluations.