FASD and TRC Call to Action 34.4: A Consideration of Evaluation Methods
Types of Evaluation
The following section examines evaluation methodologies, drawing on the limited information available about FASD and justice programs that have been evaluated. These approaches are not exhaustive; they are a starting point for thinking about evaluation design. The authors have also included tips for modifying methodologies so that they are FASD-informed. Basic methodologies for research or evaluation fall into two broad categories: qualitative and quantitative.
Qualitative approaches gather information from individuals, typically in written, visual, or oral form. Rather than relying on statistical analysis, they emphasize detail captured through narrative, image, and depiction. What follows are some ways to gather such information, along with modifications that could support persons with FASD during the evaluation process:
- Conversation or interview methods such as:
- Interviews (a discussion between two or more people). Interviews can be structured, with a formal set of questions that each person is asked; less structured, with a guide of questions or ideas to discuss; or entirely informal, in which an organic conversation takes place.
- Tip: Make sure interview questions are written at a grade five level or lower and in plain language.
- Oral histories are a longer discussion focused on collecting life stories. The knowledge gathered from individuals focuses on their first-hand experiences and can be used to offer a broader context to their life and circumstances.
- Tip: Make sure you build in numerous breaks during lengthy conversations to avoid fatigue.
- Focus groups, in which several people (typically around 8-10) are gathered for a discussion that, again, may be structured, semi-structured, or informal.
- Tip: Try to reduce the number of distractions in the room by dimming lights, removing busy wall art, and reducing noise-creating objects.
- Visual and audio exploration methods such as:
- Photovoice, where individuals are given a camera and directed to capture the essence of a program or experience. This strategy can be powerful when the visual images are narrated by the photographer and help to identify areas that are succeeding or need improvement.
- Tip: Allow a support person to accompany individuals with FASD to complete their evaluation contribution.
- Arts-based evaluation tools are plentiful and limited only by the evaluator’s imagination. Evaluation could happen through drawing (pictures, maps, graphs, etc.), creating (murals, crafts, quilting/sewing, memory-boxes, etc.), dance, music, and so forth. These methods have been developed in a number of settings and are frequently used in health research. They can produce long-lasting material evidence of participant engagement (such as murals or other visual work), meaning the art can serve as both evidence of the process and a final product. Arts-based evaluation allows for a client-centered practice that does not rely as heavily on verbal engagement as interviews and surveys might; it can use semi-structured prompts to which the response is artistic, and it combines effectively with other research methods. The evaluator must be positioned to lead the process and translate art into interpretable, accurate, and meaningful evaluation results. Arts-based evaluation schemes have great potential to make evaluation strategies accessible both to those designing the project and to those participating in it.
- Tip: Be aware of individual sensory sensitivities, such as touch or smell, when selecting art mediums (e.g., strong scented markers and paints, or play-doh/clay).
Note: When working with Indigenous peoples or communities, be aware of cultural protocols or norms that might be associated with asking individuals to share their story (whether through an interview or an expressive arts practice). For example, in many Indigenous communities there are stories that are only told during a specific season or in a particular location, or that are not shared with those outside the community; if such a story were to come up, the appropriate protocols should be understood and respected. Working collaboratively with community can include employing community research associates who understand these norms and are positioned to translate and analyze visual and oral material. The evaluation team needs to be trauma-informed and to understand the broader contexts and histories within which the project and evaluation are being conducted. The process of sharing stories can be triggering, and questions should be designed with a trauma lens to minimize re-traumatizing the people involved in the project.
Quantitative approaches typically gather information from many people for the purpose of comparison, capturing responses as numbers that form a dataset. The most common ways to gather such information are:
- Surveys, in which the researcher asks a series of well-structured, typically closed-ended questions to which people can respond in only a set number of ways. For example, a question such as “How likely are you to recommend this service to a friend?” might be answered on a scale of 1-5, where 1 is very unlikely and 5 is very likely (a small sketch of how such responses can be tabulated follows this list).
- Tip: Support individuals to complete surveys by helping them to read and/or fill these out in person, as opposed to sending surveys electronically or by mail.
- Administrative data, which involves gathering and examining information including, but not limited to, counts (e.g., how many people use a service) or case files and the details included therein.
- Tip: When counting participants, do not exclude individuals who have not completed a particular program; their participation and their stories are still valid. Consider reaching out to workers to see if the client would like to participate, and build in a more flexible method to include their perspectives. The reason they left the program might point to much-needed modifications that would improve retention for others in the future.
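To illustrate how the survey responses described above might be summarized, here is a minimal sketch in Python. The data, variable names, and the use of `None` to flag non-completers are hypothetical assumptions for illustration, not part of any toolkit referenced in this paper:

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to "How likely are you to recommend this
# service to a friend?" (1 = very unlikely, 5 = very likely).
# None marks a participant who left the program before responding;
# per the tip above, they are counted rather than silently dropped.
responses = [5, 4, 4, 3, 5, 2, None, 4, 5, None, 3]

answered = [r for r in responses if r is not None]
left_early = len(responses) - len(answered)

print("Participants contacted:", len(responses))
print("Left before completing:", left_early)
print("Distribution:", dict(sorted(Counter(answered).items())))
print("Mean rating: {:.2f} / 5".format(mean(answered)))
```

Even a simple tally like this keeps non-completers visible in the totals, consistent with the tip above about not excluding individuals who did not finish a program.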
In practice, the evaluation team must consider not only what they wish to evaluate, but also who they need to consult to gather the appropriate information. If, for example, an organization wanted to evaluate the effectiveness of a program in supporting individuals with cognitive disabilities through the criminal justice process, this could be done in a number of ways. First, is the evaluation local or broad? Is the effectiveness of program workers being evaluated in one city, an entire province, or all of Canada? If the evaluation is local, there might be a small enough number of workers that individual interviews or focus groups would work best and gather the most detailed information. If the evaluation is national, it would likely be logistically and financially impossible to interview every worker; in that case, a well-designed survey would be preferable.
The evaluator must also recognize that program effectiveness cannot be evaluated based solely on information provided by the workers themselves. Other points of view should be collected, including those of judges, lawyers, and the individuals receiving the service, in order to achieve a holistic understanding of the program’s effectiveness. Evaluating only one group in any given context risks a fragmented understanding.
The “Best Practices for FASD Service Delivery: Guide and Evaluation Toolkit” (Pei et al. n.d.), developed collaboratively in Alberta, sets out four interconnected aspirational principles for evaluation: consistency, collaboration, interdependence, and proactivity. The toolkit was an interdisciplinary endeavor to meet the needs of people with FASD, which is itself an established best practice, as is making the toolkit available free and online. Its guiding principles can be used directly in program design and evaluation, or revised to guide local practices. The guide outlines the level of evidence supporting best practices in each area, and the toolkit includes template surveys to be used in evaluation. While not created specifically for the criminal justice context, this resource may assist agencies trying to move toward FASD-informed best practices. As noted in the document, much of what guides FASD programming and practices could be described as collective wisdom. Collective wisdom, however, does not always align with what the literature tells us. Accordingly, there is a need to strike a balance between grounding projects in collective wisdom and lived experience and identifying the ways in which empirical evidence can support that wisdom and experience.
The following excerpt from the “Best Practices for FASD Service Delivery: Guide and Evaluation Toolkit” describes the core principles:
- Consistency – in placement, relationship and approach. This includes stable living conditions, long term relationships, and support structures that are the same between settings. Consistency in all of these aspects promotes a system in which responses are structured and dependable.
- Collaboration – truly integrated systems of responding are needed from the grass roots to the policy development level. This requires organizational support, including time allotments for meetings and intentional strategy planning between types of services and levels of service delivery. All points of care should be educated on FASD in order to promote common goals, and a consistent message and approach.
- Interdependence – the delicate dance between dependency and complete independence, in which expectations are managed based on each client’s individual situation. This includes anticipation of transition periods and clear planning to navigate change in proactive ways. Programs should harness the development of individuals’ competencies in a supportive environment that recognizes the need for a lifelong supportive role.
- Proactivity – learning to anticipate rather than respond. This approach fosters control and promotes a success focused trajectory rather than the use of problem avoidance strategies. Early interventions are key to developing change oriented behaviours and preventing secondary disabilities (Pei et al. n.d., 6).
The document then offers a summary of best practices ranked by the level of evidence supporting each practice. This could be helpful for program design and evaluation, as it offers FASD-informed best practices for establishing measurable goals.
The Best Practices Guide offers the following advice in the context of evaluation:
- Consistency: Programs developed to support individuals could focus on consistency, which could be measured by relative stability across the entire criminal justice experience (e.g., fewer breaches while on conditions, or fewer charges while in custody, when needs are met). This would be quantitative in nature and involve case file management and review.
- Collaboration: Programs could place an emphasis on breaking down silos and facilitating collaboration, which could increase understanding and awareness. A simple survey or questionnaire can measure increased awareness, though it does not necessarily reveal whether increased awareness changes the actions of workers; more elaborate interviews could be undertaken, or pre/post-surveys asking about awareness could be conducted (a minimal sketch of such a pre/post comparison follows this list).
- Interdependence: Programs that place an emphasis on fostering interdependence might consider a series of interviews with their clients over the course of months or even years. This would be time-consuming but valuable. Alternatively, an arts-based method could yield nuanced data.
- Proactivity: Most programs aspire to be proactive. The question is: proactive about what? One possible method is to identify which elements of the program design are understood to be proactive, and then interview staff on a semi-regular basis about the issues they believe they must be proactive about. This could be combined with case file review to identify issues that arose that could have been addressed with more proactive measures.
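As a rough illustration of the pre/post awareness comparison mentioned under Collaboration, the following Python sketch pairs each worker’s before-and-after self-rated awareness and reports the average change. The ratings, scale, and worker labels are hypothetical; a real evaluation would need a validated instrument and an appropriate statistical test:

```python
from statistics import mean

# Hypothetical self-rated FASD awareness (1-5) for the same workers,
# recorded before and after a collaboration/training initiative.
pre  = {"worker_a": 2, "worker_b": 3, "worker_c": 2, "worker_d": 4}
post = {"worker_a": 4, "worker_b": 4, "worker_c": 3, "worker_d": 4}

# Pair by worker so each change is measured within the same person.
changes = [post[w] - pre[w] for w in pre]

print("Mean awareness before: {:.2f}".format(mean(pre.values())))
print("Mean awareness after:  {:.2f}".format(mean(post.values())))
print("Mean change per worker: {:+.2f}".format(mean(changes)))
print("Workers reporting an increase:", sum(c > 0 for c in changes))
```

As the Collaboration point notes, a gain in self-reported awareness like this does not by itself show that workers’ actions changed; it would need to be paired with interviews or case file review.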
Adding to the Best Practices Guide, and returning to the research questions focused on being culturally responsive and non-stigmatizing: before an agency can engage in evaluative work that is non-stigmatizing and culturally responsive, one could argue, it would need to embed these aspirations in its program design. A number of academic and online resources can assist agencies in developing ground-up collaborations to support the co-design of programs and evaluations; some of these are listed in the Annex to this paper. “Indigenous Approaches to Program Evaluation” (National Collaborating Centre for Aboriginal Health 2013) offers an overview of different types of program evaluation, including needs assessment, logic models, and how to assess impact. Embedded in this document is the importance of stakeholder engagement, as well as participatory methods that allow communities to shape evaluation results. Also included is a discussion of protocols and respectful engagement, embracing Kirkness and Barnhardt’s (2001) four Rs of working with Indigenous communities: Respect, Relevance, Reciprocity, and Responsibility. The authors of this report advocate strongly for on-the-ground collaboration in co-designing programs and evaluations.