Evaluation of the Access to Justice in Both Official Languages Initiative

3. Methodology

The evaluation strategy was based on an approach using multiple sources of evidence. The information gathered was analyzed by comparing the results of the various sources of evidence, which in this case included documentation and various types of key informant interviews. To the extent possible, emphasis was placed on results that were confirmed by multiple sources as part of a triangulated approach. Findings were documented and analyzed using an evidence matrix.

The following sub-sections describe each line of evidence used for this evaluation.

3.1. Document and Data Review

A document and data review was conducted to inform the development of data collection instruments and to address the evaluation issues. The review focused on the contextual, management and operational framework for the Initiative to answer the relevant evaluation issues and questions, as outlined in the evaluation matrix. The documents included annual reports, previous evaluations, public reports and literature. An administrative and financial data review was also conducted and centered on data collected by the program with a focus on project outputs, outcomes and costs to better understand efficiency.

3.2. Key Informant Interviews

The key informant interviews conducted for this evaluation addressed the majority of the evaluation questions and were a significant line of evidence for gathering information on the need for, and the effectiveness of, the Initiative. Interviews were conducted with a total of 45 individuals within the following groups of stakeholders:

Most interviews were conducted by telephone in either English or French, depending on the preferences of the respondents. Respondents in the National Capital Region were given the choice of having the interview conducted in person or over the telephone.

3.3. Mini Case Studies of Recipient Organizations’ Projects

During the key informant interviews, respondents were asked specific questions about projects funded by the Initiative (in addition to questions addressing other evaluation issues). This information allowed the evaluation team to focus and report on specific projects, which included the following:

Between two and five respondents provided views for each of the above projects.

3.4. Limitations and Mitigation Strategy

The evaluation findings should be reviewed in the context of the following limitations:

The evaluation relied mostly on documentation and a limited set of key informant interviews. Only a limited number of respondents outside the program and not affiliated with the projects were interviewed as part of the evaluation (n=9). To mitigate this limitation, the evaluation team developed a list of the most knowledgeable respondents with the help of program representatives. The evaluation team also maximized the use of each interview by combining questions related to the Initiative overall with specific questions about the projects with which the respondents were associated. Finally, interviewees had the opportunity to present views about projects they were not directly affiliated with, which provided an additional external perspective on those projects.

Another limitation was associated with the quality of the performance information, which came mainly from the final reports of project recipients. The narrative format and the inconsistencies in reporting practices among recipient organizations did not allow the evaluation team to aggregate the reported impacts across projects. However, these information sources provided useful output and background information.