Evaluation of the Investigative Powers for the 21st Century Initiative

3. Evaluation Methodology

An interdepartmental Evaluation Working Group was established to support the evaluation by providing inputs, advice and suggestions regarding the design and conduct of the evaluation. The Working Group was established at the outset of the evaluation and included IP21C officials and representatives from evaluation units from each of the federal partner departments.

The methodology for this evaluation included multiple lines of evidence and employed the following data collection methods:

3.1. Document Review

The main internal documents reviewed included the following:

3.2. Review of Performance Information

As part of the Initiative’s performance measurement strategy, IP21C officials compiled performance data associated with each of the intended outcomes. At the time of the evaluation, information was available for three years: 2015-16 to 2017-18 inclusive. This data was reviewed for the purposes of the evaluation.

3.3. Literature Review

A focussed literature review was undertaken of articles and reports on topics such as trends in cybercrime and challenges to law enforcement, including Europol’s annual Internet Organised Crime Threat Assessment (IOCTA) and assessment reports by the Council of Europe on implementation of the Budapest Convention by Member States. In addition, an online search was conducted to identify court cases pertaining to the IP21C investigative powers between March 2015 and March 2019, and relevant case law was reviewed.

3.4. Review of Trends in Cybercrime and Computer-Assisted Crime

This review focussed on the collection and analysis of data on the incidence, investigation and resolution of two general categories of crime involving computer services:

Documents reviewed for this trends analysis included data on the reporting of cybercrimes to police services in Canada; Eurobarometer surveys in Europe that included questions on cyber security incidents experienced by members of the public; and surveys of business organizations in Canada, and of business and charitable organizations in England and Wales, regarding their approaches to cybersecurity and their experiences of cyber incidents.

3.5. Key Informant Interviews

A total of 36 key informant interviews were conducted, consisting of both internal and external key stakeholders. The breakdown is as follows:

In reporting the findings from the interviews, the following scale was used:

3.6. Limitations

The evaluation encountered a few methodological limitations or challenges, as discussed below by line of evidence.

Review of trends in cybercrime and computer-assisted crime. Many published surveys and estimates of the scale of cybercrime and its impacts are considered unreliable, incomplete and/or inconsistent. In turn, these data weaknesses limit the evidence base available to inform the development of cybercrime policies, response strategies and the allocation of resources, as well as the assessment of actions taken. Principal weaknesses and challenges identified in the literature reviewed include the following:

Review of Performance Information. IP21C officials were able to provide performance information for the three years of implementation. However, it was challenging for the evaluation to assess the effectiveness of the awareness and training activities undertaken as part of the Initiative, as post-training evaluations had not been implemented at the time the training was delivered. To address this gap, the evaluation used key informant interviews to collect this information. Although many key informants could not recall the specific training activities in which they had participated, most were very familiar with the key elements of the PCOCA.

Key Informant Interviews. One limitation was the possibility of bias introduced by the approach to sampling for the key informant interviews, as well as by the voluntary nature of participation in this data collection method. Self-reported response bias occurs when individuals reporting on their own activities seek to portray themselves in the best light; strategic response bias occurs when participants answer questions with the desire to affect outcomes. To alleviate these biases, the evaluation ensured that the list of key informants was balanced, so that a knowledgeable pool of respondents and a variety of internal and external perspectives were represented.

Mitigation Strategy. To mitigate these limitations, the evaluation used multiple lines of evidence and triangulation to confirm the results.