2014 Foundation Grant “Live Pilot” Report

Executive Summary

The 2014 Foundation Grant “live pilot” competition marks a major milestone in the transition to the new Investigator Initiated Programs and peer review processes. CIHR took a measured approach to piloting the new funding program, peer review processes and enabling systems to ensure we could monitor outcomes in an evidence-informed fashion and make adjustments and refinements to the design, as needed.

This report provides an overview of the 2014 Foundation Grant “live pilot” competition results; summarizes feedback received from researchers, peer reviewers, Virtual Chairs and research administrators; and confirms enhancements made to the 2015 Foundation Grant “live pilot” competition.

Overall, feedback was positive regarding the effectiveness of the structured application process, the clarity of the adjudication criteria and adjudication scale, and the value of the structured review process. A number of areas for improvement, and the need for continued monitoring, have been identified to ensure the Foundation Grant program meets its objectives. CIHR is implementing enhancements to the 2015 Foundation Grant “live pilot” competition based on the feedback received, as outlined in Table 1.

Table 1. Enhancements implemented to the 2015 Foundation Grant “live pilot” competition

Stage 1

What we heard:
  • A majority of reviewers found it challenging to apply the adjudication criteria and rate applications across career stages.
  • A minority of reviewers agreed that new/early career investigators should be ranked separately, and that a separate stream should exist within the competition.
Changes implemented:
  • The Stage 1 Foundation Interpretation Guidelines have been revised to ensure that the Leadership sub-criterion is applicable across career stages.
  • Operational changes have been made to have new/early career investigators reviewed separately at each stage of the competition.

What we heard:
  • A small minority of applicants and reviewers reported significant overlap between the Productivity and Significance of Contributions sections, and a lack of clarity around what should be included in the Vision and Program Direction section.
Changes implemented:
  • The Stage 1 Foundation Interpretation Guidelines have been revised to provide more clarity to reviewers in applying the adjudication criteria and to better assist applicants in structuring their applications.

What we heard:
  • Applicants and reviewers suggested limit increases and additions to sections of the Foundation CV.
Changes implemented:
  • The Foundation CV has been modified by:
    • Increasing the limits for Publications, Presentations, Recognitions, and Supervisory Activities;
    • Adding new sections for Reviewer and Assessment Activities and Memberships.

Stage 2

What we heard:
  • A minority of applicants and reviewers reported a lack of clarity as to what information should be provided to address each adjudication criterion.
Changes implemented:
  • The Stage 2 Foundation Interpretation Guidelines have been revised to improve clarity and further define the intent of each adjudication criterion.

What we heard:
  • A majority of applicants and reviewers recommended that character limits for the Research Approach section be increased.
Changes implemented:
  • The length of the Research Approach section has been increased by one page, to three pages.

What we heard:
  • A majority of applicants and reviewers recommended that the weighting of the sub-criteria be adjusted: increase Research Concept and Research Approach; decrease Mentorship and Training and Quality of Support Environment.
Changes implemented:
  • The weighting of the sub-criteria has been adjusted:
    • Research Concept: 25% (increased by 5%)
    • Research Approach: 25% (increased by 5%)
    • Expertise: 20% (no change)
    • Mentorship and Training: 20% (no change)
    • Quality of Support Environment: 10% (decreased by 10%)

What we heard:
  • A minority of reviewers reported that the budget assessment process was not clear.
Changes implemented:
  • The length of the budget justification has been increased by half a page, to a full page.
  • Clearer instructions have been developed regarding what to include in budget justifications.
  • Applicants have been provided with their baseline amounts at Stage 2.
  • Reviewers have been provided with training on how to review the appropriateness of the proposed budget.

Final Assessment Stage

What we heard:
  • A majority of FAS reviewers were dissatisfied with the structured review process.
Changes implemented:
  • Operational changes to improve the FAS review process are being considered, including clarifying the role of FAS reviewers (and the expertise required) and developing additional training.

Ensuring High Quality Reviews

What we heard:
  • A small minority of reviewers suggested that online discussions be synchronous; that alerts be provided to notify reviewers of discussions on assigned applications; and that online discussions be mandatory where there are scoring discrepancies.
Changes implemented:
  • A tool has been developed that allows Virtual Chairs to produce reports to assist them in identifying applications for discussion (e.g., applications with high scoring discrepancies).
  • The ability for Virtual Chairs to flag an application for discussion has been built into the system.
  • The same Virtual Chairs will be used at both Stage 1 and Stage 2, where possible, for increased continuity.
  • The benefits and operational requirements of introducing synchronous discussions at Stage 2 are being explored.
  • Online discussions and Virtual Chair performance will be monitored to continue building effective strategies to ensure that meaningful online discussions take place.

What we heard:
  • A small minority of applicants and Virtual Chairs, and a majority of reviewers, indicated that reviews for some applications were not of the expected quality.
Changes implemented:
  • A means for direct communication between Virtual Chairs and reviewers has been developed, enabling Virtual Chairs to prompt reviewers to revise and/or expand their reviews as required.
  • Additional training materials have been developed on the quality of reviews, as well as a mechanism to ensure that training materials are accessed by all reviewers.

CIHR will further evaluate specific design elements over the course of multiple Foundation and Project Grant competition cycles to determine whether additional changes to the funding programs, peer review processes and enabling systems are required. It is our intention to keep the research community informed of any additional changes to either the new funding programs or the peer review processes as the pilots continue. We appreciate the research community's input and thank them for their helpful suggestions.

1. Introduction

CIHR has undertaken a redesign of its Open Suite of Programs (now referred to as Investigator Initiated Programs) with the aim of creating a flexible and sustainable system capable of supporting leading-edge health research. Through the new Investigator Initiated Programs and peer review processes, CIHR intends to meet the needs of a broader disciplinary mix of researchers within CIHR’s mandate while decreasing applicant and reviewer burden and improving the fairness and quality of peer review. To deliver on this, the existing suite of investigator initiated funding mechanisms is being reconstituted into two new programs: the Foundation Grant and the Project Grant. While the Project Grant is about supporting ideas with the greatest potential for important advances in health-related knowledge, the health care system, and/or health outcomes, the Foundation Grant is designed to contribute to a sustainable foundation of health research leaders, by providing long-term support of innovative, high-impact programs of research.

The 2014 Foundation Grant “live pilot” competition is CIHR’s first Foundation Grant competition. At various stages of the competition, CIHR surveyed researchers, research administrators, reviewers, and Virtual Chairs to obtain their feedback on the application and peer review processes and identify areas for improvement. The feedback provided through surveys is invaluable in helping to inform and to improve the implementation of the new Investigator Initiated Programs and peer review processes.

This report presents both the results of these surveys and the results of the competition with an analysis of demographic data. We remain committed to keeping the research community and other stakeholders involved and informed.

2. Competition Overview

The Foundation Grant is expected to:

  • Support a broad base of research leaders across career stages, areas, and disciplines relevant to health;
  • Develop and maintain Canadian capacity in research and other health-related fields;
  • Provide research leaders with the flexibility to pursue new, innovative lines of inquiry;
  • Contribute to the creation and use of health-related knowledge through a wide range of research and/or knowledge translation activities, including any relevant collaboration.

To support the objective of the program – a sustainable foundation of health research leaders – there would be an annual intake of new investigators into the Foundation Grant portfolio. For this first competition, a minimum intake of 15% of funded grants was established for new/early career investigators.

Multi-Stage Competition and Review Process

The 2014 Foundation Grant “live pilot” competition involved a three-stage competition and review process (as illustrated in Figure 1).

Stage 1 focused on the Caliber of the Applicant and on Vision and Program Direction; Stage 2 focused on the Quality of the Proposed Program of Research and the Quality of the Expertise, Experience and Resources. The adjudication criteria and associated weightings for Stage 1 and Stage 2, as applied within the 2014 Foundation Grant “live pilot” competition, are outlined below (Table 2).

Table 2. Adjudication criteria for the 2014 Foundation Grant “live pilot” competition

Stage 1
  Criterion 1: Caliber of the Applicant (weighting: 75%)
    • Leadership
    • Significance of Contributions
    • Productivity
  Criterion 2: Vision and Program Direction (weighting: 25%)

Stage 2
  Criterion 1: Quality of the Program (weighting: 40%)
    • Research Concept
    • Research Approach
  Criterion 2: Quality of the Expertise, Experience and Resources (weighting: 60%)
    • Expertise
    • Mentorship and Training
    • Quality of Support Environment

Stage 1 and 2 reviews were conducted remotely by expert reviewers supported by an internet-assisted platform that enabled communication among reviewers through asynchronous online discussion. Each application was assigned to up to 5 reviewers, based on appropriate matching between the application content and reviewer expertise, and each reviewer was assigned between 10 and 18 applications. Reviewers self-declared their ability to review each application based on the summary of the research proposal. Virtual Chairs assisted CIHR in validating the appropriateness of reviewer assignments. Reviewers at Stage 1 and Stage 2 had five weeks to complete their reviews: four weeks to review applications and one week for online discussions.

Reviewers assessed their assigned applications by providing a structured review that consisted of rating each sub-criterion and briefly commenting on the strengths and weaknesses in each section. At Stage 2, reviewers assessed the requested budget and justification to determine if the requested budget was appropriate to support the proposed program of research. The budget assessment was not factored into the scientific assessment of the application. As each review was completed, an initial rank order of the reviewer's set of assigned applications was automatically calculated based on the ratings. Once their initial ratings were complete, reviewers had access to the preliminary reviews of other reviewers who were assigned the same application so as to prompt discussion when warranted.
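
The report states only that an initial rank order was calculated automatically from the ratings; the conversion itself is not described. The sketch below is therefore a hypothetical illustration, assuming a numeric mapping of the adjudication-scale descriptors (the values are invented) and a weighted sum using the equal 25% Stage 1 weightings noted later in this report.

    # Hypothetical sketch: turning one reviewer's ratings into an initial
    # rank order. The numeric values for the scale descriptors and the use
    # of a weighted sum are assumptions; the report says only that ranks
    # were "automatically calculated based on the ratings".
    SCALE = {"O++": 4.4, "O+": 4.2, "O": 4.0, "E++": 3.4, "E+": 3.2,
             "E": 3.0, "G": 2.0, "F": 1.0, "P": 0.0}
    WEIGHTS = {"Leadership": 0.25, "Significance of Contributions": 0.25,
               "Productivity": 0.25, "Vision and Program Direction": 0.25}

    def score(ratings: dict) -> float:
        """ratings maps sub-criterion name -> scale descriptor."""
        return sum(WEIGHTS[c] * SCALE[r] for c, r in ratings.items())

    def initial_rank_order(apps: dict) -> list:
        """apps maps application id -> ratings dict; returns ids ordered
        from highest to lowest weighted score (best first)."""
        return sorted(apps, key=lambda a: score(apps[a]), reverse=True)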

As part of the asynchronous online discussion, a Virtual Chair was assigned to the application to moderate the discussion. The Virtual Chairs were tasked with ensuring discussions took place if warranted, and could prompt reviewers to have a discussion. After the discussions, reviewers were given the opportunity to make adjustments to their reviews as required, including changing their ratings, rankings or comments. If a tie occurred in the ranking, reviewers were required to break the tie by changing application rank order positions up or down. Reviewers were asked to confirm the rank order of their assigned applications before submitting the final rankings to CIHR.

Each application was attributed an individual reviewer percent rank based on the rank order provided by each reviewer. The average of all individual reviewer percent ranks for a given application provided a consolidated percent rank. The standard deviation of the consolidated percent rank indicated the variability among reviewer rankings. Based on the consolidated percent ranks, an overall rank order for the competition was obtained. Note that the consolidated percent rank for each application was used to make decisions on which applications would move forward to the next stage.
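
The arithmetic in this paragraph can be sketched as follows. The exact percentile convention is not given in the report, so the formula below (a reviewer's best-ranked application maps to 100, the worst to 0) is an illustrative assumption.

    # Illustrative sketch of the consolidated percent rank calculation.
    # percent_rank's mapping of rank positions onto a 0-100 scale is an
    # assumed convention; the report does not state the exact formula.
    from statistics import mean, stdev

    def percent_rank(rank: int, n_assigned: int) -> float:
        """Convert a reviewer's rank (1 = best) over n_assigned
        applications into a percent rank where higher is better."""
        return 100.0 * (n_assigned - rank) / (n_assigned - 1)

    def consolidate(reviewer_ranks):
        """reviewer_ranks: (rank, n_assigned) pairs, one per reviewer.
        Returns the consolidated percent rank (the mean) and its standard
        deviation, which indicates variability among reviewer rankings."""
        prs = [percent_rank(r, n) for r, n in reviewer_ranks]
        return mean(prs), stdev(prs)

    # Example: one application ranked by five reviewers.
    print(consolidate([(2, 15), (1, 13), (4, 18), (3, 15), (2, 14)]))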

Applications advanced to Stage 2 if:

  • Their consolidated percent rank was above 65; or
  • At least two reviewers ranked the application above the 78th percentile at Stage 1.

Applications advanced to the Final Assessment Stage (FAS) if:

For mid-career (Footnote 1) and established (Footnote 2) investigators:

  • Their consolidated percent rank was above 65; or
  • At least three reviewers ranked the application above the 70th percentile at Stage 2; or
  • At least two reviewers ranked the application above the 70th percentile at Stage 2 and the consolidated percent rank was above 56.

For new/early career (Footnote 3) investigators:

  • Their consolidated percent rank was above 50; or
  • At least two reviewers ranked the application above the 70th percentile at Stage 2 and the consolidated percent rank was above 40.
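
Taken together, the advancement rules above reduce to simple threshold logic. A minimal sketch, assuming percent ranks on a 0-100 scale (the function and parameter names are illustrative):

    # Sketch of the advancement rules listed above. `consolidated` is the
    # application's consolidated percent rank; `reviewer_prs` holds the
    # individual reviewer percent ranks (0-100 scale assumed).
    def advances_to_stage2(consolidated: float, reviewer_prs) -> bool:
        return (consolidated > 65 or
                sum(pr > 78 for pr in reviewer_prs) >= 2)

    def advances_to_fas(consolidated: float, reviewer_prs,
                        new_early_career: bool) -> bool:
        above_70 = sum(pr > 70 for pr in reviewer_prs)
        if new_early_career:
            return consolidated > 50 or (above_70 >= 2 and consolidated > 40)
        # Mid-career and established investigators
        return (consolidated > 65 or
                above_70 >= 3 or
                (above_70 >= 2 and consolidated > 56))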

The FAS involved the assessment of the results of the Stage 2 reviews. Applications that were assigned an individual reviewer percent rank above the 90th percentile by at least four reviewers were identified as “green zone” applications and were recommended for funding without further discussion. All other applications advancing to the FAS were flagged as “grey zone” applications and were assessed by FAS reviewers prior to the face-to-face interdisciplinary committee meeting.

To help committee members differentiate between “grey zone” applications, a binning system was used. Reviewers were asked to assign applications to a "Yes" bin (to be considered for funding) or a "No" bin (not to be considered for funding). Each reviewer was allocated a minimum number of applications to be placed in each of the "Yes" and "No" bins; this number was based on the number of applications assigned to each reviewer and the anticipated number of applications to be funded. Reviewers were provided two weeks to assess “grey zone” applications and the associated Stage 2 reviews, and were asked to submit their binning results to CIHR prior to the meeting.

Binning results were used to inform the grouping of “grey zone” applications into three groups:

  1. Group A (applications that the majority of the assigned committee members recommended should be funded)
  2. Group B (applications for discussion at the meeting)
  3. Group C (applications that the majority of the assigned committee members recommended should not be funded)
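
A hedged sketch of this triage: the green-zone test is stated explicitly in the report, while the exact boundary between Groups A, B and C is our assumption, since the report does not define how binning results map onto the three groups.

    # Green-zone test as stated: at least four reviewers assigned an
    # individual reviewer percent rank above the 90th percentile.
    def is_green_zone(reviewer_prs) -> bool:
        return sum(pr > 90 for pr in reviewer_prs) >= 4

    # Assumed grouping rule for "grey zone" applications: a majority of
    # "Yes" bins -> Group A, a majority of "No" bins -> Group C, and no
    # clear majority -> Group B (discussed at the meeting). The report
    # does not spell out this boundary.
    def fas_group(yes_bins: int, no_bins: int) -> str:
        if yes_bins > no_bins:
            return "A"
        if no_bins > yes_bins:
            return "C"
        return "B"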

At the meeting, committee members first validated the groupings of applications within each of the three groups. This was achieved by committee members indicating whether they thought applications should be moved to a different group. Once the committee was satisfied with the groupings, the applications in Group A were recommended for funding without further discussion. The applications in Group B were discussed. An estimated funding cut-off line was displayed within the list of Group B applications as a tool to facilitate discussion. Once all of the applications in Group B were discussed, members were asked to vote on the Group B applications. Members were provided with a maximum number of "Yes" votes (to be considered for funding) based on both the number of applications in Group B and the estimated funding cut-off line. The committee’s recommendations for funding were submitted to CIHR for the final review of budgets and funding decisions.

Special considerations for the 2014 Foundation Grant “live pilot” competition

Restricted Eligibility: Eligibility restrictions were imposed on the 2014 Foundation Grant “live pilot” competition in order to manage application pressure. To be eligible to apply, one of the Program Leaders was required to fall into one of the following three groups:

  • On July 30, 2013, the Program Leader is the Nominated Principal Investigator or Co-Principal Investigator of a CIHR Open program grant with an expiry date no earlier than October 1, 2014 and no later than September 30, 2015. CIHR Open programs include:
    • Open Operating Grant Program (OOGP)
    • Partnerships for Health System Improvement (PHSI)
    • Knowledge Synthesis (KRS)
    • Knowledge to Action (KAL)
    • Proof of Principle Program Phase I and II (POP I and POP II)
    • Industry-Partnered Collaborative Research (IPCR)
  • The Program Leader is considered to be a new/early-career investigator at the Stage 1 application deadline.
  • On July 30, 2013, the Program Leader has never held CIHR Open funding as a Nominated Principal Investigator or a Co-Principal Investigator.

The invited cohort of applicants who held CIHR Open funding consisted of researchers recognized as high caliber, which contributed to a highly competitive environment.

New/Early Career Investigators: The success of new/early career investigators was actively monitored over the course of the 2014 Foundation Grant “live pilot” competition. To support the objective of the program – a sustainable foundation of health research leaders – a minimum intake of 15% of grants funded was established for new/early career investigators. In the 2014 Foundation Grant “live pilot” competition, for Stage 1 and Stage 2, all applications were reviewed together irrespective of career stage, and all reviewers were instructed to take into consideration the career stage, research field and institution setting of all applicants when assessing each criterion. In contrast, at the FAS, new/early career investigators were ranked only against other new/early career investigators.

3. Methods

3.1 Competition Design

Since 2013, CIHR has conducted a number of pilots in the context of existing Open programs to inform the transition to the new Investigator Initiated Programs and peer review processes. Piloting competition and peer review design elements allows CIHR to monitor, adjust and refine processes and systems as required in order to best support applicants and reviewers. As pilot studies are completed, they will contribute to the body of literature on program and peer review design. The design elements tested in this pilot are described in Table 3.

Table 3. Peer review design elements tested in the 2014 Foundation Grant “live pilot” competition

Structured Application & Review
  • The Foundation Grant application was structured to align with the adjudication criteria.
  • Applicants addressed each of the adjudication criteria in a specific application section with a defined character limit.
  • Reviewers provided a rating and written review for each adjudication criterion to ultimately rank their assigned applications.
Multi-Stage Review
  • Stage 1 and Stage 2: reviewers conducted preliminary reviews, discussed the merit of the applications through online discussion and ranked their assigned applications.
  • The majority of Stage 1 reviewers were assigned between 13 and 18 applications.
  • The majority of Stage 2 reviewers were assigned between 10 and 15 applications.
  • Final Assessment Stage: a face-to-face multidisciplinary committee recommended a set of applications for funding.
Remote Review & Online Discussion
  • Reviewers submitted their preliminary reviews and then had access to the other reviewers’ ratings, written reviews and rankings.
  • Reviewers were able to discuss applications/reviews through asynchronous online discussion.
  • At the end of the discussion period, reviewers were able to modify their reviews.
  • The review process for each application was monitored by an assigned Virtual Chair.
Rating Scale & Ranking System
  • An Adjudication Scale with five descriptors (Outstanding, Excellent, Good, Fair, Poor) was used to rate each sub-criterion, with granularity built into the top descriptors of the scale.
  • Once reviews were finalized, reviewers were required to confirm/adjust their final rank order.
Face-to-Face Meeting
  • A multidisciplinary committee of reviewers, different from the Stage 1 and Stage 2 reviewers, was convened; each member was assigned a cross-section of applications.
  • The large majority of FAS reviewers were assigned 16 applications.
  • In advance of the meeting, reviewers were required to bin their assigned applications using yes/no funding recommendations.
  • The committee discussions were focused on the applications around the cut-off to ultimately make final recommendations of the applications to be funded.

3.2 Survey Process

The objective of the surveys was to assess participants' perceptions of and experiences with the application and peer review processes with respect to the design elements tested. Surveys were developed using the online survey software Fluid Survey. Eleven surveys were developed to coincide with the stages of the pilot process; for the purposes of this analysis, they were grouped into the stages described in Table 4. The data presented in this report include data from submitted surveys that were either fully or partially completed.

Table 4. Survey process for the 2014 Foundation Grant “live pilot” competition

Stage 1 Application Submission
  Participants surveyed: Stage 1 Applicants; Research Administrators
  Focus of survey:
    • Structured application process
    • Structured application form
    • Adjudication criteria

Stage 1 Review
  Participants surveyed: Stage 1 Reviewers
  Focus of survey:
    • Stage 1 structured review process
    • Reviewer workload
    • Structured application form
    • Adjudication criteria
    • Various elements of the Stage 1 review process (including the adjudication scale, rating and ranking process, online discussion and role of the Virtual Chair)
  Participants surveyed: Stage 1 Virtual Chairs
  Focus of survey:
    • Stage 1 structured review process
    • Role of the Virtual Chair
    • Virtual Chair workload
    • Various elements of the Stage 1 review process (including assigning applications, reading preliminary reviews and the online discussion)

Stage 1 Receipt of Competition Results
  Participants surveyed: Stage 1 Applicants
  Focus of survey:
    • Structured review process
    • Quality of the reviews received
    • Overall satisfaction with the review process

Stage 2 Application Submission
  Participants surveyed: Stage 2 Applicants
  Focus of survey:
    • Structured application process
    • Structured application form
    • Adjudication criteria

Stage 2 Review
  Participants surveyed: Stage 2 Reviewers
  Focus of survey:
    • Stage 2 structured review process
    • Reviewer workload
    • Structured application form
    • Adjudication criteria
    • Various elements of the Stage 2 review process (including the adjudication scale, rating and ranking process, online discussion and role of the Virtual Chair)
  Participants surveyed: Stage 2 Virtual Chairs
  Focus of survey:
    • Stage 2 structured review process
    • Role of the Virtual Chair
    • Virtual Chair workload
    • Various elements of the Stage 2 review process (including assigning applications, reading preliminary reviews and the online discussion)

Final Assessment Stage (FAS) Review
  Participants surveyed: FAS Reviewers
  Focus of survey:
    • FAS review process
    • Reviewer workload
    • Quality of Stage 2 reviews
    • Pre-meeting activities (including reading pre-meeting reviewer comments and the binning process)
    • Various aspects of the face-to-face meeting (including validating the binning of applications and the voting process)

Stage 2 and FAS Receipt of Competition Results
  Participants surveyed: Stage 2 applicants who advanced to FAS; Stage 2 applicants who did not advance to FAS
  Focus of survey:
    • Stage 2 structured review process
    • Quality of Stage 2 reviews received
    • Overall satisfaction with the Stage 2 review process

3.3 Survey Data Analysis

Survey results from questions with yes/no and 7-point Likert scales were analyzed quantitatively and are presented graphically in Appendix 1 as the number and proportion of total responses received for a given question. The 7-point Likert scale was used to encourage respondents to indicate a degree of agreement or disagreement. Responses falling within the agreement range of the scale were reported as “agree” and those within the disagreement range as “disagree”.
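
As a concrete illustration, collapsing a 7-point Likert response might look like the sketch below; the cut-points (1-3 as disagreement, 4 as neutral, 5-7 as agreement) are an assumption, as the report does not state them.

    # Assumed collapsing of a 7-point Likert response into the agree /
    # disagree categories reported in Appendix 1. The cut-points are not
    # stated in the report.
    def collapse_likert(response: int) -> str:
        if not 1 <= response <= 7:
            raise ValueError("Likert responses must be between 1 and 7")
        if response <= 3:
            return "disagree"
        if response == 4:
            return "neutral"
        return "agree"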

As the open-ended questions were optional, only a subset of individuals who submitted surveys provided responses. Comments received for open-ended questions were analyzed qualitatively. For any given open-ended question, a subset of responses was used to identify themes. All responses were then coded against these themes. The proportion of comments received was calculated as the number of responses coded under a given theme divided by the total number of responses received for that question. Note that only responses relevant to the question asked were counted and that one comment could be coded to multiple response themes. The proportion of survey respondents was calculated as the number of responses coded under a given theme for a question divided by the total number of survey respondents (presented in Appendix 1) and represents the proportion of survey respondents who provided a response under a given theme. Themes identified by at least 5% of respondents are included in tables presented in Appendix 1.
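
The two proportions defined above are straightforward to compute; a minimal sketch (the names are illustrative):

    # Sketch of the two proportions defined above. `coded` maps each theme
    # to the number of responses coded under it; one comment may be counted
    # under several themes, so the proportions need not sum to 1.
    def theme_proportions(coded: dict, n_responses: int,
                          n_respondents: int) -> dict:
        """For each theme, return (proportion of comments received,
        proportion of survey respondents). Themes below 5% of respondents
        are excluded from the Appendix 1 tables."""
        return {theme: (n / n_responses, n / n_respondents)
                for theme, n in coded.items()}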

For each design element, commonly expressed opinions were aggregated and reported as either design strengths or design concerns. Common and unique design considerations and suggestions were also reported for each design element. The reporting scale used to describe the percentage of respondents in agreement with a particular statement is described in Table 5.

Table 5. Reporting scale used to report on survey responses received for both closed- and open-ended questions

Small minority     At least 5% of individuals, but less than 25%
Minority           At least 25% of individuals, but less than 50%
Majority           Between 50% and 75% of individuals
Large majority     More than 75% of individuals, but not all
All                The entire sample
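
The scale can be expressed as a simple lookup. Note that the treatment of values falling exactly on 50% or 75% is an assumption; the table's wording ("between 50% and 75%") does not disambiguate the boundaries.

    # Direct encoding of the Table 5 reporting scale. Values exactly at the
    # 50% and 75% boundaries are assigned to "Majority" here; the table's
    # wording leaves those edge cases ambiguous.
    def reporting_label(pct: float) -> str:
        if pct >= 100:
            return "All"
        if pct > 75:
            return "Large majority"
        if pct >= 50:
            return "Majority"
        if pct >= 25:
            return "Minority"
        if pct >= 5:
            return "Small minority"
        return "Below reporting threshold (under 5%)"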

Limitations of the Survey and Data Analysis

Sample sizes are limited, and conclusions from a given pilot should therefore be drawn with caution. Over time and across multiple competitions, a much larger sample will be accumulated.

The Foundation Grant is one of only two funding mechanisms that make up CIHR’s new Investigator Initiated Programs. As compared to the Open Operating Grant Program, the Foundation Grant supports only longer-term programs of research. A full comparison between CIHR’s previous and new Investigator Initiated Programs will be possible only when data from both the Foundation and Project Grants are available.

4. Summary of Competition Results

The following sections summarize the overall competition results of this pilot, including results broken down by:

  • Pillar
  • Career Stage
  • Eligibility Group
  • Sex
  • Language
  • Region
  • Primary CIHR Institute
  • Institution Size

4.1 Overall Competition Results

At Stage 1, 1,366 applications were deemed eligible and were reviewed by 443 reviewers, with over 98% of applications reviewed by 5 reviewers. A total of 468 applications were invited to Stage 2 and 445 eligible applications were received. Stage 2 applications were reviewed by 217 reviewers, with over 90% of applications reviewed by 5 reviewers. A total of 150 applications were funded in this competition (Figure 1).

4.2 Applications by Competition Stage and Pillar

Pillar affiliation was self-reported at the time of application. Applicants who did not select a primary pillar were excluded from the results presented in this section. In total, 6 out of 1,366 eligible applications submitted at Stage 1 did not select a primary pillar.

Overall, 63% of funded applications were in Biomedical research; 13% each were in Clinical research and in Social, Cultural, Environmental and Population Health research; and 11% were in Health Systems/Services research. Across the stages of the competition, the proportion of applications remained relatively constant for Health Systems/Services research and Social, Cultural, Environmental and Population Health research. From Stage 1 to the Final Assessment Stage, the proportion of applications in Biomedical research increased by 9 percentage points, while the proportion in Clinical research dropped by 7 percentage points (Figure 2).

4.3 Applications by Competition Stage and Career Stage

Career stage was calculated from the date of the applicant’s first academic appointment, as assessed from their CV. Applications were divided into three groups: applications submitted by new/early career investigators, applications submitted by mid-career investigators, and applications submitted by established investigators.

At Stage 1, 41% of applications were submitted by new/early career investigators, 32% by mid-career investigators and 27% by established investigators. The distribution of applicants across career stages shifted as the competition progressed. The proportion of applications from new/early career investigators dropped to 19% at Stage 2, and they accounted for 15% of funded applications. Applications from mid-career investigators accounted for 31% of Stage 2 applications and 25% of funded applications. The proportion of applications from established investigators increased across the stages of the competition, accounting for 50% of applications advancing to Stage 2 and 59% of funded applications (Figure 3).

4.4 Applications by Competition Stage and Eligibility Group

As discussed in the Competition Overview section, eligibility constraints were placed on the 2014 Foundation Grant “live pilot” competition. Applications were divided into three groups: New/Early Career Investigator; Never Held CIHR Open Funds; and, Currently Holds CIHR Open Funds. Note that the new/early career investigators group is the same as in section 4.3 above. For the purposes of this analysis, these categories are mutually exclusive (new/early career investigators were removed from any other group and may or may not hold CIHR Open funds).

The results for the New/Early Career Investigator group mirror what was found regarding career stage. Among applications submitted at Stage 1, similar proportions were submitted from the New/Early Career Investigator and the Currently Holds CIHR Open Funds eligibility groups with 41% and 39% of the total respectively, while 20% were submitted by the Never Held CIHR Open Funds eligibility group. The majority of applications that advanced to Stage 2 (67%) and the large majority of applications funded (79%) were from the Currently Holds CIHR Open Funds eligibility group, with 15% of funded applications from the New/Early Career Investigator and 5% from the Never Held CIHR Open Funds eligibility groups (Figure 4).

4.5 Applications by Competition Stage and Sex

Across all stages of the competition, the majority of applications came from males. At Stage 1, 63% of applications were submitted by males and 37% by females. Applications submitted by females dropped to 29% at Stage 2. Of the applications that were funded, 27% were submitted by females and 73% by males (Figure 5).

4.6 Applications by Competition Stage and Language

Overall, the large majority of applications that were submitted at Stage 1 (95%), that advanced to Stage 2 (97%) and that were funded (97%) were submitted in English (Figure 6).

4.7 Applications by Competition Stage and Region

The greatest proportion of applications submitted at Stage 1 and funded were received from Ontario (47% submitted at Stage 1; 43% funded), followed by Quebec (26% submitted at Stage 1; 27% funded). Approximately one quarter of Stage 1 applications were received from Western Canada (11% from British Columbia, 10% from Alberta, 2% from Manitoba and 1% from Saskatchewan). Of funded applications, 16% were from British Columbia, 11% from Alberta and 2% from Manitoba. Few applications were submitted at Stage 1 from Atlantic Canada (less than 5% combined from Nova Scotia, Newfoundland and Labrador, New Brunswick and Prince Edward Island); Nova Scotia was the only Atlantic province with a funded application. There were no Stage 1 applications received from the Territories (Figure 7 and Table 6).

4.8 Applications by Competition Stage and Primary CIHR Institute

Primary CIHR Institute (Footnote 4) information was self-reported by the applicant at the time of submission of the application.

The proportion of applications affiliated with each Institute remained relatively consistent from Stage 1 submission through to funding, with the proportions for a large majority of the thirteen Institutes changing by only 1-2 percentage points. Applications affiliated with the Institute of Aging experienced the largest drop: from 5% of applications submitted at Stage 1 to 2% of funded applications (Figure 8).

The proportion of applications affiliated with each primary Institute varied greatly across the thirteen Institutes for those submitted at Stage 1, advancing to Stage 2 and funded. Applications affiliated with the Institute of Neurosciences, Mental Health and Addiction had the largest proportion submitted at Stage 1 (16%) and funded (17%). Applications affiliated with the Institute of Cancer Research and the Institute of Circulatory and Respiratory Health followed with the next highest proportions funded, at 13% and 12% respectively. Applications affiliated with the Institute of Aboriginal Peoples’ Health and the Institute of Aging had the lowest proportions funded, at 1% and 2% respectively. The only Institute without any primary affiliation to a funded application was the Institute of Gender and Health (Figure 8).

4.9 Applications by Competition Stage and Institution Size

For the purpose of this analysis, institution size has been defined using Maclean’s categorization system of Canadian post-secondary institutions (Footnote 5). Maclean’s places universities in one of three categories, recognizing the differences in types of institutions, levels of research funding, the diversity of offerings, and the breadth and depth of graduate and professional programs (Table 7).

Table 7. Institution size and Maclean’s categorization system of Canadian post-secondary institutions

Large
  • Medical Doctoral – a broad range of Ph.D. programs and research, as well as medical schools.
Medium
  • Comprehensive – a significant amount of research activity and a wide range of programs at the undergraduate and graduate level, including professional degrees.
Small
  • Primarily Undergraduate – largely focused on undergraduate education, with relatively fewer graduate programs and graduate students.
Other
  • Institutions that did not fit Maclean’s categorization system.

Of the total applications submitted at Stage 1, 89% were from large institutions, 8% from medium institutions and the remaining 3% from small and other institutions. As the competition progressed, the share from large institutions grew, and they accounted for 99% of funded applications (Figure 9).

5. Summary of Survey Results

The survey results of this pilot will be summarized in the next sections, and will focus on:

  1. Effectiveness of the Structured Application Process
  2. Clarity of the Adjudication Criteria and Scale
  3. Impact on Reviewer Workload
  4. Effectiveness of the Structured Review Process

A description of the pilot participants (survey participant demographics) can be found in Appendix 2. The detailed survey results can be found in Appendix 1.

5.1 Effectiveness of the Structured Application Process

Within this section, applicant, research administrator and reviewer experiences with the new structured application process and the Foundation Grant design elements will be discussed.

Overall Impression of the Structured Application Process

Overall, at Stage 1, the large majority of research administrators and the majority of Stage 1 applicants found the structured application format easy to work with and intuitive, and indicated that they were satisfied with the format (Figure 10). Similarly, the majority of Stage 2 applicants agreed that the structured application format was easy to work with and intuitive, and were generally satisfied with the process (Figure 11).

A majority of Stage 1 applicants indicated that completing the structured application form required less time and approximately one half indicated it was easier to use compared to the last application they submitted to CIHR (Figure 12).

A large majority of Stage 1 and Stage 2 reviewers agreed that the structured application format was helpful in their review process. In addition, a majority of Stage 1 and Stage 2 reviewers agreed that applicants were able to convey the information required to conduct a complete review using the structured application format (Figures 13 and 14).

A small minority of Stage 1 reviewers commented that applicants misunderstood or misinterpreted the adjudication criteria, that character limits were too restrictive and that the information provided in different sections of the application was redundant (Table 8).

A small minority of Stage 2 reviewers commented that the structured application format was useful and reduced reviewer burden, while another small minority commented that they did not find the format appropriate, indicating that applicants did not provide sufficient detail and were not successful at describing a program of research (Table 9).

Character Limits in the Structured Application Form

Stage 1

The majority of Stage 1 applicants indicated that character limits were appropriate for the sections on Leadership, Productivity, and Vision and Program Direction. However, approximately half of Stage 1 applicants did not find the character limits imposed on the Significance of Contributions section to be appropriate. In contrast, the majority of Stage 1 reviewers and research administrators agreed that character limits for all adjudication criteria were appropriate (Figure 15), with the majority of Stage 1 reviewers indicating that applicants made good use of the character limits (Figure 16).

Stage 2

The majority of Stage 2 applicants and reviewers found the character limits for each adjudication criterion to be appropriate except for the Research Approach section (Figure 17). When asked what the ideal character limit should be, Stage 2 applicants and reviewers indicated a preference for increasing the Research Approach section from 7,000 characters (2 pages) to 10,500‑12,250 characters (3‑3.5 pages) (Figure 18).

Value of Foundation Scheme CV

Program Leader applicants were required to complete a new CV template designed specifically for the Foundation Grant competition. Throughout the survey process, feedback regarding the usefulness and helpfulness of this new Foundation Scheme CV was collected.

The majority of both Stage 1 applicants and reviewers indicated that the CV was helpful in determining the caliber of an applicant (Figure 19). Similarly, the majority of Stage 2 applicants and reviewers agreed that the Foundation Scheme CV was helpful in determining the quality of the expertise, experience and resources that Stage 2 applicants possess (Figure 20).

Stage 1 and Stage 2 applicants and reviewers indicated that all sections of the CV were relevant. The limits applicants and reviewers found least appropriate were those for the Publications section, followed by the Presentations, Recognitions and Supervisory Activities sections of the Foundation Scheme CV (Figures 21‑24). Among those applicants and reviewers who did not find the limits appropriate, a majority proposed the following ideal limits: increasing the Publications limit above 25; increasing the Presentations limit from 10 to 20 or more; increasing the Recognitions limit from 5 to 10; and imposing no limit on Supervisory Activities (Tables 10 and 11).

When asked for feedback on the value of the Career Contributions table, the majority of applicants and reviewers agreed that the table provided useful information for Stage 1 and 2 reviewers (Figures 25 and 26).

When asked what other CV information applicants would have liked to convey to reviewers, a small minority of Stage 1 applicants indicated wanting to include Assessment and Review Activities (Table 12). A small minority of reviewers indicated that additional publication metrics would have been useful at Stage 1 review (Table 13).

Value of Applicant Support Materials

CIHR developed a number of supporting documents to assist applicants in completing required tasks across Stage 1 and 2 of the application process.

Throughout the survey process, feedback was collected regarding the extent to which the supporting documents were used and their level of helpfulness. While there was variation in the use of each document, the majority of Stage 1 and Stage 2 applicants found the documents they used helpful (Figures 27 and 28).

Similarly, feedback was collected throughout the survey process regarding the extent to which webinars and interactive learning sessions were used and their level of helpfulness. At Stage 1 and Stage 2, applicants participated in webinars more than they accessed the interactive lessons, with the number of webinar participants increasing at Stage 2. Although at Stage 1 the majority of applicants did not find the interactive lessons or the webinars helpful, at Stage 2 the majority of applicants indicated that the interactive lessons were helpful and the large majority indicated that the webinars were helpful (Figure 29).

5.2 Clarity of the Adjudication Criteria and Adjudication Scale

Within this section, applicant, research administrator and reviewer experiences with respect to the adjudication criteria, and reviewer experiences with respect to the adjudication scale, will be discussed.

Adjudication Criteria at Stage 1

The large majority of Stage 1 applicants and research administrators found the adjudication criteria to be clear (Figure 30). A small minority of Stage 1 applicants reported significant overlap between the Productivity and Significance of Contributions sub-criteria; a lack of clarity around what should be included in the Leadership and Vision and Program Direction sections; that adjudication criteria were not easily applicable across career stages; and that character limits for adjudication criteria were insufficient (Table 14).

Overall, it was clear to the majority of Stage 1 reviewers what they should be assessing for each criterion (Figure 31A). Moreover, the majority of Stage 1 reviewers felt that applicants understood what information should be included in each section of their application and indicated that they were able to assess each adjudication criterion using the information provided by the applicant (Figure 31B and 32).

The majority of Stage 1 reviewers agreed that the adjudication criteria were appropriate, in that they were able to assess the caliber of the applicant based on the criteria (Figure 33A). Similarly, reviewers indicated that the adjudication criteria allowed them to meaningfully distinguish differences in the caliber of applicants (Figure 33B). However, a minority commented that there was a lack of clarity on how to evaluate the adjudication criteria across career stages, with a small minority noting a particular lack of clarity on how to assess the Leadership criterion for new/early career investigators (Table 15). In addition, a majority indicated that additional guidance is required regarding the information that should be included under each adjudication criterion (Figure 34).

The majority of Stage 1 applicants and reviewers did not feel that additional adjudication criteria should be considered or that any adjudication criteria should be removed (Figure 35). Currently, each Stage 1 adjudication criterion is weighted equally at 25% per criterion. The majority of Stage 1 applicants and reviewers found the weighting of the adjudication criteria to be appropriate (Figure 36).

Adjudication Criteria at Stage 2

It was clear to the large majority of Stage 2 applicants what information should be included in each section of their application; for the Research Approach section, this dropped to a majority (Figure 37A). Similarly, the majority of Stage 2 reviewers agreed that applicants understood what information should be included in each section of their application (Figure 37B), and it was clear to the majority of Stage 2 reviewers what they should be assessing for each adjudication criterion (Figure 37C). Accordingly, the majority of reviewers indicated that they were able to assess each adjudication criterion using the information provided by the applicant (Figure 38).

A minority of Stage 2 reviewers commented that the information provided by applicants for the Mentorship and Training sub-criterion was inconsistent, and that it was unclear how they should assess this section. In addition, a small minority of reviewers reported that key information was missing from the Research Concept and Research Approach sections, and that there was a lack of clarity on where applicants should include information on innovation and how reviewers should assess it (Table 16). Consistent with this, the majority of Stage 2 reviewers felt that additional guidance is required regarding the information that should be included under each adjudication criterion (Figure 39). The majority of Stage 2 applicants and reviewers did not feel that any adjudication criteria should be added, while approximately half of Stage 2 reviewers indicated that adjudication criteria should be removed (Figure 40).

Currently, each Stage 2 sub-criterion is weighted equally at 20%. In the survey, both applicants and reviewers were asked to indicate whether the weighting of the adjudication criteria was appropriate. The majority of Stage 2 applicants and reviewers indicated that only the weighting of the Expertise criterion was appropriate (Figure 41). The majority of applicants and reviewers indicated that the ideal weighting for Research Concept and Research Approach was 21‑30%, and for Mentorship and Training and Quality of Support Environment was 0‑10% (Figure 42).

Adjudication Scale

Rating

The majority of Stage 1 and Stage 2 reviewers indicated that the descriptors for the adjudication scale were clear and useful (Figure 43). Additionally, the majority of Stage 1 and 2 reviewers indicated that the adjudication scale range was sufficient to describe meaningful differences between applications (Figure 44). The majority of Stage 1 and 2 reviewers indicated that they used the full range of the adjudication scale and the large majority of Stage 1 reviewers also indicated that they found rating applications useful for ranking them (Figure 44).

The large majority of Stage 1 and 2 reviewers indicated that the ratings (O++, O+, O, E++, E+, E, G, F, P) selected for each adjudication criterion aligned with their comments provided on each section (Figure 45). Furthermore, the large majority of Stage 1 and Stage 2 reviewers found it helpful to have added granularity at the top of the rating scale (e.g. O++, O+, O) in order to indicate differences between highly competitive applicants (Figure 46).

When asked how they felt about the rating process, the large majority of Stage 1 reviewers and the majority of Stage 2 reviewers agreed that rating each adjudication criterion was a useful tool for reviewers to help with ranking the applications (Figure 47).

Ranking

According to the majority of Stage 1 reviewers and the large majority of Stage 2 reviewers, the ranking process was intuitive, and a large majority of Stage 1 reviewers also found the ranking process easy to use (Figure 48). Rating each adjudication criterion is meant to be a tool to inform overall ranking decisions, and the large majority of Stage 1 and 2 reviewers agreed that it is appropriate to adjust the ranking of applications before submitting their decisions to CIHR (Figure 49).

The large majority of Stage 1 reviewers and the majority of Stage 2 reviewers were required to break ties between applications as part of their review process. Stage 1 reviewers, on average, were required to break between 2 and 3 ties, while for Stage 2 reviewers the average was between 1 and 3 ties (Figure 50). For the majority of Stage 1 and 2 reviewers, the purpose and the process of breaking ties were clear (Figures 51 and 52). The large majority of Stage 1 and 2 reviewers agreed that the ratings (O++, O+, O, E++, E+, E, G, F, P) assigned to each adjudication criterion produced a rank list of applications that was more or less in order from best to worst application (Figure 53).

When asked to comment on the ranking process at Stage 1 and how it can be improved for future competitions, a minority of Stage 1 reviewers suggested that new/early career investigator applications be ranked separately (Table 17).

Assessing Applications Across Career Stages

The large majority of Stage 1 reviewers disagreed that the adjudication criteria could easily be applied across career stages (Figure 54A). Similarly, the majority of Stage 2 reviewers disagreed that the adjudication criteria for Stage 2, which focused on assessing the Quality of the Proposed Program of Research, and Quality of the Expertise, Experience and Resources, were applicable across career stages (Figure 54B).

The majority of Stage 1 reviewers indicated that they were unable to effectively rate applications across career stages (Figure 55). A small minority of Stage 1 reviewers reported that new/early career investigators did not provide enough evidence under Caliber of Applicant, which created challenges in rating/ranking applications across career stages (Table 18). A small minority of Stage 1 reviewers also commented that the Leadership and/or Significance of Contributions criteria affected the ranking of new/early career investigators, and mid-career investigators (Table 19).

Approximately equal proportions of Stage 2 reviewers agreed and disagreed with being able to effectively rate and rank applications across career stages at Stage 2 (Figure 56). A small minority of Stage 2 reviewers commented that the adjudication criteria favoured established investigators, and that they would prefer to review applications by career stage to ensure a fair review (Table 20).

5.3 Impact on Reviewer Workload

Within this section, reviewer experiences with respect to workload of Stage 1, Stage 2 and Final Assessment Stage reviews will be discussed.

Stage 1 Reviewer Workload

Responses from Stage 1 reviewers regarding their perceived workload were mixed. Approximately one third of reviewers indicated their perceived workload was just right, a little over a third indicated it was manageable to challenging, while approximately one quarter indicated that their workload was challenging (Figure 57A).

The majority of Stage 1 reviewers were assigned between 13 and 18 applications (Figure 57B). Compared to the last time they reviewed for a CIHR competition, the majority of Stage 1 reviewers indicated that the workload was less (Figure 57C).

Stage 1 reviewers were asked to compare the workload involved in each of the following activities to the last time they reviewed for a CIHR competition: reading one application; looking up additional information related to one application online; and writing the reviews of one application. The large majority of reviewers indicated that less work was involved in reading one application and writing the reviews of one application (Figure 58). Responses were mixed among Stage 1 reviewers regarding the workload associated with looking up additional information related to one application online, with approximately one third indicating less work was involved, one third indicating more work was involved and the remaining third were neutral (Figure 58).

Reviewers were asked to approximate the amount of time spent on various review activities. A large majority of reviewers indicated that it took 2 hours or less for each activity, including reading a single application, looking up additional information regarding an application, writing the review of a single application, reading other reviewers’ preliminary reviews, participating in online discussions and completing the ranking of assigned applications (Figure 59).

Stage 2 Reviewer Workload

Similar to Stage 1 reviewers, Stage 2 reviewers provided mixed feedback regarding their perceived workload. Approximately one quarter of reviewers indicated their perceived workload to be just right, a little less than half indicated it to be manageable to challenging and approximately one quarter indicated it to be challenging to excessive (Figure 60A).

The majority of Stage 2 reviewers were assigned between 8 and 13 applications (Figure 60B). In comparing the current workload assigned to Stage 2 reviewers to the last time they reviewed for an Open Operating Grant Program (OOGP) competition, the feedback varied. Approximately 40% of reviewers indicated that it was less work, while roughly an equal proportion indicated that it was more work, with the remaining proportion indicating a neutral response (Figure 60C).

Stage 2 reviewers were asked to compare the workload involved in each of the following activities to the last time they reviewed for an OOGP competition: reading one application; looking up additional information related to one application online; and writing the reviews of one application. A majority of reviewers indicated less work was involved in reading one application and writing the reviews of one application (Figure 61). Similar to Stage 1 reviewers, responses were mixed among Stage 2 reviewers regarding the workload associated with looking up additional information related to one application online with approximately one third indicating less work was involved, one third indicating more work was involved and the remaining third being neutral (Figure 61).

Stage 2 reviewers were asked to approximate the amount of time spent on various review activities. Similar to Stage 1 reviewers, a large majority of Stage 2 reviewers indicated that it took 2 hours or less for each activity, including looking up additional information regarding an application, writing the review of a single application, reading other reviewers’ preliminary reviews, participating in online discussions and completing the ranking of assigned applications. A majority of Stage 2 reviewers indicated it took less than 2 hours to read a single application (Figure 62).

FAS Reviewer Workload

Approximately one half of FAS reviewers indicated that the perceived reviewer workload was manageable to challenging, while approximately one quarter indicated it was just right and a remaining quarter indicated it was challenging (Figure 63A).

The large majority of FAS reviewers were assigned 16 applications (Figure 63B). Compared to the last time FAS reviewers reviewed for an OOGP competition, just under half of reviewers indicated that the workload assigned to them in this competition was less, approximately a third indicated that it required more work and the remaining proportion indicated a neutral response (Figure 63C).

Value of Reviewer Support Materials

CIHR developed a number of supporting materials to assist reviewers in completing their required tasks across Stage 1, Stage 2 and the FAS reviews.

Throughout the survey process, feedback was collected regarding the use of supporting documents, webinars and interactive learning sessions, and their level of helpfulness. While the documents, lessons and webinars used varied, the large majority of reviewers who used the support materials found them helpful (Figures 64‑69).

5.4 Effectiveness of the Structured Review Process

Within this section, the effectiveness of the new structured review process, and reviewers’ (Stage 1, Stage 2 and FAS) overall satisfaction with it, will be discussed.

Stage 1 and Stage 2 Review Process

Overall Satisfaction with the Stage 1 Review Process

Approximately half of Stage 1 reviewers indicated some level of satisfaction with the structured review process (Figure 70A). The majority of Stage 1 reviewers agreed that the structured review process was appropriate, was a useful way to provide feedback to applicants and was intuitive and easy to use (Figure 70B).

After receiving their notice of decision, Stage 1 applicants were asked to identify an overall level of satisfaction with the adjudication process. A majority of those who were successful at Stage 1 were satisfied with the adjudication process, while a majority of those who were unsuccessful were dissatisfied with the adjudication process. However, both a large majority of applicants successful at Stage 1 and a majority of applicants unsuccessful at Stage 1 agreed that there was value in the structured review process (Figure 71).

When asked to comment on the consistency of the reviews they received, a small minority of Stage 1 applicants noted that: there was variability between reviewer ratings and rankings; there was variability between reviewer comments, and/or their ratings and comments were discordant; and it was unclear how reviewers took career stage into consideration (Table 21).

When asked how CIHR could improve the review experience, a small minority of Stage 1 applicants suggested that CIHR: modify the review process to better take into account different career stages; encourage discussions between reviewers; include additional information with the reviews (e.g., rankings from each reviewer; description of the letter ratings); and monitor reviewer comments (Table 22).

Overall Satisfaction with the Stage 2 Review Process

A large majority of Stage 2 reviewers indicated that the structured review process was helpful in conducting their reviews (Figure 72A). However, only a minority of Stage 2 reviewers indicated that they were satisfied overall with the structured review process (Figure 72B). Compared to the last time they reviewed for a non-pilot CIHR competition, the majority of Stage 2 reviewers agreed that the structured review process made it easier to review; however, less than half of all respondents indicated that it was a better way to provide feedback to applicants (Figure 72C).

After receiving their notice of decision, either at Stage 2 or at the FAS, applicants were asked to identify an overall level of satisfaction with the adjudication process. A large majority of applicants who did not advance to the FAS were dissatisfied with the Stage 2 adjudication process, while a large majority of funded applicants were satisfied. Of those who advanced to the FAS but were not funded, approximately equal proportions were satisfied and dissatisfied with the Stage 2 adjudication process (Figure 73A). A majority of applicants who were funded, and of applicants who advanced to the FAS but were not funded, agreed that the structured review process had value, while less than half of applicants who were not funded agreed (Figure 73B). Over half of funded applicants agreed that the Stage 2 review process was fair and transparent, while less than half of applicants who advanced to the FAS but were not funded agreed, and only a small proportion (approximately 10%) of applicants who did not advance to the FAS agreed (Figure 73C).

When asked about the quality of Stage 2 peer review judgements, a majority of applicants who were funded and applicants who advanced to the FAS but were not funded were satisfied with the quality, while a large majority of applicants who were not funded were dissatisfied (Figure 74A). A large majority of funded applicants and a majority of applicants who advanced to the FAS, but were not funded, agreed that the Stage 2 reviews received were consistent in that the written justifications aligned with the respective ratings, while a large majority of applicants who were not funded disagreed (Figure 74B).

Applicants were asked to comment after Stage 2 decisions on the consistency of the Stage 2 reviews they received. A minority of applicants provided feedback that ratings and comments were highly discordant across reviewers, and a minority stated that assigned ratings were not consistent with the comments provided. A small minority of applicants noted that comments were brief, uninformative, irrelevant and/or misguided (Table 23).

When asked how CIHR could improve the review experience, a small minority of applicants suggested: ensuring better alignment between application and reviewer expertise; reinstating face-to-face committee meetings; clarifying the level of detail expected in the application; and making the adjudication process career-stage specific. Further, a small minority of applicants suggested promoting better alignment between reviewers and removing outlying or non-compliant reviews (Table 24).

Reading Preliminary Reviews

A large majority of Stage 1 and Stage 2 reviewers indicated that they read the preliminary reviews of other reviewers (Figure 75). At Stage 1 and Stage 2, a large majority of reviewers found the ability to read others’ reviews helpful (Figure 76). A large majority of Stage 1 reviewers indicated that this component of the Stage 1 review process was important, and that it influenced their assessments of at least one application (Figure 76A). Similarly, a majority of Stage 2 reviewers indicated that the ability to read other reviewers’ reviews influenced their assessment of at least one application (Figure 76B).

When asked why they read preliminary reviews, a minority of Stage 1 and Stage 2 reviewers responded that it was to calibrate their reviews, while a small minority read them to address discrepancies and to participate in online discussions (Tables 25 and 26).

Online Discussions

At Stage 1, 66% of applications (899 out of 1,366) were discussed online by 42% of Stage 1 reviewers (188 out of 443). At Stage 2, 83% of applications (370 out of 445) were discussed online by 95% of Stage 2 reviewers (207 out of 217). A large majority of Stage 1 and Stage 2 reviewers indicated they both read and participated in online discussions (Figure 77) and found the online discussion tool easy to use (Figure 78). The majority of Stage 1 and Stage 2 reviewers further indicated that participation in an online discussion was helpful in their review process, influenced their assessment of an application, and influenced other reviewers’ assessments of applications (Figure 79).
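
As a quick arithmetic check, the participation rates above follow directly from the reported counts. The short sketch below (Python, used here purely for illustration; the labels and structure are ours, the counts are those stated in this section) reproduces the percentages:

```python
# Reproduces the online-discussion participation rates reported above.
# Only the counts come from the report; the labels are illustrative.

counts = {
    "Stage 1 applications discussed": (899, 1366),
    "Stage 1 reviewers participating": (188, 443),
    "Stage 2 applications discussed": (370, 445),
    "Stage 2 reviewers participating": (207, 217),
}

for label, (part, total) in counts.items():
    print(f"{label}: {part}/{total} = {part / total:.0%}")

# Output:
# Stage 1 applications discussed: 899/1366 = 66%
# Stage 1 reviewers participating: 188/443 = 42%
# Stage 2 applications discussed: 370/445 = 83%
# Stage 2 reviewers participating: 207/217 = 95%
```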

A large majority of Stage 1 reviewers agreed that the online discussion is an important part of the Stage 1 review process and should be mandatory for reviewers who have divergent views of the same application (Figure 80A and 80B). A large majority of Stage 2 reviewers agreed that online discussion should be mandatory for reviewers who have divergent views of the same application (Figure 80C). When asked how the online discussion tool could be improved for future competitions, a small minority of Stage 1 and Stage 2 reviewers suggested: making the online discussion synchronous/real-time, or reverting to face-to-face meetings; and making the online discussion a mandatory component of the Stage 1 review, particularly in situations where there are scoring discrepancies between reviewers (Table 27).

Role of the Virtual Chair

Within this section, Stage 1 and Stage 2 reviewers’ feedback on the role of the Virtual Chair will be discussed.

According to a majority of Stage 1 and Stage 2 reviewers, the role of the Virtual Chair was appropriate and helpful, and helped to ensure that necessary online discussions took place (Figure 81). Moreover, a majority of Stage 1 reviewers agreed that it was helpful to have the Virtual Chair prompt discussions between reviewers when necessary (e.g., for scoring discrepancies) (Figure 82). When asked who should initiate online discussions, the majority of Stage 1 and Stage 2 reviewers indicated the Virtual Chair (Figure 83). Reviewers were also asked what criteria should be used to determine whether an online discussion should take place; the majority of both Stage 1 and Stage 2 reviewers indicated that discussions should be initiated to address scoring discrepancies (Figure 84).

Budget Assessment

Within this section, Stage 2 reviewers’ feedback on their experience with the budget assessment process will be discussed.

Less than half of reviewers agreed that the budget assessment process was clear (Figure 85A). Accordingly, less than half of reviewers indicated that they were able to effectively assess whether budget requests should be accepted as described or adjusted, or to effectively assess budget requests across career stages (Figure 85B and 85C).

Less than half of Stage 2 reviewers indicated that the budget categories were appropriate for assessing the breakdown of budget requests (Figure 86A). While the majority of Stage 2 reviewers agreed that applicants clearly presented their past funding history, equal proportions of reviewers agreed and disagreed that applicants provided the information in each budget category needed to assess the appropriateness of their budget requests (Figure 86B and 86C).

Just over half of Stage 2 reviewers agreed that the character limits were sufficient for applicants to justify the amount requested within each budget category and the appropriateness of the funds requested to support the proposed program of research (Figure 87). However, only approximately a third of reviewers indicated that applicants provided clear justifications for the appropriateness of the funds requested to support the proposed program of research (Figure 88A). Similarly, only a minority of reviewers indicated that applicants provided acceptable justifications when asking for more than their baseline amount (Figure 88B).

Of those Stage 2 reviewers who provided comments on the budget assessment, a small minority of reviewers indicated that: the justifications provided for the budget requests were limited, making them difficult to evaluate; budget requests significantly exceeded historical baseline amounts; and past funding history was challenging to interpret (Table 28).

FAS Review Process

Within this section, the effectiveness of the new structured review process, and FAS reviewers’ overall satisfaction with it, will be discussed.

Overall Satisfaction with the FAS Review Process

Overall, a majority of FAS reviewers were dissatisfied with the structured review process (Figure 89A). As noted in Table 29, a minority of FAS reviewers commented that the role of the FAS reviewer was not clear and expressed concerns with respect to differences in assessment approaches, namely that a small minority of FAS reviewers only reviewed the Stage 2 reviews while others also consulted the grant application (see section on Reviewing Stage 2 Reviews below). A small minority of reviewers commented that FAS reviewers were not given sufficient time to review their assigned applications and that they were not necessarily assigned to applications within their area of expertise (Table 29). However, all FAS reviewers agreed that having three reviewers assigned to each application was appropriate for the FAS review process (Figure 89B).

Reviewing Stage 2 Reviews

In assessing applications in the “grey zone”, a large majority of FAS reviewers disagreed that Stage 2 reviewers used the full range of the adjudication scale and a majority of FAS reviewers indicated that Stage 2 reviewers did not provide clear or sufficient feedback to support their ratings (Figure 90).

A large majority of FAS reviewers consulted the grant applications in addition to the Stage 2 reviews and a large majority of FAS reviewers agreed that reading both the application and the Stage 2 reviews was useful and necessary for the FAS (Figure 91).

A majority of FAS reviewers commented that the Stage 2 reviews provided were variable in quality and suggested that Stage 2 reviewers did not provide sufficient rationale for why ratings were chosen. A minority of reviewers indicated that they did not trust the quality of Stage 2 reviews and needed to consult the grant applications in addition to the Stage 2 reviews. A small minority of FAS reviewers commented that they did not confirm or challenge the Stage 2 reviews, stating that their instructions indicated not to consult the application. A small minority of FAS reviewers did indicate that the Stage 2 reviews were helpful when the area of research was outside the scope of the FAS reviewer’s expertise (Table 30).

Binning Process

A large majority of FAS reviewers read the comments of the other FAS reviewers (Figure 92A). When asked the total time spent reading other reviewers’ comments, approximately half of FAS reviewers indicated that it took under an hour and a large majority indicated that it took under two hours (Figure 92B). Approximately one quarter of FAS reviewers indicated that reading other reviewer comments/binning decisions influenced their assessment (Figure 92C).

According to a majority of FAS reviewers, the number of YES and NO allocations for the binning process was not appropriate (Figure 93). When FAS reviewers were asked to provide feedback regarding the ideal YES and NO allocations for the binning process, a small minority indicated that they would prefer not to have set allocations, while another small minority indicated that they would like a maybe/neutral bin (Table 31).
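
The report does not specify the actual allocation numbers or how they were enforced. Purely as an illustration of what “set allocations” mean in practice, the minimal sketch below (all numbers, names and the validation helper are hypothetical) shows how fixed YES/NO allocations constrain a reviewer’s binning decisions:

```python
# Hypothetical illustration only: the actual YES/NO allocation numbers and
# enforcement mechanism are not described in the report.

YES_ALLOCATION = 8  # hypothetical number of YES bins per reviewer
NO_ALLOCATION = 8   # hypothetical number of NO bins per reviewer

def validate_binning(decisions: dict[str, str]) -> bool:
    """Check that a reviewer's YES/NO decisions match the set allocations."""
    yes = sum(1 for b in decisions.values() if b == "YES")
    no = sum(1 for b in decisions.values() if b == "NO")
    return yes == YES_ALLOCATION and no == NO_ALLOCATION

# A reviewer assigned 16 applications (as reported above) must fit them all
# into the fixed allocations, even if more (or fewer) merit a YES.
decisions = {f"app-{i:02d}": ("YES" if i <= 8 else "NO") for i in range(1, 17)}
print(validate_binning(decisions))  # True: exactly 8 YES and 8 NO
```

The binary constraint shown here is precisely what the maybe/neutral bin suggested in Table 31 would relax.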

Pre-Meeting Reviewer Comments

Approximately half of the FAS reviewers found the comments provided by other FAS reviewers to be helpful in their preparation for the face-to-face meeting (Figure 94).

When asked to comment on the FAS reviewer comments, a small minority of FAS reviewers indicated that other FAS reviewers did not provide comments or that they did not know what to include in the comments. Another small minority noted that overall comments were not helpful (Table 32).

Face-to-Face Meeting

All FAS reviewers agreed that a face-to-face meeting is required to determine which “grey zone” applications should be funded (Figure 95). The majority of reviewers agreed that focusing the discussion on applications in Group B is appropriate, that the process of moving applications from Groups A/C to B was clear and easy, and that the process of moving applications between groups was efficient (Figure 96).

Reviewers used a voting tool to indicate whether they thought the applications should be considered for funding. All FAS reviewers agreed that the voting tool was easy to use (Figure 97). A large majority of reviewers agreed that the instructions for the voting process were easy to follow, and a majority agreed that the voting process was effective (Figure 97).

A funding cut-off line was displayed for reviewers on the list of applications to help reviewers focus their discussions. Responses were mixed when FAS reviewers were asked if the funding cut-off line helped to inform the discussion at the meeting, with approximately equal numbers of reviewers agreeing and disagreeing that the funding cut-off line was helpful (Figure 98).
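
The report does not describe how the cut-off line was computed. Purely to illustrate the concept, the minimal sketch below derives a cut-off position from a ranked list of applications and an available budget envelope; the application identifiers, budget figures and the greedy top-down approach are all assumptions of ours, not CIHR’s method:

```python
# Illustrative sketch only: all data and the cut-off rule are hypothetical.
# Assumes applications are ranked best-first and funded top-down until the
# budget envelope is exhausted.

ranked_requests = [  # (application id, requested budget), best-ranked first
    ("app-01", 450_000),
    ("app-02", 300_000),
    ("app-03", 500_000),
    ("app-04", 250_000),
]
available_budget = 1_000_000

committed = 0
cutoff_index = 0  # number of applications above the cut-off line
for i, (_, request) in enumerate(ranked_requests):
    if committed + request > available_budget:
        break
    committed += request
    cutoff_index = i + 1

above_the_line = [app for app, _ in ranked_requests[:cutoff_index]]
print(f"Cut-off after position {cutoff_index}; above the line: {above_the_line}")
# Cut-off after position 2; above the line: ['app-01', 'app-02']
```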

When asked to provide comments on the face-to-face meeting, a small minority of reviewers indicated that they would have liked to be informed before the meeting of the YES/NO votes that would be allowed, and a small minority felt the instructions were not clear. A minority of reviewers did state that the face-to-face meeting is invaluable for discussing applications (Table 33).

6. Conclusions and Considerations for Future Directions

The 2014 Foundation Grant “live pilot” competition provided CIHR with an opportunity to assess new processes and systems, and solicit feedback from applicants, research administrators, reviewers and Virtual Chairs.

Overall, applicants and reviewers found a number of the new design elements helpful in their application and review. There are also a number of areas where improvements and continued monitoring are warranted to ensure that the Foundation Grant program meets its objectives. Based on the feedback received, CIHR has implemented a number of enhancements to the 2015 Foundation Grant “live pilot” competition that are summarized in Table 1.

Elements Being Further Evaluated in the 2015 Foundation Grant “Live Pilot” Competition

Based on the results of this pilot, feedback on specific elements will be further evaluated over the course of the 2015 Foundation Grant “live pilot” competition to determine if any additional changes are required. Specifically, CIHR will evaluate the need to increase clarity, adjust character limits and/or merge sections of the application where there is perceived overlap. In addition, whether Stage 1 results should be considered at Stage 2 or at the FAS, and if so, to what extent, will be further evaluated. For the 2015 competition, it is anticipated that there will be increased continuity between Stage 1 and Stage 2 by having the same Virtual Chairs monitoring the same set of applications at both stages, where possible.

CIHR will further evaluate specific design elements over the course of multiple iterations of both Project and Foundation Grant competitions. There may be an adjustment period for both applicants and reviewers to adapt to new design elements, and increased experience within competitions may address some of the challenges reported in the first “live pilot”.

Further Analyses

CIHR plans to conduct additional analyses in order to ensure the reliability, consistency, and fairness of the new Investigator Initiated Programs and peer review processes. The results of these additional analyses will be disseminated as they are completed.

In particular, CIHR has applied learnings from the 2014 Foundation Grant “live pilot” competition to inform reviewer education and future performance management strategies and is working to define review quality and establish a set of measurable indicators that will be monitored in subsequent pilots. As well, CIHR will assess and validate the distribution of Foundation Grant funding in particular priority areas, such as Aboriginal Health, Ethics, Global Health, Aging, and Sex. Competition results will also be analyzed to investigate whether there are any potential biases in the process that disadvantage particular groups, such as new/early career researchers, mid-career researchers and clinical researchers. An overall assessment of the breakdown of applicant and grantee demographics will be compared to prior Open Operating Grant Program competition data once multiple cycles of the Foundation and Project Grant competitions have taken place.
