Stakeholder assessment of the 2015 Foundation Grant funding competition
Final Report

Prepared by:

  • Dr. Jamie Park
  • Mahrukh Zahid
  • Julie Bain
  • Jennifer Rup
  • Dr. Jemila Hamid
  • Caitlin Daly
  • Dr. Julia E. Moore
  • Dr. Sharon Straus

For questions about this report, please contact:

Jamie Park, Ph.D.
Research Coordinator
Knowledge Translation Program
Li Ka Shing Knowledge Institute
St. Michael's Hospital
Toronto, Canada
Email: ParkJam@smh.ca
Phone: 416-864-6060 ext. 76219

Abbreviations

CCV: Canadian Common CV
CIHR: Canadian Institutes of Health Research
FAS: Final Assessment Stage
IQR: Interquartile range
NOD: Notice of decision
OOGP: Open Operating Grant Program
VC: Virtual chair

Key Definitions

Stage 1 applicants: Applicants who submitted a research proposal to the first stage of the 2015 Foundation Grant competition
Stage 1 reviewers: Reviewers who assessed applications in the first stage of the 2015 Foundation Grant competition using an internet-based platform
Stage 1 virtual chairs: Chairs responsible for overseeing and supporting the Stage 1 remote review process
Stage 1 applicants after decision: Applicants who, at the end of the Stage 1 review process, were notified whether they were successful in continuing on to Stage 2
Stage 2 applicants: Applicants who were successful in Stage 1 and submitted an application to the second stage of the 2015 Foundation Grant competition
Stage 2 reviewers: Reviewers who assessed applications in the second stage of the 2015 Foundation Grant competition using an internet-based platform
Stage 2 virtual chairs: Chairs responsible for overseeing and supporting the Stage 2 remote review process
FAS reviewers: Stage 2 virtual chairs who became reviewers and participated in a face-to-face discussion for the Final Assessment Stage
Stage 2 applicants after decision: Applicants who, at the end of the Final Assessment Stage, were notified whether they were successful in the 2015 Foundation Grant competition

Executive Summary

Purpose

The Canadian Institutes of Health Research (CIHR) has been working with the research community to reform and modernize its Investigator Initiated Programs and review processes. As part of this reform, CIHR introduced the first Foundation Grant competition in 2014, designed to provide long-term support for innovative, high-impact programs of research. A second Foundation Grant competition was launched in fall 2015, and feedback was collected on the application and peer review processes. This report summarizes the feedback received from applicants, research administrators, peer reviewers, and virtual chairs (VCs) across the three stages of the 2015 Foundation Grant competition.

A total of 10 surveys were disseminated and analyzed to evaluate each stage (i.e., Stage 1, Stage 2, and the Final Assessment Stage (FAS)) of the 2015 Foundation Grant competition and each group of participants (i.e., applicants, research administrators, reviewers, VCs) in the process; response rates and demographics are reported in Sections 1 and 2. The overall response rate was 51.0% (n=1604), and the findings presented in the following sections are representative of the final dataset of survey responses. Sections 3 and 4 provide an overview of the respondents' perceptions of the adjudication criteria, scale, and weighting. Overall, respondents indicated that the adjudication criteria were clear; however, there was an opportunity to further clarify the "Productivity" and "Significance of Contributions" criteria in Stage 1 and the "Research Approach" and "Expertise" criteria in Stage 2. VCs suggested that reviewers did not use the full range of the adjudication scale, perhaps due to the ambiguity of an alphabetical scale. Respondents also indicated challenges with applying the "Leadership" and "Mentorship" criteria to early career investigators. Applicants and reviewers suggested decreasing the "Leadership" weighting in Stage 1 and increasing the "Productivity" and "Vision and Program Direction" weightings. For Stage 2, they suggested increasing the weight of "Research Approach" and decreasing that of "Quality of the Support Environment". Sections 5 and 6 include an overview of the respondents' satisfaction with the application process and format. Generally, Stage 1 applicants were more satisfied with their stage of the application process than Stage 2 applicants were with theirs. Applicants indicated that they were not confident reviewers were aware of the character limits imposed on applications. Respondents suggested increasing the limits for "Significance of Contributions" in Stage 1 and "Research Approach" in Stage 2.

Sections 7 and 8 cover the respondents' perceptions of the CV and budget. Respondents indicated that the Canadian Common CV (CCV) was challenging to use due to technological issues. Applicants also suggested increasing the limits of most Stage 2 CV sections and the budget justification section. Completing and reviewing the budget within the character limits provided was also found to be challenging, and reviewers suggested that applicants should provide more justification when asking for an increased budget. Sections 9 and 10 provide a high-level overview of respondents' perceptions of the relevance of the supporting documents and learning materials. Respondents used the documents and materials and found them helpful; however, they suggested streamlining the information by consolidating documents. Section 11 presents respondents' feedback on ResearchNet. Generally, respondents were satisfied with ResearchNet and the support service provided, although Stage 2 applicants were less satisfied with the ResearchNet support service than other respondents. Sections 12 and 13 provide information on overall satisfaction with the review format and process. Overall, reviewers were satisfied with the character limit in the review worksheets. Applicants, reviewers, and VCs were divided in how satisfied they were with the review process. Applicants after decision were generally dissatisfied; however, satisfaction was associated with whether they advanced successfully in the process. Additionally, FAS reviewers were satisfied with their review process, while Stage 1 and 2 reviewers were concerned with how the adjudication criteria and scale were being used by other reviewers. Section 14 includes reviewers' and VCs' experiences with the ranking process. Reviewers struggled with using and interpreting the alphabetical scale and had difficulty rating across career stages. Reviewers also did not understand the purpose of breaking ties during the ranking process. Sections 15 and 16 present feedback on reading reviews and responses on the quality of the reviews. Respondents perceived a lack of consistency in the quality of reviews and felt that reviews were too brief or lacked the specificity needed to inform the development of a revised grant. Sections 17, 18, and 19 include feedback on the online discussion process, the role of the VC, and perceived workloads. Feedback indicated that the discussions were crucial to the process; however, participation by reviewers and VCs was inconsistent. Participation of the VC was found to be generally helpful, and workloads were manageable. Sections 20 and 21 summarize feedback on the face-to-face meeting and the NOD document. Overall, the respondents found the face-to-face meeting important and successful, although moving applications between groups was challenging. Generally, applicants did not find the information in the NOD document helpful and indicated they had challenges interpreting its content. Sections 22 and 23 provide a high-level overview of feedback received on the surveys and the limitations of this report.

Competition Overview

The 2015 Foundation Grant competition included registration followed by a three-stage competition and review process. In Stage 1, applicants (i.e., Stage 1 applicants) completed a structured application form that aligned with adjudication criteria focused on their caliber as an applicant. Stage 1 applicants were also required to provide a CV through the web-based CCV. Reviewers (i.e., Stage 1 reviewers) assessed the caliber of the applicant(s) and their vision and program direction. They assessed their assigned applications by providing structured reviews that consisted of an alphabetical rating for each adjudication criterion and brief comments on strengths and weaknesses for each of the Stage 1 criteria. Aided by their ratings, reviewers were asked to rank each application within their group of applications, and CIHR combined all reviewer rankings into a consolidated ranking for each application. Reviews were uploaded and then discussed remotely through an internet-based platform that allowed reviewers to communicate virtually during a three-day asynchronous online discussion period. VCs (i.e., Stage 1 VCs) oversaw and supported the Stage 1 review process. Once the Stage 1 review process was complete, applicants (i.e., Stage 1 applicants after decision) were provided with a Notice of Decision (NOD) document.

Successful applicants were invited to submit a Stage 2 application (i.e., Stage 2 applicants). The structured Stage 2 application form included adjudication criteria focused on the quality of the proposed program of research. Stage 2 applicants were given a budget baseline calculated by CIHR and were required to submit additional CV information through the CCV. Reviewers (i.e., Stage 2 reviewers) assessed the quality of the program, expertise, experience, and resources, in addition to the budget requested. Discussion of Stage 2 reviews was conducted using the same online platform used in Stage 1 during a three-day asynchronous online discussion period. VCs (i.e., Stage 2 VCs) oversaw and supported the Stage 2 review process.

In the FAS, VCs became reviewers (i.e., FAS reviewers) who participated in a face-to-face discussion and integrated the results of the Stage 2 reviews. FAS reviewers focused on assessing applications that were identified as being close to the funding cut-off ("grey zone") or that demonstrated a high degree of variability in Stage 2 reviewer assessments. Once the FAS was complete, the FAS reviewers provided CIHR with recommendations on which applications should be funded. A final NOD document that integrated the Stage 2 and FAS results was provided to Stage 2 applicants (i.e., Stage 2 applicants after decision).
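
This report does not specify how CIHR computed the consolidated ranking. As a minimal illustrative sketch only, assuming consolidation by averaging each application's within-reviewer ranks (application and reviewer names are hypothetical), the step could look like this in R:

    # Hypothetical illustration of average-rank consolidation; not CIHR's actual method.
    # Each reviewer ranks the applications in their assigned group (1 = best).
    ranks <- data.frame(
      application = c("A", "B", "C", "A", "B", "C", "A", "B", "C"),
      reviewer    = rep(c("R1", "R2", "R3"), each = 3),
      rank        = c(1, 2, 3, 2, 1, 3, 1, 3, 2)
    )

    # Consolidated ranking: mean of within-reviewer ranks, then re-ranked.
    mean_rank <- aggregate(rank ~ application, data = ranks, FUN = mean)
    mean_rank$consolidated <- rank(mean_rank$rank, ties.method = "min")
    mean_rank[order(mean_rank$consolidated), ]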

Methods

Online surveys were developed in FluidSurveys and sent to Foundation Grant applicants, reviewers, research administrators, and VCs from September 2015 to August 2016. CIHR administered the surveys and provided the survey results to the Knowledge Translation Program at St. Michael's Hospital for analysis between November 2016 and February 2017. The surveys included closed- and open-ended questions. The closed-ended questions were analyzed as proportions of the total responses received for a question using SPSS v20. Where appropriate, Likert scale responses were reduced to the nominal level by combining all "agree" and "disagree" responses into two categories of "accept" and "reject"; chi-square tests or Fisher's exact tests were applied to determine statistical significance. We used t-tests or ANOVAs in the R computing environment to compare mean scores of continuous variables across demographic subgroups.

Comments received for open-ended questions were analyzed in NVivo 11, with French responses translated into English. Two qualitative analysts independently familiarized themselves with the survey data by reviewing a portion of responses to develop an initial list of codes, key ideas, and themes. The analysts compared their initial lists of potential codes and developed an analytic framework to apply to the data. A single analyst then coded responses to the developed framework, refining and modifying it to better fit the data. An iterative data analysis process was used in which the framework was repeatedly adapted during coding to capture emergent themes. Note that only responses relevant to the question asked were coded and that one response could be coded to multiple themes. Major findings are presented in this report; responses from all survey questions are presented in the appendices.
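
As a minimal sketch of the quantitative approach described above (the data and variable names are hypothetical; this is not the analysts' actual script), the Likert collapsing and test selection could be expressed in R as:

    # Hypothetical example data; not survey responses.
    responses <- data.frame(
      rating = c("Strongly agree", "Agree", "Neutral", "Disagree",
                 "Strongly disagree", "Agree", "Disagree", "Agree"),
      group  = c("Stage 1", "Stage 2", "Stage 1", "Stage 2",
                 "Stage 1", "Stage 2", "Stage 1", "Stage 2")
    )

    # Reduce Likert responses to the nominal level: all "agree" responses
    # become "accept", all "disagree" responses become "reject".
    responses$collapsed <- ifelse(responses$rating %in% c("Strongly agree", "Agree"),
                                  "accept",
                           ifelse(responses$rating %in% c("Strongly disagree", "Disagree"),
                                  "reject", NA))
    tab <- table(responses$group, responses$collapsed)

    # Apply Fisher's exact test when any expected cell count is below 5,
    # otherwise a chi-square test.
    if (any(chisq.test(tab)$expected < 5)) fisher.test(tab) else chisq.test(tab)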

Findings

This report presents the survey response rates and respondent demographics. It also includes respondents' feedback on 19 areas of the application and review process: their perceptions of the adjudication criteria, the weighting of adjudication criteria, satisfaction with the application process, the application format, the CV, the budget, supporting documents, learning materials, ResearchNet, the review format, the review process, the ranking process, their experience reading reviews, the quality of reviews, the online discussion, the role of the VC, perceived workload, the face-to-face meeting, and the NOD document. The last sections of the report present participants' comments on the surveys used to collect their feedback and the limitations of the survey results. All proportions are calculated from valid responses; associated n values can be found in the meta-tables in Appendix A. Additionally, open-ended responses were consolidated and presented as summary statements; associated n values for response themes are found in tables in Appendix B and are organized by survey respondent.

1. Survey response rate

A total of 3146 participants were invited to complete a survey; 1624 (51.6%) responses were received and 20 of these were excluded due to missing data. A total of 1604 responses were included in the final analysis. There were 591 responses from Stage 1 applicants, 31 from Research administrators, 249 from Stage 1 reviewers, 22 from Stage 1 VCs, 380 from Stage 1 applicants after decision, 130 from Stage 2 applicants, 90 from Stage 2 reviewers, 11 from Stage 2 VCs, 7 from FAS reviewers, and 93 from Stage 2 applicants after decision. Stage 2 applicants after decision had the lowest response rate (36.0%) and Stage 1 VCs had the highest (73.0%) (Table 1).
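
The response-rate arithmetic above can be reproduced directly from the reported counts:

    invited  <- 3146
    received <- 1624
    excluded <- 20

    round(100 * received / invited, 1)    # 51.6: raw response rate
    analyzed <- received - excluded       # 1604 responses in the final analysis
    round(100 * analyzed / invited, 1)    # 51.0: rate cited in the Executive Summary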

2. Demographics

The following section provides an overview of the respondents' career stage, profession, research pillar, language preferences, funding status, and previous experience. For career stages, early career is defined as having less than five years as an independent researcher, mid-career as 5-10 years, and senior career as over 10 years of experience. The research pillars encompass the four pillars of health research (biomedical; clinical; health systems and services; and social, cultural, environmental, and population health) outlined in CIHR's mandate. For a full breakdown of demographic criteria, please refer to Tables 2-13 in Appendix A.

2.1 Applicants

Out of 991 Stage 1 applicants, 591 replied and included early (31.8%), mid- (21.8%), and senior career scientists (46.0%). Additionally, out of 260 Stage 2 applicants, 130 responded to the survey and included early (33.1%), mid- (18.5%), and senior career scientists (48.5%) (Table 2). Stage 1 and Stage 2 applicant responses included a large proportion of applicants in the biomedical field (56.9%, 53.8%) and smaller proportions in the clinical (17.3%, 20.8%), health systems/services (10.8%, 13.8%), and social, cultural, environmental, and population health fields (14.6%, 10.8%) (Table 4). A large proportion of Stage 1 and 2 applicants (66.8%, 82.3%) held CIHR funding at the time of the survey (Table 5). When asked about their language use and preference, 87.0% of Stage 1 applicants indicated that English was their Official Language, 98.5% used English when completing their application, and 96.3% felt comfortable submitting in their language of choice (Table 6). A small proportion (2.9%) of respondents encountered language-related issues when completing their application. Open-ended responses included a belief that an application submitted in English had a higher likelihood of acceptance and that an application in French was not as effective. Almost half of Stage 1 applicant respondents (46.7%) had previously applied to the 2014 Foundation competition, compared to 35.4% of Stage 2 applicant respondents (Table 7).

2.2 Applicants after decision

Out of 910 Stage 1 applicants after decision, 380 replied and included early (30.3%), mid- (31.6%), and senior career scientists (37.6%). Additionally, out of 260 Stage 2 applicants after decision, 93 replied and included early (26.9%), mid- (17.2%), and senior career scientists (55.9%) (Table 2). Respondents included a large proportion of Stage 1 and 2 applicants after decision from the biomedical field (61.3%, 61.3%) and smaller proportions in the clinical (14.7%, 10.8%), health systems/services (9.5%, 14.0%), and social, cultural, environmental, and population health fields (13.7%, 14.0%) (Table 4). Nearly half of Stage 1 and 2 applicants after decision respondents (50.9%, 45.2%) had previously applied to the 2014 Foundation competition (Table 7). The majority of survey responses from Stage 1 applicants after decision (68.6%) were from those who were not successful past Stage 1. Survey responses from Stage 2 applicants after decision were almost equally distributed between those who were (54.8%) and were not successful (45.2%) in the 2015 Foundation Grant competition (Table 8).

2.3 Research administrators

Out of 59 Research administrators, 31 replied and were not asked to complete a demographic section in their survey.

2.4 Reviewers

A total of 433 Stage 1 reviewers, 171 Stage 2 reviewers, and 16 FAS reviewers were invited to complete a survey. The respondents (249 Stage 1 reviewers, 90 Stage 2 reviewers, and 7 FAS reviewers) were largely mid- (51.2%, 27.8%, 14.3%) and senior career scientists (42.3%, 67.8%, 85.7%) (Table 2). A small proportion of responses were from early career scientists: 5.2% for Stage 1 reviewers and 4.4% for Stage 2 reviewers. Stage 1, Stage 2, and FAS reviewer respondents included a large proportion from the biomedical field (63.3%, 68.9%, 42.9%) and small proportions from the clinical (18.5%, 11.1%, 28.6%), health systems/services (6.5%, 14.4%, 0%), and social, cultural, environmental, and population health fields (10.5%, 5.6%, 28.6%) (Table 4). Over 75.0% of Stage 1, Stage 2, and FAS reviewer respondents had previous CIHR review experience. A small percentage of reviewers indicated they did not have any previous experience reviewing for CIHR: 1.6% of Stage 1 reviewers and 12.2% of Stage 2 reviewers (Table 9).

2.5 Virtual chairs

A total of 30 Stage 1 VCs were invited to complete a survey; 22 responded and included mid- (18.2%) and senior career scientists (81.8%). Additionally, out of 16 Stage 2 VCs, 11 responded and included mid- (9.1%) and senior career scientists (90.9%) (Table 2). Stage 1 and 2 VCs included a large proportion from the biomedical field (81.8%, 72.7%) and small proportions from the clinical (4.5%, 9.1%), health systems/services (4.5%, 9.1%), and social, cultural, environmental, and population health fields (9.1%, 9.1%) (Table 4). Over 70.0% of Stage 1 and 2 VC respondents had previous review and chairing experience for CIHR. A small percentage of Stage 1 and 2 VCs indicated that they did not have any previous experience reviewing for CIHR (4.5%, 0%) or chairing for CIHR (4.5%, 9.5%) (Tables 9, 10).

3. Feedback on the adjudication criteria and scale

As part of the new application and review process, CIHR introduced adjudication criteria specific to each stage. Stage 1 adjudication criteria focused on the caliber of the applicant and included: "Leadership", "Significance of Contributions", "Productivity", and "Vision and Program Direction". Stage 2 adjudication criteria focused on the quality of the proposed research and included: "Research Concept", "Research Approach", "Expertise", "Mentorship and Training", and "Quality of Support Environment". A new rating scale was also developed for the competition that reviewers used to rate each of the adjudication criteria (O++, O+, O, E++, E+, E, G, F, and P, ranging from Outstanding to Poor). The following section provides an overview of the respondents' experience using the adjudication scale and feedback on the adjudication criteria. This section is organized by application stage and review process. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 22 Stage 1 VCs, 380 Stage 1 applicants after decision, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 93 Stage 2 applicants after decision; associated total responses can be found in Appendix A (Tables 12-27).
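
Because the scale is alphabetical, it supports ordered comparisons but not arithmetic; any average requires an assumed numeric mapping, a point respondents return to later in this section. A minimal R sketch of this distinction (the ratings are hypothetical):

    # The nine-point adjudication scale as an ordered factor (worst to best).
    scale_levels <- c("P", "F", "G", "E", "E+", "E++", "O", "O+", "O++")
    ratings <- factor(c("O+", "E++", "O", "O++", "E+"),
                      levels = scale_levels, ordered = TRUE)

    ratings[1] > ratings[2]      # TRUE: ordered comparison is well defined

    # A mean is only defined under an assumed numeric mapping (here, level index);
    # the result depends entirely on that assumption.
    mean(as.numeric(ratings))    # 7 under this mapping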

3.1 Stage 1

Overall, over 70.0% of Stage 1 applicants and research administrators agreed that Stage 1 adjudication criteria were clear and that they understood what application information should be included for each adjudication criterion (Table 12). Respondents stated that the criteria were clearly outlined in the "Interpretation Guidelines", as indicated by 72.7% of Stage 1 applicants (Table 13). The survey respondents who expressed a lack of clarity about the adjudication criteria described their experiences in the open-ended responses. These survey respondents indicated there was overlap between adjudication criteria, particularly related to content included in "Leadership", "Significance of Contributions", and "Productivity". For example:

"It was not clear sometimes if something was considered Leadership or Significance, especially for junior PIs who's most significant contributions are often also demonstrations of leadership…Also, these can overlap with Productivity, since for most scientists their contributions are publications and grants."

Stage 1 applicant

Additionally, 58.1% of Stage 1 applicants indicated that the distinction between "Productivity" and "Significance of Contributions" was clear (Table 12). In the open-ended responses, applicants indicated that these criteria shared a significant amount of conceptual overlap and that the distinction between them was unclear. Research administrators expressed similar concerns that this distinction was not clearly understood by applicants. Other areas that respondents believed could be clarified included the "Vision and Program Direction" criterion and the "Summary" section of the application; applicants indicated that these sections felt largely redundant. In addition, survey respondents indicated that they were uncertain about the significance or relevance of some criteria to the adjudication process, particularly how the "Leadership" criterion would be applied across various career stages or disciplines. Specifically, it was unclear how early career investigators versus established investigators would be compared on "Leadership" and how basic science researchers versus clinical researchers would be compared on "Productivity". For example:

"The productivity section is also highly dependent on the field of work. Research metrics comparisons can be very misleading as popular fields (e.g. large audience of researches) lead to higher impact metrics from citations etc. than more specialized fields."

Stage 1 applicant

When asked if adjudication criteria should be removed or added, the majority of Stage 1 applicants and Research administrators did not want to remove (84.6%, 84.0%) or add any criteria (76.6%, 70.8%) (Table 14). Those who did want to change the adjudication criteria described the changes in the open-ended responses. Survey respondents suggested merging "Productivity" and "Significance of Contributions" into one section. The removal of the "Leadership" criterion was also suggested, as it was perceived to disadvantage early career investigators. Respondents also suggested that applicants' career stage be considered as an adjudication criterion. For example:

"In terms of leadership, as a young researcher, I found it difficult to provide much evidence beyond managing my lab and contributions to the field. I don't imagine most new investigators are "Directors' or "Editors' of anything. Perhaps the instructions could accommodate some examples of new-researcher leadership qualities."

Stage 1 applicant

Stage 1 reviewers and VCs were similarly asked to provide feedback on the adjudication criteria and how applicants completed each section. Approximately half of Stage 1 reviewers (53.5%) recommended that applicants should receive additional guidance regarding the adjudication criteria (Table 15). In the open-ended responses, reviewers echoed this perception, specifically referencing additional guidance on what to include and the level of detail required for the "Vision and Program Direction" criterion. Additionally, reviewers indicated that applicants required additional clarity on the appropriate content to include for the "Productivity", "Significance of Contributions", and "Leadership" criteria. Reviewers noted inconsistencies across applications in what information was provided in which section and suggested that there was some conceptual overlap between these criteria, making them difficult to assess. For example:

"It seems that many included the information, but in other sections of the grant and it was expected that I extract that information from other sections and include it in the evaluation of each section...It is not clear if we are required to extract/infer this information from other sections, or if we are supposed to adhere strictly to what is in each section. Some guidance or additional guidelines to both the applicant and reviewer would be helpful in this area."

Stage 1 reviewer

Stage 1 reviewers also suggested that additional information could be provided to applicants on how to address the adjudication criteria across the various career stages, particularly on how early career investigators should complete the "Leadership" criterion.

When asked about their perceived ability to adjudicate, 74.4% of Stage 1 reviewers agreed that they were able to assess "Leadership" using the information provided by the applicant, 74.7% were able to assess "Significance of Contributions", 88.2% were able to assess "Productivity", and 61.4% indicated that they were able to assess "Vision and Program Direction" (Table 16). This was similar to their feedback on how to assess each criterion: 72.7% were clear on the "Leadership" category, 76.8% were clear on "Significance of Contributions", 90.3% were clear on "Productivity", and 69.2% were clear on "Vision and Program Direction" (Table 17). The majority of Stage 1 reviewers (63.6%) identified that they reviewed applications from established investigators (Table 18), and about half disagreed that "Leadership", "Significance of Contributions", and "Vision and Program Direction" could be easily applied according to career stage (Table 19). This is consistent with the open-ended responses, where reviewers expressed difficulty adjudicating applications across career stages.

When asked about the appropriateness of adjudication criteria, 65.0% of Stage 1 reviewers agreed that the adjudication criteria were appropriate and that they could assess the caliber of the applicant (Table 20). Similarly, 59.2% responded that they could distinguish differences in the caliber of the applicants (Table 21). The ability to use the adjudication criteria to assess or distinguish applicants was not associated with whether the reviewer was assessing an early career or established investigator (Table 22). The majority of Stage 1 reviewers (60.2%) and 40.8% of Stage 1 VCs agreed that the criteria allowed them to differentiate between applications and not just the applicants (Table 21). Those who indicated difficulty in assessing the caliber of the applicant and their application elaborated in the open-ended responses. Stage 1 reviewers indicated difficulty adjudicating applications across different research disciplines; specifically, they had difficulty assessing basic research applications against clinical research applications and adjudicating applications outside their area of expertise. For example:

"Applicants with clinical/translational studies and clinician scientists were very difficult to compare to basic science scientists. Challenges were encountered in terms of significance of contribution since some reviewers attribute very high scores to translational studies and minor scores to basic sciences discoveries. It's very difficult also to compare productivity between people with few high impact basic science publications and others with numerous low-to-mid impact clinical papers. Clearly, some reviewers attributed very high credits to translational studies, while most did not."

Stage 1 reviewer

When asked about their use of the adjudication scale, 59.2% of Stage 1 reviewers indicated that they used the full range of the adjudication scale, although only a small proportion of Stage 1 VCs (27.2%) agreed that reviewers had done so (Table 23). This is consistent with the open-ended responses, where respondents indicated that there was a lack of clarity on how to properly use the scale. In particular, reviewers identified that they commonly used the high end of the scale because they generally felt that applications were strong. Additionally, reviewers did not know how to properly use the scale and were unsure of what use was expected. This lack of clarity led to a perception that the scale complicated their ability to discriminate between applicants. General feedback also indicated that the scale was not used consistently between reviewers. To address this issue, respondents suggested that the adjudication scale could be improved for future competitions by reverting to the numerical scale used in previous competitions, so that an overall or average score could be calculated. For example:

"It would be easier to have an average score if scores were numerical - then I could glanced at ranking scores more effectively comparing myself with others…"

Stage 1 reviewer

Once Stage 1 decisions were complete, Stage 1 applicants after decision were asked for their feedback on the adjudication criteria. Their responses were divided, with 40.2% satisfied and 43.0% dissatisfied with the clarity of the adjudication criteria. Additionally, 29.9% responded that they were satisfied with the clarity of the rating system, while 55.3% were dissatisfied (Table 24). Open-ended responses indicated that applicants felt there were discrepancies in the rating system, which led to a lack of clarity on how consistently they were judged.

3.2 Stage 2

Overall, over 70.0% of Stage 2 applicants agreed that Stage 2 adjudication criteria were clear and that they understood what application information should be included in relation to each adjudication criterion (Table 25). The criteria were clearly outlined in the "Interpretation Guidelines", as indicated by 71.7% of Stage 2 applicants (Table 13). Stage 2 themes were similar to those raised by Stage 1 survey respondents: the adjudication criteria were not always clear with regard to the level of detail expected from applicants, specifically for the "Research Approach". For example:

"It is difficult to judge how much detail regarding experiments to provide in the Research Approach section. From comments that I received last time from Stage 2, reviewers were polarized on how much detail vs. big picture they wanted to see."

Stage 2 applicant

Additionally, survey respondents indicated that there was a lack of clarity as to the significance or relevance of some of the criteria, particularly in regard to the "Mentorship" criterion.

The majority (76.2%) of Stage 2 applicants responded that they were able to convey, under the "Expertise" section, how program experts would help ensure the delivery of the proposed objectives (Table 26). Those who indicated they were unable to convey this information adequately elaborated in the open-ended responses. Respondents shared the perception that they lacked clear direction and adequate space to communicate their teams' expertise. Respondents also suggested that applicants appeared to be disadvantaged if working with an interdisciplinary team or in collaboration with other researchers, as there was not enough space to adequately explain the role of each team member. For example:

"The space is limited with a large team and no CVs. I felt it depended on the expertise of the reviewer if they would know/realize/appreciate some of the Program Experts without having the space to explain and/or a CV to back up a statement about their expertise."

Applicants communicated a concern that they would be negatively assessed for not providing sufficient information.

When asked if adjudication criteria should be removed or added, the majority of Stage 2 applicants and Stage 2 reviewers did not want to remove (81.5%, 66.7%) or add any adjudication criteria (72.6%, 70.0%) (Table 14). Respondents who indicated that they wanted to change criteria expressed their opinions in the open-ended responses. Stage 2 respondents suggested additional adjudication criteria or a background section for preliminary data, previous progress in their research, and funding history. Additionally, they suggested adding a criterion for the potential benefits or knowledge contributed by the proposed research. On removing criteria, applicants suggested removing the "Mentorship" or "Quality of the Support Environment" criteria, as they may bias against early career investigators or smaller institutions, respectively. For example:

"I felt that giving a rating to the Quality of Support category risked favoring applicants from larger units and institutions, where the research facilities can often be more extensive and diverse than in smaller ones; reviewers may be inclined to give a higher rating to a proposal on this basis."

Stage 2 reviewer

Approximately half of Stage 2 reviewers (51.1%) recommended that applicants receive additional guidance regarding the adjudication criteria (Table 15). This was also confirmed as a theme in the open-ended responses from reviewers. Specifically, they mentioned that applicants need more guidance on differentiating the information that should be provided for the "Research Concept" versus the "Research Approach". For example:

"They need clear explanation that concept should be different than approach. From my reading, the best grants used concept to explain the overarching aims of the grant and putting the research in context for relevance. The Approach focused more explicitly on the methods they would use and the model approaches they would take. This allowed for separate evaluation of the research concept and methodological approaches."

Stage 2 reviewer

When asked about their perceived ability to adjudicate, over 80.0% of Stage 2 reviewers agreed that they were able to assess the "Research Concept", "Expertise", "Mentorship and Training", and "Quality of Support Environment" using the information provided by the applicant. In comparison, 70.0% agreed they were able to assess the "Research Approach" using the information provided by the applicant (Table 16). When asked about the clarity of criteria, the majority of Stage 2 reviewers were clear on how to assess the "Research Concept" (76.7%), "Research Approach" (72.1%), "Expertise" (83.3%), "Mentorship and Training" (78.9%), and "Quality of Support Environment" (76.6%) (Table 17). However, just over half of Stage 2 reviewers (51.1%) agreed that the adjudication criteria could easily be applied across career stages (Table 19). This is consistent with the open-ended responses, which indicated that comparing applications across various career stages was difficult but an important factor to consider. For example:

"For young investigators many times it was difficult to assess their background knowledge/experience in some of the proposed methodologies used. In many instances they would rely on collaborators with whom they do not have an established track record with. It was very difficult to assess this."

Stage 2 reviewer

When asked about the appropriateness of adjudication criteria, 62.2% of Stage 2 reviewers agreed that the adjudication criteria were appropriate and that they could assess the quality of the proposed program of research (Table 20). Additionally, 54.5% responded that they could distinguish differences in the quality of the proposed research (Table 21). The ability of Stage 2 reviewers to distinguish the quality of the proposed program of research was associated with their applicants' career stage. Specifically, more reviewers were able to distinguish the quality of proposals from early career investigators (72.7%) than from established career investigators (48.5%) (n=90, p=0.046; Table 22). In the open-ended responses, Stage 2 reviewers identified that there was not enough information in the application to adequately adjudicate, particularly for reviewers who were not very familiar with the research area. For example:

"The topic of grants was too broad for me to an appropriate evaluation. I am a basic scientist and was given man imaging grants to review. While I can understand the basics of imaging, I am not in a position to evaluate an imaging grant relative to a basic science grant (of which I had 3 of each)."

Stage 2 reviewer

Respondents also indicated that some of the criteria were not helpful in discriminating between applicants, in particular "Quality of Support Environment" and "Mentorship and Training".

The majority (60.4%) of Stage 2 reviewers agreed that the criteria allowed them to identify meaningful differences between applications (Table 21). The ability of Stage 2 reviewers to identify meaningful differences between applications was associated with their applicants' career stage. Specifically, more reviewers were able to distinguish differences among early career investigator applications (85.7%) than among established career investigator applications (52.3%) (n=90, p=0.01; Table 22). In contrast, only 27.3% of Stage 2 VCs agreed that the adjudication scale allowed them to distinguish meaningful differences in the applications.

When asked about the usability of the adjudication scale, 64.0% of Stage 2 reviewers agreed that the descriptions for each letter of the scale were clear, and 55.8% agreed that the descriptions were useful, compared to 18.6% who disagreed (Table 27). Additionally, 51.3% of Stage 2 reviewers reported that they were able to use the full range of the adjudication scale; however, only 36.4% of Stage 2 VCs agreed that Stage 2 reviewers used the full range (Table 23). FAS reviewers also indicated that Stage 2 reviewers did not use the full range. Consistent with the feedback from Stage 1, open-ended responses indicated a common theme that there was a lack of clarity on how to properly use the scale and what use was expected. Reviewers indicated difficulty discriminating between applications, especially those of high distinction. For example:

"The meaning of Outstanding+ and ++ is not clear and this leads to distortion in the rating. Most reviewers do not use the full scale therefore there is an implicit group pressure to use the top of the rating scale, which is not satisfactory for the reviewers and, most importantly, very frustrating for the applicants who do not understand why an outstanding application is not funded."

Stage 2 reviewer

3.3 Final assessment stage

Once the FAS decisions were complete, Stage 2 applicants after decision were asked for their feedback on the adjudication criteria. Their responses showed that 44.9% were satisfied and 42.7% were dissatisfied with the clarity of the adjudication criteria. Additionally, 34.8% responded that they were satisfied with the clarity of the rating system, compared to 56.1% who were dissatisfied (Table 24).

4. Weighting of adjudication criteria

The following section provides an overview of the respondents' feedback on the weighting of the adjudication criteria and is organized by Stage 1 and 2 of the application and review process. Stage 1 adjudication criteria were weighted as: 25% "Leadership", 25% "Significance of Contributions", 25% "Productivity", and 25% "Vision and Program Direction". Stage 2 adjudication criteria were weighted as: 25% "Research Concept", 25% "Research Approach", 20% "Expertise", 20% "Mentorship and Training", and 10% "Quality of Support Environment". The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 130 Stage 2 applicants, and 90 Stage 2 reviewers; associated total responses can be found in Appendix A (Tables 28-31).
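
For illustration only, the published weights would combine criterion-level ratings into an overall score as a weighted sum; the numeric ratings below are hypothetical, since CIHR's scale was alphabetical and the report does not describe a numeric aggregation:

    # Stage 1 weights as published (each criterion 25%).
    weights <- c(Leadership = 0.25, Significance = 0.25,
                 Productivity = 0.25, Vision = 0.25)

    # Hypothetical numeric ratings for one application (invented for illustration).
    ratings <- c(Leadership = 8, Significance = 7, Productivity = 9, Vision = 6)

    # Weighted overall score.
    sum(weights * ratings)   # 7.5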

4.1 Stage 1

Stage 1 applicants were asked to provide their feedback on the weighting of Stage 1 adjudication criteria as an open-ended response. Stage 1 applicant responses included positive feedback that the weighting of the adjudication criteria was appropriate. However, respondents also elaborated on challenges with the "Leadership" criterion, feeling that it carried too much weight relative to its importance. Respondents indicated that a heavily weighted "Leadership" criterion may negatively impact early career investigators. For example:

"I think to a certain extent, "leadership" is difficult to quantify and difficult to verify, so should perhaps weigh less. On the other hand, productivity and scientific contributions are easy to verify."

Stage 1 applicant

In addition, respondents indicated that there was too much weighting on the "Vision and Program Direction" criterion considering the limited amount of detail they could provide in the constrained character count. Applicants suggested that they would want to increase the weighting for "Vision and Program Direction" if character limits were increased.

Additionally, about half of Stage 1 reviewers perceived that the weighting was appropriate for each of the Stage 1 criteria. Specifically, 50.0% agreed on the "Leadership" weighting, 54.6% agreed on the "Significance of Contributions" weighting, 57.5% agreed on the "Productivity" weighting, and 54.2% agreed on the weighting for "Vision and Program Direction" (Table 28). For those who did not agree, the median ideal weighting was 15.0% (IQR=10-20) for "Leadership", 25.0% (IQR=15-30) for "Significance of Contributions", 30.0% (IQR=20-40) for "Productivity", and 30.0% (IQR=20-40) for "Vision and Program Direction" (Table 29). In the open-ended responses, reviewers also suggested increasing the weight of "Vision and Program Direction" and "Productivity". Reviewers again indicated that the "Leadership" weighting would bias against early career investigators and suggested decreasing its weight.

4.2 Stage 2

Overall, the majority of Stage 2 applicants perceived that the weighting was appropriate for each of the Stage 2 criteria. Specifically, 70.5% agreed that the weighting for "Research Concept" was appropriate, 65.5% agreed on the "Research Approach" weighting, 67.2% agreed on the "Expertise" weighting, and 79.5% agreed on the weighting for "Quality of Support Environment". However, a smaller proportion of Stage 2 applicants (56.6%) perceived that the weighting for "Mentorship and Training" was appropriate (Table 30). For those who did not perceive the weighting to be appropriate, the suggested median ideal weighting was 30.0% (IQR=20-30) for "Research Concept", 30.0% (IQR=30-38) for "Research Approach", 25.0% (IQR=15-30) for "Expertise", 10.0% (IQR=6-15) for "Mentorship and Training", and 5.0% (IQR=0-10) for "Quality of Support Environment" (Table 31). Similarly, feedback from Stage 2 applicants in open-ended responses indicated that the "Quality of Support Environment" criterion should not be weighted as heavily, as it may bias against certain geographical locations or sizes of research labs. For example:

"Quality of support environment is the weirdest section. If a PI has been successful, the environment must be accommodating. Is CIHR really going to take an outstanding PI and arbitrarily drop the scores because the reviewer isn't convinced of the environment for some reason. Conversely do PIs from a "gig" center automatically get 10% regardless of their abilities. This rewards geography over quality."

Stage 2 applicant

Respondents also indicated that there was not enough weight placed on the "Research Approach" criterion and suggested merging it with "Research Concept" for added weight and space. For example:

"More weight on the concept and approach would be ideal to ensure that the candidate provides a solid rationale for the program of research and clearly articulates a plan in which to execute the program objectives."

Stage 2 applicant

Generally, only about half of Stage 2 reviewers perceived that the weighting for the Stage 2 criteria was appropriate. Specifically, 51.1% agreed that the weighting for "Research Concept" was appropriate, 56.7% agreed on "Expertise", 42.2% agreed on "Research Approach", and 40.0% agreed on the weighting for "Mentorship and Training". The weighting for "Quality of Support Environment" was appropriate for 62.2% of reviewers (Table 30). For those who did not agree, the median ideal weighting was 30.0% (IQR=20-30) for "Research Concept", 35.0% (IQR=30-40) for "Research Approach", 25.0% (IQR=10-30) for "Expertise", 10.0% (IQR=10-15) for "Mentorship and Training", and 5.0% (IQR=0-10) for "Quality of Support Environment" (Table 31). Additionally, open-ended responses from reviewers supported placing greater emphasis or weight on the "Research Approach" or proposed methodology of the proposal.

5. Overall satisfaction with application process

The following section provides an overview of the applicants' experience and feedback on the structured application process and is organized by Stage 1 and Stage 2. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, and 130 Stage 2 applicants; associated total responses can be found in Appendix A (Tables 32-36).

5.1 Stage 1

Overall, the majority of Stage 1 applicants (70.2%) and Research administrators (80.0%) were satisfied with the structured application process (Table 32). Overall satisfaction was associated with previous grant experience of the applicant. Specifically, more applicants were satisfied if they did not have previous experience submitting to CIHR (75.3%) compared to applicants with experience (64.1%) (n=566, p=0.015; Table 33). When asked to compare the process to the previous Foundation competition in 2014, 51.5% of Stage 1 applicants indicated that this submission took less time compared to 17.4% who indicated it took more time; 34.9% indicated it was easier to use compared to 3.5% who indicated it was harder to use. Finally, 32.5% indicated it was less work compared to 16.6% who indicated it was more work (Tables 34-36).
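
As an illustration of the kind of association test reported above, an R sketch on invented counts (these are not the survey data; they are chosen only to total n=566) would be:

    # Invented 2x2 counts for illustration; NOT the survey data, and this
    # will not reproduce the reported p-value exactly.
    tab <- matrix(c(220, 73,    # no previous CIHR submission: satisfied, dissatisfied
                    175, 98),   # previous CIHR submission:    satisfied, dissatisfied
                  nrow = 2, byrow = TRUE,
                  dimnames = list(experience = c("none", "prior"),
                                  response = c("satisfied", "dissatisfied")))
    chisq.test(tab)   # tests whether satisfaction is associated with prior experience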

5.2 Stage 2

Overall, 51.6% of Stage 2 applicants were satisfied with the application process (Table 32). Unlike for Stage 1 applicants, satisfaction was not associated with the applicants' previous grant application experience. When asked to compare the process with the 2014 Foundation competition, 44.2% of Stage 2 applicants indicated that the current submission took less time, compared to 28.0% who indicated it took more time. Additionally, 53.4% indicated that it was easier to use compared to 2.3% who indicated it was harder to use; and 22.1% indicated it was less work compared to 23.4% who indicated it was more work (Tables 34-36).

When asked to compare the process to the previous Open Operating Grants Program (OOGP) competition, 60.9% of Stage 2 applicants indicated that this submission took more time, compared to 25.2% who indicated it took less time. Additionally, 53.5% indicated it was harder to use compared to 27.9% who indicated it was easier to use; and 62.8% indicated it was more work compared to 22.1% indicating it was less work (Tables 34-36).

6. Feedback on the structured application format

The following section provides an overview of the respondents' experience and feedback on the structured application format, one of the new design elements of the Foundation Grant. The idea behind the structured format was to focus applicants and reviewers on specific adjudication criteria. For Stage 1, "Leadership", "Significance of Contributions", and "Productivity" were allocated half a page each; "Vision and Program Direction" was allocated one page. For Stage 2, "Research Concept", "Research Approach", and "Expertise" were allocated three pages each; "Mentorship and Training" was allocated two pages; and "Quality of Support Environment" was allocated one page. Results are organized by Stage 1 and 2 of the application and review process. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 130 Stage 2 applicants, and 90 Stage 2 reviewers; associated total responses can be found in Appendix A (Tables 37-48).

6.1 Stage 1

Overall, the majority of Stage 1 applicants (92.1%) and Research administrators (100%) did not have any non-technical problems in completing the structured application form (Table 37). Open-ended responses identified positive feedback for this stage of the application process. Generally, Stage 1 applicants (78.3%) and Research administrators (84.0%) found the Stage 1 structured application format easy to work with. However, there was less agreement from Stage 1 applicants (65.4%) and Research administrators (72.0%) that the application format was intuitive (Table 38). More applicants without previous experience (85.9%) agreed that the application form was easy to use compared to those with previous experience (69.5%) (n=566, p<0.0001; Table 39). Additionally, more applicants without previous experience (71.1%) agreed that the application form was intuitive to use compared to those with experience (58.8%) (n=566, p=0.008; Table 39). When asked to compare with their previous experience, 31.5% of Stage 1 applicants agreed that the current experience submitting the structured application format was better than their previous experience with CIHR, 32.6% were neutral, and 17.2% said it was worse (Table 40).

When asked about the Stage 1 character limits, fewer than 70% of Stage 1 applicants agreed that the character limit was adequate to respond to each criterion, with the exception of the 72.3% who agreed on the "Vision and Program Direction" character limit. Specifically, 67.4% agreed on the adequacy of the character limit for "Leadership", 61.4% agreed on "Significance of Contributions", and 69.8% agreed on "Productivity" (Table 41). Additionally, approximately half or fewer of Research administrators felt that the character limit was adequate for "Leadership" (48.0%), "Productivity" (44.0%), and "Significance of Contributions" (40.0%), while 64.0% agreed that there was adequate space for "Vision and Program Direction" (Table 41). Applicants who did not agree on the adequacy of character limits suggested one page each to respond to the "Leadership", "Significance of Contributions", and "Productivity" criteria and two pages to respond to the "Vision and Program Direction" criterion as the ideal limits (Table 42). Additionally, open-ended responses from applicants expressed that the character limits in the structured application form limited their ability to fully express their concepts. Applicants noted that there were times when information needed to be repeated in multiple sections to improve the clarity of their application. In particular, applicants indicated that the "Vision and Program Direction" criterion should either be expanded or removed, as the current character limits did not allow for a meaningful explanation. Suggested improvements to the application form included providing concrete examples, expanding the limits, allowing PDFs in the appendices, and applying character limits to the entire proposal as opposed to specific sections. Applicants also indicated technical difficulties completing the form, specifically around character counts and formatting issues.

Reviewers were also asked for their feedback on the structure of the application format. The majority of Stage 1 reviewers (79.3%) agreed that the structured application format was helpful in their review process (Table 43). The majority (64.0%) also agreed that applicants were able to convey the information required for them to conduct a complete review using the format provided (Table 44). Positive comments from reviewers included that they perceived applicants were more concise and focused on specific criteria thereby preventing the provision of unnecessary information. However, open-ended responses also identified a common theme that the character limits in the form caused significant variation in applicant responses, which led to reviewers having difficulty judging between applications. Reviewers identified that having sections of information interrupted the overall flow, which made it difficult to get a clear sense of what applicants were trying to convey. For example:

"The problem wasn't the structure. Structure is good, and in other competitions where that structure is lacking the referee is always just left to their own devices to try to find the relevant info. The problem was the brevity restricting how much information was able to be included in each section..."

Stage 1 reviewer

Respondents suggested merging certain sections, such as "Significance of Contributions" and "Productivity", thereby allowing applicants to use characters in the areas they felt were most important.

6.2 Stage 2

Overall, the majority of Stage 2 applicants (82.5%) did not have any non-technical problems in completing the structured application form (Table 37). Over half of Stage 2 applicants (59.2%) found the Stage 2 structured application format easy to work with, and 55.8% agreed it was intuitive (Table 38). Unlike for Stage 1 applicants, ease of use and intuitiveness were not associated with previous experience of submitting a Stage 2 application (Table 39). When comparing with their previous experience submitting a CIHR application, 33.7% of Stage 2 applicants found the structured application better, 20.9% were neutral, and 45.4% thought it was worse (Table 40).

When asked about the Stage 2 character limits, Stage 2 applicants indicated that the character limit was adequate to respond to most of the criteria for Stage 2. Specifically, 78.0% agreed that the character limit was appropriate for "Research Concept", 78.9% agreed on "Expertise", 80.5% agreed on "Mentorship and Training", and 78.0% agreed on "Quality of Support Environment". However, only 50.4% of Stage 2 applicants felt that the character limit for "Research Approach" was adequate (Table 45). When asked what the ideal limit should be for "Research Approach", 33.4% of Stage 2 applicants wanted to increase the page limit to 4 pages (Table 46). Open-ended responses also identified a need to increase the space in general and specifically for "Research Approach". Applicants generally felt that they spent a lot of time trimming their content to fit within the character limits, which limited the level of rigor they could present. Other applicants suggested keeping the character limits as they were or revising character limits to apply to the entire application rather than to each specific criterion. Stage 2 applicants were not clear on what reviewers expected for each criterion and worried they would be unfairly judged for not being able to provide the right amount of detail.

Reviewers were also asked for their feedback on the structure of the application format. Similar to the Stage 2 applicant responses, Stage 2 reviewers indicated that the character limit was adequate for most of the Stage 2 criteria. Specifically, 67.8% agreed that the character limit was appropriate for "Research Concept", 73.3% agreed on "Expertise", 73.3% agreed on "Mentorship and Training", and 81.1% agreed on "Quality of Support Environment". However, only 57.5% of Stage 2 reviewers felt that the character limit for "Research Approach" was adequate (Table 45). When asked what the ideal limit should be for "Research Approach", 32.1% of Stage 2 reviewers wanted to increase the page limit to 4 pages (Table 46). Open-ended responses also confirmed this theme of increasing the character limit in general and specifically for "Research Approach". When probed on how the application format affected the review process, the majority of Stage 2 reviewers (70.1%) agreed that the structured application format was helpful in their review process (Table 43). The majority (60.1%) of reviewers agreed that applicants were able to convey the information required for them to conduct a complete review (Table 44), and 58.9% of Stage 2 reviewers agreed that Stage 2 applicants made good use of the character limits (i.e., if an applicant did not include enough detail, it was because they did not include the "right" detail as opposed to not having enough space) (Table 47). A lower proportion of reviewers agreed that established investigators conveyed the required information (51.5%) compared to early career investigators (86.4%) (n=22, p=0.009; Table 48). As in the Stage 1 reviewers' open-ended responses, Stage 2 reviewers identified that the limits were too restrictive to fully understand and assess applicants' proposals and suggested increasing them.

7. Perceptions about the CV

The following section provides an overview of the respondents' experience and feedback on the CV section of the application. Based on feedback received from the research community regarding the 2014 Foundation grant CV, some section limits were modified and new sections were added. Results are organized by Stage 1 and 2 of the application and review process. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 130 Stage 2 applicants, and 90 Stage 2 reviewers; associated total responses can be found in Appendix A (Tables 49-56).

7.1 Stage 1

The instructions for the Foundation CV were found to be clear and easy to follow by 64.8% and 63.9% of Stage 1 applicants, respectively, and by 64.0% and 68.0% of Research administrators (Table 49). However, only 48.8% of Stage 1 applicants and 48.0% of Research administrators indicated that the CCV was easy to work with (Table 50). In the open-ended responses, applicants identified that the CCV could be improved technically, as they found completing it time-consuming and difficult to navigate.

Respondents were asked to comment on the usefulness of the CV: 71.3% of Stage 1 applicants and 75.0% of Research administrators agreed that the Foundation Scheme CV would be useful for reviewers in determining the caliber of the applicant (Table 51). The majority of Stage 1 reviewers (61.4%) agreed that the CV was useful in determining the caliber of the applicant (Table 52). However, open-ended responses from reviewers indicated that they were unclear on the importance of the "Career Contributions" table. Reviewers also requested impact factor calculations and asked that an objective or standardized measure be included in the future. When asked about the "Career Contributions" table, the majority of Stage 1 applicants (64.0%) and Research administrators (76.0%) agreed that the table provided useful information to reviewers (Table 53). The open-ended responses indicated that respondents were concerned about the significance of the "Career Contributions" table and how it would be judged by reviewers. Specifically, research administrators and reviewers were apprehensive about the quantity of publications being valued over the quality or impact factor of a publication. Additionally, respondents found that this table duplicated information already entered in the CCV. Applicants requested the addition of citations for their publications, invited reviews or panels, and the inclusion of other mentorship activities (e.g., undergraduate trainees) in the table.

Each section of the CV was appraised individually for relevance and for the adequacy of its character limit. In general, Stage 1 applicants, Research administrators, and Stage 1 reviewers agreed that each section was relevant and that the character limits were appropriate. The section with the lowest proportion of agreement on relevance was "Leaves of Absence", where 64.2% of Stage 1 applicants agreed on its relevance (Table 54). The section with the lowest proportion of agreement on character limit was "Publications", where 68.5% of Stage 1 applicants agreed on its appropriateness (Table 55). Generally, open-ended responses indicated that applicants found the limits restrictive and suggested increasing them, specifically for the "Publications" section. Respondents also indicated that certain CV sections could be decreased. For example, research administrators and reviewers identified that the limit for "Presentations" could be reduced. Stage 1 applicants identified that the "Employment" section could be reduced and questioned the relevance of this category. Stage 1 reviewers also indicated that the "Membership" section was less relevant to the application and suggested decreasing its limit.

7.2 Stage 2

The majority of Stage 2 applicants (70.6%) found the Foundation Scheme CV instructions clear and 71% agreed they were easy to follow (Table 49). Overall, 63.9% of Stage 2 applicants agreed that the Foundation Scheme CV would be useful for reviewers in determining the caliber of the applicant, with 67.8% and 83.1% agreeing that the "Career Contributions" table and the "Most Significant Contributions" section, respectively, were useful (Tables 53, 56). Open-ended responses identified that respondents thought the "Most Significant Contributions" section was redundant with other aspects of the application such as "Productivity". When asked what additional information could be important for their application, respondents indicated that they would have liked to convey more about their publications; specifically, they suggested the ability to show publication type and authorship order. Respondents also requested a standardized measure such as the H-index to calculate their impact and asked that trainee accomplishments be included. Additionally, Stage 2 reviewers (61.1%) agreed that the information in the CV was useful in determining the caliber of the applicant (Table 52). In their open-ended responses, they suggested improving the layout of the information presented and recommended the National Institutes of Health Biosketch format.

Each section of the CV was appraised individually for relevance and for the adequacy of its character limit. In general, Stage 2 applicants and Stage 2 reviewers thought each section was relevant. The sections with the lowest proportions of agreement were "Leaves of Absence", where 61.2% of Stage 2 applicants agreed it was relevant, and "Membership", where 65.5% of Stage 2 reviewers agreed it was relevant (Table 54). When asked about the character limit of each CV section, Stage 2 reviewers agreed that each section had an appropriate limit. However, a proportion of Stage 2 applicants agreed that certain CV sections had inappropriate character limits: the "Recognitions" section (33.9% agreement), the "Publications" section (42.7%), the "Presentations" section (32.1%), and the "Review and Assessment Activities" section (30.9%) (Table 55). Respondents who did not find a character limit appropriate were asked to suggest limits for each CV section; most sections were suggested to have their limits increased or removed altogether. Respondents also proposed converting the limits into a timeframe, identifying that the "Publications", "Research Funding History", "Presentations", and "Review and Assessment Activities" limits could be converted to the past 10 years. When asked how to improve the CV, applicants and reviewers indicated that the CCV interface and website could be improved, as navigation and usability were challenging. Respondents reported multiple technological issues with the CCV, including a slow interface, website crashes, and confusion over having to perform unnecessary steps.

8. Feedback about the budget

Stage 2 applicants submitted a budget request to support the proposed research program in their application. Reviewers were asked to evaluate whether the requested resources were appropriate to financially support the proposed research program as described in the application. Further, CIHR required that budget requests be consistent with the applicant's previous research funding history, as determined by the budget baseline provided by CIHR. The following section provides an overview of the respondents' experience and feedback on the budget section of the application. The proportions calculated in this section are based on the number of valid responses from 130 Stage 2 applicants and 90 Stage 2 reviewers; associated total responses can be found in Appendix A (Tables 57-59).

Applicants were asked if they were clear on what to include in the budget request. The majority of Stage 2 applicants (75.4%) were clear on what to include in the respective budget categories. Additionally, 67.3% were clear on how to justify their requested funds; however, only 55.8% indicated that they were clear on what to include in the "Past Funding History" section (Table 57). Applicants' open-ended responses confirmed the confusion around the "Past Funding History" section, with applicants unclear on how it was calculated. When applicants reached out for clarification, they indicated that responses were not provided in a timely manner. Respondents suggested providing additional clarity on specific sections; for example, many respondents stated that they were unsure how to categorize expenditures for animals. When asked about the budget format, 63.2% of Stage 2 applicants and 54.8% of Stage 2 reviewers agreed that the character limits in the overall budget were appropriate. Additionally, 64.0% of Stage 2 applicants and 63.1% of Stage 2 reviewers agreed that the character limit to justify the funds requested was appropriate (Table 58). In open-ended responses, applicants also requested an increase in the character limits, as the space provided seemed inadequate.

Stage 2 reviewers were asked to provide feedback on the budget assessment process; in general, reviewers were divided in their responses. Specifically, 47.0% agreed that the process was clear, 49.4% agreed that they were able to effectively assess the budget, 42.4% agreed that they were able to effectively assess budget requests across career stages, 43.5% agreed that applicants provided the relevant information, and 48.8% agreed that applicants provided the necessary information. Additionally, 44.0% of Stage 2 reviewers agreed that applicants provided clear justifications for the appropriateness of the funds requested to support the proposed program of research, compared to 41.7% who disagreed (Table 59). A minority of Stage 2 reviewers (35.7%) agreed that applicants provided acceptable justifications when asking for more than their baseline amount. When asked about the "Other" section in the CIHR baseline budget, the majority of Stage 2 reviewers (70.6%) indicated it was helpful in understanding how the baseline budget was calculated, and 64.7% agreed that the budget categories were appropriate. Reviewers expressed that they had difficulty coming to a consensus on the budget, which led to excessive discussion on this aspect of the application and the consequent neglect of other sections. Reviewers also conveyed that budget requests were for very specific amounts and that they would have preferred to judge a more general breakdown of the budget. Additionally, reviewers mentioned that it was difficult to assess the budget without sufficient information provided by the applicant. A common theme among reviewers was that applicants inflated their budgets without providing appropriate justification. There was confusion over how the budget baseline was calculated and concern about the fairness of this calculation. Furthermore, respondents indicated that applicants needed additional guidance on what information should be included to appropriately address the budget criterion. For example:

"The biggest problem seems to be how to arrive at a suitable budget, given that each applicant might be able to conjure up a massive program but then they have to adhere to their respective baseline. Baselines as calculated are often disputed by the applicants and as an assessor it is very hard to recommend appropriate budgets."

Stage 2 reviewer
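The baseline check reviewers describe above can be made concrete with a small sketch. The report does not describe how CIHR computed or enforced baselines, so everything below — the data structure, the 10% tolerance, and the crude justification-length heuristic — is a hypothetical illustration, not CIHR's method.

```python
# Toy sketch of the baseline comparison described above. The baseline values,
# tolerance, and data structures are hypothetical; the report does not describe
# how CIHR actually computed or enforced baselines.

from dataclasses import dataclass

@dataclass
class BudgetRequest:
    applicant: str
    requested_annual: float   # requested annual amount
    baseline_annual: float    # baseline derived from funding history
    justification: str        # applicant's justification text

def flag_for_discussion(req: BudgetRequest, tolerance: float = 0.10) -> bool:
    """Flag requests that exceed the baseline by more than the tolerance
    without a substantive justification (a crude length heuristic here)."""
    over_baseline = req.requested_annual > req.baseline_annual * (1 + tolerance)
    weak_justification = len(req.justification.strip()) < 200
    return over_baseline and weak_justification

req = BudgetRequest("App-42", requested_annual=450_000,
                    baseline_annual=300_000, justification="New equipment.")
print(flag_for_discussion(req))  # True: well over baseline, thin justification
```

A rule of this kind would only flag requests for discussion; the survey feedback above suggests the real difficulty was the absence of shared criteria for what counts as an acceptable justification.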

9. Feedback on the supporting documents

A number of documents were developed to support individuals involved in the application and review process of the Foundation Grant competition. The following section provides a high-level overview of the respondents' use of and feedback on the supporting documents provided by CIHR. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 22 Stage 1 VCs, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 60-61).

Overall, there was variable use of the supporting documents provided by CIHR (Table 60). Generally, more applicants and research administrators used the documents than reviewers and VCs. The documents used by the fewest respondents across all groups included the Foundation Scheme Role Definitions document, the Foundation Scheme Q&A/Reforms of Open Programs and Peer Review document, the Foundation Scheme CV - Quick Reference Guide, the CCV Frequently Asked Questions document, and the Questions and Answers on the Budget Request document (Table 60). Over 70.0% of respondents who used the supporting documents indicated that they were helpful (Table 61). Generally, respondents expressed in the open-ended responses that it was difficult to consult numerous documents. For example:

"Though all these documents were helpful, it becomes overwhelming to have to consult so many different documents to access the required information."

Research administrator

They suggested that providing an example budget or an example of a successful application would be helpful in future competitions. Additionally, they requested that supplementary information about expectations for new investigators be included in the documents.

10. Feedback on the learning materials

A number of interactive learning lessons were developed to support individuals involved in the application and review process of the Foundation grant competition. The following section provides a high-level overview of the respondents' use of and feedback on the learning materials provided by CIHR. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 22 Stage 1 VCs, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated totals can be found in Appendix A (Tables 62-63).

Generally, there was variable use of the learning lessons provided by CIHR (Table 62), with a greater proportion of virtual chairs attending lessons. The learning materials used by the fewest respondents across all groups included the Interactive Lesson on the Stage 1 application for the Foundation Scheme and the materials specifically created for Stage 2 reviewers. Respondents who did use the learning materials indicated that they were helpful (Table 63). In open-ended responses, respondents suggested that the availability of learning materials could be improved by providing more flexible dates and times to participate. Some respondents reported technical issues accessing the learning materials, including problems with the platform and with logging onto ResearchNet. Respondents also suggested including specific examples of adjudication criteria responses and budgets. For example:

"It would be useful to have real information from reviewers in terms of what they want, what they judged and how they ranked their applications."

Stage 2 applicant

For the webinars, participants indicated a need for more time for the Q&A and discussion sections. Others expressed that it was time-consuming to participate in a discussion when general participation was lacking and the quality of comments unsatisfactory. Respondents indicated that it might not be necessary to attend all of the webinars if the materials were provided earlier. Internal and faculty support sessions provided by respondents' own institutions were found to be helpful and of greater use than the CIHR learning materials; the perception was that these institutional materials were useful because they were concise and contained only essential information.

11. ResearchNet

The following section provides an overview of the respondents' experience with and feedback on ResearchNet, including specific feedback on its usability during the application and review processes. Results in this section are organized by application processes, review processes, and feedback on support. The proportions calculated in this section are based on the number of valid responses from 591 Stage 1 applicants, 31 Research administrators, 249 Stage 1 reviewers, 22 Stage 1 VCs, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 64-71).

11.1 Application process

Approximately half of Stage 1 applicants (54.5%) and all Research administrators used a Windows computer system to access ResearchNet (Table 65). The majority of Research administrators (88.0%) used a test account in ResearchNet (Table 66); 96.0% indicated that it was helpful and 96.0% indicated that they would like to have access to a test account for all of CIHR's open programs (Table 67). Research administrators also requested access to their institutions' applications in real time, or before submission, to improve efficiency and to help applicants earlier in the process.

Feedback on the general usability of ResearchNet was positive: 84.8% of Stage 1 applicants and 88.0% of Research administrators found it easy to use. The majority of Stage 1 applicants (86.0%) and Research administrators (88.0%) agreed that they were able to enter their application information in ResearchNet without any difficulty. Overall, 90.0% of Stage 1 applicants and 88.0% of Research administrators were able to submit their application efficiently using ResearchNet (Table 68). Open-ended comments were similarly positive, with respondents finding ResearchNet intuitive and appreciating its usability compared to the CCV system. However, respondents were also asked whether they experienced difficulty with certain ResearchNet tasks. Responses highlighted frustrations with cutting and pasting information into the sections, as the character counts were inaccurate. Applicants also expressed a wish to open multiple tabs at once and to have the option to preview pages to identify the sections relevant to their stage; for example, the visibility of the budget header in Stage 1 was highlighted as causing confusion.

11.2 Review process

Overall, over 85% of reviewers and VCs agreed that ResearchNet was easy to use for the review process; 82.0% of Stage 1 reviewers and 84.4% of Stage 2 reviewers indicated that the structured review process in ResearchNet was user-friendly, and all FAS reviewers (100%) indicated that the binning process in ResearchNet was user-friendly. Overall, 85.2% of Stage 1 reviewers, 89.2% of Stage 2 reviewers, and 100% of FAS reviewers were able to efficiently review the applications using ResearchNet. Additionally, 85.6% of Stage 1 and 80.0% of Stage 2 VCs were able to effectively fulfill their role using ResearchNet (Table 70). Comments on ResearchNet were generally positive. When asked about their challenges, reviewers indicated in open-ended responses that they experienced issues submitting and then editing their adjudication scores and were unclear whether submitted scores were final or could be edited further. Additionally, they conveyed that they could not view the final ranking. Generally, reviewers suggested improving the navigation of the online discussion tool and the instructions on how to respond to discussion threads. Virtual chairs requested direct access to reviewers in order to compel them to participate and complete their rankings. VCs indicated that they had to perform extensive navigation to access the online discussion threads and had difficulty opening multiple windows at once. Feedback also highlighted the session timeouts, which were too brief to complete all reviewing activities; many respondents lost their work and spent extra time re-entering information.

11.3 Feedback on ResearchNet support

Feedback on the support service indicated general satisfaction with its timeliness, with over 70% of Stage 1 applicants, Research administrators, Stage 2 reviewers, Stage 2 VCs, and FAS reviewers satisfied. However, a lower proportion of Stage 2 applicants (59.4%) were satisfied with the timeliness of the support service for ResearchNet (Table 71). Similarly, there was general satisfaction with the helpfulness of the support service, with over 70% of the same groups satisfied; again, a lower proportion of Stage 2 applicants (61.9%) were satisfied with its helpfulness (Table 71).

12. Perceptions on the review format

The following section provides an overview of the reviewers' experience with and feedback on the format of the review worksheet. One of the design elements of the Foundation Scheme is the structured review, intended to focus reviewer feedback on the specific adjudication criteria. Stage 1 reviewers were not asked to provide feedback on the format of the review worksheet; responses in this section therefore come from Stage 2 reviewers and FAS reviewers. Stage 2 reviewers were limited to half a page to provide their comments on the strengths and weaknesses of each adjudication criterion: "Research Concept", "Research Approach", "Expertise", "Mentorship and Training", and "Quality of Support Environment". The proportions calculated in this section are based on the number of valid responses from 380 Stage 1 applicants after decision, 90 Stage 2 reviewers, 7 FAS reviewers, and 93 Stage 2 applicants after decision; associated total responses can be found in Appendix A (Tables 72-73).

Overall, over 80.0% of Stage 2 reviewers and all FAS reviewers felt that the character limit in the structured review worksheet was adequate to respond to each adjudication criterion (Table 72). Among those who did not find it adequate, the largest proportions indicated that the ideal limit for "Research Concept" was one page (42.9%), for "Research Approach" two pages (41.7%), for "Expertise" one page (100%), for "Mentorship and Training" zero pages (50.0%) or half a page (50.0%), and for "Quality of Support Environment" zero pages (66.7%) (Table 73). Reviewers expressed in open-ended responses that the review form was too restrictive in the feedback they could provide. They indicated that the space was not sufficient for detailed comments on some sections ("Research Approach", "Research Concept"), while other sections did not require that much space ("Quality of Support Environment"). Reviewers would have preferred an overall limit for comments in general rather than per-section limits, and did not like having to provide strengths and weaknesses in separate boxes.
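As a minimal sketch of the design choice reviewers debated — per-criterion limits versus one pooled overall limit — the following hypothetical Python snippet validates a set of comments under both policies. The section names and limit values are illustrative assumptions, not CIHR's actual configuration.

```python
# Hypothetical sketch contrasting the two limit policies discussed above:
# per-criterion character limits versus a single overall limit. Section names
# and limit values are illustrative assumptions, not CIHR's configuration.

# Roughly half a page per criterion, expressed here as a character count.
PER_SECTION_LIMIT = 1500
OVERALL_LIMIT = 7500  # five sections' worth, pooled into one budget

def check_per_section(comments: dict[str, str]) -> list[str]:
    """Return the sections whose comments exceed the per-section limit."""
    return [name for name, text in comments.items() if len(text) > PER_SECTION_LIMIT]

def check_overall(comments: dict[str, str]) -> bool:
    """Return True if the pooled comments fit within one overall limit."""
    return sum(len(text) for text in comments.values()) <= OVERALL_LIMIT

review = {
    "Research Concept": "x" * 1200,
    "Research Approach": "x" * 2400,  # over a per-section cap...
    "Quality of Support Environment": "x" * 300,
}
print(check_per_section(review))  # ['Research Approach'] fails per-section
print(check_overall(review))      # True: the same text fits a pooled limit
```

The contrast illustrates the reviewers' point: a pooled limit lets space migrate from sections needing little comment ("Quality of Support Environment") to those needing more ("Research Approach").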

13. Overall satisfaction with the review process

The following section provides an overview of the respondents' experience and feedback with the review process. Results in this section are organized by Stage 1, Stage 2, and FAS. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 380 Stage 1 applicants after decision, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, 7 FAS reviewers, and 93 Stage 2 applicants after decision; associated totals can be found in Appendix A (Tables 74-81).

13.1 Stage 1

Overall, Stage 1 reviewers were divided in how satisfied they were with the review process: 46.4% were satisfied and 41.8% were dissatisfied. Reviewers indicated in open-ended responses that they were concerned with the adjudication criteria and restrictions, as applicants did not always provide enough relevant information to review. Similarly, 45.4% of Stage 1 VCs responded that they were satisfied compared to 54.5% who were dissatisfied with the review process (Table 74). VCs expressed that the review format was a good idea; however, it was poorly executed because submitted reviews were generally of low quality. VCs noted large variability in scores, potentially due to a lack of reviewer training or problems with how the adjudication scale was used. For example:

"There is a major disconnect between CIHR instructions to reviewers and the adjudication scale provided by CIHR. CIHR instructs reviewers that they have to use the entire letter scale for their applications. In other words, this means to use the letter scale in a relative manner…However, CIHR provides a table that assigns specific meanings to the various letters of the letter grade scale…Thus, the letter scale should be used in an objective manner. It is not fair to an outstanding applicant to receive a letter grade of G just because his/her application happened to be evaluated with 10 other even more outstanding applications."

Stage 1 VC

Over half of Stage 1 reviewers (59.0%) agreed that the structured review process made it easier to review, and 39.6% agreed it was a better way to provide feedback to applicants (Table 75). In open-ended responses, some reviewers commented positively on being able to focus their review using the structured format; however, others indicated that they were unclear on how to judge the sections using the adjudication scale. Moreover, reviewers expressed a need to enhance the engagement of reviewers and chairs, as feedback was not consistently provided. For example:

"In reading other reviews, the feedback going to applicants in where their application fell short seems really poor. Most people put very little in the "weaknesses" category and what is put is fairly non-committal. It was easier to provide feedback in operating grants about the Aims but it is much harder to say the truth on why they received a lower score on the Foundation criteria because it is a lot more personal. I think most reviewers are very uncomfortable having to say, "you did not publish a high impact paper in the past 7 years and that is a weakness" but that is exactly why the applicant scored lower whether it was said or not."

Stage 1 reviewer

Stage 1 applicants after decision were asked to comment on the Stage 1 review process. The majority (74.4%) indicated that they were not satisfied with the Stage 1 review process, 61.6% did not think the review process was fair and transparent, and 74.4% were not confident in the process (Table 77). Satisfaction with the adjudication process was associated with career stage: 35.5% of early career investigators were satisfied compared to 9.9% of mid-career and 16.9% of senior investigators (n=349, p<0.0001; Table 78). Stage 1 success was also associated with satisfaction with the Stage 1 adjudication: 41.7% of successful applicants were satisfied compared to 9.8% of unsuccessful applicants (n=351, p<0.0001; Table 78). Additionally, 55.2% of Stage 1 applicants after decision saw value in the structured review process compared to 35.5% who did not (Table 79). Open-ended responses from applicants indicated that they felt the review system lacked crucial discussions between reviewers. Without face-to-face discussion, applicants indicated, reviewers could not properly discuss any discrepancies between rankings and reviews. Moreover, applicants felt that reviewers did not follow the process, had a lower sense of accountability, and could provide minimal feedback without justification. A second theme was that some reviewers were perceived to lack the expertise or knowledge in the applicant's area of research to properly review the application. For example:

"I think there is limited value in the new peer review process relative to the old process. The lack of standardization of scoring, limited interactions between reviewers, lack of training of reviewers, and lack of expertise among reviewers is concerning (and several of these combine to confound the problems)."

Stage 1 applicant

The majority of Stage 1 applicants after decision (78.3%) did not contact CIHR with questions on the adjudication process (Table 79). Responses from those who did contact CIHR were variable: 45.7% were satisfied and 36.6% dissatisfied with the completeness, consistency, and accuracy of CIHR's response, and 50.7% were satisfied and 31.4% dissatisfied with its timeliness. In contrast, the majority (73.0%) were satisfied with the courtesy of CIHR staff (Table 81). When asked why they contacted CIHR, respondents indicated in open-ended responses that they sought clarification on details of the application or review process. Respondents also mentioned contacting CIHR to express concern regarding the feedback received and to share their opinions on the new application and review changes.
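Several findings in this section pair proportions with sample sizes and p-values (e.g., n=349, p<0.0001 for the career-stage comparison above). The report does not name the statistical test used; assuming a conventional chi-square test of independence on the underlying contingency table, a sketch of the computation looks like the following. The cell counts are invented placeholders, not the study's data.

```python
# Hypothetical illustration of testing an association like those reported above
# (e.g., satisfaction by Stage 1 success). The counts below are invented for
# demonstration and are NOT the study's data; the report does not name its test.

from scipy.stats import chi2_contingency

# Rows: successful / unsuccessful applicants; columns: satisfied / not satisfied.
table = [
    [50, 70],   # hypothetical successful applicants
    [23, 208],  # hypothetical unsuccessful applicants
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
```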

13.2 Stage 2

Overall, Stage 2 reviewers were divided in how satisfied they were with the review process, with 51.6% satisfied and 44.8% dissatisfied. Reviewers commented that there was a need for reviewer accountability and transparency in the process. They suggested reinstating a face-to-face or teleconference component and increasing reviewer training. Feedback indicated that the online discussions were not successful and that adjudication scales were used inconsistently. Similarly, Stage 2 VCs were divided, with 45.5% satisfied and 45.5% dissatisfied with the review process (Table 74), and suggested a face-to-face component to increase reviewer engagement. When asked how easy and useful the structured review process was, 55.0% of Stage 2 reviewers agreed that it made it easier to review, 65.5% agreed it was a useful way to provide feedback, and 63.2% agreed it was intuitive. However, 31.4% agreed that the structured review process was a better way to provide feedback to applicants compared to 55.2% who disagreed (Table 75). Generally, the feedback provided was viewed as incomplete or too brief to be useful for applicants wishing to improve their proposals.

Stage 2 applicants after decision were asked for their feedback on the Stage 2 review process; 55.1% indicated they were dissatisfied with it (Table 77). Furthermore, 58.3% did not think it was fair, and the majority (61.8%) were not confident in the review process (Table 77). Feedback in open-ended responses included the perception that reviewers were not knowledgeable in the field and were an inappropriate match to the application. Applicants indicated that the feedback they received was brief and did not align with the ratings or rankings received. Moreover, applicants felt there were discrepancies between reviewers, as their rankings and ratings showed large standard deviations. For example:

"I think that this system is all but transparent. Reviewers can decide to rank independently from the ratings or comments they provide. There is no incentive for reviewers to provide high-quality reviews."

Stage 2 applicant after decision

Applicants suggested that poor-quality reviews be removed from the assessment or that a face-to-face meeting be held to discuss discrepancies. Applicants also suggested improving the transparency and clarity of the rating system so that they could understand why discrepancies may have occurred. For example:

"Provide full transparency. Applicants should receive, for every reviewer, the scores and the ranks (including denominator) and then a clear mathematical explanation for how grants were identified for funding or discussion based on the consolidated rankings."

Stage 2 applicant after decision

Overall, applicants did not support the alphabetic rating system, as they did not fully understand the scale or its use. They suggested converting it to a numeric scale for better clarity, as the current scale was hard to understand. Regarding the usefulness of the reviewers' comments, some applicants expressed that the comments were not detailed enough and were subjective. The discrepancies between ratings and rankings also made applicants feel that the comments they received were not usable and that they had no clear indication of what to improve in their application (e.g., receiving positive comments but negative ratings).

When asked about the value of the review process, 56.1% agreed they saw value in the structured review process compared to 35.2% who did not. Perceived value was associated with Stage 2 success: 73.5% of those who were successful in Stage 2 saw value compared to 35.7% of those who were not (n=91, p=0.001; Table 76). When asked to comment on the value of the new review process, applicants felt that it had decreased because of the lack of quality reviewers and the lack of transparency in the rating and ranking processes. For example:

"I do not object to a structured review per se; this does not, however obviate the need for reviewers who are knowledgeable in the field or the value of discussion around a table of peers…No such accountability is present in the virtual system: this is demonstrated by the inconsistency of reviews, their brevity and lack of justification provided by some reviewers who did not respond to their virtual chairs."

Stage 2 applicant after decision

When asked if they contacted CIHR about the review process, the majority of Stage 2 applicants after decision (87.8%) had not (Table 80). Those applicants who did contact CIHR were divided in their satisfaction with the completeness, consistency, and accuracy of CIHR's response (50.0% satisfied, 50.0% dissatisfied). However, the majority were satisfied with the timeliness of the response (70.0%) and with the courtesy of CIHR staff (90.0%) (Table 81).

13.3 Final assessment stage

Overall, FAS reviewers (100%) agreed that they were satisfied with their segment of the review process. However, only 28.1% of Stage 2 applicants after decision were satisfied with the FAS review process (Table 74). Satisfaction with the FAS was associated with Stage 2 success: 66.0% of those who were successful in Stage 2 were satisfied compared to 7.3% of those who were not (n=88, p=0.001; Table 78). When asked for feedback on the review process, applicants expressed in open-ended responses that they either did not receive any feedback from reviewers or that there was a wide standard deviation in their rankings. The standard deviation concerned applicants, as they felt that reviewers were not clear on, or trained in, the rating system. Applicants were unclear how such discrepancies between reviewers were possible if the system was fair. Moreover, some applicants received high rankings but still did not move on to the next step, contributing to confusion about the rating system. For example:

"There is confusion between ratings and rankings. For my application, there seemed no association between the two, and no explanation how very positive ratings and comments resulted in very low rankings. No transparency and consequently accountability in this process of allocating millions of dollars of research funding."

Stage 2 applicant after decision

Applicants provided some suggestions for improvement in open-ended responses, including taking more care to match reviewer expertise with applications, as applicants felt there were issues with having knowledgeable reviewers comment on their application. For example:

"Applicants need to have confidence that the reviewers have the expertise to review their grants -- currently that is not the case."

Stage 2 applicant after decision

Additionally, an added layer of transparency regarding who reviewed their application would have been appreciated. Some applicants expressed that they did not know which ratings/reviews belonged to which reviewer and, therefore, were unsure how to interpret their results. Other suggestions included a hybrid online and face-to-face process, as applicants expressed that discrepancies were more prominent with the new review process and that it lowered reviewer accountability and quality, causing discrepant scores and inadequate feedback.

14. Reviewers' experience with the rating and ranking process

Reviewers were asked to rate each adjudication criterion for each application they were assigned. A list of applications, ranked from highest to lowest rated, was then generated based on those ratings. Reviewers were responsible for validating the generated rank list and moving applications up or down the list as appropriate. The following section provides an overview of the reviewers' experience with and feedback on the rating and ranking process and is organized by Stage 1 and 2. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers and 90 Stage 2 reviewers; associated total responses can be found in Appendix A (Tables 82-86).
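As a rough illustration of the rating-to-ranking step described above, the sketch below converts one reviewer's letter ratings to numbers, averages them, sorts the applications, and flags ties for the reviewer to break manually. The letter-to-number mapping and the averaging rule are hypothetical; the report does not specify how CIHR's system generated the rank list.

```python
# Minimal, hypothetical sketch of the rating-to-ranking step described above.
# The letter-to-number mapping, averaging rule, and data are illustrative only
# and do not reflect CIHR's actual adjudication scale or algorithm.

from itertools import groupby

# Hypothetical mapping of an alphabetic adjudication scale to numeric values
# (higher is better); the real scale and its interpretation are not given here.
LETTER_SCORES = {letter: score for score, letter in enumerate("EDCBA", start=1)}

def overall_score(ratings):
    """Average the numeric equivalents of a reviewer's letter ratings."""
    return sum(LETTER_SCORES[r] for r in ratings) / len(ratings)

def preliminary_rank_list(applications):
    """Rank applications from highest to lowest overall score and report ties."""
    scored = sorted(
        ((app_id, overall_score(ratings)) for app_id, ratings in applications.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    groups = [
        [app_id for app_id, _ in group]
        for _, group in groupby(scored, key=lambda pair: pair[1])
    ]
    ties = [g for g in groups if len(g) > 1]  # for the reviewer to break manually
    return scored, ties

# One reviewer's hypothetical ratings across several adjudication criteria.
apps = {"App-1": "AABB", "App-2": "ABBA", "App-3": "BBCC"}
rank_list, tied_groups = preliminary_rank_list(apps)
print(rank_list)     # App-1 and App-2 share the same average score...
print(tied_groups)   # ...so they surface as a tie to be broken by the reviewer.
```

Even this toy version shows why respondents reported a domino effect: with a coarse scale, adjusting one rating to break a tie can easily create a new tie elsewhere in the list.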

14.1 Stage 1

Overall, Stage 1 reviewers (83.0%) agreed that the ratings produced a rank list from best to worst (Table 82), and 79.8% indicated they needed to break ties between applications (Table 83). However, 45.9% of Stage 1 reviewers had difficulties rating or ranking applications (Table 84). Generally, Stage 1 reviewers broke one to two ties; however, they also indicated that this caused a domino effect, where breaking one tie then caused another. The main challenges with breaking ties were confusion about its purpose, difficulty differentiating between top applications, and uncertainty about how consistently other reviewers were rating and ranking. For example:

"Perhaps more explanation up front that the reviewer would have to break ties to rank order would have been helpful up front. I thought that would be the role of the voting but my chair had to ask me to break my ties as I had not done so."

Stage 1 reviewer

Some expressed concern that the process was unfair to early career investigators at this stage, mainly because of the assessment of the "Leadership" criterion. They suggested a need to normalize scores and indicated that they spent a lot of time changing ratings, under the assumption that rank scores were the only important feature.

14.2 Stage 2

Overall, Stage 2 reviewers (86.1%) agreed that the ratings produced a rank list from best to worst (Table 82). The majority (71%) agreed that the ranking process was intuitive, and 77.9% said that it was appropriate to adjust the ranking before submission (Table 85). The majority (80.2%) agreed that the ratings selected for each adjudication criterion aligned with the comments they provided in each section, 75.6% agreed that it was helpful to have added granularity at the top of the rating scale in order to indicate differences between highly competitive applications, and 67.4% indicated that rating each adjudication criterion was a useful tool that helped them rank their applications. However, only 47.7% responded that they were able to effectively rate applications across career stages, and 54.7% that they were able to effectively rank applications across career stages (Table 85). In open-ended responses, Stage 2 reviewers expressed a common challenge: there were only slight differences between applications at the top of the adjudication scale, making it difficult to keep ratings and rankings aligned when ratings changed. Reviewers were concerned that changes to the ratings made large differences in the rankings, and admitted that they adjusted ratings in order to rank their set of applications according to their perceived strengths. Reviewers also experienced difficulties assessing grants from different fields of research (e.g., basic research versus health systems) and ranking applications from early career investigators. When asked about the process of breaking ties, 31.4% indicated they broke ties between applications (Table 83). On average, Stage 2 reviewers broke one to two ties; 74.5% were clear on the purpose of breaking ties, 68.6% were clear on the process, and 65.2% agreed that the process was easy (Table 86). Those who disagreed indicated in open-ended responses that they did not want to be forced to break ties and would prefer another process for dealing with large discrepancies.

15. Experience reading the reviewers' reviews

Reviewers were allowed to read other reviewers' preliminary reviews, giving them the opportunity to calibrate their own reviews and to identify any discrepancies in the absence of a face-to-face committee meeting. The following section provides an overview of the reviewers' and VCs' experience with reading others' reviews. This section is organized by Stage 1, Stage 2, and FAS. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 87-94).

15.1 Stage 1

Stage 1 reviewers (89.0%) and Stage 1 VCs (100%) responded that they had read the preliminary reviews of other reviewers (Table 87). Stage 1 VCs (81.8%) also indicated that they had read the applications assigned to their reviewers (Table 88). The main reasons for reading others' reviews were to identify discrepancies between reviewer ratings (89.2%) and to help prepare for the online discussion (80.4%) (Table 89). Reading others' reviews influenced the assessment of at least one application for 68.8% of Stage 1 reviewers (Table 90). Among Stage 1 reviewers who did not read others' preliminary reviews, the most common reason was not wishing to be influenced by other reviewers (41.7%) (Table 91). Some reviewers also expressed that they did not read reviews unless obvious discrepancies were present.

15.2 Stage 2

Stage 2 reviewers (92%) and Stage 2 VCs (100%) responded that they had read the preliminary reviews of other reviewers (Table 87). Stage 2 VCs (90.9%) also indicated that they had read the applications assigned to their reviewers (Table 88). The reviewers' main reasons for reading the reviews were to identify discrepancies between reviewer ratings (87%) and to help prepare for the online discussion (79.2%) (Table 89). In the open-ended responses, respondents expressed the belief that it was their responsibility to read every review; doing so helped to integrate expert opinions, understand discrepancies, and stimulate discussion. Respondents found that reviews were not helpful when they were not submitted on time or were too brief and poorly written to be constructive. Respondents also viewed this as a valuable process for determining whether others had a similar understanding of the application and its strengths and weaknesses, and as important preparation for discussion, especially when reviewing an application from a field in which they had limited knowledge. Stage 2 reviewers who did not read others' preliminary reviews chose not to because they did not wish to be influenced by other reviewers (100%) (Table 91). However, reading others' reviews was helpful to Stage 2 reviewers (88.1%) and influenced the review of at least one application for 75% of Stage 2 reviewers (Table 90).

15.3 Final assessment stage

FAS reviewers (100%) indicated that they had read the comments of other FAS reviewers (Table 87). Most FAS reviewers (86.0%) also consulted the grant application in addition to the Stage 2 reviews (Table 88). The majority (83.3%) agreed that reading the grant application and reviews was useful for the FAS, and 66.9% agreed that it was necessary to read both in order to properly complete their stage of the review process (Table 87). Most FAS reviewers (71.0%) indicated that the total time spent reading others' reviews was between 1-2 hours (Table 92). Reading other reviewers' comments or binning decisions affected the decisions of 57.1% of FAS reviewers (Table 93), and this occurred "Often" or "Occasionally" (Table 94). Overall, they agreed (85.7%) that the comments provided by other FAS reviewers were helpful in preparing for the face-to-face meeting.

16. Assessment of review quality

The following section provides an overview of the respondents' experience with and feedback on the quality of reviews. High-quality reviews should clearly describe strengths and weaknesses, include constructive and respectful justifications for each rating given, and inspire confidence in the reviewer's ability to fairly assess the application. Results in this section are organized by Stage 1, 2, and FAS. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 380 Stage 1 applicants after decision, 130 Stage 2 applicants, 90 Stage 2 reviewers, 11 Stage 2 VCs, 7 FAS reviewers, and 93 Stage 2 applicants after decision; associated total responses can be found in Appendix A (Tables 95-102).

16.1 Stage 1

Half of Stage 1 reviewers (50.8%) and 72.7% of Stage 1 VCs indicated that there were issues with the quality of the reviews (Table 95). On average, Stage 1 VCs indicated that 21.0% of the reviews were of unsatisfactory quality. The most common issues identified by Stage 1 VCs were comments with insufficient detail to support the ratings (68.2%) and misalignment of ratings and comments (59.1%) (Table 96). Additionally, 56.3% of Stage 1 VCs indicated that the flagged issues of review quality were not properly addressed by the reviewers (Table 95). When Stage 1 reviewers were asked to provide feedback on preliminary review quality, over 90.0% agreed that the reviews they read did not disclose personal information about the reviewer and that they were respectful and professional. A smaller proportion of Stage 1 reviewers agreed that the written justifications aligned with the respective ratings (63.0%), that comments aligned with specific adjudication criteria (65.2%), that comments were clear and concise (68.8%), that comments were non-biased and accurate (62.5%), that comments concerning the applicant, research institution, or research field were appropriate (72.4%), and that comments referred to information obtainable from the application/CCV (75.5%). However, only 47.9% agreed that ratings made by other reviewers were sufficiently supported by detailed comments (Table 97). In the open-ended responses, reviewers and virtual chairs indicated a great deal of variability in reviewer rankings, score justification, and comments. Respondents also suggested that the review quality was unsatisfactory because of reduced reviewer accountability: without the face-to-face component, reviewers did not feel pressured to provide a detailed or justified response, and respondents proposed reinstating this feature. For example:

"While I like this process it is concerning to me that some reviewers post reviews (that are lacking in substance ) and they do not participate in the online review process to justify their ratings when there are noted discrepancies despite my directly asking them for input. Is there a way that an email could be sent to those reviewers to ensure they log in to review?"

Stage 1 reviewer

After Stage 1 judgments were made, Stage 1 applicants after decision were asked about the quality of the reviews received. The majority (70.0%) indicated they were dissatisfied with the overall quality (Table 98). Satisfaction with the quality of reviews was associated with Stage 1 success: 51.7% of applicants who were accepted into Stage 2 were satisfied compared to 11.3% of those who were not successful (n=363, p<0.0001; Table 100). When asked for details on the quality of reviews, 35.8% agreed that the written justifications aligned with the respective ratings, 25.9% agreed that ratings were sufficiently supported by detailed comments, 33.3% agreed that comments aligned with specific adjudication criteria, 46.7% agreed that comments were clear and concise, 25.8% agreed that comments were non-biased and accurate, 30.3% agreed that comments concerning the applicant, research institution, or research field were appropriate, 48.9% agreed that comments referred to information obtainable from the application/CCV, 54.4% agreed that comments were respectful and professional, and 24.4% agreed that the reviews provided information that would be useful for their research and/or for refining their application for a future competition (Table 97). However, 89.1% did agree that the comments did not disclose personal information about the reviewer. Overall, Stage 1 applicants after decision were dissatisfied with the consistency (74.4%) and the quality (63.6%) of the peer review judgments (Table 99). Open-ended responses from Stage 1 applicants after decision identified a concern that reviewers may not have been content experts; consequently, applicants perceived that the reviews contained factual errors. Respondents expressed that a portion of reviews were too short, that comments were not useful to applicants, that some reviewers lacked expertise, and that some may not have thoroughly read the application. For example:

"The review process clearly did not include people with expertise in my field, one review stated this explicitly, another review applied productivity criteria of their field to my own field."

Stage 1 applicant after decision

When asked to identify criteria important in determining review quality, over 70.0% of Stage 1 reviewers and VCs agreed that the alignment of ratings and comments, appropriate justification for identified strengths/weaknesses, accurate and relevant comments, the alignment of comments with specific adjudication criteria, constructive comments to help applicants improve their research and/or a potential future application, clear and comprehensible comments, and respectful and professional language were important criteria (Table 101). In contrast, less than 70.0% of Stage 1 applicants after decision found these criteria important in determining review quality. Specifically, 52.2% agreed that the alignment of ratings and comments was important, 51.5% agreed on appropriate justification for identified strengths/weaknesses, 51.7% agreed on the alignment of comments with specific adjudication criteria, 49.4% agreed on accurate and relevant comments, 45.7% agreed on constructive comments, and 64.4% agreed on the importance of clear and comprehensible comments. The exception was respectful and professional language, which 74.6% of Stage 1 applicants after decision endorsed as important to review quality (Table 101). Respondents suggested in open-ended responses that additional criteria for determining the quality of a review should include constructive feedback provided by reviewers who are content experts. Respondents recommended mandating participation and the provision of comments, including a detailed justification for ratings. In addition, respondents advocated for an improved method of matching reviewer expertise with applications, a greater level of objectivity, and a reliable way to resolve reviewer discrepancies to ensure inter-rater reliability. Face-to-face meetings were seen as critical to keeping the quality of reviews high. For example:

"The consistency of reviews and ratings was very broad. If there was a roundtable discussion amongst the reviewers, this could have been greatly improved in my opinion."

Stage 1 applicant after decision

16.2 Stage 2

Similar to Stage 1 responses, 55.8% of Stage 2 reviewers and 72.7% of virtual chairs indicated that there were issues with the quality of reviews (Table 95). On average, Stage 2 reviewers and Stage 2 VCs indicated that approximately 25.0% and 21.0% of reviews, respectively, were of unsatisfactory quality. Generally, over 70.0% of Stage 2 reviewers and VCs agreed that the Stage 2 reviews had sufficiently justified strengths and weaknesses, had comments focused on the adjudication criteria, were free of factual errors, provided clear comments, provided respectful comments, had an appropriate balance of strengths and weaknesses to support ratings, were free of inappropriate references to the applicant(s), the research institution(s), or the research field, and did not disclose personal reviewer information (Table 97). After the final assessment judgments were made, Stage 2 applicants after decision were asked about the quality of the Stage 2 reviews received. When asked about their reviews, 41.8% agreed that the reviews they received were consistent in that the written justifications (strengths and weaknesses) aligned with the respective ratings, and 25.4% indicated that the reviews provided information that would be useful in refining their application for a future competition (Table 99). When asked to identify criteria important in determining review quality, over 70.0% of Stage 2 reviewers, VCs, and Stage 2 applicants after decision generally agreed that sufficiently justified strengths and weaknesses, an appropriate balance of strengths and weaknesses to support ratings, an absence of factual errors, clear comments, respectful comments, and an absence of inappropriate references to the applicant(s), the research institution(s), or the research field were important. In contrast, 66.7% of Stage 2 applicants after decision agreed that not disclosing personal reviewer information was important, and 63.7% of Stage 2 VCs agreed that having comments focused on the adjudication criteria was important (Table 101).

16.3 Final assessment stage

A majority (85.7%) of FAS reviewers indicated that there were issues with the quality of the reviews provided (Table 95). A small proportion (14.3%) agreed that Stage 2 reviewers provided clear feedback to support their ratings, and 14.3% agreed that they provided sufficient feedback to support their ratings (Table 102). As an example:

"Some reviews were excellent but others were very brief and therefore made it hard to understand the justification for their review. Also some reviews did not match the scores given."

FAS reviewer

After the final assessment judgments were made, Stage 2 applicants after decision were asked about the quality of the reviews received. Overall, 36.0% of Stage 2 applicants after decision were dissatisfied with the overall quality of the reviews received (Table 98). Satisfaction with the quality of reviews was associated with Stage 2 success: 59.5% of applicants who were funded were satisfied compared to 12.5% of those who were not successful (n=61, p<0.001; Table 100). When asked to provide specific feedback, less than 70.0% of Stage 2 applicants after decision agreed that the reviews included sufficiently justified strengths and weaknesses (47.7%), an appropriate balance of strengths and weaknesses to support ratings (45.2%), comments focused on the adjudication criteria (48.8%), an absence of factual errors (42.9%), clear comments (52.4%), respectful comments (61.9%), or an absence of inappropriate references to the applicant(s), the research institution(s), or the research field (63.1%). However, 71.5% agreed that reviews did not disclose personal reviewer information, indicating this was not a major issue in the quality of the reviews received (Table 97). Overall, Stage 2 applicants after decision were split on the quality of peer review judgments (41.5% satisfied, 52.8% dissatisfied) (Table 98). General feedback indicated inconsistencies between ratings and rankings, as most applicants received very high ratings with no justification. Applicants also identified that reviewers made errors in their comments and therefore suspected that the reviewers were not adequately knowledgeable in the field to review.

17. Experience with the online discussions

The purpose of the online discussion was to give reviewers the opportunity to calibrate their reviews and to discuss any discrepancies between their ideas and the reviews of others in the absence of a face-to-face committee meeting. The following section provides an overview of the reviewers' and VCs' experience with and feedback on the online discussion tool and on participating in the online discussions. Results in this section are organized by Stage 1 and Stage 2. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 130 Stage 2 applicants, 90 Stage 2 reviewers, and 11 Stage 2 VCs; associated total responses can be found in Appendix A (Tables 103-109).

17.1 Stage 1

Overall, Stage 1 reviewers read online discussion posts (98.1%) and participated in an online discussion (95.3%) (Table 103). On average, they read the online discussion posts for eight applications and participated in an online discussion for six. The most common reasons for participating were a scoring discrepancy between themselves and another reviewer (63.5%) and prompting by the VC (57.0%) (Table 104). Reviewers identified in open-ended responses that they participated in the discussion because they believed it was their responsibility. Reviewers also participated in online discussions to gain insight and clarification from others, especially content experts. However, participation was only viewed as helpful when reviewers engaged in in-depth discussion, which was not consistently reported. For example:

"It is absolutely critical that we have extensive discussions, especially when reviewers are not in agreement. The virtual chairs have a big responsibility to make sure people explain themselves. I was happy for the most part but noticed some reviewers who were too silent. If they do not participate they should be excused from the college of reviewers, in my opinion."

Stage 1 reviewer

Reviewers valued the ability to discuss in order to gain insight from others, encourage discussion, and obtain budget clarifications. When asked about the appropriate times and frequency of discussions, respondents stated that online discussions should take place among all reviewers, and for all applications, to help address discrepancies in scores, comments, and ratings. The most common reasons for not participating were a lack of time (50.0%) and a feeling that their participation was not warranted (50.0%) (Table 105). Those who did not participate also found the time period inadequate or did not know how to access the function. The majority of Stage 1 reviewers (70.0%) agreed that participating in the online discussion was helpful in the review process, 69.7% agreed that it influenced their assessment of the application, 79.0% modified at least one review, and 58.6% agreed that the comments in the online discussion were considered by other reviewers (Table 106). Similarly, 73.0% of Stage 1 VCs indicated that the online discussion functionality was helpful (Table 107). Overall, respondents indicated in open-ended responses that face-to-face review was the most effective method of facilitated discussion. Respondents who found the online discussion unhelpful attributed this to a lack of participation by reviewers and a lack of engagement by VCs; they suggested mandating that reviewers participate in and contribute to the process. When VCs were asked for their feedback, 55.0% responded that their reviewers were actively participating in online discussions (Table 108). On average, Stage 1 VCs initiated 14 discussions, and approximately 39.0% of their reviewers required prompting to participate in an online discussion. Chairs identified the online discussion as an important part of the review process but did not find the online portion user-friendly; they struggled with the lack of interactivity and real-time feedback on posts.

17.2 Stage 2

Overall, Stage 2 reviewers read online discussion posts (98.8%) and participated in an online discussion (100%) (Table 103). On average, Stage 2 reviewers read the online discussion posts for seven applications and participated in an online discussion for six. The most common reason for participating was a scoring discrepancy between themselves and another reviewer (82.2%) (Table 104). The most common reasons for not participating were a lack of participation by other reviewers (100%) and not being prompted to participate by the VC (100%) (Table 105). The majority of Stage 2 reviewers (68.0%) agreed that participating in the online discussion was helpful in the review process, 74.7% agreed that it influenced their assessment of the application, 80.8% modified at least one review, and 62.7% agreed that the comments in the online discussion were considered by other reviewers (Table 106). Similar to Stage 1, Stage 2 reviewers indicated a need to compel reviewers to participate and noted that they struggled with the asynchronous nature of the online discussion.

When VCs were asked for their feedback, 81.8% of Stage 2 VCs indicated that the online discussion functionality was helpful (Table 107). All Stage 2 VCs agreed that the online discussions were an important part of the Stage 2 review process and that the tool should be mandatory for reviewers who have divergent views of the same application. The majority (80.0%) indicated that the online discussion tool was helpful for discussing the application budget and providing a convergent budget recommendation to CIHR (Table 107). On average, Stage 2 VCs initiated 16 discussions, and approximately 35.0% of their reviewers required prompting to participate. The majority of Stage 2 VCs (82.0%) indicated that their reviewers were actively participating in online discussions (Table 108). Open-ended responses from VCs on the online discussion tool noted lag time and technology delays, as well as difficulties in calibrating scores. VCs also reported a lack of engagement from reviewers and suggested adding a videoconferencing option or returning to face-to-face reviews.

18. Feedback on the virtual chairs

The following section provides an overview of the respondents' feedback on the virtual chair role. The VCs' role was to confirm application assignments to reviewers, ensure that the reviews submitted were of high quality, flag applications that should be discussed by reviewers, monitor and/or prompt online discussions, and communicate with CIHR staff as required. Results in this section are presented by Stage 1 and Stage 2. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 110-115).

18.1 Stage 1

Generally, Stage 1 reviewers found that the participation of their VC was appropriate (72.5%) and helpful (67.9%), and that it helped ensure that necessary online discussions took place (77.2%) (Table 110). When asked to provide suggestions for improvement, respondents expressed in open-ended responses that virtual chairs should be able to exclude poor quality or non-responsive reviewers. Reviewers indicated confusion about VCs' expectations regarding consensus on rankings and budget, as they were not sure whether consensus was required. Additionally, they commented that the virtual chairs varied in quality, from those who did a good job prompting discussions to others who did not moderate any discussion. For example:

"There is a great variability between Chairs. Some were very active in prompting discussion, others not."

Stage 1 reviewer

When asked about their experience, 59.0% of Stage 1 VCs agreed that they were able to assign the correct complement of expertise to the applications (Table 111). Some VCs expressed that they were not aware of this ability and would have liked to be able to bring in reviewers with specific expertise when needed. VCs commented that there was a shallow pool of reviewers with the appropriate expertise but felt that, given the Stage 1 adjudication criteria, this was not as crucial at that stage. The majority of Stage 1 VCs agreed that their participation helped ensure that necessary online discussions took place (81.8%) (Table 112), and 77.3% received questions from reviewers regarding the new review process (Table 113). VCs indicated that most of the questions related to the online discussion tool, including when to use it, how much to discuss, and the timing of discussions. Reviewers also asked for clarification on the rating versus ranking process. The majority of Stage 1 VCs agreed that the information provided by CIHR was useful in identifying which applications should be discussed (63.6%) (Table 112), and 72.7% of Stage 1 VCs indicated that they were satisfied with their role (Table 115). VCs expressed in open-ended responses that they would like a better notification system, including more updates when reviewers post, discuss, or change ratings/rankings. VCs also suggested improving the visibility of rankings/ratings in the spreadsheet provided by CIHR. Additionally, they suggested providing VCs with reviewer email addresses to enhance reviewer participation in online discussions, or holding discussions in real time.

18.2 Stage 2

Generally, Stage 2 reviewers agreed that the participation of their VC was appropriate (74.8%), helpful (72.4%), helped to ensure that necessary online discussions took place (67.5%), and was helpful in prompting discussion between reviewers (84.3%) (Table 110). When asked about their experience, 72.7% of Stage 2 VCs agreed that the correct complement of expertise was assigned to their group of applications (Table 111). The majority agreed that their participation helped ensure that necessary online discussions took place (91.0%) (Table 112), and 63.6% indicated that they had received questions from reviewers regarding the new review process (Table 113). Overall, Stage 2 VCs agreed that the information provided by CIHR was useful in identifying which applications should be discussed (63.7%) (Table 114) and that they were satisfied with their role (72.8%) (Table 114).

19. Perceived workload

One of the goals of the new review process was to decrease reviewer burden and the amount of work required to conduct reviews. The following section provides an overview of the reviewers' and VCs' perception of their workloads. The proportions calculated in this section are based on the number of valid responses from 249 Stage 1 reviewers, 22 Stage 1 VCs, 90 Stage 2 reviewers, 11 Stage 2 VCs, and 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 116-122).

19.1 Stage 1

Generally, most Stage 1 reviewers indicated that their workload was "Just right" (38.6%) or "Manageable to challenging" (33.5%) (Table 116). On average, Stage 1 reviewers were assigned 10 applications. Compared with the OOGP competition, 68.0% said it was less work, 14.4% were neutral, and 14.4% indicated that it was more work. Compared with the 2014 Foundation competition, 33.3% said it was less work, 42.9% were neutral, and 19.1% said it was more work (Table 117). When assessing each review activity in comparison to the OOGP, 85.4% agreed that reading one application was less work, 50.0% said that looking up additional information related to one application was less work, and 74.2% said that writing the review of one application was less work. In comparison to the 2014 Foundation competition, the majority were neutral about the amount of work required to read one application (66.3%), look up additional information (66.3%), and write the review of one application (68.7%) (Table 118). On average, Stage 1 reviewers took one and a half hours to read a single application, one hour to look up additional information, one hour to write the review of a single application, one and a half hours to read other reviews, one and a half hours to participate in online discussions, and one and a half hours to complete the ranking of assigned applications (Table 119). Feedback in open-ended responses indicated that reading applications did not take a lot of time but that additional time was spent re-reading and looking up additional information. A lack of familiarity with the material and the relevant research field increased the time spent looking up additional information, such as citations, h-index, and publications. Reviewers indicated that writing the reviews went quickly, as they were only asked to provide minimal comments due to the character limits; however, they voiced concern about the quality of their reviews. Reading others' reviews was not found to be time-consuming, and reviewers generally used them as a comparison for their own ratings. Reviewers noted that use of the online discussion varied and that not all applications were discussed, so the amount of time spent on online discussions also varied. They also indicated that the time required for ranking depended on how many ties needed to be broken and how they were broken.

The majority of Stage 1 VCs (86.0%) agreed that their assigned workload was manageable (Table 120), and 95.5% agreed that the number of applications they were assigned was appropriate (Table 121). Stage 1 VCs were asked to compare their workload to the last time they had chaired: 14.4% indicated it was more work, 47.6% were neutral, and 33.3% indicated it was less work (Table 121). On average, Stage 1 VCs were assigned 27 applications; feedback from chairs indicated that 25 would be the appropriate number. On average, Stage 1 VCs took two hours confirming application assignments for reviewers, four hours reading applications assigned to their reviewers, four hours reading preliminary reviews completed by their reviewers, two and a half hours ensuring the quality of reviews submitted by their reviewers, four hours initiating online discussions, three hours prompting/reminding reviewers to participate in an online discussion, four hours participating in online discussions, and one hour communicating questions, concerns, and/or feedback to CIHR (Table 119). Feedback from open-ended responses indicated that VCs felt the timeframe and workload were appropriate but that the process could still be improved. VCs indicated that workload was determined not by the number of applications alone but also by the expertise of the chair and the number of applications that needed extensive discussion. VCs suggested that five reviewers, rather than three, should be assigned per application.

19.2 Stage 2

Generally, the majority of Stage 2 reviewers indicated that their workload was "Just right" (32.5%) or "Manageable to challenging" (31.3%) (Table 116). Compared with the OOGP competition, 59.3% said it was less work, 17.2% were neutral, and 23.5% said it was more work (Table 117). When comparing each review activity to the last OOGP competition, 71.8% said that reading one application was less work, 28.2% said that looking up additional information related to one application was less work, and 67.2% indicated that writing the review of one application was less work (Table 118). On average, Stage 2 reviewers took two and a half hours to read a single application, one hour to look up additional information, one and a half hours to write the review of a single application, one hour to read other reviews, one hour to participate in online discussions, and one hour to complete the ranking of assigned applications (Table 119).

The majority (91.0%) of Stage 2 VCs agreed that the number of applications they were assigned was appropriate in terms of workload (Table 120). Compared to the last time they chaired for CIHR, 22.2% indicated it was more work, 44.4% were neutral, and 33.3% indicated it was less work (Table 121). Stage 2 VCs indicated that 15 applications would be the appropriate number to assign to them. On average, Stage 2 VCs took one hour confirming application assignments for reviewers, three hours reading applications assigned to their reviewers, three and a half hours reading preliminary reviews completed by their reviewers, two hours ensuring the quality of reviews submitted by their reviewers, two and a half hours initiating online discussions, two hours prompting/reminding reviewers to participate in an online discussion, four hours participating in online discussions, and one hour communicating questions, concerns, and/or feedback to CIHR (Table 119).

19.3 Final assessment stage

Generally, FAS reviewers indicated that their workload was "Just right" (57.1%) or "Manageable to challenging" (42.9%) (Table 116). FAS reviewers agreed that they had sufficient time in advance of the meeting (85.7%) and that having three reviewers for each application was appropriate (85.7%) (Table 122). Compared to the last OOGP competition, 66.7% said it was less work and 33.3% were neutral. Compared to the 2014 Foundation competition, 60.0% said it was less work and 40.0% were neutral (Table 117). On average, FAS reviewers were assigned 15 applications and took one hour reading the Stage 2 reviews for one application, one hour consulting the Stage 2 grant application for one application, one hour looking up additional information related to the applications online, one and a half hours assigning the grant applications to YES/NO bins and writing comments to justify their assessment, one hour reading other FAS reviewers' comments using the "In meeting" task in ResearchNet, and one hour reviewing the FAS ranking to prepare for the FAS committee meeting (Table 119). FAS reviewers noted that the quality of the Stage 2 reviews varied significantly, which increased the time they spent reviewing because they had to look for additional information. The time given to FAS reviewers to conduct the assessment was perceived to be too short.

20. Face-to-face meeting

The following section provides an overview of the respondents' experiences with the face-to-face meeting. Prior to the face-to-face meeting, each reviewer was assigned a subset of applications, and each application was assigned to three reviewers. For each application, the reviewer had access to information from Stage 2, including the reviews, the consolidated rankings, the standard deviations, and the full application. A binning system was used: each application was placed in a YES bin (to be considered for funding) or a NO bin (not to be considered for funding). Each reviewer was allocated a minimum number of applications that could be placed in the YES and NO bins and submitted their recommendations to CIHR prior to the meeting. Based on the YES/NO binning recommendations reviewers made as part of the pre-meeting activities, CIHR ranked all the FAS applications from highest to lowest. At the meeting, the applications were placed into one of three groups: Group A (applications recommended for funding), Group B (applications for discussion at the meeting), or Group C (applications not recommended for funding). Group B applications were discussed further in the face-to-face meeting; a sketch of this triage follows below. The proportions calculated in this section are based on the number of valid responses from 7 FAS reviewers; associated total responses can be found in Appendix A (Tables 123-124).
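
As a minimal sketch of the pre-meeting triage described above, the following code groups applications by their three reviewers' YES/NO recommendations. The unanimity thresholds are assumptions for illustration only; the report does not specify the exact rule CIHR used to derive Groups A, B, and C from the binning recommendations.

# Minimal sketch of the FAS triage described above. The unanimity rule is
# an assumption for illustration; the report does not state CIHR's exact
# cut-offs for Groups A, B, and C.
def triage(applications):
    """Group applications by their FAS reviewers' YES/NO bins."""
    groups = {"A": [], "B": [], "C": []}
    for app_id, votes in applications.items():
        yes_votes = votes.count("YES")
        if yes_votes == len(votes):      # unanimous YES: recommend funding
            groups["A"].append(app_id)
        elif yes_votes == 0:             # unanimous NO: do not recommend
            groups["C"].append(app_id)
        else:                            # split votes: discuss at the meeting
            groups["B"].append(app_id)
    return groups

example = {
    "APP-001": ["YES", "YES", "YES"],
    "APP-002": ["YES", "NO", "YES"],
    "APP-003": ["NO", "NO", "NO"],
}
print(triage(example))  # {'A': ['APP-001'], 'B': ['APP-002'], 'C': ['APP-003']}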

Just over half (57.0%) of FAS reviewers indicated that there was an appropriate number of YES and NO allocations in the binning process (Table 123). Open-ended responses varied on what the appropriate binning would be; one suggestion was to have it determined by the reviewers. The majority of FAS reviewers agreed with the following statements about the face-to-face meeting: the instructions provided at the meeting were clear and easy to follow (85.7%); creating Groups A, B, and C and focusing the discussion at the committee meeting on applications in Group B was appropriate (85.7%); the process of moving applications from Group A or Group C to Group B was clear and easy to complete (71.9%); the process of moving applications between groups was efficient (57.2%); conflicts were handled appropriately at the face-to-face meeting (100%); the voting tool was easy to use (85.7%); the voting process was effective (85.7%); the instructions provided regarding the voting process were easy to follow (71.4%); and the face-to-face meeting was required in order to determine which applications should be funded (100%) (Table 119). However, only 40.9% agreed that the funding cut-off line helped to inform the discussion at the meeting (Table 124); feedback from FAS reviewers indicated that they were not aware of the funding cut-offs. Additionally, reviewers felt that the face-to-face portion was the most valuable part of the review, although its value depended on the completed reviews and their quality.

21. Notice of decision

The following section provides an overview of feedback from Stage 2 applicants after decision on the NOD document, a new design element implemented by CIHR to indicate whether or not a proposal was approved. Feedback on the NOD was only requested from Stage 2 applicants after decision. The proportions calculated in this section are based on the number of valid responses from 93 Stage 2 applicants after decision; associated total responses can be found in Appendix A (Tables 125-126).

The majority of Stage 2 applicants after decision (61.5%) agreed that the NOD clearly explained the Stage 1 and Stage 2 results of their application, and 55.4% agreed that the document was helpful in interpreting their results (Table 125). Similarly, 54.4% of Stage 2 applicants after decision indicated that they used the NOD document to interpret the results (Table 126). Applicants expressed in open-ended responses that they were not aware of the document or how to access it. Additionally, they were unclear on how their rankings compared to others' rankings, and it was not apparent at first glance whether or not they had been funded. Applicants communicated that they had to read the document multiple times to understand the content, as they were unsure of where they stood in the rankings. Applicants requested more detail and transparency on how rankings were calculated. For example:

"[I'm] still a little confused by consolidated ranking ... would be more clear to know what "Place's I was ranked so I can see how close I was to funding cut off. I could not decipher where I stood compared to funding cut off."

Stage 2 applicant after decision
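
To illustrate the kind of calculation applicants were asking to have explained, the sketch below computes a consolidated ranking by averaging each application's ranks across reviewers and ordering by that average. This construction is an assumption for illustration only; the report does not describe CIHR's actual consolidation method.

# Hypothetical illustration of a consolidated ranking. CIHR's actual
# method is not described in the report; here each application's ranks
# from its reviewers are averaged, and applications are ordered by that
# average (lower is better).
from statistics import mean

reviewer_ranks = {          # application ID -> ranks assigned by reviewers
    "APP-001": [1, 2, 1],
    "APP-002": [3, 1, 2],
    "APP-003": [2, 3, 3],
}

consolidated = sorted(reviewer_ranks, key=lambda app: mean(reviewer_ranks[app]))
for position, app in enumerate(consolidated, start=1):
    print(position, app, round(mean(reviewer_ranks[app]), 2))
# 1 APP-001 1.33
# 2 APP-002 2.0
# 3 APP-003 2.67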

22. Survey feedback

The following section provides an overview of the respondents' experience with completing the feedback surveys. Survey respondents were asked to provide general feedback on the survey process or the survey questions as open-ended responses.

Survey feedback included that respondents felt the survey was too long and took longer to complete than the stated time. Respondents indicated that questions were repetitive and that the time commitment of completing the survey may be a deterrent to future participation. Additionally, they expressed that some questions were too restrictive (e.g., Yes/No responses) and did not allow for added granularity. Generally, respondents were thankful for the opportunity to provide feedback and hoped that CIHR would take note of their comments. Applicants requested the addition of specific questions about the new changes, the reviewers, the fairness of the review process, and CIHR leadership. Reviewers requested the addition of specific questions regarding the timeline, impressions of the new changes, virtual meetings, whether assigned applications fit their expertise, the fairness of the process, and the challenges of reviewing early career investigators. Respondents also suggested the ability to save or download their survey comments for their own records. Other suggestions included earlier access to the survey after completing their submissions, in order to comment accurately on items, and an N/A or skip option for questions.

Limitations

This report has the following limitations: (a) the data from the online survey were collected anonymously and were not linked across the competition phases; therefore, we were unable to confirm whether each response came from a unique respondent; (b) sample sizes may not be representative of researchers across Canada, as provincial data were not collected; (c) the average response rate for the surveys was 51.0%; it is therefore possible that this report does not represent the full views of all possible participants, as non-respondents could have had different characteristics and opinions from respondents; (d) open-ended comments were coded by a single coder, which could introduce a degree of subjectivity; and (e) sample sizes were limited in certain categories, with VCs and FAS reviewers having the smallest sample sizes compared with other respondent groups. These limitations are important to note when referring to this report as a summary of respondents' views of the application process.
