Stakeholder assessment of the 2016 Project Grant funding competition
Final Report

Prepared by:

  • Dr. Jamie Park
  • Mahrukh Zahid
  • Julie Bain
  • Jennifer Rup
  • Dr. Jemila Hamid
  • Caitlin Daly
  • Dr. Julia E. Moore
  • Dr. Sharon Straus

For questions about this report, please contact:

Jamie Park, Ph.D
Research Coordinator
Knowledge Translation Program
Li Ka Shing Knowledge Institute
St. Michael's Hospital
Toronto, Canada
Email: ParkJam@smh.ca
Phone: 416-864-6060 ext. 76219

Table of Contents


Abbreviations

CCV
Canadian Common CV
CIHR
Canadian Institutes of Health Research  
FAS
Final assessment stage
IKT
Integrated knowledge translation
IQR
Interquartile range
KT
Knowledge translation
NOD
Notice of decision
OOGP
Open Operating Grant Program
VC
Virtual chair

Key Definitions

Stage 1 applicants
Applicants who submitted a research proposal to the first stage of the 2016 Project Grant competition
Stage 1 reviewers
Reviewers who assessed applications in the first stage of the 2016 Project Grant competition using an internet-based platform
Stage 1 virtual chairs
Chairs responsible for overseeing and supporting the Stage 1 remote review process
Final assessment reviewers
Stage 1 virtual chairs who became reviewers and participated in a face-to-face discussion for the Final assessment stage
Applicants after decision
Applicants at the end of the Final assessment stage who were notified whether they were successful or unsuccessful in the 2016 Project Grant competition

Executive Summary

Purpose

The Canadian Institutes of Health Research (CIHR) has been working with the research community to reform and modernize its Investigator Initiated Programs and review processes. As part of the reform, in the spring of 2016 CIHR introduced the first live pilot Project Grant competition, designed to support projects with the greatest potential to advance health-related fundamental or applied knowledge, health research, health care, health systems, and/or health outcomes and that have a specific purpose and a defined endpoint. Feedback was collected on the application and peer review processes. This report summarizes the feedback received from applicants, research administrators, peer reviewers, and virtual chairs across the two stages of the 2016 Project Grant competition. A total of six surveys were disseminated and analyzed to evaluate each stage (i.e., Stage 1 and the Final assessment stage) and each group of participants (i.e., applicants, research administrators, reviewers, virtual chairs) in the process; response rates and demographics are found in Sections 1 and 2. The overall response rate was 49.3% (n=3962) and the findings presented in the following sections are representative of the final dataset of survey responses.

Sections 3 and 4 provide an overview of the respondents' perception of the adjudication criteria and scale. Generally, applicants were unclear about the distinction between "Quality of the Idea" and "Importance of the Idea". Respondents suggested adding a background/preliminary research section as a new criterion. Reviewers indicated they had difficulty using the adjudication scale and felt that the IKT approach was not effectively integrated or assessed. Respondents recommended increasing the weighting of "Research Approach" and decreasing or combining the weighting of "Quality of the Idea" and "Importance of the Idea". Sections 5 and 6 include an overview of the respondents' satisfaction with the application process and format. Overall, the application format was easy to work with; however, applicants and reviewers found the character limits too restrictive to explain their research. Sections 7 and 8 include the respondents' experience with the CV and budget. Applicants suggested an increase in the "Publications" section, and most stated that the budget process was clear but wanted more space to justify their requests. Sections 9 and 10 provide a high-level overview of the relevance of the supporting documents and learning materials to respondents. Supporting documents and learning materials were generally used and found helpful; however, respondents suggested streamlining their content and access. Section 11 presents respondents' feedback on ResearchNet. Overall, it was found to be easy to use, with a satisfactory support service. Suggestions for improvement included repairing character counts and adding an automatic saving function.

Sections 12 and 13 provide information on overall satisfaction with the review format and process. Generally, there was a positive reaction to the review worksheet, though reviewers did request that the strengths and weaknesses sections be merged. Overall, respondents were not very satisfied with the review process; respondents noted problems with reviews not being aligned with ratings, discrepancies between reviewers, and the provision of brief reviews. Section 14 includes reviewers' and VCs' experience with the ranking process. Reviewers maintained that their reviews produced a ranked list; however, they questioned the purpose of ranking and of breaking ties during the process. Sections 15 and 16 present feedback on reading reviewers' comments and responses on review quality. Reading others' reviews was reported to be helpful and to have influenced reviewers' assessments in Stage 1, while having less of an impact in the FAS. The quality of reviews was noted to be unsatisfactory by some applicants and reviewers. Applicants noted that reviews did not align with their scores and questioned the expertise of reviewers. Sections 17, 18, and 19 include feedback on the online discussion process, the role of the VC, and perceived workloads. The online discussion was perceived by reviewers as an important and helpful part of the review process; however, reviewers and VCs indicated that the level of commitment varied. Virtual chairs were seen as beneficial, and they requested additional control within their scope of work. Workloads were perceived as ranging from manageable to challenging, depending on factors such as reviewers' knowledge base and the provision of timely reviews. Sections 20 and 21 summarize feedback on the face-to-face meeting and the NOD document. FAS reviewers indicated that the FAS processes were clear and easy to follow, while the NOD document was generally found to be difficult to use and to access. Sections 22 and 23 provide a high-level overview of feedback received on the surveys and the limitations of this report.

Competition overview

The 2016 Project Scheme application process included registration, followed by a two-stage competition and peer review process. In Stage 1, applicants (i.e., Stage 1 applicants) completed a structured application form that aligned with adjudication criteria focused on the concept and feasibility of the project. Applicants and co-applicants were also required to complete a CV through the web-based Canadian Common CV (CCV) and to submit a budget. Reviewers (i.e., Stage 1 reviewers) assessed the project concept and the feasibility of the proposed plan of execution. They reviewed their assigned applications by providing structured reviews that consisted of a rating for each adjudication criterion and brief comments on strengths and weaknesses. Aided by their ratings, reviewers were asked to rank their group of applications. CIHR combined all reviewer rankings into a consolidated ranking for each application. Reviews were conducted remotely through an internet-assisted platform that enabled communication among reviewers in a virtual space. Virtual chairs (i.e., Stage 1 VCs) oversaw and supported the Stage 1 review process. In the Final assessment stage (FAS), reviewers (i.e., FAS reviewers) participated in a face-to-face discussion and integrated the results of the Stage 1 reviews. FAS reviewers focused on assessing applications that were identified as being close to the funding cut-off (the "grey zone") and that demonstrated a high degree of variability in Stage 1 reviewer assessments. Once the FAS was complete, the reviewers provided CIHR with recommendations on which applications should be funded. A final NOD document that integrated Stage 1 and FAS results was provided to applicants (i.e., Applicants after decision).

Methods

Online surveys were developed in FluidSurveys and were sent to Project Grant applicants, reviewers, research administrators, and virtual chairs from March 2016 to August 2016. CIHR administered the surveys and provided the survey results to the Knowledge Translation Program at St. Michael's Hospital for analysis between November 2016 and February 2017. The surveys included closed and open-ended questions. The closed-ended questions were analyzed as proportions of total responses received for a question using SPSS v20. Where appropriate, Likert scale responses were reduced to the nominal level by combining all "agree" and "disagree" responses into two categories of "accept" and "reject"; chi-square tests or Fisher's exact tests were applied to determine statistical significance. We used t-tests or ANOVAs to compare mean scores of continuous variables across subgroups using the computing environment R. Comments received for open-ended questions were analyzed in NVivo 11. French responses were translated into English. Two qualitative analysts independently familiarized themselves with the survey data by reviewing a portion of responses to develop an initial list of codes, key ideas, and themes. The analysts compared their initial lists of potential codes and developed an analytic framework to apply to the data. Responses were coded by a single analyst using the developed framework; the analyst further refined and modified the framework to better fit the data. An iterative data analysis process was used in which the framework was repeatedly adapted during the coding process to capture emergent themes. Note that only responses relevant to the question asked were coded and that one response could be coded to multiple themes. Major findings are presented in this report; responses from all survey questions are presented in the appendices.
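
To illustrate the quantitative analysis described above, the sketch below shows in Python (rather than the SPSS and R environments actually used) how a Likert item might be collapsed to the nominal level and compared across two respondent groups with a chi-square test, falling back to Fisher's exact test when expected cell counts are small. The group labels and counts are hypothetical and serve only to illustrate the logic.

    # Minimal sketch of the closed-ended analysis: collapse Likert responses into
    # "accept" vs. "reject" and test for an association with respondent group.
    # Counts are hypothetical; the report's analyses were run in SPSS and R.
    from scipy.stats import chi2_contingency, fisher_exact

    likert_counts = {
        "Stage 1 applicants":      {"strongly agree": 40, "agree": 80, "disagree": 35, "strongly disagree": 10},
        "Research administrators": {"strongly agree": 20, "agree": 40, "disagree": 10, "strongly disagree": 5},
    }

    def collapse(counts):
        # Reduce the Likert item to two nominal categories ("accept" vs. "reject").
        accept = counts["strongly agree"] + counts["agree"]
        reject = counts["disagree"] + counts["strongly disagree"]
        return [accept, reject]

    table = [collapse(counts) for counts in likert_counts.values()]
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).any():  # small expected counts: use Fisher's exact test instead
        _, p = fisher_exact(table)
    print(f"p = {p:.3f}")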

Findings

This report consists of the survey response rates and respondents' demographic characteristics. It also includes respondents' feedback on 19 areas of the application and review process: their perception of the adjudication criteria, adjudication weighting, application process, application format, CV, budget, supporting documents, learning materials, ResearchNet, review format, review process, ranking process, experience reading reviews, quality of reviews, online discussion, role of the virtual chair, perceived workload, the face-to-face meeting, and the NOD document. The last sections of the report present participants' comments on the surveys used to collect their feedback and the limitations of the survey results.

1. Survey response rate

A total of 8162 participants were invited to complete a survey; 4031 responses were received and 69 of those were excluded due to missing data. A total of 3962 responses were included in the following analysis. There were 1614 responses from Stage 1 applicants, 104 from Research administrators, 920 from Stage 1 reviewers, 71 from Stage 1 VCs, 14 from FAS reviewers, and 1239 from Applicants after decision. The average response rate was 49.3%, with Research administrators having the lowest response rate at 38% and Stage 1 VCs having the highest at 61% (Table 1).
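
As a minimal sketch of the arithmetic above, the following Python snippet recomputes the per-group and average response rates from the invited counts reported in Section 2 and the included-response counts listed here; the published figures may differ slightly because exclusions for missing data are not broken down by group in this report.

    # Recompute response rates from the counts reported in Sections 1 and 2.
    # Small rounding differences from the published 49.3% are expected.
    invited = {"Stage 1 applicants": 3040, "Applicants after decision": 3037,
               "Research administrators": 276, "Stage 1 reviewers": 1664,
               "Stage 1 VCs": 116, "FAS reviewers": 29}
    included = {"Stage 1 applicants": 1614, "Applicants after decision": 1239,
                "Research administrators": 104, "Stage 1 reviewers": 920,
                "Stage 1 VCs": 71, "FAS reviewers": 14}

    rates = {group: included[group] / invited[group] for group in invited}
    for group, rate in sorted(rates.items(), key=lambda item: item[1]):
        print(f"{group}: {rate:.0%}")  # lowest is ~38% (administrators), highest is ~61% (VCs)
    print(f"Average of group response rates: {sum(rates.values()) / len(rates):.1%}")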

2. Demographics

The following section provides a high-level overview of the respondents' career stage, profession, research pillar, language preferences, funding status, and previous experience. For career stages, early career is defined as having less than five years of experience as an independent researcher, mid-career as 5-10 years, and senior career as over 10 years' experience. The research pillars encompass the four pillars of health research outlined in CIHR's mandate (biomedical; clinical; health systems and services; and social, cultural, environmental, and population health). For a full breakdown of demographic criteria, please refer to Tables 2-17 in Appendix A.

2.1 Applicants

A total of 3040 Stage 1 applicants were invited to complete a survey; 1640 replied, including early (28.2%), mid (41.1%), and senior (30.4%) career scientists, as well as a small proportion of knowledge users (0.3%) (Table 2). The majority of Stage 1 applicants indicated their gender as male (61.5%) and 36.5% indicated female (Table 4). Responses included a large proportion of applicants in the biomedical field (65.8%) and smaller proportions in the clinical (14.2%), health systems/services (8.6%), and social, cultural, environmental, and population health fields (10.7%) (Table 5). When asked about their language use and preference, 82.8% of Stage 1 applicants indicated English as their official language, 97.6% used English when completing their application, and 95.6% felt comfortable submitting in their language of choice (Table 6). A small proportion (3.9%) of respondents encountered language-related issues when completing their application. Open-ended responses indicated that some French-speaking applicants applied in English instead of their first language because of the perception that there were more English-speaking reviewers and that applying in French might put their application at a disadvantage. For example:

"I think submitting in English gives me a better chance to get funded and a better chance to be assigned the reviewers most competent in my field."

Stage 1 applicant

A small proportion of Stage 1 applicants (9.5%) indicated they did not have any previous experience submitting an application to a CIHR competition, while a large proportion (81%) indicated that they had previous experience submitting an application to the Open Operating Grant Program (OOGP) competition (Table 7).

2.2 Applicants after decision

Out of 3037 Applicants after decision, 1239 replied to the survey, including early (25%), mid (42%), and senior (32.9%) career scientists (Table 2). Responses from Applicants after decision included a large proportion of those in the biomedical field (68.4%) and smaller proportions in the clinical (13.4%), health systems/services (7.3%), and social, cultural, environmental, and population health fields (10.1%) (Table 5). The majority of Applicant after decision respondents (66%) had not previously applied to a CIHR competition (Table 7), and 94.7% indicated that they had submitted a new application compared to 5.3% whose submission was revised from a previous competition (Table 8). The majority of survey responses from Applicants after decision (76.9%) were from those who were not successful in the 2016 Project Grant competition (Table 9).

2.3 Research administrators

Out of 276 Research administrators invited to complete a survey, 104 replied and the majority indicated their gender as female (80.4%). While a small proportion (10.6%) of research administrators indicated that they did not have previous experience submitting applications from their institution to a CIHR competition, the majority (74%) did have experience from the previous OOGP competition (Table 7).

2.4 Reviewers

Out of 1664 Stage 1 reviewers invited to complete a survey, 920 replied, including mid (41.8%) and senior (41.8%) career scientists. Additionally, out of 29 FAS reviewers, 14 completed the survey, including mid (64.3%) and senior (64.3%) career scientists (Table 2). A small proportion of responses were from early career scientists (10.1% for Stage 1 reviewers) or from knowledge users (3.6% for Stage 1 reviewers, 7.1% for FAS reviewers). A large proportion of Stage 1 and FAS reviewers were in the biomedical field (47.1%, 64.3%), and smaller proportions were in the clinical (26.2%, 14.3%), health systems/services (10.9%, 7.1%), and social, cultural, environmental, and population health fields (14.0%, 14.3%) (Table 5). Less than half of Stage 1 reviewer respondents (42.5%) did not have previous review experience, while 45.1% had previous experience with the OOGP competition (Table 10). The majority of those with previous experience in a non-pilot competition indicated that their role was as a reviewer (53.9%), compared to chair (2.4%) or scientific officer (2.1%) (Table 11). In contrast, all of the FAS reviewers who responded had previous review experience for a CIHR competition (Table 10); the majority had experience as a reviewer (71.4%) in a non-pilot competition, compared to chair (14.3%) or scientific officer (7.1%) (Table 11).

2.5 Virtual chairs

A total of 116 VCs were invited to complete a survey; 71 responded, including mid (22.5%) and senior (71.8%) career scientists (Table 2), as well as a small proportion (4.2%) of knowledge users. A large proportion of Stage 1 VCs reported being in the biomedical field (64.8%), and smaller proportions were in the clinical (15.5%), health systems/services (2.8%), and social, cultural, environmental, and population health fields (16.9%) (Table 5). The majority of Stage 1 VC respondents had previous review experience for the OOGP (91.5%) and a small percentage did not have any previous experience reviewing for CIHR (1.4%) (Table 10). Additionally, the majority of Stage 1 VCs (62%) did not have previous chairing experience for CIHR (Table 12).

3. Feedback on the adjudication criteria and scale

As part of the new application and review process, CIHR introduced adjudication criteria in Stage 1. Stage 1 adjudication criteria focused on the concept and the feasibility of the project and included: "Quality of the Idea", "Importance of the Idea", "Approach", and "Expertise, Experience and Resources". Additionally, applications with an integrated knowledge translation (IKT) focus needed to be able to include their knowledge translation (KT) plans within the Project Scheme adjudication criteria. A new rating scale was also developed for the new competition that reviewers used to rate each of the adjudication criteria (O++, O+, O, E++, E+, E, G, F, and P). The following section provides an overview of the respondents' experience using the adjudication scale and feedback on the adjudication criteria. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, 920 Stage 1 reviewers, 14 FAS reviewers, and 1239 Applicants after decision; associated total responses can be found in Appendix A (Tables 13-29).

Overall, over 80% of Stage 1 applicants and Research administrators agreed that the adjudication criteria "Approach" and "Expertise, Experience and Resources" were clear and that they understood what application information should be included in relation to each adjudication criterion (Table 13). However, 47.9% of Stage 1 applicants and 56.9% of Research administrators agreed that the "Quality of the Idea" criterion was clear. Relatedly, 52.5% of Stage 1 applicants and 61.1% of Research administrators agreed that the "Importance of the Idea" criterion was clear. When asked about the usefulness of the "Interpretation Guide", half of Stage 1 applicants (51.1%) indicated that the criteria were clearly described in the guide and allowed them to adequately convey the required application information (Table 14). This is consistent with the open-ended responses, in which survey respondents indicated that the distinction between "Quality of the Idea" and "Importance of the Idea" was very unclear based on the information provided. For example:

"There seemed to be a lot of overlap in criteria for quality of idea and importance of idea and it was hard to determine what type of information should be included in each section while avoiding redundancy."

Stage 1 applicant

Survey respondents indicated that clearer information on the distinction between these two criteria needs to be provided to applicants. Alternatively, survey respondents suggested merging these criteria to avoid confusion. For example:

"It was somewhat difficult to distinguish between the quality and importance of the idea criteria. I think it might be more useful to collapse those two categories together in future competitions (but provide adequate character limits to address both things)."

Stage 1 applicant

When asked if adjudication criteria should be removed or added, the majority of Stage 1 applicants and Research administrators did not want to remove (69.5%, 80.9%) or add (64.3%, 70.3%) any criteria (Table 15). Those who did want to modify existing criteria indicated in open-ended responses that the "Quality of the Idea" and "Importance of the Idea" criteria should be removed or merged. Furthermore, applicants also suggested that the "Expertise, Experience, and Resources" criterion be removed or given less weighting as it may bias against early career researchers or smaller institutions. In terms of adding criteria, survey respondents indicated that a background section, where applicants could describe the relevant background and rationale for the proposed research, should be considered as an additional adjudication criterion. Relatedly, survey respondents indicated that preliminary data and previous research or grant progress should be an additional adjudication criterion. Survey respondents also indicated that the creativity or originality of the proposed research should be considered as an additional adjudication criterion to discourage incremental research and mitigate the potential for bias against truly novel research. Furthermore, applicants suggested that consideration for career stage be incorporated into the adjudication criteria to mitigate the perceived bias against early career investigators.

Reviewers were also asked for their feedback on the clarity of adjudication criteria; 36.3% of Stage 1 reviewers indicated that the distinction between "Quality of the Idea" and "Importance of the Idea" was clear (Table 16) and 34.8% agreed that they saw value in keeping them as separate criteria. A minority of reviewers (31.3%) agreed that applicants understood the distinction between "Quality of the Idea" and "Importance of the Idea" (Table 17). Reviewers also suggested that these two criteria could be merged and given less weighting. For example:

"I think Quality of the Idea and Importance of the Idea should be merged. Unfortunately, by breaking it up into pieces, the flow and logic of the grant application was interrupted. I did not feel that they could tell me a coherent story and argumentation for what they were proposing to do."

Stage 1 reviewer

Reviewers also provided feedback on their perception of applicants' understanding of the adjudication criteria; the majority agreed that applicants understood the "Approach" (89.8%) and "Expertise, Experience and Resources" (88.9%) criteria. A smaller proportion agreed that applicants understood the "Importance of the Idea" (59.8%) and "Quality of the Idea" (50%) criteria (Table 17). The majority of Stage 1 reviewers (69.5%) indicated that applicants required additional guidance regarding the adjudication criteria (Table 18). When asked if adjudication criteria should be removed or added, the majority of Stage 1 reviewers did not want to remove (63.6%) or add (68.4%) any criteria (Table 15). Those who did indicate a preference to change the criteria included their suggestions in open-ended responses. Reviewers identified that additional, objective measures of productivity should be considered in order to establish an applicant's "track record", which is perceived to be a good indicator of future success in research. Reviewers and applicants also shared the perception that greater emphasis and additional space should be provided to properly explain the methodology or "the science" behind the proposed research, particularly the theory and the feasibility of a project. For example:

"There is a general lack of emphasis in study methodology. The robustness of the methodology can only be evaluated under one item under feasibility - approach. This is the most critical component of any research design yet in the review process and virtual meetings it was not given the due attention most likely due to the lack of enough emphasis in the criteria for adjudication."

Stage 1 reviewer

They also suggested that collaboration or letters of support be considered as an additional adjudication criterion in order to properly explain the roles and expertise of all personnel involved, beyond the character constraints of the "Expertise, Experience, and Resources" criterion.

When asked about their perceived ability to adjudicate, 76.3% of Stage 1 reviewers agreed that they were able to assess the "Quality of the Idea" using the information provided by the applicant, 77.7% were able to assess the "Importance of the Idea", 83.5% were able to assess the "Approach" criterion, and 87.1% indicated that they were able to assess "Expertise, Experience and Resources" (Table 19). This was similar to their feedback on the clarity of how to assess each criterion: 72.8% were clear on how to assess "Quality of the Idea", 73.6% were clear on "Importance of the Idea", 88.1% were clear on "Approach", and 87.6% were clear on "Expertise, Experience and Resources" (Table 20). The majority (65.6%) agreed that the adjudication criteria allowed them to distinguish differences in the quality and feasibility of the proposed research project (Table 21).

Feedback on the adjudication scale identified that 50% of Stage 1 reviewers agreed that the scale allowed them to identify meaningful differences between applications (Table 22). Additionally, 71.7% of Stage 1 reviewers indicated that their ratings aligned with their provided comments, and 66.0% agreed that it was helpful to have the added granularity at the top of the rating scale. About half of Stage 1 reviewers agreed that the descriptions provided for each letter of the adjudication scale were clear (55.1%), appropriate (53.6%), and useful (52.7%) (Table 23). In the open-ended responses, respondents indicated that there was a lack of clarity on how to use the scale. For example, reviewers reported not knowing the difference between an O+ rating and an O++ rating. As a result, respondents shared the perception that the scale was used inconsistently by reviewers. For example:

"The criteria for the different ratings was not helpful or well defined. In fact, quite ambiguous. There was clearly no calibration between different reviewers. Often my "O+" was someone else's "E+" (or vice versa).  And there was a wide range of how different reviewers interpreted the different criteria."

Stage 1 reviewer

Reviewers felt the scale was not intuitive and that expectations for how to use the scale were unclear, which ultimately led to the scale being used improperly. Both reviewers and virtual chairs perceived that reviewers' scores were clustered at the top of the scale, resulting in the perception that the scale was ineffective at discriminating between applicants. Although the added granularity of the scale was intended to help reviewers discriminate between applicants, reviewers indicated that less granularity would be helpful in future competitions. Respondents also indicated that the adjudication scale could be improved for future competitions by reverting to the numerical scale used in previous competitions, which can be used to calculate an overall score or average. For example:

"A numerical (or percentile quality) scale would be more useful in my opinion. Easier to sum up and rank with more range and flexibility."

Stage 1 VC

When asked about their use of the adjudication scale, 58.5% of Stage 1 reviewers indicated that they used the full range of the adjudication scale (Table 24). However, the majority of FAS reviewers (85.8%) did not agree that Stage 1 reviewers used the full range of the scale.

Applicants after decision were also asked to provide feedback on the adjudication criteria; 59.8% were dissatisfied with the clarity of the adjudication criteria compared to 25.6% who were satisfied (Table 25). Similarly, 68.4% were dissatisfied with the clarity of the rating system compared to 19.7% who were satisfied. Open-ended responses indicated that applicants believed the adjudication criteria were unclear and in some cases inappropriate. For example:

"The adjudication criteria were repetitive and vague, and I felt that I had to repeat things because there was so much overlap in the adjuration criteria for Quality and Importance of the idea, while leaving out important things due to lack of space."

Applicant after decision

Applicants were also concerned about whether reviewers interpreted the criteria consistently and suggested that face-to-face calibration would have improved the process.

3.1 The integrated knowledge translation approach

As part of the application process, Stage 1 applicants with a knowledge translation component were asked to include their integrated knowledge translation (IKT) approach within the adjudication criteria. A minority of Stage 1 applicants (36.2%) reported being able to convey their IKT approach within the application (Table 26). Open-ended responses indicated that applicants were confused by the terminology and struggled with what to include in this section. For example:

"Please define in simple terms what you consider to be an "integrated knowledge translation approach". It can be interpreted in so many different ways as to be meaningless."

Stage 1 applicant

In addition, applicants suggested that the "Knowledge Translation" criterion be removed as it may bias against applicants proposing basic and/or biomedical research that may not have an immediate impact on human health. Others expressed that the character limits did not allow for a full description of their plan and consequently felt they were not doing it justice in the space provided. Applicants suggested including IKT as a separate section in order to properly convey their reasoning and approach. Additionally, 32.4% of Stage 1 reviewers agreed that the information supplied by applicants was enough to assess the IKT approach and 33.7% reported that the information was sufficient to assess the IKT approach (Tables 27-28). Compared to the last CIHR non-pilot competition, 45.6% of Stage 1 reviewers disagreed that the adjudication criteria allowed for better assessment of the IKT approach and 45.9% disagreed that the information contained in the structured application allowed for better assessment of the IKT approach (Table 29). Open-ended responses identified that reviewers were not sufficiently knowledgeable in IKT to properly assess the criterion, or believed that applicants also did not know what this criterion entailed, and suggested that additional guidance be given to clarify what is expected. Reviewers also identified that the IKT approach needed its own section or additional space to properly relay the information.

4. Weighting of the adjudication criteria

The following section provides an overview of the respondents' feedback on the weighting of the adjudication criteria. Stage 1 adjudication criteria were weighted as: 25% for "Quality of the Idea", 25% for "Importance of the Idea", 25% for "Approach", and 25% for "Expertise, Experience and Resources". The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, and 920 Stage 1 reviewers; associated total responses can be found in Appendix A (Tables 30-31).

Overall, approximately half of the Stage 1 applicants perceived that the weighting was appropriate for each of the adjudication criteria. Specifically, 46.1% agreed on the "Quality of the Idea" weighting, 52.7% agreed on the "Importance of the Idea" weighting, 41.2% agreed on the "Approach" weighting, and 56.6% agreed on the appropriateness of the weighting for "Expertise, Experience and Resources" (Table 30). For those who did not agree, the median ideal weighting for "Quality of the Idea" was 15% (IQR=10-20); "Importance of the Idea" was 15% (IQR=10-20); "Research Approach" was 40% (IQR=35-50); and "Expertise, Experience and Resources" was 20% (IQR=10-20) (Table 31).

Similarly, over half of Research administrators (66.7%) and Stage 1 reviewers (59.2%) perceived that the weighting for "Expertise, Experience and Resources" was appropriate. A smaller proportion agreed on the appropriateness of the weighting for the "Quality of the Idea" (53.6% Stage 1 reviewers, 42.8% Research administrators), "Importance of the Idea" (52.4% Stage 1 reviewers, 45.4% Research administrators), and "Approach" (48.8% Stage 1 reviewers, 36.8% Research administrators) criteria (Table 30). When asked what the ideal weighting should be, Research administrators responded 15% for "Quality of the Idea" (IQR=10-20); 20% for "Importance of the Idea" (IQR=15-20); 35% for "Approach" (IQR=20-40); and 20% for "Expertise, Experience and Resources" (IQR=15-28.8) (Table 31). Correspondingly, Stage 1 Reviewers responded that 15% is ideal for the "Quality of the Idea" (IQR=10-20); 15% is ideal for "Importance of the Idea" (IQR=10-20); 40% is ideal for "Research Approach" (IQR=35-50); and 20% is ideal for the "Expertise, Experience and Resources" (IQR=15-30) (Table 31). In the open-ended responses, respondents indicated that there was too much weighting on the "Quality of the Idea" and "Importance of the Idea". Respondents also indicated that the significant weighting placed on "Quality of the Idea" and the "Importance of the Idea" may actually be placing applicants proposing foundational or basic research at a disadvantage. Respondents suggested that these two criteria could be merged and given less weighting. Additionally, survey respondents indicated that the current adjudication criteria do not have enough weighting on the "Approach" or proposed methodology, which was perceived as critical in deciding which applications are successful. Consequently, survey respondents indicated that there should be more weighting on the "Approach" criterion. Furthermore, respondents indicated that there should be less weight on the "Expertise, Experience, and Resources" criterion as it does not serve as a discriminatory criterion and may bias against smaller research institutions or early career investigators. For example:

"I also think that the expertise/experience/resources is over-weighted at 25% because it will disadvantage junior researchers and those at smaller institutions, both of whom are already at a significant disadvantage in the process. If we want a more equitable process, it's important to limit the weight of factors like these that will advantage certain groups over others in ways that have nothing to do with the quality of the idea or approach."

Stage 1 reviewer

In addition, applicants expressed concern related to the lack of correlation between the character limit and the adjudication weighting. For example, for the "Quality of the Idea" to have a half-page character limit and be weighted the same as "Research Approach" with a four-page character limit was perceived as confusing and dissatisfying. Applicants suggested that the character limit reflect the relative importance of each category in the scoring (i.e., more weighting, more characters). For example:

"The weighting should be closer to the amount of space given in the application sections."

Stage 1 applicant

5. Overall satisfaction with application process

The following section provides an overview of the respondents' experience and feedback on the structured application process. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, and 920 Stage 1 reviewers; associated total responses can be found in Appendix A (Tables 32 -35).

Overall, less than half of Stage 1 applicants (47.1%) were satisfied with the structured application process (Table 32). In open-ended responses, applicants indicated issues with the usability of the form due to the website crashing or formatting challenges when trying to copy from Word documents. For example:

"The structured application was very difficult to use. Copying and pasting sections from a word processing program resulting in changes in formatting and character counts. CIHR's advice to cut and paste from Microsoft into Notepad and then copy to ResearchNet required that all the formatting (bold, italics) had to be re-done which was needlessly time-consuming."

Stage 1 applicant

Applicants also expressed confusion related to the adjudication criteria, specifically the perceived conceptual overlap between the "Quality of Idea" and "Importance of Idea" criteria. For example:

"The lack of clarity regarding the quality of the idea and importance of the idea made it difficult to appreciate what information was required where. Different people had different perspectives on these sections. This ambiguity should be eliminated to avoid discrepancies between applicants."

Stage 1 applicant

When asked to compare the application process to a previous non-pilot competition, 42.2% of Stage 1 applicants indicated that this submission took less time compared to 38% who indicated it took more time (Table 33). Additionally, 37.4% indicated it was easier to use compared to 42.7% who indicated it was harder to use (Table 34). Finally, 37.7% indicated it was less work compared to 40.6% who indicated it was more work (Table 35). The majority of Research administrators (61.4%) and Stage 1 reviewers (61.5%) were satisfied with the structured application process. Research administrators identified that the process was restrictive and did not allow applicants to fully express their ideas and approach.

6. Feedback on the structured application format

The following section provides an overview of the respondents' experience and feedback on the structured application format, one of the new design elements of the Project grant. The idea behind the structured format was to focus applicants and reviewers on specific adjudication criteria. Applicants were provided with half a page for the "Quality of the Idea", one page for the "Importance of the Idea", four and a half pages for the "Approach", and one page for the "Expertise, Experience, and Resources" section. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, and 920 Stage 1 reviewers; associated total responses can be found in Appendix A (Tables 36-43).

Overall, the majority of Stage 1 applicants (77.1%) and Research administrators (82.5%) did not have any non-technical problems in completing the structured application form (Table 36). Over half of Stage 1 applicants (56.6%) and the majority of Research administrators (70.1%) agreed that the structured application format was easy to work with. However, a smaller proportion of Stage 1 applicants (44.4%) and Research administrators (53.8%) indicated that the application format was intuitive (Table 37). Applicants felt they needed not only better instructions for completing the application sections but also clarification on whether information entered in the online form was also required in the PDF. For example:

"The new structure has some merit, but I found it not very intuitive because I did not find clearly defined spots for some critical information. Giving instructions as to what information to include, not only what would be used to judge it, would be very helpful."

Stage 1 applicant

Furthermore, 35.5% of Stage 1 applicants and 45.5% of Research administrators agreed that the experience of submitting the Project Grant structured application was better than their previous experience with CIHR; meanwhile, 23.8% of applicants and 30.3% of administrators indicated their experience was neutral, and 39.7% of applicants and 15.1% of administrators indicated it was worse (Table 38). In the open-ended responses, applicants identified that the format was too restrictive and preferred a longer, free-form format to convey their ideas. The limit on references was found to be confusing and applicants requested that it be removed. Applicants and administrators also requested that limits be increased overall, and specifically requested a section for background information, preliminary data, and tables. For example:

"The current character limits do not allow adequate space to provide adequate background information and a detailed description of the proposed research. This is particularly problematic as it is difficult to know what field of expertise the reviewers of your proposal will come from in the new system."

Stage 1 applicant

When asked whether the character limits were appropriate, 35.1% of Stage 1 applicants agreed the limit was adequate for "Quality of the Idea", 63.6% agreed for "Importance of the Idea", 50.7% agreed for "Approach", and 73.6% agreed for "Expertise, Experience and Resources" (Table 39). Similarly, 38.5% of Research administrators agreed on the adequacy of the character limit for "Quality of the Idea", 67.5% agreed for "Importance of the Idea", 59.5% agreed for "Approach", and 62% agreed for "Expertise, Experience and Resources". Those who did not agree that the character limits were appropriate were asked to identify their ideal limits; for the "Quality of the Idea" criterion, Stage 1 applicants (78.6%) and Research administrators (80%) suggested changing it to one page (Table 40). For "Importance of the Idea", the largest proportion of Stage 1 applicants (46.3%) suggested a limit of two pages and the largest proportion of Research administrators (32.1%) suggested one and a half pages. For the "Approach" criterion, five pages was suggested as the ideal limit by the largest proportion of Stage 1 applicants (23.7%) and Research administrators (35.5%). Finally, for "Expertise, Experience and Resources", two pages was recommended as the ideal limit by the largest proportion of Stage 1 applicants (50.7%) and Research administrators (63.3%) (Table 40). In open-ended responses, respondents indicated the need to increase the space, especially for figures and references. For example:

"Need to increase the character limit for the proposal. Also need to increase the number of pages allowed for figures. Preliminary data is essential to evaluate an application. Some applicants had to "cram in" many figures in 2 pages, resulting in postage stamp size figures that were very difficult to evaluate."

Stage 1 reviewer

For the competition, applicants were permitted to attach figures (maximum two pages) to their application. The majority of Stage 1 applicants (95.8%) attached figures to their applications, with an average of two attachments, and 77% reported that attaching figures was beneficial (Tables 41-42).

Reviewers were also asked for their feedback on the structure of the application format. The majority of Stage 1 reviewers (74.7%) agreed that the structured application format was helpful in their review process and 74.7% agreed that, using this format, applicants were able to convey the information required for them to conduct a complete review (Table 43). Additionally, 72.6% of Stage 1 reviewers found the Stage 1 structured application format easy to work with (Table 37). When asked about the appropriateness of character limits, 75.3% agreed that the "Quality of the Idea" limit was appropriate, 83.1% agreed for "Importance of the Idea", 64.7% agreed for "Approach", and 86.5% agreed for "Expertise, Experience and Resources" (Table 40). Among Stage 1 reviewers who disagreed on the appropriateness of the current character limits, the majority (64.7%) suggested one page for the "Quality of the Idea" criterion, while 27.1% suggested one page for "Importance of the Idea", 87% suggested five pages for "Approach", and 45.1% suggested two pages for "Expertise, Experience and Resources" as ideal limits (Table 35). Overall, 74.7% of Stage 1 reviewers indicated that applicants made good use of the character limits (i.e., if an applicant did not include enough detail, it was because they did not include the "right" detail as opposed to not having enough space) (Table 43). When asked what other instructions they would provide to help applicants use the character limits more efficiently, reviewers indicated that more concrete examples of what is appropriate in each section would help prevent duplication of information. For example:

"Perhaps include examples of the type of information that aligns well with each section. This would reduce repetition across sections, and may reduce instances where applicants simply wrote about one criteria in another section because they had more space in that second section."

Stage 1 reviewer

7. Perceptions about the CV section

The following section provides an overview of the respondents' experience and feedback on the CV section of the application. The Project Biosketch included the following sections: "Recognitions" (Most relevant – up to 5), "Employment" (No maximum), "Leaves of Absence" (No maximum), "Research funding history" (5 years), "Publications" (Most relevant – up to 10), "Intellectual Property" (Most relevant – up to 5), "Presentations" (Most relevant- up to 5), "Knowledge and Technology Translation" (Most relevant – up to 5), and "Supervisory Activities" (Most relevant- up to 10). The Co-applicant CV included: "Employment" (Most relevant – 1 entry), "Publications" (Most relevant – up to 5), and "Knowledge and Technology Translation" (Most relevant – up to 5). The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, 920 Stage 1 reviewers; associated total responses can be found in Appendix A (Tables 44-51).

The instructions for the Project CV were found to be clear (66.5% of Stage 1 applicants; 66.3% of Research administrators) and easy to follow (66.4%; 66.3%). However, 50% of Stage 1 applicants and 44.2% of Research administrators indicated that the CCV was easy to work with (Table 44). Respondents were asked to comment on the usefulness of the CV; 55.6% of Stage 1 applicants and 68.4% of Research administrators agreed that the Project Biosketch CV would be useful for reviewers in determining the caliber of the applicant (Table 45). The majority of Stage 1 reviewers (68.4%) agreed that the Biosketch was useful in determining if the applicant had the necessary experience and expertise required to lead or conduct the proposed research. Additionally, the majority of Stage 1 applicants (74%) reported that completing the "Most Significant Contributions" task would provide useful information to reviewers (Table 46).

Each section of the CV was appraised individually; in general, over 70% of Stage 1 applicants, Research administrators, and Stage 1 reviewers agreed that each section was relevant, with the exception of the "Leaves of Absence" and "Intellectual Property" sections, where 65% and 67.4% of Stage 1 applicants, respectively, agreed on their relevance (Table 47). Similarly, over 70% of Stage 1 applicants, Research administrators, and Stage 1 reviewers found the character limit for each CV section appropriate, with the exception of the "Publications" and "Presentations" sections. For the "Publications" section, 41.8% of Stage 1 applicants, 51.5% of Research administrators, and 61.5% of Stage 1 reviewers agreed on the appropriateness of its character limit. For the "Presentations" section, more than half of Stage 1 applicants and Research administrators (55.3% and 57.6%, respectively) agreed on the appropriateness of its character limit (Table 48). Respondents indicated the space limits should increase for all categories in the CV except the "Employment" and "Leaves of Absence" sections. Respondents suggested that the "Knowledge and Technology Translation" limit be increased to 10 items or more and that "Publications" should have no maximum. More specifically, respondents indicated that they would have liked to include more information about their publications, such as the type of publication and authorship order. Respondents also suggested that the "Supervisory Activities" limit be converted to a timeframe rather than a count of 10 items. Overall, respondents expressed a preference for the Biosketch outline used by the National Institutes of Health over the current CCV format. Respondents reported that the interface of the CCV could use improvement. Applicants reported technological issues that included a slow interface, website crashes, and multiple unnecessary steps to complete their CV. For example:

"First, it would be helpful if the CCV site was able to handle the predictable increases in traffic that occur just prior to registration and application deadlines. Many people wasted needless hours trying to get on the site."

Stage 1 applicant

Respondents indicated the need for the inclusion of tables as well as letters of collaboration. Additionally, respondents reported that the instructions for completing the CV were often inaccurate and unclear, particularly with regards to the multiple CV templates across applicant roles (i.e., co-applicant vs. principal applicant). Applicants did not find the multiple CV templates helpful and would have preferred more consistency in CV templates across different applicants. Respondents were also asked to comment on the usefulness of the co-applicants' CV; 52.5% of Stage 1 applicants and 57.2% of Research administrators agreed that the co-applicant CV would be useful for reviewers in determining the caliber of the co-applicant (Table 49). The majority of Stage 1 reviewers (73.2%) agreed that there was enough information in the co-applicants' CV to determine if they had the necessary experience required (Table 49). Each section of the co-applicant CV was appraised individually and over 70% of Stage 1 applicants, Research administrators, and Stage 1 reviewers agreed that each section was relevant (Table 50). Similarly, over 70% of Stage 1 applicants, Research administrators, and Stage 1 reviewers found the character limit for each CV section appropriate, with the exception of the "Publications" section, where 45.3%, 50.8%, and 67% of Stage 1 applicants, Research administrators, and Stage 1 reviewers, respectively, agreed on the appropriateness of its character limit (Table 51). Feedback on the co-applicant CV included a general sentiment that the limits were too restrictive, and applicants would have preferred the ability to include co-applicants' research funding. Applicants also indicated that the co-applicant CV could mirror the Biosketch and were unsure why co-applicants needed a modified version.

8. Feedback about the budget

Applicants submitted a budget request to support the proposed research program in their application. Reviewers were asked to evaluate if the requested resources were appropriate to financially support the proposed research program as described in the application. Further, CIHR required that budget requests be consistent with the applicant's previous research funding history as determined by the budget baseline provided by CIHR. The following section provides an overview of the respondents' experience and feedback on the budget section of the application. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants and 104 Research administrators; associated total responses can be found in Appendix A (Tables 52-53).

Overall, the majority of Stage 1 applicants (87.1%) and Research administrators (77.9%) found the budget section easy to use (Table 52). Similarly, 81.4% of applicants and 79.2% of administrators found it useful to describe the budget in terms of categories (e.g., "Consumables", "Non-Consumables"), and 69.8% of Stage 1 applicants and 62.4% of Research administrators expressed that the categories allowed them to sufficiently outline the relevant aspects of their budget. Overall, applicants and administrators indicated that the instructions provided in the budget section were clear (82.6% and 71.5%, respectively) and were sufficient to help them understand what information to put in each section (79% and 62.4%, respectively). The majority of Stage 1 applicants (68.8%) preferred the overall budget instead of a year-by-year breakdown and 73.5% agreed that they were able to appropriately justify their budget. A smaller proportion of Research administrators (57.2%) agreed that applicants were able to justify their requested funds and 46.8% preferred to provide an overall budget instead of a year-by-year breakdown (Table 52).

When asked about the budget format, 56% of Stage 1 applicants and 35.1% of Research administrators agreed that the character limits in the overall budget were appropriate. Generally, respondents reported the budget space limits to be restrictive. They also reported having little guidance and unclear category definitions. For example:

"It was not exactly clear what kind of justification was asked for in the budget. An example should be provided for each subsection in the guidelines, such that the presentation would be more uniform."

Stage 1 applicant

When asked about the appropriateness of the specific sections of the budget, over 70% of Stage 1 applicants agreed that the "Research staff", "Trainees", "Consumables", "Non-consumables", and "Other" categories were appropriate; the exception was the Knowledge Translation section, where only 68% of Stage 1 applicants agreed it was appropriate (Table 53). Respondents were able to comment on each section in the open-ended responses. Respondents indicated that it was unclear who should be included in the "Research Staff" category, for example, whether undergraduate volunteers would fit under this category. For example:

"There were some basic things which seemed to go in two categories or neither category. For instance, stipends for trainees was given as an example in both the research staff and the trainees category, and I thought that money for participant incentives is a pretty basic requirement that would be useful to include as an example in the appropriate category."

Stage 1 applicant

Respondents also noted that the space limit for the "Research Staff" category was too restrictive and indicated that the "Trainee" category needed additional clarification regarding what type of student (graduate, medicine, etc.) would belong in which category. They suggested that the "Research Staff" and "Trainees" categories could be merged to avoid confusion. Respondents indicated that the definitions for the "Consumables" and "Non-Consumables" categories were unclear and could have been clarified with additional examples of what a consumable is and is not. They suggested that the criteria for "Consumables" be expanded to include animals and services or maintenance supplies. Respondents indicated that the definition for the Knowledge Translation category was unclear and suggested it would be helpful to have an example of what would and would not fit under this category. More specifically, respondents expressed a lack of clarity on how the knowledge translation category applies to their field, particularly for applicants proposing basic or biomedical research. Finally, respondents indicated that the definition for the "Other" category was unclear, leaving them unsure about what falls under this category. Respondents suggested that mice and animals, services, travel, and publications be placed in the "Other" category.

9. Feedback on the supporting documents

A number of documents were developed in order to support individuals who were involved in the application and review process of the Project grant competition. The following section provides a high-level overview of the respondents' use and feedback on the supporting documents that were provided by CIHR. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, 920 Stage 1 reviewers, 71 VCs, and 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 54-55).

Overall, there was variable use of the supporting documents provided by CIHR (Table 54). Generally, more Stage 1 applicants and Research administrators used the documents compared to reviewers and virtual chairs. The documents used by the fewest respondents across all groups included the Project Scheme Live Pilots - Questions and Answers document, the Project Biosketch: Quick Reference Guide, and the Project Scheme Co-applicant CV - Quick Reference Guide (Table 54).

Over 70% of respondents who used the supporting documents indicated that they were helpful, with two exceptions: the Project Biosketch: Quick Reference Guide, which 62.5% of Stage 1 VCs agreed was helpful, and the Project Scheme Stage 1 Adjudication Criteria Descriptors and Interpretation Guidelines, which 40% of FAS reviewers agreed was helpful (Table 55).

When asked for feedback regarding the supporting documents, respondents expressed that there were too many different documents with a lot of overlap in the information provided. Respondents suggested that it would have been more useful to consult a "master file", or a single document that contained the most important material. For example:

"All the documents were helpful to have on hand although there seemed to be some detail missing. Also, it was frustrating to constantly have to refer to different documents and keep track of which was which. A master pdf document, with an interactive table of contents or hyperlinks within the document would be helpful."

Stage 1 research administrator

Respondents also indicated that the language used in the documents was often difficult to understand. They expressed that more lay language could be used instead of legal or technical terms to improve the clarity of the information. Respondents also identified a need for the supporting documents to be more accessible and for greater awareness of where they could be found. For example:

"I was unaware of the existence of the documents I indicated I did not read. Making all of these documents prominent and easily available on the reviewing web page work be very helpful."

Stage 1 reviewer

Respondents also suggested that the clarity of the supporting documents could be improved by incorporating more examples of certain criteria. When asked about additional ideas for future documents that would be helpful for the role of a reviewer, respondents indicated that the webinars were more helpful than the supporting documents.

10. Feedback on the learning materials

A number of interactive learning lessons were developed in order to support individuals who were involved in the application and review process of the Project grant competition. The following section provides a high-level overview of the respondents' use and feedback on the learning materials that were provided by CIHR. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, 920 Stage 1 reviewers, 71 Stage 1 VCs, and 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 56-57).

Generally, there was variable use of the learning materials provided by CIHR (Table 56). The learning materials that were used by the fewest respondents across all groups included the webinars on the Stage 1 application for the Project Scheme, and materials specifically created for Stage 1 reviewers and Stage 1 VCs. For those who did use the learning materials, over 70% of respondents indicated that they were helpful, with the exception of the Stage 1 Application Process Webinar, where 68.1% of Stage 1 applicants indicated it was helpful (Table 57). When asked for feedback regarding the learning materials, respondents indicated that the learning materials largely repeated the information found in the supporting documents. This resulted in the perception that it may not be necessary to consult both the documents and the webinars. For example:

"Most of what was in the webinar was already known and available on the CIHR website. What I find frustrating is the inconsistency of information. I know for a fact that different answers were given to the same questions in different webinars."

Stage 1 applicant

Respondents also indicated difficulties with the online platform and with carrying on continuous discussions with individuals across time zones. When asked how the learning materials could be improved, respondents indicated that it would be helpful if the webinars were more synchronous with regard to timing and accessibility. In addition, respondents suggested that the materials and responses from the Q&A webinar should be shared with all participants and be available for viewing at all times. For example:

"The webinar should have been done as one freely accessible YouTube session as well. That might even help prospective applicants to improve their submissions."

Stage 1 reviewer

Research administrators were asked if they had developed any special training for their researchers to help them in the application process; 45.2% indicated they had and 54.8% indicated they had not (Table 58). When asked to describe the training in open-ended questions, administrators described a combination of activities, including advising researchers on applying to the grant and providing overviews of the review process. Other activities included organizing workshops, creating information sheets, and providing one-on-one coaching support. For example:

"I prepared an email communication with instructions regarding the application process that was sent to all registrants and their department administrators. I provided feedback on the applications I reviewed and also provided one-on-one advice over the phone or email as questions arose. We also have a separate unit on campus that organized some workshops and provided additional review/editing services."

Research administrator

11. ResearchNet

The following section provides an overview of the respondents' experience and feedback with their use of ResearchNet and specific feedback on its usability during the application process and review process. Results in this section are organized by application processes, review processes, and feedback on support. The proportions calculated in this section are based on the number of valid responses from 1614 Stage 1 applicants, 104 Research administrators, 920 Stage 1 reviewers, 71 Stage 1 VCs, 14 FAS reviewers, and 1239 Applicants after decision; associated total responses can be found in Appendix A (Tables 59-65).

11.1 Application process

A plurality of Stage 1 applicants (49.5%) and the majority of Research administrators (93.4%) indicated they used a Windows system to access ResearchNet (Table 60). Overall, 60.3% of Stage 1 applicants were satisfied with access to ResearchNet (Table 62). Regarding general usability, 83.2% of Stage 1 applicants and 81.5% of Research administrators found ResearchNet easy to use (Table 63). The majority of Stage 1 applicants (75.4%) agreed that they were able to enter their application information into ResearchNet without any difficulty. Those who did experience difficulties described them in the open-ended responses. Applicants experienced technical issues with the formatting features in ResearchNet, specifically around the accuracy of the character counts. For example:

"Character count doesn't match up with word resulting in having to edit in ResearchNet. Windows to enter information are too small for larger sections (anything that's more than half a page). Formatting is all lost/messed up."

Stage 1 applicant

Additionally, applicants indicated that it took time to explain the application process to co-applicants who were from outside of Canada or new to the system. Applicants suggested that the principal applicant be allowed to input co-applicant information to reduce this challenge. For example:

"Let us submit CV info on behalf of co-applicants after they provide the confirmation codes- having to do so again between registration and the full application, particularly requiring additional materials, was troublesome."

Stage 1 applicant

Features that respondents liked about ResearchNet included having instructions in one place and the ability to set internal deadlines as a Research administrator. Overall, 85.5% of Stage 1 applicants and 75% of Research administrators were able to submit their applications efficiently using ResearchNet (Table 63). The majority of Research administrators (93.4%) indicated that it would be helpful to have a test account, and 92.1% indicated that they would like to have access to a test account for all of CIHR's open programs (Table 64).

11.2 Review process

A plurality of Stage 1 reviewers (46.4%) and Stage 1 VCs (46.5%), and the majority of FAS reviewers (64.3%), indicated they used a Windows system to access ResearchNet (Table 60). Regarding general usability, 88.7% of Stage 1 reviewers, 84.5% of Stage 1 VCs, and 75% of FAS reviewers found ResearchNet easy to use. Additionally, 79.4% of Stage 1 reviewers indicated that the structured review process in ResearchNet was user-friendly and 83.3% of FAS reviewers agreed that the binning process in ResearchNet was user-friendly. Overall, 85.1% of Stage 1 reviewers and 41.6% of FAS reviewers were able to efficiently review the applications using ResearchNet (Table 63). Open-ended responses from reviewers and chairs indicated that they would prefer an improved notification system to alert them when applications are assigned, new discussions are posted, or rankings are changed. For example:

"There was a technological glitch in that when new discussions came in it was not flagged -- this was extremely frustrating and took up additional time as I had to constantly remember when I last viewed the discussions and then try to find new discussion threads."

Stage 1 VC

Reviewers also expressed that the session time available to complete their reviews before timeout was insufficient, and they had to re-enter their reviews multiple times. To avoid this, reviewers recommended that an automatic saving feature be added. For example:

"The problem as you pointed out in the guide was when it times out you lose the work you had been doing. The timeout period is too short and an auto save feature would make sense to avoid this problem."

Stage 1 reviewer

11.3 Feedback on ResearchNet support

Participants who had problems with ResearchNet contacted the ResearchNet support service. Stage 1 applicants, Research administrators, Stage 1 reviewers, and Stage 1 VCs expressed general satisfaction with the timeliness of support (71.4%, 73.5%, 76.4%, and 75.9% respectively). However, a lower proportion of FAS reviewers (41.7%) and Applicants after decision (31.4%) were satisfied with the timeliness (Table 65). Similarly, Stage 1 applicants, Research administrators, Stage 1 reviewers, and Stage 1 VCs expressed general satisfaction with the helpfulness of the support service (71.7%, 71.4%, 77.6%, and 74.9% respectively); however, a lower proportion of FAS reviewers (45.5%) reported being satisfied (Table 65). A minority of Applicants after decision also indicated that CIHR's responses were complete, consistent, and accurate (20.9%), and 49.4% agreed that CIHR staff were courteous.

12. Perceptions of the review format

The following section provides an overview of the reviewers' experience and feedback with the format of the review worksheet. One of the design elements of the Project scheme is the structured review. The idea behind this structured format was to focus reviewer feedback on the specific adjudication criteria. Reviewers were asked to provide comments for each adjudication criterion and were provided with half a page for strengths and half a page for weaknesses for each. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers and 14 FAS reviewers; associated total responses are found in Appendix A (Tables 66-69).

Overall, over 70% of Stage 1 reviewers felt that the character limit in the structured review worksheet was adequate to respond to each adjudication criterion (Table 66). Of those who did not find it adequate, the most commonly suggested ideal limit for each criterion was one page: "Quality of the Idea" (45.2%), "Importance of the Idea" (42.6%), "Approach" (49.4%), and "Expertise, Experience and Resources" (46.2%) (Table 67). Additionally, 22.7% of Stage 1 reviewers agreed that the adjudication worksheet allowed for a better assessment of the IKT approach (Table 68). All of the FAS reviewers indicated that the character limit on the adjudication worksheet was adequate (Table 69). Generally, in open-ended responses, reviewers requested more generous limits for both applicants and themselves to respond to the adjudication criteria. Although there were positive comments indicating that the form was fine to use, reviewers also identified that they would prefer to combine the strength and weakness sections of the review worksheet.

13. Overall satisfaction with the review process

The following section provides an overview of the respondents' experience and feedback with the review process. Results in this section are organized by Stage 1 and FAS. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 Reviewers, 71 VCs, 14 FAS reviewers, and 1239 Applicants after decision; associated totals can be found in Appendix A (Tables 70-78).

13.1 Stage 1

Overall, Stage 1 reviewers were divided in how satisfied they were with the review process: 42.6% were satisfied and 49.3% were dissatisfied. Additionally, 17.2% of Stage 1 VCs responded that they were satisfied, compared to 79.7% who were dissatisfied with the review process (Table 70). Open-ended responses from reviewers identified that they were unclear about how to judge the sections using the adjudication scale, that they required more time to complete their reviews, and that the engagement of reviewers needs to be enhanced. For example:

"No accountability. Many reviewers didn't submit reviews and those that did, some did not interact during online discussion."

Stage 1 reviewer

Moreover, reviewers expressed a need to enhance the quality of reviews, as feedback was not consistently provided or was generally too brief to be useful to applicants. For example:

"The discrepancy between the adjudication scores, overall scores and the overall ranking was a huge problem. Furthermore, reviewers were adjusting their scores based on the discussion without adjusting their reviews."

Stage 1 reviewer

The majority of Stage 1 reviewers (78.5%) indicated that the applications fell within their area of expertise, 80% were comfortable reviewing the applications and 70.7% were able to assess both the concept and the feasibility of the proposed research. Almost all (96%) of Stage 1 reviewers also agreed that they were able to effectively report their conflict of interest (Tables 71-74). Additionally, the majority (74.7%) indicated that the review process was appropriate and 71.6% agreed it was a useful way to provide feedback to applicants; however, a smaller proportion (42.8%) agreed that it was a better way to provide feedback. The majority of reviewers (68.9%) agreed that the review process was intuitive; however, just over half (54.5%) indicated that the new process made it easier to review (Table 75).

Stage 2 applicants after decision were asked to comment on the Stage 1 review process. The majority (77.9%) indicated that they were not satisfied with the Stage 1 review process (Table 70). Satisfaction with the review process was associated with success: more applicants who were successful in receiving funding were satisfied (40.8%) compared with those who were unsuccessful (8.1%) (n=1139, p<0.001; Table 76). When asked about the value of the review process, the majority of Applicants after decision (74.7%) also indicated that they saw value in the structured review process and that the process for Stage 1 was fair and transparent (71.6%) (Table 77). In open-ended responses, applicants indicated that the process was valuable if reviewers followed the process and provided clear justifications for their ratings. For example:

"I agree with the principle behind this idea, but the execution was not well done. Justifications were highly positive for most non-funded applicants I spoke with. By making most rankings positive (e.g., variations of outstanding or excellent), it doesn't demand that reviewers point out negatives, which is necessary for improving applications for resubmission."

Applicant after decision

In open-ended responses, applicants indicated that the adjudication process could be improved by addressing the perceived lack of reviewer accountability. Applicants suggested that the re-introduction of face-to-face reviewer panels is essential to addressing this issue. Other suggested ways to improve reviewer accountability included providing reviewers with incentives, making reviewers' identities known, having virtual chairs review the submitted reviews, and allowing applicants to respond to reviews. Applicants also indicated that the adjudication process could be improved by addressing the perceived lack of consistency among reviewers and suggested that mandatory discussion of any discrepancies between reviewers is central to addressing this issue. For example:

"I think in cases where there are large differences in reviewers scores, there needs to be some discussion to form a consensus. E.g. did the grant just happen to fall into a reviewer's pool of grants that were all scored very high or low and that's why there is a difference?"

Applicant after decision

Applicants also suggested that ensuring consistency in the number of grants each reviewer is assigned (e.g., all reviewers have five grants) and in their content (e.g., all grants reviewed are in basic research) would help address this issue. Additional ways to improve reviewer consistency included providing training to VCs and reviewers, removing the lowest score awarded, and allowing applicants and/or VCs to contact the reviewers, which could impact reviewers' future involvement in reviewing applications. For example:

"I think it would also be useful to provide training to referees - for example one of mine asked why there were no letters of collaboration in the application when these were not allowed - so it is important that reviewers know the guidelines of what was asked for in the application…"

Applicant after decision

Applicants suggested that the review process could be improved by ensuring applications are reviewed by appropriate and qualified reviewers who provide meaningful feedback to applicants. Applicants indicated that reviewers should be experienced, have the appropriate expertise, and have no conflicts of interest. For example:

"It is also important to make sure the reviewers knows the field, or at least is knowledgeable enough to be a reviewer. I have spoken to several reviewers who are purely clinical researchers and were assigned basic science grants, and they admit they had no idea what they were doing. I assume the reverse has occurred."

Applicant after decision

To achieve this, applicants suggested that CIHR make it mandatory for researchers who have previously been funded by CIHR to serve as reviewers. Applicants also suggested that greater transparency in the application and adjudication process would be an improvement; specifically, CIHR needs to communicate clearly to both applicants and reviewers so they have the same understanding of what is expected in each section. Finally, based on both the feedback received from reviewers and the weighting of the adjudication criteria, there was a perception that basic research is at a disadvantage. To improve the review process for applicants, CIHR could decide on its funding interests (i.e., basic vs. translational research) and communicate these to potential applicants.

13.2 Final assessment stage

Over half of FAS reviewers (58.3%) indicated they were dissatisfied with their segment of the review process. Additionally, 38.4% of Stage 2 applicants after decision were dissatisfied with the final review process (Table 70). Satisfaction was associated with applicants' success in the FAS: more applicants were satisfied if they were successfully funded (30.0%) compared with those who were unsuccessful (5.5%) (n=624, p<0.0001; Table 76). In open-ended responses, applicants indicated that they did not receive notes from the scientific officer or any feedback from reviewers. For example:

"No SO notes for my application…so I have no clear understanding of any collective assessment, which were inconsistent with each other and with individual ratings…this is painful to receive absolutely no context, and quite frankly, completely opposite views on the same point from separate reviewers does not serve me, nor the scientific community…"

Applicant after decision

Applicants felt they received a wide range of rankings from reviewers and were concerned about the clarity of the rating system. When asked about the consistency of reviews, applicants indicated that their ratings did not align with their associated rank and that there was also a large discrepancy between reviewers. For example:

"I was utterly shocked to receive wildly divergent scores but no discussion. This was promised by CIHR - my application should have been discussed in Phase 2. I was denied Phase 2 review…"

Applicant after decision

When asked about the fairness of the reviews, applicants indicated that the reviewers did not seem to be knowledgeable enough in the field to review their application and also expressed frustration with not having a numerical scale for transparency. For example:

"There were no comments by any of the reviewers about deficits to the approach. All the comments focused on 'beliefs' held by the reviewers, suggesting that at least some of the reviewers had insufficient expertise to truly evaluate the project on its scientific merits."

Applicant after decision

The majority of Applicants after decision (92.5%) responded that they did not contact CIHR about the review process for Stage 1 or the FAS (Table 78). Suggestions for improvement included improving how reviewers were assigned to applications, taking into account their expertise and experience in the field of research. Additional suggestions included re-implementing the face-to-face component to enhance engagement and accountability.

14. Reviewers' experience with the rating and ranking process

Stage 1 reviewers were asked to rate each adjudication criterion for each application they were assigned. A list of applications, ranked from highest to lowest rated, was generated based on those ratings. Reviewers were then responsible for validating the ranked list that was generated and moving applications up or down the list as appropriate. The following section provides an overview of the Stage 1 reviewers' experience and feedback with the rating and ranking process during the review process. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers; associated total responses can be found in Appendix A (Tables 79–82).
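
To illustrate the mechanics described above, the sketch below shows one possible way a ranked list could be generated from criterion ratings. It is a minimal, hypothetical Python illustration only: the application identifiers, rating values, and the simple averaging rule are assumptions made for this sketch, not CIHR's actual scoring algorithm.

# Minimal illustration: hypothetical ratings and a simple averaging rule show how
# criterion ratings could produce an initial ranked list for a reviewer to validate.
criteria = ["Quality of the Idea", "Importance of the Idea",
            "Approach", "Expertise, Experience and Resources"]

# Hypothetical ratings (higher is better) given by one reviewer to assigned applications.
ratings = {
    "APP-101": {"Quality of the Idea": 4.0, "Importance of the Idea": 4.5,
                "Approach": 3.5, "Expertise, Experience and Resources": 4.0},
    "APP-102": {"Quality of the Idea": 3.0, "Importance of the Idea": 3.5,
                "Approach": 4.0, "Expertise, Experience and Resources": 3.0},
    "APP-103": {"Quality of the Idea": 4.5, "Importance of the Idea": 4.0,
                "Approach": 4.5, "Expertise, Experience and Resources": 4.5},
}

# Generate the initial rank-list from highest to lowest average rating.
rank_list = sorted(ratings,
                   key=lambda app: sum(ratings[app].values()) / len(criteria),
                   reverse=True)
print(rank_list)  # ['APP-103', 'APP-101', 'APP-102']
# The reviewer could then move applications up or down this list before submitting to CIHR.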

Overall, 82.6% of Stage 1 reviewers agreed that the ratings produced a rank-list from best to worst (Table 79). The majority of reviewers agreed that rating each criterion was a useful tool that helped them rank their applications (70.7%) and that it was easy to use (75.4%) and intuitive (69.4%) (Table 80). Additionally, reviewers agreed (76.4%) that it was appropriate to adjust the ranking of applications before they submitted their decisions to CIHR, as the ratings were meant to be a tool to inform their overall ranking decisions. During the Stage 1 review process, 68.8% of reviewers indicated they needed to break ties between applications (Table 81). Over half of Stage 1 reviewers (57.2%) were clear on the purpose of breaking ties; similarly, 62.1% agreed that the process of breaking ties was clear and 58.9% agreed it was intuitive (Table 82). Generally, Stage 1 reviewers had to break either multiple ties or none at all. Reviewers indicated there was a lack of clarity around the purpose of breaking ties and the proper process for doing so. For example:

"What is the point of breaking ties in the middle of the pack when only the top 1-2 applications get funded? And when some application [that] I rate low gets a great rating from another reviewer but we never discuss our conflicting impressions?"

Stage 1 reviewer

Reviewers were concerned that the rankings were artificial, given that each reviewer received a different set of applications and that the ratings were not taken into consideration in the final rank; applications were too different from one another to be properly compared across reviewer groups. For example:

"I think the ranking of applicants per reviewer is totally irrelevant. As a clinician scientist, I was reviewing applications from both my clinical and basic science backgrounds - so my list of applications was extremely diverse and many had absolutely nothing to do with each other. So ranking them was completely artificial and had no relevance to anything."

Stage 1 reviewer

Reviewers admitted to changing the rank order to get the applications into an order that they deemed appropriate, rather than basing it on the rating for each adjudication criterion. Reviewers also mentioned having to adjust their rankings after learning how others were using the scale. For example:

"I changed rank order to get the applications into the order that seemed most appropriate according to my gestalt of which were the best applications. This was not necessarily based on the rating scale for each individual criterion, so somewhat subjective!  Perhaps an "overall" rating scale is also needed?"

Stage 1 reviewer

15. Experience reading the reviewers' reviews

Reviewers were allowed to read other reviewers' preliminary reviews, giving them the opportunity to calibrate their own reviews and to identify any discrepancies between their assessments and those of others in the absence of a face-to-face committee meeting. The following section provides an overview of the reviewers' and VCs' experience with reading others' reviews. This section is organized by Stage 1 and FAS. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers, 71 VCs, and 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 84–86).

15.1 Stage 1

Stage 1 reviewers (90.3%) and Stage 1 VCs (97.1%) responded that they had read the preliminary reviews of other reviewers (Table 83). Stage 1 VCs also indicated that they had read the applications assigned to their reviewers (92.9%). More than 80% of both Stage 1 reviewers and VCs agreed that reading others' reviews was helpful and an important part of the review process. Additionally, 76.2% of Stage 1 reviewers indicated that reading others' reviews influenced their assessment of at least one application (Table 84). When asked why they read the preliminary reviews, respondents expressed the belief that it was their responsibility to read every review in order to help integrate expert opinions, understand discrepancies, and stimulate discussion. Respondents also viewed this as a valuable process to determine if others had a similar understanding of the application and, more specifically, to see if others perceived similar strengths and weaknesses. For example:

"To get a better sense of why their scores differed from mine or to see whether I had missed important points (positive or negative) to appropriately evaluate a grant."

Stage 1 reviewer

When asked why they did not read the other preliminary reviews, respondents indicated that reviews were not helpful when they were not submitted on time or when they were too brief and poorly written to be constructive. For example:

"Everything seemed to be happening to the deadlines. I was under time pressure because I needed to be away during some of the "virtual meetings" and when I submitted my final material not much else was available to compare."

Stage 1 reviewer

15.2 Final assessment stage

FAS reviewers indicated that they had read the comments of other FAS reviewers (92.9%) and read the applications to which they were assigned (92.9%) (Table 83). All of the FAS reviewers agreed that reading these comments was necessary for the FAS; however, only 46.2% agreed that the comments provided by other reviewers were helpful in their preparation for the face-to-face meeting (Table 83). Responses from FAS reviewers were divided when asked if the comments influenced (46.2%) or did not influence (53.8%) their assessment of the application (Table 85). Those who indicated that reviewer comments influenced their assessment responded that it happened "Occasionally" (66.7%) (Table 86). When asked to provide any comments regarding Stage 1 reviews and their usefulness for the FAS, respondents indicated that they were not helpful, as some reviewers were assigned applications outside of their field of expertise. For example:

"The drawback here was that many/some of the final assessment stage reviewers did not have the expertise to review the application."

FAS reviewer

16. Assessment of review quality

The following section provides an overview of the respondents' experience and feedback with the quality of reviews. High-quality reviews should have clearly described strengths and weaknesses; included constructive and respectful justifications for each given rating; and inspired confidence in the reviewer's ability to fairly assess the application. Results in this section are organized by Stage 1 and Final Assessment Stage. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers, 71 VCs, 14 FAS reviewers, and 1239 Applicants after decision; associated total responses can be found in Appendix A (Tables 87–92).

16.1 Stage 1

The majority of Stage 1 reviewers (63.2%) indicated that there were issues with the quality of the reviews (Table 87). On average, 30% of preliminary reviews were deemed unsatisfactory. When Stage 1 reviewers were asked to provide feedback on preliminary review quality, over 80% agreed that the reviews they read did not disclose personal information about the reviewer, did not make any inappropriate reference to the applicant, their research institution, or field, and were respectful. A smaller proportion of Stage 1 reviewers agreed that other reviewers' comments sufficiently justified strengths and weaknesses (67.6%), had an appropriate balance of strengths and weaknesses to support ratings (62%), provided comments that were focused on the adjudication criteria (65%), had an absence of factual errors (63.5%), and provided comments that were clear (68%) (Table 88). When asked about quality issues in preliminary reviews, respondents indicated there was a great deal of variability in reviewer rankings, score justifications, and comments. Respondents also expressed that a portion of reviews were too short, that comments were not useful, and that some reviewers lacked expertise and may not have thoroughly read the application. For example:

"For my reviews and the other reviews that I read, at times there is a mismatch between the review text and the score; that is perhaps reflective of the slate of applications the specific reviewer was evaluating and whether the reviewer as well matched in expertise to the applicant's research."

Stage 1 reviewer

The majority of VCs (92.5%) also indicated that there were issues with the quality of the reviews (Table 87). On average, Stage 1 VCs indicated that 39% of the reviews were of unsatisfactory quality. As with Stage 1 reviewers, over 80% of VCs agreed that reviewers' comments did not disclose personal information about the reviewer, did not make any inappropriate reference to the applicant, their research institution, or field, and were respectful (Table 88). However, only about half agreed that reviewers' comments sufficiently justified strengths and weaknesses (43.3%), had an appropriate balance of strengths and weaknesses to support ratings (40.3%), provided comments that were focused on the adjudication criteria (55.3%), had an absence of factual errors (49.3%), and provided comments that were clear (44.8%). When asked to identify criteria that are important in determining review quality, over 80% of Stage 1 reviewers and VCs agreed that the following criteria were important: sufficient justification and balance of strengths and weaknesses; having adjudication-focused comments; the absence of factual errors; having clear and respectful comments; the absence of inappropriate references to the applicant, their institution, or field; and the absence of personal reviewer information (Table 89).

16.2 Final assessment stage

When FAS reviewers were asked about the quality of Stage 1 reviewers' comments, all of them agreed that the Stage 1 comments were unclear and did not adequately justify the ratings provided. Additionally, all FAS reviewers agreed that Stage 1 reviews did not provide sufficient feedback to support their ratings (Table 90). When asked to provide comments regarding the quality of the reviews that were received, reviewers suggested that the unsatisfactory review quality may have been due to reduced accountability. For example:

"Many of the other FAS reviewers did not provide much in the way of comments to justify their decision."

FAS reviewer

It was suggested that this would be best resolved through the use of face-to-face meetings.

After the final assessment judgments were made, Applicants after decision were asked about the quality of the reviews they received. A minority were satisfied with the consistency of peer review judgments (19.6%), the quality of peer review judgments (23.1%), and the quality of the Scientific Officer's notes (9.9%) (Table 91). On average, applicants deemed 50% of the reviews they received to be of unsatisfactory quality. Additionally, less than 40% of applicants agreed that Stage 1 reviewer comments sufficiently justified strengths and weaknesses, had an appropriate balance of strengths and weaknesses to support ratings, provided comments that were focused on the adjudication criteria, had an absence of factual errors, and provided comments that were clear. However, the majority of applicants did agree that the comments did not disclose personal information about the reviewer (70.9%), did not make any inappropriate reference to the applicant, their research institution, or field (59.6%), and were respectful (60.6%) (Table 88). Additionally, 30.1% agreed that the reviews were consistent in that the written justifications (i.e., strengths and weaknesses) aligned with the respective ratings, and 30.1% agreed that the information in the comments would be useful in refining their application for a future competition (Table 92). Applicants indicated that the comments were too brief and the scores were not properly justified for them to use the information to improve their application.

When asked to identify criteria that are important in determining review quality, over 70% of Applicants after decision agreed that sufficiently justifying strengths and weaknesses, an appropriate balance of strengths and weaknesses to support ratings, adjudication-criteria-focused comments, an absence of factual errors, providing clear comments, having respectful comments, and an absence of inappropriate references to the applicant(s), the research institution(s), or the research field were all important. A smaller proportion of Applicants after decision (65.9%) agreed that not disclosing personal reviewer information was important (Table 89). When asked for additional quality indicators (or elements) believed to be important in defining review quality, respondents indicated that reviewers should be content experts in order to provide quality feedback. For example:

"Reviewers must have knowledge/expertise related to the applications they're reviewing, and they must adequately and appropriately justify their rankings with provision of useful feedback."

Applicant after decision

When asked to propose additional criteria for review quality, respondents suggested that there should be a mandate for participation, comments, and a detailed justification of ratings. Face-to-face meetings were perceived to be a critical component in facilitating effective discussions, which was missing in the online format.

17. Experience with the online discussions

The purpose of the online discussion was to give Stage 1 reviewers the opportunity to calibrate their reviews and to discuss any discrepancies between their ideas and the reviews of others in the absence of a face-to-face committee meeting. The following section provides an overview of the reviewers' and VCs' experience and feedback with the online discussion tool and participating in the online discussions. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers, 71 VCs, and 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 93–98).

Overall, Stage 1 reviewers read online discussion posts (94%) and participated in an online discussion (87.7%) (Table 93). On average, they read the online discussion posts for eight applications and participated in an online discussion for six. The most common reasons for participating were a scoring discrepancy between themselves and another reviewer (59.1%) and prompting by the VC (56.2%) (Table 94). In open-ended responses, respondents identified that they participated in the discussion because they believed it was their responsibility. For example:

"Because that is the duty of a reviewer. However, many reviewers did not participate - even after prompting by myself and the virtual chair."

Stage 1 reviewer

Reviewers also participated in online discussions to seek clarification from others, especially content experts. Reviewers valued the ability to discuss in order to gain the insight of others, encourage discussion, and obtain budget clarifications. However, the discussions were only viewed as helpful if reviewers participated and provided in-depth discussion, which was not consistently reported. For example:

"It's supposed to be a group effort. Given this flawed system it was the only way we could discuss grants. Unfortunately some reviewers chose not to participate which was appalling."

Stage 1 reviewer

The majority of Stage 1 reviewers agreed that the online discussion tool was easy to use (74.6%) and intuitive (71.4%) (Table 95). Additionally, reviewers identified that the online discussion was an important part of the review process (70%), helpful in the review process (66.8%), should be mandatory for reviewers who have divergent views on the same application (80.3%), and influenced their assessment of an application (73.7%). A smaller proportion of reviewers agreed that their participation was considered by other reviewers (57%). In contrast, when VCs were asked for their feedback on the online discussions, about half (51.5%) indicated that the online discussion was helpful and only 26.5% indicated that reviewers were actively participating in online discussions (Table 95). On average, Stage 1 VCs responded that 58% of their reviewers required prompting to participate in an online discussion. The majority agreed that the online discussion was an important part of the Stage 1 review process (70.6%) and should be mandatory for reviewers who have divergent views on the same application (89.7%) (Table 95). VCs identified in open-ended responses that there was variable quality and engagement from reviewers in participating in online discussions. When asked to provide feedback on who should decide when an online discussion should take place, the majority of Stage 1 reviewers (67.6%) and VCs (88.8%) identified that the VC was the most appropriate person to make that decision (Table 96). Reviewers (71.4%) and VCs (87.3%) indicated that a scoring discrepancy should be used to determine whether an online discussion takes place (Table 97). The spreadsheet that CIHR provided to the VCs to help them identify applications that should be discussed was helpful according to 69.1% of VCs and useful according to 54.4% (Table 98).

18. Feedback on the Virtual chairs

The following section provides an overview of survey respondents' feedback on the VC role. The VCs' role in Stage 1 was to confirm application assignments to reviewers, ensure that high-quality reviews were submitted, flag applications that should be discussed by reviewers, monitor and/or prompt online discussions, and communicate with CIHR staff as required. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers and 71 VCs; associated total responses can be found in Appendix A (Tables 99–102).

The majority of Stage 1 reviewers (63.1%) found that the participation of their VC was beneficial to them during the online discussion period (Table 99). When asked about the experience, 17.1% of Stage 1 virtual chairs agreed that they were able to assign the correct complement of expertise to the applications (Table 100). Open-ended responses identified that virtual chairs were unaware that they were able to assign reviewers and would have wanted that opportunity. For example:

"I had not understood that I was supposed to assign applications. This was done by CIHR staff. Most reviewers had appropriate expertise I think, but I thought the judgment or competence of a few of them was dubious"

Stage 1 VC

VCs also shared the perception that they spent too much time trying to prompt and monitor discussions. Chairs indicated that the review process required scores to be submitted in advance; however, some reviewers submitted reviews very late or did not submit them at all in time for the online discussion. For example:

"Some were good, others were non responsive - the online discussion simply does not work - it cannot come even close to a face to face discussion because typing stuff is cumbersome, and the time delay between comments is too great…"

Stage 1 VC

VCs suggested the re-introduction of face-to-face meetings for Stage 1 procedures to increase reviewer accountability. The majority of Stage 1 VCs (65.6%) indicated that they were dissatisfied with their role (Table 101); comments indicated that they felt they spent too much time chasing reviewers' comments and lacked control over the selection of their reviewers. Approximately half (48.4%) received questions from reviewers regarding the new review process (Table 102). Common questions received by virtual chairs included clarification on the rating scale, time frames for discussion and submission, and how to change preliminary ratings.

19. Perceived workload

One of the goals of the new review process was to decrease reviewer burden and the amount of work required to conduct reviews. The following section provides an overview of the reviewers' and virtual chairs' perception of their workloads. The proportions calculated in this section are based on the number of valid responses from 920 Stage 1 reviewers, 71 VCs, and 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 103–111).

19.1 Stage 1

Generally, most Stage 1 reviewers indicated that their workload was "Manageable to challenging" (33.3%) or "Challenging" (32.1%) (Table 103). On average, Stage 1 reviewers were assigned 10 applications. Compared to previous CIHR non-pilot competitions, 30.8% of reviewers indicated it was less work, 15.1% were neutral, and 48.6% indicated that it was more work (Table 104). When assessing each review activity in comparison to the previous competition, 20% of reviewers agreed it was more work reading one application, 30.8% said it was more work looking up additional information related to one application, and 26.1% said it was more work writing the review of one application (Table 105). On average, Stage 1 reviewers took two hours to read a single application, two hours looking up additional information, one and a half hours writing the review of a single application, one and a half hours reading other reviews, two hours participating in online discussions, and one and a half hours completing the ranking of assigned applications (Table 106). Overall, feedback in open-ended responses indicated that reviewers had insufficient time to complete their reviews and that last-minute shifts in timelines made completing the reviews very difficult. The time spent looking up additional information depended on the reviewers' familiarity with the subject matter. For example:

"Depending on the application and my familiarity with the subject matter, applications could take at least 1-2 hours to read and score."

Stage 1 reviewer

Being unaware of when reviewer comments would be posted was a problem for reviewers, as it impacted their workloads: they had to constantly check in and log on. The change in, and limited time frame of, the discussions also impacted reviewer workload. The change in timeline resulted in reviewers being unavailable for the new time frame and consequently decreased reviewer participation.

The majority of Stage 1 VCs (88.6%) agreed that the number of applications they were assigned was appropriate (Table 107) and 84.4% agreed that their workload was manageable (Table 108). When compared to their workload the last time they had chaired, 25.7% of VCs indicated it was more work, 28.6% were neutral, and 45.7% indicated it was less work (Table 109). On average, Stage 1 VCs were assigned 30 applications. Feedback from chairs indicated that 25 applications would be an appropriate number. On average, Stage 1 VCs took one and a half hours confirming application assignments for reviewers, eight hours reading applications assigned to their reviewers, eight hours reading preliminary reviews completed by their reviewers, five hours ensuring the quality of reviews submitted by their reviewers, four and a half hours initiating online discussions, three and a half hours prompting/reminding reviewers to participate in an online discussion, five hours participating in online discussions, and one and a half hours communicating questions, concerns, and/or feedback to CIHR (Table 106). Feedback from open-ended responses indicated that VCs were disappointed with the lack of expertise of chairs and with application-reviewer mismatch. Generally, the number of applications was perceived to be appropriate; however, VCs were concerned with shifting timelines and the quality of reviewers' engagement. For example:

"The date for the asynchronous review was changed 2 weeks before the actual review. I had booked off time to manage the review process and then had to try to change this at the last minute. This was not completely possible (I was booked to present at a conference during the eventual review period). Changing the date at the last minute was highly inconvenient for me."

Stage 1 VC

VCs indicated that they read applications when there were discrepancies in scores; however, this was dependent on reviewers submitting their ratings on time. For example:

"Reading the reviews was critical. My point is that access to preliminary reviews was unhelpful. All reviews should have been available at the beginning of the committee process. I have never been part of chairing a committee at CIHR where there were so many missing reviews. It was impossible to keep up with trying to move towards agreement, when reviews are still coming in, even after the initial discussions were started."

Stage 1 VC

VCs indicated that they spent a lot of time prompting reviewers to submit their reviews and suggested that a buffer be built into the time frames to accommodate late reviews. Additionally, VCs suggested highlighting reviews that had been changed to reduce the perceived burden and to facilitate direct communication with reviewers.

19.2 Final assessment stage

Generally, Final Assessment Stage reviewers indicated that their workload was "Just right" (30.8%), "Manageable to challenging" (23.1%), or "Challenging" (23.1%) (Table 103). FAS reviewers agreed they had a sufficient amount of time ahead of the meeting to complete pre-meeting activities (83.3%) (Table 110) and the majority (53.8%) spent 1-2 hours reading the comments of other reviewers (Table 111). Compared to the last CIHR non-pilot competition, 23.1% of reviewers said it was less work, 33.3% were neutral, and 61.6% said it was more work (Table 104). On average, FAS reviewers were assigned 10 applications and took one and a half hours reading the Stage 1 reviews for one application, two and a half hours consulting the Stage 1 grant application for one application, one hour looking up additional information online related to the application, one and a half hours assigning the grant applications to YES/NO bins and writing comments to justify their assessment, one hour reading FAS reviewer comments using the "In meeting" task in ResearchNet, and one hour reviewing the final assessment stage ranking to prepare for the final assessment stage committee meeting (Table 106). Feedback from FAS reviewers included that grant assignments arrived late and time frames were not appropriately shifted to accommodate this. Generally, reviewers felt there was not enough time to assess the number of grants they were assigned and that their workload depended on the quality of Stage 1 reviews, the number of reviews, and whether applications were in their area of expertise. For example:

"…We had a limited time frame to do what we were asked to do and 6 days was simply not enough to do a proper assessment of 10 grants."

FAS reviewer

20. Face-to-face meeting

The following section provides an overview of the respondents' experience with the face-to-face meeting. Prior to the face-to-face meeting, each reviewer was assigned a subset of applications and each application was assigned to three reviewers. For each application, the reviewer had access to information from Stage 1, including the reviews, the consolidated rankings, standard deviations, and the full applications. A binning system was used: each application was placed in a YES bin (to be considered for funding) or a NO bin (not to be considered for funding). Each reviewer was allocated a minimum number of applications that could be placed in the YES and NO bins and submitted their recommendations to CIHR prior to the meeting. Based on the YES/NO binning recommendations reviewers made as part of the pre-meeting activities, CIHR ranked all the FAS applications in order from highest to lowest ranked. At the meeting, the applications were placed into one of three groups: Group A (applications recommended for funding), Group B (applications for discussion at the meeting), or Group C (applications not recommended for funding). Group B applications were further discussed in the face-to-face meeting. The proportions calculated in this section are based on the number of valid responses from 14 FAS reviewers; associated total responses can be found in Appendix A (Tables 112–113).
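
To make the pre-meeting binning and grouping steps described above easier to follow, the sketch below shows one way YES/NO recommendations could be tallied into Groups A, B, and C. It is a hypothetical Python illustration only: the application identifiers, vote data, and the unanimity rules used to form the groups are assumptions made for this sketch, not CIHR's actual procedure.

# Illustrative sketch only: approximates the grouping logic described above using
# hypothetical data; the actual tallying rules and thresholds are not specified here.

# Each application's pre-meeting recommendations from its three assigned FAS reviewers.
binning = {
    "APP-001": ["YES", "YES", "YES"],
    "APP-002": ["YES", "NO", "YES"],
    "APP-003": ["NO", "NO", "YES"],
    "APP-004": ["NO", "NO", "NO"],
}

# Rank applications from highest to lowest by the number of YES recommendations.
ranked = sorted(binning, key=lambda app: binning[app].count("YES"), reverse=True)

groups = {}
for app in ranked:
    yes_votes = binning[app].count("YES")
    if yes_votes == 3:      # unanimous YES -> recommended for funding
        groups[app] = "A"
    elif yes_votes == 0:    # unanimous NO -> not recommended for funding
        groups[app] = "C"
    else:                   # split recommendations -> discussed at the meeting
        groups[app] = "B"

print(groups)  # {'APP-001': 'A', 'APP-002': 'B', 'APP-003': 'B', 'APP-004': 'C'}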

Less than half of FAS reviewers (42.9%) indicated that the number of YES and NO allocations in the binning process was appropriate (Table 112). In the open-ended responses, reviewers explained that they were concerned that the Stage 1 reviews were not completed properly and that only a fraction of applications were actually discussed. For example:

"The grants need to be discussed by competent reviewers first before being decided upon by useless reviews from stage 1 reviewers assigned by a digital system (sometimes without having the appropriate expertise)."

FAS Reviewer

The majority of FAS reviewers agreed with the following statements about the face-to-face meeting: the instructions provided at the face-to-face meeting were clear and easy to follow (77%); creating Groups A, B, and C and focusing the discussion at the committee meeting on applications in Group B was appropriate (61.6%); the process of moving applications from Group A or Group C to Group B was clear and easy to complete (69.3%); conflicts were handled appropriately at the face-to-face meeting (77%); the voting tool was easy to use (77%); the voting process was effective (69.3%); the instructions provided regarding the voting process were easy to follow (76.9%); and the face-to-face meeting was required in order to determine which applications should be funded (100%) (Table 113). However, only 46.2% agreed that the process of moving applications between groups was efficient and that the funding cut-off line helped to inform the discussion at the meeting.

21. Notice of decision

The following section provides an overview of feedback from Applicants after decision on the NOD document, a new design element implemented by CIHR that indicates whether or not an applicant's proposal was approved. Feedback on the NOD was only requested from Applicants after decision. The proportions calculated in this section are based on the number of valid responses from 1239 Applicants after decision; associated total responses can be found in Appendix A (Tables 114–115).

About half of Applicants after decision (51.7%) agreed that the NOD clearly explained the Stage 1 and FAS results of their application, and 47.3% agreed that the document was helpful in interpreting their results (Table 114). Additionally, 58.5% of Applicants after decision indicated that they used the NOD document to interpret their results (Table 115). In open-ended responses, applicants expressed that they were unclear on how the final rank and standard deviation were calculated. Applicants also indicated that the information was not easily obtained from the document or that they had difficulty locating the document on the website. For example:

"I could not tell what happened. I tried to use the document but honestly, I found it challenging to interpret and couldn't find the explanatory documents which clearly state what the consolidated rank actually means nor the percentage distribution of reviewer rankings. I assume low is bad but it is really unclear."

Applicant after decision

Generally, there was a consensus on the lack of justification provided by reviewers and therefore the document was not perceived as useful. Additionally, applicants would have preferred to know the cut-off for funding and where they were positioned on the scale.

22. Survey feedback

The following section provides an overview of the respondents' experience with completing the feedback surveys. Survey respondents were asked to provide general feedback on the survey process or the survey questions. Feedback included that respondents felt the survey was too long and took longer to complete than the stated time. Respondents indicated that questions were repetitive and that the time commitment of completing the survey may be a deterrent to future participation. Additionally, they expressed that some questions were too restrictive (e.g., Yes/No responses) and did not allow for added granularity. Generally, respondents were thankful for the opportunity to provide feedback and hoped that CIHR would take note of their comments. Other suggestions included earlier access to the survey after completing their submissions, in order to comment on items accurately, and a request for an N/A or skip option for questions.

Limitations

This report has the following limitations: (a) the data from the online survey were collected anonymously and were not linked across the competition phases; therefore, we were unable to confirm whether each response came from a unique respondent; (b) sample sizes may not be representative of researchers across Canada, as provincial data were not collected; (c) the average response rate for the survey was 49.3%; therefore, it is possible that this report might not represent the full view of all possible participants, as non-respondents could have had different characteristics and opinions from respondents; (d) open-ended comments were coded by a single coder, which could introduce a certain degree of subjectivity; and (e) sample sizes were also limited in certain categories, with FAS reviewers having the smallest sample size compared to other respondent groups. These limitations are important to note when referring to this report as a summary of what the respondents felt about the application process.
