Institute of Circulatory and Respiratory Health: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

  1. Rebuilding Confidence: The recent changes made in response to the Peer Review Working Group's first round of reforms have been generally well-received by the research community. Updated project and peer review reforms appear to be restoring confidence, generating a sense of greater optimism and addressing concerns related to trust and reviewer accountability.
  2. Premature Evaluation: Currently, it is too early to determine the effectiveness of the CIHR reforms. In order to re-establish confidence and trust, the research community would like an ongoing, transparent mechanism for engagement and performance management of reformed systems.
  3. Ineffective Communication: Although the community recognized CIHR's communication and community engagement efforts, the numerous changes led to information overload. Researchers reported that responses from the Contact Center were inadequate and lacked clarity. Stakeholders encouraged CIHR to continue to engage in ongoing, routine and open dialogue with the research community.

Stakeholder Engagement Approach

Expert Consultation (5 participants): One-on-one consultation sessions with community experts within the mandate of the Institute of Circulatory and Respiratory Health (ICRH). Expert consultations were conducted by the ICRH Scientific Director, Dr. Brian H. Rowe.

Focus Group (11 participants): ICRH hosted six online focus groups with members of the mandate community. Focus groups were conducted using WebEx, were one hour in length and were moderated by Dr. Brian H. Rowe. Participants were provided with an overview of recent CIHR reforms before engaging in discussion guided by three themes derived from the primary CIHR questions.

Web Form: ICRH had two separate written web submissions available:
  1. CIHR peer review expert panel web submission form
  2. Institute-based mandate community written submissions

Total participants: 31


Individuals participating in stakeholder engagement activities represented all research pillars; however, most representation was from Pillars 1 and 2. The majority of the stakeholders were senior and mid-career investigators and the majority of participants were male. Most participants had reviewer, applicant and/or recipient experience with CIHR; however, not all had participated in the most recent round of open competitions, and thus lacked first-hand experience with the new CIHR reforms.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

The original objectives of the reforms to the investigator-initiated programs and peer review processes were to:

  1. Contribute to a sustainable Canadian health research enterprise by supporting world-class researchers in the conduct of research and its translation across the full spectrum of health;
  2. Contribute to a sustainable foundation of health research leaders by providing long-term support to pursue innovative, high impact programs of research;
  3. Capture ideas with the greatest potential for important advances in health-related knowledge, the health care system and/or health outcomes, by supporting projects with a specific purpose and defined end point.

Overall, there was agreement, especially among those in the Pillar I community, that the initial reforms (prior to July 17) were too drastic, disruptive and poorly communicated to the research community. These changes resulted in researcher anxiety, reviewer angst and fatigue, and institutional chaos. The main concerns were the:

  1. Unfamiliar and difficult to interpret grading system;
  2. Lack of accountability in the asynchronous on-line peer review;
  3. Cancellation of face-to-face (F2F) meetings;
  4. New application format and structure (including the CCV);
  5. Poor matching of the reviewers to the applications, thus impacting quality of reviews.

The recent corrections (of the former CIHR reforms) have been well-received and are believed to address many of the concerns raised by the research community regarding the Project peer review system. Specifically:

  1. Researchers felt strongly that the return of the F2F meetings will increase the reviewer accountability that was lacking in the online reviews, permit open discussion of the top grant applications, build reviewer capacity and address issues concerning transparency and consistency;
  2. Overall, the researchers believe the simplified scoring system will be applied more appropriately and lead to fairer application evaluations;
  3. The idea of including only the top 40% of applicants in the F2F meetings was generally endorsed; however, feelings remain that triage in the 'grey-zone' is critically important;
  4. The use of Chairs and Scientific Officers/Co-Chairs to ensure appropriate feedback to the researchers who did not receive funding was strongly endorsed;
  5. While there was general support for the advancement of the College of Reviewers, a need was expressed for a lever or process to increase and ensure that the pool of reviewers is appropriate, well qualified and of high quality. Voluntary mechanisms are weak and social duty may not be a sufficient motivator;
  6. Improvement of the reviewer matching system was highly endorsed;
  7. Reviewer training was frequently mentioned as a long-term shared responsibility (CIHR and universities).

As a result of the corrections, there was a renewed sense of optimism expressed with respect to the future, restoration of confidence in CIHR and peer review, and respect for the open nature of the reforms. Concerns remain with the reviewer-application matching algorithm and with expertise alignment for multidisciplinary research. It was recommended that these changes be reviewed in an iterative manner at a later date.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

It is well-recognized that the mandate of CIHR is broad (e.g., Pillars/Themes I-IV, multiple diseases), and that science is evolving to be more collaborative, patient-oriented, team-based, interdisciplinary, and transformative. In addition, researchers recognize the need to be increasingly cognizant of and focussed on the return on investment, or the "so what", of their science. This changing paradigm has left areas of confusion, and respondents highlighted several key points and areas of concern:

  1. There was an overwhelming consensus that current CIHR funding is inadequate to properly meet research needs across Canada;
  2. The respondents requested clarity on changes to the application process and reforms to the peer-review system;
  3. Respondents called for effective communication that clearly outlines funding cycles, deadlines, results announcements and funding start dates;
  4. Respondents called for a grant evaluation process that better facilitates and explains:
    1. The significance of the role of partnerships with respect to evaluation;
    2. The value and emphasis of knowledge translation with respect to evaluation.

Of utmost concern for many in the research community is the support and accessibility of funding for young investigators (YI). Although supportive of an increase in YI-specific funding, researchers expressed concern that larger funding schemes (i.e., the Foundation scheme) are largely inaccessible to YIs. This perceived lack of support is resulting in a loss of confidence and overall frustration in the YI community. On a positive note, the community feels that networks have done a good job fostering career development and engagement opportunities for YIs.

On the subject of the grant application process, respondents expressed both positive and negative reflections on the new system. Overall, there was a consensus that the 'reforms to the reforms' addressed most of the concerns related to application size and burden. Researchers appreciated the more streamlined and free-flowing nature of the updated application process. These modifications addressed the concern that the previous character-regulated application forms did not provide sufficient space and encouraged cookie-cutter responses (especially in the integrated KT section). Nevertheless, issues remain with respect to the common CV and the co-application process. Grant writers explicitly stated that while value is appropriately placed on partnerships, the co-applicant requirements result in an incredible administrative burden. Restrictive access to the common CV (CCV) continues to be a significant administrative burden for NPIs; to increase CCV functionality, it was suggested that NPIs take full responsibility for information input.

Finally, although there was overall support for the Foundation scheme as a mechanism to facilitate continuity and stability, there was some push back on what was referred to as a 'two-tiered system.' Some members expressed feelings that the current Foundation and Project scheme structure creates an elite division in the research community. This structure fosters growth and opportunity for a select group of researchers, while limiting funds available to others. Furthermore, many health outcomes researchers were focusing on the Project scheme; this reinforced the "haves" (Pillar 1) versus "have nots" (all other research groups) mentality in Canada.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

One main challenge that was identified by the respondents for the adjudication of applications is the consistency of identified criteria applied by reviewers when scoring applications. As peer review is based on individual feedback, the perspective of the reviewer, whether a result of unconscious bias, personal judgement, poor reviewer matching or other influences, is difficult to manage. Another challenge is the lack of reviewer comments or the inclusion of nonsensical and/or non-constructive feedback. Reviewer feedback is critical to the adjudication process, as this provides information back to applicants regarding how their application was viewed and assessed. The absence of comments by reviewers leaves applicants wondering what the issues were and removes the opportunity for applicants to learn and improve their applications.

There are organizations, nationally and internationally, that are examining ways to address these challenges. The Patient-Centered Outcomes Research Institute (PCORI) has developed a rigorous peer reviewer training program focussed specifically on how to undertake merit reviews for the organization (Footnote 1). In addition, PCORI has designated staff to assess the quality of the reviews provided and will follow up with reviewers if there is a lack of feedback or if their review is seen to be of poor quality. If reviewers consistently provide poor quality reviews, PCORI will then assign a peer review mentor to provide guidance and support. The provincial funder Alberta Innovates assigns internal staff to "blind" reviews, whereby the reviewer name, information that may identify a reviewer, and nonsensical or inappropriate comments are removed before reviewer feedback is returned to applicants.

Assessment of multidisciplinary, transdisciplinary and interdisciplinary research also poses a challenge for adjudication, as applications of this type are generally complex and involve heterogeneous environments. Applications can span the CIHR research themes, which have varying practices on authorship and metrics of success. Identifying and matching reviewer expertise is difficult, as the nature of such an application requires broad and generalized knowledge along with content-specific expertise. The former Clinical Trials Committee was mentioned as a successful model (e.g., content experts, methodologists and statisticians on the panel) that should be considered for these complex multi-disciplinary grants. Allowing applicants to identify potential reviewers for their application is a practice that many funding agencies employ to help identify suitable reviewers. Additionally, ad hoc solicitation of external reviews from content experts is a relatively common practice used to support internal reviewer assessments and panel discussions.

A second-stage F2F discussion following initial independent assessment of applications remains a common practice among many funding organizations. The added value of F2F panel discussions for the reliability of peer review is variable and may depend on the funding opportunity type (e.g., trainee, salary, operational, interdisciplinary, investigator-initiated) (Footnote 2, Footnote 3, Footnote 4, Footnote 5, Footnote 6).

The revised CIHR requirement of four reviewers per application is seen to allow for identification of appropriate reviewers for a given application. It is not clear, however, if this will contribute to decreased reviewer burden. Regardless of the number of reviewers per application, input from the ICRH community identified that appropriate reviewer matching is critical. The return to F2F discussion, although its added value is unclear based on the literature, is seen as a positive move by the ICRH community. The return of the F2F meeting is seen to provide opportunities for mentorship and reviewer training in addition to supporting greater reviewer accountability.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

There was an overwhelming consensus that while the community is hopeful that changes will accomplish the stated objectives, it is too early to tell if appropriate and sufficient mechanisms have been implemented. The matching algorithm used in the Spring competitions was abandoned by CIHR, and the new matching system will need to be formally evaluated once additional rounds of competition have been completed.

There was support for the College of Reviewers (CoR); however, concerns were expressed about its slow progress. The College was seen as an important tool for researchers to be acknowledged by CIHR as "qualified" to review and also to improve the matching of reviewers to individual applications. Since matching of reviewer expertise was seen to be invalid and inefficient in recent competitions, this was identified as an essential element required to restore the research community's confidence in the peer-review process. The quality and accuracy of triage decisions impacting ranking in the "grey zone" were seen as particularly important. Additionally, the CoR is seen as a mechanism to provide reviewer training information and guides to improve reviewer expertise. This is important, as there was an expressed need for reviewer training to ensure reviewer diversity and expertise. Many stakeholders expressed that this training be a shared responsibility between academic institutions and CIHR. Lastly, participants were skeptical that there would be adequate numbers of peer reviewers, as participation is voluntary.

Although there was overall support for the direction in which the peer review system is moving, some positives and negatives of the changes include:

  1. Support for the personal review of matching by the Chair and Scientific Officer; however, given the broad mandate of the Chair and SO, some hesitation that it may be difficult for these leaders to "predict" the effectiveness of the match without the additional information available from the CoR;
  2. Encouragement for the reforms to the Indigenous Health Applications was expressed; these will also require iterative feedback for refinement and adoption in other domains (e.g., inter-disciplinary applications);
  3. Welcomed attitude towards the simplification of the online application process;
  4. Gratitude for the increase in time frame of publications permitted as part of the CCV (to seven years);
  5. Confidence that the return to partial F2F review will mitigate some of the concerns expressed throughout the stakeholder engagement process;
  6. Expressed concern by some stakeholders of the exclusive and intimidating nature of the F2F meetings;
  7. Reference was made to a 'glass ceiling' associated with the regular CIHR panel members. For example, despite an accomplished research career and expertise, one stakeholder expressed consistent disappointment at never receiving an invitation after numerous self-nominations for participation in CIHR review panels;
  8. Continued frustrations were expressed by many with the complicated common CV, especially for the collaborators. The four-page NIH Biosketch model was advanced by several as a more efficient mechanism to articulate the expertise of the applicant. More is not always better and a four-page document forces applicants to be succinct and selective.

Overall, the research community we interviewed voiced a renewed support for CIHR and the upcoming peer review reforms. It was clearly expressed that researcher confidence needs to be rebuilt with respect to reviewer accountability, reviewer expertise and grant matching; however, the community appeared open to these reforms.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Peer review is recognized as the gold standard for assessing the scientific quality and merit of grant proposals. Despite this consensus, most recognize that peer review is inherently imperfect as the decisions are based on expert opinion, interpretation of assessment criteria, and scoring rubrics defined by the granting agency. Moreover, while addressed by CIHR more so than many agencies, bias and reviewer conflict of interest (COI) have at times been observed. This leads to inconsistencies in results based on biases, conflicts, misinterpretation and/or lack of experience by reviewers when conducting reviews. In addition, peer review can be very costly in terms of time and money. Regardless of the limitations and imperfections of peer review, there are identified best practices to enhance the quality, reliability, and efficiency (time and money) of peer review. The following lists best practice considerations based on staff experience and the current literature:

  • Exclusions based on declarations of COI to avoid bias and conflicts;
  • Peer review must involve individuals who are knowledgeable, experienced and expert in the subject matter under review;
  • Peer review should NOT be conducted by individuals who are inadequately qualified, trained or who have expressed lack of expertise in the subject matter;
  • Peer reviewers should undergo formal training prior to engaging in peer review. This is seen as a shared responsibility between funding agencies and academic institutions;
  • Feedback on review quality should be provided to peer reviewers as part of an ongoing quality control measure (e.g., the PCORI model) (Footnote 1);
  • Peer review mentoring to support new reviewers (the literature does not fully support this, but the studies had limitations, not least of which is that participants may have had prior training) (Footnote 7);
  • Clearly stated common definitions for assessment criteria and scoring rubrics improves review quality and reliability;
  • Reliability of peer review is greatest for roughly the top and bottom 10% of proposals (generally those with the most congruent reviewer scores), with the greatest variability observed for proposals that fall between these extremes (generally those with less congruent reviewer scores). As such, the greatest effort should focus on proposals that fall between the extremes (Footnote 8, Footnote 9, Footnote 10). Success rates hovering around 10% have raised suggestions to create a lottery for decisions on proposals identified as having met the "bar" for quality and merit (Footnote 9, Footnote 11, Footnote 12, Footnote 13);
  • F2F meetings provide benefit of linkage and exchange between reviewers compared to online only. F2F also provides peer mentoring and learning for new reviewers;
  • Online reviews alone are least effective, as reviews can be marginal and the benefit of the exchange between reviewers for additional feedback to applicants is lost. However, the overall impact on application success outcomes is mixed;
  • Teleconference/videoconference may be a reasonable alternative to F2F as a cost control measure and for particular grant types (Footnote 4, Footnote 5);
  • Proposals should be limited in page number (10-12 pages has typically been found sufficient). A biosketch (NIH format) for the NPA and co-applicants is sufficient for assessment, with collaborator information limited to 300 words outlining contribution and experience (Footnote 14);
  • Administrative data could be collected at full application stage for successful applications only.

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

  • Consideration for how to assess applicant quality - Declaration on Research Assessment (DORA);
  • Sufficient lead time to thoroughly review applications;
  • Time spent preparing grant proposals (Footnote 15, Footnote 16);
  • Ongoing evaluation and monitoring of pre- and post-discussion priority scores (Footnote 3);
  • Comparative analysis of F2F meetings with alternative approaches (e.g., teleconference/videoconference) for impact on results (outcomes) (Footnote 2, Footnote 3, Footnote 4, Footnote 5, Footnote 6);
  • Ongoing evaluation and monitoring of reviewer and applicant feedback on satisfaction and experience with process;
  • Application rank assessment pre- versus post-discussion: percentage of applications that change in score/rank, percentage that move out of the funding range, and percentage that move into the funding range;
  • Average number of reviewers/application;
  • Review quality;
  • Reviewer scoring practices (consistently high scorer vs consistently low scorer);
  • Individual reviewer scores (pre/post);
  • Average time for review (primary, secondary, reader, external);
  • Applicant burden – time to complete application vs success rates vs funds available;
  • Costs for peer review per competition;
  • Panel to panel score variations;
  • Review the European Research Council (ERC) Starting Grants, Consolidator Grants and Advanced Grants as a mechanism to balance investment across new-, mid- and senior-career scientists, and the peer review approach employed for these grants (Footnote 17).