Recommendations from the Peer Review Working Group for the CIHR Project Grant competition

September 14, 2016

Each item below presents the outcome of the July 13 working meeting, followed by the Peer Review Working Group's recommendation (bulleted) and the rationale behind it.
Applications

Applicants will be permitted to submit a maximum of two applications to each Project Grant competition.

  • Applicants may submit a maximum of two applications in the role of Nominated Principal Applicant (NPA) per Project Grant competition.

The rationale is to reduce the burden on reviewers (while most applicants submitted one or two applications, some NPAs submitted more, with one submitting seven applications in the last Project Grant competition). However, some researchers may need to renew more than one grant, so a maximum of two applications per competition was deemed reasonable.

The existing page limits for applications will be expanded to 10 pages (including figures and tables), and applicants will be able to attach unlimited additional supporting material, such as references and letters of support.

  • Project Grant applications will be 10 pages long (including figures and tables).
  • Those 10 pages will be “free flow” (unstructured), allowing applicants to decide how to address the review criteria.
  • Applicants can attach unlimited references and letters of support.
  • Applications will be assessed based on “significance and impact of the research” (25% of the final score), “approaches and methods” (50%), and “expertise, experience and resources” (25%).
  • The Common CV (CCV) for the Project Grant application will include publications from the past seven years and applicants will be able to upload a PDF to supplement the CCV information if they have taken leaves of absence in the past seven years.
  • A one-page rebuttal will also be included in the revised structure, giving applicants the opportunity to explain how the application has been improved since the previous submission.

We received a great deal of feedback from the community on this point: some applicants wanted no changes to the application structure, as they felt they were close to being funded, while others wanted up to 12 pages. Still others noted that 12 pages would be too great a change, as there would not be enough time to write a whole new application. It is also important to note that many reviewers liked the shorter applications but felt that they needed more information. Overall, a 10-page application structure takes the needs of both reviewers and applicants into account.

As the CCV cannot accommodate information about leaves in its current form, applicants will be able to upload a PDF (with no page limit) to supplement the CCV. Whatever length of time an applicant has taken off from research in the past seven years (e.g., parental, bereavement, medical, or administrative leave) is the additional period the supplement may cover. For example, an applicant who took one year of maternity/parental leave within the past seven years would be able to upload a PDF detailing one year of funding and publications beyond the seven-year limit.
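To make the window arithmetic concrete, here is a minimal sketch; the function name and inputs are illustrative assumptions, not part of any CIHR system:

```python
def documentable_window_years(leave_years_within_past_7: float) -> float:
    """Illustrative reading of the rule: the standard 7-year CCV window
    is extended by the total time on leave within those 7 years."""
    return 7 + leave_years_within_past_7

# Example from the text: 1 year of maternity/parental leave
# allows an 8-year window of funding and publications.
assert documentable_window_years(1) == 8
```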

Drafting a rebuttal is an important scholarly exercise and incorporating it into the competition process was deemed appropriate by the group.

Stage 1

Chairs will now be paired with Scientific Officers to collaboratively manage a cluster of applications and assist CIHR with ensuring that high quality reviewers are assigned to all applications.

  • Update: Reviewer assignments will be approved by the Chairs and Scientific Officers. In addition, Chairs and Scientific Officers will have the ability to remove or add reviewers after the reviewers have completed the new Conflict of Interest/Ability to Review (CAR) assessment for a group of applications.
  • Applicants can make recommendations regarding what types of expertise are required to review their application.

The Working Group felt strongly about having this human intervention early in the process to ensure that the right experts are assigned to each application.

In addition to drop-down menus of keywords, the Working Group recommended that applicants should also have the opportunity to use an optional “Other” textbox in ResearchNet to make recommendations regarding the type(s) of expertise necessary for evaluating their proposal, highlighting nuances specific to their research community as needed. The group felt that it was important for Chairs and Scientific Officers to receive this information as part of the expertise matching process in order to better evaluate the suitability of matches between applications and reviewers. Note: Instructions will be included in ResearchNet.

Each application will receive 4-5 reviews at Stage 1.

  • Each application will be assigned to four (4) reviewers at Stage 1.

There was much discussion and debate among Working Group members about the value of four versus five reviewers, or even dropping down to three. Three reviewers were deemed insufficient given the possibility of a reviewer not submitting, which would leave an application with only two reviews. Four reviewers balance reviewer burden with high-quality decision-making, and the group felt this was a manageable number, especially given the additional oversight measures being put in place to ensure quality review.

Applicants can now be reviewers at Stage 1 of the competition. However, they cannot participate in the cluster of applications containing their own application.

  • Outcome endorsed.

This will ensure a greater pool of peer reviewers, without compromising quality and fairness.

Asynchronous online discussion will be eliminated from the Stage 1 process.

  • Outcome endorsed.

Asynchronous online discussion has been removed from the competition process.

CIHR will revert to a numeric scoring system (rather than the current alpha scoring system) to aid in ranking of applications for the Project Grant competition.

  • Instead of the alpha scoring system (e.g., E+, O++), each of the three criteria will be scored on a 101-point scale (0 to 100). Applications will receive a total score between 0 and 100, based on the criteria weighting.
  • Reviewer comments for Stage 1 will be submitted through a single free-form textbox (rather than through separate textboxes for each strength/weakness per criterion). There will also be a separate textbox for reviewers to use to summarize the application.

It was agreed that the alpha scoring system caused problems, but the former 0 to 4.9 scale also had issues. The Working Group concluded that the 0-100 scale will increase transparency and enhance stakeholders' understanding of the process, including the public and the research community. It is also important that the public and government funders understand that some applicants with very high scores are not funded simply because funds are limited; a score of 4.3, for example, meant little to the public or to government officials.
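As a concrete illustration of how the three criterion scores combine under the 25/50/25 weighting described earlier, consider the following minimal sketch; the labels and function are assumptions for illustration, not CIHR's actual scoring software:

```python
# Criteria weights from the application structure: significance and impact
# 25%, approaches and methods 50%, expertise/experience/resources 25%.
WEIGHTS = {"significance": 0.25, "approaches": 0.50, "expertise": 0.25}

def total_score(criterion_scores: dict) -> float:
    """Each criterion is scored on a 101-point scale (0-100 inclusive),
    so the weighted sum also falls between 0 and 100."""
    return sum(WEIGHTS[c] * criterion_scores[c] for c in WEIGHTS)

# Example: strong methods, solid but less exceptional elsewhere.
print(total_score({"significance": 80, "approaches": 90, "expertise": 70}))  # 82.5
```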

We heard from a number of reviewers that the structured review format from the last Project Grant competition actually hindered their ability to provide cohesive comments. While reviewers will still be expected to discuss the strengths and weaknesses of each application, their comments will no longer be broken into separate compartments, giving them the space to discuss each application more freely. Working Group members agreed that summarizing an application is important for the reviewer (as it demonstrates that they read and understood the application).

Stage 2: Face-to-Face Discussion

Approximately 40% of applications reviewed at Stage 1 will move on to Stage 2 for a face-to-face review in Ottawa.

  • Outcome endorsed.
  • The Working Group further recommended that the reviews of the remaining 60% of applications be examined by the Chairs and Scientific Officers to ensure appropriate review.

We acknowledge that the idea of not all applications being discussed face-to-face makes the community uncomfortable, but in discussing the sheer volume of anticipated applications, we realized that discussing 100% of the applications at the face-to-face committees would not be possible.

Stage 2 will include highly ranked applications and those with large scoring discrepancies.

  • Outcome endorsed.
  • Update: The Working Group's recommendation is a process whereby the top-ranking ~30% of applications across clusters, plus those in the top ~30% within their own cluster, will go forward to Stage 2 (see the sketch following the rationale below). This approach is recommended until data collected during the next Project Grant competition can inform the ranking strategy and address issues related to applicants who are also reviewers. The remaining portion will include applications that, for example, have large scoring discrepancies, were specifically flagged by Competition Chairs, or were otherwise highly ranked within their clusters. The final breakdown of the 40% will depend on the actual applications submitted during the competition.

It is important that all applications deemed most promising have the opportunity for detailed discussion at the face-to-face meetings, to ensure that the best grants are funded. The Working Group felt it was important that grants move forward whether they are ranked in the top percentage across or within clusters; this will allow CIHR to evaluate which of these move-forward rules best identifies the grants that reviewers ultimately recommend for funding.

We recommend that CIHR analyze the predictive capacity of within-cluster versus across-cluster ranking on final ranking and/or funding status.
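The move-forward rule above can be read as a set union plus discretionary additions. The sketch below is one plausible reading under assumed data shapes (cluster names mapped to (application, score) pairs); none of these names come from CIHR's systems:

```python
def stage2_candidates(clusters, across_frac=0.30, within_frac=0.30):
    """Union of the top ~30% of applications ranked across all clusters
    and the top ~30% ranked within each cluster. Applications with large
    scoring discrepancies or Chair flags would be added afterwards, up to
    the ~40% total. `clusters` maps cluster name -> [(app_id, score)]."""
    pooled = sorted((a for apps in clusters.values() for a in apps),
                    key=lambda a: a[1], reverse=True)
    n_across = max(1, round(len(pooled) * across_frac))
    selected = {app_id for app_id, _ in pooled[:n_across]}
    for apps in clusters.values():
        ranked = sorted(apps, key=lambda a: a[1], reverse=True)
        n_within = max(1, round(len(ranked) * within_frac))
        selected.update(app_id for app_id, _ in ranked[:n_within])
    return selected

# Example with two small clusters of (application_id, score) pairs:
clusters = {"A": [("a1", 92), ("a2", 75), ("a3", 60)],
            "B": [("b1", 88), ("b2", 70), ("b3", 55)]}
print(stage2_candidates(clusters))  # -> {'a1', 'b1'} (set order may vary)
```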

Chairs will work with CIHR to regroup and build dynamic panels, based on the content of applications advancing to Stage 2.

  • Outcome endorsed.

The Working Group agreed that building the panels on a per-competition basis would result in a responsive review model. The Working Group was comfortable with a model that would involve approximately 20-30 panels at Stage 2, based on application pressure per competition.

Applications moving to the Stage 2 face-to-face discussion will be reviewed by three panel members. A ranking process across face-to-face committees will be developed to ensure the highest quality applications will continue to be funded.

  • Update: Instead of using three panel members, the Working Group recommends having two of the original four Stage 1 reviewers attend the Stage 2 face-to-face meetings, organized as cluster-based panels. They will be expected to present their own reviews as well as those of the other two Stage 1 reviewers at the meeting.
  • In order to increase accountability, reviewer names will accompany their reviews to the final assessment stage.
  • In addition, the Working Group has recommended ranking within each cluster for Stage 2, as opposed to a ranking across face-to-face committees.

The two reviewers per application in Stage 2 will be chosen based on expertise needed, not on how they ranked grants in Stage 1, in order to minimize bias. In other words, the selection process will be blinded to how the reviewer ranked each grant.

We felt it was important that all reviewers be made aware when they are invited that they could be selected to attend the face-to-face meetings. Further, Stage 1 reviewers' names will be made available with their reviews at the face-to-face meetings. We saw this approach as a way to strengthen reviewer accountability throughout the two-stage process.

The Working Group recognizes that some members of the research community are concerned that applications will not have their full complement of reviewers at the meeting and may be disadvantaged by only bringing two reviewers forward. However, the expectation is that the reviewers invited forward will present all of the reviews, and the Chair/SO will also be asked to play a role in this process. The Working Group therefore recommends that, following the fall grant cycle, CIHR analyze grants' final funding status as a function of the rankings assigned by the original four reviewers, along with which two of the four attended the face-to-face meeting.

The majority of members felt that equal success rates across clusters would be the fairest option given the limited funds currently available.

Based on feedback from CIHR’s Science Council, the group also took into account the fact that face-to-face meetings cost money and bringing reviewers in to review only one or two grants was not prudent.

Chairs, Scientific Officers, and Reviewers
  • CIHR grantees will be strongly encouraged to review if invited. The Working Group feels that it is important for grant recipients to give back to the process. The College Chairs will continue this discussion.
  • Host a face-to-face meeting with all of the Chairs and Scientific Officers in the fall. The Working Group agreed with the proposed selection criteria to choose Chairs, and also felt that it was appropriate for the Chairs to help choose the Scientific Officers, but recommended that CIHR bring everyone together. CIHR is evaluating the feasibility of hosting this meeting for the 2016 competition.
  • Make peer review training mandatory for all reviewers. Completing the training, including a module on unconscious bias, should be a requirement for all reviewers, including seasoned senior investigators, Chairs, and Scientific Officers.
  • Invite early career investigators (ECIs) to observe the Stage 2 face-to-face peer review meetings. The Chairs in the College of Reviewers have made mentoring ECIs a priority. They will work with CIHR to find opportunities for mentorship, including observing peer review, and will discuss using the Project Grant competition as one of those opportunities, whether as part of the fall competition or future ones.
Additional Recommendations
  • Consider a variety of mechanisms related to ensuring equity across different career stages and sex of applicants. The Peer Review Working Group strongly recommended that CIHR continue this conversation with its Science Council, as well. There was agreement that this important topic requires more in-depth consultation and analysis across all CIHR funding programs.
  • There was unanimous support from the Working Group for equalizing success rates for early career investigators (ECIs) in the Project Grant program. Equalizing success rates means ensuring that the ECI success rate in a competition matches the overall competition success rate (a first-order sketch of this arithmetic follows this list). The Working Group recommended that the additional $30 million received in the last budget be used for this purpose, similar to how these additional funds were used in the first Project Grant competition.
  • Share the appropriate level of data from the Fall 2016 Project Grant competition after the results are released. The Working Group advised CIHR to share this competition data publicly in the spirit of transparency, but also to ensure that the data can be used by CIHR to shape important decisions in the future.
  • Adjust the CCV requirements for non-academic co-applicants, as well as academic co-applicants who are not appointed at a Canadian institution. We have heard a number of concerns from the community regarding the use of the CCV for non-academic and non-Canadian co-applicants, particularly in terms of applicant burden. The Working Group proposed that such applicants be permitted to upload other documents rather than being required to complete a CCV. A plan will be put together to address this issue for the Spring 2017 competition.
  • Incorporate a mechanism between Stage 1 and Stage 2 to let applicants respond to reviewer comments. For those applications moving ahead to Stage 2, the Working Group proposed giving applicants the opportunity to submit a one-page response to reviewer comments that could then be used as part of the Stage 2 deliberations. This will be explored further for implementation in the Spring 2017 competition.
  • The discussions and recommendations of the CIHR Peer Review Working Group were bound by the amount of information and data available to inform decisions. CIHR will evaluate the changes implemented and make analyses available as they are conducted (e.g., data about the reviewer profile matching algorithm and its validation). For future refinement, improvement, and transparency of peer review, it will be instrumental for CIHR to make this data publicly available, at the finest resolution possible, while still protecting privacy and confidentiality. Validating the matching algorithm was outside the Working Group's mandate; CIHR will determine the best way to undertake this work.
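To illustrate what equalizing ECI success rates implies numerically, here is a first-order sketch; all numbers and names are hypothetical, and it ignores the small feedback effect of the extra awards on the overall success rate:

```python
import math

def extra_eci_awards(eci_applied, eci_funded, total_applied, total_funded):
    """Additional ECI grants needed so the ECI success rate matches the
    overall competition success rate (first-order approximation)."""
    target_rate = total_funded / total_applied
    return max(0, math.ceil(eci_applied * target_rate) - eci_funded)

# Hypothetical: 3000 applications with 450 funded (15% overall), while
# only 50 of 500 ECI applications (10%) were funded initially.
print(extra_eci_awards(500, 50, 3000, 450))  # 25 extra awards -> 75/500 = 15%
```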