Institute of Human Development, Child and Youth Health: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

The three strongest messages received from the IHDCYH community through this stakeholder consultation process are:

  1. The reforms to the investigator-initiated programs and peer review process, particularly with regards to online peer review, were not well implemented or successful in meeting the original objectives
  2. There is a significant feeling of disconnect between IHDCYH's community and CIHR, which in turn will negatively impact the long-term research enterprise in maternal, child and youth health
  3. The impression of most of the stakeholders consulted by IHDCYH is that the reforms are detrimental to new/early career researchers

Stakeholder Engagement Approach

IHDCYH gathered stakeholder input from our research community through two activities – a face-to-face focus group and directed use of an online web form developed by CIHR. The total number of individuals who provided feedback through these two methods was 12.

The face-to-face focus group was two hours in length, and the eight participants spent approximately 20 minutes discussing each of the six questions. Participants in the focus group were invited based on their established leadership roles in IHDCYH's community and representation of a range of disciplines and stakeholder groups. A further 12 stakeholders were identified and asked to fill in the online web form; a total of four online responses were received. As the online responses are anonymous, it is not known whether they are from the invited stakeholders.

Discussion in the focus group was transcribed. This written material was combined with the written responses received online and analysed to identify key themes, which are summarised in the following pages.

Participants

As mentioned above, 12 people provided input; six were male and six were female. The participants were the President/CEO of a national non-profit organization focused on child health; a senior-level staff member working in policy development, research and evaluation for a child-health focused provincial government organization; the founder of a family advocacy group; six senior investigators in maternal, child and youth health; and three new/early career investigators in maternal, child and youth health. All invited participants are highly active and visible members of IHDCYH's community. Focus group participants were from British Columbia, Manitoba, Ontario, Quebec, and New Brunswick. The online responses came from participants in British Columbia, Alberta, and Ontario.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

For reference, the original objectives of the reforms were to 1) Capture excellence across all four research pillars; 2) Capture innovative, original and breakthrough research; 3) Integrate new talent to sustain Canada's pipeline of health researchers; 4) Improve sustainability of the long-term research enterprise; and 5) Address operational challenges including applicant workload, peer review burden, ensuring an effective peer review process, and the complexity of CIHR programs.

Overall, stakeholder responses to Question 1 were mostly negative. Only one participant indicated they thought the design of the reforms was 'broader and more focused, allowing peer-based review and avoiding bias'. Another two participants stated that in theory the design of the reforms was a good idea, but they also said it is too early to say whether the Foundation and Project Grants will work in the long term, even if the current problems with implementation are resolved. These two and the remaining participants then went on to point out some of the challenges with implementation, the potential inequities of the changes, and the damage resulting from the reforms. Rather than addressing the original objectives, various participants identified that the reforms may have achieved the opposite effect.

Reforms do not capture excellence and innovative research

Much of the discussion and input received overall was focused on the new peer review process (see Question 4 for additional responses regarding peer review). The general consensus was that the peer review process had not been improved and did not address the original objectives of the reform. Several people commented that the new peer review process has impacted the ability for CIHR to capture excellent and innovative research. As one participant stated, 'the peer review process is a failure in terms of the selection process for peer reviewers, their interpretation of applications, and the lack of a proper discussion for assessing excellence'.

Reforms are detrimental to new talent

With regards to new talent, 'the current approach has a profound negative impact on new talent and innovative research, as funding tends to be funneled towards high achievers with a strong track record', which has led to 'a very dire situation for new investigators in Canada'; statements that were echoed by several participants, with particular reference to the Foundation funding 'which has gone disproportionately to very senior investigators'. Allowing those who were successful in the Foundation Grant to then be co-applicants on Project Grants was also perceived by some to be unfair to new researchers. However, a counter argument was offered that allowing Foundation Grant recipients to be co-investigators on Project Grants led by newer investigators was a way of leveraging the knowledge and support of senior investigators.

Sustainability of the research enterprise has been negatively impacted

Comments related to the sustainability of the research enterprise fell into two themes: the development of a disconnect between the research community and CIHR, which is detrimental to the sustainability of the long-term research enterprise, and problems with the financial sustainability of the research enterprise.

Participants voiced frustration, confusion, and a lack of clarity and understanding about the reforms and the process by which they were designed and implemented. Participants also did not feel the health research community had been meaningfully engaged in the design and implementation of the reforms. Specific comments included:

"There is a lack of scientists at these program-level discussions."

"There's no scientific peer community working together, mentoring and nurturing talent."

"The centralization of decision-making and the disconnect with the scientific community have eroded relationships."

"The elimination of the [Institute Advisory Boards] is another reform that speaks to the loss of connectivity with research communities."

"Success rates [have] plunged and confidence in the peer-review system is at an all-time low."

"From 2000 to 2012 it was 'our CIHR', now people are declining to sit on committees etc. – they don't feel any obligation to CIHR"

With regards to the financial sustainability of the research enterprise, several respondents raised concerns about the historically low success rates and how this will negatively impact researchers as many will be unable to obtain funding. One participant also questioned whether it was appropriate to have additional programs such as SPOR, CERC, and CRC when there is such a high application pressure for the funding in the Foundation and Project Grant competitions. Another participant commented that 'there was a failure to consider the effect of the Foundation scheme on overall rates of funding, resulting in a lack of planning for the large diversion of funds into this scheme'.

Applicant and peer reviewer burden has not been lessened

In addition to the broader discussion of the systemic impacts of the reforms, it was noted that the challenges of applicant workload and peer review burden had not been addressed. Due to the historically low success rates, many researchers will still need to write multiple applications to obtain funding. Applicants also struggled with the new format of the application (which has since been changed). The timeline for peer review was also considered unrealistic, with one participant indicating that people were given three weeks' notice to be a Virtual Chair, and that some reviewers were given 10 grants to review in two weeks.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science and the growth of interdisciplinary research?

In brief, the challenges referred to in Question 2 include funding program accessibility and complexity, applicant burden, application processes that do not capture correct information, insufficient support for new/early career investigators, lack of available expertise for peer review, inconsistency of peer review, conservative nature of peer review, and high peer reviewer workload.

Again, the stakeholder responses were all negative, with the exception of one person who indicated that the new programs were 'more encouraging to new people, techniques and younger scientists'. The remaining responses made it clear that participants thought the changes to the programs and peer review process as implemented to date have not allowed CIHR to address any of the challenges listed, which limited the extent of the discussion. In response to this question there was a brief discussion of minor elements of the application process and the changes that were announced in Fall 2016, which were considered favourably. There was also reiteration of the discussion of the inequities in the support for new/early career investigators (see above). The majority of responses focused on peer review and the issues that arose with the online review process. These responses are included in the summary of the more detailed discussion of the peer review reforms in response to Question 4, when many similar points were raised again.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review, and how do CIHR's reforms address these?

None of the stakeholders consulted felt they had sufficient specific knowledge to discuss this question in detail.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

With the exception of one respondent who stated that the new peer review process provides the 'ability to get the correct reviewer to review the grant in a timely fashion', all participants raised various issues with the peer review mechanisms. Three participants considered the reformed peer review process to be a failure and to be worse than the previous committee system, citing reports of poor quality control and the use of non-experts to review grants. The others supported the theory behind the College of Reviewers but identified problems with the implementation, particularly with reviewer recruitment and the online review process. Most issues raised were similar to those that have already been brought forward to CIHR in the context of the July 2016 Working Meeting with members of the health research community and the consequent CIHR Peer Review Working Group.

Inability to identify appropriate peer reviewers

With regards to peer review quality, points discussed included feedback received from applicants that 'the quality and nature of the comments made by peer reviewers indicates that CIHR is not able to identify appropriate individuals to perform the task'. Many comments were made about the reviewer selection process, which was perceived to be random and unable to effectively match applications with reviewers with the appropriate expertise. A particular focus was the algorithm used to match peer reviewers to applications. The choices of key words given to potential reviewers were reported to be too narrow to allow people to properly describe their experience/expertise. This then led to a mismatching of reviewers with applications, which is unfair to the applicant and the reviewer. A comment was also made that 'not enough thought was given to who is qualified to review new program areas, e.g. innovation and integrated services'.

Other reasons given for problems with identifying appropriate peer reviewers included:

"People approached to be peer reviewers or virtual chairs [were] not given sufficient notice (often only two or three weeks) to make it possible for them to accept"

"Having requests for peer reviewers come from an unnamed 'team'…rather than peers…reduces the likelihood of the potential reviewer accepting the request."

"Information on the qualifications, expertise, and competency of the reviewers [was] lacking."

Poor quality of online peer review process

The other significant topic of discussion with regards to peer review was the removal of the face-to-face peer review committee structure and the implementation of online peer review. Several reasons were given as to why this decreases the opportunity for a proper discussion to assess excellence and results in inconsistent, poor-quality reviews:

"[Online peer-review] loses the value of shared insights and knowledge gained through an in-person discussion"

"The reviewers' accountability is removed"

"In person, reviewers have to defend what they are saying; they don't in virtual reviews"

"The committee structure allowed for some corporate memory and, therefore, some checks and balances"

"Few people have any connectivity with the process, including the external review panel or the applicants"

"There is no way to address the gap between the reviewer's expertise and the focus of the grant application"

Bias in peer review criteria

The final topic that came up was the peer review criteria used in the new peer review process. The criteria were observed by some to be 'flawed' by being biased towards research in health care delivery, knowledge translation, and impact, which is not applicable to all applications, particularly those in basic science. As one participant stated, 'this does not leave it to the reviewer to determine the quality but biases him/her to the nature of the science CIHR wants to see'. As such, people indicated that this 'one-size-fits-all' approach will 'really compromise some of the pillars' and 'stifle creativity'. It was indicated that 'the reviewer should have more flexibility to comment, rather than address specific questions that might not fit [all applications]'. In opposition to this, one online participant indicated that they noticed the majority of successful applicants were from basic biomedical sciences. The participant went on to say that 'if clinical research… is to be emphasized then consideration should be undertaken to separate funding opportunities for basic biomedical and clinical applications'.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Participants who provided responses to this question stated that they were not highly familiar with 'international best practices' but had suggestions for practices that should be considered. Some of the practices suggested have already been implemented by CIHR in the current round of the Project Grants.

All respondents considered the use of face-to-face peer review to be essential for quality peer review. The recent decision by CIHR to conduct face-to-face review on 40% of Foundation and Project Grants was received positively but some wondered if the percentage was still too low. It was also noted that reviewers should actually be peers of the applicants and that 'clinicians should not review a basic science proposal' and vice versa. With regards to recruiting and assigning reviewers, there was a suggestion that if an electronic matching algorithm is to be used it should use a machine learning method that scans an abstract and looks for key words rather than relying on the applicant to assign key words from a set list. It was also felt that reviewers should be given the opportunity to identify whether they felt able to assess an application before being assigned to it. Following this, the Chairs should play a role in deciding on the final assignments. The need for transparency of the College of Reviewers and the peer review process was also raised.

The other theme of the suggestions was ensuring the quality of reviewers and reviews through training and evaluation. It was questioned whether there is adequate training and mentoring for new reviewers. In the previous system, would-be reviewers were invited to observe committees as a way of learning how to review; as one participant stated, 'this mentorship process has disappeared with [the new] system'. For evaluation, it was mentioned that 'reviewers are not evaluated [by anyone]…the quality of reviewer submissions is not managed'. As such, it was suggested that an accountability system be established to prevent poor reviewers from reviewing in future competitions. The example was given of a dashboard, so that reviewers could see where their scores lay relative to the distribution of reviews of the same application. It was also suggested that the role of chairs of review committees should include evaluating reviewer competency and identifying reviewers who need additional training.

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Discussion in response to this question was limited to general suggestions for evaluation rather than specific indicators or methods. Participants agreed that evaluation of the quality of peer review was much needed. This would have to be qualitative to track not only quality but any improvements made following changes to the process. As one participant commented, 'the methodology for setting the [review] criteria needs a feedback loop, so we can see how we are achieving the original objectives'. It was also suggested that it would be 'helpful to measure satisfaction (of both the applicants and reviewers) with the process…compared with previously'.

A further point made was that it is critical to start measuring the overall impact of the reforms, particularly in regards to young scientists and whether they are staying in science. A few participants also indicated that any metrics 'should be imposed from outside CIHR and monitored by an arms-length body'; 'the implementation and evaluation of remedies should also be externally imposed'.
