Appendix C: The Palliative and End-of-Life Care Peer Review Committee (PLC) for the CIHR Open Operating Grants Competition
The Panel was established in 2005 in response to representations through the Institute of Cancer Research that there was no appropriate existing committee able to expertly review the range of PELC applications. Previously these had been reviewed by a number of committees, mostly in the health services area. Funding for the operation of this new committee was provided by Health Canada through National Strategy funding: this undoubtedly increased the palatability of the new review committee to CIHR management.
This committee is of great importance to the continuation of CIHR funding of PELC. The Open Operating Grants Competition is held twice yearly and attracts 1500-2000 applications across all areas of health research. Applications are assigned by applicant preference to approximately 40 separate committees for review, assessed in competition with the other applications reviewed by each committee, and given a merit rating from 0 to 4.99. Across all committees, most applications score in the range 3.5-4.5. Depending on the total number of applications received and the budget available for the whole competition, the top n% of applications reviewed by each committee are funded, so long as they achieve a quality rating of 3.5 or greater. In recent competitions, the value of n has been in the low 20s. Thus, while the PLC panel remains in operation, there will always be some PELC research funded, assuming applications exceed the quality floor, and if the number of applications to PLC increases, so will the number of funded projects.
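The funding rule described above (a top-n% cutoff combined with a quality floor of 3.5) can be sketched as follows. This is an illustrative simplification, not CIHR's actual implementation, and the ratings used in the example are invented:

```python
# Illustrative sketch of the funding rule described above: applications are
# ranked by committee merit rating (0 to 4.99); the top n% are funded,
# provided they also meet the 3.5 quality floor. Ratings below are invented.

def funded_applications(ratings, n_percent, floor=3.5):
    """Return the ratings that would be funded under a top-n% rule with a quality floor."""
    ranked = sorted(ratings, reverse=True)
    cutoff_count = round(len(ranked) * n_percent / 100)
    return [r for r in ranked[:cutoff_count] if r >= floor]

ratings = [4.6, 4.4, 4.3, 4.1, 3.9, 3.6, 3.4, 3.1, 2.8, 2.5]
print(funded_applications(ratings, n_percent=20))  # top 20% of 10 = 2 applications
```

Note that under this rule the number of funded projects scales with the number of applications a committee receives, which is why more applications to PLC would directly mean more funded PELC projects.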
Application trends
Despite the funding opportunity provided to the PELC research community by the existence of PLC, the response has been muted. Application numbers have been low, generally in the 10-15 range (Fig C1), compared to the "usual" committee application load of 50+, so the future of the committee is threatened if CIHR should decide to invoke a "use it or lose it" approach to efficiency in peer review. There have also been a large number of withdrawals, that is, cases in which CIHR is notified of an intent to submit but no application follows.
Classified according to CIHR's primary research area, the largest numbers of the 128 total applications relate to cancer (39), health services research (32), nursing (14), and psychosocial/behavioural research (14). ICR (64) and IHSPR (23) were the most favoured institutes of affiliation, with IA (14), ICRH (9), INMD (8), and IHDCYH (5) also popular.
Given the low number of applications and the low success rates prevailing at CIHR, the number of applications funded per competition has varied from one to six, though on occasion additional applications have been funded through one-time additional or strategic funding provided by several CIHR Institutes. Unquestionably, the severe competition has deterred many PELC researchers from undertaking the major effort required to apply for a CIHR grant, and this has probably been exacerbated by "horror stories" from applicants who have applied multiple times without success despite receiving excellent merit ratings. In several competitions, the rating of the highest-rated but unfunded application has been at the upper end of the "excellent" range (Fig C1).
Figure C1 illustrates another source of applicant frustration, which again is not unique to PLC. In several competitions (e.g. #8) the ratings received by funded and unfunded applications were almost identical, and certainly not reflective of a true difference in scientific merit. Again, this unfortunate outcome is not the result of any perverse behaviour by members of PLC, since ratings are the committee average of members' private ratings (constrained within a consensus range). Members of PLC had no way of knowing that their collective ratings gave rise to the "dead heat" between two applications.
Fig C1 Ratings of all applications reviewed by PLC
Funded applications are denoted in red

Bias?
As noted in the main report, we heard both praise and criticism of the PLC committee from researchers, and it is important to assert here and now that it is blameless with respect to low success rates: like all other CIHR review committees, it is required merely to rate the applications as it sees them, after which the percentile success rate is applied by CIHR staff. Nevertheless, some of the criticisms were related to perceptions of bias against certain disciplines or types of research, so we examined the composition and behaviour of the committee in some detail to see if there was any objective evidence of committee dysfunction.
Because it reviews only a small number of applications, PLC membership is correspondingly small. Since its inception, only 17 identifiable individuals have served on the committee, including the Chair and Scientific Officer. To reduce conflict of interest risk, and to broaden perspectives when reviewing a relatively small research field, five of these have been non-Canadians. A wide range of disciplinary expertise has been represented: nutrition, biochemistry, oncology, epidemiology, radiotherapy, nursing, medical ethics, psychology, geriatrics, paediatrics and respirology, and the members have used both quantitative and qualitative methods in their own research. A community reviewer has also contributed. We note that one member requested anonymity, and at the last competition the committee list was not published at all because, with so few reviewers, it would have been easy to infer who had reviewed which applications. This unusual lack of transparency by CIHR implies that committee members may be feeling pressure from their non-member colleagues. While there has been recent turnover on the committee, we suggest that consistently greater turnover of members would be beneficial: other PELC researchers may be more restrained in their criticism, and more willing to submit applications, if they have experienced the arduous role of committee member.
Overall, the expertise represented in terms of the four CIHR themes of health research1 is well-aligned with the thematic content of the applications reviewed (Fig C2).
Fig C2 Thematic distribution

The ratings allocated by the PLC committee at its various meetings show no sign of pathology or dysfunction (Fig C1). The ratings span a wide range, and there is relatively little bunching, except in competition 7. The rating margin between funded and unfunded applications is unfortunately narrow in four competitions, but given the private member rating procedure used by CIHR, this could not have been a deliberate committee decision. There is no trend of rating inflation or deflation. There is some variation in success rate between applications in the four themes, from a low of 14% for biomedical and health systems research applications to a high of 44% for social/population health applications, but none of these variations exceeds what would be expected by chance. Similarly, we found no evidence of bias in the success rates for applications affiliated with any Institute or any primary research area, though we emphasize that the small numbers of applications put these conclusions at risk of type II error. We were also unable to find evidence of systematic bias against any particular methodologies: the number of funded applications using qualitative or mixed-methods approaches was within the expected range.
If at first you don't succeed...
Fig C3 History of Reapplications

CIHR, unlike the National Institutes of Health (NIH), allows the same project proposal to be submitted an indefinite number of times to successive competitions. Judged from titles, investigators, and abstracts, there were 80 unique proposals among the 128 applications sent to PLC, of which 13 were withdrawn before the committee ever reviewed them. Some proposals were submitted as many as three times and, in general, received a higher rating on each successive submission (shades of green in Fig C3), though there were also cases of declining or stable ratings (red lines). Proposals that were eventually funded are represented in Fig C3 by bold lines.
This proposal-level analysis reveals another reason for the negative comments about the operating grants program as a funding opportunity: for proposals on their first submission, the success rate was very low, only 15%. For resubmitted proposals, however, the success rate rose to 31%, and to 40% on the third submission. Persistence has its rewards.
Fig C4 shows that for those proposals whose review process has ended, either because they were funded or because the applicant chose not to resubmit, the overall success rate is 22/80 (27.5%), much higher than the success rate in any individual competition; applicants discouraged by low per-competition success rates should bear this in mind. If we discount the 13 applications withdrawn before the first competition, the success rate rises to roughly one-third (22/67).
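The arithmetic behind these proposal-level success rates can be verified directly from the counts reported above (80 unique proposals, 13 withdrawn before review, 22 eventually funded):

```python
# Worked check of the proposal-level success-rate arithmetic reported above.
total_proposals = 80
withdrawn_before_review = 13   # withdrawn before any committee review
funded = 22

overall_rate = funded / total_proposals
rate_excluding_withdrawals = funded / (total_proposals - withdrawn_before_review)

print(f"{overall_rate:.1%}")                 # 27.5%
print(f"{rate_excluding_withdrawals:.1%}")   # 32.8%, i.e. roughly one-third
```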
The rising rating trajectory of most repeat applications (Fig C3) suggests that the PLC committee is providing constructive criticism to applicants, which allows them to improve their proposals and obtain a higher rating at the next submission. However, it should be noted that CIHR's review system treats each submission as brand new, and therefore the Panel does not have access to previous reviews. It is therefore also possible for an applicant to respond fully to reviews but to have their score drop when reviewed by different panel members in the next competition.
Fig C4 The fate of the repeat applications submitted to PLC

Conclusion
We conclude that there is no evidence that PLC is other than a well-functioning committee with an appropriate membership for its workload, free of flagrant bias, and providing good advice to applicants. At the same time we understand the frustration of, and sympathize with, those PELC researchers who have applied to the committee, earned excellent ratings, and yet not received funding for their proposals, but this is not PLC's fault.