Audit of Applicant Feedback

Summary

Introduction

The Internal Audit of Applicant Feedback is part of the 2012-15 Risk-Based Annual Internal Audit Plan, which has been approved by the Canadian Institutes of Health Research’s (CIHR) Governing Council (GC).

The Canadian Institutes of Health Research

The Canadian Institutes of Health Research is the Government of Canada's agency responsible for funding health research in Canada. CIHR was created in June 2000 under the authority of the CIHR Act and reports to Parliament through the Minister of Health. CIHR's mandate is to "excel, according to internationally accepted standards of scientific excellence, in the creation of new knowledge and its translation into improved health for Canadians, more effective health services and products and a strengthened Canadian health-care system." CIHR comprises 13 "virtual" institutes – each headed by a Scientific Director, who is assisted by an Institute Advisory Board – which bring together all partners in the research process – the people who fund research, those who carry it out, and those who use its results – to share ideas and focus on what Canadians need: good health and the means to prevent and fight disease. Each Institute supports a broad spectrum of research in its topic areas and, in consultation with its stakeholders, sets priorities for research in those areas. CIHR funds over 14,000 researchers and trainees in universities, teaching hospitals, and other health organizations and research centres in Canada and abroad.

Applicant Feedback

CIHR uses a peer review process involving expertise-based committees to rate applications according to a set of criteria and factors that ultimately determine which applications will be funded. Each application is assigned two or more reviewers, who evaluate the application and prepare internal reviewer reports prior to the committee meeting. The applications are discussed during the peer review committee meeting to establish a consensus rating. Each committee also has a Scientific Officer, a neutral observer who summarizes the committee’s discussion but does not rate any applications.

As part of the peer review process, applicants receive feedback from the two internal reviewers and the Scientific Officer. CIHR is in the process of making changes to its open operating grants programs (OOGP) and associated peer review processes, and this audit is expected to assist with these changes.

Risk Addressed

This audit addresses the risk that feedback provided to the applicants does not reflect the criteria established by CIHR or provide applicants with an accurate summary of the committee’s discussion. This risk relates to the TBS Management Accountability Framework (MAF) elements of Stewardship – “the departmental control regime (assets, money, people, services, etc.) is integrated and effective, and its underlying principles are clear to all staff” – and Learning, Innovation and Change Management – “the department manages through continuous innovation and transformation, promotes organizational learning, values corporate knowledge, and learns from its performance.”

Objective

The objective of the audit was to provide an assessment of the quality of the feedback applicants receive in relation to the evaluation criteria, as well as a baseline assessment of how peer reviewers adhere to the evaluation criteria provided by CIHR.

Scope

The audit examined peer review and Scientific Officer Notes documentation for a selection of applications submitted to the March 2012 OOGP Competition (sample details are found in Appendix B). Only research grant application reviews were examined; dissemination grants, training and salary awards and all other funding opportunities were excluded from the scope. The audit excluded the review of the accuracy of the reviews themselves, conflicts of interest, appropriateness of the reviewer, funding decisions, flags, and ratings.

Overall Audit Opinion

The audit has concluded that, in the area of applicant feedback, CIHR has moderate issues: control weaknesses exist, but exposure is limited because either the likelihood or the impact of the risk is not high.

Statement of Conformance

In my professional judgement as Chief Audit Executive, sufficient and appropriate audit procedures have been conducted and evidence gathered to support the accuracy of the opinion provided in this report. The audit of applicant feedback was conducted in accordance with the Federal Government’s Policy on Internal Audit and related professional standards. It conforms with the Internal Auditing Standards for the Government of Canada, as supported by the results of a quality assurance and improvement program.

Summary of Strengths

Through the course of the audit, the following applicant feedback strengths were observed:

Summary of Improvement Opportunities

The following aspects of applicant feedback require management’s attention:

Internal Audit thanks management and staff for their assistance and cooperation throughout the audit.

David Peckham
Chief Audit Executive
Canadian Institutes of Health Research

Management agrees with the conclusion of the audit.

Jennifer O'Donoughue
Executive Director – Reforms Implementation
Canadian Institutes of Health Research

Detailed Report

Methodology, Criteria and Conclusion

The internal audit of applicant feedback was conducted in accordance with the Federal Government’s Policy on Internal Audit. The principal audit techniques used included:

Controls were assessed as adequate if they were sufficient to minimize the risks that threaten the achievement of objectives. Detailed criteria and conclusions are contained in Appendix A of this report.

The audit was conducted between November 2012 and April 2013.

Observations, Recommendations, and Management Action Plan

The following are audit observations, recommendations, and management action plans to address the weaknesses identified during the audit.

Observation | Recommendation | Management Action Plan
1. Peer review committee members do not always provide adequate and complete feedback as a result of the design of the current documents.

The peer review guidelines suggest that peer reviewers limit their feedback to 3 pages. In this space they are expected to address 5 overall criteria, which are further subdivided into 20 separate factors (e.g. research question clarity, project feasibility), with several other factors governing the format of the overall review (e.g. provision of constructive feedback, comments on budgets). The combination of limited space and a large number of criteria and factors can result in many of them not being addressed by reviewers. Furthermore, there was a trend among the reviewer notes to include detailed summaries of the application itself for use as speaking notes during committee discussions, rather than concise criticisms or suggestions regarding the application.

While it is unreasonable to expect every single reviewer to provide detailed comments on all aspects of the application and all factors (not all of which might be relevant), the audit suggested a systematic failure by the majority of the reviewers to comment on certain criteria and factors (particularly problematic were the sections covering application originality, research environment and research impact).

A minority of reviewers incorporated the review criteria and factors into their reviews in the form of headings, including “no comments” for some sections. Aside from this minority, it was impossible to tell whether the absence of comments indicated a failure to consider the criteria and factors, or that no issues or areas for improvement were identified.

Scientific Officers have a reduced number of criteria to address compared to the peer reviewers, but also face similar difficulties in terms of balancing a suggested single-page limit with the need to capture the discussion.

CIHR is exploring a new peer review process as part of the implementation of the new Open Programs, and one of the design elements being piloted is structured review. Structured review will require reviewers to assign a rating and comment on strengths and weaknesses of each of the established criteria.

Risk and impact

Without complete and detailed feedback from reviewers, applicants may not have a sense of why their application was not highly ranked, or what areas require improvement for subsequent submissions. This could lead to frustration on the part of both the applicant and the reviewers, and force both to repeatedly submit and review applications without improvement or success (see observation 3 regarding resubmission). For funded applications, the peer review process is an opportunity to receive suggestions from other experts on areas to improve. This opportunity may be lost as a result.
1a) Peer reviewers should be provided with a template to structure their review, with sections for each criterion and factor against which applications are reviewed.

1a) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. If the use of the structured review proves successful through the Reforms pilots, CIHR will move forward with implementing a structured review process for all open programs. CIHR will also explore whether it is appropriate to implement a structured review for other CIHR programs.

Expected completion

In progress for all Reforms pilots.

If deemed successful, will implement for all Open Programs by September, 2017.
1b) Instructions and examples should encourage the reviewers to provide concise and relevant feedback for each criterion and factor. An explicit requirement to say “no comments” should be incorporated where applicable.

1b) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. Through the implementation of CIHR’s College of Reviewers, CIHR will explore a training strategy to support reviewers that could include comprehensive instructions to reviewers, as well as examples of excellent reviews written by CIHR reviewers. In addition, for the pilots being launched over the next two years that are testing, among other things, the effectiveness of the proposed peer review process, CIHR’s training team will explore ways to support peer reviewers through the transition by developing relevant training material.

With the implementation of the new structured review process, it will be mandatory for reviewers to provide comments for each adjudication criterion.

Expected completion

September, 2017 (for Open Programs)

1c) Scientific Officers should be provided with a template or other tool to structure their summary of discussions (see Footnote 1).

1c) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. CIHR will analyze the requirements asked of Scientific Officers and based on this analysis will explore the best tool, including a structured template, for Scientific Officer Notes.

Expected completion

September, 2017 (for Open Programs)
1d) Guidance documentation and communications for peer review committee members should encourage reviewers to comment on any criteria or factors that are consistently missed in reviewer feedback to applicants (see Footnote 2).

1d) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. In addition to exploring a structured review, as described above in 1a), as part of the peer review process, CIHR is developing a training curriculum to ensure that reviewers are able to provide high quality reviews.

Expected completion

September, 2017 (for Open Programs)

The Instructions for Scientific Officers state “Absence of a rationale for budget cuts or term reductions is unfair to the applicant, and a frequent cause of complaints about the review process.” In addition to the budget limitations recommended at the application stage, CIHR normally imposes an additional across-the-board cut of all approved application budgets funded within the open operating grants program. Despite the importance of this component of the review, approximately 50% of reviewers and Scientific Officers did not include explicit and appropriate discussion of the budget.

Risk and impact

Explicit documentation of budgetary adjustments is necessary to demonstrate the fairness and integrity of this controversial and critical part of the peer review process, where the expertise and judgment of the peer reviewers are irreplaceable. Failing to include a specific discussion of any budgetary adjustments could impair the credibility of CIHR’s peer review process, alienate and frustrate applicants, and increase complaints made to CIHR staff members and senior management.
1e) Both the peer reviewer and Scientific Officer Notes should include mandatory comments on the budget, even if no reductions are made.

1e) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. If reviewers/committee members recommend a budget reduction, then it is agreed that comments from those individuals should be mandatory. With the implementation of the new Open Programs, CIHR is piloting a new structured review process that will make comments on the budget mandatory if reviewers propose budget reductions. As part of this pilot, CIHR will explore the use of a checkbox or other mechanism to indicate that the budget has been reviewed and no changes are proposed.

As part of the structured Scientific Officer Notes template described in 1c), CIHR will explore tools that may include a section to discuss the budget in order to remind the Chairs and Scientific Officers that this is important information to include in the notes should the committee recommend a budget reduction.

Expected completion

September, 2017 (for Open Programs)
2. The peer review process has no systematic ongoing monitoring and improvement process.

This internal audit was the first effort to systematically assess completed peer reviews against the criteria and factors to ensure reviewers are appropriately meeting their responsibilities. This audit demonstrated that it is possible to assess the adequacy of the peer review in terms of coverage of CIHR’s assessment criteria and factors and identify areas for improvement. In addition to informing the peer review process itself, the information gathered should usefully inform the peer review aspects of the Reforms currently being undertaken.

The peer review process is a fundamental part of the granting process whose integrity should be closely monitored and, if possible, improved. There is no such systematic process currently in place.

Risk and impact

Without a systematic ongoing improvement process, efforts to monitor and improve peer review are ad hoc or based on intuition rather than methodical data collection.

Applicants are key stakeholders in the peer review process, and the ultimate users of its results. Despite this, applicants are not solicited for their opinion of the peer review process itself or of the adequacy of the feedback they receive. Feedback from applicants is instead received in an unsystematic manner, in the form of comments on (and generally criticisms of) the rating or feedback their application received during committee meetings.

Risk and impact

The risks of not communicating with applicants are minimal; the issue is one of a missed opportunity to seek systematic feedback from a key stakeholder group well-placed to provide comments and suggestions. The receipt of unsystematic feedback makes decisions about the criteria and process vulnerable to either systematically discounting comments from applicants, or overreacting to a limited number of non-representative comments.

2a) Using the tools and results of the audit as a starting point, a systematic ongoing monitoring and improvement process should be implemented.

2a) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. Through the implementation of CIHR’s College of Reviewers, CIHR will explore adding a systematic ongoing monitoring and quality assessment process of all reviewers within the College.

Expected completion

September, 2016
2b) The peer review improvement process should incorporate a mechanism to systematically gather feedback from applicants (see Footnote 3).

2b) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. As part of the Project Scheme pilot process, applicants are being surveyed to collect feedback on the peer review process (including the reviewer comments). Based on the results of the pilot process, CIHR will explore continuing to survey applicants in a systematic way through the College of Reviewers (described in 2a). Should collecting feedback from applicants prove successful/useful, CIHR will explore the feasibility of also surveying applicants who apply to non-Open programs.

Expected completion

September, 2017 (for Open Programs)
3. Instructions given to peer review committees in the Scientific Officers’ manual regarding encouragement or discouragement of resubmission are inconsistent with the behaviour of Scientific Officers and the instructions given during committee meetings.

The Instructions for Scientific Officers state that “The [Scientific Officer Notes] should include the following…Encouragement or discouragement in relation to resubmission”. Despite this clear mandate to guide applicants on the appropriateness of resubmission, very few Scientific Officer Notes for unstreamlined (see Footnote 4) applications contained clear language regarding resubmission.

Streamlined applications will never contain such instructions to applicants, as they are not discussed during committee deliberations; instead, applicants receive a standard block of text simply noting that the application was streamlined. In these cases there is no opportunity for the committee to provide encouragement or discouragement regarding resubmission.

Risk and impact

This inconsistency introduces confusion regarding the committees’ role in explicitly encouraging or discouraging resubmission and makes it less likely that different peer review committees will treat repeat applications the same way.
3a) The committee instructions, either oral or written, should be modified to provide a consistent message regarding resubmission.

3a) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. CIHR will review its requirements and determine whether it is appropriate to require the committee to make a recommendation on application resubmission. If this requirement is continued, CIHR will review the instructions given to the peer review committee to ensure that this requirement is captured in the Scientific Officer Notes.

Expected completion

September, 2017 (for Open Programs)
3b) The structured feedback recommended in observation 1 should include an explicit option to indicate the committee’s position (positive, negative, or neutral) regarding resubmission.

3b) Responsibility
Executive Director - Reforms Implementation

Action

Agreed. If this requirement is kept, CIHR will explore whether adding a checkbox to the structured Scientific Officer Notes template described in 1c) would add value to Scientific Officer Notes for current Open programs.

Expected completion

September, 2017 (for Open Programs)

Appendix A

Audit Criteria

The audit uses the following definitions to make its assessment of peer review and Scientific Officer Notes.

Conclusion on Audit Criteria | Definition of Opinion
Well controlled | Well managed; no material weaknesses noted or only minor improvements are needed.
Moderate issues | Control weaknesses exist, but exposure is limited because either the likelihood or the impact of the risk is not high.
Significant improvements required | Control weaknesses, either individually or cumulatively, represent the possibility of serious exposure.

The overall conclusion considers the cumulative risk exposure related to the audit observations in the context of the above criteria.

Overall conclusion

The audit has concluded that the area of applicant feedback has moderate issues but exposure is limited because either the likelihood or the impact of the risk is not high.

Criteria and Factors | Overall Quality (see Footnote 5) | Conclusion
Peer reviewer notes

1. Research approach

Medium | Moderate issues
  • Clarity of the research question.
Low
  • Completeness of the literature review and relevance to study design/research plan.
Low
  • Clarity of rationale for the research approach and methodology.
Low
  • Appropriateness of the research design.
High
  • Appropriateness of the research methods.
Medium
  • Feasibility of the research approach (including recruitment of subjects, project timeline, preliminary data where appropriate, etc.).
Medium
  • Anticipation of difficulties that may be encountered in the research and plans for management.
Medium

2. Originality of the proposal

Low | Moderate issues
  • Potential for the creation of new knowledge.
Low
  • Originality of the proposed research, in terms of the hypotheses/research questions addressed, novel technology/methodology, and/or novel applications of current technology/methodology.
Low

3. Applicant characteristics

Medium | Moderate issues
  • Qualifications of the applicant(s), including training, experience and independence (relative to career stage).
Medium
  • Experience of the applicant(s) in the proposed area of research and with the proposed methodology.
Medium
  • Expertise of the applicant(s), as demonstrated by scientific productivity over the past five years (publications, books, grants held, etc.). Productivity should be considered in the context of the norms for the research area, applicant experience and total research funding of the applicant.
Low
  • Ability to successfully and appropriately disseminate research findings, as demonstrated by knowledge translation activities (publications, conference presentations, briefings, media engagements, etc.).
Low
  • Appropriateness of the team of applicants (if more than one applicant) to carry out the proposed research, in terms of complementarity of expertise and synergistic potential.
Low

4. Environment for the research

Low | Moderate issues
  • Availability and accessibility of personnel, facilities and infrastructure required to conduct the research.
Low
  • Suitability of the environment to conduct the proposed research.
Low
  • Suitability of the environment (milieu, project and mentors) for the training of personnel (if applicable).
Low

5. Impact of the research

Low | Moderate issues
  • Research proposal addresses a significant need or gap in health research and/or the health care system.
Low
  • Potential for a significant contribution to the improvement of people's health in Canada and the world and/or to the development of more effective health services and products.
Low
  • Appropriateness and adequacy of the proposed plan for knowledge dissemination and exchange.
Low

6. Review format

High | Moderate issues
  • The reviewer provided a brief synopsis of the proposal
Medium
  • The reviewer provided constructive feedback
High
  • The reviewer was professional in tone
High
  • The reviewer commented on the appropriateness of the budget
Medium
  • The reviewer did not include inappropriate information
High
Scientific Officer Notes

7. Scientific Officer Notes should include the following:

Medium | Moderate issues
  • The major strength(s), weakness(es) of the application
Medium
  • The factors and issues that had the greatest impact on the evaluation, in terms of the evaluation criteria for the funding opportunity
High
  • Aspects of the committee discussion that were not captured in the internal reviewer written report
Low (see Footnote 6)
  • Encouragement or discouragement in relation to resubmission, and suggestions for improving the proposal for resubmission, if appropriate
Low (see Footnote 7)
  • Constructive feedback for the applicant
Medium
  • Explanation of budget and/or term reductions. Absence of a rationale for budget cuts or term reductions is unfair to the applicant, and a frequent cause of complaints about the review process.
Low
  • Clear, organized, concise notes
High
  • Objective and neutral in tone
High
  • No inappropriate information
High
  • Resolution of reviewer disagreement
N/A (see Footnote 8)

Appendix B

Sample Characteristics

The audit involved a review of the peer review documents for 125 English- and French-language applications: two at-home peer reviews per application (250 individual peer review reports in total) and 125 sets of Scientific Officer Notes. The 125 applications were randomly sampled from the March 2012 Open Operating Grants competition, stratified into the following groups:

Pillar | Funded | Not funded | Streamlined
1 – Biomedical | 10 | 12 | 10
2 – Clinical | 10 | 10 | 10
3 – Health systems | 10 | 10 | 10
4 – Social/cultural/environmental determinants of health | 11 | 10 | 12
Totals | 41 | 42 | 42

Review of the individual applications was undertaken by the audit team, judging each criterion and factor using a three-point scale based on the degree to which each was addressed by the reviewer’s comments (0 = no comments; 1 = some or minor comments; 2 = significant or comprehensive comments). Percentages for peer reviews were calculated by summing the scores for a given criterion or factor across all applications and dividing by the maximum possible score of 500 (two reviews per application, each with a maximum possible score of two per criterion or factor, for 125 applications). Since none of the 42 streamlined applications included Scientific Officer Notes, the percentages for Scientific Officer Notes were calculated out of a possible maximum of 166 (one set of Scientific Officer Notes per application, with a maximum possible score of two per criterion or factor, for 83 applications).
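
To illustrate the arithmetic described above, the following sketch (in Python) computes a criterion’s percentage and maps it to the rating scale defined in Footnote 5. The function and variable names and the example scores are hypothetical and are not part of CIHR’s audit tooling; the boundary between “medium” and “high” is assumed to fall at 80%.

    # Minimal sketch of the scoring arithmetic described above. All names and the
    # example scores are hypothetical; this is not CIHR's actual audit tooling.

    def criterion_percentage(scores, reviews_per_application, applications):
        """Convert per-review scores (0, 1 or 2) for one criterion or factor
        into a percentage of the maximum possible score."""
        maximum_possible = 2 * reviews_per_application * applications
        return 100.0 * sum(scores) / maximum_possible

    def quality_rating(percentage):
        """Map a percentage to the rating scale described in Footnote 5
        (low: at or below 50%; medium: 51-79%; high: 80-100%)."""
        if percentage <= 50:
            return "Low"
        if percentage < 80:
            return "Medium"
        return "High"

    # Peer review reports: 2 reviews x 125 applications gives a maximum of
    # 500 points per criterion or factor (2 points per review).
    example_scores = [2, 1] * 125  # hypothetical scores for one factor, 250 reviews
    pct = criterion_percentage(example_scores, reviews_per_application=2, applications=125)
    print(round(pct), quality_rating(pct))  # 75 Medium

    # Scientific Officer Notes would use reviews_per_application=1 and
    # applications=83, for a maximum of 166 points.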

Detailed Figures

Below are the figures showing the results of the review for the overall categories, both by pillar and by population.

Peer Review Reports

The chart below shows the overall score for the five criteria. Each criterion has between 2 and 7 factors (see appendix A, criteria groups 1-5; individual factors were analyzed as part of the audit but are not displayed below).

While the charts below are entitled “Quality of Peer Reviews by Criteria” and “Quality of Peer Review – Report Format”, “quality” here does not refer to the scientific merit or accuracy of the individual reviews, as specified in the audit scope. Quality is assessed purely in terms of the level of detail and the extent to which each peer reviewer addressed the criteria and factors included in the CIHR Peer Review Manual for Grant Applications. Internal Audit is not qualified to, and did not attempt to, assess or judge the scientific merits of either the application or the peer review itself.

While there are disparities between pillars, the overall across-pillar comparisons were reasonable in the aggregate. At no point did pillars fall into completely separate categories (e.g. P1 rated “low” while P4 rated “high”), and ultimately all four pillars fell within the same overall quality group for every criterion.

Figure 1: Quality of Peer Reviews by Criteria

Report Format Characteristics

The chart below shows the overall score regarding the format of the peer review report, consisting of 5 sub-criteria (see appendix A, criteria group 6).

Figure 2: Quality of Peer Review - Report Format

Scientific Officer Notes

The chart below shows the overall score regarding the quality of the Scientific Officer Notes, consisting of 6 criteria plus 4 comments on the structure of the Scientific Officer Notes (see appendix A, criteria group 7). Two items were removed from the analysis and addressed separately below the chart.

Figure 3: Quality of Scientific Officer Notes

Resubmission encouragement/discouragement

Because of scoring issues, disparities in practice between peer review committees, and the coded nature of resubmission encouragement/discouragement language, the analysis of the fourth criterion (“Encouragement or discouragement in relation to resubmission, and suggestions for improving the proposal for resubmission, if appropriate”) is not included in the chart. These issues are addressed through the recommendations and management action plan found in observation 3.

Peer reviewer disagreements

The tenth criterion, “the notes contained justifications for resolution of peer reviewer disagreements”, is not included in this chart as only 3 applications included disagreements between reviewers. All 3 sets of Scientific Officer Notes discussed the disagreements clearly and appropriately.

Footnotes

Footnote 1

During the audit, it was noted that some Scientific Officers included charts or rating scales, which concisely conveyed the application’s quality for each of the criteria. While not a requirement, including such a scale in peer review templates could assist the feedback process.

Footnote 2

If peer reviewers consistently fail to improve the quality of their feedback against a criterion, and no comments from applicants indicate that the criterion is useful or relevant, removal of the criterion should be considered.

While there were some differences between pillars in terms of the quality of review for certain criteria (see Appendix B), the recommendation to include comments on consistently-missed criteria could address these gaps with no further interventions necessary. However, the ongoing improvement mechanism (see observation 2) should note that gaps have historically existed between pillars and this information should be incorporated in subsequent analyses.

Footnote 3

Care would have to be taken to distinguish between the quality of the application’s peer review (i.e. the degree of detail and justification given by the reviewer) and the resulting funding decision. A high-quality peer review could discuss, in specific and suitable detail, all the reasons why an application should not be funded; such a review would receive a high score for the quality of the peer review, while the application received an extremely low rating that ensured it would not be funded.

Footnote 4

“Streamlining” is the process of eliminating noncompetitive applications based on initial reviewers’ scores to maximize the amount of time spent discussing competitive applications during actual peer review committee meetings.

Footnote 5

Observation ratings are based on the score given to each criterion and factor, expressed as a percentage: “low” means the criterion had an aggregate score at or below 50% of its potential total, “medium” means 51-79%, and “high” means 80-100%. A low score indicates that most or all reviews did not mention the criterion explicitly and/or provided only brief or superficial comments. High scores indicate detailed and specific feedback on the criterion.

Note that due to the limitations of the audit, the conclusions on overall quality do not reflect any weighting for the relevance of the individual sub-factors.

Footnote 6

Individual reports given a high rating on this criterion included extra information or comments not found in individual peer review reports, but since any Scientific Officer Notes failing to incorporate such comments could indicate either a lack of discussion or a failure to document, it is impossible to verify which is correct after the fact. While given a rating of “low”, it is recognized that there is a strong possibility that this rating is artificially low due to the nature of certain committee discussions and specific applications.

Footnote 7

See Appendix B, “Scientific Officer Notes”.

Footnote 8

As an insufficient number of application reviews included reviewer disagreements, no analysis was possible. See Appendix B, “Scientific Officer Notes”.
