Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study

Diane Whitham, Julie Turzanski, Lucy Bradshaw, Mike Clarke, Lucy Culliford, Lelia Duley, Lisa Shaw, Zoë Skea, Shaun P. Treweek, Kate Walker, Paula R. Williamson, Alan Montgomery (Corresponding Author), Site Performance Metrics for Multicentre Randomised Trials Collaboration

Research output: Contribution to journal › Article

2 Citations (Scopus)
7 Downloads (Pure)

Abstract

Background: Site performance is key to the success of large multicentre randomised trials. A standardised set of clear and accessible summaries of site performance could facilitate the timely identification and resolution of potential problems, minimising their impact. The aim of this study was to identify and agree a core set of key performance metrics for managing multicentre randomised trials.

Methods: We used a mixed methods approach to identify potential metrics and to achieve consensus about the final set, adapting methods that are recommended by the COMET Initiative for developing core outcome sets in health care. We used performance metrics identified from our systematic search and focus groups to create an online Delphi survey. We invited respondents to score each metric for inclusion in the final core set, over three survey rounds. Metrics scored as critical by ≥70% and unimportant by <15% of respondents were taken forward to a consensus meeting of representatives from key UK-based stakeholders. Participants in the consensus meeting discussed and voted on each metric, using anonymous electronic voting. Metrics with >50% of participants voting for inclusion were retained.

Results: Round 1 of the Delphi survey presented 28 performance metrics, and a further six were added in round 2. Of 294 UK-based stakeholders who registered for the Delphi survey, 211 completed all three rounds. At the consensus meeting, 17 metrics were discussed and voted on: 15 metrics were retained following survey round 3, plus two others that were preferred by consensus meeting participants. Consensus was reached on a final core set of eight performance metrics in three domains: (1) recruitment and retention, (2) data quality and (3) protocol compliance. A simple tool for visual reporting of the metrics is available from the Nottingham Clinical Trials Unit website.

Conclusions: We have established a core set of metrics for measuring the performance of sites in multicentre randomised trials. These metrics could improve trial conduct by enabling researchers to identify and address problems before trials are adversely affected. Future work could evaluate the effectiveness of using the metrics and reporting tool.
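The abstract describes two explicit decision rules: a Delphi retention rule (a metric goes forward if ≥70% of respondents score it critical and <15% score it unimportant) and a consensus-meeting voting rule (a metric is kept if >50% of participants vote for inclusion). The sketch below, which is purely illustrative and not taken from the paper, encodes those two thresholds; the example percentages and vote counts are hypothetical.

```python
# Illustrative sketch of the two consensus rules described in the abstract.
# The thresholds (>=70% critical, <15% unimportant, >50% of votes) come from
# the study; the example inputs below are hypothetical.

def delphi_retain(pct_critical: float, pct_unimportant: float) -> bool:
    """Take a metric forward if >=70% scored it critical and <15% unimportant."""
    return pct_critical >= 70.0 and pct_unimportant < 15.0

def meeting_retain(votes_for: int, votes_cast: int) -> bool:
    """Keep a metric if more than 50% of meeting participants vote for it."""
    return votes_for / votes_cast > 0.5

# Hypothetical scores for candidate metrics
print(delphi_retain(82.0, 9.0))   # True: meets both Delphi thresholds
print(delphi_retain(65.0, 4.0))   # False: below the 70% "critical" threshold
print(meeting_retain(12, 20))     # True: 60% voted for inclusion
```

Note that both rules use strict or non-strict comparisons exactly as the abstract states them: "≥70%" and ">50%" are not interchangeable at the boundary (a metric with exactly 50% of votes would be dropped).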
Original language: English
Article number: 557
Journal: Trials
Volume: 19
ISSN: 1745-6215
Publisher: BioMed Central
DOI: 10.1186/s13063-018-2940-9
Publication status: Published - 16 Oct 2018

Keywords

  • Multicentre randomised trials
  • Performance metrics
  • Delphi survey
  • Consensus meeting
  • Trial management

Cite this

Whitham, D., Turzanski, J., Bradshaw, L., Clarke, M., Culliford, L., Duley, L., ... Site Performance Metrics for Multicentre Randomised Trials Collaboration (2018). Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study. Trials, 19, [557]. https://doi.org/10.1186/s13063-018-2940-9

Funding

This study was supported by an NIHR Clinical Trials Unit Support Funding grant for supporting efficient and innovative delivery of NIHR research. The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR or the Department of Health and Social Care. The Health Services Research Unit, University of Aberdeen, receives core funding from the Chief Scientist Office of the Scottish Government Health Directorates. The study was not registered.

Availability of data and materials

The data generated or analysed during the current study are included in this published article (and its supplementary information files). Additional information is available from the corresponding author on reasonable request.
