Estimating Site Performance (ESP): can trial managers predict recruitment success at trial sites? An exploratory study

Hanne Bruhn* (Corresponding Author), Shaun Treweek, Anne Duncan, Kirsty Shearer, Sarah Cameron, Karen Campbell, Karen Innes, Dawn McRae, Seonaidh C. Cotton

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Background
Multicentre randomised trials provide some of the key evidence underpinning healthcare practice around the world. They are also hard work and generally expensive. Some of this work and expense are devoted to sites that fail to recruit as many participants as expected. Methods to identify sites that will recruit to target would be helpful.

Methods
We asked trial managers at the Centre for Healthcare Randomised Trials (CHaRT), University of Aberdeen to predict whether a site would recruit to target. Predictions were made after a site initiation visit and were collected on a form comprising a simple ‘Yes/No’ prediction and a reason for the prediction. We did not provide guidance as to what trial managers might want to think about when making predictions.

After a minimum of eight months of recruitment at each site for which a prediction had been made, all trial managers in CHaRT were invited to a group discussion where predictions were presented together with sites’ actual recruitment performance over that period. Individual trial managers reflected on their predictions and there was a general discussion about predicting site recruitment. The prediction reasons from the forms and the content of the group discussion were used to identify features linked to correct predictions of recruitment failure.

Results
Ten trial managers made predictions for 56 site visits across eight trials. Their predictions had a sensitivity of 82% and a specificity of 32%: a prediction that a site would hit its recruitment target was correct 65% of the time, and a prediction that it would not was correct 54% of the time. Eight ‘red flags’ for recruitment failure were identified: previous poor site performance; a slow approvals process; strong staff/patient preferences; the site recruitment target; the trial protocol and its implementation at the site; lack of staff engagement; lack of research experience among site staff; and busy site staff. We used these red flags to develop a guided prediction form.
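
For readers who want to see how these headline figures relate to the underlying ‘Yes/No’ predictions, the following minimal Python sketch computes sensitivity, specificity and predictive values from paired prediction/outcome data. The example data are hypothetical and are not taken from the ESP dataset.

    # Illustrative only: hypothetical predictions, not the ESP study data.
    # Each pair is (predicted to recruit to target, actually recruited to target).
    pairs = [
        (True, True), (True, False), (False, False), (True, True),
        (False, True), (True, True), (True, False), (False, False),
    ]

    tp = sum(p and o for p, o in pairs)          # predicted yes, hit target
    fp = sum(p and not o for p, o in pairs)      # predicted yes, missed target
    tn = sum(not p and not o for p, o in pairs)  # predicted no, missed target
    fn = sum(not p and o for p, o in pairs)      # predicted no, hit target

    sensitivity = tp / (tp + fn)  # successful sites predicted correctly
    specificity = tn / (tn + fp)  # failing sites predicted correctly
    ppv = tp / (tp + fp)          # 'will recruit' predictions that were right
    npv = tn / (tn + fn)          # 'will not recruit' predictions that were right

    print(f"sensitivity={sensitivity:.0%}  specificity={specificity:.0%}  "
          f"PPV={ppv:.0%}  NPV={npv:.0%}")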

Conclusions
Trial managers’ unguided recruitment predictions were not bad but were not good enough for decision-making. We have developed a modified prediction form that includes eight flags to consider before making a prediction. We encourage anyone interested in contributing to its evaluation to contact us.
Original language: English
Article number: 192
Journal: Trials
Volume: 20
DOIs
Publication status: Published - 3 Apr 2019

Bibliographical note

Availability of data and materials
All quantitative data generated and analysed during this study are included in this published article and its supplementary information files (Additional file 3).

The dataset of predictions used and analysed during the current study is available from the corresponding author on reasonable request.

The transcript of the group discussion generated and analysed during the current study is not publicly available because it contains information that could compromise research participant consent (it would be a relatively simple matter to identify trials and trial managers), but it is available from the corresponding author on reasonable request.

Keywords

  • recruitment
  • clinical trials
  • trial managers
  • recruitment sites
  • recruitment performance
