Research designs for studies evaluating the effectiveness of change and improvement strategies

Research output: Contribution to journal › Article

331 Citations (Scopus)

Abstract

The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed, including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple: those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.
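As an illustrative aside (not part of the original article): the time series analysis the abstract refers to is commonly operationalised as segmented regression on an interrupted time series, estimating a baseline trend plus a level change and a slope change at the intervention point. A minimal sketch on simulated data — the monthly compliance rates and all parameter values are hypothetical, and the model is fitted with ordinary least squares via NumPy:

```python
import numpy as np

# Hypothetical monthly compliance rates: 12 months before and 12 after an intervention.
rng = np.random.default_rng(0)
t = np.arange(24)
post = (t >= 12).astype(float)               # indicator: observation is after the intervention
time_since = np.where(t >= 12, t - 12, 0.0)  # months elapsed since the intervention

# Simulated outcome: baseline level 50, slow secular trend, then a jump and a steeper slope.
y = 50 + 0.2 * t + 8 * post + 0.5 * time_since + rng.normal(0, 1, 24)

# Segmented regression design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones(24), t, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend, level_change, slope_change = coef
print(f"level change ~ {level_change:.1f}, slope change ~ {slope_change:.2f}")
```

The level-change and slope-change coefficients are the quantities of interest in such an evaluation; with enough pre-intervention points the baseline trend term guards against mistaking secular improvement for an intervention effect, which is the weakness of an uncontrolled before-and-after comparison.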

Original language: English
Pages (from-to): 47-52
Number of pages: 5
Journal: Quality & Safety in Health Care
Volume: 12
Issue number: 1
DOI: 10.1136/qhc.12.1.47
Publication status: Published - 2003

Keywords

  • PRIMARY-CARE
  • RANDOMIZATION
  • TRIALS
  • AUDIT

Cite this

@article{e5eca76b4cc149018be5869be22f7d2d,
title = "Research designs for studies evaluating the effectiveness of change and improvement strategies",
abstract = "The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed, including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple: those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.",
keywords = "PRIMARY-CARE, RANDOMIZATION, TRIALS, AUDIT",
author = "Eccles, {M. P.} and Campbell, {Marion Kay} and Ramsay, {Craig R}",
year = "2003",
doi = "10.1136/qhc.12.1.47",
language = "English",
volume = "12",
pages = "47--52",
journal = "Quality & safety in health care",
issn = "1475-3898",
publisher = "BMJ Publishing Group",
number = "1",
}

TY - JOUR

T1 - Research designs for studies evaluating the effectiveness of change and improvement strategies

AU - Eccles, M. P.

AU - Campbell, Marion Kay

AU - Ramsay, Craig R

PY - 2003

Y1 - 2003

N2 - The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed, including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple: those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.

AB - The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed, including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple: those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.

KW - PRIMARY-CARE

KW - RANDOMIZATION

KW - TRIALS

KW - AUDIT

U2 - 10.1136/qhc.12.1.47

DO - 10.1136/qhc.12.1.47

M3 - Article

VL - 12

SP - 47

EP - 52

JO - Quality & safety in health care

JF - Quality & safety in health care

SN - 1475-3898

IS - 1

ER -