
HHPE 402 - Physical Exam of the Spine and Upper Extremities in Athletic Training: Appraise RCTs

Steps

Steps for appraising an RCT:

  1. Find an RCT that addresses your clinical question
  2. Assess risk of bias and determine if results are trustworthy
  3. Determine if the effect is significant and generalizable

 

Source for most of the information on this page: Critical appraisal of randomised controlled trials, by Nik Bobrovitz, October 2016

Find RCTs

1. Find an RCT that addresses your clinical question

Limit your search to "Randomized Controlled Trials" in the following database:

Search 500,000+ RCTs in EBSCO 

Bias and Trustworthiness

2. Assess the risk of bias and decide if the results are trustworthy.

Validity

  • Internal validity: the extent to which the study is free from bias
    • Bias: systematic differences between groups (e.g. worse symptoms in one group)
    • Bias can be introduced by the design, conduct, or analysis of a study
    • Low risk of bias: differences in outcomes can be attributed to differences in the treatment given rather than to other variables (confounding)


If a study is internally valid, we then assess its external validity, also known as generalizability.


External validity: the extent to which the results apply outside the study setting

  • Can you use the results in your situation?
  • Assess whether your patients/setting are similar enough to those in the study 

Effect Significance

3. Determine if the effect is significant and generalizable

Significance:

  • What was the effect on the primary outcome?
  • Is the effect statistically significant? (see the sketch below)
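
The guide does not prescribe a particular statistical test, so the following is a minimal sketch only: it assumes a continuous primary outcome measured in two hypothetical groups (all numbers are made up) and uses Welch's two-sample t-test plus a 95% confidence interval for the mean difference, one common way to judge whether an effect is statistically significant.

```python
# Hedged sketch: hypothetical outcome scores for two groups; the data and the
# choice of Welch's t-test are illustrative assumptions, not the guide's method.
import numpy as np
from scipy import stats

intervention = np.array([42.0, 38.5, 35.0, 40.2, 37.8, 33.9, 36.4, 39.1])
comparison = np.array([45.3, 44.0, 41.7, 46.8, 43.2, 42.5, 44.9, 40.6])

# Effect on the primary outcome: difference in group means
effect = intervention.mean() - comparison.mean()

# Statistical significance: Welch's independent two-sample t-test
t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)

# 95% confidence interval for the mean difference (Welch-Satterthwaite df)
v1 = intervention.var(ddof=1) / len(intervention)
v2 = comparison.var(ddof=1) / len(comparison)
se = np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(intervention) - 1) + v2 ** 2 / (len(comparison) - 1))
ci_low, ci_high = stats.t.interval(0.95, df, loc=effect, scale=se)

print(f"Mean difference: {effect:.2f}  95% CI: ({ci_low:.2f}, {ci_high:.2f})  p = {p_value:.3f}")
```

A p-value below the trial's chosen threshold (conventionally 0.05) and a confidence interval that excludes zero suggest a statistically significant difference; clinical importance still has to be judged separately.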

 

External validity: were the patients and setting in the study similar to ours? 

 

Consider:

  • patient characteristics
  • feasibility and features of the intervention
  • clinical setting
  • standards of routine care

 

What is an RCT?

What is a randomized controlled trial?

  • A study in which participants are randomly allocated to an experimental group or a comparison group (a minimal allocation sketch follows this list)
  • The experimental group receives the intervention
  • The comparison group receives something different (no intervention, a placebo, or a different intervention)
  • Outcomes in each group are compared to determine the effect of the intervention
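
A minimal sketch of the "randomly allocated" step, assuming 20 hypothetical participants and a simple 1:1 shuffle; real trials generate the allocation sequence in advance and conceal it from recruiters.

```python
# Hedged illustration of simple random allocation; the participant IDs and the
# fixed seed are assumptions made so the example is small and reproducible.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

rng = random.Random(2016)      # fixed seed for a reproducible example
shuffled = participants[:]
rng.shuffle(shuffled)

experimental = shuffled[:10]   # receives the intervention
comparison = shuffled[10:]     # receives no intervention, a placebo, or an alternative

print("Experimental group:", experimental)
print("Comparison group:  ", comparison)
```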

Group Size

In an RCT the experimental and comparison groups are kept similar in size (one common balancing approach is sketched after the list below).

  • The only difference between the groups should be the intervention
  • Infer causality: differences in outcomes can then be attributed to differences in the treatment
Types of Bias

Table 8.4.a: A common classification scheme for bias

For each type of bias below, the description is followed by the relevant domains in the Collaboration's 'Risk of bias' tool.

  • Selection bias
    • Description: systematic differences between baseline characteristics of the groups that are compared.
    • Relevant domains: sequence generation; allocation concealment.
  • Performance bias
    • Description: systematic differences between groups in the care that is provided, or in exposure to factors other than the interventions of interest.
    • Relevant domains: blinding of participants and personnel; other potential threats to validity.
  • Detection bias
    • Description: systematic differences between groups in how outcomes are determined.
    • Relevant domains: blinding of outcome assessment; other potential threats to validity.
  • Attrition bias
    • Description: systematic differences between groups in withdrawals from a study.
    • Relevant domains: incomplete outcome data.
  • Reporting bias
    • Description: systematic differences between reported and unreported findings.
    • Relevant domains: selective outcome reporting (see also Chapter 10).

 

Source: Higgins, J. P. and Altman, D. G. (2008). Assessing Risk of Bias in Included Studies. In Cochrane Handbook for Systematic Reviews of Interventions (eds J. P. Higgins and S. Green). doi:10.1002/9780470712184.ch8