Table 2 Overview of risk of bias, applicability, usability, and similarity in study design of development and validation studies

From: Does poor methodological quality of prediction modeling studies translate to poor model performance? An illustration in traumatic brain injury

Model development studies (N = 10 development studies)

  Overall risk of bias of development studies
    High                  6    60%
    Low                   2    20%
    Unclear               2    20%

  Applicability of development studies
    High                  3    30%
    Low                   7    70%
    Unclear               0     0%

  Usability of models
    Research
      Yes                 4    40%
      No                  6    60%
    Clinical practice
      Yes                 9    90%
      No                  1    10%

External validation studies (N = 245)

  Similarity in study design between development and validation cohorts
    Similar             147    60%
    Cohort to trial      26    11%
    Trial to cohort      71    29%
    NA                    1

  Relatedness
    Related              35    14%
    Moderately related   45    18%
    Distantly related   164    67%
    NA                    1

  1. Risk of bias: risk of bias was assessed with the original PROBAST (Supplementary Table 3)
  2. Usability: a model was deemed usable in research if the full model equation, or sufficient information to extract the baseline risk (intercept) and individual predictor effects, was reported; it was deemed usable in clinical practice if an alternative presentation of the model was included (e.g., a nomogram, score chart, or web calculator)
  3. Relatedness: to judge relatedness, we created a relatedness rubric aiming to capture various levels of relatedness by dividing the validation studies into three categories: “related,” “moderately related,” and “distantly related” (Supplementary Table 4)
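
For orientation, the percentages shown appear to be each count divided by the section total (N = 10 for the development studies, N = 245 for the validation studies), rounded to whole percentages; the NA rows are not assigned a percentage. For example:

$$\frac{6}{10} = 60\%, \qquad \frac{147}{245} \approx 60\%, \qquad \frac{26}{245} \approx 11\%$$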