
Table 2 Quality assessment by the prediction model risk of bias assessment tool (PROBAST)

From: Critical appraisal and external validation of a prognostic model for survival of people living with HIV/AIDS who underwent antiretroviral therapy

Question / Answer / Rationale

Domain 1: Participants

 1.1 Were appropriate data sources used, e.g., cohort, RCT or nested case–control study data?

No

A nested case–control design was used without appropriate adjustment of the baseline hazard.

 1.2 Were all inclusions and exclusions of participants appropriate?

Probably no

Requiring complete laboratory blood tests before ART initiation may have led to biased selection of participants.

Overall risk of bias of Domain 1

High risk of bias

 

Domain 2: Predictors

 2.1 Were predictors defined and assessed in a similar way for all participants?

Probably yes

Laboratory results were obtained in a standardized manner, whereas self-reported data such as mode of HIV transmission might be subject to bias from self-interpretation. Nevertheless, only laboratory results were included in the final prognostic model.

 2.2 Were predictor assessments made without knowledge of outcome data?

Yes

The outcome was death, and predictor data were collected at patients' enrollment.

 2.3 Are all predictors available at the time the model is intended to be used?

Yes

All predictors (i.e., hemoglobin, CD4+ cell count, and HIV viral load) are routine laboratory assessments and easy to obtain.

Overall risk of bias of Domain 2

Low risk of bias

 

Domain 3: Outcome

 3.1 Was the outcome determined appropriately?

Probably no

1. How AIDS-related death was determined was unclear, so outcome misclassification is possible.

2. Given that loss to follow-up was not mentioned in the paper, participants who were lost to follow-up might be misclassified as being alive.

 3.2 Was a pre-specified or standard outcome definition used?

No information

Definition of AIDS-related death was not provided.

 3.3 Were predictors excluded from the outcome definition?

Yes

The outcome was death, which is objective.

 3.4 Was the outcome defined and determined in a similar way for all participants?

No information

The authors did not provide any information on how AIDS-related death was determined or whether its determination varied from patient to patient.

 3.5 Was the outcome determined without knowledge of predictor information?

Yes

The outcome was death, which is objective.

 3.6 Was the time interval between predictor assessment and outcome determination appropriate?

Yes

The time interval from ART initiation to the end of follow-up (12 years in total) was long enough to observe the death outcome.

Overall risk of bias of Domain 3

High risk of bias

 

Domain 4: Analysis

 4.1 Were there a reasonable number of participants with the outcome?

No

The number of events per variable was 105 deaths / 35 = 3, which is too small (see the note after the table).

 4.2 Were continuous and categorical predictors handled appropriately?

Probably no

Continuous predictors (CD4+ cell count and hemoglobin) were not examined for nonlinearity; in general, these two variables are right-skewed and should be log-transformed before entering the model.

 4.3 Were all enrolled participants included in the analysis?

No

Among the 3584 patients in the control group, only 600 could be matched and included in the analyses; the remaining patients, who could not be successfully matched, were excluded.

 4.4 Were participants with missing data handled appropriately?

Probably no

Although multiple imputation was used, the specific method for analyzing the imputed datasets (e.g., how estimates were pooled across imputations) was not reported.

 4.5 Was selection of predictors based on univariable analysis avoided?

No

Predictor selection was based entirely on p values from univariable Cox analyses and on ROC analyses.

 4.6 Were complexities in the data (e.g., censoring, competing risks, sampling of controls) accounted for appropriately?

No

1. Censoring was not mentioned and might not have been handled properly.

2. Non-AIDS-related death was not accounted for as a competing risk for AIDS-related death.

3. The propensity-score matching approach was misused.

 4.7 Were relevant model performance measures evaluated appropriately?

Probably yes

Discrimination was assessed with the concordance index, and calibration was assessed with a calibration curve.

 4.8 Were model overfitting and optimism in model performance accounted for?

No

Internal validation consisted only of a single random split of the participant data and did not repeat all model-development procedures, including variable selection (an illustrative bootstrap sketch follows the table).

 4.9 Do predictors and their assigned weights in the final model correspond to the results from multivariable analysis?

No

The final model included only a subset of the predictors from the reported multivariable regression analysis, and the smaller model was not refitted.

Overall risk of bias of Domain 4

High risk of bias
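Note on item 4.1: the events-per-variable (EPV) figure follows directly from the counts reported above; a commonly cited rule of thumb for prognostic model development is an EPV of at least 10, with 20 or more often preferred.

$$\mathrm{EPV} = \frac{\text{number of events}}{\text{number of candidate predictors}} = \frac{105}{35} = 3$$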

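Note on items 4.7 and 4.8: the sketch below is illustrative only and is not taken from the appraised study. It shows one common way to obtain an optimism-corrected concordance index with a Harrell-style bootstrap, repeating the univariable selection step inside every resample; it assumes a pandas DataFrame with hypothetical column names and the lifelines library.

```python
"""Illustrative sketch: optimism-corrected c-index via bootstrap,
repeating variable selection inside every resample.
Column and predictor names below are hypothetical placeholders."""
import numpy as np
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

PREDICTORS = ["hemoglobin", "cd4_count", "log_viral_load"]  # hypothetical names
TIME, EVENT = "time_months", "death"                        # hypothetical names


def select_predictors(df, candidates, alpha=0.05):
    """Univariable Cox screening by p value (the selection strategy being appraised)."""
    kept = [p for p in candidates
            if CoxPHFitter()
               .fit(df[[p, TIME, EVENT]], duration_col=TIME, event_col=EVENT)
               .summary.loc[p, "p"] < alpha]
    return kept or list(candidates)  # fall back to all candidates if none pass


def c_index(model, df):
    # A higher partial hazard means shorter expected survival, hence the minus sign.
    risk = np.asarray(model.predict_partial_hazard(df)).ravel()
    return concordance_index(df[TIME], -risk, df[EVENT])


def optimism_corrected_c_index(df, n_boot=200, seed=0):
    # Apparent performance of the model built on the full data.
    kept = select_predictors(df, PREDICTORS)
    full = CoxPHFitter().fit(df[kept + [TIME, EVENT]],
                             duration_col=TIME, event_col=EVENT)
    apparent = c_index(full, df)

    optimism = []
    for i in range(n_boot):
        boot = df.sample(n=len(df), replace=True, random_state=seed + i)
        kept_b = select_predictors(boot, PREDICTORS)  # selection repeated per resample
        m = CoxPHFitter().fit(boot[kept_b + [TIME, EVENT]],
                              duration_col=TIME, event_col=EVENT)
        # Bootstrap (apparent) performance minus test performance on the original data.
        optimism.append(c_index(m, boot) - c_index(m, df))

    return apparent - float(np.mean(optimism))
```

The point reflected in item 4.8 is that every data-driven modelling step, including predictor selection, has to be repeated within each bootstrap sample for the optimism estimate to be honest; a single random split that validates only the final, already-selected model does not achieve this.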