Impact of sample size on the stability of risk scores from clinical prediction models: a case study in cardiovascular disease

Background Stability of risk estimates from prediction models may be highly dependent on the sample size of the dataset available for model derivation. In this paper, we evaluate the stability of cardiovascular disease risk scores for individual patients when different sample sizes are used for model derivation; these include sample sizes similar to those used to develop models recommended in national guidelines, and sample sizes based on recently published sample size formulae for prediction models.

Methods We mimicked the process of sampling N patients from a population to develop a risk prediction model by sampling patients from the Clinical Practice Research Datalink. A cardiovascular disease risk prediction model was developed on each sample and used to generate risk scores for an independent cohort of patients. This process was repeated 1000 times, giving a distribution of risks for each patient. We considered N = 100,000, 50,000, 10,000, Nmin (derived from the sample size formulae) and Nepv10 (meeting the 10 events per predictor rule). The 5th–95th percentile range of risks across these models was used to evaluate instability. To summarise results, patients were grouped by the risk derived from a model developed on the entire population (population-derived risk).

Results For a sample size of 100,000, the median 5th–95th percentile range of risks for patients across the 1000 models was 0.77%, 1.60%, 2.42% and 3.22% for patients with population-derived risks of 4–5%, 9–10%, 14–15% and 19–20% respectively; for N = 10,000 it was 2.49%, 5.23%, 7.92% and 10.59%, and for the formula-derived sample size (Nmin) it was 6.79%, 14.41%, 21.89% and 29.21%. Restricting this analysis to models with high discrimination, good calibration or small mean absolute prediction error reduced the percentile range, but high levels of instability remained.

Conclusions Widely used cardiovascular disease risk prediction models suffer from high levels of instability induced by sampling variation. Many models will also suffer from overfitting (a closely linked concept), but even at acceptable levels of overfitting there may still be high levels of instability in individual risk. Stability of risk estimates should be a criterion when determining the minimum sample size for developing models.
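To make the resampling procedure described in the Methods concrete, a minimal sketch is given below. This is illustrative only and is not the code used in the study: `develop_and_score` is a hypothetical placeholder for fitting a CVD risk model on a sample and returning predicted risks for the test cohort, and simple random sampling without replacement from the development population is assumed.

```python
import numpy as np
import pandas as pd

def stability_experiment(development_pop: pd.DataFrame,
                         test_cohort: pd.DataFrame,
                         n_sample: int,
                         develop_and_score,          # callable: (sample, test_cohort) -> risks
                         n_repeats: int = 1000,
                         seed: int = 1) -> pd.DataFrame:
    """Develop a model on repeated samples of size n_sample and collect the
    predicted risks it gives for a fixed, independent test cohort."""
    rng = np.random.default_rng(seed)
    risks = np.empty((n_repeats, len(test_cohort)))
    for i in range(n_repeats):
        # Mimic collecting a development dataset of N patients from the population
        sample = development_pop.sample(n=n_sample,
                                        random_state=int(rng.integers(0, 2**31 - 1)))
        risks[i, :] = develop_and_score(sample, test_cohort)
    # Instability summary: 5th-95th percentile range of predicted risk per patient
    p5, p95 = np.percentile(risks, [5, 95], axis=0)
    return pd.DataFrame({"risk_p5": p5, "risk_p95": p95, "range_5_95": p95 - p5})
```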

The main challenge in following these recommendations is the need for $R^2_{CS,adj}$ [4] (an unbiased estimate of the Cox–Snell [5] $R^2$) when calculating the sample size, as it can only be calculated after fitting the model. It is recommended to use metrics reported for previous prediction models developed on similar populations to estimate $R^2_{CS,adj}$. In this study, we can instead use the model developed on the whole development cohort to calculate $R^2_{CS,adj}$ directly. This value of $R^2_{CS,adj}$ allows us to calculate the minimum required sample size for a model developed in this population.

Calculation of Nmin
We illustrate the steps for calculating Nmin for the female cohort, following the process outlined in Riley et al. [1].

Criterion (i)
We start by calculating

$$R^2_{CS,app} = 1 - \exp\left(-\frac{LR}{n}\right),$$

where $R^2_{CS,app}$ is a biased (apparent) estimate of the Cox–Snell [5] $R^2$ (based on the work by Magee [6]), $LR$ is the likelihood ratio statistic of the model developed on the entire population, and $n$ = 1,865,078 is the size of the cohort used in that model. Next we calculate

$$S_{VH} = 1 + \frac{p}{n \ln\left(1 - R^2_{CS,app}\right)},$$

where $S_{VH}$ is the global shrinkage factor of Van Houwelingen and Le Cessie [2], and $p$ = 13 is the number of predictor parameters. There are 9 predictor variables, but Smoking contributes two dummy variables (categories = yes/ex/never) and Townsend contributes four dummy variables (5 deprivation categories), so the remaining 7 variables contribute one parameter each and $p$ = 7 + 2 + 4 = 13. Then we can calculate

$$R^2_{CS,adj} = S_{VH} \times R^2_{CS,app}.$$

To obtain a model with a shrinkage of at least $S_{VH}$ = 0.9, as recommended in the sample size guidance [1], we use the following formula:

$$N_{min} = \frac{p}{\left(S_{VH} - 1\right)\ln\left(1 - R^2_{CS,adj}/S_{VH}\right)} = 1434.$$
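As an illustration, criterion (i) can be written as a short function. This is a sketch only; the likelihood ratio statistic of the full-cohort model is not reproduced here, so the example call uses a placeholder.

```python
import math

def n_min_criterion_i(lr: float, n: int, p: int, s_target: float = 0.9) -> float:
    """Minimum sample size so that the expected global shrinkage is at least s_target."""
    r2_cs_app = 1 - math.exp(-lr / n)               # apparent Cox-Snell R^2 (Magee)
    s_vh = 1 + p / (n * math.log(1 - r2_cs_app))    # Van Houwelingen / Le Cessie shrinkage
    r2_cs_adj = s_vh * r2_cs_app                    # adjusted Cox-Snell R^2
    return p / ((s_target - 1) * math.log(1 - r2_cs_adj / s_target))

# Example (placeholder LR; cohort size and predictor count as used above):
# n_min = n_min_criterion_i(lr=..., n=1_865_078, p=13)   # reported Nmin = 1434
```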

Criterion (ii)
In order for the difference between the apparent and adjusted Nagelkerke [3] $R^2$ to be acceptably small, the following inequality must be satisfied:

$$S_{VH} \geq \frac{R^2_{CS,adj}}{R^2_{CS,adj} + \delta \times \max\left(R^2_{CS}\right)},$$

where $S_{VH}$ = 0.9 is the desired shrinkage, $\delta$ is the acceptable difference between the apparent and adjusted Nagelkerke $R^2$, and $\max(R^2_{CS})$ is the maximum possible value of the Cox–Snell $R^2$ for this outcome.
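A corresponding sketch for the criterion (ii) check is below; the adjusted $R^2$, $\delta$ and $\max(R^2_{CS})$ values are inputs taken from the full-cohort model and are not reproduced here.

```python
def shrinkage_required(r2_cs_adj: float, delta: float, max_r2_cs: float) -> float:
    """Shrinkage needed so the apparent and adjusted Nagelkerke R^2 differ by <= delta."""
    return r2_cs_adj / (r2_cs_adj + delta * max_r2_cs)

def criterion_ii_met(r2_cs_adj: float, delta: float, max_r2_cs: float,
                     s_target: float = 0.9) -> bool:
    # If the shrinkage required here does not exceed the 0.9 targeted in criterion (i),
    # the sample size from criterion (i) also satisfies criterion (ii).
    return s_target >= shrinkage_required(r2_cs_adj, delta, max_r2_cs)
```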

Criterion (iii)
This requires the width of the confidence interval around the cumulative incidence at $t$, the time point of interest, to be smaller than 0.05. We assume an exponential distribution for the time to event, which is the simplest approach. At this sample size, the width of the confidence interval is 0.0290 < 0.05, so the criterion is met.
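One simple way to compute such an interval under the exponential assumption is sketched below. This is our own illustrative construction (a log-scale interval for the event rate, transformed to the cumulative incidence at $t$) and may differ in detail from the exact calculation used; `rate` and `events` denote the estimated event rate and the expected number of events at the candidate sample size.

```python
import math

def ci_width_cumulative_incidence(rate: float, events: float, t: float) -> float:
    """Approximate 95% CI width for the cumulative incidence F(t) = 1 - exp(-rate * t),
    assuming exponential event times and a log-normal interval for the rate."""
    se_log_rate = 1 / math.sqrt(events)             # SE of log(rate) given `events` events
    lower = 1 - math.exp(-rate * math.exp(-1.96 * se_log_rate) * t)
    upper = 1 - math.exp(-rate * math.exp(+1.96 * se_log_rate) * t)
    return upper - lower
```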
Therefore the value of Nmin = 1434 satisfies all the criteria and is included as a sample size in our main analysis.
The exact same process was followed for the male cohort, and the value of Nmin = 1405 was found to satisfy all the criteria.

Calculation of Nepv10
Female cohort
There are 13 coefficients, meaning 130 events are required to meet the 10 events per predictor rule. There are 82,065 events in the development cohort of size 1,865,079. This means there are 0.0440 events per person, and 2,954 individuals are required (on average) to obtain 130 events.

Male cohort
There are 13 coefficients, meaning 130 events are required to meet the 10 events per predictor rule. There are 101,360 events in the development cohort of size 1,790,582. This means there are 0.0566 events per person, and 2,296 individuals are required (on average) to obtain 130 events.
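The same calculation for both cohorts can be written as a small sketch (cohort sizes and event counts as above; the raw values are printed without rounding, alongside the figures reported in the text).

```python
def n_epv10(n_cohort: int, n_events: int, n_coefficients: int, epv: int = 10) -> float:
    """Average sample size needed to observe `epv` events per predictor coefficient."""
    events_needed = epv * n_coefficients        # e.g. 10 x 13 = 130 events
    events_per_person = n_events / n_cohort     # observed event rate in the cohort
    return events_needed / events_per_person

print(n_epv10(1_865_079, 82_065, 13))    # ~2954.5, reported as 2,954 (female cohort)
print(n_epv10(1_790_582, 101_360, 13))   # ~2296.5, reported as 2,296 (male cohort)
```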