A relationship between the incremental values of area under the ROC curve and of area under the precision-recall curve

Background Incremental value (IncV) evaluates the performance change between an existing risk model and a new model. Different IncV metrics do not always agree with each other. For example, compared with a prescribed-dose model, an ovarian-dose model for predicting acute ovarian failure has a slightly lower area under the receiver operating characteristic curve (AUC) but increases the area under the precision-recall curve (AP) by 48%. This phenomenon of disagreement is not uncommon, and it can create confusion when assessing whether the added information improves the model's prediction accuracy. Methods In this article, we examine the analytical connections and differences between the AUC IncV (ΔAUC) and the AP IncV (ΔAP). We also compare the true values of these two IncV metrics in a numerical study. Additionally, since both are semi-proper scoring rules, in the numerical study we also compare them with the IncV of the scaled Brier score (ΔsBrS), a strictly proper scoring rule. Results We demonstrate that ΔAUC and ΔAP are both weighted averages of the changes (from the existing model to the new one) in separating the risk score distributions between events and non-events. However, ΔAP assigns heavier weights to the changes in higher-risk regions, whereas ΔAUC weights the changes equally. Due to this difference, the two IncV metrics can disagree, and the numerical study shows that their disagreement becomes more pronounced as the event rate decreases. In the numerical study, we also find that ΔAP has a wide range, from negative to positive, whereas the range of ΔAUC is much smaller. In addition, ΔAP and ΔsBrS are highly consistent, but ΔAUC is negatively correlated with ΔsBrS and ΔAP when the event rate is low. Conclusions ΔAUC treats the wins and losses of a new risk model equally across different risk regions. When neither the existing nor the new model is the true model, this equal weighting can attenuate a superior performance of the new model in a sub-region.
In contrast, ΔAP accentuates the change in the prediction accuracy for higher-risk regions. Supplementary Information The online version contains supplementary material available at (10.1186/s41512-021-00102-w).


Introduction
Risk prediction is crucial in many medical decision-making settings, such as managing disease prognosis. A large body of research has been dedicated to continually updating risk models for better prediction accuracy. For example, several papers have investigated the improvement in predicting the risk of cardiovascular disease from adding new biomarkers to the existing Framingham risk model, such as the C-reactive protein [1,2] and, more recently, a polygenic risk score [3,4].
In some applications, an existing marker is replaced with a new marker that provides more precise information. For example, cancer treatment such as radiation can have significant long-term health consequences for cancer survivors. Prescribed radiation doses to body regions, such as the abdomen and chest, are routinely available in medical charts. But to predict the risk of an organ-specific outcome, e.g., secondary lung cancer or ovarian failure, a more precise measurement of the radiation exposure to specific organs provides better information. Radiation oncologists developed and applied algorithms to estimate these organ-specific exposures [5].
The measurement of a new marker or the more precise measurement of a known risk factor is often costly and time-consuming. Thus, it is important to verify that the new model indeed has a measurably better prediction performance than the existing one and is thus worth the extra resources. A number of metrics have been proposed to evaluate the incremental value (IncV) of a risk model that incorporates the new information. The IncV has primarily been discussed in settings where new markers are added to the existing risk profile [6,7]. In this paper, the term IncV refers to the change in prediction performance whenever an existing risk model is compared with a new one.
In medical research, the receiver operating characteristic (ROC) curve has been and remains the most popular tool for evaluating the prediction accuracy of a risk model, dating back to the 1960s when it was applied in diagnostic radiology and imaging systems [8,9]. The area under the ROC curve (AUC) captures the discriminatory ability of a model, i.e., how well a model separates events (subjects who experience the event of interest) from non-events (subjects who are event-free). More recently, the precision-recall (PR) curve has been gaining popularity [10-13]. Originating in the information retrieval community in the 1980s [14,15], it is a relatively new tool in medical research. The area under the PR curve is called the average positive predictive value or the average precision (AP) [16-18]. Several papers suggest that the PR curve and AP are more informative than the ROC curve and AUC for evaluating a risk model's prediction performance for an unbalanced outcome, i.e., when the event rate is low [16,19,20]. Davis and Goadrich established a one-to-one correspondence between an ROC curve and a PR curve [21]: when comparing the prediction performance of two risk models, e.g., new versus existing, the ROC curve of the new model dominates that of the existing model if and only if the PR curve of the new model dominates that of the existing model. However, when the ROC and PR curves of the two models cross, it is not uncommon for the IncVs of the AUC and AP to contradict each other. Clark et al. [22] investigated two models for predicting acute ovarian failure among female childhood cancer survivors. Compared with the prescribed-dose model, the ovarian-dose model has a slightly lower AUC but an AP about 48% higher. The disagreement creates confusion in determining whether the updated risk score improves the prediction accuracy.
In this article, we investigate the analytical connection and difference between the IncVs of AUC and of AP with respect to their true values derived from the underlying data generating mechanism. Unlike previous works investigating the inconsistency between the AUC and AP mainly via simulation studies, our numerical study focuses on the true values, not estimates, of these two IncV metrics. In addition, we examine the effect of the event rate on their (dis)agreement, both analytically and numerically.

Notation and definitions
First, we lay out the notation and define the concepts used throughout this article. Let D = 0 or 1 denote a binary outcome. For studies with an event time T, define D = I(T ≤ τ) for a given prediction time period τ, in which case the outcome is time-dependent. In this article, we refer to subjects with D = 1 as the events and those with D = 0 as the non-events. Let π = Pr(D = 1) denote the event rate.

Risk model and risk score
A risk model is a function of a set of predictors X = (X1, ⋯, X_{k−1}), which might include interaction terms and polynomial terms, that yields the probability of D = 1. Usually, we write this model as a regression model:

p(X) = g(β0 + β1 X1 + ⋯ + β_{k−1} X_{k−1}),    (1)

where g(·) is a smooth and monotonic function, e.g., the inverse of a logit link. For censored event time outcomes, a risk model could be Cox's proportional hazards model [23] or the time-specific generalized linear model [24]; both models can be expressed in the general form of Eq. (1) with modifications. In practice, the underlying data-generating mechanism is often complicated, and the working risk model in Eq. (1) is usually misspecified. Let π(X) = Pr(D = 1 | X) denote the true probability of D = 1, which is determined by the underlying distribution of D given X. Here, we refer to π(X) as the true risk and to p(X) as the working risk from a working risk model. When the working risk model in Eq. (1) is misspecified, p(X) ≠ π(X). The working risk p(X) can be regarded as a risk score and used to classify subjects into different risk categories. For example, given a cut-off value c, subjects with p(X) ≤ c are classified into the low-risk group, whereas the high-risk group consists of subjects with p(X) > c. In general, a risk score, denoted r(X), can be any function of X that reflects how likely a subject is to be an event. Thus, r(X) can be any non-decreasing transformation of p(X), e.g., r(X) = g⁻¹(p(X)) = β0 + β1 X1 + ⋯ + β_{k−1} X_{k−1}.
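To make the working-risk construction concrete, here is a minimal sketch (ours, not from the paper; the coefficients are hypothetical) of computing p(X) under an inverse-logit g and forming risk groups with a cut-off:

```python
import numpy as np

# Hypothetical coefficients (beta_0, beta_1, beta_2), for illustration only
beta = np.array([-2.0, 0.8, 0.5])

def working_risk(X):
    """p(X) = g(beta_0 + beta_1*X_1 + ...), with g the inverse-logit,
    so that r(X) = g^{-1}(p(X)) is the linear predictor."""
    linear = beta[0] + X @ beta[1:]
    return 1.0 / (1.0 + np.exp(-linear))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
p = working_risk(X)

# Cut-off c: low-risk group has p(X) <= c, high-risk group has p(X) > c
c = 0.2
high_risk = p > c
```

Because g is monotonic, p(X) and the linear predictor rank subjects identically, which is why either can serve as the risk score r(X).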

Remark 1
In practice, the working risk p(X) is estimated from a data sample. The estimated regression coefficients β̂_j, j = 0, 1, ⋯, k − 1, are the solution to an estimating equation; the sampling variability of these estimates is not of interest here. In this article, we investigate the predictive performance of the population working risk p*(X) = g(β*_0 + β*_1 X1 + ⋯ + β*_{k−1} X_{k−1}), where (β*_0, ⋯, β*_{k−1}) solves the population version of the estimating equation, with the expectation taken under the true joint distribution of (D, X), and β*_j = lim_{n→∞} β̂_j.

Accuracy measures and IncV metrics
The AUC and AP can be defined for any risk score r(X) since they are rank-based. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR). Given a cut-off value c, the TPR and FPR are the proportions of risk scores r(X) > c among the events and non-events, respectively, i.e., TPR(c) = Pr[r(X) > c | D = 1] and FPR(c) = Pr[r(X) > c | D = 0]. The AUC can be interpreted as the probability that, given a pair of an event and a non-event, the event is assigned the higher risk score, i.e., AUC = Pr[r1(X) > r0(X)], where r1(X) and r0(X) denote the risk scores of an event and a non-event, respectively. The PR curve plots the positive predictive value (PPV) against the TPR. The PPV is defined as PPV(c) = Pr[D = 1 | r(X) > c], the proportion of events among subjects with higher risk scores. The AP can be expressed as AP = E[PPV(r1(X))] [18], where the expectation is taken under the distribution of r1(X). The AP is event-rate dependent [18]; in contrast, the AUC does not depend on π since it conditions on the event status.
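These rank-based definitions are easy to check empirically. Below is a small sketch (ours, not the paper's estimator) that computes the AUC as Pr[r1 > r0] over all event/non-event pairs and the AP as the average PPV evaluated at each event's own score:

```python
import numpy as np

def auc(r1, r0):
    """AUC = Pr(r1 > r0): chance a random event outranks a random
    non-event (ties counted as 1/2, the usual convention)."""
    e = np.asarray(r1)[:, None]
    n = np.asarray(r0)[None, :]
    return float((e > n).mean() + 0.5 * (e == n).mean())

def ap(r1, r0):
    """AP = E[PPV(r1)]: average, over events, of the PPV at that
    event's own score; depends on the sample's event rate."""
    r1 = np.asarray(r1); r0 = np.asarray(r0)
    ppvs = []
    for c in r1:
        tp = (r1 >= c).sum()   # events scored at or above c
        fp = (r0 >= c).sum()   # non-events scored at or above c
        ppvs.append(tp / (tp + fp))
    return float(np.mean(ppvs))

rng = np.random.default_rng(1)
r0 = rng.normal(0.0, 1.0, size=5000)   # non-event scores
r1 = rng.normal(1.5, 1.0, size=250)    # event scores (event rate ~ 5%)
```

The distributional parameters here are illustrative; the point is that `auc` ignores the 250:5000 ratio while `ap` is driven by it.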
Let A_old and A_new denote an accuracy measure A (e.g., the AUC or AP) for the existing and new risk models, respectively. The IncV is defined as ΔA = A_new − A_old, which quantifies the change in A when comparing the new model with the existing one.

Data example
Acute ovarian failure (AOF) is a treatment-associated complication caused by ovarian exposure to radiation and chemotherapy. It is defined as the permanent loss of ovarian function within 5 years of a cancer diagnosis or no menarche after cancer treatment by age 18. About 6% of female childhood cancer survivors have AOF. We evaluate and compare two recently published risk models [22] that predict AOF on an external validation dataset, the St. Jude Lifetime Cohort [25], which consists of 875 survivors with 50 AOF events.
Both models include the following risk factors: age at cancer diagnosis, cumulative dose of alkylating drugs measured using the cyclophosphamide-equivalent dose, hematopoietic stem-cell transplant, and radiation exposure. The difference between the two models lies in the measurement of radiation exposure. The prescribed-dose model uses the prescribed radiation doses to the abdominal and pelvic regions, which are routinely available in medical charts. The ovarian-dose model uses the minimum of the organ-specific radiation exposures estimated for the two ovaries by radiation oncologists. The equation for calculating the AOF risk with each model was developed using the Childhood Cancer Survivor Study and is given in the supplementary material of Clark et al. (2020) [22]. Figure 1a and b show the ROC curves and PR curves of these two models. The estimated AUC is 0.96 for the prescribed-dose model and 0.94 for the ovarian-dose model; ΔAUC is estimated to be −0.02. The estimated AP is 0.46 for the prescribed-dose model and 0.68 for the ovarian-dose model; the estimated ΔAP is 0.22. The estimation procedure is explained in the Appendix.
Based on the AUC, we would conclude that the more expensive ovary dosimetry does not improve the prediction accuracy at all. However, based on the AP, the ovarian-dose model clearly outperforms the prescribed-dose model. Why do these two metrics give conflicting conclusions?

Analytical comparisons between AUC and AP
To answer this question, we first investigate the connections and differences between the AUC and AP using three hypothetical risk scores: r1, r2, and r3. We assume that all three risk scores follow a standard normal distribution among non-events, i.e., rj | D = 0 ~ N(0, 1) for j = 1, 2, 3, but that their distributions among events differ. Figure 2 presents the comparisons of these three risk scores under an event rate π = 0.05. Figure 2a shows their density curves stratified by events and non-events. Among them, the two density curves of r3 are the most separated. Thus, the ROC and PR curves of r3 dominate those of r1 and r2 (Fig. 2b and c), and consequently, r3 has the largest AUC and AP. In contrast, the ROC and PR curves of r1 and r2 cross: r2 has a slightly larger AUC, with AUC_r2 − AUC_r1 = 0.007, whereas r1 has a considerably larger AP, with AP_r1 − AP_r2 = 0.096. Figure 3 exhibits the comparisons between r1 and r2 for three different event rates, π = 0.2, 0.05, and 0.01. Analytically, both the AUC and AP measure the separation of the risk score distributions between events and non-events. Let F1(·) and F0(·) denote the cumulative distribution functions (CDFs) of a risk score r(X) conditional on D = 1 (events) and D = 0 (non-events), respectively.
Let q_α = F1⁻¹(α) denote the αth quantile of the distribution F1, 0 ≤ α ≤ 1. As shown in Eqs. (7) and (8) of the Appendix, the AUC and AP can be expressed as functions of F0(q_α), the proportion of non-events whose risk scores are below the αth quantile of the risk scores among events. F0(q_α) measures the separation of the two distributions F1 and F0: the larger F0(q_α) is at a given α, the more non-events have lower risk scores, indicating a further separation between these two distributions. For example, the F0(q_α) curve of r3 dominates those of r1 and r2 (Fig. 2d), which is consistent with the fact that r3 has the best separation between events and non-events (Fig. 2a).

Fig. 2 Comparison of three hypothetical risk scores r1, r2, and r3 at event rate π = 0.05

Furthermore, we can express both ΔAUC and ΔAP in the common form

ΔA = ∫₀¹ w_A(α) Δ(α) dα,  for A = AUC or AP,

where w_A(α) is a weight function and Δ(α) = F_new,0(q_new,α) − F_old,0(q_old,α) captures how much the new working risk model changes the separation of the two distributions at a given α. Note that Δ(α) does not depend on π because it conditions on the event outcome. Thus, ΔAUC and ΔAP are both weighted averages of Δ(α), but their weights differ. For ΔAUC, w_AUC(α) ≡ 1 for 0 ≤ α ≤ 1, i.e., Δ(α) is equally weighted. For ΔAP, w_AP(α) is a function of both α and π (Eq. (9) of the Appendix).
To visualize how w_AP(α) changes with α and π, we plot w_AP(α) on a log scale against α for different π in Fig. 3a, in the context of comparing the hypothetical risk scores r1 and r2. For any π, w_AP(α) increases with α. This tells us that the AP assigns heavier weights to the upper-tail quantiles of the risk score, which represent higher-risk regions, and lighter weights to the lower-tail quantiles, which represent lower-risk regions; i.e., the AP emphasizes the change in separation in higher-risk regions. In contrast, the change is equally weighted in the AUC since w_AUC(α) ≡ 1.
Additionally, w_AP(α) is affected by π. When π is smaller, w_AP(α) is larger for α values close to 1 but smaller for α values close to 0. This indicates that, at a lower event rate, if a risk model can better separate the two risk score distributions at the upper quantiles, it will be rewarded more; if it has a worse separation at the lower quantiles, it will be penalized less.
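Under normal risk-score distributions, these quantities have closed forms, which lets us check the representation of the AUC as the unweighted average of F0(q_α) numerically. In the sketch below (our illustration; mu and sigma are hypothetical event-distribution parameters), the integral of F0(q_α) is compared with the closed-form AUC Φ(μ/√(1 + σ²)), and a ΔAUC is recovered as the unweighted integral of Δ(α):

```python
import numpy as np
from scipy.stats import norm

def trapz(y, x):
    """Trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def F0_of_q(alpha, mu, sigma):
    """F0(q_alpha) when r | D=0 ~ N(0,1) and r | D=1 ~ N(mu, sigma^2):
    q_alpha = mu + sigma * Phi^{-1}(alpha), so F0(q_alpha) = Phi(q_alpha)."""
    return norm.cdf(mu + sigma * norm.ppf(alpha))

alpha = np.linspace(1e-6, 1 - 1e-6, 200_001)  # open interval avoids ppf(0), ppf(1)

def auc_via_curve(mu, sigma):
    """AUC as the unweighted integral of F0(q_alpha), i.e. w_AUC = 1."""
    return trapz(F0_of_q(alpha, mu, sigma), alpha)

# Two hypothetical scores: same non-event distribution, different event ones
delta = F0_of_q(alpha, 1.8, 1.6) - F0_of_q(alpha, 1.6, 1.0)  # Delta(alpha)
d_auc = trapz(delta, alpha)                                  # Delta AUC
```

The same machinery would give ΔAP by multiplying Δ(α) with w_AP(α) before integrating; we omit that weight since its exact form lives in the paper's Eq. (9).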

Hypothetical risk scores r1 and r2 revisited
Assuming that r2 is from an existing risk model and r1 is from a new one, Δ(α) = F_r1,0(q_r1,α) − F_r2,0(q_r2,α), ΔAUC = AUC_r1 − AUC_r2, and ΔAP = AP_r1 − AP_r2. As shown in Fig. 3b, Δ(α) > 0 for large α, and Δ(α) < 0 for small α. This indicates that, compared to r2, r1 has a better separation at the upper quantiles of the risk score but a worse one at the lower quantiles. With equal weighting, ΔAUC equals the (signed) area under the Δ(α) curve over its entire range. Since the area above 0 is approximately the same as the area below 0, ΔAUC ≈ 0. As mentioned earlier, ΔAUC is invariant to π. Thus, ΔAUC = −0.007 (Fig. 2b) for all three π values.
For ΔAP, r1's better performance at the upper tail is weighted more heavily than its worse performance at the lower tail, which explains why ΔAP is positive for all three π values (Fig. 3c). Additionally, when π gets smaller, the better separation of r1 at the upper quantiles is rewarded more while its worse separation at the lower quantiles is penalized less. Thus, even though Δ(α) stays the same across different π, ΔAP increases as π decreases (Fig. 3c).

Data example revisited

Figure 1c plots the estimated Δ(α), w_AP(α), and w_AP(α)Δ(α). It shows that the estimated Δ(α) > 0 for α > 10%, whereas the prescribed-dose model performs better, with the estimated Δ(α) < 0, for α < 10%. This suggests that, compared to the prescribed-dose model, the ovarian-dose model better separates the events and non-events among individuals predicted to be at higher risk. Overall, under the estimated Δ(α) curve, the area below zero is slightly larger than the area above zero. Thus, the estimated ΔAUC is negative but close to zero, indicating that the two models have comparable performance in terms of AUC. In contrast, the estimated ΔAP rewards the superior performance of the ovarian-dose model at the upper quantiles with large weights, and thus it is positive and sizable.

Clark et al. [22] created four risk groups: low (< 5%), medium-low (5% to < 20%), medium (20% to < 50%), and high risk (≥ 50%). The ovarian-dose model classifies 37 individuals (out of 875) as high risk, among whom 30 (81%) experienced AOF, while the prescribed-dose model predicted 13 individuals at high risk, among whom 6 (46%) experienced AOF. This again confirms that the ovarian-dose model is better at identifying the AOF events.
Comparison with the Brier score

Since both the AUC and AP are rank-based, they are semi-proper scoring rules: the true model has the maximum AUC and AP among all models, but a misspecified risk model and the true model can have the same AUC and AP when they rank the subjects' risks in the same order. We therefore compare these two metrics with the Brier score (BrS), a strictly proper scoring rule. The BrS is the expected squared difference between the binary outcome D and the working risk p(X), i.e., BrS = E[(D − p(X))²]. The BrS is minimized at the true model, i.e., p(X) = π(X). A non-informative model, which assigns the event rate to every subject, i.e., p(X) ≡ π, yields BrS = π(1 − π). The scaled Brier score (sBrS) is defined as sBrS = 1 − BrS/[π(1 − π)], ranging from 0 to 1, with larger values indicating better performance [26].
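A small sketch (ours) of the BrS and sBrS as defined above; note that for a binary D, the non-informative model p ≡ π attains BrS = π(1 − π) exactly, so its sBrS is 0:

```python
import numpy as np

def brier(d, p):
    """BrS = E[(D - p(X))^2], estimated by a sample average."""
    d = np.asarray(d, float); p = np.asarray(p, float)
    return float(np.mean((d - p) ** 2))

def scaled_brier(d, p):
    """sBrS = 1 - BrS / (pi * (1 - pi)), with pi the sample event rate."""
    pi = float(np.mean(d))
    return 1.0 - brier(d, p) / (pi * (1.0 - pi))

rng = np.random.default_rng(2)
d = (rng.random(100_000) < 0.1).astype(float)   # ~10% event rate
p_flat = np.full(d.shape, d.mean())             # non-informative model
```

For binary d, mean((d − m)²) = m(1 − m) when m is the sample mean, which is why `scaled_brier(d, p_flat)` is zero up to floating-point error.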

Remark 2
Although the BrS cannot be directly expressed as a function of F0(q_α), it is closely related to the two distributions F1 and F0. Specifically, it can be written as BrS = π E[(1 − p(X))² | D = 1] + (1 − π) E[p(X)² | D = 0], the event-rate-weighted average of the mean squared prediction errors among events and non-events. Let ΔsBrS denote the IncV of the sBrS. The sBrS is estimated to be 0.23 for the prescribed-dose model and 0.50 for the ovarian-dose model, and ΔsBrS is estimated to be 0.27. Thus, similar to ΔAP, ΔsBrS favors the ovarian-dose model.
Why are ΔsBrS and ΔAP consistent in this example? Figure S1 of the supplementary material shows the histograms of the predicted risks p_i from each model among the AOF and non-AOF individuals. For the non-AOF individuals, the risk score distributions of the two models are similar. Consequently, the mean and variance of p_i are also similar for both models: the mean is 0.033 for the ovarian-dose model and 0.042 for the prescribed-dose model, and their variances are both about 0.0053. The mean squared prediction error (MSPE) is 0.0064 for the ovarian-dose model, slightly lower than the 0.0071 for the prescribed-dose model.
For the AOF events, the risk score distribution of the ovarian-dose model has a heavier right tail, indicating that the ovarian-dose model pushes more AOF events into the high-risk group. As a result, the mean of p_i for the ovarian-dose model is 0.48, much closer to 1 than the 0.23 of the prescribed-dose model. The variance is 0.10 for the ovarian-dose model and 0.023 for the prescribed-dose model. The MSPE of the ovarian-dose model is 0.367, much smaller than the 0.613 of the prescribed-dose model. Combining the MSPEs for events and non-events, weighted by their respective proportions, the estimated BrS is 0.027 for the ovarian-dose model, smaller than the 0.042 estimated for the prescribed-dose model.
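The reported BrS values can be reproduced from these stratified MSPEs, since the BrS is the event-rate-weighted mixture of the MSPEs among events and non-events (a quick check using the numbers above, with the event rate 50/875):

```python
pi_hat = 50 / 875  # estimated AOF event rate in the validation cohort

def combine_mspe(mspe_events, mspe_nonevents, pi):
    """BrS = pi * MSPE(events) + (1 - pi) * MSPE(non-events)."""
    return pi * mspe_events + (1 - pi) * mspe_nonevents

brs_ovarian = combine_mspe(0.367, 0.0064, pi_hat)     # ovarian-dose model
brs_prescribed = combine_mspe(0.613, 0.0071, pi_hat)  # prescribed-dose model
```

Rounding to three decimals recovers the 0.027 and 0.042 quoted in the text.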
This data example illustrates a comparison of the three IncV metrics: ΔAUC, ΔAP, and ΔsBrS. Next, we expand the comparison via a numerical study.

Numerical study
As we are interested in the true values of the IncV for the population working risk (described in Remark 1), not in the IncV estimates from a sample, we do not use simulation studies; there are no data or samples involved. The numerical study in this section evaluates the IncV of adding a marker, denoted by Y, to a model with an existing marker, denoted by X. The true value of each IncV metric is directly derived from the distributional assumptions described below.
Let the markers X and Y be independent standard normal random variables. Given the values of these two markers, the binary outcome D follows a Bernoulli distribution with the probability of D = 1 given by the following model:

π(X, Y) = Φ(β0 + β1 X + β2 Y + β3 XY),    (3)

where Φ(·) is the CDF of a standard normal distribution. Given X and Y, π(X, Y) is the true risk. The true model in Eq. (3) includes an interaction between X and Y, indicating that the effect of X on the risk changes with the value of Y, and vice versa. Typically, in practice, none of the working models is the true model. With this in mind, we compare the following two misspecified working models: (i) the one-marker model, p(X) = Φ(γ0 + γ1 X), and (ii) the two-marker model, p(X, Y) = Φ(η0 + η1 X + η2 Y). Here, we consider different values of β1, β2, β3, and π: β1 = 0.3, 0.4, ⋯, 0.9, 1; β2 = 0.3, 0.4, ⋯, 0.9, 1; β3 = −0.5, −0.4, ⋯, −0.1, 0.1, ⋯, 0.4, 0.5 (excluding 0); and π = 0.01, 0.05, 0.1, 0.2, 0.5. Each combination of (β1, β2, β3, π) values is referred to as a scenario. Given a scenario, the value of β0 can be derived. In the supplementary material, we explain how to obtain the value of β0 and how to calculate the true values of the AUC, AP, and sBrS of the one-marker and two-marker models, as well as the true values of the IncV metrics.
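For a given scenario, β0 is the value that matches the marginal event rate, E[Φ(β0 + β1X + β2Y + β3XY)] = π. A sketch of deriving it (our Monte Carlo approximation; the supplementary material's exact method may differ):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(3)
X = rng.normal(size=500_000)
Y = rng.normal(size=500_000)

def solve_beta0(b1, b2, b3, pi_target):
    """Root-find beta_0 such that the Monte Carlo estimate of
    E[Phi(beta_0 + b1*X + b2*Y + b3*X*Y)] equals pi_target."""
    lin = b1 * X + b2 * Y + b3 * X * Y
    return brentq(lambda b0: norm.cdf(b0 + lin).mean() - pi_target, -10.0, 10.0)

beta0 = solve_beta0(0.5, 0.5, 0.3, 0.05)  # an example scenario
```

The objective is monotone increasing in β0, so the bracket [−10, 10] is safe for every scenario in the grid above.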

Results
We compare the three IncV metrics on two aspects: (1) size and range, and (2) agreement. A desirable IncV metric should be sensitive to changes in the predictive performance. If a new model improves the prediction accuracy, an IncV should have a sizable positive value. It should also be able to reflect a performance deterioration with a sizable negative value. If an IncV is often close to 0, we might question its utility in supporting decision-making. As mentioned earlier, inconsistency among different accuracy metrics is often encountered. Thus, we are also interested in the agreement among the three IncV metrics.

Size and range

Figure 4 plots the summary statistics (minimum, 25% quantile, median, 75% quantile, and maximum) of the three IncV metrics under different event rates. ΔAP has the widest range, followed by ΔsBrS, and ΔAUC has the narrowest range. The difference between ΔAUC and ΔAP is more evident at a lower event rate. For example, under π = 0.01, the inter-quartile range (IQR) and median of ΔAUC are both 0.07. In contrast, the IQR of ΔAP is much wider, about 0.41, with a median of 0.21.

In addition, ΔAUC is negative in less than 1% of the scenarios (29 out of 3200). Furthermore, when it is negative, its value is very close to 0, which indicates that ΔAUC cannot distinguish between a useless marker and a harmful marker [27]. In contrast, ΔAP is negative in about 12% of the scenarios (389 out of 3200), with a much larger magnitude.
As π changes, the range of ΔAP varies the most among the three IncV metrics, whereas the quartiles of ΔAUC remain almost constant. As π increases, the ranges of all the IncV metrics become narrower and closer to each other. When π = 0.5, both ΔAUC and ΔAP range from 0.015 to 0.25 with a median of 0.089, and ΔsBrS ranges from 0.019 to 0.32 with a median of 0.12.

Agreement

Correlation
We calculate the Pearson correlation between each pair of IncV metrics under each π (Table 1). ΔAP and ΔsBrS are highly correlated for all values of π. As π increases, their correlation decreases from about 1 (π = 0.01) to 0.84 (π = 0.5), whereas the correlations of ΔAUC with the other two IncV metrics increase with π. When π = 0.01, ΔAUC and ΔsBrS are negatively correlated, and their correlation of −0.11 is the smallest among the three pairs; when π = 0.5, they are the most positively correlated pair. We also show the scatter plots of each pair under different π in Figure S7 (supplementary material).

Concordance
The sign of an IncV metric is often used to decide whether the new model is more accurate than the existing one. Positive IncVs favor the new model, while negative or zero values favor the existing one. Here, we define a concordance measure, which quantifies the consistency of the conclusions reached by a pair of IncV metrics.
Take ΔAP and ΔsBrS as an example. Under a given scenario, we call the pair concordant if both are > 0 or both are ≤ 0; if one is > 0 and the other is ≤ 0, the pair is discordant. The concordance measure is defined as the proportion of scenarios in which the pair is concordant minus the proportion in which it is discordant. For instance, when π = 0.01, ΔAP and ΔsBrS are concordant in about 97% of the total 640 scenarios (i.e., all combinations of β1, β2, and β3 values at each π) and discordant in about 3%; thus, the concordance measure is 0.93, up to round-off error. Table 1 reports the concordance for all three pairs of IncV metrics under each π. The results are similar to those for the Pearson correlation. When π is small, e.g., 0.01, 0.05, or 0.1, ΔAP and ΔsBrS are the most concordant pair; when π = 0.2 or 0.5, ΔAUC and ΔsBrS are the most concordant. ΔAUC and ΔAP are the least concordant for all values of π.
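The concordance measure is straightforward to compute. In the sketch below (ours), a pair of IncVs is concordant when both fall on the same side of zero, treating 0 as favoring the existing model:

```python
import numpy as np

def concordance(incv_a, incv_b):
    """Proportion of scenarios where the two IncV metrics agree in sign
    (both > 0, or both <= 0) minus the proportion where they disagree."""
    a = np.asarray(incv_a)
    b = np.asarray(incv_b)
    same = (a > 0) == (b > 0)
    return float(same.mean() - (~same).mean())

# e.g. concordant in 96.5% of scenarios -> 0.965 - 0.035 = 0.93
```

Equivalently, the measure equals 2c − 1, where c is the concordant proportion, so it ranges from −1 (always discordant) to 1 (always concordant).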
When π is close to 0.5, the three IncV metrics tend to agree. Using any of them, we would very likely reach the same conclusion about whether the new model is more accurate. However, when the event rate is low, i.e., for a rare outcome, ΔAUC can be inconsistent with both ΔsBrS and ΔAP.

ΔAUC versus ΔAP in selected scenarios
Next, we single out four scenarios for an in-depth comparison between ΔAUC and ΔAP at π = 0.01. The first two scenarios have similar ΔAUC but different ΔAP (Fig. 5), whereas the next two have similar ΔAP but different ΔAUC (Fig. 6).
In scenario (i), the ROC and PR curves of the two-marker model both dominate those of the one-marker model. This indicates that the two-marker model is better at every point, and consequently, Δ(α) is positive throughout (Fig. 5c). In this case, both ΔAUC and ΔAP are positive. However, the size of ΔAP (0.33) is much larger than that of ΔAUC (0.06), due to the large weight w_AP(α) at the upper quantiles (Fig. 5c).
In scenario (ii), the two ROC curves cross, and so do the two PR curves; Δ(α) is below zero for the upper quantiles and above zero for the lower quantiles (Fig. 5c). This implies that the two-marker model better separates events from non-events in lower-risk regions, but not in higher-risk regions. As a result, ΔAUC and ΔAP conflict.
ΔAUC is positive because the area under the Δ(α) curve above zero is larger than the area below zero. However, ΔAP is negative, as it weights the below-zero Δ(α) heavily.
In scenario (iii), the two ROC curves and the two PR curves are almost identical. This indicates that adding the new marker does not change the separation of the risk score distributions between events and non-events. It is also reflected in Fig. 6c, where the entire Δ(α) curve almost overlaps with the zero line. Thus, both ΔAUC and ΔAP are close to zero. This is an example of both metrics agreeing that the new marker is "useless."

In scenario (iv), although the two-marker model makes poorer predictions in higher-risk regions, its predictions are significantly better for the rest. Thus, ΔAUC is positive and sizable. However, since ΔAP weights higher-risk regions heavily, the improvement for the majority is offset by the worse performance at the upper quantiles, which leads to a close-to-zero ΔAP.
What if the two-marker model is the true model, i.e., β3 = 0? Figures S8 and S9 in the supplementary material examine this question, showing the scatter plots and summary statistics of ΔAUC, ΔAP, and ΔsBrS for different π. As expected, all the IncVs are positive. For a smaller π, ΔAP ranges wider than ΔAUC does. As π increases, these two metrics get closer to each other. When π = 0.5, ΔsBrS has the widest range.
Since all the IncVs are positive, their concordance is 1 for every pair. Table S1 (supplementary material) lists the Pearson correlations between each pair of the IncV metrics, which are all positive. When π is small, ΔsBrS is more strongly correlated with ΔAP than with ΔAUC. As π increases, all three IncV metrics become strongly correlated with each other.

Discussion

Pepe et al. (2013) proved that, when one of the two working models is the true model, the hypothesis H0: p(X, Y) = p(X) is equivalent to the hypotheses of no improvement in accuracy measures such as the AUC, net reclassification index (NRI), or integrated discrimination improvement (IDI) [6]. In their setting, the ROC and PR curves never cross. In contrast, our paper focuses on situations where neither working model is the true model, and the two curves might cross. When they cross, the above equivalence among the hypotheses does not hold, which implies that each of the two models outperforms the other in non-overlapping risk regions. This can lead to disagreement between ΔAUC and ΔAP. The AUC has been criticized for being insensitive to the contribution of an added marker [28]. According to our analysis, this insensitivity is likely a result of its equal treatment of different risk regions, whereby it often fails to reflect a "local" improvement or deterioration of the new risk model. In the AOF example, the ovarian-dose model demonstrates its superiority in higher-risk regions. However, this advantage disappears in ΔAUC, which takes a simple average over the ovarian-dose model's wins in higher-risk regions and its losses in lower-risk regions.
Similarly, if we consider a curve of the negative predictive value (NPV, the proportion of non-events among subjects having a risk score below a cut-off value) versus the specificity (1 − FPR), then, following our derivation of the AP, the area under this curve can be expressed as E[NPV(r0(X))], where r0(X) denotes the risk score of a non-event subject. We can regard this quantity as the average NPV. Similar to ΔAP, the IncV of the average NPV, ΔaNPV, can be expressed as a weighted average of the change in the separation of the risk score distributions between events and non-events. However, its weight is larger for the lower-tail quantiles of the risk score, indicating that the average NPV emphasizes the accuracy in lower-risk regions.
Assessing the change in prediction accuracy is important in investigating the potential of a new marker (or a new measurement of an existing marker) [29]. However, neither the AUC nor the AP considers the costs and benefits associated with the clinical utility of risk prediction [29,30]. Going back to the AOF example: should the more expensive ovary dosimetry be used for predicting AOF because it identifies more AOF cases? Unfortunately, both ΔAP and ΔAUC are insufficient to answer this question. Vickers and Elkin [29] proposed the net benefit and decision curve analysis for evaluating the clinical value of a risk model. The net benefit is defined as NB(p_t) = π TPR(p_t) − (1 − π) FPR(p_t) p_t/(1 − p_t), quantifying the net benefit of treating subjects under the rule that the predicted risk exceeds the threshold value p_t.
We can express the above net benefit as a function of the PPV: NB(p_t) = Pr(p(X) > p_t) × [PPV(p_t) − p_t]/(1 − p_t). Here, Pr(p(X) > p_t) is the proportion of subjects in the population who receive the treatment, and [PPV(p_t) − p_t]/(1 − p_t) quantifies the expected net benefit given that a subject is treated. The net benefit is regarded as the scaled "average benefit per prediction" [31, 32], and thus [PPV(p_t) − p_t]/(1 − p_t) is the average benefit per treated subject. Hence, NB(p_t) is determined by the proportion of treated subjects and PPV(p_t). The analytical relationship between the IncV of NB and other IncV metrics, such as those of PPV(p_t) and AP, is worth further investigation.
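The two expressions for NB(p_t) are algebraically identical, which can be checked empirically. The sketch below computes both forms from a sample; the function names and list-based inputs are our own illustrative conventions:

```python
def nb_from_tpr_fpr(risk, event, p_t):
    # NB(p_t) = pi * TPR(p_t) - (1 - pi) * FPR(p_t) * p_t / (1 - p_t)
    n = len(event)
    pi = sum(event) / n
    tpr = sum(1 for r, d in zip(risk, event) if d == 1 and r > p_t) / sum(event)
    fpr = sum(1 for r, d in zip(risk, event) if d == 0 and r > p_t) / (n - sum(event))
    return pi * tpr - (1 - pi) * fpr * p_t / (1 - p_t)

def nb_from_ppv(risk, event, p_t):
    # NB(p_t) = Pr(p(X) > p_t) * (PPV(p_t) - p_t) / (1 - p_t)
    n = len(event)
    treated = [d for r, d in zip(risk, event) if r > p_t]
    ppv = sum(treated) / len(treated)
    return (len(treated) / n) * (ppv - p_t) / (1 - p_t)
```

For any sample and threshold (with at least one treated subject), the two functions return the same value, mirroring the identity derived above.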
The AUC is conditional on the binary outcome and, consequently, depends only on the respective risk score distributions among events and non-events. Thus, it can be estimated from either a prospective cohort study or a case-control study. In contrast, the AP is conditional on the risk score obtained at baseline. Besides the risk score distributions, the AP also depends on the event rate, and thus it has previously been possible to estimate it only from cohort studies, not from case-control studies. However, if one can acquire information on the event rate from a previous cohort study or from surveillance data, the AP can be estimated by combining the estimated or assumed event rate with the risk score distributions of events and non-events estimated from the case-control study, using the derived expression of the AP (see Eq. (8) in Appendix).

Conclusion
In this article, we investigated the disagreement between two IncV metrics, ΔAUC and ΔAP, when neither the existing nor the new risk model is the true model. We showed that they are intrinsically connected: both can be expressed as an average of Δ(α), a quantity characterizing the change in the separation of the risk score distributions between events and non-events when comparing the existing risk model to the new one. However, ΔAP is a weighted average, with weights monotonically increasing in the risk score, whereas ΔAUC is a simple average of the change. Due to this difference, the two metrics do not always agree with each other; the lower the event rate, the more they disagree. In addition, compared with ΔAUC, ΔAP has a wider range and is consequently more sensitive to the contribution of new information added to the existing risk model. Via the numerical study, we also showed that ΔAP and ΔsBrS are highly consistent, whereas the correlation between ΔAUC and ΔsBrS transitions from positive to negative as the event rate decreases.

Estimation of AUC, AP, and sBrS for binary outcomes
Suppose that the data D = {(D_i, X_i), i = 1, …, n} are collected from n subjects. Let p_i denote the estimated risk, described in Remark 1, and let r_i be a risk score, which is a non-decreasing transformation of p_i. The AUC and AP are estimated from the r_i by the following nonparametric estimators:

AUC-hat = Σ_{i,j} D_i (1 − D_j) I(r_i > r_j) / [Σ_{i,j} D_i (1 − D_j)], AP-hat = Σ_i D_i PPV-hat(r_i) / Σ_i D_i,

where PPV-hat(c) = Σ_j D_j I(r_j ≥ c) / Σ_j I(r_j ≥ c) is the empirical PPV at cut-off c.
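These nonparametric estimators can be sketched in a few lines. The following is an illustrative implementation (function names and list-based inputs are ours; ties in the risk scores are ignored for simplicity):

```python
def auc_hat(risk, event):
    """Nonparametric AUC: proportion of (event, non-event) pairs in which the
    event has the higher risk score (ties ignored for simplicity)."""
    r1 = [r for r, d in zip(risk, event) if d == 1]
    r0 = [r for r, d in zip(risk, event) if d == 0]
    return sum(1 for a in r1 for b in r0 if a > b) / (len(r1) * len(r0))

def ap_hat(risk, event):
    """Nonparametric AP: average of the empirical PPV evaluated at each
    event's risk score, mirroring AP = E[PPV(r_1(X))]."""
    def ppv(c):
        # Empirical PPV at cut-off c: proportion of events among those flagged
        flagged = [d for r, d in zip(risk, event) if r >= c]
        return sum(flagged) / len(flagged)
    r1 = [r for r, d in zip(risk, event) if d == 1]
    return sum(ppv(r) for r in r1) / len(r1)
```

Both estimators equal 1 under perfect separation and drop below 1 once an event is out-ranked by a non-event.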

Derivation of AUC and AP
Let π = Pr(D = 1) be the event rate, and let r(X) = r_X be a risk score. Let F(c) = Pr(r_X ≤ c) denote its cumulative distribution function (CDF) for the entire population, and let F_1(c) = Pr(r_X ≤ c | D = 1) and F_0(c) = Pr(r_X ≤ c | D = 0) denote its CDFs for events and non-events, respectively. The TPR, FPR, and PPV are

TPR(c) = Pr(r_X > c | D = 1) = 1 − F_1(c), FPR(c) = Pr(r_X > c | D = 0) = 1 − F_0(c), PPV(c) = Pr(D = 1 | r_X > c).

The AUC is the area under the ROC curve, which can be expressed as

AUC = ∫ F_0(c) dF_1(c).

Let q_α = F_1^{-1}(α) be the αth quantile of the F_1 distribution, i.e., F_1(q_α) = α. Thus, letting c = q_α, we have AUC = ∫_0^1 F_0(q_α) dα.
The AP is the area under the PR curve, which can be expressed as

AP = E[PPV(r_1(X))] = ∫ PPV(c) dF_1(c),

where r_1(X) denotes the risk score of an event subject. Again, letting c = q_α, we have AP = ∫_0^1 PPV(q_α) dα.
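The change of variables c = q_α can be checked numerically. As a sketch, assume a binormal model in which the risk scores of events and non-events are N(1, 1) and N(0, 1), respectively (our own choice for illustration); then AUC = Pr(r_1 > r_0) = Φ(1/√2) in closed form, and a midpoint Riemann sum of ∫_0^1 F_0(q_α) dα recovers the same value:

```python
from statistics import NormalDist

f1 = NormalDist(mu=1.0, sigma=1.0)  # risk score distribution among events (assumed)
f0 = NormalDist(mu=0.0, sigma=1.0)  # risk score distribution among non-events (assumed)

# AUC = integral over alpha in (0, 1) of F_0(q_alpha), q_alpha the alpha-quantile of F_1
m = 10_000
grid = [(k + 0.5) / m for k in range(m)]  # midpoint rule avoids the endpoints 0 and 1
auc_integral = sum(f0.cdf(f1.inv_cdf(a)) for a in grid) / m

# Closed form for the binormal model: AUC = Phi((mu_1 - mu_0) / sqrt(sigma_1^2 + sigma_0^2))
auc_closed = NormalDist().cdf(1 / 2 ** 0.5)
```

The two quantities agree to numerical precision, confirming the substitution used in the derivation.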

Weight w_AP in ΔAP
Let AP_old and AP_new denote the AP of the existing and new models, respectively. With arithmetic operations, the IncV of AP can be expressed as a weighted average with weight w_AP, which is a function of α and π. It also depends on F_new,0(q_new,α) and F_old,0(q_old,α). In general, F_0(q_α) ≥ α = F_1(q_α) because the density curve for non-events lies to the left of that for events. Thus, how the weight changes with α and π is mainly determined by the numerator (π^{-1} − 1)/(1 − α). However, when π and α are fixed, larger values of F_old,0(q_old,α) or F_new,0(q_new,α), or both, i.e., better performance of at least one model, lead to larger weights.