Commentary
What price perfection? Calibration and discrimination of clinical prediction models

https://doi.org/10.1016/0895-4356(92)90192-P



Cited by (121)

  • In Reply:

    2021, Annals of Emergency Medicine
  • Detection of calibration drift in clinical prediction models to inform model updating

    2020, Journal of Biomedical Informatics
    Citation excerpt:

    Best practices addressing these later phases of the clinical predictive analytics cycle are yet to be fully developed and further research is needed to address the unique challenges of clinical environments [3,7]. One such challenge results from model calibration, increasingly recognized as critical to the success and safety of clinical deployment of prediction models [1,8–10], deteriorating over time [11–17]. This calibration drift is a consequence of deploying models in non-stationary clinical environments where differences arise over time between the population on which a model was developed and the population to which that model is applied [5,18–23].

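The commentary's central distinction, echoed in the citation excerpt above, is between discrimination (how well a model ranks events above non-events) and calibration (how closely predicted probabilities match observed frequencies). A minimal sketch, not taken from the article, can make the distinction concrete; the data and function names here are illustrative assumptions only:

```python
# Illustrative sketch: a model can discriminate perfectly
# while remaining badly calibrated. Standard library only.

def auc(probs, labels):
    """Discrimination: probability that a randomly chosen event
    receives a higher predicted risk than a randomly chosen
    non-event (area under the ROC curve), ties counted as 1/2."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(probs, labels):
    """Calibration-in-the-large: mean predicted risk minus the
    observed event rate; zero for a model that is calibrated
    on average."""
    return sum(probs) / len(probs) - sum(labels) / len(labels)

labels = [0, 0, 0, 1, 1, 1]
# Every event is ranked above every non-event (perfect
# discrimination), but all risks are inflated (poor calibration).
preds = [0.50, 0.55, 0.60, 0.90, 0.95, 0.99]

print(auc(preds, labels))                                # 1.0
print(round(calibration_in_the_large(preds, labels), 3)) # 0.248
```

This is the sense in which calibration can deteriorate, as the excerpt's notion of "calibration drift" describes, even while a model's ranking ability (and hence its ROC area) stays intact.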