
PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
ISSN 1871-6032

PAGE 15 (2006) Abstr 972

Poster: Methodology - Model evaluation

Teun Post: Accurate Interpretation of the Visual Predictive Check in order to Evaluate Model Performance

Post, T.M.(1), J.I. Freijer (1), W. de Winter (1), B.A. Ploeger (1,2)

(1) LAP&P Consultants BV, Leiden, The Netherlands; (2) Leiden University, Leiden/Amsterdam Center for Drug Research, Leiden, The Netherlands


Objective: The Visual Predictive Check (VPC) is a valuable method for characterizing model performance [1,2]. Its purpose is to determine whether a model can reproduce the variability in the observed data. However, it relies solely on subjective graphical inspection of the distribution of the simulated versus the observed data [2,3]. It does not evaluate whether the expected random distribution of the observations around the predicted median trend is actually realized. Moreover, it does not account for the number of observations at each time point, or for the influence of, and information residing in, missing data (e.g. data below the LOQ and dropout in longitudinal studies) [4,5,6]. As a result, a model fit might be perceived as biased when the apparent bias is in fact due to an unbalanced distribution of the observations over time. We propose a method for a more accurate and objective interpretation of model performance using the Visual Predictive Check.
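To make the starting point concrete, a standard VPC can be sketched as below. This is an illustration only, not the authors' implementation: the one-compartment PK model, parameter values, sampling times, and number of simulation replicates are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # assumed sampling times (h)

def simulate_pk(n_subjects, rng):
    """Illustrative one-compartment oral model with log-normal
    between-subject variability on CL and V and a multiplicative
    residual error (all parameter values are assumptions)."""
    cl = 2.0 * np.exp(rng.normal(0.0, 0.3, n_subjects))   # clearance, L/h
    v = 20.0 * np.exp(rng.normal(0.0, 0.2, n_subjects))   # volume, L
    ka, dose = 1.5, 100.0                                 # 1/h, mg
    ke = cl / v
    t = times[None, :]
    conc = (dose * ka / (v[:, None] * (ka - ke[:, None]))) * (
        np.exp(-ke[:, None] * t) - np.exp(-ka * t))
    return conc * np.exp(rng.normal(0.0, 0.1, conc.shape))

observed = simulate_pk(20, rng)                          # stand-in for real data
sim = np.stack([simulate_pk(20, rng) for _ in range(200)])  # model replicates

# Classical VPC: pool the simulated concentrations per time point and
# compare their 5th/50th/95th percentiles with the observed data.
pctiles = np.percentile(sim.reshape(-1, len(times)), [5, 50, 95], axis=0)
obs_median = np.median(observed, axis=0)
```

The visual comparison of `obs_median` against the `pctiles` band is the subjective step the abstract criticizes: it says nothing about how the individual observations scatter around the predicted median, nor about how many observations support each time point.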

Method: In the extension to the VPC, the distribution of the observations above and below the model predicted median at each time-point is calculated and visualized, while considering the effect of missing data on the interpretation of the VPC. Secondly, the model predicted median is compared with the 5th, 50th and 95th percentiles of the bootstrapped median of the original observations at each time-point, accounting for the number and assumed position of missing data. The method is illustrated by two examples; a simulated PK study and a phase III PD study [7]. With the PK study, the amount of information is sequentially decreased in order to exemplify the influence of data below LOQ on the interpretation of model performance. The PD example illustrates how the effect of dropout on the predictive performance can be evaluated.
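The two quantities described above can be sketched as follows. This is a hedged reconstruction of the idea, not the authors' code: function names are invented for the example, missing observations are represented as `NaN`, and the censoring threshold in the usage example is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_above_median(obs, pred_median):
    """Fraction of non-missing observations above the model-predicted
    median at each time point. Under a well-behaved model this should
    be close to 0.5; NaN entries (e.g. BLQ, dropout) reduce the count
    of observations supporting each time point."""
    above = obs > pred_median[None, :]          # NaN compares as False
    n_obs = np.sum(~np.isnan(obs), axis=0)
    return np.sum(above, axis=0) / n_obs, n_obs

def bootstrap_median_ci(obs, n_boot=1000, rng=rng):
    """5th/50th/95th percentiles of the bootstrapped median of the
    observations at each time point (resampling subjects with
    replacement)."""
    n = obs.shape[0]
    meds = np.empty((n_boot, obs.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        meds[b] = np.nanmedian(obs[idx], axis=0)
    return np.percentile(meds, [5, 50, 95], axis=0)

# Usage on synthetic data with some values censored as "missing"
obs = rng.normal(10.0, 2.0, (20, 5))
obs[obs < 7.0] = np.nan                         # illustrative BLQ censoring
pred_median = np.full(5, 10.0)                  # stand-in model prediction

frac, n_obs = fraction_above_median(obs, pred_median)
med_ci = bootstrap_median_ci(obs)               # rows: 5th, 50th, 95th pctile
```

A model-predicted median falling inside the `med_ci` band, with `frac` near 0.5, would support the model at that time point; reporting `n_obs` alongside makes clear how much observed data each comparison rests on.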

Results: Presenting the distribution of the observations above and below the model-predicted median enabled a more objective characterization of model performance in both examples, regardless of the density of the data (PK: 20 subjects; PD: 1204 subjects). Comparing the predicted median time-trends with the bootstrapped medians of the observed data, including their ranges, supported the evaluation of model performance while relating it to the amount of observed data and the influence of the missing data.

Conclusion: The proposed method facilitated the evaluation of model performance by linking the VPC to the observed data while accounting for the amount of observed data and the influence of missing data. It puts the VPC in perspective relative to the distribution of the observations, leading to a more accurate and objective evaluation of model performance.

[1] Y. Yano, S.L. Beal, L.B. Sheiner. Evaluating pharmacokinetic/pharmacodynamic models using the Posterior Predictive Check. J Pharmacokinet Pharmacodyn 2001; 28(2): 171-192.
[2] N. Holford. The Visual Predictive Check - Superiority to Standard Diagnostic (Rorschach) Plots. PAGE 14 (2005) Abstr 738.
[3] P.R. Jadhav, J.V.S. Gobburu. A New Equivalence Based Metric for Predictive Check to Qualify Mixed-Effects Models. AAPS Journal 2005; 7(3): E523-E531.
[4] C. Hu, M.E. Sale. A joint model for nonlinear longitudinal data with informative dropout. J Pharmacokinet Pharmacodyn 2003; 30(1): 83-103.
[5] L.B. Sheiner, S.L. Beal, A. Dunne. Analysis of nonrandomly censored ordered categorical longitudinal data from analgesic trials. J Am Stat Assoc 1997; 92: 1235-1255.
[6] J.P. Hing, S.G. Woolfrey, D. Greenslade, P.M.C. Wright. Analysis of toxicokinetic data using NONMEM: impact of quantification limit and replacement strategies for censored data. J Pharmacokinet Pharmacodyn 2001; 28(5): 465-479.
[7] W. de Winter, J. DeJongh, T. Post et al. A mechanism-based disease model for comparison of long-term effects of pioglitazone, metformin and gliclazide on disease processes underlying type 2 diabetes mellitus. J Pharmacokinet Pharmacodyn 2006 Mar 22.