PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
PAGE 15 (2006) Abstr 972 [www.page-meeting.org/?abstract=972]
Poster: Methodology - Model evaluation
Post, T.M. (1), J.I. Freijer (1), W. de Winter (1), B.A. Ploeger (1,2)
(1) LAP&P Consultants BV, Leiden, The Netherlands; (2) Leiden University, Leiden/Amsterdam Center for Drug Research, Leiden, The Netherlands
Objective: A valuable method to characterize model performance is the Visual Predictive Check (VPC) [1,2], whose purpose is to determine whether a model can reproduce the variability in the observed data. However, the VPC relies solely on subjective graphical inspection of the distribution of the simulated relative to the observed data [2,3]. It does not evaluate whether the expected random distribution of the observations around the predicted median trend is actually realized. Moreover, it does not account for the number of observations at each time-point or for the influence of, and information residing in, missing data (e.g. observations below the LOQ and dropout in longitudinal studies) [4,5,6]. As a consequence, the model fit might be perceived as biased, whereas the apparent bias is in fact due to an unbalanced distribution of the observations over time. We propose a method for a more accurate and objective interpretation of model performance using the VPC.
Method: In the proposed extension to the VPC, first the distribution of the observations above and below the model-predicted median is calculated and visualized at each time-point, while considering the effect of missing data on the interpretation of the VPC. Second, the model-predicted median is compared with the 5th, 50th and 95th percentiles of the bootstrapped median of the original observations at each time-point, accounting for the number and assumed position of the missing data. The method is illustrated with two examples: a simulated PK study and a phase III PD study. In the PK study, the amount of information is sequentially decreased to exemplify the influence of data below the LOQ on the interpretation of model performance. The PD example illustrates how the effect of dropout on the predictive performance can be evaluated.
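As a rough illustration of the two computations described above (a minimal sketch, not the authors' implementation), the following Python/NumPy fragment counts, per time-point, the fraction of non-missing observations above the model-predicted median, and derives bootstrap percentile bands for the observed median. The names obs and pred_median are assumed placeholders: obs is an (n_subjects x n_times) array with NaN marking missing values (e.g. below LOQ or after dropout), and pred_median holds the model-predicted median per time-point.

    import numpy as np

    rng = np.random.default_rng(1)

    def fraction_above_median(obs, pred_median):
        # Number of non-missing observations at each time-point.
        n_obs = np.sum(~np.isnan(obs), axis=0)
        # NaN comparisons evaluate to False, so missing data are
        # excluded from the numerator automatically.
        frac_above = np.sum(obs > pred_median, axis=0) / n_obs
        return frac_above, n_obs  # frac_above near 0.5 is expected

    def bootstrap_median_bands(obs, n_boot=1000):
        # 5th/50th/95th percentiles of the bootstrapped median of the
        # observed data at each time-point (subjects resampled with
        # replacement).
        n_subj, _ = obs.shape
        meds = np.stack([
            np.nanmedian(obs[rng.integers(0, n_subj, n_subj)], axis=0)
            for _ in range(n_boot)
        ])
        return np.nanpercentile(meds, [5, 50, 95], axis=0)

Under these assumptions, a model whose predicted median stays within the 5th-95th bootstrap band, with roughly half of the available observations above it at each time-point, would pass both checks, while the per-time-point counts n_obs flag where sparse or missing data weaken the comparison.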
Results: Presenting the distribution of the observations above and below the model-predicted median enabled a more objective characterization of model performance in both examples, regardless of the density of the data (PK: 20 subjects; PD: 1204 subjects). Comparing the predicted median time-trends with the bootstrapped medians of the observed data, including their percentile ranges, supported the evaluation of model performance while relating it to the amount of observed data and the influence of the missing data.
Conclusion: The proposed method facilitates the evaluation of model performance by linking the VPC to the observed data while accounting for the amount of observed data and the influence of missing data. It puts the VPC in perspective relative to the distribution of the observations and thereby leads to a more accurate and objective evaluation of model performance.