2007 - København - Denmark

PAGE 2007: Methodology - Model evaluation
Emmanuelle Comets

Normalised prediction distribution errors in R: the npde library

Emmanuelle Comets (1), Karl Brendel (2) and France Mentré (1,3)

(1) INSERM U738, Paris, France; Université Paris 7, UFR de Médecine, Paris, France; (2) Institut de recherches internationales Servier, Courbevoie, France; (3) AP-HP, Hôpital Bichat, UF de Biostatistiques, Paris, France

Model evaluation is an important part of model building. Prediction discrepancies (pd) have been proposed by Mentré and Escolano [1] to evaluate nonlinear mixed-effects models. Brendel et al. [2] developed an improved version of this metric, termed normalised prediction distribution errors (npde), which takes into account repeated observations within one subject. In this poster, we present a set of routines to compute the npde.

Model evaluation consists of assessing whether a given model M (composed of a structural model and parameter estimates) adequately predicts a validation dataset V. V can be the dataset originally used to build model M (internal validation) or a separate dataset (external validation). The null hypothesis H0 is that the data in V can be described by model M. The pd for a given observation is defined as the percentile of that observation within its marginal predictive distribution under H0. Prediction distribution errors are computed in a similar way, after correcting for the correlation induced by repeated observations within a subject. The predictive distribution itself is approximated by Monte-Carlo simulations: K datasets are simulated under the null hypothesis (model M and the corresponding parameters) using the design of V, and the percentile of an observation is then computed as the fraction of simulated values lying below it.
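As an illustration of these definitions (a minimal sketch, not the library's own code), the following R fragment computes pd and npde for a single subject, assuming yobs holds the n observations of that subject and ysim is a K x n matrix of the corresponding values simulated under H0; the decorrelation step follows the approach of Brendel et al [2].

## Minimal sketch of the metrics for a single subject (not the library's code).
## yobs: vector of n observations; ysim: K x n matrix of values simulated under H0.
compute.pd <- function(yobs, ysim) {
  ## pd: percentile of each observation within its simulated marginal distribution
  sapply(seq_along(yobs), function(j) mean(ysim[, j] < yobs[j]))
}

compute.npde <- function(yobs, ysim) {
  ## decorrelate observations and simulations using the empirical mean and
  ## covariance of the simulations (Cholesky factorisation), as in [2]
  Esim <- colMeans(ysim)
  L <- t(chol(cov(ysim)))                    # lower-triangular factor of Var(ysim)
  ydec.obs <- solve(L, yobs - Esim)          # decorrelated observations
  ydec.sim <- t(solve(L, t(ysim) - Esim))    # decorrelated simulations (K x n)
  pde <- sapply(seq_along(yobs),
                function(j) mean(ydec.sim[, j] < ydec.obs[j]))
  K <- nrow(ysim)
  pde <- pmin(pmax(pde, 1 / (2 * K)), 1 - 1 / (2 * K))  # keep qnorm finite
  qnorm(pde)                                 # npde: inverse normal transform
}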

The program requires as input a file containing the validation dataset V and a file containing the K simulated datasets stacked one after the other. The simulations must be performed beforehand (for instance with NONMEM). The program then computes the npde. Optionally, pd can be computed instead of, or in addition to, the npde; this is less time-consuming but leads to an inflation of the type-I error, especially as the number of observations per subject increases. Graphical diagnostics are plotted to assess model adequacy, and tests can be performed to compare the distribution of the npde with the expected standard normal distribution.
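For illustration, such distributional checks can be sketched with base R test functions; this is a simplified stand-in, and the tests actually reported by the library may differ in their choice and combination.

## Sketch of tests comparing the npde to the expected N(0, 1) distribution,
## using base R functions (illustrative only; not the library's own routines).
var.one.test <- function(x) {
  ## two-sided chi-square test of H0: variance = 1
  n <- length(x)
  stat <- (n - 1) * var(x)
  p <- 2 * min(pchisq(stat, n - 1), 1 - pchisq(stat, n - 1))
  list(statistic = stat, p.value = p)
}

npde.tests <- function(npde) {
  list(mean   = t.test(npde, mu = 0),   # H0: mean of npde is 0
       var    = var.one.test(npde),     # H0: variance of npde is 1
       normal = shapiro.test(npde))     # H0: npde follow a normal distribution
}

## typical graphical diagnostic: QQ-plot against the standard normal
## qqnorm(npde); qqline(npde)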

The code is available as a library for the open-source statistical environment R [3]. The package contains an example of model building followed by model evaluation using the npde, together with the control file used to perform the necessary simulations.
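A hedged usage sketch is given below; the function and argument names (autonpde, namobs, namsim, iid, ix, iy) are taken to match the released package but may differ between versions, and the file names are purely hypothetical.

## Hedged usage sketch; file names are hypothetical and function/argument names
## may differ between package versions.
library(npde)

res <- autonpde(namobs = "theopp.obs.tab",  # validation dataset V
                namsim = "theopp.sim.tab",  # the K simulated datasets, stacked
                iid = 1, ix = 2, iy = 3)    # columns for subject id, x and y

print(res)   # summary statistics and test results
plot(res)    # graphical diagnostics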

References:
[1] F. Mentré and S. Escolano. Prediction discrepancies for the evaluation of nonlinear mixed-effects models. J Pharmacokinet Pharmacodyn 33:345-67 (2006).
[2] K. Brendel, E. Comets, C. Laffont, C. Laveille, and F. Mentré. Metrics for external model evaluation with an application to the population pharmacokinetics of gliclazide. Pharm Res 23:2036-49 (2006).
[3] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2006).




Reference: PAGE 16 (2007) Abstr 1120 [www.page-meeting.org/?abstract=1120]