Prediction discrepancies (pd) for evaluation of models with data below the limit of quantification
Thi Huyen Tram Nguyen, Emmanuelle Comets, France Mentré
INSERM UMR738, Paris, France; University Paris Diderot-Paris 7, Paris, France
Objectives: Prediction discrepancies (pd) for model evaluation were first used to evaluate nonlinear mixed-effects models by Mesnil et al. in 1998 [1,2]. Normalised prediction distribution errors (npde) are normalised pd taking into account correlations within subjects [3,4]. In the current version of npde, data below the limit of quantification (BQL) are removed before computing pd and npde, but this approach can introduce spurious indications of model misspecification when the amount of BQL data is large. Our objectives were: i) to develop a new approach for computing pd for BQL observations, ii) to propose additional graphs, iii) to illustrate these new features using real data from a clinical study of HIV viral dynamics (COPHAR 3-ANRS).
Methods: pd are computed as the quantile of the observation in the predictive distribution. To deal with BQL data, we first evaluate, for each time point, the probability of observing BQL data (pLOQ) from the predictive distribution. For each BQL observation, pd is then drawn randomly from a uniform distribution on [0;pLOQ]. This method was applied to evaluate the adequacy of the bi-exponential model describing the viral load decrease in the COPHAR 3 trial, in which the percentage of BQL viral loads is high (49%) due to high treatment efficacy. Parameters were estimated with the SAEM algorithm implemented in MONOLIX 3.2.
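The two steps above, computing pd as the quantile of each observation in the model's predictive distribution and drawing pd for censored observations uniformly on [0;pLOQ], can be sketched as follows. This is a minimal Monte Carlo illustration, not the npde package implementation; the function name `pd_with_bql` and the layout of the simulation matrix `y_sim` are assumptions for this sketch.

```python
import numpy as np

def pd_with_bql(y_obs, y_sim, loq, rng):
    """Prediction discrepancies with left-censored (BQL) data.

    y_obs : (n_obs,) observed values; values below `loq` are censored.
    y_sim : (n_sim, n_obs) values simulated from the model, forming the
            predictive distribution at each observation time.
    Returns pd in [0, 1]: the empirical quantile of each measured
    observation in its predictive distribution; for BQL observations,
    a random draw from U(0, pLOQ) with pLOQ = P(y_sim < loq).
    """
    n_sim, n_obs = y_sim.shape
    pd = np.empty(n_obs)
    for j in range(n_obs):
        sims = y_sim[:, j]
        if y_obs[j] < loq:
            # predicted probability of a BQL observation at this time
            p_loq = np.mean(sims < loq)
            # randomised pd on [0, pLOQ]
            pd[j] = rng.uniform(0.0, p_loq)
        else:
            # empirical quantile of the observation in the simulations
            pd[j] = np.mean(sims < y_obs[j])
    return pd
```

Under a correct model, the pd computed this way should be approximately uniform on [0;1] whether or not the observation is censored, which is what allows a single diagnostic plot instead of separate graphs for measured and BQL data.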
Results: In accordance with the results of other studies, taking BQL data into account when modelling the full dataset resulted in adequate estimates. However, the residual graphs, including VPCs and npde, indicated model misspecification and were misleading for model evaluation. The new approach for computing pd, on the other hand, provided evaluation graphs correctly indicating model adequacy.
Conclusions: Presently, goodness-of-fit plots proposed by most software discard BQL observations and/or impute them to the LOQ, even when those observations were correctly handled in the estimation process. When the amount of BQL data is large, this produces distorted graphs. pd can easily be modified to accommodate left-censored data, and these new diagnostic plots offer a better assessment of model adequacy without the need to split the graphs between measured observations and BQL data, as in the usual VPC. Taking into account correlations within subjects for the new pd is under development. The same idea can be applied to model evaluation for binary or categorical data.
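The extension to binary or categorical data mentioned in the conclusions can be sketched with the same randomisation idea: for a discrete observation y, the predictive distribution puts positive mass on y itself, so pd is drawn uniformly over the jump of the predictive CDF at y, i.e. on [P(Y<y), P(Y≤y)]. This is a hedged illustration of one possible construction (randomised quantile residuals), not the authors' implementation; the function name `pd_discrete` is hypothetical.

```python
import numpy as np

def pd_discrete(y_obs, y_sim, rng):
    """Randomised pd for discrete (binary/categorical/count) data.

    y_obs : (n_obs,) observed discrete outcomes.
    y_sim : (n_sim, n_obs) outcomes simulated from the model.
    For each observation, pd is drawn uniformly on
    [P(Y_sim < y), P(Y_sim <= y)], so pd ~ U(0, 1) under the true model.
    """
    n_sim, n_obs = y_sim.shape
    pd = np.empty(n_obs)
    for j in range(n_obs):
        lo = np.mean(y_sim[:, j] < y_obs[j])   # CDF just below y
        hi = np.mean(y_sim[:, j] <= y_obs[j])  # CDF at y
        pd[j] = rng.uniform(lo, hi)            # uniform over the jump
    return pd
```

The BQL case above is the same construction applied to the censoring interval: a BQL observation carries the whole mass below the LOQ, so its pd is uniform on [0;pLOQ].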
[1] F Mesnil, F Mentré, C Dubruc, JP Thenot, A Mallet (1998). Population pharmacokinetic analysis of mizolastine and validation from sparse data on patients using the nonparametric maximum likelihood method. J Pharmacokinet Biopharm, 26: 133-61.
[2] F Mentré, S Escolano (2006). Prediction discrepancies for the evaluation of nonlinear mixed-effects models. J Pharmacokinet Pharmacodyn, 33: 345-67.
[3] K Brendel, E Comets, C Laffont, C Laveille, F Mentré (2006). Metrics for external model evaluation with an application to the population pharmacokinetics of gliclazide. Pharm Res, 23: 2036-49.
[4] K Brendel, E Comets, C Laffont, F Mentré (2010). Evaluation of different tests based on observations for external model evaluation of population analyses. J Pharmacokinet Pharmacodyn, 37: 49-65.
[5] E Comets, K Brendel, F Mentré (2008). Computing normalised prediction distribution errors to evaluate nonlinear mixed-effect models: the npde add-on package for R. Comput Methods Programs Biomed, 90: 154-66.
[6] M Bergstrand, MO Karlsson (2009). Handling data below the limit of quantification in mixed effect models. AAPS J, 11: 371-80.
[7] A Samson, M Lavielle, F Mentré (2006). Extension of the SAEM algorithm to left-censored data in non-linear mixed-effects model: application to HIV dynamics model. Computational Statistics and Data Analysis, 51: 1562-74.