PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
PAGE 15 (2006) Abstr 997 [www.page-meeting.org/?abstract=997]
Poster: Methodology - Model evaluation
Lindbom, Lars(1), Justin J Wilkins(1), Nicolas Frey(2), Mats O Karlsson(1), E Niclas Jonsson(1,2)
1) Division of Pharmacokinetics and Drug Therapy, Department of Biopharmaceutical Sciences, Uppsala University, Uppsala, Sweden; 2) Modeling & Simulation Group, Medical Science/Clinical Pharmacology, F. Hoffmann-La Roche, Basel, Switzerland
Introduction: Model evaluation is widely recognized as an important part of population pharmacokinetic (PK) and pharmacodynamic (PD) model development. Although methods for performing such model evaluations have been proposed and used, they have not been extensively evaluated within this particular application area. In this study, we investigate the properties of some of the proposed methods: the nonparametric bootstrap, log-likelihood profiling, case-deletion diagnostics and regression-derived standard errors. The investigation focuses on each method's ability to assess model appropriateness in terms of stability, sensitivity to influential data and parameter precision.
Methods: Twenty-two PK and PD models, based on real clinical and nonclinical data, implemented in the software application NONMEM and estimated using different estimation methods, were evaluated with each of the methods as implemented in Perl-speaks-NONMEM (PsN) (1), and the results were compared and contrasted.
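In practice, each of these evaluation techniques corresponds to a PsN command-line tool run against a NONMEM model file. A minimal sketch, assuming a hypothetical model file `run1.mod`; the tool names (`bootstrap`, `llp`, `cdd`) are PsN's, but the option values shown are illustrative and should be checked against the PsN documentation:

```shell
# Nonparametric bootstrap with 2,000 resampled datasets
bootstrap run1.mod -samples=2000

# Log-likelihood profiling of selected fixed-effect parameters
# (parameter selection syntax is an assumption; see the PsN llp user guide)
llp run1.mod -thetas='1,2'

# Case-deletion diagnostics, deleting one individual at a time
cdd run1.mod -case_column=ID
```

Each tool writes its results (parameter percentiles, profile-based confidence intervals, or case-deletion statistics) to its own run directory for subsequent comparison.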
Results and discussion: Taking percentiles from a nonparametric bootstrap with 2,000 samples as the reference evaluation method, log-likelihood profiling (LLP) and the nonparametric bootstrap gave very similar results for fixed-effects parameters when the first-order conditional estimation (FOCE) method was used. In contrast, LLP produced the most divergent confidence intervals for model parameters when the first-order (FO) estimation method was applied. This is in line with earlier findings that changes in the objective function value under the FO method are a comparatively poor basis for constructing confidence intervals. The impact of a successful or failed covariance step in NONMEM on the bootstrap results was found to be small. Many of the models included in this analysis exhibited problems to some degree, particularly overparameterization, despite having been classified as final by their developers before the analysis began. The condition number of the covariance matrix of the original model was a strong predictor of NONMEM stability in both the bootstrap and the case-deletion diagnostics.
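The condition-number diagnostic mentioned above is simply the ratio of the largest to the smallest eigenvalue of the matrix in question. A minimal sketch of the computation (the covariance matrix values below are purely illustrative, standing in for estimates read from a NONMEM covariance-step output):

```python
import numpy as np

# Hypothetical variance-covariance matrix of parameter estimates
# (illustrative values only, not from any real NONMEM run).
cov = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.25],
])

def condition_number(matrix):
    """Ratio of largest to smallest eigenvalue of a symmetric matrix.

    Large values are a warning sign of ill-conditioning and hence
    possible overparameterization of the model.
    """
    eigvals = np.linalg.eigvalsh(matrix)  # real eigenvalues, ascending order
    return eigvals.max() / eigvals.min()

print(condition_number(cov))
```

A well-conditioned problem gives a value near 1 (the identity matrix gives exactly 1), while a near-singular covariance matrix gives a very large value.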