What is the Value of Parameter Uncertainty Estimates Provided by Different Population PK Methods?
C. Dartois (1), C. Laveille (2), B. Tranchand (1,3), M. Tod (4), P. Girard (1,5)
(1) EA3738, UCLB University, Lyon; (2) IRIS Servier, Courbevoie; (3) Centre Anti-Cancéreux Léon Bérard, Lyon; (4) Pharmacy, UCLB University, Lyon; (5) INSERM, Lyon; France
Background In population models, parameter uncertainty is estimated either from the posterior distribution of fixed- and random-effect parameters or from standard errors (SE) derived from the Hessian matrix in maximum likelihood methods. The SE is a crucial, and controversial, piece of information: it varies with the quantity of available information, serves as one of the very first steps of model evaluation, and is essential for any future simulations.
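The Hessian-based route mentioned above can be sketched in a few lines: at the maximum likelihood estimate, the asymptotic SEs are the square roots of the diagonal of the inverse Hessian of the negative log-likelihood. This is an illustrative toy (a normal mean/SD model, not the authors' PK model); all names and values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=2.0, size=50)  # simulated observations (toy data)

def nll(theta):
    """Negative log-likelihood of i.i.d. normal data."""
    mu, sigma = theta
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / sigma**2)

# Closed-form MLEs for this simple model
theta_hat = np.array([y.mean(), y.std()])

def hessian(f, x, h=1e-4):
    """Numerical Hessian by central finite differences."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
    return H

H = hessian(nll, theta_hat)
se = np.sqrt(np.diag(np.linalg.inv(H)))  # asymptotic standard errors
```

For the mean, this reproduces the textbook result SE(mu) = sigma_hat / sqrt(n), which is the same mechanism NONMEM's $COV step applies to a far richer likelihood.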
Purpose The purpose of this study is to compare, by simulation, the performance of different population methods in estimating the SEs of a PK model across various designs.
Method We simulate from a one-compartment PK model with first-order oral absorption, log-normal inter-individual variability and an additive error model. Simulation (true) parameters are based on the theophylline dataset provided with NONMEM, and optimal sampling times are computed with PFIMOPT for all designs. We simulate 100 datasets for each combination of number of subjects (30, 100, 500) and sampling design (3, 6, 15 points/patient). Each of the 900 datasets is fitted with NONMEM methods (FO, FOCE, FOCE Interaction), and SEs are computed either from the default NONMEM covariance matrix ($COV) or by non-parametric bootstrap (n=200) after FOCE Interaction. For each of the 4 methods, we compare parameter estimates with their true values and SEs with the expected values computed by PFIMOPT.
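A minimal sketch of this simulation set-up follows: a one-compartment model with first-order oral absorption, log-normal inter-individual variability, additive residual error, and a non-parametric bootstrap that resamples subjects with replacement. The dose, typical values, variability magnitudes, sampling times and the bootstrapped statistic are all illustrative assumptions, not the theophylline estimates or the authors' code (the actual fits were done in NONMEM).

```python
import numpy as np

rng = np.random.default_rng(42)
DOSE = 320.0                                 # mg, single oral dose (assumed)
POP = {"CL": 2.8, "V": 32.0, "KA": 1.5}      # typical values (assumed)
OMEGA = 0.3                                  # SD of log-normal IIV (assumed)
SIGMA = 0.5                                  # SD of additive residual error (assumed)
TIMES = np.array([0.5, 2.0, 12.0])           # sparse design: 3 points/patient (assumed)

def simulate(n_subj):
    """Simulate concentration-time profiles for n_subj subjects."""
    data = []
    for _ in range(n_subj):
        # Individual parameters: log-normal variability around typical values
        cl, v, ka = (POP[k] * np.exp(rng.normal(0, OMEGA)) for k in ("CL", "V", "KA"))
        ke = cl / v
        # One-compartment, first-order absorption (Bateman equation)
        conc = DOSE * ka / (v * (ka - ke)) * (np.exp(-ke * TIMES) - np.exp(-ka * TIMES))
        data.append(conc + rng.normal(0, SIGMA, size=TIMES.size))  # additive error
    return np.array(data)

def bootstrap_se(data, stat, n_boot=200):
    """Non-parametric bootstrap SE: resample subjects with replacement."""
    n = data.shape[0]
    reps = [stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

data = simulate(100)
# Illustration: bootstrap SE of the mean concentration at the last sampling time
se_last = bootstrap_se(data, lambda d: d[:, -1].mean())
```

Resampling whole subjects (rather than individual observations) preserves the within-subject correlation structure, which is why subject-level resampling is the standard non-parametric bootstrap for population PK data.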
Results In terms of bias with sparse data, we observe no differences between methods for fixed-effect parameters, all estimates being close to their true values. When the number of points per patient increases, a slight bias appears with the FO method (+10% on average). For random-effect parameters, a consistent bias (+35%) is observed across all designs and methods. The bias in residual error disappears as the number of points per patient increases. As expected, SEs decrease quickly as the number of subjects increases. SEs of fixed-effect parameters are close to the expected SEs and consistent across all designs and all methods, bootstrap included. SEs of random-effect parameters are consistently different from the expected ones across all methods and designs, and highly variable from one dataset to another. We find no noticeable difference between bootstrapped and $COV standard errors. Regarding CPU time, SEs are obtained 200 times faster with $COV than with the bootstrap.
Conclusion For all estimation methods, the Hessian-derived SEs of fixed-effect parameters look reliable, while those of random effects are highly variable and differ from the PFIMOPT expected ones. Bootstrap SEs are very close to $COV SEs. These results remain to be confirmed with estimates from nlme, with and without bootstrap, and with Bayesian posterior estimates from BUGS.
References
1. NONMEM user group, 07/2003.
2. S. Retout and F. Mentré. Optimization of individual and population designs using Splus. J Pharmacokinet Pharmacodyn 30(6):417-443, 2003.