IV-20

Design evaluation using a bootstrapped Monte Carlo variance-covariance matrix.

Eric A. Strömberg, Mats O. Karlsson, Andrew C. Hooker.

Department of Pharmaceutical Biosciences, Uppsala University

Objectives: When evaluating study designs, methods based on Monte Carlo simulation, such as Stochastic Simulation and Estimation (SSE), are considered the gold standard. The parameter vectors estimated in an SSE can be used to calculate an empirical variance-covariance matrix (empCOV), which may be used for design evaluation in a similar manner to the FIM, e.g. by calculating a D-criterion. However, the true empCOV is only obtained as the number of simulations approaches infinity. In this work, the resulting uncertainty in the empCOV and the empirical D-criterion is addressed by bootstrapping empirical D-criteria from empCOV matrices computed on resamples of the estimated parameter vectors.
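The empCOV-based criterion described above can be sketched as follows. This is an illustrative assumption of the computation, not the authors' code: the parameter vectors are stacked into a matrix, empCOV is their sample covariance, and, since empCOV plays the role of the inverse FIM, ln det(empCOV^-1) is used here as a D-criterion analogue (the exact criterion scaling used in the poster is not stated).

```python
import numpy as np

def emp_cov_d_criterion(param_vectors):
    """Empirical D-criterion analogue from SSE parameter estimates.

    param_vectors: (n_sse, n_params) array, one row per estimated
    parameter vector from the SSE study.

    empCOV is the sample covariance of the estimates; treating it as
    an approximation of the inverse FIM, we return
    ln det(empCOV^-1) = -ln det(empCOV).  (Criterion scaling is an
    illustrative choice, not taken from the abstract.)
    """
    emp_cov = np.cov(param_vectors, rowvar=False)
    _, logdet = np.linalg.slogdet(emp_cov)
    return -logdet
```

A more precise design (smaller empCOV) yields a larger value of this criterion, matching the usual "larger D-criterion is better" convention for the FIM.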

Methods: For two example models (one PK and one PD), the optimal designs using the FO- and FOCE-approximated full and block-diagonal FIMs were determined with PopED [1-2]. A completely random design was also generated for comparison with the optimal designs. In the SSE studies, 3000 simulated datasets were used to generate 3000 parameter vectors, estimated with FOCEI in NONMEM 7.2 [3] via PsN [4]. The median and the 5th and 95th percentiles of the D-criteria from the empCOV were calculated in MATLAB 7.13 using a 1000-iteration case-resampling bootstrap, in which each iteration drew 3000 parameter vectors with replacement from the 3000 estimated vectors and calculated the empCOV from that sample.
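The case-resampling bootstrap described above can be sketched as follows. This is an assumed implementation (the original analysis was done in MATLAB): each iteration resamples the SSE parameter vectors with replacement at the original sample size, recomputes empCOV, and records -ln det(empCOV) as the D-criterion analogue; the median and 5th/95th percentiles of the resulting values mirror the summary reported in the abstract.

```python
import numpy as np

def bootstrap_d_criteria(param_vectors, n_boot=1000, seed=1):
    """Case-resampling bootstrap of the empirical D-criterion.

    param_vectors: (n_sse, n_params) array of SSE parameter estimates.
    Returns (median, [5th, 95th] percentiles) of the bootstrapped
    D-criterion analogue.  The -ln det(empCOV) criterion is an
    illustrative convention, as in the abstract no exact scaling
    is given.
    """
    rng = np.random.default_rng(seed)
    n = param_vectors.shape[0]
    crits = np.empty(n_boot)
    for b in range(n_boot):
        # Resample n parameter vectors with replacement (case resampling).
        idx = rng.integers(0, n, size=n)
        cov = np.cov(param_vectors[idx], rowvar=False)
        crits[b] = -np.linalg.slogdet(cov)[1]
    return np.median(crits), np.percentile(crits, [5, 95])
```

Two designs can then be compared via their percentile intervals rather than single D-criterion values, which is the comparison the abstract advocates.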

Results: The medians of the D-criteria of the optimal designs ranged between 1704 and 1774 for the PK example and between 739 and 762 for the PD example. The average widths of the confidence intervals were 56 and 42 for the PK and PD examples, respectively. For all designs, the D-criteria calculated without bootstrapping were close to the medians of the bootstrap confidence intervals.

Conclusions: In our examples, when calculating the empCOVs and D-criteria from the SSE parameter vectors without bootstrapping, the results suggest a difference in performance amongst all the designs. However, the confidence intervals of the D-criteria generated by the bootstraps show no significant difference between the optimal designs. Using a single D-criterion from SSE parameter vectors to compare designs thus carries a risk of false conclusions of design superiority, caused by the uncertainty in the empCOV calculation. Bootstrapping empCOV matrices from the parameter vectors and comparing confidence intervals of the D-criterion instead reduces this risk.

References:
[1] Foracchia M, Hooker A, Vicini P, Ruggeri A. "POPED, a software for optimal experiment design in population kinetics." Computer Methods and Programs in Biomedicine. 2004;74(1):29-46.
[2] Nyberg J, Ueckert S, Strömberg E.A., Hennig S, Karlsson M.O., Hooker A.C. "PopED: an extended, parallelized, nonlinear mixed effects models optimal design tool." Computer Methods and Programs in Biomedicine. 2012;108(2):789-805.
[3] Beal S., Sheiner L.B., Boeckmann A., Bauer R.J. "NONMEM User's Guides (1989-2009)." Icon Development Solutions, Ellicott City, MD, USA, 2009.
[4] Keizer R.J., Karlsson M.O., Hooker A. "Modeling and Simulation Workbench for NONMEM: Tutorial on Pirana, PsN, and Xpose." CPT Pharmacometrics Syst Pharmacol. 2013;2:e50. [http://psn.sourceforge.net/] (accessed 2013-03-13)

Reference: PAGE 23 (2014) Abstr 3249 [www.page-meeting.org/?abstract=3249]

Poster: Methodology - Study Design
