David Bayard and Roger Jelliffe
Laboratory of Applied Pharmacokinetics, USC School of Medicine, Los Angeles CA USA
Objective: To develop an experiment design approach for multiple model (MM) problems in which the population model associated with the Bayesian prior is specified as a finite discrete probability distribution. Such population models are routinely generated by nonparametric population modeling programs such as NPEM and NPAG.
Methods: The multiple model estimation process can be interpreted as a classification problem. Viewed this way, estimator performance can be scored by how well it minimizes the Bayes risk, i.e., the probability of misclassification. Using Bayes risk as an experiment design criterion provides an alternative to D-optimality and other criteria based on the asymptotic Fisher information matrix. Unfortunately, the Bayes risk is difficult to compute. However, a theoretical upper bound on the Bayes risk has recently appeared in the literature (cf. Blackmore, Rajamanoharan and Williams 2008). Because of its clear computational advantages, this poster proposes experiment designs for pharmacokinetic applications based on minimizing this upper bound on the Bayes risk.
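To make the criterion concrete, the following sketch computes a pairwise Bhattacharyya-type overbound on the Bayes classification risk for a finite discrete prior with additive Gaussian assay noise. This is a standard, easily computed bound in the same spirit as the criterion described above; the exact functional form of the Blackmore et al. bound differs, and the function name and interface here are illustrative assumptions.

```python
import numpy as np

def pairwise_bayes_risk_bound(mu, priors, sigma):
    """Pairwise overbound on the Bayes classification risk.

    mu:     (K, n) array; row i holds model i's predicted responses
            at the n candidate sampling times.
    priors: (K,) discrete prior probabilities (the Bayesian prior).
    sigma:  scalar additive assay error standard deviation.

    Uses the classical pairwise Bhattacharyya bound for Gaussians
    with common covariance sigma^2 * I:
        P(error) <= sum_{i<j} sqrt(p_i p_j) exp(-||mu_i - mu_j||^2 / (8 sigma^2))
    (illustrative surrogate; the Blackmore et al. bound has a different form).
    """
    K = len(priors)
    bound = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            d2 = np.sum((mu[i] - mu[j]) ** 2)
            bound += np.sqrt(priors[i] * priors[j]) * np.exp(-d2 / (8.0 * sigma**2))
    return bound
```

A design that pushes the model response curves apart at the chosen sampling times shrinks every exponential term, so minimizing the bound over candidate sampling times directly targets misclassification probability rather than local parameter sensitivity.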
Results: In a simulated example, candidate sampling times were restricted to a 15-minute grid rather than allowed to vary continuously. An additive assay error of 0.1 units was assumed. Notably, it is not necessary to take one sample per model parameter. For a 1-compartment model with parameters V and Kel and a 1-hour IV infusion, the 1-sample strategy was best at 4.25 hr, with a cost of 1.6457. The 2-sample strategy was best at 1 and 9.5 hr, with a cost of 0.7946. The 3-sample strategy was best at 1, 1, and 10.5 hr, with a cost of 0.5988. The 4-sample strategy was best at 1, 1, 1, and 10.75 hr, with a cost of 0.5062.
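The search procedure above can be sketched as an exhaustive search over the 15-minute grid for a 1-compartment, 1-hour-infusion model. The population support points, weights, infusion rate, and the pairwise Bhattacharyya-style surrogate bound used here are all illustrative assumptions, not the actual prior or bound used in the poster; note that repeated sampling times are allowed, matching the reported optimal designs such as (1, 1, 10.5) hr.

```python
import numpy as np
from itertools import combinations_with_replacement

def conc(t, V, Kel, R=100.0, Tinf=1.0):
    """1-compartment concentration for a constant-rate IV infusion.
    During infusion (t <= Tinf): C = (R/(V*Kel)) * (1 - exp(-Kel*t));
    afterwards the end-of-infusion level decays as exp(-Kel*(t - Tinf))."""
    t = np.asarray(t, dtype=float)
    rise = (R / (V * Kel)) * (1.0 - np.exp(-Kel * np.minimum(t, Tinf)))
    decay = np.exp(-Kel * np.maximum(t - Tinf, 0.0))
    return rise * decay

def risk_bound(times, support, weights, sigma):
    """Pairwise Bhattacharyya-type overbound on Bayes risk (illustrative)."""
    mu = np.array([conc(times, V, Kel) for (V, Kel) in support])
    b = 0.0
    for i in range(len(support)):
        for j in range(i + 1, len(support)):
            d2 = np.sum((mu[i] - mu[j]) ** 2)
            b += np.sqrt(weights[i] * weights[j]) * np.exp(-d2 / (8.0 * sigma**2))
    return b

grid = np.arange(0.25, 12.01, 0.25)              # 15-minute sampling grid, hours
support = [(10.0, 0.10), (15.0, 0.20), (20.0, 0.35)]  # hypothetical (V, Kel) points
weights = [1.0 / 3, 1.0 / 3, 1.0 / 3]                 # hypothetical prior weights
sigma = 0.1                                           # additive assay error

# Exhaustive search: best 1-sample and 2-sample designs (repeats allowed).
best1 = min(grid, key=lambda t: risk_bound(np.array([t]), support, weights, sigma))
best2 = min(combinations_with_replacement(grid, 2),
            key=lambda ts: risk_bound(np.array(ts), support, weights, sigma))
print(f"best 1-sample design: {best1} hr")
print(f"best 2-sample design: {best2} hr")
```

Because every added sample can only increase the separation between model response vectors, the optimal 2-sample cost is never worse than the optimal 1-sample cost, mirroring the monotone decrease in cost (1.6457, 0.7946, 0.5988, 0.5062) reported above.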
Conclusions: Multiple Model Optimal Design (MMOpt) can potentially improve on D-optimal design, as it is based on a true multiple model formulation of the problem (classification theory) and is optimal with respect to a Bayesian prior. It is applicable to the full assay error polynomial: Sigma_noise = c0 + c1*y + c2*y^2 + c3*y^3. It is based on a recent theoretical overbound on the Bayes risk. In contrast to D-optimal designs, MMOpt discriminates between models by using global differences in model response curves rather than local sensitivity to small parameter variations. MMOpt experiment designs can also handle populations of heterogeneous model types, for example, models having different numbers of compartments. MMOpt will soon be included in the USC RightDose software.
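As a brief illustration of the assay error polynomial, the snippet below evaluates Sigma_noise(y) = c0 + c1*y + c2*y^2 + c3*y^3 and shows one standard way a concentration-dependent error could enter a separation measure, by weighting each sample's squared difference by 1/sigma(y)^2 (a Mahalanobis-style weighting). The coefficients and predictions are illustrative assumptions; only c0 = 0.1 echoes the constant additive error of the simulated example.

```python
import numpy as np

def assay_sigma(y, c=(0.1, 0.0, 0.0, 0.0)):
    """Assay error SD polynomial: sigma(y) = c0 + c1*y + c2*y^2 + c3*y^3.
    Default coefficients reduce to the constant 0.1-unit additive error."""
    c0, c1, c2, c3 = c
    return c0 + c1 * y + c2 * y**2 + c3 * y**3

# Hypothetical predictions from two candidate models at two sampling times.
y_i = np.array([4.0, 1.5])
y_j = np.array([3.2, 2.1])

# Per-sample SD evaluated at the midpoint response (an illustrative choice),
# then a Mahalanobis-style weighted squared separation between the models.
s = assay_sigma(0.5 * (y_i + y_j))
d2 = np.sum(((y_i - y_j) / s) ** 2)
```

With a nontrivial polynomial, samples taken where the assay is more precise (small sigma(y)) contribute more separation, so the design criterion automatically favors informative, low-noise sampling times.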
Supported by NIH Grants GM068968 and HD070886
Reference: PAGE 22 (2013) Abstr 2704 [www.page-meeting.org/?abstract=2704]
Poster: Study Design