2007 - København - Denmark

PAGE 2007: Methodology - Algorithms
Robert Leary

An evolutionary nonparametric NLME algorithm

Robert H. Leary

Pharsight Corporation

Nonparametric NLME algorithms [1,2,3] relax the usual multivariate normality assumption for the random effects distribution to allow arbitrary nonparametric distributions. The nonparametric maximum likelihood distribution can be shown to be a discrete distribution defined by probabilities P_j on support points S_j, j = 1, 2, ..., M, where M ≤ N and N is the number of subjects. Support points have dimensionality D equal to the number of random effects. While finding optimal probabilities for a given set of support points is a convex optimization problem with known fast and reliable algorithms, optimizing support point positions is a much more difficult global optimization problem with many local optima, particularly in higher dimensions.
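As a concrete illustration of this discrete form, the marginal log-likelihood is log L = Σ_i log( Σ_j P_j f_i(S_j) ), where f_i(S_j) is subject i's conditional likelihood evaluated at support point S_j. A minimal sketch, assuming the conditional likelihoods have already been tabulated (the function and variable names here are illustrative, not part of the abstract):

```python
import numpy as np

def mixture_loglik(prob, lik):
    """Nonparametric marginal log-likelihood of a discrete mixing distribution.

    prob: (M,) probabilities P_j on the support points.
    lik:  (N, M) matrix of conditional likelihoods f_i(S_j)
          for N subjects and M support points.
    """
    # Per-subject marginal likelihoods L_i = sum_j P_j * f_i(S_j)
    return float(np.sum(np.log(lik @ prob)))
```

Because log L is concave in the probabilities, maximizing over `prob` with the support fixed is exactly the convex subproblem the abstract refers to.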

In the low-dimensional case (e.g., D = 1 or 2), a computationally feasible and effective strategy involves gridding the random effects space (as in the NPEM algorithm in [2]) at reasonable resolution over a box whose dimensions are derived from the observed post-hoc and/or Omega matrix estimates from an initial parametric analysis. This allows a (near) global optimum solution to be found in a single step using only a very fast and effective convex primal-dual probability optimization algorithm [3] that selects an optimal subset from the candidate pool defined by the grid. However, in higher dimensions reliance on gridding alone breaks down due to Bellman's 'curse of dimensionality': the number of grid points grows exponentially with D and quickly becomes computationally infeasible if reasonable resolution is maintained. The alternative strategy of direct optimization of support point coordinates with derivative-based methods often leads to trapping in one of a large number of local optima.
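The convex subproblem of selecting probabilities over a fixed candidate grid can be illustrated with a simple EM-type multiplicative update, which monotonically increases the likelihood and drives the probabilities of unneeded candidates toward zero. This is only a sketch: the fast primal-dual algorithm of [3] is more sophisticated, and all names here are assumptions for illustration.

```python
import numpy as np

def optimize_probs(lik, iters=500, tol=1e-10):
    """EM-type multiplicative update for the mixing probabilities.

    lik: (N, M) conditional likelihoods of N subjects at M grid candidates.
    Returns probabilities summing to 1; near-zero entries mark candidates
    that drop out of the optimal support.
    """
    n, m = lik.shape
    p = np.full(m, 1.0 / m)            # start from the uniform distribution
    for _ in range(iters):
        marginal = lik @ p             # (N,) per-subject marginals L_i
        # Multiplicative update: p_j <- p_j * (1/N) * sum_i f_i(S_j) / L_i
        p_new = p * (lik / marginal[:, None]).mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return p
```

Selecting the optimal subset from the candidate pool then amounts to keeping the grid points whose converged probability exceeds a small threshold.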

Here we consider an evolutionary-type algorithm that leverages the power and speed of the convex probability optimization algorithm to find good approximations to the solution of the more general nonconvex global optimization problem. Using only the probability optimization algorithm, it is possible to select the M optimal support points (and assign maximum likelihood probabilities) from an arbitrarily large collection of candidates. The algorithm evolves a population of candidate support points in a manner that guarantees that the likelihood of the optimal subset monotonically increases with each generation. A novel feature of the algorithm is the use of the dual solution for the current generation to prescreen large numbers of candidates for new support points, limiting population growth to only the most promising new points. The candidate proposal function prevents trapping in local optima by proposing candidate points outside of the current basin of attraction.
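The dual-based prescreening step can be illustrated with the standard optimality condition for mixture NPMLE: given the current per-subject marginals L_i, adding a point mass at a new candidate s can increase the likelihood only if Σ_i f_i(s)/L_i > N. A minimal sketch under that assumption (the function and variable names are hypothetical, not the abstract's actual implementation):

```python
import numpy as np

def dual_screen(lik_support, prob, lik_candidates):
    """Prescreen proposed support points using the current dual solution.

    lik_support:    (N, M) conditional likelihoods at current support points.
    prob:           (M,) current optimal probabilities.
    lik_candidates: (N, K) conditional likelihoods at K proposed points.
    Returns a boolean mask: True where the dual (gradient) value
    sum_i f_i(s)/L_i exceeds N, i.e. the candidate can improve the likelihood.
    """
    n_subjects = lik_support.shape[0]
    marginal = lik_support @ prob                        # (N,) L_i
    dual = (lik_candidates / marginal[:, None]).sum(axis=0)  # (K,) dual values
    return dual > n_subjects
```

Candidates failing this test cannot increase the likelihood of the current generation and are discarded without entering the (more expensive) probability optimization.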

As in NONMEM 6, the algorithm is initiated with a population of candidate support points consisting of the post-hoc estimates from an initial parametric analysis. However, unlike NONMEM 6, where support points are static, support points in the evolutionary algorithm move toward maximal likelihood placement as a result of multiple iterations of probability optimization and population evolution. Computational results are presented comparing the two approaches. Comparisons are also made with the adaptive grid nonparametric approach in the NPAG algorithm in USC*PACK.

[1] Mallet, A., A maximum likelihood estimation method for random coefficient regression models, Biometrika 73: 645-656, 1986.
[2] Schumitzky, A., Nonparametric EM algorithms for estimating prior distributions, App. Math. Comput. 45: 143-157, 1991.
[3] Leary, R. et al., An adaptive grid non-parametric approach to pharmacokinetic and dynamic (PK/PD) population models, 14th IEEE Symposium on Computer-Based Medical Systems, 389-394, 2001.

Reference: PAGE 16 (2007) Abstr 1100 [www.page-meeting.org/?abstract=1100]
Poster: Methodology - Algorithms