An Overview of Expectation-Maximization Algorithms Used in Population Analysis Methods
Robert J. Bauer
ICON plc, Gaithersburg, MD
To present an overview of expectation-maximization algorithms used to analyse pharmacokinetic/pharmacodynamic models for population data. Their statistical basis will be briefly discussed, with emphasis on practical suggestions as to when and how to use them.
Over the last thirty years, a series of algorithms for PK/PD modelling and population analysis has become available to the pharmaceutical scientist for solving what are called non-linear mixed-effects models. The statistical goal of these tools is to find the set of population parameters that best fits the entire population of data, taking into consideration all possible values of the individual parameters.
The first algorithms to be developed for population analysis of PK/PD data were the first-order methods for normally distributed data, which use first-order approximations to simplify the otherwise daunting task of this statistical analysis problem. These methods are reasonable when the residual variance within subjects is small and/or the non-linearity of the parameter distribution of the PK/PD model is not large. For additional accuracy, and for non-normally distributed data, higher-order approximation methods such as the Laplace method and Gaussian quadrature were developed, but these require more computation time and can be unstable for complex problems.
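As a rough illustration of the Laplace idea, the sketch below approximates one subject's marginal likelihood integral by a Gaussian centred at the posterior mode of the random effect. The one-random-effect exponential model, the grid search for the mode, and all parameter values are invented for illustration; this is not any particular software's implementation.

```python
import numpy as np

def neg_log_joint(eta, y, theta, omega2, sigma2):
    # Hypothetical one-random-effect model (illustrative only): the
    # prediction depends exponentially on the random effect eta.
    pred = theta * np.exp(eta)
    # -log p(y | eta) - log p(eta): Gaussian residual plus Gaussian prior
    return np.sum((y - pred) ** 2) / (2 * sigma2) + eta ** 2 / (2 * omega2)

def laplace_marginal(y, theta, omega2, sigma2):
    # Locate the posterior mode eta_hat by a crude grid search (a real
    # implementation would use Newton iterations), then apply Laplace:
    #   integral exp(-f(eta)) d eta ~ exp(-f(eta_hat)) * sqrt(2*pi / f''(eta_hat))
    grid = np.linspace(-3.0, 3.0, 2001)
    f = np.array([neg_log_joint(e, y, theta, omega2, sigma2) for e in grid])
    i = int(np.argmin(f))
    h = grid[1] - grid[0]
    fpp = (f[i + 1] - 2.0 * f[i] + f[i - 1]) / h ** 2  # numeric second derivative
    return np.exp(-f[i]) * np.sqrt(2.0 * np.pi / fpp)
```

The approximation is accurate when the posterior of eta is nearly Gaussian, which is the same condition under which the higher-order methods described above behave well.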
To reduce the bias and computational cost that result from imposing a linear or higher-order approximation, Monte Carlo expectation-maximization (EM) methods were developed. These Monte Carlo methods are particularly useful for efficient analysis of complex PK/PD models, and they provide robust and accurate analyses under conditions that cause high non-linearity (non-normality) in the individual parameter distributions, such as very sparse data (few data points per subject) or categorical and other non-normally distributed data.
There are several algorithms and variants of these EM methods. The importance sampling algorithm samples the important regions of the posterior density to obtain weighted conditional means and variances of the individual parameters, which are then used to efficiently update the population parameters and population variances. A variant of importance sampling, called quasi-random parametric expectation-maximization (QRPEM), uses a quasi-random (Sobol) sampling sequence, which can reduce the stochastic variability of the results and thus requires fewer samples for a given desired noise level. As the data become more sparse (fewer data points per subject), when many data points fall below the quantifiable limit of the assay, or when the observed data are non-normally distributed (as with categorical data), the importance sampler should be adjusted by increasing the number of samples, expanding the variance of the proposal relative to the variance of the posterior density (lowering the effective acceptance rate), and/or using a t-distribution sampler with low degrees of freedom, which places more samples in the elongated tails of the posterior distribution.
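A minimal sketch of the importance-sampling E-step for a single subject follows, using a deliberately simple scalar model (observation mean theta + eta) and an inflated, low-degrees-of-freedom t-distribution proposal of the kind described above. The model, parameter values, and function names are assumptions made for illustration only.

```python
import numpy as np
from math import lgamma, log, pi

rng = np.random.default_rng(0)

def log_t_pdf(x, df):
    # log density of a standard Student-t with df degrees of freedom
    c = lgamma((df + 1) / 2) - lgamma(df / 2) - 0.5 * log(df * pi)
    return c - (df + 1) / 2 * np.log1p(x ** 2 / df)

def log_joint(eta, y, theta, omega2, sigma2):
    # Hypothetical scalar model (illustrative only): observation mean
    # theta + eta, Gaussian residual error, Gaussian population
    # distribution for the random effect eta.
    return (-np.sum((y[:, None] - (theta + eta)) ** 2, axis=0) / (2 * sigma2)
            - eta ** 2 / (2 * omega2))

def e_step_importance(y, theta, omega2, sigma2, n=4000, inflate=2.0, df=4):
    # Proposal: t-distribution centred at the population mean with variance
    # inflated relative to omega2; the wide, low-df proposal puts more
    # samples in the tails of the posterior, as suggested for sparse data.
    scale = np.sqrt(inflate * omega2)
    eta = scale * rng.standard_t(df, size=n)
    log_q = log_t_pdf(eta / scale, df) - log(scale)
    log_w = log_joint(eta, y, theta, omega2, sigma2) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    cond_mean = float(np.sum(w * eta))                    # E[eta | y]
    cond_var = float(np.sum(w * (eta - cond_mean) ** 2))  # Var[eta | y]
    return cond_mean, cond_var
```

In the M-step, the population variance would then be updated from the average of `cond_mean**2 + cond_var` across subjects, which is what makes the EM update efficient.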
The stochastic approximation EM (SAEM) algorithm uses a Markov-chain Monte Carlo (MCMC) sequence to sample the posterior density. Instead of drawing from a fixed proposal as the importance sampler does, the MCMC sampler travels throughout the region of posterior density, collecting and accepting many samples in regions where the goodness of fit of the individual parameters is high, and collecting and accepting few samples where the goodness of fit is low. Under the same conditions as above (very sparse data, many data points below the quantifiable limit of the assay, or non-normally distributed data such as categorical data), the SAEM method should be adjusted by increasing the number of samples collected for each individual.
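The SAEM sampling and update steps can be sketched as follows. The random-walk Metropolis-Hastings chain, the scalar model, the step size, and the chain settings are all illustrative assumptions, and the stochastic approximation update of the sufficient statistic is shown in simplified scalar form.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_joint(eta, y, theta, omega2, sigma2):
    # Hypothetical scalar model (illustrative only): observation mean theta + eta.
    return (-np.sum((y - (theta + eta)) ** 2) / (2 * sigma2)
            - eta ** 2 / (2 * omega2))

def mh_chain(y, theta, omega2, sigma2, n_keep=2, n_burn=50, step=0.5):
    # Random-walk Metropolis-Hastings chain over the posterior of one
    # subject's eta: proposals in high-density (well-fitting) regions are
    # accepted often, proposals in low-density regions rarely.
    eta = 0.0
    lp = log_joint(eta, y, theta, omega2, sigma2)
    kept = []
    for it in range(n_burn + n_keep):
        prop = eta + step * rng.standard_normal()
        lp_prop = log_joint(prop, y, theta, omega2, sigma2)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            eta, lp = prop, lp_prop
        if it >= n_burn:
            kept.append(eta)
    return kept

def saem_update(s_old, etas, gamma):
    # Stochastic approximation of the sufficient statistic E[eta^2 | y];
    # gamma is the decreasing step size (gamma = 1 during the burn-in phase).
    # The M-step would set omega2 from the average of s across subjects.
    return (1 - gamma) * s_old + gamma * float(np.mean(np.square(etas)))
```

Because the stochastic approximation averages over iterations, only a few accepted samples per subject are needed at each iteration, which is why SAEM remains efficient even when each individual sample is expensive to obtain.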