PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
PAGE 18 (2009) Abstr 1458 [www.page-meeting.org/?abstract=1458]
Oral Presentation: Lewis Sheiner Student Session
Antic J. (1,2), Chenel M. (2), Laffont C. M. (1), Concordet D. (1).
(1) UMR181 Physiopathologie et Toxicologie Expérimentales, INRA, ENVT, Toulouse, France; (2) Institut de Recherches Internationales Servier, Courbevoie, France.
Background. Parametric methods, routinely used for population pharmacokinetic (PK) and/or pharmacodynamic (PD) analyses, rely on the normality of the random effects describing interindividual variability (ETAs). However, this normality assumption can be too restrictive, especially in phase II or phase III clinical trials, which involve heterogeneous populations of patients: the distribution of ETAs can be multimodal because of sub-populations, or heavy-tailed because of outliers. Identifying such departures from normality is important for developing efficient and safe drugs. A common graphical check of the normality assumption is based on the individual predictions of the ETAs obtained after parametric estimation, known as Empirical Bayes Estimates (EBEs). Unfortunately, when data are sparse, the EBEs can be unreliable because of ETA-shrinkage. In that context, nonparametric (NP) methods, which do not rely on a normality assumption, are attractive. However, their use remains limited: they can be difficult to handle, and their benefits are poorly documented.
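The mechanism behind ETA-shrinkage can be illustrated with a minimal sketch (all values below are invented for illustration, not taken from the study): when individual data are uninformative, the normal-theory EBEs are pulled toward the population mean, so a truly bimodal ETA distribution can look unimodal. ETA-shrinkage is commonly quantified as 1 − SD(EBE)/ω.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a bimodal true ETA distribution (two
# sub-populations), one noisy observation per subject, and the
# normal-theory EBE taken as the posterior mean.
n = 1000
omega2 = 0.5        # ETA variance assumed by the parametric model
sigma2 = 2.0        # residual variance (sparse data -> large)
group = rng.random(n) < 0.3
eta_true = np.where(group,
                    rng.normal(-1.5, 0.3, n),   # low sub-population
                    rng.normal(0.5, 0.3, n))    # main sub-population
y = eta_true + rng.normal(0.0, np.sqrt(sigma2), n)  # one "observation"

# EBE under the normality assumption: shrunk toward the population mean 0
ebe = (omega2 / (omega2 + sigma2)) * y

# eta-shrinkage = 1 - SD(EBE) / omega
shrinkage = 1.0 - ebe.std(ddof=1) / np.sqrt(omega2)
print(f"eta-shrinkage: {shrinkage:.0%}")
```

With this much residual noise the shrinkage is around 50%, and a histogram of the EBEs no longer shows the two modes that are plainly visible in `eta_true`.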
Objectives. To give practical answers to the following questions. For parametric estimation, is the inspection of EBEs reliable for detecting departures from normality? When should NP methods be preferred over parametric ones? Do all NP methods have equivalent statistical properties? Which NP method achieves the best compromise between implementation/computation burden and ability to detect departures from normality?
Methods: We studied four widespread NP methods: NPML, NPEM, SNP and NP-NONMEM (NPNM). We evaluated and compared these NP methods, first in theory through a literature review, and then in practice through simulation studies. Several datasets were simulated from increasingly challenging scenarios.
In each scenario, the simulated distribution of ETAs was not normal, since we simulated a sub-population of individuals with lower clearance (scenarios 1, 2 and 3) or lower treatment effect (scenario 4). The aim was to assess the ability of the tested methods to detect this departure from normality. We simulated as many datasets as necessary to obtain 100 datasets with successful termination of the parametric NONMEM $ESTIMATION step. The NP methods were then run on these 100 datasets. NPML and NPEM were implemented in C++. SNP was computed using the nlmix Fortran 77 code. NPNM was computed using NONMEM VI with default options.
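A simulation scenario of this kind can be sketched as follows (model constants, sub-population fraction and sampling times are assumptions for illustration, not the study's actual design): a one-compartment IV bolus model where 20% of subjects carry a low-clearance ETA, yielding a bimodal mixing distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of one scenario: one-compartment model, IV bolus,
# with a low-clearance sub-population (all numeric values are invented).
n_subj, dose, V = 100, 100.0, 10.0   # subjects, dose (mg), volume (L)
cl_pop = 5.0                          # population clearance (L/h)

# 20% of subjects form a low-clearance sub-population -> bimodal ETAs
low = rng.random(n_subj) < 0.2
eta_cl = np.where(low,
                  rng.normal(-1.0, 0.2, n_subj),
                  rng.normal(0.0, 0.2, n_subj))
cl = cl_pop * np.exp(eta_cl)          # individual clearances

t = np.array([1.0, 4.0, 8.0])         # sparse sampling times (h)
conc = (dose / V) * np.exp(-np.outer(cl / V, t))      # C(t) = (D/V)e^{-kt}
conc *= np.exp(rng.normal(0.0, 0.1, conc.shape))      # proportional error
print(conc.shape)   # (100, 3): one row of observations per subject
```

With only three samples per subject, the individual clearances are poorly identified, which is exactly the sparse-data setting in which the EBE-based diagnostic becomes unreliable.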
Results: (i) Theoretical comparison. Theoretical knowledge of NP methods appeared poorly documented. It mainly concerns the consistency of the methods (which ensures that increasing the sample size improves the estimation accuracy). The consistency of NPML, NPEM and SNP has been established, under more or less restrictive conditions. However, to the best of our knowledge, the consistency of NPNM remains unproved. For NPML, NPEM and NPNM, some important theoretical questions appeared to be still open. How should the parameters describing the residual error be estimated? How should covariates be handled? For SNP, these questions have been broadly addressed.
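To make the nonparametric idea concrete, here is a minimal, generic sketch in the spirit of NPML-type estimators (not the actual algorithms compared in the study): the mixing distribution is restricted to a fixed grid of candidate support points, and EM updates the grid weights to maximize the likelihood; grid, kernel and data model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a bimodal "individual parameter" distribution observed
# through a normal measurement kernel (sd assumed known here).
y = np.concatenate([rng.normal(-1.5, 0.5, 60), rng.normal(1.0, 0.5, 140)])

grid = np.linspace(-4.0, 4.0, 81)            # candidate support points
w = np.full(grid.size, 1.0 / grid.size)      # initial mixing weights

def norm_pdf(x, mu, sd=0.5):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

lik = norm_pdf(y[:, None], grid[None, :])    # p(y_i | support point k)
for _ in range(500):                          # EM iterations
    post = lik * w                            # E-step: posteriors over grid
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)                     # M-step: updated weights

# The estimated mixing distribution puts roughly 30% of its mass on the
# lower mode, with no normality assumption anywhere.
print(f"mass in lower mode: {w[grid < 0].sum():.2f}")
```

A discrete maximum-likelihood solution of this kind typically concentrates on a small number of support points, which is what makes sub-populations directly visible in the estimated distribution.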
(ii) Ease and time of computation. Implementation was easy for NPNM and more demanding for the other NP methods. No NP method failed to complete estimation on any dataset. Computation times increased with the sample size and the number of random effects. The average computation time of NPNM ranged from less than 1 minute per dataset (scenario 1) to 14 minutes per dataset (scenario 4). Computation times of the other methods were very sensitive to computational settings (initializations, stopping criteria, etc.). With our settings, SNP was the fastest method (less than 5 minutes per dataset), NPEM was the slowest (up to more than 3 hours per dataset), and NPML was slightly slower than NPNM (29 minutes per dataset on average in scenario 4).
(iii) Detection of the bimodality. To assess the ability to detect the simulated sub-population, we graphically inspected the distribution of clearance (scenarios 1, 2 and 3) or drug effect (scenario 4) estimated with the NP methods. More precisely, we plotted this distribution for all datasets (for each scenario and NP method). Compared to the true distribution, these graphs allow evaluation of the bias and variability of the methods. In scenario 1, the bimodality was generally detected by all the methods (parametric inspection of EBEs and NP). In scenario 2, it was difficult to suspect the bimodality from the inspection of EBEs, whereas the NP methods generally allowed its detection. In scenario 3, the bimodality was better described by NPEM and SNP than by NPNM and NPML. In scenario 4, the sub-population of non-responders was never detected by the inspection of EBEs; only NPEM and SNP clearly detected it on some datasets. The variability of the NP methods was always larger than that of the parametric inspection of EBEs.
Conclusions: Based on our extensive bibliographic and simulation studies, we can give some recommendations for the use of NP methods. The inspection of EBEs seems sufficient to detect departures from normality when the EBEs are reliable, as in phase I clinical trials. However, when individual information is sparse, it can be misleading, even for a very simple PK model such as a one-compartment model with intravenous bolus administration. In that case, NP methods should be preferred over classical parametric methods. With an easy implementation and reasonable computation times, NP-NONMEM seems suitable for datasets with moderate ETA-shrinkage (<50%). However, for datasets with high ETA-shrinkage (>50%), only the more demanding NP methods (such as SNP or NPEM) seem satisfactory.