
Individual Model Averaging to increase robustness in drug effect estimation

Estelle Chasseloup (1), Xinyi Li (1), Adrien Tessier (2), Mats O. Karlsson (1)

(1) Dept. of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden. (2) Division of Quantitative Pharmacology, Institut de Recherches Internationales Servier, Suresnes, France.

Objectives

Non-linear mixed effects models (NLMEM) have proven helpful during drug development to characterize drug effects and inform decisions [1-3]. However, model misspecification typically has consequences: (i) placebo model misspecification results in a biased drug effect estimate and an inflated type I error, while (ii) drug model misspecification leads to a biased drug effect model and a loss of power to detect the drug effect. Such perceived lack of robustness to model misspecification may thwart the use of model-based approaches.
The purpose of this work was to develop a new NLMEM approach called Individual Model Averaging (IMA) [4], using mixture models, to overcome or attenuate these problems. The focus was on balanced two-arm designs, but unbalanced designs and dose-response were also investigated.

Methods

Approaches description

The standard NLMEM approach (STD) models and tests for drug effect using the likelihood ratio test (LRT) to discriminate between a reduced model (H0: no drug model) and a full model (H1: drug model for treated subjects only), where all subjects additionally have a placebo submodel. In IMA, all subjects have, through a mixture feature, a probability of being described by the "drug" model. This probability θ is either estimated as a function of the patient allocation (H1: full model) or not (H0: reduced model), and the LRT is again used to accept H1 or H0.
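As an illustration only, the following is a minimal Python sketch of the two building blocks above: the LRT decision on the drop in objective function value (OFV, -2 log-likelihood) and the IMA individual mixture likelihood in which each subject is a mixture of a "placebo only" and a "placebo + drug" submodel. The analyses in this work were carried out with NLMEM software using mixture models; all function names, arguments, and the toy likelihood interface below are assumptions for illustration.

import numpy as np
from scipy.stats import chi2

def lrt_rejects_h0(ofv_reduced, ofv_full, df=1, alpha=0.05):
    # LRT on the OFV (-2 log-likelihood) drop between reduced and full model
    return (ofv_reduced - ofv_full) > chi2.ppf(1 - alpha, df)

def ima_individual_loglik(y_i, theta_i, loglik_placebo, loglik_placebo_drug):
    # IMA: subject i is described by the placebo-only submodel with
    # probability (1 - theta_i) and by the placebo + drug submodel with
    # probability theta_i; the individual likelihoods are mixed on the
    # likelihood scale (logaddexp is used for numerical stability)
    return np.logaddexp(np.log(1.0 - theta_i) + loglik_placebo(y_i),
                        np.log(theta_i) + loglik_placebo_drug(y_i))

In the full model, theta_i would be linked to the treatment allocation, whereas in the reduced model it would not, and the total OFV is -2 times the sum of the individual log-likelihoods.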

Data

Three real placebo data sets were used: ADAS-cog data from 800 patients [5], Likert pain score data from 230 patients [6,7], and daily seizure count data from 500 patients [8].
To explore STD and IMA properties in the presence of a drug effect, the observed ADAS-cog data were modified by adding simulated values from a time-dependent exponential model with 30% inter-individual variability (IIV). To explore their properties in the presence of dose-response, an Emax model was used to simulate studies randomised 1:1:1:1 with three treated arms (20%, 40%, and 80% of a maximum effect of 10 points) and a placebo arm.
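As a hedged illustration of this simulation step, the sketch below adds a time-dependent exponential drug effect with 30% log-normal IIV to observed placebo scores, with the dose arms expressed as fractions (0.2, 0.4, 0.8) of a maximum effect of 10 points as stated above; the onset rate, the sign convention, and the variable names are assumptions, not the exact models used in the study.

import numpy as np

rng = np.random.default_rng(1)

def simulated_drug_effect(time, frac_emax, emax=10.0, rate=0.05, iiv_cv=0.30):
    # time-dependent exponential effect with log-normal IIV (30% CV) on the
    # individual maximum effect; 'rate' is an assumed onset rate constant
    eta = rng.normal(0.0, iiv_cv)
    effect_i = frac_emax * emax * np.exp(eta)
    return effect_i * (1.0 - np.exp(-rate * time))

# e.g. modify the observed placebo ADAS-cog values for a simulated treated arm:
# adascog_treated = adascog_placebo + simulated_drug_effect(time_weeks, 0.4)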

Comparison

Using the data sets with repeated randomisations of the design features (N=1000), IMA and STD were compared in terms of type I error rate (alpha=0.05) or power [9,10], depending on the presence of a drug effect in the scenario, and bias in the estimated drug effect. Both two-arm comparisons (balanced or unbalanced) and dose-response analyses were considered. For STD, it was additionally investigated whether "standard" model averaging [11] could improve the type I error results. The robustness of the methods towards model misspecification was tested with various combinations of placebo and drug models, with or without IIV.
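As a sketch only, assuming each of the N=1000 randomisation replicates returns the OFV drop between the reduced and full model, the empirical rejection rate of the LRT could be computed as below; with no simulated drug effect this rate is the type I error, and with a drug effect it is the power.

import numpy as np
from scipy.stats import chi2

def lrt_rejection_rate(delta_ofv, df=1, alpha=0.05):
    # fraction of replicates in which the LRT rejects H0 at level alpha
    delta_ofv = np.asarray(delta_ofv, dtype=float)
    critical = chi2.ppf(1 - alpha, df)   # 3.84 for df=1, alpha=0.05
    return float(np.mean(delta_ofv > critical))

# e.g. lrt_rejection_rate(delta_ofv_no_drug) should be close to 0.05
# when the type I error is controlled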

Results

Data without drug effect

STD had an inflated type I error (median values across all scenarios were 26%, 97% and 45% for the ADAS-cog, pain score and seizure count data), associated for the majority of the scenarios with considerable bias in the drug effect estimate. In contrast, IMA had a controlled type I error (3.5%, 5.0% and 5.0%) and unbiased drug effect estimates regardless of the placebo-drug model combination tried. Similar trends were observed with unbalanced designs and dose-response: uncontrolled type I error rates and bias in the drug effect for STD versus controlled type I error and no bias in the drug effect for IMA.
For STD with model averaging across both placebo and drug models, a biased and significant treatment effect would have been concluded for all three data sets even though no drug effect was present.

Data with drug effect 

When using the ADAS-cog data modified by the addition of a drug effect, IMA had higher power than STD, whose power was pulled down by the empirical cut-off used to correct for its inflated type I error. When adding a dose-response to the data (typical values of 1.7, 3.5, and 6.9 points at tlast), IMA had no appreciable bias in the drug effect estimates (typical values of 1.4, 3.0, and 6.9) in contrast to STD (typical values of 0.3, 0.6, and 1.3), but both had high power (>95%).

Conclusions

With STD, any feature of the data not described by the placebo model is likely to make a new model feature significant, leading to bias in drug effects and inflated type I error rates when the drug models provide a new degree of freedom to describe the data. IMA does not suffer from this issue since the placebo and the drug model are fitted together to the whole data set, both in the reduced and the full model, which makes the approach robust towards model misspecification.



References:
[1] Lalonde, R. L. et al. Model-based Drug Development. Clinical Pharmacology & Therapeutics 82, 21–32 (2007).
[2] Milligan, P. A. et al. Model-Based Drug Development: A Rational Approach to Efficiently Accelerate Drug Development. Clinical Pharmacology & Therapeutics 93, 502–514 (2013).
[3] Marshall, S. et al. Model-Informed Drug Discovery and Development: Current Industry Good Practice and Regulatory Expectations and Future Perspectives. CPT Pharmacometrics Syst Pharmacol 8, 87–96 (2019).
[4] Chasseloup, E., Tessier, A. & Karlsson, M. O. Assessing Treatment Effects with Pharmacometric Models: A New Method that Addresses Problems with Standard Assessments. AAPS J 23, 63 (2021).
[5] Ito, K. et al. Disease progression model for cognitive deterioration from Alzheimer’s Disease Neuroimaging Initiative database. Alzheimers Dement 7, 151–160 (2011).
[6] Plan, E. L., Elshoff, J.-P., Stockis, A., Sargentini-Maier, M. L. & Karlsson, M. O. Likert pain score modeling: a Markov integer model and an autoregressive continuous model. Clin. Pharmacol. Ther. 91, 820–828 (2012).
[7] Schindler, E. & Karlsson, M. O. A Minimal Continuous-Time Markov Pharmacometric Model. AAPS J 19, 1424–1435 (2017).
[8] Trocóniz, I. F., Plan, E. L., Miller, R. & Karlsson, M. O. Modelling overdispersion and Markovian features in count data. J Pharmacokinet Pharmacodyn 36, 461–477 (2009).
[9] Vong, C., Bergstrand, M., Nyberg, J. & Karlsson, M. O. Rapid sample size calculations for a defined likelihood ratio test-based power in mixed-effects models. AAPS J 14, 176–186 (2012).
[10] Ueckert, S., Karlsson, M. O. & Hooker, A. C. Accelerating Monte Carlo power studies through parametric power estimation. J Pharmacokinet Pharmacodyn 43, 223–234 (2016).
[11] Aoki, Y., Röshammar, D., Hamrén, B. & Hooker, A. C. Model selection and averaging of nonlinear mixed-effect models for robust phase III dose selection. J Pharmacokinet Pharmacodyn 44, 581–597 (2017).


Reference: PAGE 29 (2021) Abstr 9830 [www.page-meeting.org/?abstract=9830]
Poster: Methodology - Other topics