IV-75 Adrien Tessier

Use of mixture models in pharmacometric model-based analysis of confirmatory trials: part I – simulation study evaluating type I error and power of proof-of-concept trials

Adrien Tessier (1), Estelle Chasseloup (2), Mats Karlsson (2)

(1) Pharmacometrics and Clinical Pharmacokinetics division, Servier, Suresnes, France (2) Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden

Objectives: Proof-of-concept (POC) studies are designed to eliminate inefficient drugs, so the statistics underlying such decisions are critical. Pharmacometric approaches based on nonlinear mixed effects models achieve higher power to detect a drug effect compared to traditional statistical hypothesis tests [1], but drawbacks stem from the model building process, where multiple testing or model misspecification can result in an inflated type I error. This error is of major concern as it can lead to the continued development of inefficient drugs. To use pharmacometric models as the primary analysis in confirmatory trials such as POC studies, approaches that better control the type I error are required. The aim was to compare, using simulations, a standard modelling approach to the use of a mixture model with respect to type I error control and power to detect a drug effect in a typical POC design. A treatment-response model was used as a motivating example.

Methods: One POC design was simulated in which patients received placebo or active treatment (30 patients per arm, 1:1 randomisation). Response was observed as a continuous variable at pre-dose (0) and at times 1, 2 and 3 after administration. Two datasets were simulated: (i) base, using a baseline parameter and a placebo effect model (an asymptotic progression described by an exponential model); (ii) full, adding a treatment effect model to the base model. Random effects were simulated for baseline and treatment effect through exponential models, and for maximal placebo effect through a multiplicative model allowing negative values of the placebo effect. A covariance was simulated between the random effects of baseline and maximal placebo effect. The residual error was additive. 500 replicates of each dataset were generated and fitted using the same model as used for simulation (True model) or models including different misspecifications (False models): (i) direct placebo effect, (ii) time-proportional placebo effect, (iii) omission of the covariance between the random effects of baseline and maximal placebo effect, (iv) time-proportional treatment effect, and (v) an additional covariance between the random effects of baseline and treatment effect. For each approach, two nested models were contrasted. The standard modelling approach contrasted models without (base) and with (full) the treatment effect. The other approach used a mixture model in which each patient's data were described by either the base (placebo) or the full (placebo + treatment) model. The two contrasted models were: (i) a model with equal probability for the two mixture components (as per randomisation), and (ii) a model in which the probability of each mixture component was estimated as a function of allocation arm as a covariate. A likelihood ratio test (LRT) was then performed to compare the nested models, using the nominal cut-off from the chi-squared distribution, or a cut-off calibrated through a randomization test [2].
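As an illustrative sketch (function names and inputs are hypothetical, not from the study), the LRT decision for one replicate compares the drop in objective function value (OFV, proportional to −2 log-likelihood in NONMEM) between the nested models against either the nominal chi-squared critical value or an empirical cut-off calibrated from OFV drops obtained after permuting treatment allocations, as in a randomization test:

```python
import numpy as np
from scipy.stats import chi2

def lrt_significant(ofv_reduced, ofv_full, df=1, alpha=0.05):
    """Nominal LRT: is the OFV drop between nested models larger
    than the chi-squared critical value at level alpha?"""
    delta_ofv = ofv_reduced - ofv_full
    return bool(delta_ofv > chi2.ppf(1 - alpha, df))

def calibrated_cutoff(permuted_delta_ofvs, alpha=0.05):
    """Randomization-test calibration: empirical (1 - alpha) quantile
    of OFV drops from fits with permuted treatment allocation,
    where by construction no true drug effect exists."""
    return float(np.quantile(permuted_delta_ofvs, 1 - alpha))
```

With the nominal cut-off, a drop of 3.84 OFV units corresponds to p = 0.05 for one degree of freedom; the calibrated cut-off replaces this with the observed null distribution, which is the mechanism the abstract uses to correct the inflated or deflated nominal type I errors.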
The fraction of the 500 base and full datasets in which the LRT was significant estimated the type I error and power, respectively. NONMEM 7.4.3 and PsN 4.8.8 were used for simulation, estimation with the FOCE method, and the randomization test.
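The error-rate estimates are simply the fraction of significant replicates; a minimal sketch (with a made-up vector of per-replicate LRT outcomes) also attaches the binomial standard error, which indicates the Monte Carlo uncertainty of a rate estimated from 500 replicates:

```python
import numpy as np

def significance_rate(significant, n_replicates=500):
    """Fraction of replicates with a significant LRT: on base
    datasets this estimates the type I error, on full datasets
    the power. Returns (rate, binomial standard error)."""
    rate = np.sum(significant) / n_replicates
    se = np.sqrt(rate * (1.0 - rate) / n_replicates)
    return float(rate), float(se)
```

At a true 5% error rate, the standard error with 500 replicates is about 1%, so observed rates such as 4.87% or 5.24% are compatible with nominal control, while 57% clearly is not.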

Results: Overall, the type I error for the nominal cut-off was better controlled at the nominal 5% with the mixture model approach. Type I errors for the True and False models 1-5 were 2.03%, 2.35%, 57%, 1.24%, 1.49% and 2.58% (standard) versus 4.87%, 5.24%, 4.95%, 7.11%, 5.56% and 7.2% (mixture model). The power to detect a true drug effect was higher for the mixture model in 5 out of 6 scenarios when using the nominal chi-squared cut-off value, and higher for the standard approach in 5 out of 6 scenarios when a randomization-calibrated cut-off was used.

Conclusions: The sometimes very low type I errors for the standard approach are likely linked to the random effect variance not constituting a full degree of freedom, as shown previously [3]. The inflated type I error of the standard approach for False model 2 is likely linked to the placebo model being unable to appropriately capture the time-course of change from baseline. The use of mixture models to evaluate the treatment effect in POC studies achieved better control of the type I error compared to a standard modelling approach. After calibration, it resulted in lower power than the standard approach, likely related to one extra parameter being estimated. The results here agree with similar evaluations on real data [4].

References:
[1] Karlsson KE et al. (2013). Comparisons of Analysis Methods for Proof-of-Concept Trials. CPT Pharmacomet. Syst. Pharmacol. 2(1), e23.
[2] Wählby U et al. (2001). Assessment of Actual Significance Levels for Covariate Effects in NONMEM. J. Pharmacokinet. Pharmacodyn. 28(3), 231–252.
[3] Wählby U et al. (2004). Evaluation of Type I Error Rates When Modeling Ordered Categorical Data in NONMEM. J. Pharmacokinet. Pharmacodyn. 31(1), 61–74.
[4] Chasseloup E et al. (2019). Pharmacometric model-based analysis of POC trials: using mixture models to control the type I error – Part II: Applications. PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe. PAGE2019.

Reference: PAGE 28 (2019) Abstr 9233 [www.page-meeting.org/?abstract=9233]

Poster: Methodology - New Modelling Approaches
