Daniel Kaschek (1), Daniel Lill (1,2), Henning Schmidt (1)
(1) IntiQuan GmbH, Basel, Switzerland, (2) University of Freiburg, Freiburg, Germany
Introduction: The field of pharmacometrics has had decades to evolve and establish itself as an indispensable component of drug development in the pharmaceutical industry. Part of this development is the emergence of specific, tailored software tools to simulate pharmacometric models and infer population parameters from patient data sets. Because the models consist of only a few equations and the complexity lies within the data (inter-individual variability, covariates, etc.), efforts have been undertaken to develop software that handles mixed-effects estimation and hence the complexity of the data.
At the same time, another discipline has gained increasing application in the pharmaceutical industry, especially when it comes to the translation of study results across species: physiologically based pharmacokinetic (PBPK) modeling. PBPK models can comprise hundreds of states and parameters which are typically not estimated but are derived from first principles or "directly" measured. Accordingly, the developed software tools focus on simulation rather than parameter estimation.
With the appearance of Quantitative Systems Pharmacology (QSP) in recent years, yet another problem class has emerged. Suddenly, models of the size of PBPK models were supposed to be informed by patient-level clinical data as typically used for pharmacometric population modeling. Although the needs of QSP modeling differ from those of typical pharmacometrics or PBPK work, industry has largely approached QSP with the same tools.
Objectives:
- To close the gap between mechanistic and population modeling by introducing a targeted parameter estimation approach implemented as an industry-ready software package
- To illustrate the applicability of our individual-level QSP approach with a realistic example
Methods: To qualify for parameter estimation in mechanistic modeling, the underlying optimization method needs to be both robust and precise. To this end, we employ a deterministic trust-region algorithm leveraging first- and second-order derivatives of the log-likelihood objective function. To ensure the required precision, derivatives are computed using analytically derived sensitivity equations [1]. The availability of the gradient and Hessian of the objective function opens the door to more sophisticated statistical methods such as likelihood profiling [2]. In our framework, inter-individual parameter variability is implemented via a penalized maximum likelihood approach, equivalent to maximum a posteriori estimation in the Bayesian setting. Compared to the NLME objective function, evaluation of the penalized log-likelihood is computationally cheap while still providing reasonable individual parameter estimates. This is illustrated with a realistic example.
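The penalized-likelihood idea can be sketched in a few lines. The following toy example is purely illustrative and not the poster's implementation: a hypothetical one-compartment decay model with made-up variance values, where for one subject the objective adds a quadratic penalty eta^2/omega^2 to the scaled residual sum of squares. This is exactly MAP estimation of the individual deviation eta under a Gaussian prior eta ~ N(0, omega^2).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-compartment decay model: y(t) = exp(-k_i * t),
# with individual rate k_i = k_pop * exp(eta_i). All values are illustrative.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
k_pop, omega2, sigma2 = 0.3, 0.1, 0.01  # population rate, IIV variance, residual variance

# Simulate one subject's data.
rng = np.random.default_rng(1)
eta_true = rng.normal(0.0, np.sqrt(omega2))
y_obs = np.exp(-k_pop * np.exp(eta_true) * t) + rng.normal(0.0, np.sqrt(sigma2), t.size)

def penalized_nll(eta):
    """-2 log-likelihood of one subject plus quadratic penalty on eta,
    equivalent to MAP estimation with prior eta ~ N(0, omega2)."""
    eta = np.atleast_1d(eta)[0]
    pred = np.exp(-k_pop * np.exp(eta) * t)
    return np.sum((y_obs - pred) ** 2) / sigma2 + eta ** 2 / omega2

# Minimize with scipy's trust-region method (derivatives here come from
# finite differences; the poster uses analytic sensitivity equations instead).
fit = minimize(penalized_nll, x0=[0.0], method="trust-constr")
eta_hat = fit.x[0]
```

Because each subject's eta is optimized against a fixed quadratic penalty rather than integrated out, the per-subject problems decouple and stay cheap compared to evaluating a full NLME marginal likelihood.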
Results: The approach was applied to a published model of LDL cholesterol lowering [3] comprising 8 differential equations. Individual-level data had been published for 18 subjects. In a first step, key parameters potentially identifiable from the observed data were determined using two methods: (1) standard sensitivity analysis, and (2) a step-wise parameter modeling (SPM) algorithm specifically developed for this application. Subsequently, typical and individual values of the selected parameters were estimated for all subjects using our penalized log-likelihood approach. Identifiability of the parameters was confirmed, and confidence intervals were derived using the profile likelihood method. The reduction of the number of estimated parameters by the SPM algorithm made it possible to apply conventional NLME software as well. Consequently, the parameter distributions could be compared with the corresponding distributions of SAEM estimates obtained with Monolix and were found to be compatible. Accordingly, predictions based on sampling from the parameter distributions were compared between our approach and Monolix and were found to share nearly identical quantiles.
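A profile-likelihood confidence interval can be sketched as follows. The example uses an illustrative mono-exponential model with assumed values, not the LDL model from [3]: the parameter of interest is fixed on a grid, the nuisance parameter is re-optimized at each grid point, and the 95% interval is read off where the profile crosses the chi-square threshold.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative data from y = a * exp(-k * t), true a = 2.0, k = 0.4 (assumed values).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-0.4 * t) + rng.normal(0.0, 0.02, t.size)
sigma2 = 0.02 ** 2

def profile(k):
    """-2 log-likelihood at fixed k, with the nuisance parameter a
    re-optimized (closed-form least-squares solution for a linear amplitude)."""
    basis = np.exp(-k * t)
    a_hat = (y @ basis) / (basis @ basis)
    return np.sum((y - a_hat * basis) ** 2) / sigma2

# Evaluate the profile on a grid and threshold it at chi2(0.95, df=1).
k_grid = np.linspace(0.2, 0.6, 201)
prof = np.array([profile(k) for k in k_grid])
threshold = prof.min() + chi2.ppf(0.95, df=1)
inside = k_grid[prof <= threshold]
ci = (inside.min(), inside.max())  # approximate 95% confidence interval for k
```

In practice the profile is traced by full re-optimization of all remaining parameters at each step; the closed-form nuisance fit above is a simplification possible only for this toy model.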
Conclusion: The presented parameter estimation approach has proven to be highly robust and efficient. It is applicable in situations where established NLME software is either too slow or incapable of providing parameter estimates at all. At the same time, the presented approach closes the gap to individual-level parameter estimation, as illustrated by the example. Endowed with these features, the presented implementation crosses the border from mechanistic modeling to modeling of populations of virtual patients.
References:
[1] A. Raue, M. Schilling, J. Bachmann, A. Matteson, M. Schelker, D. Kaschek, S. Hug, C. Kreutz, B.D. Harms, F.J. Theis, U. Klingmüller, J. Timmer. Lessons learned from quantitative dynamical modeling in systems biology. PLoS ONE 8, 2013
[2] C. Kreutz, A. Raue, D. Kaschek, J. Timmer. Profile likelihood in systems biology. FEBS Journal 280, 2013, 2564-2571
[3] K. Gadkar, N. Budha, A. Baruch, J.D. Davis, P. Fielder, S. Ramanujan. A mechanistic systems pharmacology model for prediction of LDL cholesterol lowering by PCSK9 antagonism in human dyslipidemic populations. CPT Pharmacometrics Syst Pharmacol 3(11), 2014, 1-9
Reference: PAGE 29 (2021) Abstr 9716 [www.page-meeting.org/?abstract=9716]
Poster: Methodology - New Modelling Approaches