
Comparison of Bayesian Functionality between Pumas and NONMEM

Vaibhav Dixit[1], Andreas Noack[2], Simon Byrne[2], Joakim Nyberg[3], Christopher Rackauckas[4,5], Vijay Ivaturi[5]

[1] Indian Institute of Technology (B.H.U.), Varanasi, India; [2] Julia Computing; [3] Pharmetheus, Uppsala, Sweden; [4] Massachusetts Institute of Technology, Cambridge, MA, USA; [5] Center for Translational Medicine, University of Maryland School of Pharmacy, Baltimore, MD, USA

Objectives:
To showcase the Bayesian inference functionality in Pumas and compare it with NONMEM.

Methods:
Pumas performs Bayesian inference by leveraging the MCMC backend of Turing.jl [1], one of the many probabilistic programming languages (PPLs) available in Julia. Pumas uses the No-U-Turn Sampler (NUTS) implemented in the AdvancedHMC.jl [2] package, which serves as the MCMC backend for Turing.jl. NUTS is a gradient-based MCMC sampling method, and Pumas computes gradients of the objective function in full precision with forward-mode automatic differentiation provided by the ForwardDiff.jl package.
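To illustrate the underlying machinery (this is generic ForwardDiff.jl usage, not Pumas-specific code, and the toy function is purely illustrative), forward-mode automatic differentiation computes gradients exactly to floating-point precision, with no finite-difference step size to tune:

```julia
using ForwardDiff

# Toy negative log-density; a stand-in for the objective whose
# gradient NUTS needs at every leapfrog step.
neglogp(x) = 0.5 * sum(abs2, x)

# Exact (to floating point) gradient via forward-mode AD
g = ForwardDiff.gradient(neglogp, [1.0, 2.0])   # [1.0, 2.0]
```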

In the @param block within the model specification, it is possible to assign distributions to the parameters instead of just specifying their domains. When estimating the model with MCMC, these distributions serve as prior distributions for the parameters. For example, a variance parameter could be assigned an inverse-gamma prior distribution with shape 0.6 and scale 0.2 by setting

σ² ~ InverseGamma(0.6, 0.2)

Any distribution from the Distributions.jl [3] package, or a custom distribution that follows the same interface, can be used as a prior distribution. This allows for flexibility in experimenting with alternative prior distributions. If a parameter is assigned only a domain rather than a distribution, it is implicitly given a flat (improper) prior.
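A sketch of how this looks inside the @param block (the parameter name tvcl is hypothetical, chosen only for illustration):

```julia
@param begin
    # Inverse-gamma prior on a variance parameter
    σ² ~ InverseGamma(0.6, 0.2)
    # Domain only, no distribution: implicitly a flat (improper) prior
    tvcl ∈ RealDomain(lower = 0.0, init = 1.0)
end
```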

All parameters are assumed to be independent, meaning that the definitions

σ²_prop ~ InverseGamma(0.6, 0.2)

σ²_add ~ InverseGamma(0.6, 0.2)

will result in independent priors for σ²_prop and σ²_add. A joint distribution for the variance parameters can instead be specified by using a single matrix-valued parameter for the variances. Scaling the target matrix by (ν + 3) places the prior mode of the 2×2 inverse-Wishart at the target matrix, since the mode of an inverse-Wishart is the scale matrix divided by ν + p + 1 with dimension p = 2:

Σ ~ InverseWishart(ν, [0.125 0.1; 0.1 0.125]*(ν + 3))

Typically, the fixed-effects parameters are assigned a constrained multivariate normal distribution to allow for correlation between the parameters and to avoid extreme parameter values. An example could look like

θ ~ Constrained(MvNormal([1.9, 0.0781, 0.0463, 1.5, 0.4],
                         Diagonal([9025, 15.25, 5.36, 5625, 400])),
                lower = [0.1, 0.008, 0.0004, 0.1, 0.0001],
                upper = [5, 0.5, 0.09, 5, 1.5],
                init = [1.9, 0.0781, 0.0463, 1.5, 0.4])

It is possible to use the maximum likelihood estimation methods FO, FOCE(I), and Laplace(I) for models with prior distributions. In that case, the prior distribution is only used for specifying the domain.
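Estimation is invoked through the same fit interface for both maximum likelihood and MCMC. A minimal sketch, assuming a model mdl, a population pop, and initial parameters init (these names, and the BayesMCMC keyword arguments shown, are assumptions for illustration):

```julia
# Maximum likelihood fit; prior distributions only define the domains
ml_fit = fit(mdl, pop, init, Pumas.FOCEI())

# Bayesian fit with NUTS through the same interface
b_fit = fit(mdl, pop, init, Pumas.BayesMCMC(nsamples = 2000, nadapts = 1000))
```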

Results:
We showcase a comparison of Bayesian estimation in Pumas and NONMEM using the commonly used theophylline dataset. We also showcase the parameter summarization and posterior plotting utilities that are readily available in Pumas.

Conclusions:
Important features of Bayesian inference in Pumas:

  • Model specification is identical to that used for the maximum likelihood methods and SAEM
  • A future release will use core Julia features for chain-level parallelism and GPU acceleration

References:
[1] https://turing.ml/dev/
[2] https://github.com/TuringLang/AdvancedHMC.jl
[3] https://github.com/JuliaStats/Distributions.jl

Reference: PAGE () Abstr 9272 [www.page-meeting.org/?abstract=9272]

Poster: Methodology - Estimation Methods
