**Multivariate Exact Discrepancy: a new tool for PK/PD model evaluation**

Sarah Baklouti (1,2), Emmanuelle Comets (3,4), Peggy Gandia (1,2), Didier Concordet (2)

(1) Laboratoire de Pharmacocinétique et Toxicologie, CHU de Toulouse, Toulouse, France (2) INTHERES, Université de Toulouse, INRAE, ENVT, Toulouse, France (3) Université Paris Cité and Université Sorbonne Paris Nord, Inserm, IAME, F-75018 Paris, France (4) Université de Rennes, Inserm, EHESP, Irset - UMR_S 1085, F-35000 Rennes, France

**Objectives:**

The criteria used to validate a pharmacokinetic model can be grouped into two families. The first family comprises metrics based on the prediction of individual pharmacokinetic parameters. These metrics may not always be appropriate, especially when the predictions suffer from shrinkage [1]. The second family includes methods that compare the distributions provided by the data with the distributions imposed by the model. The most well-known methods are the Visual Predictive Check (VPC) and Normalised Prediction Distribution Errors (NPDE). Despite their usefulness, these methods have some limitations. Indeed, the VPC does not account for the dependence between concentrations measured in the same patient, while NPDE uses decorrelated concentrations, but decorrelation does not imply independence [2].

The aim of our work is to propose a method that accounts for the dependence between concentrations when evaluating a model during its development.

**Methods:**

The first step of the method consists in simulating the concentrations of “clones” for each individual (same sampling times, same covariate values and same dosage regimen) using the tested model. Then, for each individual, the probability density function (PDF) of the simulated joint concentration vectors is computed.

The second step is 1) to determine the smallest level set of the PDF containing the observed concentration vector and 2) to compute the probability that a new concentration vector belongs to this level set. The resulting probability can be interpreted as the probability of observing the individual’s concentration vector.

If the model describes the data well, the probabilities computed for the individuals should be drawn from a uniform distribution on [0, 1].

The third step involves verifying the uniformity of the probability distribution.
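The three steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the joint PDF of the simulated clones is estimated here with a Gaussian kernel density estimate (the abstract does not specify the estimator), and the level-set probability is obtained by Monte Carlo over the clones, since the smallest level set containing the observation has probability P(f(X) ≥ f(x_obs)).

```python
import numpy as np
from scipy import stats

def individual_probability(obs, clones):
    """Probability that a new concentration vector falls inside the
    smallest PDF level set containing the observed vector `obs`.

    obs    : (d,) observed concentration vector for one individual
    clones : (n, d) concentration vectors simulated under the tested model
             for clones of this individual (same times, doses, covariates)
    """
    # Step 1: estimate the joint PDF of the simulated concentration vectors
    # (Gaussian KDE is an assumption made for this sketch)
    kde = stats.gaussian_kde(clones.T)
    f_obs = kde(obs)[0]        # density at the observed vector
    f_clones = kde(clones.T)   # density at each simulated clone
    # Step 2: P(f(X) >= f(obs)) = probability of the smallest level set
    # containing the observation, estimated by Monte Carlo
    return np.mean(f_clones >= f_obs)

# Toy check: when the tested model is the true model, the per-individual
# probabilities should be approximately uniform on [0, 1]
rng = np.random.default_rng(0)
d, n_clones, n_ind = 2, 1000, 200
probs = []
for _ in range(n_ind):
    obs = rng.standard_normal(d)                  # "observed" vector
    clones = rng.standard_normal((n_clones, d))   # clones from the same model
    probs.append(individual_probability(obs, clones))

# Step 3: test the uniformity of the per-individual probabilities
ks_stat, p_value = stats.kstest(probs, "uniform")
```

In this sketch the uniformity check uses a Kolmogorov-Smirnov test; any goodness-of-fit test for uniformity could be substituted.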

We conducted two types of simulations to evaluate the performances of our method in detecting a misspecification of the structural model.

In the first type, we evaluated the type I error, i.e. the probability of wrongly rejecting a correct model. We generated data sets of 2 observed concentrations for 200 individuals using one-compartment kinetics with IV administration. The residual error was assumed to be proportional with 20% variability, and the two parameters, the maximal concentration and the elimination rate constant, had lognormal distributions with variances of 10%. The model tested was the one used to simulate the observed concentrations. We used our method and NPDE to evaluate the percentage of rejected models.
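This simulation design can be sketched as follows. The error model (proportional, 20%), the random-effect variances (10%), and the sample sizes follow the text; the typical parameter values, the two sampling times, and the initial-concentration parameterisation are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ind = 200
times = np.array([1.0, 6.0])   # two sampling times per individual (assumed)

# Typical values (hypothetical): C0 = maximal concentration, ke = elimination rate
C0_pop, ke_pop = 10.0, 0.2
omega2 = 0.10                  # variance of the lognormal random effects
sigma = 0.20                   # proportional residual error (20%)

# Lognormal inter-individual variability on both parameters
C0 = C0_pop * np.exp(rng.normal(0.0, np.sqrt(omega2), n_ind))
ke = ke_pop * np.exp(rng.normal(0.0, np.sqrt(omega2), n_ind))

# One-compartment IV kinetics: C(t) = C0 * exp(-ke * t)
pred = C0[:, None] * np.exp(-ke[:, None] * times[None, :])

# Proportional residual error
obs = pred * (1.0 + sigma * rng.standard_normal((n_ind, 2)))
```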

In the second type, we generated data sets of 2 observed concentrations for 200 individuals using two-compartment kinetics with IV administration. We assumed a proportional residual error with 20% variability and lognormal distributions with variances of 10% for the four parameters. We then constructed the one-compartment model with IV administration that best fitted these data. Its residual error was proportional with 48% variability, and its two parameters had lognormal distributions with variances of 14% and 8.8% for the maximal concentration and the elimination rate constant, respectively. We performed power tests to determine the percentage of rejected models using our method and NPDE.
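The mismatch between the simulation model and the fitted model can be illustrated with the biexponential (macro-constant) form of two-compartment IV kinetics; all numerical values below are hypothetical and serve only to show the shape difference.

```python
import numpy as np

def two_cpt_iv(t, A, alpha, B, beta):
    """Two-compartment IV bolus kinetics in macro-constant form:
    C(t) = A*exp(-alpha*t) + B*exp(-beta*t). Values are hypothetical."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.linspace(0.0, 24.0, 49)
c_true = two_cpt_iv(t, A=8.0, alpha=1.5, B=2.0, beta=0.1)  # simulation model
c_mono = 10.0 * np.exp(-0.45 * t)                          # misspecified fit
```

A one-compartment fit cannot reproduce both the fast distribution phase and the slow terminal phase of the biexponential curve, which is the structural misspecification the power test targets.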

**Results:**

In the first simulation, with the nominal level set at 5%, the observed type I error was 4.8% for our method and 2.5% for NPDE.

In the second simulation, at the same nominal type I error, our method achieved a detection power of 100% in identifying the model misspecification, whereas NPDE had a detection power of only 51.5%.

These results show that our method controls the type I error and outperforms NPDE in detecting structural model misspecification.

**Conclusions:**

We propose a straightforward method for evaluating models during their development. In our preliminary results, this method demonstrated greater power than NPDE in detecting model misspecification. However, given that these results are preliminary, further evaluation on a larger scale is necessary. For example, we would like to test the ability of our method to detect other model misspecifications, such as errors in the choice of covariate models. Additionally, this method could also be used to determine whether a given model is suitable for therapeutic drug monitoring in a patient.

**References:**

[1] Savic RM, Karlsson MO. 2009. Importance of shrinkage in empirical Bayes estimates for diagnostics: problems and solutions. AAPS J 11:558–569.

[2] Comets E, Brendel K, Mentré F. 2010. Model evaluation in nonlinear mixed effect models, with applications to pharmacokinetics. Journal de la Société Française de Statistique 151:106–128.