PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
ISSN 1871-6032

Reference:
PAGE 15 (2006) Abstr 997 [www.page-meeting.org/?abstract=997]


Poster: Methodology - Model evaluation


Lars Lindbom

Evaluating the evaluations: resampling methods for determining model appropriateness in pharmacometric data analysis

Lars Lindbom (1), Justin J Wilkins (1), Nicolas Frey (2), Mats O Karlsson (1), E Niclas Jonsson (1,2)

(1) Division of Pharmacokinetics and Drug Therapy, Department of Biopharmaceutical Sciences, Uppsala University, Uppsala, Sweden; (2) Modeling & Simulation Group, Medical Science/Clinical Pharmacology, F. Hoffmann-La Roche, Basel, Switzerland

Introduction: Model evaluation is widely recognized as an important part of population pharmacokinetic (PK) and pharmacodynamic (PD) model development. Although methods for performing such evaluations have been proposed and used, they have not been studied extensively in this application area. In this study, we investigate the properties of some of the proposed methods: the nonparametric bootstrap, log-likelihood profiling, case-deletion diagnostics and regression-derived standard errors. The investigation focuses on each method's ability to assess model appropriateness in terms of stability, sensitivity to influential data and parameter precision.
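
As background (not stated in the abstract), log-likelihood profiling typically derives confidence limits from the change in the NONMEM objective function value (OFV, proportional to -2 log-likelihood) when one parameter is fixed at trial values and the remaining parameters are re-estimated. A rough sketch of the usual criterion, under those standard assumptions:

```latex
% Usual profiling criterion (standard usage, not quoted from the abstract):
% the 95% confidence limits for a parameter \theta_k are the values at which
% the profiled OFV rises by the chi-square quantile with one degree of freedom.
\[
  \Delta\mathrm{OFV}(\theta_k)
    = \min_{\boldsymbol{\theta}_{-k}} \mathrm{OFV}(\theta_k, \boldsymbol{\theta}_{-k})
      - \mathrm{OFV}(\hat{\boldsymbol{\theta}})
    \le \chi^2_{1,\,0.95} \approx 3.84
\]
```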

Methods: Twenty-two PK and PD models based on real clinical and nonclinical data, implemented in NONMEM with a range of estimation methods, were evaluated with each of the evaluation methods as implemented in Perl-speaks-NONMEM (PsN) (1), and the results were compared and contrasted.
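
For orientation, the following is a minimal sketch of the nonparametric (case-resampling) bootstrap idea underlying this comparison: whole subjects are drawn with replacement, the model is refitted to each resampled dataset, and percentile intervals are taken over the resulting estimates. This is not PsN's implementation; the `fit_model` callable, the "ID" column name and the Python setting are illustrative assumptions only.

```python
# Illustrative sketch (not PsN's implementation) of a nonparametric
# case-resampling bootstrap for a population analysis.
import numpy as np
import pandas as pd

def case_bootstrap(data: pd.DataFrame, fit_model, n_samples: int = 2000, seed: int = 1):
    """Return bootstrap parameter estimates and 95% percentile intervals."""
    rng = np.random.default_rng(seed)
    ids = data["ID"].unique()
    estimates = []
    for _ in range(n_samples):
        # Draw subjects with replacement; relabel so repeated subjects stay distinct.
        drawn = rng.choice(ids, size=len(ids), replace=True)
        resampled = pd.concat(
            [data[data["ID"] == subj].assign(ID=new_id)
             for new_id, subj in enumerate(drawn, start=1)],
            ignore_index=True,
        )
        # Refit the model to the resampled dataset and store the parameter vector.
        estimates.append(fit_model(resampled))
    estimates = np.asarray(estimates)
    # Percentile confidence intervals, as used for the comparison in the abstract.
    ci = np.percentile(estimates, [2.5, 97.5], axis=0)
    return estimates, ci
```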

Results and discussion: Taking percentiles from a nonparametric bootstrap with 2,000 samples as the best available evaluation method, log-likelihood profiling (LLP) and the bootstrap gave very similar results for fixed-effects parameters when the first-order conditional estimation (FOCE) method was used. In contrast, LLP produced the most divergent confidence intervals for model parameters when the first-order (FO) estimation method was applied. This is in line with earlier findings that a change in objective function value obtained with the FO method is a comparatively poor basis for confidence intervals. The impact of a successful or failed covariance step in NONMEM on the bootstrap results was found to be small. Many of the models included in this analysis exhibited problems to some extent, particularly overparameterization, despite having been classified as final by their developers before the analysis began. The condition number of the covariance matrix of the original model was found to be a strong predictor of NONMEM stability in the bootstrap and case-deletion diagnostics.
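
As a side note on the last point, the condition number of a variance-covariance matrix is conventionally the ratio of its largest to smallest eigenvalue; a minimal sketch under that assumption (the exact quantity reported by NONMEM or PsN may be defined differently):

```python
# Minimal sketch: condition number of a symmetric variance-covariance matrix,
# taken as the ratio of its largest to smallest eigenvalue. A large value
# (a common rule of thumb is > 1000) suggests ill-conditioning or
# overparameterization.
import numpy as np

def condition_number(cov: np.ndarray) -> float:
    eigvals = np.linalg.eigvalsh(cov)  # eigenvalues of the symmetric matrix
    return float(eigvals.max() / eigvals.min())
```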

Reference:
1. Lindbom L, Pihlgren P, Jonsson EN. PsN-Toolkit - a collection of computer intensive statistical methods for non-linear mixed effect modeling using NONMEM. Comput Methods Programs Biomed. 2005 Sep;79(3):241-57.