  • Publication
    Effect of changes in testing parameters on the cost-effectiveness of two pooled test methods to classify infection status of animals in a herd
    Monte Carlo simulation was used to determine optimal fecal pool sizes for identification of all Mycobacterium avium subsp. paratuberculosis (MAP)-infected cows in a dairy herd. Two pooling protocols were compared: a halving protocol, in which negative pools were retested once and positive pools were halved, and a simple protocol, in which negative pools were retested once but positive pools were not halved. Under both protocols, all component samples in positive pools were then tested individually. In the simulations, the distributions of the number of tests required to classify all individuals in an infected herd were generated for combinations of prevalence (0.01, 0.05 and 0.1), herd size (300, 1000 and 3000), pool size (5, 10, 20 and 50) and test sensitivity (0.5-0.9). Test specificity was fixed at 1.0 because fecal culture for MAP yields few or no false-positive results. Optimal performance was judged primarily by comparing the distributions of the number of tests needed to detect MAP-infected cows, using the Mann-Whitney U test statistic. Optimal pool size was independent of both herd size and test characteristics, regardless of protocol. When sensitivity was the same for every pool size, pool sizes of 20 and 10 performed best under both protocols at prevalences of 0.01 and 0.1, respectively, while at a prevalence of 0.05, pool sizes of 10 and 20 were optimal for the simple and halving protocols, respectively. When sensitivity decreased with increasing pool size, the results changed at prevalences of 0.05 and 0.1, with a pool size of 50 becoming optimal, especially at a prevalence of 0.1. Overall, the halving protocol was more cost-effective than the simple protocol, especially at higher prevalences. For detection of MAP by fecal culture, we recommend the halving protocol with pool sizes of 10 or 20 when the prevalence is suspected to range from 0.01 to 0.1 and no loss of sensitivity with increasing pool size is expected. If a loss of sensitivity is expected and the prevalence is thought to be between 0.05 and 0.1, the halving protocol with a pool size of 50 is recommended. Our findings are broadly applicable to other infectious diseases under comparable testing conditions.
    Scopus© Citations: 5
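A minimal Monte Carlo sketch of the two pooling protocols described in this abstract, written in Python. It is an illustration under stated assumptions, not the paper's exact simulation design: sensitivity is taken to be independent of pool size, halved sub-pools are not retested, and the default herd size, prevalence, pool size, and sensitivity values are placeholders chosen from the ranges the abstract reports. Specificity is fixed at 1.0, as in the abstract.

```python
import random
import statistics

def pool_positive(pool, sensitivity):
    """A pool tests positive with probability `sensitivity` if it contains
    at least one infected animal; specificity is 1.0, so uninfected pools
    never test positive."""
    return any(pool) and random.random() < sensitivity

def tests_for_herd(herd, pool_size, sensitivity, halving):
    """Count tests needed to classify every animal under one protocol.
    Assumptions (not specified in the abstract): negative pools get a
    single retest, halved sub-pools are not retested, and each test is
    an independent Bernoulli draw."""
    tests = 0
    for start in range(0, len(herd), pool_size):
        pool = herd[start:start + pool_size]
        tests += 1
        positive = pool_positive(pool, sensitivity)
        if not positive:                       # single retest of negative pools
            tests += 1
            positive = pool_positive(pool, sensitivity)
        if not positive:
            continue
        if halving and len(pool) > 1:          # halving protocol: split the positive pool
            half = len(pool) // 2
            for sub in (pool[:half], pool[half:]):
                tests += 1
                if pool_positive(sub, sensitivity):
                    tests += len(sub)          # test components of positive halves individually
        else:
            tests += len(pool)                 # simple protocol: test all components individually
    return tests

def simulate(n_reps=1000, herd_size=1000, prevalence=0.05,
             pool_size=10, sensitivity=0.7):
    """Compare mean number of tests for the simple and halving protocols."""
    results = {"simple": [], "halving": []}
    for _ in range(n_reps):
        herd = [random.random() < prevalence for _ in range(herd_size)]
        results["simple"].append(tests_for_herd(herd, pool_size, sensitivity, halving=False))
        results["halving"].append(tests_for_herd(herd, pool_size, sensitivity, halving=True))
    return {name: statistics.mean(counts) for name, counts in results.items()}

print(simulate())
```

Varying `prevalence` and `pool_size` in `simulate()` reproduces the kind of comparison the abstract describes; the abstract reports the halving protocol to be the more cost-effective of the two, particularly at higher prevalences.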
  • Publication
    Frequentist and Bayesian approaches to prevalence estimation using examples from Johne's disease
    Although frequentist approaches to prevalence estimation are simple to apply, there are circumstances in which the assumptions of asymptotic normality are difficult to satisfy and nonsensical point estimates (greater than 1 or less than 0) may result. This is particularly true when sample sizes are small, test prevalences are low, and imperfect sensitivity and specificity of diagnostic tests need to be incorporated into calculations of true prevalence. Bayesian approaches offer several advantages, including direct computation of range-respecting interval estimates (e.g. intervals between 0 and 1 for prevalence) without transformations or large-sample approximations, a direct probabilistic interpretation, and the flexibility to model the probability of zero prevalence in a straightforward manner. In this review, we present frequentist and Bayesian methods for animal- and herd-level true prevalence estimation based on individual and pooled samples. We provide statistical methods for detecting differences between population prevalences and frequentist methods for sample size and power calculations. All examples are motivated by Mycobacterium avium subspecies paratuberculosis infection, and we provide WinBUGS code for all examples of Bayesian estimation.
    Scopus© Citations: 70
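A brief Python sketch of the contrast this abstract draws, offered as an illustration rather than the paper's WinBUGS code. The frequentist part uses the standard Rogan-Gladen correction of apparent prevalence, whose Wald-type interval can fall outside [0, 1]; the Bayesian part is a simple grid approximation to the posterior of true prevalence with sensitivity and specificity treated as known, which yields a range-respecting credible interval. The example data (4 test-positives out of 60 animals, Se = 0.6, Sp = 0.98) and the uniform Beta(1, 1) prior are hypothetical.

```python
import numpy as np
from scipy import stats

def rogan_gladen(pos, n, se, sp, alpha=0.05):
    """Frequentist true-prevalence estimate from apparent prevalence, with a
    large-sample (Wald-type) confidence interval assuming known Se and Sp.
    The point estimate and interval can fall outside [0, 1] when the
    apparent prevalence is low -- the behaviour the abstract warns about."""
    ap = pos / n
    tp = (ap + sp - 1) / (se + sp - 1)
    se_tp = np.sqrt(ap * (1 - ap) / n) / (se + sp - 1)
    z = stats.norm.ppf(1 - alpha / 2)
    return tp, (tp - z * se_tp, tp + z * se_tp)

def bayes_true_prevalence(pos, n, se, sp, prior_a=1, prior_b=1, grid=2001):
    """Grid approximation to the posterior of true prevalence pi with known
    Se/Sp and a Beta(prior_a, prior_b) prior; returns the posterior mean and
    a 95% credible interval that stays within [0, 1]."""
    pi = np.linspace(0, 1, grid)
    ap = pi * se + (1 - pi) * (1 - sp)          # apparent prevalence implied by pi
    log_post = (stats.binom.logpmf(pos, n, ap)
                + stats.beta.logpdf(pi, prior_a, prior_b))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    mean = float(np.sum(pi * post))
    lo, hi = pi[np.searchsorted(cdf, 0.025)], pi[np.searchsorted(cdf, 0.975)]
    return mean, (float(lo), float(hi))

# Hypothetical example: 4 test-positives out of 60 cows, Se = 0.6, Sp = 0.98.
print(rogan_gladen(4, 60, 0.6, 0.98))
print(bayes_true_prevalence(4, 60, 0.6, 0.98))
```

With these numbers the Rogan-Gladen interval extends below zero, while the Bayesian credible interval remains within the admissible range, which is the advantage of the Bayesian formulation highlighted in the abstract.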