  • Publication
    Pooled Testing for Bovine Paratuberculosis: Details Matter
    (Johne's Disease Integrated Program, 2010-09)
    Pooling of fecal samples with subsequent testing for Mycobacterium avium subspecies paratuberculosis (MAP) by culture or real-time PCR is used for three main purposes: herd or group classification, prevalence estimation, and low-cost initial screening to identify animals infected with MAP. Though pooling is touted to reduce costs for the latter purpose when prevalence is low, some important considerations have been overlooked. First, there is no consensus as to which pool size is optimal for a given within-herd prevalence. Second, negative pools are rarely retested. Although the choice not to retest might be reasonable if the objective is to find animals shedding moderate to high numbers of MAP (i.e. the most infectious animals), it may be sub-optimal if the goal is to detect all infected animals. Some infected pools will invariably test negative because culture and PCR are only about 50 to 60% sensitive based on a single sample. Third, more sophisticated pooling protocols (for example, those requiring re-creation of pools of half or a quarter of the original size) might offer cost-saving advantages. Reticence about application of the latter two testing modifications is understandable since pooling is not supposed to increase laboratory work-load.
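The cost trade-off flagged in this abstract can be illustrated with a back-of-the-envelope calculation. The sketch below is an illustration, not the paper's model: it computes the expected number of tests per animal under simple two-stage (Dorfman) pooling, assuming a perfect test and no retesting. The abstract's point is precisely that imperfect sensitivity and retesting complicate this idealized picture.

```python
def expected_tests_per_animal(p, k):
    """Expected tests per animal under two-stage (Dorfman) pooling:
    one pool test shared by k animals, plus k individual tests whenever
    the pool is positive. Assumes a perfect test (Se = Sp = 1).
    """
    return 1.0 / k + (1.0 - (1.0 - p) ** k)


def optimal_pool_size(p, max_k=100):
    """Pool size (2..max_k) minimizing expected tests per animal at prevalence p."""
    return min(range(2, max_k + 1), key=lambda k: expected_tests_per_animal(p, k))


if __name__ == "__main__":
    for p in (0.01, 0.05, 0.1):
        k = optimal_pool_size(p)
        print(f"prevalence {p}: optimal pool size {k}, "
              f"{expected_tests_per_animal(p, k):.3f} tests per animal")
```

Under these idealized assumptions the optimal pool size shrinks as prevalence rises; once sensitivity depends on pool size and negative pools are retested, as the abstract argues, no such simple formula applies.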
  • Publication
    Frequentist and Bayesian approaches to prevalence estimation using examples from Johne's disease
    Although frequentist approaches to prevalence estimation are simple to apply, there are circumstances where assumptions of asymptotic normality are difficult to satisfy and nonsensical point estimates (greater than 1 or less than 0) may result. This is particularly true when sample sizes are small, test prevalences are low, and imperfect sensitivity and specificity of diagnostic tests need to be incorporated into calculations of true prevalence. Bayesian approaches offer several advantages, including direct computation of range-respecting interval estimates (e.g. intervals between 0 and 1 for prevalence) without the requirement of transformations or large-sample approximations, direct probabilistic interpretation, and the flexibility to model in a straightforward manner the probability of zero prevalence. In this review, we present frequentist and Bayesian methods for animal- and herd-level true prevalence estimation based on individual and pooled samples. We provide statistical methods for detecting differences between population prevalences and frequentist methods for sample size and power calculations. All examples are motivated using Mycobacterium avium subspecies paratuberculosis infection and we provide WinBUGS code for all examples of Bayesian estimation.
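The range-respecting Bayesian intervals described in this abstract can be sketched without WinBUGS. The grid approximation below is a simplified stand-in, not the paper's models: it puts a uniform prior on true prevalence, adjusts for known (assumed fixed) sensitivity and specificity via the apparent-prevalence relationship, and returns a posterior mean and 95% credible interval that stay inside [0, 1]. The input values in the test are hypothetical.

```python
import math


def posterior_true_prevalence(x, n, se, sp, grid=2001):
    """Grid-approximate posterior for true prevalence pi, given x test-positives
    out of n animals, with known test sensitivity (se) and specificity (sp).
    Apparent prevalence: ap = pi*se + (1 - pi)*(1 - sp).
    Uniform (Beta(1,1)) prior on pi; returns (posterior mean, (lo, hi)).
    """
    pis = [i / (grid - 1) for i in range(grid)]
    logliks = []
    for pi in pis:
        ap = pi * se + (1.0 - pi) * (1.0 - sp)
        ap = min(max(ap, 1e-12), 1.0 - 1e-12)  # guard log(0)
        logliks.append(x * math.log(ap) + (n - x) * math.log(1.0 - ap))
    m = max(logliks)
    weights = [math.exp(ll - m) for ll in logliks]
    total = sum(weights)
    probs = [w / total for w in weights]
    mean = sum(p * pi for p, pi in zip(probs, pis))
    # 95% equal-tailed credible interval from the discrete posterior
    cdf, lo, hi = 0.0, None, None
    for pi, p in zip(pis, probs):
        cdf += p
        if lo is None and cdf >= 0.025:
            lo = pi
        if hi is None and cdf >= 0.975:
            hi = pi
    return mean, (lo, hi)
```

Unlike a frequentist Rogan-Gladen point estimate, which can fall below 0 when the observed proportion positive is smaller than 1 - sp, the interval here respects the [0, 1] range by construction.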
  • Publication
    Effect of changes in testing parameters on the cost-effectiveness of two pooled test methods to classify infection status of animals in a herd
    Monte Carlo simulation was used to determine optimal fecal pool sizes for identification of all Mycobacterium avium subsp. paratuberculosis (MAP)-infected cows in a dairy herd. Two pooling protocols were compared: a halving protocol involving a single retest of negative pools followed by halving of positive pools, and a simple protocol involving a single retest of negative pools but no halving of positive pools. For both protocols, all component samples in positive pools were then tested individually. In the simulations, the distributions of the number of tests required to classify all individuals in an infected herd were generated for various combinations of prevalence (0.01, 0.05 and 0.1), herd size (300, 1000 and 3000), pool size (5, 10, 20 and 50) and test sensitivity (0.5-0.9). Test specificity was fixed at 1.0 because fecal culture for MAP yields no or rare false-positive results. Optimal performance was determined primarily on the basis of a comparison of the distributions of numbers of tests needed to detect MAP-infected cows using the Mann-Whitney U test statistic. Optimal pool size was independent of both herd size and test characteristics, regardless of protocol. When sensitivity was the same for each pool size, pool sizes of 20 and 10 performed best for both protocols for prevalences of 0.01 and 0.1, respectively, while for a prevalence of 0.05, pool sizes of 10 and 20 were optimal for the simple and halving protocols, respectively. When sensitivity decreased with increasing pool size, the results changed for prevalences of 0.05 and 0.1, with a pool size of 50 being optimal, especially at a prevalence of 0.1. Overall, the halving protocol was more cost-effective than the simple protocol, especially at higher prevalences. For detection of MAP using fecal culture, we recommend use of the halving protocol and pool sizes of 10 or 20 when the prevalence is suspected to range from 0.01 to 0.1 and there is no expected loss of sensitivity with increasing pool size. If loss in sensitivity is expected and the prevalence is thought to be between 0.05 and 0.1, the halving protocol and a pool size of 50 are recommended. Our findings are broadly applicable to other infectious diseases under comparable testing conditions.