Effect of changes in testing parameters on the cost-effectiveness of two pooled test methods to classify infection status of animals in a herd

Files in This Item:
- Pooling_Article_Tables_sub2_final.pdf (289.59 kB, Adobe PDF)
- Pooling_Article_figures_sub2_final.pdf (73.65 kB, Adobe PDF)
- Pooling_Article_appendix_sub2_final.pdf (130.1 kB, Adobe PDF)
- Pooling_Article_sub2_final.pdf (699.33 kB, Adobe PDF)
Title: Effect of changes in testing parameters on the cost-effectiveness of two pooled test methods to classify infection status of animals in a herd
Authors: Messam, Locksley L. McV.
O'Brien, Joshua M.
Hietala, Sharon K.
Gardner, Ian A.
Permanent link: http://hdl.handle.net/10197/10174
Date: 1-May-2010
Online since: 2019-04-29T08:18:26Z
Abstract: Monte Carlo simulation was used to determine optimal fecal pool sizes for identification of all Mycobacterium avium subsp. paratuberculosis (MAP)-infected cows in a dairy herd. Two pooling protocols were compared: a halving protocol, involving a single retest of negative pools followed by halving of positive pools, and a simple protocol, involving a single retest of negative pools but no halving of positive pools. For both protocols, all component samples in positive pools were then tested individually. In the simulations, the distributions of the number of tests required to classify all individuals in an infected herd were generated for various combinations of prevalence (0.01, 0.05 and 0.1), herd size (300, 1000 and 3000), pool size (5, 10, 20 and 50) and test sensitivity (0.5-0.9). Test specificity was fixed at 1.0 because fecal culture for MAP yields no or rare false-positive results. Optimal performance was determined primarily by comparing the distributions of the numbers of tests needed to detect MAP-infected cows using the Mann-Whitney U test statistic. Optimal pool size was independent of both herd size and test characteristics, regardless of protocol. When sensitivity was the same for each pool size, pool sizes of 20 and 10 performed best for both protocols at prevalences of 0.01 and 0.1, respectively, while at a prevalence of 0.05, pool sizes of 10 and 20 were optimal for the simple and halving protocols, respectively. When sensitivity decreased with increasing pool size, the results changed for prevalences of 0.05 and 0.1, with a pool size of 50 being optimal, especially at a prevalence of 0.1. Overall, the halving protocol was more cost-effective than the simple protocol, especially at higher prevalences.
For detection of MAP using fecal culture, we recommend the halving protocol and pool sizes of 10 or 20 when the prevalence is suspected to range from 0.01 to 0.1 and there is no expected loss of sensitivity with increasing pool size. If a loss in sensitivity is expected and the prevalence is thought to be between 0.05 and 0.1, the halving protocol and a pool size of 50 are recommended. Our findings are broadly applicable to other infectious diseases under comparable testing conditions.
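The simple protocol described in the abstract lends itself to a short Monte Carlo sketch: test each pool once, retest pools that test negative a single time, and test every member of a pool that tests positive individually, with specificity fixed at 1.0 (a truly negative pool never tests positive). The sketch below is illustrative only, assuming a per-test Bernoulli model of sensitivity; the function name and the exact bookkeeping of retests are our assumptions, not the authors' code.

```python
import random

def simulate_simple_protocol(herd_size, prevalence, pool_size, sensitivity, rng):
    """Count the tests needed to classify every animal in one simulated
    herd under the simple protocol: one test per pool, a single retest of
    pools that test negative, then individual tests of all members of any
    pool that tests positive. Specificity is fixed at 1.0."""
    # Simulate each animal's true infection status.
    statuses = [rng.random() < prevalence for _ in range(herd_size)]
    tests = 0
    for i in range(0, herd_size, pool_size):
        pool = statuses[i:i + pool_size]
        truly_positive = any(pool)           # pool contains >= 1 infected animal
        tests += 1                           # initial pool test
        positive = truly_positive and rng.random() < sensitivity
        if not positive:
            tests += 1                       # single retest of a negative pool
            positive = truly_positive and rng.random() < sensitivity
        if positive:
            tests += len(pool)               # individual follow-up tests
    return tests

if __name__ == "__main__":
    rng = random.Random(1)
    runs = [simulate_simple_protocol(300, 0.05, 10, 0.7, rng)
            for _ in range(1000)]
    print(sum(runs) / len(runs))             # mean number of tests per herd
```

Replicating this over many simulated herds yields the distribution of test counts that the article compares across prevalence, herd size, pool size and sensitivity; the halving protocol would additionally split positive pools in half before resorting to individual tests.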
Type of material: Journal Article
Publisher: Elsevier
Journal: Preventive Veterinary Medicine
Volume: 94
Issue: 3-4
Start page: 202
End page: 212
Copyright (published version): 2010 Elsevier
Keywords: Cost-effectiveness; Pooled testing; Mycobacterium avium subsp. paratuberculosis; Retesting; Cattle diseases
DOI: 10.1016/j.prevetmed.2010.01.005
Language: en
Status of Item: Peer reviewed
Appears in Collections:Veterinary Medicine Research Collection

SCOPUS™ citations: 50 (3 last week, 0 last month; checked on May 24, 2019)

This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. For other possible restrictions on use, please refer to the publisher's URL where this item is made available, or to notes contained in the item itself. Other terms may apply.