Extracting Pasture Phenotype and Biomass Percentages using Weakly Supervised Multi-target Deep Learning on a Small Dataset
Title: Extracting Pasture Phenotype and Biomass Percentages using Weakly Supervised Multi-target Deep Learning on a Small Dataset
Authors: Narayanan, Badri; Saadeldin, Mohamed; Albert, Paul; McGuinness, Kevin; MacNamee, Brian
Permanent link: http://hdl.handle.net/10197/12572
Date: 31-Aug-2020
Online since: 2021-10-26T10:57:51Z

Abstract:
The dairy industry uses clover and grass as fodder for cows. Accurate estimation of grass and clover biomass yield enables smart decisions in optimizing fertilization and seeding density, resulting in increased productivity and positive environmental impact. Grass and clover are usually planted together, since clover is a nitrogen-fixing plant that brings nutrients to the soil. Adjusting the right percentages of clover and grass in a field reduces the need for external fertilization.

Existing approaches for estimating the grass-clover composition of a field are expensive and time-consuming: random samples of the pasture are clipped, and the components are then physically separated to weigh and calculate the percentages of dry grass, clover, and weeds in each sample. There is growing interest in developing novel deep-learning-based approaches to non-destructively extract pasture phenotype indicators and biomass yield predictions for different plant species from agricultural imagery collected in the field. Providing these indicators and predictions from images alone remains a significant challenge: heavy occlusions in the dense mixture of grass, clover, and weeds make it difficult to estimate each component accurately. Moreover, although supervised deep learning models perform well with large datasets, it is tedious to acquire large and diverse collections of field images with precise ground truth for different biomass yields.

In this paper, we demonstrate that applying data augmentation and transfer learning is effective in predicting multi-target biomass percentages of different plant species, even with a small training dataset. The scheme proposed in this paper used a training set of only 261 images and predicted the biomass percentages of grass, clover, white clover, red clover, and weeds with mean absolute errors (MAE) of 6.77%, 6.92%, 6.21%, 6.89%, and 4.80%, respectively. Evaluation and testing were performed on a publicly available dataset provided by the Biomass Prediction Challenge [Skovsen et al., 2019]. These results lay the foundation for our next set of experiments with semi-supervised learning to improve on these benchmarks, and further the quest to identify phenotype characteristics from imagery in a non-destructive way.

Funding Details: Department of Agriculture, Food and the Marine; Science Foundation Ireland
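As a rough sketch (not the authors' code), two ingredients named in the abstract can be illustrated in NumPy: label-preserving geometric augmentations, which expand a small image dataset without changing its biomass-percentage labels, and per-component mean absolute error, the metric used to report results. The specific augmentation operations, component names, and dummy values below are assumptions chosen for illustration only.

```python
import numpy as np

# Biomass components reported in the paper (ordering here is an assumption)
COMPONENTS = ["grass", "clover", "white clover", "red clover", "weeds"]

def augment(image):
    """Yield simple label-preserving variants of one image.

    Geometric transforms (flips, 90-degree rotations) leave the
    whole-image biomass percentages unchanged, so the same target
    vector can be reused for every variant.
    """
    yield image
    yield np.fliplr(image)   # horizontal flip
    yield np.flipud(image)   # vertical flip
    for k in (1, 2, 3):      # rotations by 90, 180, 270 degrees
        yield np.rot90(image, k)

def per_component_mae(y_true, y_pred):
    """Mean absolute error per biomass component, in percentage points.

    y_true, y_pred: shape (n_images, n_components) arrays of percentages.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.abs(y_true - y_pred).mean(axis=0)

# Illustrative dummy values only, not results from the paper
truth = [[60.0, 30.0, 20.0, 10.0, 10.0],
         [50.0, 40.0, 25.0, 15.0, 10.0]]
preds = [[55.0, 35.0, 22.0, 13.0, 10.0],
         [52.0, 38.0, 24.0, 14.0, 10.0]]
errors = per_component_mae(truth, preds)
for name, err in zip(COMPONENTS, errors):
    print(f"{name}: {err:.2f} percentage points MAE")
```

In practice each training image would pass through `augment` during training, multiplying the effective size of the 261-image set sixfold before the transfer-learned backbone regresses the five percentages.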
Type of material: Conference Publication
Keywords: Computer vision; Deep learning; Transfer learning; Smart agriculture; Data augmentation; Weak supervision
Other versions: https://imvipconference.github.io/
Language: en
Status of Item: Peer reviewed
Conference Details: The Irish Machine Vision and Image Processing Conference 2020 (IMVIP 2020), Sligo, Ireland (held online due to coronavirus outbreak), 31 August - 2 September 2020
This item is made available under a Creative Commons License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
Appears in Collections: Computer Science Research Collection; Insight Research Collection