How Deep is Your Encoder: An Analysis of Features Descriptors for an Autoencoder-Based Audio-Visual Quality Metric

Files in This Item:
insight_publication-1.pdf (3.33 MB, Adobe PDF)
Title: How Deep is Your Encoder: An Analysis of Features Descriptors for an Autoencoder-Based Audio-Visual Quality Metric
Authors: Martinez, Helard; Hines, Andrew; Farias, Mylène C.Q.
Permanent link:
Date: 28-May-2020
Online since: 2021-05-26T11:50:25Z
Abstract: The development of audio-visual quality assessment models poses a number of challenges in obtaining accurate predictions. One of these challenges is modelling the complex interaction between audio and visual stimuli and how this interaction is interpreted by human users. The No-Reference Audio-Visual Quality Metric Based on a Deep Autoencoder (NAViDAd) deals with this problem from a machine learning perspective. The metric receives two sets of audio and video feature descriptors and produces a low-dimensional set of features used to predict the audio-visual quality. A basic implementation of NAViDAd was able to produce accurate predictions when tested on a range of different audio-visual databases. The current work performs an ablation study on the base architecture of the metric. Several modules are removed or re-trained using different configurations to gain a better understanding of the metric's functionality. The results presented in this study provide important feedback that allows us to understand the real capacity of the metric's architecture and eventually develop a much better audio-visual quality metric.
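The abstract describes a pipeline in which audio and video feature descriptors are fused and compressed by an encoder into a low-dimensional representation, which a regressor then maps to a quality score. The following is a minimal NumPy sketch of that general idea, not the paper's actual NAViDAd implementation: all dimensions, weights, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- NOT the values used in the NAViDAd paper.
AUDIO_DIM, VIDEO_DIM, BOTTLENECK_DIM = 128, 256, 16

def init_layer(n_in, n_out, rng):
    """Small random dense-layer weights for this sketch (untrained)."""
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

# Encoder: concatenated audio+video descriptors -> low-dimensional code.
W_enc, b_enc = init_layer(AUDIO_DIM + VIDEO_DIM, BOTTLENECK_DIM, rng)
# Regressor head: low-dimensional code -> single quality score.
W_out, b_out = init_layer(BOTTLENECK_DIM, 1, rng)

def predict_quality(audio_feats, video_feats):
    """Fuse the two modalities, encode, and predict a scalar quality score."""
    x = np.concatenate([audio_feats, video_feats])   # modality fusion
    code = np.tanh(x @ W_enc + b_enc)                # low-dimensional features
    return float(code @ W_out + b_out)               # predicted quality

audio = rng.standard_normal(AUDIO_DIM)
video = rng.standard_normal(VIDEO_DIM)
score = predict_quality(audio, video)
```

In a trained system the encoder weights would come from autoencoder pre-training on the feature descriptors and the regressor head would be fit to subjective quality scores; the sketch only shows the shape of the data flow the abstract describes.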
Funding Details: Science Foundation Ireland
Funding Details: Insight Research Centre
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Fundação de Apoio à Pesquisa do Distrito Federal (FAPDF)
University of Brasília (UnB)
Type of material: Conference Publication
Publisher: IEEE
Copyright (published version): 2020 IEEE
Keywords: Machine learning & statistics; Audio-visual; Quality metrics; Autoencoder; QoE; Machine learning
DOI: 10.1109/QoMEX48832.2020.9123142
Other versions:
Language: en
Status of Item: Peer reviewed
Is part of: 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX)
Conference Details: International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland (held online due to coronavirus outbreak), 26-28 May 2020
ISBN: 978-1-7281-5965-2
This item is made available under a Creative Commons License:
Appears in Collections:Computer Science Research Collection
Insight Research Collection
