Author: Helard Alberto Becerra Martinez
Supervisor(s) and Committee member(s): Mylene C. Q. Farias (supervisor); Eduardo Peixoto Fernandes da Silva (opponent); Teófilo Emidio de Campos (opponent); Bruno L. M. Espinoza (rapporteur).
The development of models for quality prediction of audio and video signals is a fairly mature field. Audiovisual quality prediction, however, is still an emerging area: although several multimodal models have been proposed, and combination and parametric metrics achieve reasonable performance, there is currently no reliable pixel-based audiovisual quality metric. The approach presented in this work is based on the assumption that autoencoders, fed with descriptive audio and video features, can produce a set of features capable of describing the complex interactions between the audio and video components. Based on this hypothesis, we propose a set of multimedia quality metrics: video, audio, and audiovisual. The visual features are natural scene statistics (NSS) and spatio-temporal measures of the video component, while the audio features are obtained by computing the spectrogram representation of the audio component. The model is a 2-layer framework consisting of an autoencoder layer and a classification layer, which are stacked and trained to build the autoencoder network model. The model is trained and tested on a large set of stimuli containing representative audio and video artifacts, and it performed well when tested against the UnB-AV and LiveNetflix-II databases.
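The stacked architecture described above can be sketched in plain NumPy. This is a minimal illustration, not the thesis implementation: the feature dimensions, hidden size, and synthetic targets are all hypothetical stand-ins, and the "classification" layer is approximated here by a least-squares readout of the learned code.

```python
# Illustrative sketch (hypothetical shapes and names): a single-hidden-layer
# tied-weight autoencoder compresses concatenated audio+video features, then
# a linear readout maps the learned code to a quality score.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden=16, epochs=200, lr=0.1):
    """Train a tied-weight autoencoder with plain gradient descent."""
    n_feat = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_feat, n_hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)       # encode
        X_hat = H @ W.T          # decode (tied weights)
        err = X_hat - X          # reconstruction error
        # Gradient of 0.5*||err||^2 w.r.t. W: encoder path + decoder path.
        dH = (err @ W) * H * (1 - H)
        grad = X.T @ dH + err.T @ H
        W -= lr * grad / len(X)
    return W

# Toy stand-ins for NSS/spatio-temporal video features and spectrogram
# audio features (values are random; real features come from the signals).
video_feats = rng.normal(size=(100, 12))
audio_feats = rng.normal(size=(100, 8))
X = np.hstack([video_feats, audio_feats])

W = train_autoencoder(X)
codes = sigmoid(X @ W)           # learned audiovisual representation

# Second layer: map codes to quality scores (synthetic MOS-like targets).
scores = rng.uniform(1, 5, size=100)
A = np.hstack([codes, np.ones((100, 1))])  # add bias column
w, *_ = np.linalg.lstsq(A, scores, rcond=None)
pred = A @ w
```

In the actual model the two layers are stacked and trained jointly as a network; the separate training here is only to keep the sketch short.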
GPDS (Laboratory of Signal Processing)
The GPDS is formed by faculty from the Electrical Engineering and Computer Science Departments, who are engaged in advanced research in several areas of signal processing, such as audio and sound processing, image and video processing, and computer vision.