Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer
Authors
Yap, Chuin Hong; orcid: 0000-0003-2251-9308; email: chuin.h.yap@stu.mmu.ac.uk
Cunningham, Ryan; orcid: 0000-0001-6883-6515; email: Ryan.Cunningham@mmu.ac.uk
Davison, Adrian K.; orcid: 0000-0002-6496-0209; email: adrian.davison@manchester.ac.uk
Yap, Moi Hoon; orcid: 0000-0001-7681-4287; email: M.Yap@mmu.ac.uk
Publication Date
2021-08-11
Abstract
Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. Few methods exist for generating long videos that contain micro-expressions, and there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generating synthetic long videos and recommend assessment methods for inspecting dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by analysing the facial action units detected by OpenFace. Quantitatively, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method proposed by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original facial movements with the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool for micro-expression research, especially for the spotting task.
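As an illustration of the quantitative check described above (a minimal sketch, not the authors' released code), the following Python snippet computes Pearson's correlation between the per-frame AU intensities that OpenFace's FeatureExtraction tool writes to CSV (intensity columns such as AU06_r and AU12_r) for an original clip and its synthetic counterpart. The file names are hypothetical placeholders.

# Sketch: correlate OpenFace AU intensities between an original SAMM
# long video and its SAMM-SYNTH counterpart. Assumes FeatureExtraction
# has already produced one per-frame CSV for each video.
import pandas as pd
from scipy.stats import pearsonr

def au_correlation(original_csv: str, synthetic_csv: str, au: str) -> float:
    """Pearson's r between per-frame intensities of one AU."""
    orig = pd.read_csv(original_csv)
    synth = pd.read_csv(synthetic_csv)
    # Some OpenFace versions pad column names with leading spaces.
    orig.columns = orig.columns.str.strip()
    synth.columns = synth.columns.str.strip()
    n = min(len(orig), len(synth))  # align on the shorter sequence
    r, _ = pearsonr(orig[au][:n], synth[au][:n])
    return r

# Hypothetical file names, for illustration only.
for au in ("AU06_r", "AU12_r"):
    print(au, au_correlation("samm_006.csv", "samm_synth_006.csv", au))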
Citation
Journal of Imaging, volume 7, issue 8, page e142
Publisher
MDPI
Type
article
Description
From MDPI via Jisc Publications Router
History: accepted 2021-08-06, pub-electronic 2021-08-11
Publication status: Published
