DC Element | Value | Language |
---|---|---|
dc.contributor.author | Ortmann, Thorben | - |
dc.contributor.author | Wang, Qi | - |
dc.contributor.author | Putzar, Larissa | - |
dc.date.accessioned | 2025-02-17T14:27:37Z | - |
dc.date.available | 2025-02-17T14:27:37Z | - |
dc.date.issued | 2024-10-04 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.12738/17122 | - |
dc.description.abstract | Emotion recognition promotes the evaluation and enhancement of Virtual Reality (VR) experiences by providing emotional feedback and enabling advanced personalization. However, facial expressions are rarely used to recognize users’ emotions, as Head-Mounted Displays (HMDs) occlude the upper half of the face. To address this issue, we conducted a study with 37 participants who played our novel affective VR game EmojiHeroVR. The collected database, EmoHeVRDB (EmojiHeroVR Database), includes 3,556 labeled facial images of 1,778 reenacted emotions. For each labeled image, we also provide 29 additional frames recorded directly before and after the labeled image to facilitate dynamic Facial Expression Recognition (FER). Additionally, EmoHeVRDB includes data on the activations of 63 facial expressions captured via the Meta Quest Pro VR headset for each frame. Leveraging our database, we conducted a baseline evaluation on the static FER classification task with six basic emotions and neutral using the EfficientNet-B0 architecture. The best model achieved an accuracy of 69.84% on the test set, indicating that FER under HMD occlusion is feasible but significantly more challenging than conventional FER. | en |
dc.language.iso | en | en_US |
dc.publisher | arxiv.org | en_US |
dc.relation.ispartof | De.arxiv.org | en_US |
dc.subject | facial expressions | en_US |
dc.subject | emotion recognition | en_US |
dc.subject | virtual reality | en_US |
dc.subject | affective game | en_US |
dc.subject.ddc | 004: Informatik | en_US |
dc.title | EmojiHeroVR: a study on facial expression recognition under partial occlusion from head-mounted displays | en |
dc.type | Preprint | en_US |
dc.relation.conference | International Conference on Affective Computing and Intelligent Interaction 2024 | en_US |
dc.description.version | ReviewPending | en_US |
tuhh.oai.show | true | en_US |
tuhh.publication.institute | Department Medientechnik | en_US |
tuhh.publication.institute | Fakultät Design, Medien und Information | en_US |
tuhh.publisher.doi | 10.48550/arXiv.2410.03331 | - |
tuhh.type.opus | Preprint (Vorabdruck) | - |
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.type.casrai | Other | - |
dc.type.dini | preprint | - |
dc.type.driver | preprint | - |
dc.type.status | info:eu-repo/semantics/draft | en_US |
dcterms.DCMIType | Text | - |
item.creatorOrcid | Ortmann, Thorben | - |
item.creatorOrcid | Wang, Qi | - |
item.creatorOrcid | Putzar, Larissa | - |
item.openairetype | Preprint | - |
item.fulltext | No Fulltext | - |
item.creatorGND | Ortmann, Thorben | - |
item.creatorGND | Wang, Qi | - |
item.creatorGND | Putzar, Larissa | - |
item.languageiso639-1 | en | - |
item.grantfulltext | none | - |
item.openairecristype | http://purl.org/coar/resource_type/c_816b | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | Department Medientechnik | - |
crisitem.author.dept | Department Medientechnik | - |
crisitem.author.orcid | 0009-0006-6589-4262 | - |
crisitem.author.parentorg | Fakultät Design, Medien und Information | - |
crisitem.author.parentorg | Fakultät Design, Medien und Information | - |
Contained in collections: | Publications without full text |
This resource was published under the following copyright terms: Creative Commons license
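The abstract describes the structure of EmoHeVRDB: 3,556 labeled images covering 1,778 reenacted emotions, each labeled image accompanied by 29 additional frames, and 63 facial expression activations per frame. A minimal sketch of how one sample and the dataset-level arithmetic might be modeled is shown below; the class and field names are assumptions for illustration, not the database's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Six basic emotions plus neutral, as used in the baseline FER task.
LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

FRAMES_PER_SAMPLE = 1 + 29      # the labeled image plus 29 surrounding frames
ACTIVATIONS_PER_FRAME = 63      # facial expression activations (Meta Quest Pro)

@dataclass
class Sample:
    """Hypothetical container for one EmoHeVRDB sample (names are assumptions)."""
    label: str                              # one of LABELS
    frame_paths: List[str] = field(default_factory=list)   # 30 image paths
    activations: List[List[float]] = field(default_factory=list)  # 30 x 63 values

# Dataset-level arithmetic taken directly from the abstract:
n_labeled_images = 3556
n_reenacted_emotions = 1778
images_per_emotion = n_labeled_images // n_reenacted_emotions   # 2 images each
total_frames = n_labeled_images * FRAMES_PER_SAMPLE             # all frames provided
```

This arithmetic implies two labeled images per reenacted emotion and 106,680 frames in total across all samples, which is consistent with the counts given in the abstract.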