DC Element | Value | Language
dc.contributor.author | Ortmann, Thorben | -
dc.contributor.author | Wang, Qi | -
dc.contributor.author | Putzar, Larissa | -
dc.date.accessioned | 2025-02-17T14:27:37Z | -
dc.date.available | 2025-02-17T14:27:37Z | -
dc.date.issued | 2024-10-04 | -
dc.identifier.uri | https://hdl.handle.net/20.500.12738/17122 | -
dc.description.abstract | Emotion recognition promotes the evaluation and enhancement of Virtual Reality (VR) experiences by providing emotional feedback and enabling advanced personalization. However, facial expressions are rarely used to recognize users’ emotions, as Head-Mounted Displays (HMDs) occlude the upper half of the face. To address this issue, we conducted a study with 37 participants who played our novel affective VR game EmojiHeroVR. The collected database, EmoHeVRDB (EmojiHeroVR Database), includes 3,556 labeled facial images of 1,778 reenacted emotions. For each labeled image, we also provide 29 additional frames recorded directly before and after the labeled image to facilitate dynamic Facial Expression Recognition (FER). Additionally, EmoHeVRDB includes data on the activations of 63 facial expressions captured via the Meta Quest Pro VR headset for each frame. Leveraging our database, we conducted a baseline evaluation on the static FER classification task with six basic emotions and neutral using the EfficientNet-B0 architecture. The best model achieved an accuracy of 69.84% on the test set, indicating that FER under HMD occlusion is feasible but significantly more challenging than conventional FER. | en
dc.language.iso | en | en_US
dc.publisher | arxiv.org | en_US
dc.relation.ispartof | De.arxiv.org | en_US
dc.subject | facial expressions | en_US
dc.subject | emotion recognition | en_US
dc.subject | virtual reality | en_US
dc.subject | affective game | en_US
dc.subject.ddc | 004: Informatik | en_US
dc.title | EmojiHeroVR: a study on facial expression recognition under partial occlusion from head-mounted displays | en
dc.type | Preprint | en_US
dc.relation.conference | International Conference on Affective Computing and Intelligent Interaction 2024 | en_US
dc.description.version | ReviewPending | en_US
tuhh.oai.show | true | en_US
tuhh.publication.institute | Department Medientechnik | en_US
tuhh.publication.institute | Fakultät Design, Medien und Information | en_US
tuhh.publisher.doi | 10.48550/arXiv.2410.03331 | -
tuhh.type.opus | Preprint (Vorabdruck) | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | en_US
dc.type.casrai | Other | -
dc.type.dini | preprint | -
dc.type.driver | preprint | -
dc.type.status | info:eu-repo/semantics/draft | en_US
dcterms.DCMIType | Text | -
item.creatorOrcid | Ortmann, Thorben | -
item.creatorOrcid | Wang, Qi | -
item.creatorOrcid | Putzar, Larissa | -
item.openairetype | Preprint | -
item.fulltext | No Fulltext | -
item.creatorGND | Ortmann, Thorben | -
item.creatorGND | Wang, Qi | -
item.creatorGND | Putzar, Larissa | -
item.languageiso639-1 | en | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_816b | -
item.cerifentitytype | Publications | -
crisitem.author.dept | Department Medientechnik | -
crisitem.author.dept | Department Medientechnik | -
crisitem.author.orcid | 0009-0006-6589-4262 | -
crisitem.author.parentorg | Fakultät Design, Medien und Information | -
crisitem.author.parentorg | Fakultät Design, Medien und Information | -
Appears in collections: Publications without full text
This resource was published under the following copyright terms: Creative Commons license.