DC Element | Value | Language
dc.contributor.author | Zach, Juri | -
dc.contributor.author | Stelldinger, Peer | -
dc.date.accessioned | 2025-09-19T14:09:43Z | -
dc.date.available | 2025-09-19T14:09:43Z | -
dc.date.issued | 2025-07-17 | -
dc.identifier.isbn | 979-8-4007-1402-3 | en_US
dc.identifier.uri | https://hdl.handle.net/20.500.12738/18198 | -
dc.description.abstract | This work presents a novel self-supervised learning framework for deep visual odometry on stereo cameras. Recent work on deep visual odometry is often based on monocular vision. A common approach is to use two separate neural networks, which use raw images for depth and ego-motion prediction. This paper proposes an alternative approach that argues against separate prediction of depth and ego-motion and emphasizes the advantages of optical flow and stereo cameras. Its central component is a deep neural network for optical flow predictions, from which both depth and ego-motion can be derived. The neural network training is regulated by a 3D-geometric constraint, which enforces a realistic structure of the scene over consecutive frames and models static and moving objects. It ensures that the neural network has to predict the optical flow as it would occur in the real world. The presented framework is tested on the KITTI dataset. It achieves very good results, outperforming most algorithms for deep visual odometry, and exceeds state-of-the-art results for depth detection. | en
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.subject | deep learning | en_US
dc.subject | optical flow | en_US
dc.subject | self-supervised learning | en_US
dc.subject | stereo image processing | en_US
dc.subject | visual odometry | en_US
dc.subject.ddc | 004: Informatik | en_US
dc.title | Self-supervised deep visual stereo odometry with 3D-geometric constraints | en
dc.type | inProceedings | en_US
dc.relation.conference | ACM International Conference on PErvasive Technologies Related to Assistive Environments 2025 | en_US
dc.identifier.scopus | 2-s2.0-105013073348 | en
dc.description.version | PeerReviewed | en_US
local.contributorCorporate.editor | Association for Computing Machinery | -
tuhh.container.endpage | 342 | en_US
tuhh.container.startpage | 336 | en_US
tuhh.oai.show | true | en_US
tuhh.publication.institute | Department Informatik | en_US
tuhh.publication.institute | Fakultät Technik und Informatik | en_US
tuhh.publisher.doi | 10.1145/3733155.3733194 | -
tuhh.relation.ispartofseries | Proceedings of The 18th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2025) : June 25 – June 27, Corfu, Greece | en_US
tuhh.type.opus | InProceedings (Aufsatz / Paper einer Konferenz etc.) | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | en_US
dc.type.casrai | Conference Paper | -
dc.type.dini | contributionToPeriodical | -
dc.type.driver | contributionToPeriodical | -
dc.type.status | info:eu-repo/semantics/publishedVersion | en_US
dcterms.DCMIType | Text | -
dc.source.type | cp | en
item.seriesref | Proceedings of The 18th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2025) : June 25 – June 27, Corfu, Greece | -
item.languageiso639-1 | en | -
item.openairetype | inProceedings | -
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | -
item.creatorOrcid | Zach, Juri | -
item.creatorOrcid | Stelldinger, Peer | -
item.cerifentitytype | Publications | -
item.tuhhseriesid | Proceedings of The 18th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2025) : June 25 – June 27, Corfu, Greece | -
item.fulltext | No Fulltext | -
item.creatorGND | Zach, Juri | -
item.creatorGND | Stelldinger, Peer | -
item.grantfulltext | none | -
crisitem.author.dept | Department Informatik | -
crisitem.author.dept | Department Informatik | -
crisitem.author.orcid | 0000-0001-8079-2797 | -
crisitem.author.parentorg | Fakultät Technik und Informatik | -
crisitem.author.parentorg | Fakultät Technik und Informatik | -
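
Note on the abstract above: the paper derives both depth and ego-motion from a single optical-flow network on a stereo rig. The record contains no code, so the following is only a minimal geometric sketch, not the authors' method. It assumes a calibrated, rectified stereo pair (focal length f in pixels, baseline B in metres) and flow-matched 3D points, and illustrates the two textbook relations such a pipeline can build on: depth from disparity (Z = f * B / d) and ego-motion as a rigid (Kabsch) alignment of corresponding 3D points. All function names and the NumPy dependency are illustrative assumptions.

import numpy as np

# Illustrative sketch only, not the paper's implementation.
# Assumes a calibrated, rectified stereo rig: focal length f in pixels,
# baseline B in metres, disparity d in pixels.

def depth_from_disparity(disparity, f, baseline):
    # Pinhole stereo relation: Z = f * B / d
    return f * baseline / np.clip(disparity, 1e-6, None)  # guard against d = 0

def ego_motion_kabsch(P, Q):
    # Rigid motion (R, t) minimising sum ||R @ p + t - q||^2 over
    # corresponding 3D points P (frame t) and Q (frame t+1), shape (N, 3),
    # e.g. optical-flow matches lifted to 3D with per-pixel depth.
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Toy check with synthetic, noise-free correspondences.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.uniform(-5.0, 5.0, size=(100, 3))
    a = 0.1  # small yaw, as between consecutive frames
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.5, 0.0, 1.0])
    Q = P @ R_true.T + t_true
    R_est, t_est = ego_motion_kabsch(P, Q)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True

In a real pipeline the flow matches would be noisy and contaminated by moving objects, so a robust estimator (e.g. RANSAC) would wrap the alignment step; the paper's 3D-geometric constraint, which per the abstract models static and moving objects, targets exactly that separation during training.
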
Appears in collections: Publications without full text
This resource was published under the following copyright terms: Creative Commons Attribution 4.0 (CC BY 4.0) license.