DC Field | Value | Language
dc.contributor.author | Witte, Anja | -
dc.contributor.author | Lange, Sascha | -
dc.contributor.author | Lins, Christian | -
dc.date.accessioned | 2024-10-17T13:36:44Z | -
dc.date.available | 2024-10-17T13:36:44Z | -
dc.date.issued | 2024-08-23 | -
dc.identifier.issn | 2731-667X | en_US
dc.identifier.uri | https://hdl.handle.net/20.500.12738/16387 | -
dc.description.abstract | The amount of labelled data in industrial use cases is limited because annotation is time-consuming and costly. Since self-supervised pretraining such as MAE has enabled segmentation models to be trained with fewer labels in research, it is an interesting direction for industry as well. The required number of labels is reduced by pretraining on large amounts of unlabelled images in order to learn image features. This paper analyses the influence of MAE pretraining on the label-efficiency of semantic segmentation with UNETR, investigated for the use case of log-yard cranes. Additionally, two transfer-learning cases, with respect to crane type and perspective, are considered in the context of label-efficiency. The results show that MAE is successfully applicable to the use case: for segmentation, an IoU improvement of 3.26% is reached when using 2000 labels. Across all experiments, the strongest positive influence is found at lower label amounts. The highest effect is achieved with transfer learning across crane types, where IoU and Recall increase by about 4.31% and 8.58%, respectively. Further analyses show that the improvements result from a better distinction between the background and the segmented crane objects. | en
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.relation.ispartof | Industrial artificial intelligence | en_US
dc.subject | Masked autoencoder | en_US
dc.subject | Self-supervised pretraining | en_US
dc.subject | Semantic segmentation | en_US
dc.subject | UNETR | en_US
dc.subject | Label-efficiency | en_US
dc.subject | Log-yard cranes | en_US
dc.subject.ddc | 004: Computer science | en_US
dc.title | Masked autoencoder: influence of self-supervised pretraining on object segmentation in industrial images | en
dc.type | Article | en_US
dc.identifier.doi | 10.48441/4427.1962 | -
dc.description.version | PeerReviewed | en_US
openaire.rights | info:eu-repo/semantics/openAccess | en_US
tuhh.container.issue | 1 | en_US
tuhh.container.volume | 2 | en_US
tuhh.identifier.urn | urn:nbn:de:gbv:18302-reposit-195713 | -
tuhh.oai.show | true | en_US
tuhh.publication.institute | Department Informatik | en_US
tuhh.publication.institute | Fakultät Technik und Informatik | en_US
tuhh.publisher.doi | 10.1007/s44244-024-00020-y | -
tuhh.type.opus | (scholarly) article | -
tuhh.type.rdm | true | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | en_US
dc.type.casrai | Journal Article | -
dc.type.dini | article | -
dc.type.driver | article | -
dc.type.status | info:eu-repo/semantics/publishedVersion | en_US
dcterms.DCMIType | Text | -
tuhh.container.articlenumber | 7 (2024) | en_US
local.comment.external | Witte, A., Lange, S. & Lins, C. Masked autoencoder: influence of self-supervised pretraining on object segmentation in industrial images. Industrial Artificial Intelligence 2, 7 (2024). https://doi.org/10.1007/s44244-024-00020-y | en_US
tuhh.apc.status | false | en_US
item.creatorGND | Witte, Anja | -
item.creatorGND | Lange, Sascha | -
item.creatorGND | Lins, Christian | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.creatorOrcid | Witte, Anja | -
item.creatorOrcid | Lange, Sascha | -
item.creatorOrcid | Lins, Christian | -
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
item.openairetype | Article | -
crisitem.author.dept | Department Informatik | -
crisitem.author.orcid | 0000-0003-3714-0069 | -
crisitem.author.parentorg | Fakultät Technik und Informatik | -
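
The abstract above rests on two mechanics: MAE-style self-supervised pretraining (randomly mask most image patches, reconstruct them) and IoU-based evaluation of the downstream segmentation. Below is a minimal, self-contained sketch of both in plain PyTorch. All names here (patchify, random_masking, mae_loss, binary_iou) and the 0.75 mask ratio are illustrative assumptions, not the authors' implementation, which uses UNETR for the downstream segmentation.

```python
# Hedged sketch of the MAE masking/reconstruction idea and the IoU metric
# named in the abstract. Plain PyTorch; names are assumptions, not taken
# from the paper's code.
import torch

def patchify(imgs: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Split (B, C, H, W) images into (B, N, patch*patch*C) flat patches."""
    b, c, h, w = imgs.shape
    ph, pw = h // patch, w // patch
    x = imgs.reshape(b, c, ph, patch, pw, patch)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(b, ph * pw, patch * patch * c)

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patches; 1 in the returned mask = hidden."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    ids_shuffle = torch.rand(b, n, device=patches.device).argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=patches.device)
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask

def mae_loss(pred: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """MSE reconstruction loss averaged over the masked patches only."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)  # (B, N)
    return (per_patch * mask).sum() / mask.sum()

def binary_iou(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Intersection over union for boolean masks (the IoU reported above)."""
    inter = (pred & target).sum().item()
    union = (pred | target).sum().item()
    return inter / union if union else 1.0

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)        # dummy batch of images
    patches = patchify(imgs)                  # (2, 196, 768)
    visible, mask = random_masking(patches)   # an MAE encoder sees `visible`
    # A real MAE encoder/decoder would predict all patches; faked here.
    pred = torch.randn_like(patches)
    print("reconstruction loss:", mae_loss(pred, patches, mask).item())
    print("IoU:", binary_iou(torch.rand(4, 4) > 0.5, torch.rand(4, 4) > 0.5))
```

Computing the loss only on masked patches is the design choice that makes MAE pretraining effective: the model must infer hidden image content from sparse visible context, which is what lets the later segmentation model get by with fewer labels.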
Appears in collections: Publications with full text

Files in this item:
File | Description | Size | Format
2024_Witte_MaskedAutoencoder.pdf | | 4.54 MB | Adobe PDF