DC Element | Value | Language
dc.contributor.author | Andersen, Jakob Smedegaard | -
dc.contributor.author | Zukunft, Olaf | -
dc.date.accessioned | 2022-03-23T10:31:48Z | -
dc.date.available | 2022-03-23T10:31:48Z | -
dc.date.issued | 2022 | -
dc.identifier.isbn | 978-989-758-547-0 | en_US
dc.identifier.uri | http://hdl.handle.net/20.500.12738/12790 | -
dc.description.abstract | Reliably classifying huge amounts of textual data is a primary objective of many machine learning applications. However, state-of-the-art text classifiers require extensive computational resources, which limit their applicability in real-world scenarios. To improve the application of lightweight classifiers on edge devices, e.g. personal workstations, we adapt the Human-in-the-Loop paradigm to improve the accuracy of classifiers without re-training, by manually validating and correcting parts of the classification outcome. This paper performs a series of experiments to empirically assess the performance of uncertainty-based Human-in-the-Loop classification with nine lightweight machine learning classifiers on four real-world classification tasks, using pre-trained SBERT encodings as text features. Since time efficiency is crucial for interactive machine learning pipelines, we further compare training and inference times to enable rapid interactions. Our results indicate that lightweight classifiers with a human in the loop can reach high accuracy, e.g. improving a classifier’s F1-score from 90.19% to 97% when 22.62% of a dataset is classified manually. In addition, we show that SBERT-based classifiers are time efficient and can be re-trained in under 4 seconds using a Logistic Regression model. (An illustrative code sketch of this pipeline follows the metadata table below.) | en
dc.language.iso | en | en_US
dc.publisher | SciTePress | en_US
dc.subject | Hybrid Intelligent Systems | en_US
dc.subject | Machine Learning | en_US
dc.subject | Text Classification | en_US
dc.subject | Interactive Machine Learning | en_US
dc.subject | Time Efficiency | en_US
dc.subject.ddc | 004: Informatik | en_US
dc.title | Towards more reliable text classification on edge devices via a Human-in-the-Loop | en
dc.type | inProceedings | en_US
dc.relation.conference | International Conference on Agents and Artificial Intelligence 2022 | en_US
dc.description.version | PeerReviewed | en_US
tuhh.container.endpage | 646 | en_US
tuhh.container.startpage | 636 | en_US
tuhh.oai.show | true | en_US
tuhh.publication.institute | Forschungsgruppe Big Data Lab | en_US
tuhh.publication.institute | Department Informatik | en_US
tuhh.publication.institute | Fakultät Technik und Informatik | en_US
tuhh.publisher.doi | 10.5220/0010980600003116 | -
tuhh.publisher.url | https://www.scitepress.org/Papers/2022/109806/109806.pdf | -
tuhh.relation.ispartofseries | Proceedings of the 14th International Conference on Agents and Artificial Intelligence | en_US
tuhh.relation.ispartofseriesnumber | 2 : ICAART | en_US
tuhh.type.opus | InProceedings (Aufsatz / Paper einer Konferenz etc.) | -
dc.rights.cc | https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US
dc.type.casrai | Conference Paper | -
dc.type.dini | contributionToPeriodical | -
dc.type.driver | contributionToPeriodical | -
dc.type.status | info:eu-repo/semantics/publishedVersion | en_US
dcterms.DCMIType | Text | -
item.creatorGND | Andersen, Jakob Smedegaard | -
item.creatorGND | Zukunft, Olaf | -
item.fulltext | No Fulltext | -
item.creatorOrcid | Andersen, Jakob Smedegaard | -
item.creatorOrcid | Zukunft, Olaf | -
item.seriesref | Proceedings of the 14th International Conference on Agents and Artificial Intelligence;2 : ICAART | -
item.grantfulltext | none | -
item.cerifentitytype | Publications | -
item.tuhhseriesid | Proceedings of the 14th International Conference on Agents and Artificial Intelligence | -
item.languageiso639-1 | en | -
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | -
item.openairetype | inProceedings | -
crisitem.author.dept | Department Informatik | -
crisitem.author.dept | Department Informatik | -
crisitem.author.orcid | 0000-0001-8606-9743 | -
crisitem.author.parentorg | Fakultät Technik und Informatik | -
crisitem.author.parentorg | Fakultät Technik und Informatik | -
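The abstract above describes the paper's pipeline: texts are embedded with a pre-trained SBERT encoder, a lightweight classifier such as Logistic Regression is trained on those fixed embeddings, and the model's least confident predictions are routed to a human for validation and correction. The following is a minimal illustrative sketch of that idea, not the authors' code: it assumes the sentence-transformers and scikit-learn packages, and the model name all-MiniLM-L6-v2, the toy data, and the 0.9 review threshold are placeholder choices.

```python
# Minimal sketch of uncertainty-based Human-in-the-Loop text classification
# with SBERT features, as outlined in the abstract above. Illustrative only:
# model name, toy data, and the 0.9 threshold are placeholder assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Encode texts once with a pre-trained SBERT model; the lightweight
# classifier is then trained (and cheaply re-trained) on the fixed
# 384-dimensional embeddings only.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

train_texts = ["great product", "terrible service",
               "works as expected", "broken on arrival"]
train_labels = [1, 0, 1, 0]  # toy binary labels
X_train = encoder.encode(train_texts)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_labels)  # fast: only a small linear model is fitted

# Classify new texts and route the least confident predictions to a human.
new_texts = ["arrived quickly and works well", "not sure what to make of it"]
X_new = encoder.encode(new_texts)
confidence = clf.predict_proba(X_new).max(axis=1)  # certainty per document
THRESHOLD = 0.9  # hypothetical review threshold

for text, pred, conf in zip(new_texts, clf.predict(X_new), confidence):
    if conf < THRESHOLD:
        print(f"human review ({conf:.2f}): {text!r}")   # manual validation
    else:
        print(f"auto label {pred} ({conf:.2f}): {text!r}")
```

Because the expensive SBERT encoding is computed once per document and reused, re-training only refits the small linear model, which is consistent with the under-4-second re-training figure reported in the abstract.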
Appears in collections: Publications without full text
This resource was published under the following copyright terms: Creative Commons license (CC BY-NC-ND 4.0).