DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kinkeldey, Christoph | - |
dc.contributor.author | Müller-Birn, Claudia | - |
dc.contributor.author | Gülenman, Tom | - |
dc.contributor.author | Benjamin, Jesse Josua | - |
dc.contributor.author | Halfaker, Aaron | - |
dc.date.accessioned | 2023-08-30T15:00:46Z | - |
dc.date.available | 2023-08-30T15:00:46Z | - |
dc.date.issued | 2019-07-17 | - |
dc.identifier.uri | http://hdl.handle.net/20.500.12738/14108 | - |
dc.description.abstract | Machine learning systems are ubiquitous in digital applications and have a substantial impact on our everyday lives. However, the lack of explainability and interpretability of such systems hinders meaningful participation, especially by people without a technical background. Interactive visual interfaces (e.g., means for manipulating parameters in the user interface) can help address this challenge. In this position paper, we present PreCall, an interactive visual interface for ORES, a machine-learning-based web service for Wikimedia projects such as Wikipedia. While ORES can be used in a number of settings, it can be challenging to translate requirements from the application domain into the formal parameter sets needed to configure the ORES models. Assisting Wikipedia editors in finding damaging edits, for example, can be realized at various degrees of automation, which may affect the precision of the applied model. Our prototype, PreCall, attempts to close this translation gap by interactively visualizing the relationship between the major model parameters: recall, precision, false positive rate, and the threshold between valuable and damaging edits. Furthermore, PreCall visualizes the probable results for the current parameter set to improve users' understanding of the relationship between parameters and outcomes when using ORES. We describe PreCall's components and present a use case that highlights the benefits of our approach. Finally, we pose further research questions that we would like to discuss during the workshop. | en |
dc.language.iso | en | en_US |
dc.publisher | Center for Open Science (OSF) | en_US |
dc.subject | machine learning | en_US |
dc.subject | Visualization | en_US |
dc.subject | Wikimedia | en_US |
dc.subject | Explainability | en_US |
dc.subject.ddc | 020: Library and Information Science | en_US |
dc.title | PreCall: a visual interface for threshold optimization in ML model selection | en |
dc.type | Preprint | en_US |
dc.relation.conference | Conference on Human Factors in Computing Systems 2019 | en_US |
dc.description.version | NonPeerReviewed | en_US |
tuhh.oai.show | true | en_US |
tuhh.publication.institute | Freie Universität Berlin | en_US |
tuhh.publisher.doi | 10.31219/osf.io/rp76n | - |
tuhh.type.opus | Preprint | - |
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.type.casrai | Other | - |
dc.type.dini | preprint | - |
dc.type.driver | preprint | - |
dc.type.status | info:eu-repo/semantics/acceptedVersion | en_US |
dcterms.DCMIType | Text | - |
item.creatorGND | Kinkeldey, Christoph | - |
item.creatorGND | Müller-Birn, Claudia | - |
item.creatorGND | Gülenman, Tom | - |
item.creatorGND | Benjamin, Jesse Josua | - |
item.creatorGND | Halfaker, Aaron | - |
item.languageiso639-1 | en | - |
item.cerifentitytype | Publications | - |
item.openairecristype | http://purl.org/coar/resource_type/c_816b | - |
item.creatorOrcid | Kinkeldey, Christoph | - |
item.creatorOrcid | Müller-Birn, Claudia | - |
item.creatorOrcid | Gülenman, Tom | - |
item.creatorOrcid | Benjamin, Jesse Josua | - |
item.creatorOrcid | Halfaker, Aaron | - |
item.fulltext | No Fulltext | - |
item.grantfulltext | none | - |
item.openairetype | Preprint | - |
crisitem.author.dept | Department Information und Medienkommunikation | - |
crisitem.author.orcid | 0000-0001-5669-6295 | - |
crisitem.author.parentorg | Fakultät Design, Medien und Information | - |
Appears in Collections: | Publications without full text |
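
The abstract above centers on the trade-off that PreCall visualizes: how moving the classification threshold between "valuable" and "damaging" edits shifts recall, precision, and false positive rate. The sketch below illustrates that relationship with a plain threshold sweep. It is a minimal, hypothetical example with invented scores and labels; it is not PreCall's or ORES's actual code, only the underlying arithmetic.

```python
# Minimal sketch of the threshold/precision/recall/FPR relationship that
# PreCall visualizes. Scores and labels are invented for illustration.

def metrics_at(scores, labels, threshold):
    """Precision, recall, and false positive rate at a given threshold.

    scores: predicted probabilities that an edit is damaging.
    labels: True if the edit is actually damaging, else False.
    """
    tp = sum(s >= threshold and y for s, y in zip(scores, labels))
    fp = sum(s >= threshold and not y for s, y in zip(scores, labels))
    fn = sum(s < threshold and y for s, y in zip(scores, labels))
    tn = sum(s < threshold and not y for s, y in zip(scores, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, fpr

# Toy damage probabilities and ground-truth labels (hypothetical data).
scores = [0.10, 0.30, 0.45, 0.60, 0.75, 0.90]
labels = [False, False, True, False, True, True]

# Sweeping the threshold exposes the trade-off a PreCall-style interface
# lets users explore interactively: with this data, raising the threshold
# improves precision and lowers the false positive rate while recall drops.
for t in (0.3, 0.5, 0.7):
    p, r, f = metrics_at(scores, labels, t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}  fpr={f:.2f}")
```

In PreCall these quantities are derived from the statistics of the configured ORES model rather than computed from raw toy data; the sketch only mirrors the arithmetic behind the visualized curves.
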
This item is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.