Publisher link DOI: 10.3390/e23111422
Title: Language representation models: an overview
Language: English
Authors: Schomacker, Thorben
Tropmann-Frick, Marina
Keywords: Attention-based models; Deep learning; Embeddings; Multi-task learning; Natural language processing; Neural networks; Transformer
Publication date: 28-Oct-2021
Publisher: MDPI
Citation: Article number 1422
Journal or series: Entropy
Volume: 23
Issue: 11
Abstract:
In the last few decades, text mining has been used to extract knowledge from free text. Applying neural networks and deep learning to natural language processing (NLP) tasks has led to many accomplishments on real-world language problems over the years. The developments of the last five years have produced techniques that allow for the practical application of transfer learning in NLP. The advances in the field have been substantial, and the milestone of outperforming the human baseline on the General Language Understanding Evaluation (GLUE) benchmark has been reached. This paper presents a targeted literature review that outlines, describes, explains, and contextualizes the crucial techniques that helped achieve this milestone. The research presented here is a targeted review of neural language models that mark vital steps towards a general language representation model.
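To make the transfer-learning pattern named in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the common approach: loading a pretrained Transformer language representation model and attaching a freshly initialized head for a GLUE-style classification task. It assumes the Hugging Face transformers library and PyTorch; the model name "bert-base-uncased" and the two-label task are illustrative choices only.

```python
# Minimal sketch of transfer learning with a pretrained language
# representation model (illustrative; not the paper's code).
# Assumes: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pretrained encoder plus a freshly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # e.g. a binary GLUE-style task such as SST-2
)

# Encode a sentence and run it through the model; fine-tuning would
# update these pretrained weights on labeled downstream-task data.
inputs = tokenizer(
    "Language representation models transfer well to downstream tasks.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.shape)  # torch.Size([1, 2]): one score per class
```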
URI: http://hdl.handle.net/20.500.12738/12332
ISSN: 1099-4300
Institution: Department Informatik
Fakultät Technik und Informatik
Document type: Journal article
Appears in collections: Publications without full text

This resource was published under the following copyright terms: Creative Commons license.