Publisher DOI: 10.3390/e23111422
Title: Language Representation Models: An Overview
Language: English
Authors: Schomacker, Thorben 
Tropmann-Frick, Marina  
Keywords: Attention-based models; Deep learning; Embeddings; Multi-task learning; Natural language processing; Neural networks; Transformer
Issue Date: 28-Oct-2021
Publisher: MDPI
Source: article number: 1422
Journal or Series Name: Entropy 
Volume: 23
Issue: 11
Abstract: 
In the last few decades, text mining has been used to extract knowledge from free texts. Applying neural networks and deep learning to natural language processing (NLP) tasks has led to many accomplishments for real-world language problems over the years. The developments of the last five years have produced techniques that allow for the practical application of transfer learning in NLP. The advances in the field have been substantial, and the milestone of outperforming the human baseline on the General Language Understanding Evaluation (GLUE) benchmark has been achieved. This paper presents a targeted literature review that outlines, describes, explains, and puts into context the crucial techniques that helped achieve this milestone. The research presented here is a targeted review of neural language models that represent vital steps towards a general language representation model.
URI: http://hdl.handle.net/20.500.12738/12332
ISSN: 1099-4300
Institute: Department Informatik 
Fakultät Technik und Informatik 
Type: Article
Appears in Collections: Publications without full text

This item is licensed under a Creative Commons License.