Author
Pougué Biyong, J
Wang, B
Lyons, T
Nevado-Holgado, A
Journal title
ACL Anthology
DOI
10.18653/v1/2020.clinicalnlp-1.5
Last updated
2024-03-24T23:46:21.98+00:00
Pages
41-54
Abstract
Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding, and adding a simple prediction layer, has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture that incorporates the signature transform with the self-attention model. This architecture is inserted between the embedding and prediction layers. Experiments on a new Swedish prescription dataset show the proposed architecture to be superior to baseline models in two of the three information extraction tasks. Finally, we evaluate two different embedding approaches: applying Multilingual BERT, and translating the Swedish text to English and then encoding it with a BERT model pretrained on clinical notes.
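To illustrate the pipeline the abstract describes (token embeddings, then self-attention combined with a signature transform, then a prediction layer), the following is a minimal sketch. The module layout, dimensions, and the use of the signatory library are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch: embeddings -> self-attention -> signature transform -> prediction.
# All names, dimensions, and the choice of the `signatory` library are assumptions.
import torch
import torch.nn as nn
import signatory


class SigAttentionHead(nn.Module):
    """Self-attention over a sequence of (e.g. BERT) embeddings, followed by a
    signature transform that summarises the attended sequence as a path."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 8,
                 sig_depth: int = 2, num_labels: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.sig_depth = sig_depth
        # Dimension of the truncated signature of a path in R^embed_dim.
        sig_dim = signatory.signature_channels(embed_dim, sig_depth)
        self.classifier = nn.Linear(sig_dim, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim), e.g. contextual embeddings from BERT.
        attended, _ = self.attn(x, x, x)
        # Treat the attended sequence as a path and take its signature up to
        # `sig_depth`, giving a fixed-size summary of the whole sequence.
        sig = signatory.signature(attended, self.sig_depth)
        return self.classifier(sig)


model = SigAttentionHead(embed_dim=64, sig_depth=2, num_labels=3)
logits = model(torch.randn(4, 20, 64))  # -> (4, 3)
```

Note that the signature dimension grows as sum of embed_dim^k for k up to sig_depth, so a low truncation depth (or a projection before the signature) keeps the prediction layer small.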
Symplectic ID
1138000
Publication type
Conference Paper
Publication date
01 Nov 2020