---
language:
  - da
  - no
license: cc-by-4.0
datasets:
  - MiMe-MeMo/Corpus-v1.1
  - MiMe-MeMo/Sentiment-v1
  - MiMe-MeMo/WSD-Skaebne
metrics:
  - f1
tags:
  - historical-texts
  - digital-humanities
  - sentiment-analysis
  - word-sense-disambiguation
  - danish
  - norwegian
model-index:
  - name: MeMo-BERT-03
    results:
      - task:
          type: text-classification
          name: Sentiment Analysis
        dataset:
          name: MiMe-MeMo/Sentiment-v1
          type: MiMe-MeMo/Sentiment-v1
        metrics:
          - name: f1
            type: f1
            value: 0.77
      - task:
          type: text-classification
          name: Word Sense Disambiguation
        dataset:
          name: MiMe-MeMo/WSD-Skaebne
          type: MiMe-MeMo/WSD-Skaebne
        metrics:
          - name: f1
            type: f1
            value: 0.61
---

# MeMo-BERT-03

**MeMo-BERT-03** is a pre-trained language model for **historical Danish and Norwegian literary texts** (1870–1900).  
It was introduced in [Al-Laith et al. (2024)](https://aclanthology.org/2024.lrec-main.431/) as one of the first dedicated pre-trained language models (PLMs) for historical Danish and Norwegian.

## Model Description

- **Architecture:** XLM-RoBERTa-base (12 layers, 768 hidden size, 12 attention heads)  
- **Pre-training strategy:** Continued pre-training of [DanskBERT](https://huggingface.co/vesteinn/DanskBERT) on historical data  
- **Training objective:** Masked Language Modeling (MLM, 15% masking); a fill-mask usage sketch follows this list  
- **Training data:** MeMo Corpus v1.1 (839 novels, ~53M words, 1870–1900)  
- **Hardware:** 2 × A100 GPUs  
- **Training time:** ~32 hours  
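
Since the model was pre-trained with a masked-language-modelling objective, fill-mask prediction is the most direct way to try it out. Below is a minimal sketch using Hugging Face `transformers`; the model ID `MiMe-MeMo/MeMo-BERT-03` and the example sentence are assumptions, so adjust them to the actual repository path.

```python
from transformers import pipeline

# Model ID assumed to match this repository (MiMe-MeMo organisation);
# adjust if the model is hosted under a different path.
model_id = "MiMe-MeMo/MeMo-BERT-03"

# Fill-mask probes the MLM objective the model was pre-trained with.
fill = pipeline("fill-mask", model=model_id)

# Illustrative Danish sentence ("She felt that her <mask> was sealed.").
sentence = f"Hun følte, at hendes {fill.tokenizer.mask_token} var beseglet."
for prediction in fill(sentence, top_k=5):
    print(f"{prediction['token_str']:>15}  {prediction['score']:.3f}")
```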

## Intended Use

- **Primary tasks** (a fine-tuning sketch follows this section):  
  - Sentiment Analysis (positive, neutral, negative)  
  - Word Sense Disambiguation (historical vs. modern senses of *skæbne*, "fate")  

- **Intended users:**  
  - Researchers in Digital Humanities, Computational Linguistics, and Scandinavian Studies.  
  - Historians of literature studying 19th-century Scandinavian novels.  

- **Not intended for:**  
  - Contemporary Danish/Norwegian NLP tasks (performance may degrade).  
  - High-stakes applications (e.g., legal, medical, political decision-making).  
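
For the downstream tasks above, the pre-trained checkpoint needs task-specific fine-tuning. The sketch below sets up the three-class sentiment task; the column names (`text`, `label`) and the hyperparameters are assumptions, not the setup reported in the paper.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "MiMe-MeMo/MeMo-BERT-03"  # assumed repository path

# Column names "text" and "label" are assumptions; check the dataset schema.
dataset = load_dataset("MiMe-MeMo/Sentiment-v1")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

# Three labels: positive, neutral, negative.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="memo-bert-03-sentiment",
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset.get("validation"),
    tokenizer=tokenizer,
)
trainer.train()
```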

## Training Data

- **Corpus:** [MeMo Corpus v1.1](https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1) (Bjerring-Hansen et al. 2022); see the loading sketch after this list  
- **Time period:** 1870–1900  
- **Size:** 839 novels, 690 MB, 3.2M sentences, 52.7M words  
- **Preprocessing:** OCR-corrected, normalized to modern Danish spelling, tokenized, lemmatized, annotated  
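
A quick way to inspect the pre-training corpus is to load it from the Hub; the split name and the printed fields below are assumptions, so check the dataset card for the actual schema.

```python
from datasets import load_dataset

# Split name "train" is an assumption; check the dataset card.
corpus = load_dataset("MiMe-MeMo/Corpus-v1.1", split="train")
print(corpus)      # features and number of rows
print(corpus[0])   # one OCR-corrected, normalized excerpt
```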

## Evaluation

### Benchmarks

| Task | Dataset | Test F1 | Notes |
|------|---------|---------|-------|
| Sentiment Analysis | MiMe-MeMo/Sentiment-v1 | **0.77** | 3-class (pos/neg/neu) |
| Word Sense Disambiguation | MiMe-MeMo/WSD-Skaebne | **0.61** | 4-class (pre-modern, modern, figure of speech, ambiguous) |
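
The scores above are reported as F1 in the paper. A minimal sketch of scoring predictions against gold labels with scikit-learn, assuming macro-averaging over the three sentiment classes (the averaging mode is an assumption):

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and predictions for the 3-class sentiment task;
# macro-averaging is an assumption, not necessarily the paper's setting.
gold = ["positive", "neutral", "negative", "positive", "neutral"]
pred = ["positive", "neutral", "positive", "positive", "negative"]

print(f1_score(gold, pred, average="macro"))
```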

### Comparison

MeMo-BERT-03 outperforms MeMo-BERT-01, MeMo-BERT-02, and contemporary baselines (DanskBERT, ScandiBERT, DanBERT, BotXO) on both tasks.

## Limitations

- Domain-specific: trained only on **novels from 1870–1900**.  
- May not generalize to other genres (newspapers, folk tales, poetry).  
- Evaluation datasets are relatively small.  
- OCR/normalization errors remain in some texts.  

## Ethical Considerations

- All texts are **public domain** (authors deceased).  
- Datasets released under **CC BY 4.0**.  
- Word sense annotations created by literary scholars, no sensitive personal data.  

## Citation

If you use this model, please cite:

```bibtex
@inproceedings{al-laith-etal-2024-development,
    title = "Development and Evaluation of Pre-trained Language Models for Historical {D}anish and {N}orwegian Literary Texts",
    author = "Al-Laith, Ali and Conroy, Alexander and Bjerring-Hansen, Jens and Hershcovich, Daniel",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    pages = "4811--4819",
    url = "https://aclanthology.org/2024.lrec-main.431/"
}
```