---
language: de
license: mit
metrics:
- accuracy
model-index:
- name: GePaBERT
  results: []
---

# GePaBERT

This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large), adapted to a corpus of parliamentary speeches held in the German Bundestag.
It was designed specifically for the KONVENS 2023 shared task on speaker attribution.
It achieves the following results on the evaluation set:
- Loss: 0.7997
- Accuracy: 0.8020
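
Since the base model is a BERT-style masked language model, the checkpoint can be loaded with the standard `fill-mask` pipeline. A minimal usage sketch; the repository id below is a placeholder, not necessarily the actual Hub id of this model:

```python
from transformers import pipeline

# Hypothetical repository id -- replace with the actual Hub id of this checkpoint.
fill_mask = pipeline("fill-mask", model="GePaBERT")

# gbert-large uses the [MASK] token; a domain-typical example sentence:
for pred in fill_mask("Die Bundesregierung hat den [MASK] beschlossen."):
    print(f"{pred['token_str']:>15}  {pred['score']:.3f}")
```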

## Training and evaluation data

The corpus of parliamentary speeches (757 MB) covers speeches held in the German Bundestag during the 9th through 20th legislative periods, from 1980 to April 2023.
The speeches were automatically extracted from the publicly available [plenary protocols](https://www.bundestag.de/services/opendata), using the
[Open Discourse](https://opendiscourse.de) extraction pipeline ([GitHub code](https://github.com/open-discourse/open-discourse)).
Evaluation was done on a randomly sampled 5% held-out split.
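
A sketch of how such a held-out split can be produced with the `datasets` library; the data file name and seed are illustrative assumptions, not the values used for this model:

```python
from datasets import load_dataset

# The file name is a placeholder for the prepared speech corpus.
speeches = load_dataset("text", data_files={"train": "bundestag_speeches.txt"})["train"]

# Randomly sample a 5% held-out set for evaluation (seed chosen arbitrarily).
split = speeches.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = split["train"], split["test"]
```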

### Training hyperparameters

The following hyperparameters were used during training (a matching `Trainer` configuration sketch follows the list):
- `learning_rate`: 2e-05
- `train_batch_size`: 8
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `num_epochs`: 5
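
For reference, these settings map onto a `Trainer` setup roughly as follows. This is a sketch under the assumption that the model was adapted with masked language modeling (which the reported accuracy metric suggests); the output directory is a placeholder, and the datasets are assumed to be tokenized versions of the split sketched above:

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = AutoModelForMaskedLM.from_pretrained("deepset/gbert-large")

args = TrainingArguments(
    output_dir="gepabert",             # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                    # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,            # tokenized datasets assumed from the split above
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer),
)
trainer.train()
```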

### Training results

| Training Loss | Epoch | Step   | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 1.0697        | 0.1   | 3489   | 0.7697   | 0.9802          |
| 1.0339        | 0.2   | 6978   | 0.7727   | 0.9562          |
| 1.0203        | 0.3   | 10467  | 0.7739   | 0.9463          |
| 1.0215        | 0.4   | 13956  | 0.7743   | 0.9477          |
| 1.0046        | 0.5   | 17445  | 0.7779   | 0.9299          |
| 1.0036        | 0.6   | 20934  | 0.7764   | 0.9372          |
| 1.2439        | 0.7   | 24423  | 0.7352   | 1.2473          |
| 1.4382        | 0.8   | 27912  | 0.6947   | 1.5782          |
| 1.1744        | 0.9   | 31401  | 0.7764   | 0.9360          |
| 0.9718        | 1.0   | 34890  | 0.7799   | 0.9179          |
| 0.9557        | 1.1   | 38379  | 0.7824   | 0.9038          |
| 0.947         | 1.2   | 41868  | 0.7830   | 0.9000          |
| 0.9487        | 1.3   | 45357  | 0.7833   | 0.8982          |
| 0.9457        | 1.4   | 48846  | 0.7851   | 0.8862          |
| 0.9442        | 1.5   | 52335  | 0.7863   | 0.8839          |
| 0.9473        | 1.6   | 55824  | 0.7850   | 0.8855          |
| 0.9388        | 1.7   | 59313  | 0.7865   | 0.8771          |
| 0.9293        | 1.8   | 62802  | 0.7868   | 0.8805          |
| 0.9242        | 1.9   | 66291  | 0.7873   | 0.8738          |
| 0.9241        | 2.0   | 69780  | 0.7872   | 0.8757          |
| 0.9127        | 2.1   | 73269  | 0.7896   | 0.8641          |
| 0.9114        | 2.2   | 76758  | 0.7900   | 0.8627          |
| 0.9095        | 2.3   | 80247  | 0.7913   | 0.8540          |
| 0.9042        | 2.4   | 83736  | 0.7920   | 0.8518          |
| 0.8999        | 2.5   | 87225  | 0.7919   | 0.8514          |
| 0.899         | 2.6   | 90714  | 0.7918   | 0.8543          |
| 0.8945        | 2.7   | 94203  | 0.7935   | 0.8418          |
| 0.8867        | 2.8   | 97692  | 0.7934   | 0.8437          |
| 0.893         | 2.9   | 101181 | 0.7938   | 0.8414          |
| 0.8798        | 3.0   | 104670 | 0.7951   | 0.8359          |
| 0.868         | 3.1   | 108159 | 0.7943   | 0.8375          |
| 0.8736        | 3.2   | 111648 | 0.7956   | 0.8323          |
| 0.8756        | 3.3   | 115137 | 0.7959   | 0.8315          |
| 0.8681        | 3.4   | 118626 | 0.7964   | 0.8258          |
| 0.8726        | 3.5   | 122115 | 0.7966   | 0.8266          |
| 0.8594        | 3.6   | 125604 | 0.7967   | 0.8246          |
| 0.8515        | 3.7   | 129093 | 0.7973   | 0.8227          |
| 0.8568        | 3.8   | 132582 | 0.7979   | 0.8195          |
| 0.8626        | 3.9   | 136071 | 0.7983   | 0.8173          |
| 0.8585        | 4.0   | 139560 | 0.7978   | 0.8190          |
| 0.8497        | 4.1   | 143049 | 0.7991   | 0.8127          |
| 0.8383        | 4.2   | 146538 | 0.7992   | 0.8154          |
| 0.8457        | 4.3   | 150027 | 0.8002   | 0.8080          |
| 0.8353        | 4.4   | 153516 | 0.8005   | 0.8077          |
| 0.8393        | 4.5   | 157005 | 0.8009   | 0.8027          |
| 0.8417        | 4.6   | 160494 | 0.8050   | 0.8007          |
| 0.836         | 4.7   | 163983 | 0.8004   | 0.8017          |
| 0.8317        | 4.8   | 167472 | 0.7993   | 0.8021          |
| 0.832         | 4.9   | 170961 | 0.8011   | 0.8013          |


### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
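
A small sketch for checking that a local environment matches these pinned versions:

```python
import datasets, tokenizers, torch, transformers

# Expected versions from this card; other versions may work but are untested here.
expected = {
    transformers: "4.30.2",
    torch: "2.0.1",        # the card's build is 2.0.1+cu117 (CUDA 11.7)
    datasets: "2.13.1",
    tokenizers: "0.13.3",
}
for module, version in expected.items():
    print(f"{module.__name__}: found {module.__version__}, expected {version}")
```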