clarine committed on
Commit 4115910
1 Parent(s): 44a46d4

Unified readme format

Files changed (1)
  1. README.md +12 -10
README.md CHANGED
@@ -7,9 +7,9 @@ language:
  - it
  - ja
  - nl
- - pl
  - pt
  - zh
+ - pl
  ---

  # Model Card for `passage-ranker.pistachio`
@@ -22,16 +22,16 @@ Model name: `passage-ranker.pistachio`

  The model was trained and tested in the following languages:

- - Chinese (simplified)
- - Dutch
  - English
  - French
  - German
+ - Spanish
  - Italian
+ - Dutch
  - Japanese
- - Polish
  - Portuguese
- - Spanish
+ - Chinese (simplified)
+ - Polish

  Besides the aforementioned languages, basic support can be expected for additional 93 languages that were used during the pretraining of the base model (see
  [list of languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)).
@@ -43,7 +43,7 @@ Besides the aforementioned languages, basic support can be expected for addition
  | English Relevance (NDCG@10) | 0.474 |
  | Polish Relevance (NDCG@10) | 0.380 |

- Note that the relevance score is computed as an average over 14 retrieval datasets (see
+ Note that the relevance score is computed as an average over several retrieval datasets (see
  [details below](#evaluation-metrics)).

  ## Inference Times
@@ -131,7 +131,7 @@ the [PIRBenchmark](https://github.com/sdadas/pirb) with BM25 as the first stage
  | arguana-pl | 0.285 |
  | dbpedia-pl | 0.283 |
  | fiqa-pl | 0.223 |
- | hotpoqa-pl | 0.603 |
+ | hotpotqa-pl | 0.603 |
  | msmarco-pl | 0.259 |
  | nfcorpus-pl | 0.293 |
  | nq-pl | 0.355 |
@@ -142,12 +142,14 @@ the [PIRBenchmark](https://github.com/sdadas/pirb) with BM25 as the first stage

  #### Other languages

- We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics for the existing languages.
+ We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its
+ multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics
+ for the existing languages.

  | Language | NDCG@10 |
  |:----------------------|--------:|
- | Chinese (simplified) | 0.454 |
  | French | 0.439 |
  | German | 0.418 |
+ | Spanish | 0.487 |
  | Japanese | 0.517 |
- | Spanish | 0.487 |
+ | Chinese (simplified) | 0.454 |
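
For reference, the "Relevance (NDCG@10)" figures quoted in the README are plain averages of per-dataset NDCG@10 scores. Below is a minimal sketch of that computation, assuming equal weighting across datasets and using only the Polish PIRB rows visible in this diff; the full README table contains additional datasets, so this partial mean will not reproduce the reported 0.380 exactly.

```python
# Minimal sketch: the headline "Polish Relevance (NDCG@10)" value is an
# unweighted mean of per-dataset NDCG@10 scores. Only the datasets visible
# in this diff hunk are listed here, so the result is a partial mean.
ndcg_at_10 = {
    "arguana-pl": 0.285,
    "dbpedia-pl": 0.283,
    "fiqa-pl": 0.223,
    "hotpotqa-pl": 0.603,
    "msmarco-pl": 0.259,
    "nfcorpus-pl": 0.293,
    "nq-pl": 0.355,
}

relevance = sum(ndcg_at_10.values()) / len(ndcg_at_10)
print(f"Partial Polish relevance (mean NDCG@10 over {len(ndcg_at_10)} datasets): {relevance:.3f}")
```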