clarine committed
Commit
d4a1a0e
1 Parent(s): 1ad319f

Add model files and README

Files changed (4)
  1. README.md +126 -0
  2. config.json +30 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer.json +0 -0
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ language:
+ - de
+ - en
+ - es
+ - fr
+ - it
+ - ja
+ - nl
+ - pl
+ - pt
+ - zh
+ ---
+
+ # Model Card for `passage-ranker.pistachio`
+
+ This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
+
+ Model name: `passage-ranker.pistachio`
+
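+ The snippet below is a minimal scoring sketch using the Hugging Face `transformers` library. The model ID, the example query and passages, and the two-class monoBERT output convention are illustrative assumptions, not part of this card.
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "sinequa/passage-ranker.pistachio"  # hypothetical repository path
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model.eval()
+
+ query = "how to renew a passport"
+ passages = [
+     "You can renew your passport online or by mail.",
+     "Paris has mild weather in spring.",
+ ]
+
+ # A cross-encoder scores each query-passage pair jointly.
+ inputs = tokenizer(
+     [query] * len(passages), passages,
+     padding=True, truncation=True, return_tensors="pt",
+ )
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # monoBERT convention: softmax over two classes, last class = "relevant".
+ # Adjust if the classification head uses a single logit instead.
+ scores = logits.softmax(dim=-1)[:, -1]
+ for i in scores.argsort(descending=True):
+     print(f"{scores[i].item():.3f}  {passages[i]}")
+ ```
+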
+ ## Supported Languages
+
+ The model was trained and tested in the following languages:
+
+ - Chinese (simplified)
+ - Dutch
+ - English
+ - French
+ - German
+ - Italian
+ - Japanese
+ - Polish
+ - Portuguese
+ - Spanish
+
+ Besides the aforementioned languages, basic support can be expected for the 93 additional languages that were used during the pretraining of the base model (see the
+ [list of languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)).
+
+ ## Scores
+
+ | Metric | Value |
+ |:--------------------|------:|
+ | Relevance (NDCG@10) | 0.480 |
+
+ Note that the relevance score is computed as an average over 14 retrieval datasets (see
+ [details below](#evaluation-metrics)).
+
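+ For reference, NDCG@10 rewards placing highly relevant documents near the top of the ranking and normalizes by the ideal ordering. The helper below is a hypothetical sketch using one common gain formulation, not the evaluation code behind these numbers.
+
+ ```python
+ import math
+
+ def ndcg_at_10(relevances: list[float]) -> float:
+     """NDCG@10 for one query; `relevances` holds the graded relevance
+     of the candidates in the order ranked by the model."""
+     def dcg(rels):
+         # Linear gain, log2 position discount; ranks are 0-based here.
+         return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels[:10]))
+     ideal = dcg(sorted(relevances, reverse=True))
+     return dcg(relevances) / ideal if ideal > 0 else 0.0
+
+ print(ndcg_at_10([3, 2, 1, 0, 0]))  # perfect ordering -> 1.0
+ ```
+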
+ ## Inference Times
+
+ | GPU | Quantization type | Batch size 1 | Batch size 32 |
+ |:-----------|:------------------|-------------:|--------------:|
+ | NVIDIA A10 | FP16 | 2 ms | 28 ms |
+ | NVIDIA A10 | FP32 | 4 ms | 82 ms |
+ | NVIDIA T4 | FP16 | 3 ms | 65 ms |
+ | NVIDIA T4 | FP32 | 14 ms | 369 ms |
+ | NVIDIA L4 | FP16 | 3 ms | 38 ms |
+ | NVIDIA L4 | FP32 | 5 ms | 123 ms |
+
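+ The figures above come from the ONNX Runtime setup described below; as a rough PyTorch analogue, per-batch latency can be timed with synchronized forward passes. Everything in this sketch (warmup and iteration counts included) is illustrative.
+
+ ```python
+ import time
+ import torch
+
+ def measure_latency_ms(model, inputs, warmup=10, iters=100):
+     """Average forward-pass latency in milliseconds on a CUDA device."""
+     with torch.no_grad():
+         for _ in range(warmup):      # warm up kernels and caches
+             model(**inputs)
+         torch.cuda.synchronize()
+         start = time.perf_counter()
+         for _ in range(iters):
+             model(**inputs)
+         torch.cuda.synchronize()     # wait for queued kernels to finish
+     return (time.perf_counter() - start) * 1000 / iters
+ ```
+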
+ ## GPU Memory Usage
+
+ | Quantization type | Memory |
+ |:------------------|---------:|
+ | FP16 | 850 MiB |
+ | FP32 | 1200 MiB |
+
+ Note that GPU memory usage only includes how much GPU memory the model itself consumes on an NVIDIA T4 GPU with a batch
+ size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which
+ can be around 0.5 to 1 GiB depending on the GPU used.
+
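+ As a rough PyTorch analogue of this measurement (reusing `model` and `inputs` from the scoring sketch above, with a batch of 32 pairs), the peak allocation can be read from the CUDA allocator. This is an assumption-laden sketch, not the procedure used for the table.
+
+ ```python
+ import torch
+
+ torch.cuda.reset_peak_memory_stats()
+ model = model.half().cuda()   # FP16 variant; drop .half() for FP32
+ gpu_inputs = {k: v.cuda() for k, v in inputs.items()}
+ with torch.no_grad():
+     model(**gpu_inputs)
+ print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
+ ```
+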
+ ## Requirements
+
+ - Minimal Sinequa version: 11.10.0
+ - Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
+ - [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
+
+ ## Model Details
+
+ ### Overview
+
+ - Number of parameters: 167 million
+ - Base language model: [Multilingual BERT-Base](https://huggingface.co/bert-base-multilingual-uncased)
+ - Insensitive to casing and accents
+ - Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
+
+ ### Training Data
+
+ - MS MARCO Passage Ranking
+   ([Paper](https://arxiv.org/abs/1611.09268),
+   [Official Page](https://microsoft.github.io/msmarco/),
+   [English & translated datasets on the HF dataset hub](https://huggingface.co/datasets/unicamp-dl/mmarco), [translated dataset in Polish on the HF dataset hub](https://huggingface.co/datasets/clarin-knext/msmarco-pl))
+   - Original English dataset
+   - Translated datasets for the other nine supported languages
+
+ ### Evaluation Metrics
+
+ To determine the relevance score, we averaged the results obtained when evaluating on the datasets of the
+ [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
+
+ | Dataset | NDCG@10 |
+ |:------------------|--------:|
+ | Average | 0.474 |
+ | | |
+ | Arguana | 0.539 |
+ | CLIMATE-FEVER | 0.230 |
+ | DBPedia Entity | 0.369 |
+ | FEVER | 0.765 |
+ | FiQA-2018 | 0.329 |
+ | HotpotQA | 0.694 |
+ | MS MARCO | 0.413 |
+ | NFCorpus | 0.337 |
+ | NQ | 0.486 |
+ | Quora | 0.714 |
+ | SCIDOCS | 0.144 |
+ | SciFact | 0.649 |
+ | TREC-COVID | 0.651 |
+ | Webis-Touche-2020 | 0.312 |
+
+ We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report metrics for the supported languages that it covers.
+
+ | Language | NDCG@10 |
+ |:----------------------|--------:|
+ | Chinese (simplified) | 0.454 |
+ | French | 0.439 |
+ | German | 0.418 |
+ | Japanese | 0.517 |
+ | Spanish | 0.487 |
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.34.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 105879
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9940cfb211b3702b096051646e3b8284c342f721701378cd58aad2f7680ab971
+ size 669500654
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff