# BGE-M3

In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It supports more than 100 working languages.
- Multi-Granularity: It can process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.

**Some suggestions for a retrieval pipeline in RAG:**
We recommend using the following pipeline: hybrid retrieval + re-ranking.
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) at no additional cost when generating dense embeddings.
- As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
Utilizing a re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text; a minimal sketch follows this list.
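As an illustration of the re-ranking step, here is a minimal sketch using bge-reranker through the same FlagEmbedding package; the checkpoint name and the candidate passages are illustrative, and would normally come from your own hybrid retrieval stage.

```python
from FlagEmbedding import FlagReranker

# Load a cross-encoder re-ranker; as with the embedding model below,
# use_fp16=True trades a little accuracy for speed.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)

query = "What is BGE M3?"
# Illustrative candidates, e.g., the top hits from hybrid retrieval.
candidates = [
    "BM25 is a bag-of-words retrieval function.",
    "BGE M3 is an embedding model supporting dense, lexical, and multi-vector retrieval.",
]

# A cross-encoder scores each (query, passage) pair jointly, which is
# more accurate but slower than scoring against precomputed embeddings.
scores = reranker.compute_score([[query, passage] for passage in candidates])

# Keep the best-scoring passages for the downstream generator.
for passage, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```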


## FAQ

**1. Introduction to different retrieval methods**

- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text, e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832); a toy sketch contrasting the three scoring schemes follows this list.
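To make these definitions concrete, the toy numpy sketch below contrasts how each representation scores a query against a passage; the vectors and weights are random placeholders, not actual model outputs or library code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense retrieval: one vector per text; relevance is a single dot product.
q_dense, p_dense = rng.random(1024), rng.random(1024)
dense_score = q_dense @ p_dense

# Sparse retrieval: token -> weight maps (implicitly, vocabulary-sized vectors
# that are mostly zero); relevance accumulates only over shared tokens.
q_sparse = {"bge": 0.30, "m3": 0.27}
p_sparse = {"bge": 0.25, "embedding": 0.20}
sparse_score = sum(w * p_sparse[t] for t, w in q_sparse.items() if t in p_sparse)

# Multi-vector retrieval: one vector per token; each query token is matched
# to its most similar passage token (ColBERT-style late interaction).
q_vecs, p_vecs = rng.random((7, 1024)), rng.random((20, 1024))
multi_score = (q_vecs @ p_vecs.T).max(axis=1).mean()

print(dense_score, sparse_score, multi_score)
```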

**2. How to use BGE-M3 in other projects?**

For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE; see the sketch below.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct use of the BGE-M3 model.
Contributions from the community are welcome.
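As a minimal illustration (the texts here are made up for the example), queries are passed to BGE-M3 exactly as written, with no retrieval instruction such as the "Represent this sentence for searching relevant passages: " prefix that bge-v1.5 uses:

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3')

# The query is encoded directly; no instruction prefix is prepended.
q_embs = model.encode(["How many languages does BGE-M3 support?"])['dense_vecs']
p_embs = model.encode(["BGE-M3 supports more than 100 working languages."])['dense_vecs']
print(q_embs @ p_embs.T)
```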

**3. How to fine-tune the bge-m3 model?**

You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.

Our code and data for unified fine-tuning (dense, sparse, and multi-vector) will be released.

## Usage

Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```


### Generate Embedding for text

- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)  # setting use_fp16 to True speeds up computation with a slight performance degradation

sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
               "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]

embeddings_1 = model.encode(sentences_1)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You can also use sentence-transformers and Hugging Face transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
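For instance, here is a minimal sentence-transformers sketch for the dense vectors, assuming bge-m3 loads through sentence-transformers the same way as the other BGE checkpoints described at that link:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-m3')
embeddings = model.encode(
    ["What is BGE M3?", "Defination of BM25"],
    normalize_embeddings=True,  # with unit-length vectors, dot product equals cosine similarity
)
print(embeddings.shape)
```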
88
+
89
+
90
+ - Sparse Embedding (Lexical Weight)
91
+ ```python
92
+ from FlagEmbedding import BGEM3FlagModel
93
+
94
+ model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
95
+
96
+ sentences_1 = ["What is BGE M3?", "Defination of BM25"]
97
+ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
98
+ "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
99
+
100
+ output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
101
+ output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
102
+
103
+ # you can see the weight for each token:
104
+ print(model.convert_id_to_token(output_1['lexical_weights']))
105
+ # [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
106
+ # {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
107
+
108
+
109
+ # compute the scores via lexical mathcing
110
+ lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
111
+ print(lexical_scores)
112
+ # 0.19554901123046875
113
+
114
+ print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
115
+ # 0.0
116
+ ```
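The lexical matching score only accumulates weight over tokens that occur in both texts, which is why the two unrelated sentences above score exactly 0.0. An illustrative re-implementation (not the library's actual code):

```python
def lexical_matching_score(weights_1: dict, weights_2: dict) -> float:
    # Sum the products of the weights of tokens shared by the two texts;
    # texts with disjoint token sets therefore score exactly 0.0.
    return sum(w * weights_2[t] for t, w in weights_1.items() if t in weights_2)
```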

- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)

sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
               "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]

output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)

print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
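`colbert_score` follows ColBERT-style late interaction: each query token vector is matched to its most similar passage token vector, and those per-token maxima are aggregated. A rough numpy sketch (the exact aggregation, e.g., sum versus average, is an implementation detail):

```python
import numpy as np

def maxsim_score(q_vecs: np.ndarray, p_vecs: np.ndarray) -> float:
    # q_vecs: (num_query_tokens, dim); p_vecs: (num_passage_tokens, dim)
    sim = q_vecs @ p_vecs.T               # all token-to-token similarities
    return float(sim.max(axis=1).mean())  # best passage match per query token, averaged
```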


### Compute score for text pairs
Given a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)

sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
               "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]

sentence_pairs = [[i, j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs))
# {
#   'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
#   'sparse': [0.05865478515625, 0.0026397705078125, 0.0, 0.0540771484375],
#   'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
#   'sparse+dense': [0.5266395211219788, 0.2692706882953644, 0.2691181004047394, 0.563307523727417],
#   'colbert+sparse+dense': [0.6366440653800964, 0.3531297743320465, 0.3487969636917114, 0.6618075370788574]
# }
```
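The 'sparse+dense' and 'colbert+sparse+dense' entries are weighted combinations of the per-mode scores. A generic fusion sketch (the weights below are placeholders for illustration, not the library's defaults):

```python
# Per-mode scores for the first pair above, combined with placeholder weights.
weights = {'dense': 1.0, 'sparse': 0.3, 'colbert': 1.0}
scores = {'dense': 0.6260, 'sparse': 0.0587, 'colbert': 0.7796}

fused = sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
print(round(fused, 4))  # a convex combination of the per-mode scores
```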



## Evaluation

- Multilingual (MIRACL dataset)

![avatar](./imgs/miracl.jpg)

- Cross-lingual (MKQA dataset)

![avatar](./imgs/mkqa.jpg)

- Long Document Retrieval

![avatar](./imgs/long.jpg)

## Training
- Self-knowledge Distillation: combining multiple outputs from different retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval)
- Efficient Batching: improve the efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective, and can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve the performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model on long text, this method is useful.

Refer to our [report]() for more details.

**The fine-tuning codes and datasets will be open-sourced in the near future.**

## Models

We release two versions:
- BAAI/bge-m3-unsupervised: the model after contrastive learning on a large-scale dataset
- BAAI/bge-m3: the final model fine-tuned from BAAI/bge-m3-unsupervised

## Acknowledgement

Thanks to the authors of the open-source datasets, including MIRACL, MKQA, NarrativeQA, etc.

## Citation

If you find this repository useful, please consider giving it a star :star: and a citation.

```

```