## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-12b-v2.5-kto).
## imatrix_data included
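Since the importance-matrix data ships with this repo, one plausible use is re-quantizing the GGUF with llama.cpp's `--imatrix` support. This is a hedged sketch: the filenames below are hypothetical, and the actual `llama-quantize` invocation is shown as a comment since it requires a llama.cpp build on PATH.

```shell
# Hypothetical filenames; adjust to the actual files in this repo.
MODEL=magnum-12b-v2.5-kto-f16.gguf
IMATRIX=imatrix.dat
QTYPE=Q4_K_M

# With a llama.cpp build available, the quantization step would look like:
#   llama-quantize --imatrix "$IMATRIX" "$MODEL" "magnum-12b-v2.5-kto-$QTYPE.gguf" "$QTYPE"
echo "quantize $MODEL -> magnum-12b-v2.5-kto-$QTYPE.gguf using $IMATRIX"
```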
![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/kLpOpCsAm2SQu544mlFU_.png)
v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP. The "rejected" data are responses sampled from the original model, while the "chosen" data come from the original finetuning dataset.
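The chosen/rejected pairing described above can be sketched roughly as follows. `finetune_data` and `sample_from_model` are hypothetical stand-ins, not the actual training pipeline; the real datasets and sampling settings are not shown here.

```python
# Hedged sketch of the KTO/DPOP preference-data construction described above.
# `finetune_data` and `sample_from_model` are hypothetical stand-ins.

def sample_from_model(prompt: str) -> str:
    # Placeholder: in practice this would sample a completion
    # from the original magnum-12b model.
    return f"<model sample for: {prompt}>"

# Original finetuning dataset (hypothetical rows).
finetune_data = [
    {"prompt": "Write a haiku about rain.", "completion": "Rain taps the glass..."},
]

# "chosen" comes from the finetuning dataset; "rejected" is sampled
# from the original model.
preference_data = [
    {
        "prompt": row["prompt"],
        "chosen": row["completion"],
        "rejected": sample_from_model(row["prompt"]),
    }
    for row in finetune_data
]
print(len(preference_data))
```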