SicariusSicariiStuff committed
Commit 08a151a
1 Parent(s): 1690bdb

Update README.md

Files changed (1)
  1. README.md +12 -1
README.md CHANGED

@@ -34,7 +34,18 @@ Cheers,
 Sicarius
 </details>

+<details>
+<summary><b>June 20, 2024 Update</b>: Unaligning was partially successful, and the results are OK, but I am not fully satisfied. I decided to bite the bullet and do a full finetune, god have mercy on my GPUs. I am also releasing the intermediate checkpoint of this model.</summary>
+It's been a long ride, and I want to do it right, but the model would simply refuse some requests, with (almost) complete disregard for parts of the training data. Of course, one could argue that some easy prompt engineering would get around it, but the point was to make a model that is unaligned out of the box. Another option would be to simply use a higher learning rate over more epochs, which would also work (I've tried it before), but the result would be an overcooked, and therefore dumber, model. So I decided to bite the bullet and do a full, proper fine-tune. This is going to be a serious pain in the ass, but I might as well try to do it right. Since I am releasing the intermediate checkpoint of this model at https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha, I might as well take the time to add some features I haven't seen in other models. In short, besides the usual goodies of logic, some theory of mind, and uncensored content along with general NLP tasks, I will TRY to add a massive story-writing dataset (one that does not yet exist) and a new, completely organic and original roleplay dataset. LimaRP is awesome, but maybe, just maybe, once things are finally, carefully extricated from LimaRP, the same sentences will leave its entwined body under the stars and head towards something new, something fresh. This is going to take some serious effort and some time. Any support will be appreciated, even if it's just some feedback. My electricity bill is gonna be huge this month, LOL.
+
+Cheers,
+
+Sicarius
+</details>
+
+## Intermediate checkpoint of this model:

+- (Can still be decent for merges, fairly uncensored): [LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)


 # Model instruction template:
@@ -55,7 +66,7 @@ TO BE UPDATED SOON

 - FP16: soon...
 - EXL2: soon...
-- GPTQ: soon...
+- GGUF: soon...

 ### Support
 <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
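
Since the update points readers at the intermediate checkpoint, here is a minimal loading sketch using Hugging Face `transformers`. The repo id is taken from the link in the diff above; the dtype, device placement, and generation settings are assumptions on my part, not recommendations from the model card (which is still marked "TO BE UPDATED SOON").

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the link in the diff; everything below it is an assumption.
model_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit your hardware
    device_map="auto",           # assumption: let accelerate place layers
)

prompt = "Write a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```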