Lewdiculous committed
Commit c7c7f9c
1 Parent(s): f3658f0

Update README.md

Files changed (1):
  1. README.md (+33, -0)
README.md CHANGED
@@ -1,3 +1,36 @@
  ---
  license: cc-by-4.0
+ tags:
+ - requests
+ - gguf
+ - quantized
  ---
+ # Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!
+
+ Read below for more information.
+
+ **Requirements to request model quantizations:**
+
+ For the model:
+ - Maximum model parameter size of **11B**. <br>
+ *At the moment I am unable to accept requests for larger models due to hardware/time limitations.*
+
+ Important:
+ - Fill in the request template as outlined in the next section.
+
+ #### How to request a model quantization:
+
+ 1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) with a title of "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`".
+
+ 2. Include the following template in your message and fill in the information ([example request here](link.link)):
+
+ ```
+ Model name:
+
+ Model link:
+
+ Brief description:
+
+ An image to represent the model (square shaped):
+
+ ```
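
If you prefer to script the request instead of using the web form, the sketch below opens the Discussion programmatically with `huggingface_hub`. It is only an illustration: the access token, the brief description, and the image link are placeholders, and the example model name is the one already used in the card above.

```python
# Hedged sketch: opens a request Discussion via the Hugging Face Hub API.
# Assumes huggingface_hub is installed and you have a token with write access;
# the token, description text, and image link below are placeholders.
from huggingface_hub import HfApi

# Model from the example in the card above; replace with the model you want quantized.
requested_model = "Nitral-AI/Infinitely-Laydiculous-7B"

# The request template from the card, filled in for this example.
request_template = f"""\
Model name: {requested_model.split('/')[-1]}

Model link: https://huggingface.co/{requested_model}

Brief description: <one or two sentences about the model>

An image to represent the model (square shaped): <link to a square image>
"""

api = HfApi(token="hf_xxx")  # placeholder token
discussion = api.create_discussion(
    repo_id="Lewdiculous/Model-Requests",
    title=f"Request: {requested_model}",
    description=request_template,
)
print(discussion.url)  # link to the newly opened request
```

Opening the discussion manually through the link in step 1 works just as well; the script only automates pasting the template.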