Lewdiculous committed on
Commit 61ef53a
1 Parent(s): e52acf5

bump-model-size

Files changed (1)
README.md +12 -12
README.md CHANGED
@@ -32,10 +32,10 @@ This card is meant only to request GGUF-IQ-Imatrix quants for models that meet t
 **Requirements to request GGUF-Imatrix model quantizations:**

 For the model:
- - Maximum model parameter size of **11B**. <br>
+ - Maximum model parameter size of ~~11B~~ **12B**. Note that models larger than 8B parameters may take longer to process and upload than smaller ones. <br>
 *At the moment I am unable to accept requests for larger models due to hardware/time limitations.* <br>
 *Preferably for Mistral and LLama-3 based models in the creative/roleplay niche.* <br>
- *If you need a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). Pretty awesome.*
+ *If you need quants for a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). He's doing amazing work.*

 Important:
 - Fill the request template as outlined in the next section.
@@ -44,22 +44,22 @@ Important:

 1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.

- 2. Include the following template in your post and fill the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):
+ 2. Include the following template in your new discussion post (you can copy and paste it as is) and fill in the required information by replacing the {{placeholders}} ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):

 ```
- **[Required] Model name:**
-
+ **[Required] Model name:** <br>
+ {{replace-this}}

- **[Required] Model link:**
-
+ **[Required] Model link:** <br>
+ {{replace-this}}

- **[Required] Brief description:**
-
+ **[Required] Brief description:** <br>
+ {{replace-this}}

- **[Required] An image/direct image link to represent the model (square shaped):**
-
+ **[Required] An image/direct image link to represent the model (square shaped):** <br>
+ {{replace-this}}

- **[Optional] Additonal quants (if you want any):**
+ **[Optional] Additional quants (if you want any):** <br>

 <!-- Keep in mind that anything bellow I/Q3 isn't recommended, -->
 <!-- since for these smaller models the results will likely be -->
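To sanity-check a candidate against the new 12B ceiling before opening a request, one option is to query the repository's metadata with the `huggingface_hub` Python client. This is only a minimal sketch, assuming a recent client version and a repo that publishes safetensors metadata (the `safetensors` field is not available for every repository):

```python
# Sketch: check whether a candidate model fits the ~12B parameter request ceiling.
# Assumes the repo exposes safetensors metadata; repos without it raise an error here.
from huggingface_hub import HfApi

MAX_PARAMS = 12_000_000_000  # 12B ceiling from the README

def fits_request_limit(repo_id: str) -> bool:
    info = HfApi().model_info(repo_id)
    if info.safetensors is None:
        raise ValueError(f"{repo_id} does not publish safetensors parameter metadata")
    total = info.safetensors.total  # total parameter count across all tensors
    print(f"{repo_id}: ~{total / 1e9:.1f}B parameters")
    return total <= MAX_PARAMS

# Example, using the repo named in the request-title example above:
# fits_request_limit("Nitral-AI/Infinitely-Laydiculous-7B")
```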