---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized

---

> [!WARNING]
> **Notice:** <br>
> Requests are paused at the moment due to unforeseen circumstances.


![requests-banner](https://huggingface.co/Lewdiculous/Model-Requests/resolve/main/requests-banner.png)


> [!TIP]
> I apologize for disrupting your experience. <br>
> My upload speeds have been cooked and unstable lately. <br>
> I'd need to move to get a better provider or eventually rent a server. <br>
> If you **want** and you are able to... <br>
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br>
> In the meantime I'll be making do with the resources at hand. <br>


# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Please read everything.

This card is only for requesting GGUF-IQ-Imatrix quants for models that meet the requirements below.

**Requirements to request GGUF-Imatrix model quantizations:**

For the model:
- Maximum model parameter size of **11B**. <br>
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.* <br>
*Preferably for Mistral- and Llama-3-based models in the creative/roleplay niche.* <br>
*If you need a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). Pretty awesome.*

Important:
- Fill out the request template as outlined in the next section.

#### How to request a model quantization:

1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.

2. Include the following template in your post and fill in the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):

```
**[Required] Model name:**


**[Required] Model link:**


**[Required] Brief description:**


**[Required] An image/direct image link to represent the model (square shaped):**


**[Optional] Additional quants (if you want any):**

<!-- Keep in mind that anything below I/Q3 isn't recommended,    -->
<!-- since for these smaller models the results will likely be   -->
<!-- highly incoherent, rendering them unusable for your needs.  -->


Default list of quants for reference:

        "IQ3_M", "IQ3_XXS",
        "Q4_K_M", "Q4_K_S", "IQ4_XS",
        "Q5_K_M", "Q5_K_S",
        "Q6_K",
        "Q8_0"

```
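For reference, quants like those in the default list above are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. A minimal sketch of that workflow follows; the file names (`model-f16.gguf`, `imatrix.dat`, `calibration.txt`) are placeholders, not files from this repository:

```shell
# Sketch only: all file names below are placeholders (assumptions).
MODEL="model-f16.gguf"   # F16 GGUF converted from the source model
IMATRIX="imatrix.dat"    # importance-matrix output file

# 1. Generate the importance matrix from calibration text (run once):
#    ./llama-imatrix -m "$MODEL" -f calibration.txt -o "$IMATRIX"

# 2. Quantize once per type in the default list, printing each command:
for QUANT in IQ3_M IQ3_XXS Q4_K_M Q4_K_S IQ4_XS Q5_K_M Q5_K_S Q6_K Q8_0; do
  echo ./llama-quantize --imatrix "$IMATRIX" "$MODEL" "model-${QUANT}.gguf" "$QUANT"
done
```

The imatrix matters most for the low-bit IQ types: without it, IQ3-class quants tend to lose noticeably more quality than the larger Q4+ types.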