TheBloke committed on
Commit 9cf7dc4
1 Parent(s): a5ca5f7

Update README.md

Files changed (1)
  1. README.md +20 -66
README.md CHANGED
@@ -23,10 +23,17 @@ These files are GPTQ 4bit model files for [Panchovix's merge of Guanaco 33B and

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

+ **This is an experimental new GPTQ which offers up to 8K context size**
+
+ The increased context is currently only tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ Please read carefully below to see how to use it.
+
+ **NOTE**: Using the full 8K context will exceed 24GB VRAM.
+
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Panchovix/Guanaco-33B-SuperHOT-8k)

## How to easily download and use this model in text-generation-webui
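To put the 24GB VRAM note above in perspective, here is a rough back-of-the-envelope estimate, assuming the usual LLaMA-33B dimensions (60 layers, hidden size 6656), roughly 4-bit weights, and an fp16 KV cache; none of these figures come from this repo, so treat them as approximations:

```python
# Rough VRAM estimate for the full 8K context (all figures are assumptions:
# LLaMA-33B with 60 layers, hidden size 6656, ~4-bit weights, fp16 KV cache).
n_params    = 33e9
n_layers    = 60
hidden_size = 6656
seq_len     = 8192

weights_gb  = n_params * 0.5 / 1e9                             # ~4 bits per parameter
kv_cache_gb = 2 * n_layers * hidden_size * 2 * seq_len / 1e9   # keys + values in fp16

print(f"weights ~{weights_gb:.1f} GB, KV cache ~{kv_cache_gb:.1f} GB, "
      f"total ~{weights_gb + kv_cache_gb:.1f} GB")
```

Even before activation buffers and fragmentation, that lands well past 24GB, which is why the full 8192-token context is flagged as exceeding a 24GB card.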
@@ -37,70 +44,17 @@ Please make sure you're using the latest version of text-generation-webui
2. Under **Download custom model or LoRA**, enter `TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
- 5. In the top left, click the refresh icon next to **Model**.
- 6. In the **Model** dropdown, choose the model you just downloaded: `Guanaco-33B-SuperHOT-8K-GPTQ`
- 7. The model will automatically load, and is now ready for use!
- 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
- 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
-
- ## How to use this GPTQ model from Python code
-
- First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
-
- `pip install auto-gptq`
-
- Then try the following example code:
-
- ```python
- from transformers import AutoTokenizer, pipeline, logging
- from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse
-
- model_name_or_path = "TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ"
- model_basename = "guanaco-33b-superhot-8k-GPTQ-4bit--1g.act.order"
-
- use_triton = False
-
- tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
-
- model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         model_basename=model_basename,
-         use_safetensors=True,
-         trust_remote_code=False,
-         device="cuda:0",
-         use_triton=use_triton,
-         quantize_config=None)
-
- # Note: check the prompt template is correct for this model.
- prompt = "Tell me about AI"
- prompt_template=f'''USER: {prompt}
- ASSISTANT:'''
-
- print("\n\n*** Generate:")
-
- input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
- output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
- print(tokenizer.decode(output[0]))
-
- # Inference can also be done using transformers' pipeline
-
- # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
- logging.set_verbosity(logging.CRITICAL)
-
- print("*** Pipeline:")
- pipe = pipeline(
-     "text-generation",
-     model=model,
-     tokenizer=tokenizer,
-     max_new_tokens=512,
-     temperature=0.7,
-     top_p=0.95,
-     repetition_penalty=1.15
- )
-
- print(pipe(prompt_template)[0]['generated_text'])
- ```
+ 5. Untick **Autoload the model**
+ 6. In the top left, click the refresh icon next to **Model**.
+ 7. In the **Model** dropdown, choose the model you just downloaded: `Guanaco-33B-SuperHOT-8K-GPTQ`
+ 8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context.
+ 9. Now click **Save Settings** followed by **Reload**
+ 10. The model will automatically load, and is now ready for use!
+ 11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
+
+ ## How to use this GPTQ model from Python code - TBC
+
+ Using this model with increased context from Python code is currently untested, so this section is removed for now.
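The **compress_pos_emb** values in step 8 follow the SuperHOT convention of dividing the target context length by LLaMA's base 2048-token context. A minimal sketch of that relationship (the helper function is illustrative, not part of text-generation-webui):

```python
# SuperHOT-style positional interpolation: positions are compressed by the
# ratio of the extended context to LLaMA's base 2048-token context.
BASE_CONTEXT = 2048

def compress_pos_emb(max_seq_len: int) -> int:
    # Illustrative helper, not a text-generation-webui API.
    return max_seq_len // BASE_CONTEXT

assert compress_pos_emb(4096) == 2   # matches step 8: 4096 context -> 2
assert compress_pos_emb(8192) == 4   # matches step 8: 8192 context -> 4
```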

## Provided files

@@ -111,9 +65,9 @@ This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. Ther
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

* `guanaco-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors`
- * Works with AutoGPTQ in CUDA or Triton modes.
- * LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama}, which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
- * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
+ * Designed for use with ExLlama with increased context (4096 or 8192)
+ * Should work with AutoGPTQ in CUDA or Triton modes, but without increased context - TBC.
+ * Should work with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
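The `Groupsize = -1` and `desc_act = True` parameters above are the values normally read from the repo's `quantize_config.json`. Expressed as an AutoGPTQ `BaseQuantizeConfig`, they would look roughly like this; it is a sketch of the equivalent config object, since in practice AutoGPTQ and text-generation-webui pick these up from the file automatically:

```python
from auto_gptq import BaseQuantizeConfig

# Approximate equivalent of the parameters listed above; the real values
# ship in the repo's quantize_config.json and are loaded automatically.
quantize_config = BaseQuantizeConfig(
    bits=4,         # 4-bit GPTQ
    group_size=-1,  # Groupsize = -1 (no grouping, lower VRAM)
    desc_act=True,  # Act Order / desc_act = True (better accuracy)
)
```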
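If you prefer to fetch the files outside text-generation-webui's downloader, a minimal sketch using `huggingface_hub` follows; the library is not covered by this README, so treat it as one possible alternative route rather than the documented method:

```python
from huggingface_hub import snapshot_download

# Download the full repo (safetensors weights, tokenizer, quantize_config.json)
# into the local Hugging Face cache and print the resulting path.
local_path = snapshot_download(repo_id="TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ")
print(local_path)
```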