Commit 3e19cd3 • Parent(s): 20542e5

Update readme.md

Files changed (1): README.md (+3 −5)
README.md CHANGED
````diff
@@ -10,7 +10,7 @@ tags:
 library_name: peft
 ---
 
-# 🚀 al-baka-llama3-8b
+# 🚀 al-baka-llama3-8b ( Lora Only)
 
 [<img src="https://i.ibb.co/fMsBM0M/Screenshot-2024-04-20-at-3-04-34-AM.png" width="150"/>](https://www.omarai.co)
 
@@ -28,8 +28,6 @@ Al Baka is an Fine Tuned Model based on the new released LLAMA3-8B Model on the
 
 - The model was fine-tuned in 4-bit precision using [unsloth](https://github.com/unslothai/unsloth)
 
-- The run is performed only for 1000 steps with a single Google Colab T4 GPU NVIDIA GPU with 15 GB of available memory.
-
 
 <span style="color:red">The model is currently being Experimentally Fine Tuned to assess LLaMA-3's response to Arabic, following a brief period of fine-tuning. Larger and more sophisticated models will be introduced soon.</span>
 
@@ -61,11 +59,11 @@ load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False
 
 
 model, tokenizer = FastLanguageModel.from_pretrained(
-model_name = "Omartificial-Intelligence-Space/al-baka-16bit-llama3-8b",
+model_name = "Omartificial-Intelligence-Space/al-baka-Lora-llama3-8b",
 max_seq_length = max_seq_length,
 dtype = dtype,
 load_in_4bit = load_in_4bit,
-# token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
+# token = "hf_...",
 )
 ```
 
````
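The loading call changed by this commit can be sketched as a small helper. This is a sketch only: the repo id is taken from the diff's `+` line, while the default `max_seq_length` value, the `dtype=None` auto-detect convention, and the commented-out `unsloth` call are assumptions and are not executed here.

```python
# Sketch: mirrors the FastLanguageModel.from_pretrained call shown in the diff.
# ADAPTER_REPO comes from the "+" line; defaults below are assumptions.
from typing import Any, Dict, Optional

ADAPTER_REPO = "Omartificial-Intelligence-Space/al-baka-Lora-llama3-8b"

def pretrained_kwargs(max_seq_length: int = 2048,
                      dtype: Optional[str] = None,
                      load_in_4bit: bool = True) -> Dict[str, Any]:
    """Assemble the keyword arguments used in the README's loading snippet."""
    return {
        "model_name": ADAPTER_REPO,
        "max_seq_length": max_seq_length,
        "dtype": dtype,              # None lets the loader pick a dtype automatically
        "load_in_4bit": load_in_4bit,
    }

if __name__ == "__main__":
    # The actual load needs the `unsloth` package and a CUDA GPU:
    # from unsloth import FastLanguageModel
    # model, tokenizer = FastLanguageModel.from_pretrained(**pretrained_kwargs())
    print(pretrained_kwargs())
```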