ruslanmv committed
Commit ba4b189
1 Parent(s): aabc4fc

Update README.md

Files changed (1):
  README.md +4 -4
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - ruslanmv
 - llama
 - trl
-base_model: meata/llama-3-8b
+base_model: meta-llama/Meta-Llama-3-8B
 datasets:
 - ruslanmv/ai-medical-chatbot
 ---
@@ -20,7 +20,7 @@ This repository provides a fine-tuned version of the powerful Llama3 8B model, s
 
 - **Developed by:** ruslanmv
 - **License:** Apache-2.0
-- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
+- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
 
 **Key Features**
 
@@ -44,8 +44,8 @@ Here's a Python code snippet demonstrating how to interact with the `Medical-Lla
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 # Load tokenizer and model
-tokenizer = AutoTokenizer.from_pretrained("ruslanmv/Medical-Llama3-8B-16bit")
-model = AutoModelForCausalLM.from_pretrained("ruslanmv/Medical-Llama3-8B-16bit").to("cuda") # If using GPU
+tokenizer = AutoTokenizer.from_pretrained("ruslanmv/Medical-Llama3-8B")
+model = AutoModelForCausalLM.from_pretrained("ruslanmv/Medical-Llama3-8B").to("cuda") # If using GPU
 
 # Function to format and generate response with prompt engineering
 def askme(question):
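
The third hunk ends at the askme() signature, so the body of the helper is not visible in this diff. Below is a minimal sketch of how such a function could continue, reusing the tokenizer and model loaded in the snippet above; the system prompt wording, generation parameters, and decoding step are assumptions for illustration, not the README's actual code.

# Sketch: one way the truncated askme() helper could be completed.
# The prompt wording and generation settings below are assumptions.
def askme(question):
    sys_message = "You are a helpful medical AI assistant."  # hypothetical system prompt
    prompt = f"{sys_message}\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")  # if using GPU
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return answer

# Example call:
# print(askme("What are the symptoms of diabetes?"))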