JeongwonChoi committed
Commit 28fda5c
1 Parent(s): 0325e40

Update README.md

Files changed (1):
  1. README.md +9 -10
README.md CHANGED
@@ -23,7 +23,7 @@ datasets:
 ### **Trained On**
 
 - **OS**: Ubuntu 20.04
-- **GPU**: H100 80GB x4
+- **GPU**: H100 80GB 4ea
 - **transformers**: v4.36.2
 
 ### **Dataset**
@@ -38,16 +38,15 @@ E.g.
 
 ```python
 text = """\
-### System:
 당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.
 
-### User:
+### Instruction:
 대한민국의 수도는 어디야?
 
-### Assistant:
+### Response:
 대한민국의 수도는 서울입니다.
 
-### User:
+### Instruction:
 서울 인구는 총 몇 명이야?
 """
 ```
@@ -56,9 +55,9 @@ text = """\
 
 ### **Ko-LLM-Leaderboard**
 
-| Model                            | Average   | Ko-ARC    | Ko-HellaSwag | Ko-MMLU   | Ko-TruthfulQA | Ko-CommonGen V2 |
-| -------------------------------- | --------- | --------- | ------------ | --------- | ------------- | --------------- |
-| **DataVortexM-7B-Instruct-v0.1** | **39.81** | **34.13** | **42.35**    | **38.73** | **45.46**     | **38.37**       |
+| Model                                    | Average   | Ko-ARC    | Ko-HellaSwag | Ko-MMLU   | Ko-TruthfulQA | Ko-CommonGen V2 |
+| ---------------------------------------- | --------- | --------- | ------------ | --------- | ------------- | --------------- |
+| **Edentns/DataVortexM-7B-Instruct-v0.1** | **39.81** | **34.13** | **42.35**    | **38.73** | **45.46**     | **38.37**       |
 
 ## **Implementation Code**
 
@@ -70,8 +69,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 device = "cuda"  # the device to load the model onto
 
-model = AutoModelForCausalLM.from_pretrained("DataVortexM-7B-Instruct-v0.1")
-tokenizer = AutoTokenizer.from_pretrained("DataVortexM-7B-Instruct-v0.1")
+model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1")
+tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1")
 
 messages = [
     {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
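For reference, the `### Instruction:` / `### Response:` template introduced by this commit can be assembled programmatically. This is a minimal sketch: the `build_prompt` helper and its exact blank-line spacing are illustrative assumptions, and only the section markers themselves come from the README diff.

```python
def build_prompt(system: str, turns: list) -> str:
    """Render a system message plus (instruction, response) turns in the
    Instruction/Response format shown in the updated README.

    A turn whose response is None is left open for the model to complete.
    """
    parts = [system, ""]
    for instruction, response in turns:
        parts += ["### Instruction:", instruction, ""]
        if response is not None:
            parts += ["### Response:", response, ""]
    return "\n".join(parts)


prompt = build_prompt(
    "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.",
    [
        ("대한민국의 수도는 어디야?", "대한민국의 수도는 서울입니다."),
        ("서울 인구는 총 몇 명이야?", None),  # open turn: model should answer
    ],
)
print(prompt)
```

Note that, matching this commit, the prompt no longer carries a `### System:` header: the system message is emitted bare at the top, and both user turns use `### Instruction:`.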