ShieldX committed on
Commit c9ea919
1 Parent(s): d681c5c

Update README.md

Files changed (1)
  1. README.md +100 -0
README.md CHANGED
@@ -20,6 +20,106 @@ library_name: transformers
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/tinyllama
 
+ <style>
+ .custom-image {
+   width: 45vw;
+   height: 45vh;
+   margin: 0 auto;
+   display: flex;
+   align-items: center;
+   justify-content: center;
+ }
+ </style>
+
+ # ShieldX/manovyadh-1.1B-v1
+
+ Introducing ManoVyadh by LumaticAI, a finetuned version of TinyLlama 1.1B Chat trained on a mental health counselling dataset.
+
+ <img class="custom-image" src="manovyadh.png" alt="ManoVyadh">
+
+ # Model Details
+
+ ## Model Description
+
+ ManoVyadh is an LLM for mental health counselling, finetuned from the TinyLlama 1.1B Chat model.
+
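+ Below is a minimal inference sketch using the Hugging Face `transformers` library (the library named in this card's metadata). The system prompt and the use of the tokenizer's chat template are illustrative assumptions carried over from the TinyLlama 1.1B Chat base model, not an official usage example.
+
+ ```python
+ # Hedged sketch: load ShieldX/manovyadh-1.1B-v1 and generate a counselling-style reply.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "ShieldX/manovyadh-1.1B-v1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ messages = [
+     # The system prompt below is an assumption, not part of this model card.
+     {"role": "system", "content": "You are a supportive mental health counselling assistant."},
+     {"role": "user", "content": "I have been feeling anxious about work lately."},
+ ]
+ # Uses whatever chat template the tokenizer ships with (TinyLlama Chat's, if inherited).
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
+ # Decode only the newly generated tokens, skipping the prompt.
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```
+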
+ # Uses
+
+ ## Direct Use
+
+ - Base model for further finetuning (a rough training sketch follows this list)
+ - Experimentation and fun
+
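+ Since further finetuning is listed as a direct use, here is a rough outline with TRL's `SFTTrainer` (the library this model was trained with, per the note at the bottom of this card). The dataset file, its `text` column, and the hyperparameters are placeholders, and exact argument names vary across TRL releases, so treat this as a sketch rather than a recipe.
+
+ ```python
+ # Hedged sketch: continue supervised finetuning from this checkpoint with TRL.
+ # "counselling.jsonl" and its "text" column are hypothetical placeholders.
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ dataset = load_dataset("json", data_files="counselling.jsonl", split="train")
+
+ trainer = SFTTrainer(
+     model="ShieldX/manovyadh-1.1B-v1",   # SFTTrainer accepts a model id string
+     train_dataset=dataset,               # expected to provide a "text" column by default
+     args=SFTConfig(
+         output_dir="manovyadh-further-ft",
+         per_device_train_batch_size=2,
+         num_train_epochs=1,
+         learning_rate=2e-5,
+     ),
+ )
+ trainer.train()
+ ```
+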
+ ## Downstream Use
+
+ - Can be deployed behind an API (see the serving sketch after this list)
+ - Can power a web or mobile app that demos the model
+
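+ As one way to realise the API / demo-app use above, the sketch below wraps the model in a small FastAPI service. FastAPI, the `/chat` route, and the generation settings are assumptions for illustration; any web framework would do.
+
+ ```python
+ # Hedged sketch of a demo API around the model; save as app.py and run `uvicorn app:app`.
+ from fastapi import FastAPI
+ from pydantic import BaseModel
+ from transformers import pipeline
+
+ app = FastAPI()
+ # Plain text-generation pipeline; for chat formatting see the inference sketch above.
+ generator = pipeline("text-generation", model="ShieldX/manovyadh-1.1B-v1")
+
+ class ChatRequest(BaseModel):
+     message: str
+
+ @app.post("/chat")
+ def chat(req: ChatRequest):
+     out = generator(req.message, max_new_tokens=200, do_sample=True, temperature=0.7)
+     return {"reply": out[0]["generated_text"]}
+ ```
+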
+ ## Out-of-Scope Use
+
+ - Not intended for production use
+ - Not to be used for real-life health or medical advice
+ - Not to be used to generate text for research or academic purposes
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ # Training Details
+
+ # Model Examination
+
+ We will further finetune this model on a larger dataset to see how it performs.
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** 1x NVIDIA Tesla T4
+ - **Hours used:** 0.30
+ - **Cloud Provider:** Google Colab
+ - **Compute Region:** India
+
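+ For a rough sense of scale, the back-of-the-envelope estimate below plugs the figures above into the energy-times-carbon-intensity formula used by the ML Impact calculator. The T4 power draw, PUE, and grid carbon intensity are assumed values, not measurements from this run.
+
+ ```python
+ # Hedged estimate following Lacoste et al. (2019): emissions ≈ GPU power × hours × PUE × grid intensity.
+ gpu_power_kw = 0.070       # NVIDIA T4 TDP, used as an assumed upper bound
+ hours = 0.30               # training time reported above
+ pue = 1.1                  # assumed datacentre overhead
+ grid_kgco2_per_kwh = 0.7   # assumed carbon intensity for the compute region
+
+ energy_kwh = gpu_power_kw * hours * pue
+ emissions_g = energy_kwh * grid_kgco2_per_kwh * 1000
+ print(f"~{energy_kwh:.3f} kWh, roughly {emissions_g:.0f} g CO2e")
+ ```
+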
+ # Technical Specifications
+
+ ## Model Architecture and Objective
+
+ Finetuned from the TinyLlama 1.1B Chat model.
+
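+ A quick way to confirm the inherited Llama architecture is to load the config with `transformers`; the snippet below only reads the configuration metadata, not the weights.
+
+ ```python
+ # Inspect the architecture hyperparameters inherited from TinyLlama 1.1B Chat.
+ from transformers import AutoConfig
+
+ cfg = AutoConfig.from_pretrained("ShieldX/manovyadh-1.1B-v1")
+ print(cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers, cfg.num_attention_heads)
+ ```
+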
+ ### Hardware
+
+ 1x NVIDIA Tesla T4
+
+ # Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ ```bibtex
+ @misc{manovyadh-1.1B-v1,
+   title={ManoVyadh},
+   author={Rohan Shaw},
+   year={2024},
+   month={Jan},
+   url={https://huggingface.co/ShieldX/manovyadh-1.1B-v1}
+ }
+ ```
+
+ # Model Card Authors
+
+ ShieldX (Rohan Shaw)
+
+ # Model Card Contact
+
+ Email: [email protected]
+
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)