---
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
library_name: peft
license: apache-2.0
tags:
  - fine-tuned
  - custom
  - mistral-7b
  - youtube-comments
  - conversational-ai
model-index:
  - name: imangpt-mistral-7b-youtube-comments-ft
    results: []
---

# imangpt-mistral-7b-youtube-comments-ft

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ), trained by **Iman Heshmat** on a custom dataset of YouTube audience comments paired with the channel owner's replies. The goal of the fine-tuning was to enable the model to generate responses that closely mimic the channel owner's style and tone when replying to audience comments.

It achieves the following results on the evaluation set:
- **Loss:** 1.3211

## Model description

This model has been fine-tuned specifically for the task of generating YouTube comment replies in a manner similar to the original channel owner. It has learned to understand the context of comments and respond appropriately, capturing the unique style and tone of the channel owner. This makes the model particularly useful for automating responses to audience interactions on YouTube channels, helping maintain engagement while preserving the channel's voice.
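To use the adapter for inference, one approach is to load it on top of the GPTQ base model via PEFT. The sketch below is illustrative: the `adapter_id` is a placeholder for the actual Hub repo path, and the `[INST] … [/INST]` prompt template is assumed from the Mistral-7B-Instruct-v0.2 base model, not stated in this card.

```python
def build_prompt(comment: str) -> str:
    """Wrap a YouTube comment in the Mistral-Instruct chat format.

    The [INST] ... [/INST] template follows Mistral-7B-Instruct-v0.2;
    this exact formatting is an assumption, not taken from the card.
    """
    return f"<s>[INST] {comment.strip()} [/INST]"


def load_model(adapter_id: str = "imangpt-mistral-7b-youtube-comments-ft"):
    """Load the GPTQ base model with the PEFT adapter applied.

    `adapter_id` is a placeholder -- substitute the real Hub repo path.
    Requires `peft`, `transformers`, a GPTQ-capable backend
    (e.g. auto-gptq), and a CUDA GPU.
    """
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(
        "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
    )
    return model, tokenizer
```

Generation then follows the usual pattern: tokenize `build_prompt(comment)` with `return_tensors="pt"`, move the tensors to the model's device, and call `model.generate(...)`.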

## Intended uses & limitations

### Intended uses
- **Automating YouTube comment responses**: The model can be used to automatically generate replies to audience comments on YouTube videos, ensuring consistency in the channel owner's communication style.
- **Conversational AI applications**: It can also be integrated into other conversational AI systems where maintaining a specific tone and style in responses is crucial.

### Limitations
- **Generalization**: The model is specifically fine-tuned on the data of a particular YouTube channel. Its performance may vary when applied to different channels with different communication styles.
- **Contextual Understanding**: While the model is good at mimicking the style, its understanding of context might be limited to the patterns observed in the training data. It might not perform as well on comments that are vastly different from those in the training set.

## Training and evaluation data

The dataset used for fine-tuning consists of YouTube audience comments and the corresponding responses from the channel owner. The data was carefully curated to capture a wide range of interactions, including casual replies, informative responses, and engagement-driven interactions. The dataset reflects real-world usage and aims to enhance the model's ability to generate appropriate and contextually relevant replies.
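A comment/reply pair can be serialized into a single training example before tokenization. The template below (including the closing `</s>` end-of-sequence marker) is an assumption about the preprocessing based on the Mistral-Instruct format; the card does not specify the exact serialization.

```python
def to_training_text(comment: str, reply: str) -> str:
    """Serialize one comment/reply pair into Mistral-Instruct format.

    The template and the trailing </s> EOS marker are assumptions
    about the preprocessing, not taken from the model card.
    """
    return f"<s>[INST] {comment.strip()} [/INST] {reply.strip()}</s>"


def build_dataset(pairs):
    """Turn (comment, reply) tuples into records ready for tokenization."""
    return [{"text": to_training_text(c, r)} for c, r in pairs]
```

Records in this shape can be passed to `datasets.Dataset.from_list` and tokenized with the base model's tokenizer.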

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- **learning_rate:** 0.0002
- **train_batch_size:** 4
- **eval_batch_size:** 4
- **seed:** 42
- **gradient_accumulation_steps:** 4
- **total_train_batch_size:** 16
- **optimizer:** Adam with betas=(0.9,0.999) and epsilon=1e-08
- **lr_scheduler_type:** linear
- **lr_scheduler_warmup_steps:** 2
- **num_epochs:** 10
- **mixed_precision_training:** Native AMP
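The reported `total_train_batch_size` of 16 follows from the per-device batch size and gradient accumulation: each optimizer update accumulates gradients over 4 micro-batches of 4 examples.

```python
train_batch_size = 4
gradient_accumulation_steps = 4

# One optimizer update sees train_batch_size * gradient_accumulation_steps
# examples, matching the reported total_train_batch_size of 16.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```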

### Training results

| **Training Loss** | **Epoch** | **Step** | **Validation Loss** |
|:-----------------:|:---------:|:--------:|:-------------------:|
| 1.7286            | 0.9231    | 3        | 1.5518              |
| 1.4587            | 1.8462    | 6        | 1.4154              |
| 1.3376            | 2.7692    | 9        | 1.3703              |
| 0.9482            | 4.0       | 13       | 1.3354              |
| 1.2544            | 4.9231    | 16       | 1.3249              |
| 1.1956            | 5.8462    | 19       | 1.3228              |
| 1.1577            | 6.7692    | 22       | 1.3216              |
| 0.883             | 8.0       | 26       | 1.3217              |
| 1.1654            | 8.9231    | 29       | 1.3213              |
| 0.8462            | 9.2308    | 30       | 1.3211              |

### Framework versions

- **PEFT:** 0.12.0
- **Transformers:** 4.42.4
- **PyTorch:** 2.4.0+cu121
- **Datasets:** 2.21.0
- **Tokenizers:** 0.19.1


**My Colab:** [fine-tuning-mistral-7b.ipynb](https://colab.research.google.com/drive/1SXfva7tuLr_8CHCMQ3Dmr6PzWw6Un7Mj?usp=sharing)