Tasks: Text Generation
Modalities: Text
Formats: json
Languages: English
Size: 10K - 100K
Tags: transformers
The dataset has been formatted in ShareGPT format for use with conversational language models.

This dataset can be an invaluable resource for training and refining language models, offering a rich source of nuanced, intellectual, and thought-provoking dialogue. Furthermore, the diversity of topics covered provides a broad spectrum of language usage, idiomatic expressions, and subject-matter expertise.
### 3 versions

1. `_original`: the original dataset, where each item is an entire episode
2. `_chunked`: the dataset split into chunks of approximately 1500 words or 2048 tokens
3. `_chunked_gpt`: the `_chunked` dataset with the speaker roles "lex" & "guest" renamed to "human" & "gpt" to fit Vicuna training
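A minimal sketch of how the `_chunked` and `_chunked_gpt` variants could be derived; the word budget and field names are taken from the list above, but the actual chunking also respects a 2048-token limit, which this sketch only approximates by word count:

```python
def chunk_conversations(conversations, max_words=1500):
    """Split one episode's turns into chunks of roughly max_words words."""
    chunks, current, count = [], [], 0
    for turn in conversations:
        words = len(turn["value"].split())
        # Flush the current chunk once adding this turn would exceed the budget.
        if current and count + words > max_words:
            chunks.append(current)
            current, count = [], 0
        current.append(turn)
        count += words
    if current:
        chunks.append(current)
    return chunks


def to_gpt_roles(conversations):
    """Rename speakers so the chunks fit Vicuna's expected roles."""
    mapping = {"lex": "human", "guest": "gpt"}
    return [{**t, "from": mapping.get(t["from"], t["from"])} for t in conversations]
```

Note that `to_gpt_roles` returns new turn dicts rather than mutating the `_chunked` data in place, so both variants can be written out from the same source.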
# What I did
1. Fetch all episode links of the Lex Fridman Podcast from https://steno.ai
2. For each episode, transform the transcript from HTML to JSON (Vicuna ShareGPT format)
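Step 2 could be sketched as below. The markup pattern is an assumption about how steno.ai lays out its transcripts, not its actual structure, so treat this as illustrative only:

```python
import re


def transcript_to_sharegpt(html, episode_id):
    """Convert a transcript's HTML into a ShareGPT-style record.

    Assumes each turn is marked up as
    <p><strong>Speaker Name</strong> spoken text</p>; the real markup
    on steno.ai may differ.
    """
    conversations = []
    for speaker, text in re.findall(
        r"<p><strong>(.*?)</strong>(.*?)</p>", html, flags=re.S
    ):
        role = "lex" if "Lex" in speaker else "guest"
        conversations.append({"from": role, "value": text.strip()})
    return {"id": episode_id, "conversations": conversations}
```

A real pipeline would fetch each episode page first and handle nested tags inside a turn, but the role assignment and output shape would stay the same.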