Update README.md

README.md
num_examples: 15157
download_size: 4756942293
dataset_size: 4684490295.359
license: mit
---

# Dataset Card for "lsun-bedrooms"

This is a copy of the bedrooms category in [`LSUN`](https://github.com/fyu/lsun), uploaded as a dataset for convenience.

The license for _this compilation only_ is MIT. The data retains the same license as the original dataset.

This is (roughly) the code that was used to upload this dataset:
```Python
import os
import shutil

# Path and the fc (fastcore) alias used below come in via these star imports.
from miniai.imports import *
from miniai.diffusion import *

from datasets import load_dataset

path_data = Path('data')
path_data.mkdir(exist_ok=True)
path = path_data/'bedroom'

# Download and unpack the ~4.7 GB archive only if it isn't already present.
url = 'https://s3.amazonaws.com/fast-ai-imageclas/bedroom.tgz'
if not path.exists():
    path_zip = fc.urlsave(url, path_data)
    shutil.unpack_archive('data/bedroom.tgz', 'data')

# Build an image dataset from the folder, drop the redundant label column,
# hold out 5% as a test split, and push the result to the Hub.
dataset = load_dataset("imagefolder", data_dir="data/bedroom")
dataset = dataset.remove_columns('label')
dataset = dataset['train'].train_test_split(test_size=0.05)
dataset.push_to_hub("pcuenq/lsun-bedrooms")
```
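The download-and-unpack step can be exercised on a tiny stand-in archive before committing to the ~4.7 GB download. This is a standard-library-only sketch; the temporary directory and file names are made up for illustration:

```Python
import shutil
import tempfile
from pathlib import Path

# Build a tiny stand-in for bedroom.tgz instead of downloading the real file.
tmp = Path(tempfile.mkdtemp())
src = tmp / 'bedroom'
src.mkdir()
(src / 'img_0.txt').write_text('placeholder image')

# shutil.make_archive produces tmp/bedroom.tar.gz (the .tgz equivalent).
archive = shutil.make_archive(str(tmp / 'bedroom'), 'gztar',
                              root_dir=tmp, base_dir='bedroom')

# Mirrors shutil.unpack_archive('data/bedroom.tgz', 'data') in the card.
out = tmp / 'data'
shutil.unpack_archive(archive, str(out))
print((out / 'bedroom' / 'img_0.txt').read_text())
```

Once pushed, the resulting splits can be loaded back with `load_dataset("pcuenq/lsun-bedrooms")`.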