import os
from glob import glob

import nibabel as nib
import numpy as np
import torch
from torch.utils.data import DataLoader

# Stack corresponding slices from the four MRI modalities into one array.
def stack_slices(t1c, t2f, t1n, t2w):
    # Result shape: (4, H, W), channel order (t1c, t2f, t1n, t2w).
    stacked_slices = np.stack((t1c, t2f, t1n, t2w), axis=0)
    return stacked_slices


# Convert a BraTS segmentation mask into a 4-channel binary mask.
def convert_to_multichannel(mask):
    """
    Convert labels to multi-channel masks based on the BraTS classes.
    The provided segmentation labels have values of:

    1 for NCR (necrotic tumor core),
    2 for ED (peritumoral edema),
    3 for ET (enhancing tumor),
    0 for background.

    The output channels are TC (tumor core == NCR and ET), WT (whole tumor
    == NCR, ED and ET), ET (enhancing tumor) and background.
    """
    results = []
    # merge labels 1 and 3 to construct TC
    results.append(np.logical_or(mask == 1, mask == 3))
    # merge labels 1, 2 and 3 to construct WT
    results.append(mask != 0)
    # keep label 3 alone as ET
    results.append(mask == 3)
    # keep label 0 alone as background
    results.append(mask == 0)

    return np.stack(results, axis=0).astype(np.uint8)
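
# A minimal sketch of the conversion (illustrative input, not dataset data):
#
#   >>> m = np.array([[0, 1], [2, 3]])
#   >>> convert_to_multichannel(m).shape
#   (4, 2, 2)
#   >>> convert_to_multichannel(m)[0]  # TC channel: labels 1 and 3
#   array([[0, 1],
#          [0, 1]], dtype=uint8)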


# Streams 2-D slices from the 3-D BraTS volumes as (stacked image,
# multichannel mask) pairs, sharding the volume list across DataLoader workers.
class MyIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self, images_t1c, images_t2f, images_t1n, images_t2w, segs):
        self.images_t1c = images_t1c
        self.images_t2f = images_t2f
        self.images_t1n = images_t1n
        self.images_t2w = images_t2w
        self.segs = segs

    def stream(self):
        # Iterate over this worker's shard of volumes (set in __iter__).
        for i in range(self.start, self.end):
            t1c = nib.load(self.images_t1c[i]).get_fdata()
            t2f = nib.load(self.images_t2f[i]).get_fdata()
            t1n = nib.load(self.images_t1n[i]).get_fdata()
            t2w = nib.load(self.images_t2w[i]).get_fdata()
            seg = nib.load(self.segs[i]).get_fdata()
            # Yield one (image, mask) pair per axial slice, skipping empty slices.
            for j in range(t1c.shape[2]):
                if np.sum(t1c[:, :, j]) != 0:
                    yield (stack_slices(t1c[:, :, j], t2f[:, :, j],
                                        t1n[:, :, j], t2w[:, :, j]),
                           convert_to_multichannel(seg[:, :, j]))

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            # Single-process loading: stream every volume.
            self.start = 0
            self.end = len(self.images_t1c)
        else:
            # Multi-process loading: give each worker a contiguous,
            # non-overlapping chunk of the volume list.
            per_worker = int(np.ceil(len(self.images_t1c) / float(worker_info.num_workers)))
            self.worker_id = worker_info.id
            self.start = self.worker_id * per_worker
            self.end = min(self.start + per_worker, len(self.images_t1c))
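            # Worked example (illustrative numbers, not from the dataset):
            # with 10 volumes and 4 workers, per_worker = ceil(10 / 4) = 3,
            # so workers 0-3 stream volumes [0, 3), [3, 6), [6, 9), [9, 10).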
        return self.stream()

def get_MyIterableDataset(folder_name):
    # Check that the dataset folder exists before globbing.
    if not os.path.exists(folder_name):
        raise FileNotFoundError(
            f"Folder {folder_name} not found; current working directory: {os.getcwd()}"
        )

    images_t1c = sorted(glob(os.path.join(folder_name, "*/*-t1c.nii.gz")))
    images_t2f = sorted(glob(os.path.join(folder_name, "*/*-t2f.nii.gz")))
    images_t1n = sorted(glob(os.path.join(folder_name, "*/*-t1n.nii.gz")))
    images_t2w = sorted(glob(os.path.join(folder_name, "*/*-t2w.nii.gz")))

    segs = sorted(glob(os.path.join(folder_name, "*/*seg.nii.gz")))

    number_of_scans = len(images_t1c)
    print(f"Number of scans: {number_of_scans}")

    # Deterministic 80/20 train/test split over whole scans (no shuffling),
    # so all slices from a given scan stay on one side of the split.
    train_size = int(0.8 * number_of_scans)
    train_images_t1c = images_t1c[:train_size]
    train_images_t2f = images_t2f[:train_size]
    train_images_t1n = images_t1n[:train_size]
    train_images_t2w = images_t2w[:train_size]
    train_segs = segs[:train_size]

    test_images_t1c = images_t1c[train_size:]
    test_images_t2f = images_t2f[train_size:]
    test_images_t1n = images_t1n[train_size:]
    test_images_t2w = images_t2w[train_size:]
    test_segs = segs[train_size:]

    train_dataset = MyIterableDataset(train_images_t1c, train_images_t2f, train_images_t1n, train_images_t2w, train_segs)
    test_dataset = MyIterableDataset(test_images_t1c, test_images_t2f, test_images_t1n, test_images_t2w, test_segs)

    return train_dataset, test_dataset
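

# A minimal usage sketch (an assumption, not part of the original module):
# "BraTS2023" below is a hypothetical folder name. With an IterableDataset the
# DataLoader must not shuffle; worker sharding is handled in __iter__ above.
if __name__ == "__main__":
    train_dataset, test_dataset = get_MyIterableDataset("BraTS2023")
    train_loader = DataLoader(train_dataset, batch_size=8, num_workers=2)
    test_loader = DataLoader(test_dataset, batch_size=8, num_workers=2)
    # Each batch is a (batch, 4, H, W) image tensor and a matching mask tensor.
    images, masks = next(iter(train_loader))
    print(images.shape, masks.shape)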