Here is the code to visualize the BRUSH dataset:
```python
for writer_id in tqdm(os.listdir("data/BRUSH")):
    if os.path.isdir(f"data/BRUSH/{writer_id}"):
        for drawing_id in os.listdir(f"data/BRUSH/{writer_id}"):
            with open(f"data/BRUSH/{writer_id}/{drawing_id}", 'rb') as f:
                sentence, drawing, label = pickle.load(f)
```

Please write a dataloader for this dataset.
Sure, here is an example dataloader for the BRUSH dataset:
`    import os
    import pickle
    from torch.utils.data import Dataset, DataLoader

    class BRUSHDataset(Dataset):
         def __init__(self, root_dir):
            self.root_dir = root_dir
            self.data = []
        
            for writer_id in os.listdir(self.root_dir):
                if os.path.isdir(os.path.join(self.root_dir, writer_id)):
                    for drawing_id in os.listdir(os.path.join(self.root_dir, writer_id)):
                        with open(os.path.join(self.root_dir, writer_id, drawing_id), 'rb') as f:
                            sentence, drawing, label = pickle.load(f)
                            self.data.append((sentence, drawing, label))
                        
        def __len__(self):
            return len(self.data)
    
        def __getitem__(self, idx):
            sentence, drawing, label = self.data[idx]
            return sentence, drawing, label

    # Usage example
    dataset = BRUSHDataset("data/BRUSH")
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

    for sentences, drawings, labels in dataloader:
        # do something with the batch of data`

This code defines a BRUSHDataset class that inherits from PyTorch's Dataset class. The __init__ method loads all the data from the specified root_dir directory and stores it in a list. The __len__ method returns the length of the data list, and the __getitem__ method returns a single example from the dataset.

To use this dataset with PyTorch's DataLoader class, you simply need to create an instance of the BRUSHDataset class and pass it to the DataLoader constructor, along with the desired batch size and other parameters. You can then iterate over the DataLoader to get batches of data.
You broke the code box at the very beginning of the code; please redo your reply with the needed fix.
I apologize for my mistake. Here is the dataloader for the BRUSH dataset again, with proper formatting:
```python
import os
import pickle

from torch.utils.data import Dataset, DataLoader


class BRUSHDataset(Dataset):
    def __init__(self, root_dir):
        self.root_dir = root_dir
        self.data = []

        # Walk root_dir/<writer_id>/<drawing_id> and load every pickle
        for writer_id in os.listdir(self.root_dir):
            writer_dir = os.path.join(self.root_dir, writer_id)
            if os.path.isdir(writer_dir):
                for drawing_id in os.listdir(writer_dir):
                    with open(os.path.join(writer_dir, drawing_id), 'rb') as f:
                        sentence, drawing, label = pickle.load(f)
                        self.data.append((sentence, drawing, label))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sentence, drawing, label = self.data[idx]
        return sentence, drawing, label


# Usage example
dataset = BRUSHDataset("data/BRUSH")
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

for sentences, drawings, labels in dataloader:
    ...  # do something with the batch of data
```
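
One caveat: the default collate function will raise an error if the drawings in a batch have different lengths, which is typical for handwriting data. Assuming each drawing is a variable-length sequence of points (the exact pickle contents aren't shown above, so treat this as a sketch), a custom collate_fn that pads each batch to its longest drawing might look like this:

```python
import torch


def brush_collate(batch):
    # batch is a list of (sentence, drawing, label) tuples from the Dataset.
    sentences, drawings, labels = zip(*batch)

    # Assumption: each drawing converts to a (num_points, num_features) tensor.
    tensors = [torch.as_tensor(d, dtype=torch.float32) for d in drawings]
    lengths = torch.tensor([t.shape[0] for t in tensors])

    # Zero-pad every drawing to the longest one in this batch.
    padded = torch.nn.utils.rnn.pad_sequence(tensors, batch_first=True)

    # Return lengths too, so a downstream model can mask out the padding.
    return list(sentences), padded, lengths, list(labels)
```

You would then pass it to the loader with `DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=brush_collate)`. Keeping the per-sample lengths alongside the padded tensor is what lets you pack sequences or build attention masks later.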