
File "<__array_function__ internals>", line 180, in concatenate ValueError: need at least one array to concatenate #7

Closed
harrykwon0524 opened this issue Mar 30, 2022 · 3 comments


@harrykwon0524

Hi, I am trying to run the file with the path set to the hippocampus dataset, and I see the error stated in the title. Are there some additional lines that need to be added? Thank you.

Traceback (most recent call last):
  File "data_preprocess.py", line 43, in <module>
    main()
  File "data_preprocess.py", line 39, in main
    data = data_obj.pre_process_dataset(cfg)
  File "/home/cs2212/Desktop/voxel2mesh-master/data/hippocampus.py", line 97, in pre_process_dataset
    inputs_ = np.concatenate(inputs_, axis=0)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
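
For reference, this ValueError is what NumPy raises whenever np.concatenate is handed an empty sequence, so it typically means the preprocessing loop never appended any volumes to inputs_. A minimal sketch reproducing the message:

    import numpy as np

    inputs_ = []  # the loop found no matching files, so nothing was appended
    np.concatenate(inputs_, axis=0)  # ValueError: need at least one array to concatenate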

@yoterel

yoterel commented Apr 5, 2022

I have the same error.

Further investigation shows that the preprocessing step expects .npy files in the Hippocampus/imagesTr dataset folder, but only .nii.gz files exist (the same goes for imagesTs and labelsTr).
This is using the download link provided in the repository's README.md.
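
A quick, hypothetical check to confirm the folder contents (the data_root path here is an assumption, not from the repository):

    import os

    data_root = '/path/to/Hippocampus'  # assumed location of the downloaded dataset
    files = os.listdir(os.path.join(data_root, 'imagesTr'))
    print(sum(f.endswith('.nii.gz') for f in files), '.nii.gz files')
    print(sum(f.endswith('.npy') for f in files), '.npy files')  # 0 before preprocessing runs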

@udaranga3001
Collaborator

udaranga3001 commented Apr 16, 2022

It seems I have missed the dataset_init function. I have copied it below and will try to make a commit with the updated hippocampus.py soon. In the meantime, you can try the function below: replace line 75 in hippocampus.py with the function call dataset_init(data_root, multi_stack).

import os

import nibabel as nib
import numpy as np
import torch
import torch.nn.functional as F
from tqdm import tqdm

def dataset_init(data_root, multi_stack=None):
    # multi_stack: if an image (x) is 4-dim (i.e. the same file holds multiple
    # image volumes), specify the index of the volume to be used.
    samples = [dir for dir in os.listdir('{}/imagesTr'.format(data_root))]

    inputs = []
    labels = []
    real_sizes = []
    file_names = []

    vals = []
    sizes = []

    count = 0
    for itr, sample in enumerate(tqdm(samples)):
        if '.nii.gz' in sample and '._' not in sample and '.npy' not in sample and '.tif' not in sample:

            # Load the image and the binarized label volume.
            x = nib.load('{}/imagesTr/{}'.format(data_root, sample))
            y = nib.load('{}/labelsTr/{}'.format(data_root, sample)).get_fdata() > 0

            # Voxel spacing (mm) read off the diagonal of the sform affine.
            resolution = np.diag(x.header.get_sform())[:3]
            x = x.get_fdata()
            if multi_stack is not None:
                x = x[:, :, :, multi_stack]

            # Physical extent of the volume in mm.
            real_size = np.round(np.array(x.shape) * resolution)

            file_name = sample

            x = torch.from_numpy(x).permute([2, 1, 0]).cuda().float()
            y = torch.from_numpy(y).permute([2, 1, 0]).cuda().float()

            # Build an identity sampling grid with one voxel per mm, so the
            # volume gets resampled to isotropic spacing.
            W, H, D = real_size
            W, H, D = int(W), int(H), int(D)
            base_grid = torch.zeros((1, D, H, W, 3))
            w_points = (torch.linspace(-1, 1, W) if W > 1 else torch.Tensor([-1]))
            h_points = (torch.linspace(-1, 1, H) if H > 1 else torch.Tensor([-1])).unsqueeze(-1)
            d_points = (torch.linspace(-1, 1, D) if D > 1 else torch.Tensor([-1])).unsqueeze(-1).unsqueeze(-1)
            base_grid[:, :, :, :, 0] = w_points
            base_grid[:, :, :, :, 1] = h_points
            base_grid[:, :, :, :, 2] = d_points
            grid = base_grid.cuda()

            # Resample: bilinear interpolation for the image, nearest for the labels.
            x = F.grid_sample(x[None, None], grid, mode='bilinear', padding_mode='border')[0, 0].cpu().numpy()
            y = F.grid_sample(y[None, None], grid, mode='nearest', padding_mode='border')[0, 0].long().cpu().numpy()

            # Normalize the image to zero mean and unit variance, then save
            # both volumes as .npy files with a 'p' prefix.
            x = (x - np.mean(x)) / np.std(x)
            np.save('{}/imagesTr/p{}'.format(data_root, file_name), x)
            np.save('{}/labelsTr/p{}'.format(data_root, file_name), y)

            count += 1
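
As a usage sketch (the data_root path below is illustrative, not from the repository; Hippocampus volumes are 3-D, so multi_stack can stay None, and a CUDA-capable GPU is required since the function calls .cuda()):

    data_root = '/home/cs2212/Desktop/voxel2mesh-master/datasets/Hippocampus'  # illustrative path
    dataset_init(data_root, multi_stack=None)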

@harrykwon0524
Author

Thank you for the update.
