This repository has been archived by the owner on Mar 17, 2021. It is now read-only.

Scaling percentage docs are not consistent #194

Closed

fepegar opened this issue Aug 8, 2018 · 1 comment


fepegar (Collaborator) commented Aug 8, 2018

The config spec docs say:

scaling_percentage
Float array indicates a random spatial scaling should be applied (This can be slow depending on the input volume dimensionality).

It's not clear what the numbers should be or what effect they have.

The example says:
scaling_percentage | float array | scaling_percentage=0.8,1.2

I wanted many of my training windows to be zoomed out, so I used scaling_percentage=0.25, 1. I didn't see any difference in my results, so I decided to investigate.

According to an INI file in the repo, it seems these parameters should be given as percentage offsets relative to 100%:

[TRAINING]
sample_per_volume = 32
rotation_angle = (-10.0, 10.0)
scaling_percentage = (-10.0, 10.0)
lr = 0.0001
loss_type = Dice
starting_iter = 0
save_every_n = 5
max_iter = 6
max_checkpoints = 20

And in the code:

import numpy as np


class RandomSpatialScalingLayer(RandomisedLayer):
    """
    generate randomised scaling along each dim for data augmentation
    """

    def __init__(self,
                 min_percentage=-10.0,
                 max_percentage=10.0,
                 name='random_spatial_scaling'):
        super(RandomSpatialScalingLayer, self).__init__(name=name)
        assert min_percentage < max_percentage
        self._min_percentage = max(min_percentage, -99.9)
        self._max_percentage = max_percentage
        self._rand_zoom = None

    def randomise(self, spatial_rank=3):
        spatial_rank = int(np.floor(spatial_rank))
        # sample one percentage offset per spatial dimension ...
        rand_zoom = np.random.uniform(low=self._min_percentage,
                                      high=self._max_percentage,
                                      size=(spatial_rank,))
        # ... then convert it to a zoom factor around 1.0
        self._rand_zoom = (rand_zoom + 100.0) / 100.0
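
As a quick standalone check of that conversion (plain NumPy, outside NiftyNet; the variable names are just for illustration), the INI values (-10.0, 10.0) give zoom factors in [0.9, 1.1]:

import numpy as np

# Sketch of the layer's conversion: sample a percentage offset per spatial
# dimension, then map it to a zoom factor via (p + 100) / 100.
min_percentage, max_percentage = -10.0, 10.0  # values from the INI above
rand_zoom = np.random.uniform(low=min_percentage,
                              high=max_percentage,
                              size=(3,))      # spatial_rank = 3
zoom_factors = (rand_zoom + 100.0) / 100.0
print(zoom_factors)  # each factor lies in [0.9, 1.1]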

So I think the example in the docs should be -20, 20 instead of 0.8,1.2, and I should've used -75, 0 instead of 0.25, 1. (I think with those numbers I actually got a slight opposite effect: a scaling between 1.0025 and 1.01.)
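
To make the arithmetic concrete, here is a small sketch of that mapping applied to the numbers above (zoom_range is just a throwaway helper, not part of NiftyNet):

def zoom_range(min_percentage, max_percentage):
    """Zoom-factor interval implied by a scaling_percentage pair."""
    return ((min_percentage + 100.0) / 100.0,
            (max_percentage + 100.0) / 100.0)

print(zoom_range(-20.0, 20.0))  # (0.8, 1.2)     -- the corrected docs example
print(zoom_range(-75.0, 0.0))   # (0.25, 1.0)    -- what I should have used
print(zoom_range(0.25, 1.0))    # (1.0025, 1.01) -- what I actually got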

fepegar (Collaborator, Author) commented Aug 8, 2018

I actually think the (wrong?) example in the docs uses a better format: it feels more intuitive to write 0.25, 1 for scaling parameters than -75, 0.

wyli closed this as completed in 1f2ffa2 Sep 11, 2018
elitap pushed a commit to elitap/NiftyNet that referenced this issue Feb 16, 2021
Merging train valid monitoring

Closes NifTK#201, NifTK#108, NifTK#194, NifTK#137, and NifTK#187

See merge request !125