yt-4.0 adding "gather" approach to arbitrary_grid and slice SPH pixelization routines #1828
This PR adds the ability to use SPH gather smoothing in the arbitrary_grid and slice pixelization routines. For example:
```python
import yt
from yt.units import kpc

ds = yt.load('GadgetDiskGalaxy/snapshot_200.hdf5')
_, c = ds.find_max(('gas', 'density'))
width = 50 * kpc
field = ('gas', 'density')

ds.sph_smoothing_style = "gather"
ds.num_neighbors = 40

# slice
yt.SlicePlot(ds, 'x', field, center=c, width=width)

# arbitrary grid (the grid dimensions below are illustrative; the original
# values were lost in formatting)
ag = ds.arbitrary_grid(c - width / 2, c + width / 2, [800] * 2 + [1])
dens2 = ag[field][:, :, 0].d
```
Here is a comparison test using an Arepo dataset of mine (since smoothing lengths for Arepo are weird, one should not take this test too seriously):
```python
import yt

ds = yt.load("snapshot_300.hdf5")
#ds.sph_smoothing_style = "gather"
#ds.num_neighbors = 16

slc = yt.SlicePlot(ds, "z", [("gas", "density")], width=(500.0, "kpc"), center="max")
slc.save()
```
Note that I comment and uncomment the two lines above to produce the different plots.
With gather and num_neighbors = 16 (plot not shown):
Note that gather smooths it out a bit; decreasing num_neighbors reduces the smoothing (comparison plots not shown).
The differences between these are all somewhat subtle, however. I think it's pretty good.
I should also note that when I ran this on the whole box (~40 Mpc) it was very slow, despite the fact that most of the particles are in the central region. There are ~20 million particles in this dataset.
This is just for information; I don't think we should judge the merits of this PR (which is awesome!) based on this Arepo dataset, considering that Arepo support is still experimental and limited to my fork at the moment.
Thanks for checking it out!
I'm actually out shopping at the moment, but I'll look at the plots when I'm home.
Can you tell which parts were slow? Was it the initial KDTree build or the neighbor finding? There should be progress bars showing what's happening, expected times, etc.
EDIT: In my experience you get optimal neighbor-search efficiency when leaves contain around 1.5-2x the number of desired neighbors, so for 16 neighbors the default leafsize of 64 is a bit suboptimal.
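To illustrate the leafsize / neighbor-count trade-off outside of yt, here is a rough sketch using SciPy's cKDTree (this is not the tree yt uses internally, and the particle counts and timings are purely illustrative):

```python
import time
import numpy as np
from scipy.spatial import cKDTree

pos = np.random.random((2_000_000, 3))   # toy particle positions

for leafsize in (16, 32, 64, 128):
    t0 = time.time()
    tree = cKDTree(pos, leafsize=leafsize)
    t_build = time.time() - t0

    t0 = time.time()
    tree.query(pos[:100_000], k=16)      # 16-neighbour lookups
    t_query = time.time() - t0

    print(f"leafsize={leafsize:4d}: build {t_build:.2f}s, query {t_query:.2f}s")
```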
The other thing is that I can't tell from your reply whether or not it had to build the KDTree; this can take around 10 seconds (I think) for 20 million particles. We do have some loose plans to optimize a lot of this in the future using "spatial" chunking rather than "io" chunking, if the performance hit really is in the neighbor finding / interpolation.
I just tested on a different dataset with 6 million particles, and the interpolation step is slower than I'd like due to a memory-conservative approach which requires a random lookup. Another issue was that the KDTree kept being rebuilt, which took about 30 seconds; this is a bug which I need to look into separately. If you can reproduce this, I'll make an issue and look into it.
KDTree: ~30 seconds to build (it should be loaded from memory almost instantly)
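If it helps, a rough way to check whether the tree is being rebuilt on every plot is to time two consecutive slices (a sketch reusing the dataset and settings from above; not part of the PR):

```python
import time
import yt

ds = yt.load("snapshot_300.hdf5")
ds.sph_smoothing_style = "gather"
ds.num_neighbors = 16

# Time two consecutive slices; if the second one is ~30 s slower than expected,
# the KDTree is probably being rebuilt rather than reused.
for i in range(2):
    t0 = time.time()
    yt.SlicePlot(ds, "z", ("gas", "density"), width=(500.0, "kpc"), center="max")
    print(f"slice {i}: {time.time() - t0:.1f} s")
```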
Are you also getting the ~30 second KDTree rebuild on every plot?
@nathangoldbaum, after re-reading Price's paper and thinking about this a bit, I'm convinced that yes, you should be using the 3D kernel for a slice plot.
Just to check: you do this by setting the 'pixels' at the slice plane, then doing a neighbour search around them to find the N_ngb nearest particles, and then constructing the density/weighted quantity you want for that pixel (as a 3D quantity)?
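For reference, here is a minimal sketch of that pixel-centered gather interpolation, assuming SciPy's cKDTree and a standard 3D cubic-spline kernel (an illustration of the idea, not the actual implementation in this PR):

```python
import numpy as np
from scipy.spatial import cKDTree

def cubic_spline_3d(q):
    """Standard M4 cubic-spline kernel in 3D (support 2h, q = r / h),
    without the 1/h^3 factor; sigma = 1/pi is the 3D normalisation."""
    sigma = 1.0 / np.pi
    w = np.where(q < 1.0,
                 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def gather_slice(pos, mass, dens, quantity, pixels, n_ngb=16):
    """Gather-interpolate a particle field onto pixel centres lying on the
    slice plane.  pos: (N, 3) particle positions; pixels: (M, 3) 3D pixel
    centres; mass, dens, quantity: per-particle arrays of length N."""
    tree = cKDTree(pos)
    # 3D neighbour search around each pixel centre
    dist, idx = tree.query(pixels, k=n_ngb)
    # gather smoothing length: kernel support (2h) set to the distance of
    # the n_ngb-th neighbour, so h = d_N / 2 (conventions vary)
    h = dist[:, -1] / 2.0
    q = dist / h[:, None]
    w3d = cubic_spline_3d(q) / h[:, None] ** 3
    # SPH estimate at each pixel: sum_j (m_j / rho_j) A_j W(|r - r_j|, h)
    return np.sum(mass[idx] / dens[idx] * quantity[idx] * w3d, axis=1)
```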