
Imaging with Powderday #137

Open
ACDylan opened this issue Aug 2, 2021 · 33 comments
Comments

@ACDylan

ACDylan commented Aug 2, 2021

I would like to produce images following the "Imaging" section of the website:
https://powderday.readthedocs.io/en/latest/quickstart.html#imaging

in order to obtain RGB images (like figure 10 of the Powderday paper: https://arxiv.org/pdf/2006.10757.pdf)
At first, I have set the following parameters in parameters_master:

# Images and SED Parameters
NTHETA = 1
NPHI = 1
SED = True

SED_MONOCHROMATIC = False
FIX_SED_MONOCHROMATIC_WAVELENGTHS = False
SED_MONOCHROMATIC_min_lam = 0.3 # micron
SED_MONOCHROMATIC_max_lam = 0.4 # micron

IMAGING = True
filterdir = '/home/chosson/POWDERDAY/powderday/filters/'
filterfiles = [
    'arbitrary.filter',
#    'galex2500.filter',
#    'ACS_F475W.filter',
#    'ACS_F606W.filter',
#    'ACS_F814W.filter',
#    'B_subaru.filter',
]

npix_x = 128
npix_y = 128

IMAGING_TRANSMISSION_FILTER = False
filter_list = ['filters/irac_ch1.filter']
TRANSMISSION_FILTER_REDSHIFT = 0.001

The goal is to eventually use another filter, like the GALEX one, but for now I am using the arbitrary one.
When I execute powderday, I get the following error:

 [main] exiting raytracing iteration
 [image_write] writing out SEDs
 [image_write] done
 ------------------------------------------------------------
 Total CPU time elapsed:           632.76
 Ended on 02 August 2021 at 21:46:55
 ------------------------------------------------------------

Beginning Monochromatic Imaging RT
Traceback (most recent call last):
  File "/home/chosson/local/Python-3.9.4/bin/pd_front_end.py", line 4, in <module>
    __import__('pkg_resources').run_script('powderday==0.1.0', 'pd_front_end.py')
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/pkg_resources/__init__.py", line 651, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1455, in run_script
    exec(script_code, namespace, namespace)
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/EGG-INFO/scripts/pd_front_end.py", line 204, in <module>
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/powderday/front_end_tools.py", line 103, in make_image
NameError: name 'm_imaging' is not defined
@dnarayanan
Owner

hi, thanks for filing this!

I'll be out for the bulk of the day, but I wonder if this is actually a bug introduced in 54cc264 (maybe, anyway).

can you please try to revert to 661c31d

and see if it works? that will help diagnose this!

@ACDylan
Author

ACDylan commented Aug 2, 2021

Hi,
Sure!

I have switched to 661c31d
and this issue occurs:

Initializing refined index: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 514/514 [01:27<00:00,  5.90it/s]
[front_end_controller:] bounding_box being used
[front_end_controller:] gadget data set detected
[grid_construction]: bbox_lim =  100000.0
Traceback (most recent call last):
  File "/home/chosson/local/Python-3.9.4/bin/pd_front_end.py", line 4, in <module>
    __import__('pkg_resources').run_script('powderday==0.1.0', 'pd_front_end.py')
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/pkg_resources/__init__.py", line 651, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1455, in run_script
    exec(script_code, namespace, namespace)
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/EGG-INFO/scripts/pd_front_end.py", line 74, in <module>
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/powderday/sph_tributary.py", line 35, in sph_m_gen
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/powderday/grid_construction.py", line 49, in yt_octree_generate
  File "/home/chosson/local/Python-3.9.4/lib/python3.9/site-packages/powderday-0.1.0-py3.9.egg/powderday/front_ends/gadget2pd.py", line 300, in gadget_field_add
  File "/home/chosson/POWDERDAY/yt/yt/loaders.py", line 93, in load
    return candidates[0](fn, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'over_refine_factor'

@dnarayanan
Owner

hi - can you possibly make a copy of the snapshot you're trying to run available somewhere, along with parameter files?

@dnarayanan
Owner

dnarayanan commented Aug 3, 2021

oh I know the issue here - it's that the issues fixed by commit 224e78a are cropping up now.

I'll test the imaging myself to see if I can understand if there's a bug.

dnarayanan added a commit that referenced this issue Aug 3, 2021
@dnarayanan
Owner

hi - I think I may have fixed this...can you please pull from master (952af33) and see if this works?

@ACDylan
Author

ACDylan commented Aug 3, 2021

Hi - I have obtained a new file called "convolved.134.hdf5" (related to gizmo_mw), as expected from your quickstart: https://powderday.readthedocs.io/en/latest/quickstart.html#imaging
as well as (input/output).image files.
I still haven't processed it, but overall it seems to be working now!

Just need to see if it also works with my own snapshots.

@ACDylan
Author

ACDylan commented Aug 4, 2021

Hi - beginner questions:

IMAGING = True
filterdir = '/home/chosson/POWDERDAY/powderday/filters/'
filterfiles = [
    'arbitrary.filter',
    'ACS_F814W.filter',
    'wfc3-F125W.filter',
    'wfc3-F160W.filter',
]
npix_x = 256
npix_y = 256

IMAGING_TRANSMISSION_FILTER = False
filter_list = ['filters/irac_ch1.filter']
TRANSMISSION_FILTER_REDSHIFT = 0.001

but my output folder is empty, whereas it contained
"
cell_info.134_0.npz convolved.134.hdf5 grid_physical_properties.134_galaxy0.npz SKIRT.134_galaxy0.gas.particles.txt SKIRT.134_galaxy0.stars.particles.txt stellar_seds.134_galaxy0.npz
"
when I ran the simulation with gizmo_mw. So I only have "output_sim.image" in the main folder; is that normal?
If not, is it because of the multiple filterfiles? Should I run a separate simulation for each filter?

  • Is there an example in powderday's convenience folder showing how to obtain RGB images, like make_image_single_wavelength.py?

@ACDylan
Author

ACDylan commented Aug 16, 2021

Hi - I have a question about how pd performs simulation with filters.
When I ask

filterfiles = [
    'arbitrary.filter',
] 

It completes in a very reasonable amount of time.

However, if I ask for multiple filters like:

filterfiles = [
    'arbitrary.filter',
    'wfc3-F125W.filter',
]

or

filterfiles = [
    'ACS_F814W.filter',
    'arbitrary.filter',
    'wfc3-F125W.filter',
    'wfc3-F160W.filter',
]

Both jobs are still not completed after 100+ hours of computation.

Is that normal?

@ACDylan
Author

ACDylan commented Aug 20, 2021

Update:
After a week of computation, it ended up crashing with the following error:

[main] exiting raytracing iteration
[virgo01:165581] *** An error occurred in MPI_Reduce
[virgo01:165581] *** reported by process [3394371585,0]
[virgo01:165581] *** on communicator MPI_COMM_WORLD
[virgo01:165581] *** MPI_ERR_COUNT: invalid count argument
[virgo01:165581] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[virgo01:165581] ***    and potentially your MPI job)
Run did not complete successfully: output file appears to be corrupt
delta_chunk_indices =  730
Entering Pool.map multiprocessing for Stellar SED generation
Execution time for SED generation in Pool.map multiprocessing = 0:01:24.301950
[SED_gen: ] total_lum_in_sed_gen =  92053.93069181351
adding point source collections
Non-Cosmological Simulation: Adding Disk and Bulge Stars:
adding disk stars to the grid: adding as a point source collection
[source_creation/add_bulge_disk_stars:] totallum_disktars =  1.5511171212480686e+39
[source_creation/add_binned_seds:] Execution time for point source collection adding = 0:00:06.261077
[source_creation/add_binned_seds:] Total Luminosity of point source collection is:  1.0232616928666393e+44
Done adding Sources
Setting up Model
[pd_front_end]: Beginning RT Stage: Calculating SED using a binned spectrum
Beginning Monochromatic Imaging RT
An error occurred, and the run did not complete

@dnarayanan
Owner

Hi - very sorry for the delay. The run typically shouldn't take a whole week, though it is true that imaging is the slowest part of the code, and each filter you add contributes a roughly linear amount of extra computation per wavelength in the filter file. So this could become quite onerous.

Are you able to get the code to finish with just a very simple filter file that has only one or two wavelengths just to debug this?

@ACDylan
Author

ACDylan commented Aug 20, 2021

Hi - Thank you again for your time.
With the "Arbitrary Filter", it takes less than one hour and works well.
Computation takes longer when I change (or add) another filter.

Should I try setting FIX_SED_MONOCHROMATIC_WAVELENGTHS to True over a range of, say, 0.3-0.4 micron?

@dnarayanan
Owner

FIX_SED_MONOCHROMATIC_WAVELENGTHS is useful for running an SED calculation at specific wavelengths -- this can be useful if you want to highly resolve nebular lines (for example), or to have the radiative transfer run at the exact wavelengths the FSPS models are created at.

What is your overall goal? That might help enable me to point you in the right direction.

@ACDylan
Author

ACDylan commented Aug 20, 2021

My overall goal is first to produce SEDs of isolated galaxies using my own snapshots (and possibly compare them with SKIRT's), and also to produce "realistic" RGB images of these galaxies.

e.g. Figure 10 of the Powderday paper: https://arxiv.org/pdf/2006.10757.pdf
Figure 2 of Snyder et al. (2015a) https://ui.adsabs.harvard.edu/abs/2015MNRAS.451.4290S/abstract

@dnarayanan
Owner

Okay great!

For the images -- Greg Snyder made these himself. I'm linking the codes he used and a short description he gave to me via email.

The functions I use to make the RGB objects are here:
https://github.com/gsnyder206/synthetic-image-morph/blob/master/make_color_image.py
And a totally not-random usage example is here (ignore the parent folder that was probably a whoopsie):
https://github.com/gsnyder206/mock-surveys/blob/master/original_illustris/powderday_rgb.py
 
This code takes a little getting used to, involving some trial and error, unit conversion etc.  I almost always use the function “make_interactive_nasa” which makes bright parts look white as in STScI press releases.  Aside from the fudge factors and units, there are 2 parameters that control the image brightness and features – alph and Q.  I usually start by setting Q to almost zero (1e-10), and then adjust alph coarsely until I like the way the lowest surface brightness features look. With Q tiny, the image might be totally saturated in the center, but that’s expected.  Then, when the faint fuzzy parts are to my liking, I make Q more like ~1 and adjust from there to get the middle saturation and color looking sensible.
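The actual make_interactive_nasa code lives in the linked repo, but the alph/Q behaviour Greg describes matches the standard asinh ("Lupton") stretch. As an assumption about what the function is doing internally, a minimal sketch of that stretch looks like:

```python
import numpy as np

def asinh_rgb(r, g, b, alph=1.0, Q=1.0):
    """Lupton-style asinh stretch (a sketch, not Greg's actual code).
    alph sets the brightness of faint features (tune it first with Q
    near zero); Q controls how quickly bright regions saturate."""
    I = np.maximum((r + g + b) / 3.0, 1e-30)   # mean intensity, avoid /0
    stretch = np.arcsinh(alph * Q * I) / (Q * I)
    # clip each scaled channel to [0, 1] and stack into (ny, nx, 3)
    return np.stack([np.clip(c * stretch, 0.0, 1.0) for c in (r, g, b)],
                    axis=-1)
```

With Q ~ 1e-10 the stretch reduces to a linear scaling by alph (since arcsinh(x) ≈ x for small x), which is why the center can saturate completely at that stage, exactly as described above.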

@ACDylan
Author

ACDylan commented Aug 23, 2021

Hi - In Greg Snyder's example, he loads files named 'image.1.0.angle'+angle+'.npz' and uses them as the R, G or B channels like

r=0.75*1.0e-35*((np.load('image.1.0.angle'+angle+'.npz'))['image'])
g=1.0*1.0e-35*((np.load('image.0.5.angle'+angle+'.npz'))['image'])
b=2.5*1.0e-35*((np.load('image.0.3.angle'+angle+'.npz'))['image'])

However, my simulations gave me the following:

  • cell_info.240_0.npz
  • convolved.240.hdf5
  • grid_physical_properties.240_galaxy0.npz
  • SKIRT.240_galaxy0.gas.particles.txt / SKIRT.240_galaxy0.stars.particles.txt
  • stellar_seds.240_galaxy0.npz

Am I missing an option in parameters_master.py ?

@dnarayanan
Owner

dnarayanan commented Aug 23, 2021

oh sorry about that - these are just npz files I packaged for greg. they're just the 2D image array at a given wavelength -- i.e. they look like this:

In [2]: data = np.load('image.0.3.angle2.npz')

In [3]: data.files
Out[3]: ['image']

In [4]: data['image']
Out[4]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])

In [5]: data['image'].shape
Out[5]: (768, 768)

@ACDylan
Author

ACDylan commented Aug 24, 2021

Thanks for the clarification.
Sorry, I still have questions regarding pd and its behaviour:
Does "at a given wavelength" mean that a pd simulation was performed at that wavelength (i.e. with a parameter or a specific filter)? Or is it possible to select the wavelength afterwards from the output files? In general, it is still not clear to me how to run pd at a specific wavelength.
This would help me figure out which files to use in place of 'image.0.3.angle2.npz' (for the r, g and b arrays in Snyder's python script).

@ACDylan
Author

ACDylan commented Aug 24, 2021

Should I use

  • cell_info.240_0.npz
    
  • grid_physical_properties.240_galaxy0.npz
    
  • stellar_seds.240_galaxy0.npz?
    

Because, inspecting the files, they contain:

>>> data = np.load('cell_info.240_0.npz')
>>> data.files
['refined', 'fc1', 'fw1', 'xmin', 'xmax', 'ymin', 'ymax', 'zmin', 'zmax']
>>> data = np.load('grid_physical_properties.240_galaxy0.npz')
>>> data.files
['particle_fh2', 'particle_fh1', 'particle_gas_mass', 'particle_star_mass', 'particle_star_metallicity', 'particle_stellar_formation_time', 'grid_gas_metallicity', 'grid_gas_mass', 'grid_star_mass', 'particle_sfr', 'particle_dustmass']
>>> data = np.load('stellar_seds.240_galaxy0.npz')
>>> data.files
['nu', 'fnu', 'lam', 'flam', 'README']

@dnarayanan
Owner

sorry - what I mean is:

  1. run a simulation with imaging on
  2. you can read in the .image HDF5 file with something like convenience/make_image_single_wavelength
  3. in the above, the line:
image = m.get_image(units='ergs/s')

will contain the 2D image array. you could use this to create an npz file to use with greg's code, or alternatively change up his code to use the 2D array information from the imaging HDF5 file -- i.e. basically merge what's happening in the make_image_single_wavelength.py file with the lines:

r=0.75*1.0e-35*((np.load('image.1.0.angle'+angle+'.npz'))['image'])
g=1.0*1.0e-35*((np.load('image.0.5.angle'+angle+'.npz'))['image'])
b=2.5*1.0e-35*((np.load('image.0.3.angle'+angle+'.npz'))['image'])

another way of saying this is: for the example in the powderday paper, I ran 3 images at 1 micron (r), 0.5 micron (g), and 0.3 micron (b) -- these are the 1, 0.5 and 0.3 in the file names above. then I saved the 2D arrays of those images (using the m.get_image line above to extract the array) as npz files for greg's code. you could probably just merge these processes, though, and skip the npz part
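Concretely, the extraction-plus-save step described above could be sketched as follows. A synthetic array stands in for the real image.val / image.wav returned by Hyperion's m.get_image(units='ergs/s'), so the shapes and grid here are purely illustrative:

```python
import numpy as np

# Stand-ins for image.wav and image.val from m.get_image();
# shapes and wavelength grid are illustrative only.
wav_grid = np.array([0.3, 0.5, 1.0])               # micron
val = np.random.rand(1, 128, 128, wav_grid.size)   # (n_view, ny, nx, n_wav)

for wav in (1.0, 0.5, 0.3):
    iwav = int(np.argmin(np.abs(wav - wav_grid)))  # nearest-wavelength index
    # save one 2D array per wavelength, named like the files in Greg's script
    np.savez('image.%s.angle0.npz' % wav, image=val[0, :, :, iwav])
```

Each resulting file then holds a single 'image' key with the 2D array, matching the np.load(...)['image'] pattern in the r/g/b lines above.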

@ACDylan
Author

ACDylan commented Aug 25, 2021

Hi - Thank you for your kind help and your time, really appreciated.

I have decided to merge both codes into one (using Greg's external make_color_image only for the make_interactive_nasa function).

The main core of my function (to compute r, g and b) is:

import numpy as np
from hyperion.model import ModelOutput

m = ModelOutput('/home/chosson/simulation/arbi/output_sim.image')

# Get the image from the ModelOutput object
image = m.get_image(units='ergs/s')

# Find the closest wavelength at 1 micron
wav = 1.0  # micron
iwav = np.argmin(np.abs(wav - image.wav))
r=0.75*1.0e-35*image.val[0, :, :, iwav]

# Find the closest wavelength at 0.5 micron
wav = 0.5  # micron
iwav = np.argmin(np.abs(wav - image.wav))
g=1.00*1.0e-35*image.val[0, :, :, iwav]

# Find the closest wavelength at 0.3 micron
wav = 0.3  # micron
iwav = np.argmin(np.abs(wav - image.wav))
b=2.50*1.0e-35*image.val[0, :, :, iwav]

Is the computation of r, g and b with image.val[0, :, :, iwav] correct, per your sentence: "I ran 3 images at 1 micron (r), 0.5 microns (g), and 0.3 microns (b) [...], then I saved the 2D arrays of those images"?
I ask because my 2D arrays look strange.
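When a 2D array "looks strange", a quick numeric sanity check often narrows the problem down before any color scaling is applied. A small diagnostic sketch (the function name and thresholds are just illustrative):

```python
import numpy as np

def describe_channel(name, arr):
    """Print statistics that commonly reveal problems in an image
    channel: all-zero arrays, NaNs from unit conversion, or a huge
    dynamic range that needs a different stretch."""
    finite = np.isfinite(arr)
    print(name,
          'min=%.3e' % arr[finite].min(),
          'max=%.3e' % arr[finite].max(),
          'zeros=%.1f%%' % (100.0 * np.mean(arr == 0)),
          'nans=%d' % np.count_nonzero(~finite))
```

Running this on each of r, g and b (e.g. `describe_channel('r', r)`) quickly shows whether the 1e-35-style scale factors are appropriate for the values actually in the arrays.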

@dnarayanan
Owner

Hi, I think this should be right -- assuming that you have run an imaging run with a filter set up at 0.3, 0.5 and 1 micron.

What looks strange about the 2D arrays?

@ACDylan
Author

ACDylan commented Aug 29, 2021

Hi - Sorry for my late answer.
Indeed it is right; let me explain. The strange 2D arrays were from my own snapshots. I then moved to gizmo_mw_zoom as a test, to see whether both my simulation parameters and my python code are working.
I have attached the image I obtained with gizmo_mw_zoom here:

(image attached)

I am really satisfied with it (thank you again for your help)!

However, back to my own snapshot (with the same filters/parameters, RGB python code, etc.), this is what I get:
(image attached)

This is probably why I was suspecting the '.image' files of my own snapshots.
Do you have any suggestion that might explain why this is not working with my snapshots?

@dnarayanan
Owner

great glad we're getting somewhere good!

a few things come to mind:

a. first, I think there is some tuning necessary in greg's code to get the image to be aesthetically pleasing. here, it looks like there's some saturation in the middle of the disk.

b. I've found that increasing the photon count can have a dramatic impact on images. what photon count are you using? perhaps try increasing it by a factor of 10 to see if it helps?

@ACDylan
Author

ACDylan commented Aug 30, 2021

I used 1e6 photons and moved to 1e7.
Edit: I am running a 1e8 simulation to see if it changes anything.

I have modified parameters in greg's code but still obtained an image like

(image attached)

Could this come from the filters used?
For both personal snapshot and gizmo_mw_zoom, I used:
u_SDSS for 0.3 micron
g_SDSS for 0.5 micron
z_SDSS for 1 micron.

@ACDylan
Author

ACDylan commented Sep 2, 2021

Hi - It seems that 1e8 photons take quite a while to run (still not finished).
Can snapshot resolution also play a role in the imaging?
Your gizmo_mw_zoom is 6 GB, while my mock test is 50 MB.

@dnarayanan
Owner

dnarayanan commented Sep 2, 2021

hi yes - lower resolution simulations with few particles can only be so refined and will therefore limit the resolution of the image

@ACDylan
Author

ACDylan commented Sep 11, 2021

Hi - I performed a new simulation with more particles, increasing the resolution.
Then, regarding Powderday, I set the following parameters:

# RT Information
n_photons_initial = 1e7
n_photons_imaging = 1e7
n_photons_raytracing_sources = 1e7
n_photons_raytracing_dust = 1e7

for two different pixel sizes (running two simulations):
With npix_x = 512 / npix_y = 512:
(image attached)

And with npix_x = 1024 / npix_y = 1024:
(image attached)

As you can see, I have a pretty good rendering of the galaxy here.
However, are we seeing a lot of random noise introduced by the finite Monte Carlo sampling used by Hyperion/Powderday?
It seems that adding more pixels makes the rendering better, but also makes the noise problem worse.
And I don't think it is a plotting or scaling issue.

I am running a simulation with 1e8 photons for npix = 1024 to see if I have a difference.

@ACDylan
Author

ACDylan commented Sep 15, 2021

Hi - Sorry to bother you, but do you have any idea where the noise is coming from, and whether it would be possible to remove it?

@dnarayanan
Owner

Hi - there are generally two possibilities here:

(1) low photon count. 1e8 will be expensive but will settle whether or not increasing the photon count makes things better
(2) low particle count in the parent simulation. certainly I've seen much better results in very high particle count simulations.

can you describe a bit more the parent simulation?

@ACDylan
Author

ACDylan commented Sep 15, 2021

Hi -
Here are the particle counts of the parent simulation:
NGAS = 1e6 ----> mgas_res = 8.593e3 Msun
NHALO = 1e6 ----> mdark_res = 1.254e6 Msun
NDISK = 1e6 ----> mdisk_res = 3.4373e4 Msun
NBULGE = 1.25e5 ----> mbulge_res = 3.4373e4 Msun (bulge-to-total disk ratio = 0.1, bulge-to-stellar disk = 0.125)

@dnarayanan
Owner

Hi,

Ah yeah - so this may be somewhat limited by NGAS. You could try running a simulation with a factor of 8 higher resolution to see if it looks better. Comparing just a few early snapshots of that run with this one might be telling (at least once enough metals have formed that you have enough dust, since presumably your dust content is set by the dust-to-metals ratio). I'm guessing this is an idealized disk, so hopefully not too difficult to run?

@ACDylan
Author

ACDylan commented Sep 15, 2021

Hi,
Exactly, so I'm going to run a new simulation right away with your advice.
Thank you again.

@dnarayanan
Owner

great keep me posted!
