How to handle memory error while modeling a dataset? #189

Closed
joaofrancafisica opened this issue Apr 9, 2022 · 2 comments

@joaofrancafisica

Hi all, my name is João França. I am a PhD student at CBPF, Brazil, and I plan to write my thesis on strong lensing systems. PyAutoLens has been helping a lot! I am trying to fit 20 different strong lens systems using the DynestyStatic search from autofit, but it keeps returning a memory error ([Errno 12] Cannot allocate memory) at around the 10th system. The code is structured similarly to the "fitting multiple datasets" tutorial on the autofit website: I have a class object that takes each image path, builds the imaging object (as well as each model) and runs the optimization. Looking at the task manager, it seems that all results are kept in memory, as a cache, until the error arises. My laptop has 8 GB of RAM and a 10th-gen Intel i5. Is there something I can try to avoid this problem? I have tried the SQL database option and disabled the prior_passer options in the ini config file, but it doesn't seem to help. I am sorry if I am missing something.

Thanks,

João.

@Jammy2211
Owner

Hi,

Could you send the full script you are using to model the lenses?

I suspect the solution is simply to run the script for each lens one at a time, as opposed to modeling all lenses in the same script (I assume you have a for loop in your script cycling over the data).

So, something like:

from os import path
import sys

workspace_path = path.join(path.sep, "your", "workspace", "path")

"""
__AUTOLENS + DATA__
"""
import autofit as af
import autolens as al

dataset_names = [
    "lens_0",
    "lens_1",
    "lens_2",
]

# sys.argv[1] is the first value passed in when Python is run, e.g. `python3 model.py 0`.
dataset_name = dataset_names[int(sys.argv[1])]

dataset_path = path.join(workspace_path, "dataset", dataset_name)

pixel_scales = 0.1  # placeholder: set this to the pixel scale of your imaging data

imaging = al.Imaging.from_fits(
    image_path=path.join(dataset_path, "image_lens_light_scaled.fits"),
    psf_path=path.join(dataset_path, "psf.fits"),
    noise_map_path=path.join(dataset_path, "noise_map_scaled.fits"),
    pixel_scales=pixel_scales,
    name=dataset_name,
)

If the above script is called model.py and you run python3 model.py 0, the 0 is passed to int(sys.argv[1]), so the "lens_0" string is selected and that dataset is loaded and used.

You can then simply run python3 model.py 1 and so on to model all lenses.
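
To illustrate, here is a minimal sketch of how the rest of such a per-lens script might follow on from the imaging loaded above; the redshifts, mask radius, nlive value and search name below are placeholders, not recommendations:

# Mask the imaging (radius is a placeholder value, in arcsec).
mask = al.Mask2D.circular(
    shape_native=imaging.shape_native,
    pixel_scales=imaging.pixel_scales,
    radius=3.0,
)
imaging = imaging.apply_mask(mask=mask)

# Lens and source models (placeholder redshifts).
lens = af.Model(al.Galaxy, redshift=0.5, mass=al.mp.EllIsothermal)
source = af.Model(al.Galaxy, redshift=1.0, bulge=af.Model(al.lmp.EllSersic))

model = af.Collection(galaxies=af.Collection(lens=lens, source=source))

# unique_tag keeps each lens's output in its own folder, so separate
# `python3 model.py <index>` runs do not clash with one another.
search = af.DynestyStatic(
    path_prefix=path.join(workspace_path, "output"),
    name="mass[sie]_source[sersic]",
    unique_tag=dataset_name,
    nlive=50,
)

analysis = al.AnalysisImaging(dataset=imaging)

result = search.fit(model=model, analysis=analysis)

Because each lens is fit in its own Python process, all memory used by one fit is released when that process exits, before the next lens is started.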

@joaofrancafisica
Author

Thanks for your answer!

Could you send the full script you are using to model the lenses?

Sure, here is the class that manages the fit process:

import numpy as np
import autofit as af
import autolens as al


class pipeline:

    def __init__(self, system_identifier_name, lens_redshift, source_redshift, pixel_scales, core_usage=5, session=None, dataset_name=None):

        self.system_identifier_name = system_identifier_name
        self.lens_redshift = lens_redshift
        self.source_redshift = source_redshift
        self.pixel_scales = pixel_scales
        self.core_usage = core_usage
        self.session = session
        self.dataset_name = dataset_name

    def fit_system_autolens(self, residual_image, original_noise_map, original_psf, nlive, fit_mcmc=True, system_radius=8.0):

        image_object = al.Imaging(
            al.Array2D.manual(np.array(residual_image, dtype=float), pixel_scales=self.pixel_scales),  # cutout
            al.Array2D.manual(np.array(original_noise_map, dtype=float), pixel_scales=self.pixel_scales),  # noise map
            al.Kernel2D.manual(np.array(original_psf, dtype=float), pixel_scales=self.pixel_scales, shape_native=(100, 100)),  # psf
        )

        mask = al.Mask2D.circular(shape_native=image_object.shape_native, pixel_scales=image_object.pixel_scales, radius=system_radius)
        masked_object = image_object.apply_mask(mask=mask)

        # source galaxy model
        bulge = af.Model(al.lmp.EllSersic)
        source_galaxy_model = af.Model(al.Galaxy, redshift=self.source_redshift, bulge=bulge)

        # lens galaxy model
        lens_galaxy_model = af.Model(al.Galaxy, redshift=self.lens_redshift, mass=al.mp.EllIsothermal)

        autolens_model = af.Collection(galaxies=af.Collection(lens=lens_galaxy_model, source=source_galaxy_model))

        # autolens fit of the full brightness distribution
        if fit_mcmc:
            search = af.Emcee(
                path_prefix='./',
                name=str(self.system_identifier_name) + '_source_light',
                unique_tag=self.dataset_name,
                session=self.session,
            )
        else:
            print(str(self.system_identifier_name) + '_source_light', self.dataset_name)
            search = af.DynestyStatic(
                path_prefix='./',
                name=str(self.system_identifier_name) + '_source_light',
                unique_tag=self.dataset_name,
                nlive=nlive,
                number_of_cores=self.core_usage,  # be careful here! verify your core count
                session=self.session,
            )

        analysis = al.AnalysisImaging(dataset=masked_object)
        result = search.fit(model=autolens_model, analysis=analysis)
        return result

But I agree, running a separate script for each lens should be a better solution to this problem. Thank you!
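
For reference, a minimal sketch of what such a per-lens driver script could look like when wrapped around the class above; the system list, redshifts, pixel scales and the load_system helper are all hypothetical placeholders:

import sys

from pipeline import pipeline  # the class above, assumed to be saved in pipeline.py

# Hypothetical list of systems; names, redshifts and pixel scales are placeholders.
systems = [
    {"name": "system_0", "z_lens": 0.5, "z_source": 1.0, "pixel_scales": 0.05},
    {"name": "system_1", "z_lens": 0.4, "z_source": 1.2, "pixel_scales": 0.05},
]

system = systems[int(sys.argv[1])]  # e.g. `python3 run_one.py 0`

# load_system is a hypothetical helper that returns the residual image,
# noise map and PSF arrays for one system.
residual_image, noise_map, psf = load_system(system["name"])

fitter = pipeline(
    system_identifier_name=system["name"],
    lens_redshift=system["z_lens"],
    source_redshift=system["z_source"],
    pixel_scales=system["pixel_scales"],
    dataset_name=system["name"],
)

result = fitter.fit_system_autolens(
    residual_image=residual_image,
    original_noise_map=noise_map,
    original_psf=psf,
    nlive=250,       # placeholder
    fit_mcmc=False,  # use DynestyStatic rather than Emcee
)

Running each system with its own python3 run_one.py <index> call means the memory used by one fit is released when that process exits, before the next one starts.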
