demix steps fail after array exception warning #258
Comments
Hi maria-mmtz, unfortunately I am not able to reproduce your error with a raw LBA data set. Have these data already been pre-processed? What are your integration time and frequency resolution, and what is the total number of input frequency channels and time steps compared to your demixing step parameters?
Hi @adrabent,
My data are HBA and I downloaded them from the LTA, so I suppose they have already been processed to a certain degree?
I am not sure how to answer these questions accurately... The integration time is 1 s with 64 channels per subband and 243 subbands in total; the averaging steps are 1.0 in time and 4.0 in frequency. In the demixing, the averaging steps are 10 in time and 16 in frequency. Does that help? If not, what else should I look into?
Hmm... I was just wondering if demix might need a regular grid, i.e. if you have, let's say, 600 time steps and you use a demix_timestep of 77, then it could crash.
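A quick way to test this idea (a hypothetical helper with the numbers from this thread, not anything DPPP itself provides) is to check whether the demix averaging steps divide the data dimensions evenly:

```python
def check_demix_grid(n_timesteps, n_channels, demix_timestep, demix_freqstep):
    """Report demix step sizes that do not divide the data evenly.

    Hypothetical check: an uneven grid is one plausible trigger for
    out-of-range array slices like the one in the warning above.
    """
    problems = []
    if n_timesteps % demix_timestep != 0:
        problems.append(
            f"{n_timesteps} time steps is not a multiple of "
            f"demix_timestep={demix_timestep}"
        )
    if n_channels % demix_freqstep != 0:
        problems.append(
            f"{n_channels} channels is not a multiple of "
            f"demix_freqstep={demix_freqstep}"
        )
    return problems

# 600 time steps with a demix_timestep of 77 leaves a ragged last block:
print(check_demix_grid(600, 64, 77, 16))
# 600 time steps with demix_timestep=10 and 64 channels with
# demix_freqstep=16 divide evenly, so nothing is reported:
print(check_demix_grid(600, 64, 10, 16))  # → []
```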
@adrabent yes, I had uploaded two MSs, one of the target (L733787_SB145_uv.MS) and one of the calibrator (L733793_SB145_uv.MS), at /data/scratch/moutzouri
I made some modifications to the target pipeline.
Hi, I still get an error; however, it seems to be a different one, and I can't quite figure out what happened. I'm attaching the log file.
I seem to recall the new error had something to do with copying data. Are you running out of disk space perhaps? |
Hi, I ran it again after freeing up some space. The target data folder is about 5 TB and I have 12 TB free. It fails again and the output is the same.
I just ran into this error myself (though not in prefactor), and it was due to running out of memory. You might watch the memory usage (e.g., with "top") while it's running to see if this could be the problem. |
Hello, it seems that after the pipeline stopped, NDPPP was still eating up all the memory, so I manually forced it to stop. I ran it again and it looks like it worked a little better, but it's still unsuccessful.
Hi, I compressed the log file from the previous run |
That log shows
Does it (still) exist? If it does, it might have become corrupted (for example, if a previous run crashed or got killed mid-write) and you may need to get a fresh copy of it.
@maria-mmtz |
Hi, I did that and now I get this message that I didn't get before: |
It tries to predict the A-Team sources and to write this into the This file is created in the step |
Hi, I think I've tackled this issue and the memory problem (it looks like it was using more than 200 GB before it crashed again; is that normal?). I'm now receiving this message:
@maria-mmtz, since the original issue was solved, I am closing this issue. If you encounter any other/new issues, please open a new thread.
Hi,
I have been trying to reduce my data using prefactor; however, my target is very close to A-team sources. During the pipeline run for the target, I received this warning before it failed for each subband:
```
WARNING node.852592d2a0bb.executable_args.L733787_SB243_uv.MS: /opt/lofarsoft/bin/NDPPP stderr:
std exception detected: ArrayBase::operator()(b,e,i) - incorrectly specified
begin:       [0, 0]
end:         [74, 74]
incr:        [1, 1]
array shape: [62, 62]
required: b >= 0; b <= e; e < shape; i >= 0
```
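For what it's worth, the validity check NDPPP is complaining about can be mimicked in a few lines (a sketch of the per-axis slice rule quoted in the warning, not casacore's actual implementation). With the numbers above, the end index [74, 74] falls outside an array of shape [62, 62]:

```python
def valid_slice(begin, end, incr, shape):
    """Per-axis slice check as stated in the warning:
    required: b >= 0; b <= e; e < shape; i >= 0."""
    return all(
        b >= 0 and b <= e and e < s and i >= 0
        for b, e, i, s in zip(begin, end, incr, shape)
    )

# The values from the warning: end [74, 74] exceeds shape [62, 62].
print(valid_slice([0, 0], [74, 74], [1, 1], [62, 62]))  # → False
```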
I am not sure how to rerun DPPP outside the pipeline to check how the demix steps are working, but I am attaching my latest log file and parset (changed to .txt so I can upload it) in case it helps.
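In case it helps for debugging: NDPPP can be run standalone on a single subband with a small parset, something along these lines. The file names, sky model, source list, and step values here are placeholders assembled from this thread, not a tested configuration; check them against the DPPP demixer documentation before use:

```
msin = L733787_SB145_uv.MS
msout = L733787_SB145_uv.demixed.MS
steps = [demix]
demix.type = demixer
demix.subtractsources = [CasA, CygA]
demix.skymodel = Ateam.sourcedb
demix.timestep = 10
demix.freqstep = 16
demix.demixtimestep = 10
demix.demixfreqstep = 16
```

Then run it as `NDPPP demix.parset` and watch the output for the same array exception.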
Pre-Facet-Target.parset.txt
pipeline-Pre-Facet-Target-2019-10-10T12:14:58.log