WARNING:root:WARNING: First collocation point, ... is at a baseline shorter than the shortest deprojected baseline in the dataset #143

Closed
bjnorfolk opened this issue Nov 27, 2020 · 13 comments

@bjnorfolk

Hi,
I've recently updated frank while writing my own personal package, which wraps my visibility plotting code and has an option to model the visibilities with frank. I had previously been working on an old branch from Richard Booth. The updated frank prints a new warning:
WARNING:root:WARNING: First collocation point, q[0] = 2.632e+04 \lambda, is at a baseline shorter than the shortest deprojected baseline in the dataset, min(uv) = 2.939e+04 \lambda. For q[0] << min(uv), the fit's total flux may be biased low.
This only happens for our low-obs ATCA data, and it produces weird fits.
The old fits in the image below didn't trigger the warning and have a low chi2 value (I computed chi2 as (real_data - model_data)**2 / real_data_err).

image

The new fits, however, oscillate strangely for the same alpha and ws choices, the chi2 is significantly higher, and the errors produced by the uv binner are larger.
image

@jeffjennings (Collaborator) commented Nov 27, 2020

Hey Brodie,

So if I understand correctly, you didn't have this warning when you were using an older version of frank, but you've since pulled the latest version of master and are now getting it.

The warning indicates that the shortest baseline in the frank fit (the first collocation point in visibility space) is at a smaller baseline than the shortest baseline in your dataset (you can see this in your new plot, where the frank fit lines extend to shorter baselines than the first datapoint). This could make the fit erroneous because it has to extrapolate the visibility amplitude to baselines on which there is no information. The data look like they have a nontrivial slope at short baselines, so the fit could predict that the visibility amplitude keeps rising at short baselines (as it appears to have done), when instead it should probably plateau.

Are you explicitly passing in baselines at which to evaluate the Hankel transform of the brightness profile? (These points are the q array in DiscreteHankelTransform.transform.)
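
For reference, a minimal sketch of such a check, assuming a fitted solution sol from FrankFitter and raw u, v, vis arrays (the same objects that appear in the fitting code later in this thread):

import numpy as np

#Sketch: compare the first collocation point of the fit with the
#shortest deprojected baseline in the data
u_dep, v_dep, vis_dep = sol.geometry.apply_correction(u, v, vis)
baselines = np.hypot(u_dep, v_dep)

print("First collocation point q[0] = {:.3e} lambda".format(sol.q[0]))
print("Shortest deprojected baseline = {:.3e} lambda".format(baselines.min()))
if sol.q[0] < baselines.min():
    print("The fit extrapolates to baselines shorter than the data cover.")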

It's a bit hard to judge by eye, but it looks to me like the errorbar in each uv bin is the same size in the new plot as in the old plot, just that the new plot itself is taller. This would suggest your deprojected visibilities haven't changed. Or did you verify numerically that the new errorbars are larger?

Thanks!

@rbooth200 (Collaborator)

@jeffjennings - thanks for getting on this so quickly. @bjnorfolk's dataset is a bit different from the ALMA ones we're used to (low SNR, sparser uv coverage), so I'll add my two cents.

@bjnorfolk - I didn't quite understand what (if anything) you've changed between the fits. We added the warning you are seeing after some issues arose in a few tests when the largest scale in the fit (Rmax) is much larger than the maximum recoverable scale of the data. Since frank damps power in the reconstructed brightness when the SNR of the data is low (or zero for missing short / long baselines), this can lead to a fit with a low (or zero) total flux. However, the code should still run and produce a result when that warning is printed (let me know if that's not the case). Essentially, we added the warning to encourage people to check that the fit (and extrapolation) at the shortest baselines looks reasonable.
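
As a rough, hedged sketch of that check (the ~1/min(uv) estimate of the largest constrainable scale is only a crude rule of thumb, not something frank computes; sol, u, v, vis are assumed to be the fitted solution and raw data arrays):

import numpy as np

rad_to_arcsec = 3600 * 180 / np.pi

#Crude estimate: the largest angular scale the data constrain is ~ 1/min(uv)
u_dep, v_dep, _ = sol.geometry.apply_correction(u, v, vis)
min_uv = np.hypot(u_dep, v_dep).min()   #shortest deprojected baseline [lambda]
max_scale = rad_to_arcsec / min_uv      #~largest recoverable scale [arcsec]

Rmax = 3.0                              #Rmax used in the fit [arcsec]
print("~max recoverable scale: {:.2f} arcsec, Rmax: {:.2f} arcsec".format(max_scale, Rmax))
#If Rmax is much larger than this scale, frank has to damp power on scales
#the data don't constrain, which can bias the total flux low (the warning above)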

@bjnorfolk (Author)

Thanks for the reply @jeffjennings and @rbooth200 .

You're right @jeffjennings, apologies for the obvious error. The visibility errors are identical, it was just the figure size.

But the main issue remains: if you look at the old fit for alpha = 1.01 and ws = 1e-2, the chi2 value is 1.875 and the fit looks "smoother", whereas for the new fit chi2 = 5.109 and the brightness profile doesn't resemble the Gaussian ring I get from the old model. I'll include how I use frank below the images.
Old fit
image
New fit
image

#Setting up model (imports shown for completeness)
import numpy as np
from frank.geometry import FixedGeometry
from frank.radial_fitters import FrankFitter
from frank.utilities import UVDataBinner

bin_width = 30.0
inc = 29.5
PA = 151.0
dRA = 0.0
dDec = 0.0
disk_geometry = FixedGeometry(float(inc), float(PA), dRA=dRA, dDec=dDec)

Rmax = 3
N = 250
alpha = 1.01
ws = 1e-2
FF = FrankFitter(Rmax=Rmax, N=N, geometry=disk_geometry, alpha=alpha, weights_smooth=ws)

#Solving the model
sol = FF.fit(self.u, self.v, self.vis, self.wgt)

#Deprojecting
u_deproj, v_deproj, vis_deproj = sol.geometry.apply_correction(self.u, self.v, self.vis)
baselines = (u_deproj**2 + v_deproj**2)**0.5

#Binning the deprojected visibilities ('weights' is either self.wgt or re-estimated; see below)
model_grid = np.logspace(np.log10(min(baselines.min(), sol.q[0])),
                         np.log10(max(baselines.max(), sol.q[-1])), 10**4)
binned_vis = UVDataBinner(baselines, vis_deproj, weights, bin_width)

#Brightness profile (non-negative solution)
I_nn = sol.solve_non_negative()

#Vis model on the fine grid (for plotting)
vis_model = sol.predict_deprojected(model_grid, I=I_nn).real

#Vis model at the binned baselines (for chi2)
vis_model_realbins = sol.predict_deprojected(binned_vis.uv, I=I_nn).real

#Binned visibilities
real = binned_vis.V.real
real_err = binned_vis.error.real
img = binned_vis.V.imag
img_err = binned_vis.error.imag

#Chi2 (residual squared over the real error, as described above)
chi2 = np.sum((real - vis_model_realbins)**2 / real_err)

#Plotting ...

@rbooth200 (Collaborator)

@bjnorfolk have you changed anything apart from the version of frank used (e.g. the weights)?

@bjnorfolk (Author)

I do re-estimate the weights using the function you sent me previously, but I also did that for the old model. As a sanity check, here are the models if I don't re-weight:
image
But it appears the weights for our ATCA data are statistical.

import numpy as np
from frank.utilities import UVDataBinner

def estimate_baseline_dependent_weight(q, V, bin_width):
    #Bin the visibilities with uniform weights and estimate the variance per bin
    uvBin = UVDataBinner(q, V, np.ones_like(q), bin_width)
    var = 0.5*(uvBin.error.real**2 + uvBin.error.imag**2) * uvBin.bin_counts

    #Assign each data point the inverse variance of its bin
    weights = np.full_like(q, np.nan)
    left, right = uvBin.bin_edges
    for i, [l, r] in enumerate(zip(left, right)):
        idx = (q >= l) & (q < r)
        weights[idx] = 1/var[i]

    assert np.all(~np.isnan(weights)), "Weights needed for all data points"
    return weights

@rbooth200 (Collaborator)

Ok, thanks. Do you still have access to the old version of your code?

@bjnorfolk (Author)

This is where I'm kicking myself - I accidentally updated, and now I'm in this predicament. It was the "error" branch you originally suggested I use when I first tested frank.

@rbooth200 (Collaborator)

I might still have it somewhere. In the meantime, I wonder if the old solutions hadn't fully converged. If you reduce the number of iterations to e.g. 200, do you get something that looks more like your old results?

@bjnorfolk (Author)

Unfortunately not
N=200
image
N=150
image

@jeffjennings (Collaborator) commented Nov 27, 2020

Hmm, Richard will likely have the best insight, but two of the new fits -- alpha=1.01, ws=1e-4 and alpha=1.1, ws=1e-4 -- look to me very similar, perhaps identical, to two of the old fits that use alpha=1.2 and 1.3. That would imply that the noise properties (the weights, the (u, v) points, and/or the visibility amplitudes) of the dataset currently being passed into frank are different from those of the dataset that gave your old results.

And the new fits look surprisingly bad; they seem biased low relative to the data for baselines <~200 k\lambda, and biased high relative to the data between ~300 - 400 k\lambda. I wonder if you're overplotting the old binned visibilities together with new frank fits that were made to a different visibility distribution.

@bjnorfolk are you sure that self.wgt in the first line of the code block below is the same array as weights in the last line?

sol = FF.fit(self.u, self.v, self.vis, self.wgt)
#Deprojecting
u_deproj, v_deproj, vis_deproj = sol.geometry.apply_correction(self.u, self.v, self.vis)
baselines = (u_deproj**2 + v_deproj**2)**.5

model_grid = np.logspace(np.log10(min(baselines.min(), sol.q[0])), np.log10(max(baselines.max(), sol.q[-1])), 10**4)
binned_vis = UVDataBinner(baselines, vis_deproj, weights, bin_width)
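
A quick numerical check of that, as a sketch using the variable names from the snippet above:

import numpy as np

#Sketch: verify the weights passed to FF.fit and to UVDataBinner are identical
same = np.array_equal(np.asarray(self.wgt), np.asarray(weights))
print("weights identical to self.wgt:", same)
if not same:
    print("max |difference|:", np.max(np.abs(np.asarray(self.wgt) - np.asarray(weights))))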

@bjnorfolk (Author)

Thanks for checking @jeffjennings; I should've included my entire code in the previous message. I'm confident that self.wgt is the same as weights: in the code I sent, I either re-estimate the weights or set weights = self.wgt.

#Either re-estimate baseline-dependent weights or use the original weights
if est_weights:
    baselines = (self.u**2 + self.v**2)**0.5
    weights = estimate_baseline_dependent_weight(baselines, self.vis, bin_width)
else:
    weights = self.wgt

@bjnorfolk (Author)

It appears that with low-SNR, sparser-uv-coverage data, Rmax plays a key role in the fit. I've discovered that low values of Rmax produce fits identical to those previously obtained (I must have set a lower Rmax value to begin with). Thanks for all the help!
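
For anyone hitting the same behaviour, a hedged sketch of scanning Rmax with the setup from the code earlier in this thread (disk_geometry, weights and binned_vis as defined there; the Rmax values are only illustrative):

import numpy as np
from frank.radial_fitters import FrankFitter

#Illustrative Rmax scan; chi2 is defined as above (residual squared over the real error)
for Rmax in [0.5, 1.0, 1.5, 2.0, 3.0]:   #placeholder values [arcsec]
    FF = FrankFitter(Rmax=Rmax, N=250, geometry=disk_geometry,
                     alpha=1.01, weights_smooth=1e-2)
    sol = FF.fit(self.u, self.v, self.vis, weights)
    I_nn = sol.solve_non_negative()
    vis_model = sol.predict_deprojected(binned_vis.uv, I=I_nn).real
    chi2 = np.sum((binned_vis.V.real - vis_model)**2 / binned_vis.error.real)
    print("Rmax = {:.1f} arcsec: chi2 = {:.3f}".format(Rmax, chi2))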

@rbooth200 (Collaborator)

Thanks - I think your ATCA data is on the edge of the SNR range that can be reasonably fit with frank. Thus too large an Rmax produces more degrees of freedom in the model than can be constrained by the data.
