Partial fix to #181 and a bug fix in photon() (#187)
allegroLeiden merged 4 commits into lime-rt:master from tlunttil:fix_iteration
Conversation
RNG seeds are now saved and reset at the start of each iteration, so that all iterations use the same set of rays. This allows better control of convergence.
Fixed a bug in photon() relating to the in/out split. Also added schedule(dynamic) to the main loop in levelPops().
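The seed-saving scheme can be sketched as follows. This is an illustrative stand-in only: LIME itself uses the GSL RNGs, and `saveSeeds`/`runIteration` and the tiny LCG here are hypothetical names invented for the example. The point is simply that one seed per thread is stored before the first iteration and the per-thread RNG state is reset from it at the start of every iteration, so each iteration traces the same set of rays.

```c
#include <assert.h>

#define NUM_THREADS 4
#define RAYS_PER_THREAD 8

/* Tiny LCG for illustration only; LIME itself uses the GSL RNGs. */
static unsigned int nextRand(unsigned int *state) {
  *state = *state * 1103515245u + 12345u;
  return *state;
}

static unsigned int savedSeeds[NUM_THREADS];

/* Save one seed per thread before the first iteration. */
void saveSeeds(unsigned int masterSeed) {
  unsigned int s = masterSeed;
  for (int t = 0; t < NUM_THREADS; t++)
    savedSeeds[t] = nextRand(&s);
}

/* Reset each thread's RNG from its saved seed at the start of the
   iteration, so every iteration traces the same set of rays. */
void runIteration(unsigned int out[NUM_THREADS][RAYS_PER_THREAD]) {
  for (int t = 0; t < NUM_THREADS; t++) {
    unsigned int state = savedSeeds[t];  /* the reset */
    for (int i = 0; i < RAYS_PER_THREAD; i++)
      out[t][i] = nextRand(&state);
  }
}
```

With this arrangement, two successive calls to `runIteration` produce identical ray sequences, which is what makes iteration-to-iteration population changes reflect convergence rather than sampling noise.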
Note that in this PR resetting the RNG for each iteration is always used. Maybe that should be a user option?
A further comment: the potential race condition problem caused by updating the level populations on-the-fly had already been solved as a side product of @allegroLeiden's PR #152.
That would probably be better - at least for now. For Lime-2.0 we could look forward to boiling away all the excess alternatives (rubs hands together). A couple of other comments:
Yes, I started thinking that making it an optional thing may be better for the time being. So should I put another parameter in configInfo? The other comments:
I guess another parameter is inevitable... I look forward to a grand cull one day.
I just mentioned this PR to Michiel, and he made an interesting suggestion: that the number of photons per grid cell needs at some level to be adaptively altered, and that the criterion for increasing the number of photons for any grid cell (it would unfortunately require that info to be kept for each grid cell) would be some function of the noisiness of the population time series for that cell. He suggested a 'fixed-seed' phase to arrive quickly at a solution, which will probably be quite a good one for the majority of grid cells, followed by a 'random-seed' phase in which the adaptive stuff would take place. I don't propose that you should go away and code this up, but I'd be interested in your views on it, because it sounded a good idea to me.
Yes, I think that is a good idea. So good that I think I once proposed something similar myself ;) (And all this is of course also related to #160.) Making it work (well) requires a complete reworking of convergence monitoring, which I think is going to have to wait a while.
I made the RNG reset thing user-settable and defaulting to off. I also seem to have removed an unused parameter.
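In outline, the new parameter amounts to something like the fragment below. This is a hedged sketch only: LIME's actual configInfo struct has many more fields, and `resetRNG` and `initConfig` are hypothetical names standing in for whatever the PR actually uses.

```c
#include <assert.h>
#include <string.h>

/* Sketch only; the real configInfo in LIME has many more fields. */
typedef struct {
  int resetRNG;  /* re-seed the RNGs each iteration? defaults to off (0) */
} configInfo;

/* Zero-initialize so the reset behaviour is off unless the user asks. */
void initConfig(configInfo *par) {
  memset(par, 0, sizeof(*par));
}
```

The user then opts in by setting the flag, leaving existing models' behaviour unchanged by default.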
Let them eat cake.
Quite possibly, somewhere in the wilderness of github-land. I thought however that your proposal was to adaptively weight the photon directions, which is a bit different.
Hmph, you've thwarted my attempt to usurp priority. My idea (I think we've only discussed this in emails) is basically to improve sampling of J by making the angular sampling better in some of the cells. The simplest option is of course to just use more rays with isotropically distributed directions; that adaptive direction stuff is a further step. However, my idea was to look for noisiness in contributions from rays from different directions (during one iteration) to select where to refine the sampling, instead of using changes in level populations between iterations. It seems to me that looking at different iterations mixes the effect of MC noise and convergence, and it would be hard to know which of these is causing the variations. Although if I understood correctly, Michiel's idea would be to start the scheme only after reaching a semi-converged solution, so variation would indeed be mostly caused by MC noise due to random ray directions (and frequencies, as long as we're still doing that with MC).
Well ok, but then what is 'MC noise' when you think about it? Imagine 2 grid points: point A receives fairly isotropic radiation, but inward radiation to point B is dominated by a bright source nearby in the model. Would one not expect that separate iterations with random photons would produce a fairly quiet time series for A but a jumpy one for B? Ideally we would in that case, for B, put more photon directions towards the bright source. An adaptive way to do that would be fine but a simple brute-force method in which the photon distribution was kept isotropic but the number was increased would also work, more robustly if not as efficiently. However I don't think this is something we could feasibly jam into 1.7. Sorry if I forgot the previous discussion - I'm an old bugger, I forget things.
Yes, I agree, at least after a few iterations. In the early phase I could imagine there being a lot of jumpiness even in an isotropic field due to convergence (i.e., even with perfect sampling J would change significantly from one iteration to another, but with MC noise on top of that it could be hard to tell what is causing what). I completely agree that this isn't something for 1.7. Indeed, I think a more important short-term goal would be to get some sort of convergence monitoring. Currently the iteration is just stopped after a set number of rounds without any checks (and even that may be hard to do in time for 1.7). This sampling stuff is a different, but related, issue.
I'm not sure if I ever discussed this much with you or possibly someone else, or if they were just some private thoughts. Anyway, my comment about priority wasn't entirely serious ;)
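A minimal convergence check of the kind discussed above (stopping on the largest relative change in any level population between successive iterations, instead of after a fixed number of rounds) might look like this. All names are illustrative; nothing like this exists in LIME yet.

```c
#include <assert.h>
#include <math.h>

/* Largest relative change between two successive population vectors.
   The small floor on the denominator guards against near-zero populations. */
double maxRelChange(const double *oldPops, const double *newPops, int n) {
  double worst = 0.0;
  for (int i = 0; i < n; i++) {
    double denom = fabs(newPops[i]) > 1e-30 ? fabs(newPops[i]) : 1e-30;
    double rel = fabs(newPops[i] - oldPops[i]) / denom;
    if (rel > worst) worst = rel;
  }
  return worst;
}
```

The iteration loop would then run until `maxRelChange` falls below a tolerance or a maximum iteration count is hit, rather than always running the full fixed number of rounds.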
And of course there's still that strange 'scale the number of rays according to Delaunay edges' thing going on (issue #160). I think I'll get to that next.
I am working on changing the choice of next edge from nearest-to-photon-direction to nearest-to-photon-track at the mo. PR will come shortly.

This PR saves the random seeds and resets the RNGs so that each iteration will use the same set of rays. This should allow better monitoring of convergence in a yet-to-be-written PR. This PR also includes a bug fix/improvement to photon(), where the in/out split was not used quite correctly.