R versus python #18
Snap - I've just been looking at the same thing. The issue is that different sets of postcodes get selected - check the displays in the existing documents. There's a very different set in the vicinity of Casey. In python the 10 km radius is measured to the postcode centroid, while the R version appears to measure to the nearest point of the postcode boundary. Either is fine, but let's go with the version that is easier to reproduce on both systems.
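The distinction can be sketched in plain coordinates (a hypothetical square postcode, distances in km, no projected spatial data - the names and numbers here are made up for illustration):

```python
import math

def centroid(poly):
    # centroid of a simple polygon via the shoelace formula
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def dist_to_segment(p, a, b):
    # distance from point p to the line segment a-b
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_boundary(p, poly):
    # nearest-point distance: minimum distance to any boundary segment
    n = len(poly)
    return min(dist_to_segment(p, poly[i], poly[(i + 1) % n]) for i in range(n))

# hypothetical square postcode lying 8-12 km east of a site at the origin
site = (0.0, 0.0)
postcode = [(8, -2), (12, -2), (12, 2), (8, 2)]

d_centroid = math.hypot(*centroid(postcode))   # 10.0 km under the centroid rule
d_nearest = dist_to_boundary(site, postcode)   # 8.0 km under the nearest-point rule
```

For this postcode a strict "within 10 km" rule gives different answers under the two criteria (8 km versus 10 km), which is exactly the kind of borderline case that would produce different postcode sets near Casey.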
Oh yeah, they really are quite different. I'll incorporate that difference as well and update the code. EDIT: Bingo! That's the origin of the difference. Code push impending ...
Hmm ... that commit reduced the Casey load to the desired 13% or so, but it also reduced the Dandenong load, so that Kingston then sits at around 60%. That means there are still other differences at play here.
The postcodes in the two sets weren't exactly matched, so the above commit fixed that. The results still don't agree too strongly, something like
I suppose that's not too bad, but it would still be nice to have them somewhat closer ...
So the python code of @gboeing first estimates the stroke incidence per postcode from the basic demographic data, which the R code does not do. The latest commit modifies the sampling scheme so that the numbers of cases are scaled to the estimated incidence rates per postcode, resulting in ...
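That sampling scheme can be sketched roughly as follows, where the postcode names, populations, and incidence rates are all hypothetical placeholders rather than values from either codebase:

```python
import random
from collections import Counter

random.seed(42)

# hypothetical per-postcode inputs (names and rates made up for illustration)
postcodes = ["3000", "3121", "3175"]
population = [10_000, 25_000, 40_000]
incidence_rate = [0.0012, 0.0009, 0.0015]   # strokes per person per year

# expected cases per postcode drive the sampling weights,
# so simulated cases scale with estimated incidence rather than being uniform
weights = [p * r for p, r in zip(population, incidence_rate)]

n_total = 1_000   # total simulated cases
assigned = random.choices(postcodes, weights=weights, k=n_total)
cases_per_postcode = Counter(assigned)
```

The point of the change is just that postcodes with higher estimated incidence contribute proportionally more simulated cases, which shifts the relative loads on the centres.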
That commit gets it to something like:
Those values for R come from:
The error estimates come from the code at the end of the README. Importantly, these error estimates are themselves not very reproducible, indicating that some portion of the R-vs-python differences must be presumed to arise from sampling effects alone. Because the python code was based on 1,000 random points in total, while the R code used that number per postcode (and there are 57 of those), the latter must be presumed the more accurate here. In contrast to the above values, a more realistic estimate from the R code is likely to come from using the weighted street network and sampling addresses within each postcode, which corresponds to these values:
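The effect of sample size on the reproducibility of the error estimates can be sketched with a toy Monte-Carlo run. The true case share of 0.25 is an arbitrary made-up value, and this stands in for the actual routing computation; the point is only that the spread of repeated estimates shrinks roughly with the square root of the number of sampled points:

```python
import random
import statistics

random.seed(0)

TRUE_SHARE = 0.25   # hypothetical true case share for one rehab centre

def estimated_share(n_points):
    # one Monte-Carlo run: fraction of n_points "routed" to that centre
    return sum(random.random() < TRUE_SHARE for _ in range(n_points)) / n_points

def spread(n_points, n_reps=100):
    # spread of the estimate across repeated runs: the "error estimate" itself
    return statistics.stdev(estimated_share(n_points) for _ in range(n_reps))

err_total = spread(1_000)            # python-style: 1,000 points in total
err_per_postcode = spread(57_000)    # R-style: 1,000 points x 57 postcodes
```

With 57 times as many points, the spread drops by roughly a factor of sqrt(57), about 7.5, so the per-postcode scheme gives much more stable estimates.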
Future ref for @gboeing: I'll dig more deeply in
To avoid junking up #12, I've created a new document at
RehabCatchments/rvspy
(in the README, so it can be directly viewed on GitHub) that attempts to examine the reasons for the different results generated by @gboeing in python and myself in R. To repeat the table in #12, these differences were in terms of the final estimated case loads on each rehab centre (here just the relative percentages):
I had hypothesised there that the differences could be due to
The new document here compares both of those differences, and generates quite robust estimates reflecting the previous R values, yet still fails to recreate the python values. Any insights, help, or solutions appreciated!