
Test suite #12

Closed
marcosci opened this issue Jun 7, 2018 · 42 comments
marcosci commented Jun 7, 2018

@Nowosad suggested testing the results of our functions against the results from FRAGSTATS.

To do so, @mhesselbarth ran each example landscape in the package through FRAGSTATS:
fragstat_landscapemetrics.zip

It would make sense to bring the FRAGSTATS results into a tidy format and then come up with a streamlined approach to test everything.
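A minimal sketch of that tidying step, assuming the exported FRAGSTATS results can be read as a delimited text file with one column per metric (the file name and the TYPE column below are placeholders, not the actual contents of the zip):

```r
# Hedged sketch: read one FRAGSTATS export and reshape it from wide
# (one column per metric) to a long, tidy format for joining/testing.
# "fragstats_class_level.csv" and the TYPE column are assumptions.
library(dplyr)
library(tidyr)

fragstats_class <- read.csv("fragstats_class_level.csv", stringsAsFactors = FALSE)

fragstats_tidy <- fragstats_class %>%
  gather(key = "metric", value = "value_fs", -TYPE) %>%  # wide metrics -> long
  rename(class = TYPE) %>%
  mutate(metric = tolower(metric))
```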

@Nowosad

Nowosad commented Jun 9, 2018

I'll work more on it once the functions are in a more stable state, but you can see a sneak preview for the landscape and class level below:

Landscape level

abbreviation value_fs value
TA 0.09 900.0000000
AREA_MN 0.0033 33.3333333
AREA_CV 268.3046 273.4155755
AREA_SD 0.0089 91.1385252
LPI 50.7778 50.7777778
TE 364 364.0000000
NP 27 27.0000000
PR 3 3.0000000
PRD 3333.3333 0.3333333
RPR NA NA
ENN_MN 3.1816 3.2235465
SHEI 0.9194 0.9193879
SHDI 1.0101 1.0100508

Class level

abbreviation class value_fs value
TA NA NA NA
AREA_MN 1 0.002 19.888889
AREA_MN 3 0.012 119.750000
AREA_MN 2 0.0017 17.285714
AREA_CV 1 228.5906 242.456974
AREA_CV 3 162.6136 187.770006
AREA_CV 2 159.0109 165.013438
AREA_SD 1 0.0045 48.221998
AREA_SD 3 0.0195 224.854583
AREA_SD 2 0.0027 28.523751
PLAND 1 19.8889 19.888889
PLAND 1 19.8889 90.000000
PLAND 3 53.2222 53.222222
PLAND 3 53.2222 160.500000
PLAND 2 26.8889 26.888889
PLAND 2 26.8889 113.500000
LPI 1 16.4444 16.444444
LPI 3 50.7778 50.777778
LPI 2 10.8889 10.888889
NP 1 9 9.000000
NP 3 4 4.000000
NP 2 14 14.000000
ENN_MN 1 3.6829 3.682856
ENN_MN 3 2 2.000000
ENN_MN 2 3.1969 3.277861

@mhesselbarth

Nice...thanks for preparing this! 👍

One reason for some of the differences is the different units (everything area-related). FRAGSTATS uses hectares, whereas we are currently using something like 'map units'. Assuming that a resolution of xy(1, 1) means a cell is 1 x 1 meter, this translates to square meters at the moment.

@jhollist

I've been fairly absent in this so far... but a quick thought on the units. I like how it is currently coded to use map units. Assuming hectares is, I think, confusing. Presumably users will have put some thought into the projection (a boy can dream!), and thus changing to hectares under the hood may not be desired.

Also, great work on all this! I am impressed and humbled by your progress.

@Nowosad

Nowosad commented Jun 14, 2018

I agree with Jeff. We should be using map units. I can just apply a hectare <-> meter correction for testing purposes.
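A minimal sketch of that correction in a test, assuming the package's lsm_l_ta() and the bundled landscape example raster with 1 x 1 m cells, so map units are square meters (1 ha = 10,000 m²); the reference value is the TA row from the landscape-level table above:

```r
library(landscapemetrics)
library(testthat)

fragstats_ta_ha <- 0.09                 # TA reported by FRAGSTATS (hectares)
lsm_ta <- lsm_l_ta(landscape)$value     # TA in map units (square meters here)

test_that("total area matches FRAGSTATS after hectare conversion", {
  expect_equal(lsm_ta / 10000, fragstats_ta_ha, tolerance = 0.001)
})
```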

@marcosci

Thanks for your input, both!

That's basically the same discussion Max and I had, but (I can't say exactly why) we ended up thinking it would be more appropriate to return the same results as FRAGSTATS, to be able to reproduce past results and to make comparisons between different maps more intuitive. Any thoughts on that?

@mhesselbarth

The only requirement for the user currently is that the cell size (resolution) of the raster must be in meters. However, it doesn't matter if a cell is 1 x 1 meters or 30 x 30 meters; that is automatically taken into account by all functions. This gives us the advantage that the results are quite natural to interpret (e.g., the edge length in meters is a tangible result) and probably easy to compare to other studies (e.g., those using FRAGSTATS). I believe in this context requiring the cell size of the input raster to be in meters is a rather fair assumption.
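A hedged illustration of that point (not the package internals; the patch size is a placeholder): the cell size enters area calculations through the raster resolution, so 1 x 1 m and 30 x 30 m cells are handled the same way.

```r
library(landscapemetrics)  # provides the example raster `landscape`
library(raster)

n_cells_in_patch <- 148                       # placeholder patch size in cells
cell_area_sqm    <- prod(res(landscape))      # e.g. 1 * 1 or 30 * 30 square meters
patch_area_sqm   <- n_cells_in_patch * cell_area_sqm
patch_area_ha    <- patch_area_sqm / 10000    # FRAGSTATS-style hectares
```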

@Nowosad

Nowosad commented Jun 16, 2018

Update of the tests. Most of the values are identical or very similar. However, there are several that do not fit - for example, AREA_CV, AREA_SD, ENN_MN:

Landscape level

abbreviation value_fs value
TA 0.0900 0.0900000
AREA_MN 0.0033 0.0033333
AREA_CV 268.3046 273.4155755
AREA_SD 0.0089 0.0091139
LPI 50.7778 50.7777778
TE 364.0000 364.0000000
NP 27.0000 27.0000000
PR 3.0000 3.0000000
PRD 3333.3333 33.3333333
RPR NA NA
ENN_MN 3.1816 3.2235465
SHEI 0.9194 0.9193879
SHDI 1.0101 1.0100508

Class level

abbreviation class value_fs value
TA NA NA NA
AREA_MN 1 0.0020 0.0019889
AREA_MN 3 0.0120 0.0119750
AREA_MN 2 0.0017 0.0017286
AREA_CV 1 228.5906 242.4569742
AREA_CV 3 162.6136 187.7700064
AREA_CV 2 159.0109 165.0134380
AREA_SD 1 0.0045 0.0048222
AREA_SD 3 0.0195 0.0224855
AREA_SD 2 0.0027 0.0028524
PLAND 1 19.8889 19.8888889
PLAND 3 53.2222 53.2222222
PLAND 2 26.8889 26.8888889
LPI 1 16.4444 16.4444444
LPI 3 50.7778 50.7777778
LPI 2 10.8889 10.8888889
NP 1 9.0000 9.0000000
NP 3 4.0000 4.0000000
NP 2 14.0000 14.0000000
ENN_MN 1 3.6829 3.6828562
ENN_MN 3 2.0000 2.0000000
ENN_MN 2 3.1969 3.2778606

@Nowosad

Nowosad commented Jun 17, 2018

I've found one issue when comparing the results on the patch level - the numbering of patches differs between FRAGSTATS and landscapemetrics:

FRAGSTATS

metric class id value_fs
area 1 1 0.0001
area 1 5 0.0148
area 1 8 0.0005
area 1 16 0.0014
area 1 19 0.0001
area 1 22 0.0005
area 1 23 0.0001
area 1 25 0.0001
area 1 26 0.0003
area 3 2 0.0009
area 3 4 0.0457
area 3 21 0.0010
area 3 27 0.0003
area 2 3 0.0035
area 2 6 0.0057
area 2 7 0.0024
area 2 9 0.0002
area 2 10 0.0001
area 2 11 0.0002
area 2 12 0.0003
area 2 13 0.0003
area 2 14 0.0004
area 2 15 0.0002
area 2 17 0.0098
area 2 18 0.0003
area 2 20 0.0001
area 2 24 0.0007

landscapemetrics

layer level class id metric value
1 patch 1 1 area 0.0001
1 patch 1 2 area 0.0005
1 patch 1 3 area 0.0148
1 patch 1 4 area 0.0001
1 patch 1 5 area 0.0001
1 patch 1 6 area 0.0014
1 patch 1 7 area 0.0003
1 patch 1 8 area 0.0005
1 patch 1 9 area 0.0001
1 patch 2 10 area 0.0035
1 patch 2 11 area 0.0002
1 patch 2 12 area 0.0002
1 patch 2 13 area 0.0098
1 patch 2 14 area 0.0002
1 patch 2 15 area 0.0001
1 patch 2 16 area 0.0024
1 patch 2 17 area 0.0001
1 patch 2 18 area 0.0003
1 patch 2 19 area 0.0003
1 patch 2 20 area 0.0057
1 patch 2 21 area 0.0004
1 patch 2 22 area 0.0007
1 patch 2 23 area 0.0003
1 patch 3 24 area 0.0457
1 patch 3 25 area 0.0009
1 patch 3 26 area 0.0003
1 patch 3 27 area 0.0010

@marcosci

Could we just use %in%?
We start numbering the patches from the upper-left corner; I have no clue how FRAGSTATS does it.
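A hedged sketch of that idea (fragstats_patch and lsm_patch are placeholders for the two tables shown above): compare the set of patch areas per class, ignoring the differing patch IDs.

```r
library(testthat)

# fragstats_patch / lsm_patch are placeholders for the two patch-level tables above
test_that("patch areas match FRAGSTATS regardless of patch numbering", {
  for (cl in unique(fragstats_patch$class)) {
    fs  <- sort(fragstats_patch$value_fs[fragstats_patch$class == cl])
    lsm <- sort(round(lsm_patch$value[lsm_patch$class == cl], 4))
    expect_equal(fs, lsm)
  }
})

# Or, following the %in% suggestion (simpler, but ignores duplicate counts):
all(round(lsm_patch$value, 4) %in% fragstats_patch$value_fs)
```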

@Nowosad

Nowosad commented Jun 18, 2018

Sure, we can do that.

@marcosci

Do you have an educated opinion regarding the tolerance for your tests?
For some metrics, I think it is impossible to achieve the exact same result (as we don't know how it was implemented in FRAGSTATS) - so loosening it a bit more is, in my opinion, totally fine.

Btw, is codecov capable of detecting the nested functions in lsm_calculate, or are we only covering a single function then?

@Nowosad

Nowosad commented Jun 18, 2018

Good questions @marcosci, I do not have good answers though...

  1. Based on the examples above, FRAGSTATS gives values with four digits after the decimal. Therefore, the tolerance should be one level higher (0.001). Am I right?
  2. I do not know, but assume that codecov is not capable of this kind of detection. What do you think? Is it better to have a test for each function separately?

@marcosci

marcosci commented Jun 18, 2018

Yeah, I am lacking those, too.

  1. If you and Jeff are voting for map units rather than hectares, I would just increase it. The result is different anyway for some metrics. I would argue for something like a 10% difference; if we are beyond that, we completely missed it and there is a high chance that we did something wrong. Or FRAGSTATS did, and we just can't check 😆

  2. Hm, the laziness in me is leaning towards just having three scale-level tests. However, having a single test file for each metric is probably worth the effort. Maybe we can also turn that into an .Rmd to show on the webpage where landscapemetrics and FRAGSTATS match and where they don't?

    • I pushed some files that import the FRAGSTATS results as data, and scripts to prepare them. There is now also a test file for lsm_p_area that makes use of that. Is that a way to tackle it?

@Nowosad

Nowosad commented Jun 19, 2018

  1. A decision about map units is not very important for testing purposes. I can always rescale the compared values just before testing. The precision of the actual FRAGSTATS calculations is more important (Bugs in FRAGSTATS #13 (comment)) - some of the metrics depend on each other, and therefore some outputs can be visibly different.
  2. I think I will create a single test set for each metric. I also think that we should take a look at every value (at least for the landscape dataset) and make an educated decision on whether it is correct. (BTW, I like the webpage idea)

@marcosci

Thanks for your input 👍

I just uploaded some tests for all patch-level metrics we have at the moment (and where we are to some degree sure they are correct...). For every metric where we have a complete match with FRAGSTATS, I included a test for that. In every other case, it just tests for the consistency of the resulting tibble.
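A hedged sketch of what such a consistency test could look like (not necessarily the committed code), using the column layout shown in the patch-level output above:

```r
library(landscapemetrics)
library(testthat)

result <- lsm_p_area(landscape)

test_that("lsm_p_area returns a consistently structured tibble", {
  expect_is(result, "tbl_df")
  expect_equal(names(result),
               c("layer", "level", "class", "id", "metric", "value"))
  expect_true(all(result$level == "patch"))
  expect_true(all(result$metric == "area"))
})
```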

I will start a new vignette soon where I will collect everything we have on why our metrics differ.
Also maybe showing 1 or 2 plots of the cases where we match and where we don't.

@marcosci

@Nowosad is there a reason why you use the hardcoded values instead of the tibbles?
Just asking; if you did it that way, there probably is.

@Nowosad

Nowosad commented Jun 20, 2018

@marcosci probably not;) I will improve that later this week.

@marcosci

😅 Fair enough. I would vote for my solution (if we find out we used the wrong FRAGSTATS values, that way we don't have to change hardcoded values in 200 files...).

I pushed tests now for every function on every level. If you feel a good test is missing, just point it out and I will help include it.

@Nowosad

Nowosad commented Jun 24, 2018

@marcosci I've now improved and cleaned many of the tests, etc. I also fixed the Travis settings, allowing for a longer testing time.

@marcosci

Nice job @Nowosad, thanks a lot! Seems like I got a bit dizzy while copying around ...

Right now, we are a bit off with some metrics, while others are equal. For the ones where we are in the range of FRAGSTATS, I think we have good arguments for why that is the case. The ones way off definitely need some work.

You implemented the comparison now for every metric - does this mean that your goal is to have an exact replica of the FRAGSTATS results? Max and I were also discussing this, as it defines how to progress from now on.

@Nowosad

Nowosad commented Jun 25, 2018

My thinking now - we should investigate the metrics with results different from FRAGSTATS one by one:

  • If the difference is due to a bug in FRAGSTATS - document it and comment the tests out
  • If the difference is due to the area definition - multiply the results in the test (we can also add an argument to give the output in the same units as in FRAGSTATS)
  • If the difference is due to calculation precision - document it and adjust the tests
  • What else??

@marcosci

I started that now and completed it for the patch level.

I commented out the tests where we don't get the same result but think we implemented it correctly.
Precision and area were corrected in the tests.
A list of the metrics that are different between the two tools can now be found here:

https://marcosci.github.io/landscapemetrics/articles/articles/comparing_fragstats_landscapemetrics.html#differences-on-patch-level

... and I will do the same for the class and landscape levels.

@Nowosad

Nowosad commented Jun 30, 2018

Great work Marco.

I compiled a list of all mismatches (there are still two issues on the patch level):

── 1. Failure: lsm_c_area_cv results are equal to fragstats (@test-lsm-c-area-cv.R#7)  ───
── 2. Failure: lsm_c_area_sd results are equal to fragstats (@test-lsm-c-area-sd.R#7)  ───
── 3. Failure: lsm_c_cai_cv results are equal to fragstats (@test-lsm-c-cai-cv.R#7)  ─────
── 4. Failure: lsm_c_cai_mn results are equal to fragstats (@test-lsm-c-cai-mn.R#7)  ─────
── 5. Failure: lsm_c_cai_sd results are equal to fragstats (@test-lsm-c-cai-sd.R#7)  ─────
── 6. Failure: lsm_c_circle_cv results are equal to fragstats (@test-lsm-c-circle-cv.R#7) 
── 7. Failure: lsm_c_circle_mn results are equal to fragstats (@test-lsm-c-circle-mn.R#7) 
── 8. Failure: lsm_c_circle_sd results are equal to fragstats (@test-lsm-c-circle-sd.R#7) 
── 9. Failure: lsm_c_cohesion results are equal to fragstats (@test-lsm-c-cohesion.R#7)  ─
── 10. Failure: lsm_c_core_cv results are equal to fragstats (@test-lsm-c-core-cv.R#7)  ──
── 11. Failure: lsm_c_core_mn results are equal to fragstats (@test-lsm-c-core-mn.R#7)  ──
── 12. Failure: lsm_c_core_sd results are equal to fragstats (@test-lsm-c-core-sd.R#7)  ──
── 13. Failure: lsm_c_cpland results are equal to fragstats (@test-lsm-c-cpland.R#7)  ────
── 14. Failure: lsm_c_dcad results are equal to fragstats (@test-lsm-c-dcad.R#7)  ────────
── 15. Failure: lsm_c_dcore_cv results are equal to fragstats (@test-lsm-c-dcore_cv.R#7)  
── 16. Failure: lsm_c_dcore_mn results are equal to fragstats (@test-lsm-c-dcore_mn.R#7)  
── 17. Failure: lsm_c_dcore_sd results are equal to fragstats (@test-lsm-c-dcore_sd.R#7)  
── 18. Failure: lsm_c_enn_cv results are equal to fragstats (@test-lsm-c-enn-cv.R#7)  ────
── 19. Failure: lsm_c_enn_mn results are equal to fragstats (@test-lsm-c-enn-mn.R#7)  ────
── 20. Failure: lsm_c_enn_sd results are equal to fragstats (@test-lsm-c-enn-sd.R#7)  ────
── 21. Failure: lsm_c_frac_cv results are equal to fragstats (@test-lsm-c-frac-cv.R#7)  ──
── 22. Failure: lsm_c_frac_sd results are equal to fragstats (@test-lsm-c-frac-sd.R#7)  ──
── 23. Failure: lsm_c_gyrate_cv results are equal to fragstats (@test-lsm-c-gyrate-cv.R#7)
── 24. Failure: lsm_c_gyrate_mn results are equal to fragstats (@test-lsm-c-gyrate-mn.R#7)
── 25. Failure: lsm_c_gyrate_sd results are equal to fragstats (@test-lsm-c-gyrate-sd.R#7)
── 26. Failure: lsm_c_lsi results are equal to fragstats (@test-lsm-c-lsi.R#7)  ──────────
── 27. Failure: lsm_c_ndca results are equal to fragstats (@test-lsm-c-ndca.R#7)  ────────
── 28. Failure: lsm_c_pafrac results are equal to fragstats (@test-lsm-c-pafrac.R#7)  ────
── 29. Failure: lsm_c_para_cv results are equal to fragstats (@test-lsm-c-para-cv.R#7)  ──
── 30. Failure: lsm_c_para_mn results are equal to fragstats (@test-lsm-c-para-mn.R#7)  ──
── 31. Failure: lsm_c_para_sd results are equal to fragstats (@test-lsm-c-para-sd.R#7)  ──
── 32. Failure: lsm_c_pladj results are equal to fragstats (@test-lsm-c-pladj.R#7)  ──────
── 33. Failure: lsm_c_shape_cv results are equal to fragstats (@test-lsm-c-shape-cv.R#7)  
── 34. Failure: lsm_c_shape_mn results are equal to fragstats (@test-lsm-c-shape-mn.R#7)  
── 35. Failure: lsm_c_shape_sd results are equal to fragstats (@test-lsm-c-shape-sd.R#7)  
── 36. Failure: lsm_c_tca results are equal to fragstats (@test-lsm-c-tca.R#7)  ──────────
── 37. Failure: lsm_l_area_cv results are equal to fragstats (@test-lsm-l-area-cv.R#7)  ──
── 38. Failure: lsm_l_area_sd results are equal to fragstats (@test-lsm-l-area-sd.R#7)  ──
── 39. Failure: lsm_l_cai_cv results are equal to fragstats (@test-lsm-l-cai-cv.R#7)  ────
── 40. Failure: lsm_l_cai_mn results are equal to fragstats (@test-lsm-l-cai-mn.R#7)  ────
── 41. Failure: lsm_l_cai_sd results are equal to fragstats (@test-lsm-l-cai-sd.R#7)  ────
── 42. Failure: lsm_l_circle_cv results are equal to fragstats (@test-lsm-l-circle-cv.R#7)
── 43. Failure: lsm_l_circle_mn results are equal to fragstats (@test-lsm-l-circle-mn.R#7)
── 44. Failure: lsm_l_circle_sd results are equal to fragstats (@test-lsm-l-circle-sd.R#7)
── 45. Failure: lsm_l_core_cv results are equal to fragstats (@test-lsm-l-core-cv.R#7)  ──
── 46. Failure: lsm_l_core_mn results are equal to fragstats (@test-lsm-l-core-mn.R#7)  ──
── 47. Failure: lsm_l_core_sd results are equal to fragstats (@test-lsm-l-core-sd.R#7)  ──
── 48. Failure: lsm_l_dcad results are equal to fragstats (@test-lsm-l-dcad.R#7)  ────────
── 49. Failure: lsm_l_dcore_cv results are equal to fragstats (@test-lsm-l-dcore-cv.R#7)  
── 50. Failure: lsm_l_dcore_mn results are equal to fragstats (@test-lsm-l-dcore-mn.R#7)  
── 51. Failure: lsm_l_dcore_sd results are equal to fragstats (@test-lsm-l-dcore-sd.R#7)  
── 52. Failure: lsm_l_enn_cv results are equal to fragstats (@test-lsm-l-enn-cv.R#7)  ────
── 53. Failure: lsm_l_enn_mn results are equal to fragstats (@test-lsm-l-enn-mn.R#7)  ────
── 54. Failure: lsm_l_enn_sd results are equal to fragstats (@test-lsm-l-enn-sd.R#7)  ────
── 55. Failure: lsm_l_frac_cv results are equal to fragstats (@test-lsm-l-frac-cv.R#7)  ──
── 56. Failure: lsm_l_frac_sd results are equal to fragstats (@test-lsm-l-frac-sd.R#7)  ──
── 57. Failure: lsm_l_gyrate_cv results are equal to fragstats (@test-lsm-l-gyrate-cv.R#7)
── 58. Failure: lsm_l_gyrate_mn results are equal to fragstats (@test-lsm-l-gyrate-mn.R#7)
── 59. Failure: lsm_l_gyrate_sd results are equal to fragstats (@test-lsm-l-gyrate-sd.R#7)
── 60. Failure: lsm_l_lsi results are equal to fragstats (@test-lsm-l-lsi.R#7)  ──────────
── 61. Failure: lsm_l_ndca results are equal to fragstats (@test-lsm-l-ndca.R#7)  ────────
── 62. Failure: lsm_l_para_cv results are equal to fragstats (@test-lsm-l-para-cv.R#7)  ──
── 63. Failure: lsm_l_para_mn results are equal to fragstats (@test-lsm-l-para-mn.R#7)  ──
── 64. Failure: lsm_l_para_sd results are equal to fragstats (@test-lsm-l-para-sd.R#7)  ──
── 65. Failure: lsm_l_shape_cv results are equal to fragstats (@test-lsm-l-shape-cv.R#7)  
── 66. Failure: lsm_l_shape_mn results are equal to fragstats (@test-lsm-l-shape-mn.R#7)  
── 67. Failure: lsm_l_shape_sd results are equal to fragstats (@test-lsm-l-shape-sd.R#7)  
── 68. Failure: lsm_l_tca results are equal to fragstats (@test-lsm-l-tca.R#7)  ──────────
── 69. Failure: lsm_p_core results are equal to fragstats (@test-lsm-p-core.R#7)  ────────
── 70. Failure: lsm_p_shape results are equal to fragstats (@test-lsm-p-shape.R#7)  ──────

@Nowosad

Nowosad commented Jul 1, 2018

Important note: every time you comment out a FRAGSTATS <-> landscapemetrics test, you should add a hardcoded test instead. This could be very important in the future when working on performance fixes.
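For example, a hardcoded replacement test could look like this (a sketch, not the committed test; the SHDI value is taken from the landscape-level table earlier in this thread):

```r
library(landscapemetrics)
library(testthat)

test_that("lsm_l_shdi stays stable across refactoring and performance fixes", {
  # hardcoded reference value from the comparison table above
  expect_equal(lsm_l_shdi(landscape)$value, 1.0100508, tolerance = 0.0001)
})
```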

@marcosci

marcosci commented Jul 1, 2018

Makes sense :)

Weird - shape is correct when I test it?
Can you show me the values?

@marcosci

marcosci commented Jul 1, 2018

Ah, I vaguely remember the issue we had there ...

The documentation in the PDF and HTML versions of the manual differs - I guess you tested before pulling?
I will remove that section from the vignette now.

@Nowosad

Nowosad commented Jul 1, 2018

Ok, the updated list:

── 1. Failure: lsm_c_area_cv results are equal to fragstats (@test-lsm-c
── 2. Failure: lsm_c_area_sd results are equal to fragstats (@test-lsm-c
── 3. Failure: lsm_c_cai_cv results are equal to fragstats (@test-lsm-c-
── 4. Failure: lsm_c_cai_mn results are equal to fragstats (@test-lsm-c-
── 5. Failure: lsm_c_cai_sd results are equal to fragstats (@test-lsm-c-
── 6. Failure: lsm_c_circle_cv results are equal to fragstats (@test-lsm
── 7. Failure: lsm_c_circle_mn results are equal to fragstats (@test-lsm
── 8. Failure: lsm_c_circle_sd results are equal to fragstats (@test-lsm
── 9. Failure: lsm_c_core_cv results are equal to fragstats (@test-lsm-c
── 10. Failure: lsm_c_core_mn results are equal to fragstats (@test-lsm-
── 11. Failure: lsm_c_core_sd results are equal to fragstats (@test-lsm-
── 12. Failure: lsm_c_cpland results are equal to fragstats (@test-lsm-c
── 13. Failure: lsm_c_dcad results are equal to fragstats (@test-lsm-c-d
── 14. Failure: lsm_c_dcore_cv results are equal to fragstats (@test-lsm
── 15. Failure: lsm_c_dcore_mn results are equal to fragstats (@test-lsm
── 16. Failure: lsm_c_dcore_sd results are equal to fragstats (@test-lsm
── 17. Failure: lsm_c_enn_cv results are equal to fragstats (@test-lsm-c
── 18. Failure: lsm_c_enn_sd results are equal to fragstats (@test-lsm-c
── 19. Failure: lsm_c_frac_cv results are equal to fragstats (@test-lsm-
── 20. Failure: lsm_c_frac_sd results are equal to fragstats (@test-lsm-
── 21. Failure: lsm_c_gyrate_cv results are equal to fragstats (@test-ls
── 22. Failure: lsm_c_gyrate_mn results are equal to fragstats (@test-ls
── 23. Failure: lsm_c_gyrate_sd results are equal to fragstats (@test-ls
── 24. Failure: lsm_c_ndca results are equal to fragstats (@test-lsm-c-n
── 25. Failure: lsm_c_pafrac results are equal to fragstats (@test-lsm-c
── 26. Failure: lsm_c_para_cv results are equal to fragstats (@test-lsm-
── 27. Failure: lsm_c_para_mn results are equal to fragstats (@test-lsm-
── 28. Failure: lsm_c_para_sd results are equal to fragstats (@test-lsm-
── 29. Failure: lsm_c_pladj results are equal to fragstats (@test-lsm-c-
── 30. Failure: lsm_c_shape_cv results are equal to fragstats (@test-lsm
── 31. Failure: lsm_c_shape_sd results are equal to fragstats (@test-lsm
── 32. Failure: lsm_c_tca results are equal to fragstats (@test-lsm-c-tc
── 33. Failure: lsm_l_area_cv results are equal to fragstats (@test-lsm-
── 34. Failure: lsm_l_area_sd results are equal to fragstats (@test-lsm-
── 35. Failure: lsm_l_cai_cv results are equal to fragstats (@test-lsm-l
── 36. Failure: lsm_l_cai_mn results are equal to fragstats (@test-lsm-l
── 37. Failure: lsm_l_cai_sd results are equal to fragstats (@test-lsm-l
── 38. Failure: lsm_l_circle_cv results are equal to fragstats (@test-ls
── 39. Failure: lsm_l_circle_mn results are equal to fragstats (@test-ls
── 40. Failure: lsm_l_circle_sd results are equal to fragstats (@test-ls
── 41. Failure: lsm_l_core_cv results are equal to fragstats (@test-lsm-
── 42. Failure: lsm_l_core_mn results are equal to fragstats (@test-lsm-
── 43. Failure: lsm_l_core_sd results are equal to fragstats (@test-lsm-
── 44. Failure: lsm_l_dcad results are equal to fragstats (@test-lsm-l-d
── 45. Failure: lsm_l_dcore_cv results are equal to fragstats (@test-lsm
── 46. Failure: lsm_l_dcore_mn results are equal to fragstats (@test-lsm
── 47. Failure: lsm_l_dcore_sd results are equal to fragstats (@test-lsm
── 48. Failure: lsm_l_enn_cv results are equal to fragstats (@test-lsm-l
── 49. Failure: lsm_l_enn_sd results are equal to fragstats (@test-lsm-l
── 50. Failure: lsm_l_frac_cv results are equal to fragstats (@test-lsm-
── 51. Failure: lsm_l_frac_sd results are equal to fragstats (@test-lsm-
── 52. Failure: lsm_l_gyrate_cv results are equal to fragstats (@test-ls
── 53. Failure: lsm_l_gyrate_mn results are equal to fragstats (@test-ls
── 54. Failure: lsm_l_gyrate_sd results are equal to fragstats (@test-ls
── 55. Failure: lsm_l_ndca results are equal to fragstats (@test-lsm-l-n
── 56. Failure: lsm_l_para_cv results are equal to fragstats (@test-lsm-
── 57. Failure: lsm_l_para_mn results are equal to fragstats (@test-lsm-
── 58. Failure: lsm_l_para_sd results are equal to fragstats (@test-lsm-
── 59. Failure: lsm_l_shape_cv results are equal to fragstats (@test-lsm
── 60. Failure: lsm_l_shape_sd results are equal to fragstats (@test-lsm
── 61. Failure: lsm_l_tca results are equal to fragstats (@test-lsm-l-tc

@marcosci

marcosci commented Jul 2, 2018

I hardcoded every metric where it made sense, fixing units and wrong calculations of sd/cv along the way. The ones where I couldn't think of something meaningful to test against, and where we already fail to replicate at the patch level, are listed in the differences vignette.

@mhesselbarth

I just had a quick glance, but it seems like there are quite a few differences between the results for the podlasie_ccilc raster. My first guess is that this is due to the CRS.

@mhesselbarth

Okay, I had a closer look. Some metrics are identical, some are not. It could also be a consequence of the rounding issue.

@Nowosad

Nowosad commented Jul 13, 2018

Yes, it could be due to the geographic CRS. This is why I added this dataset to the package - we should assume that users will provide data not only in a projected CRS...

@marcosci

It appears to be the rounding issue, as everything is fine with augusta, for example.

@Nowosad

Nowosad commented Jul 24, 2018

Are you sure it is not a projection issue? Augusta has a projected CRS (in meters) while Podlasie has a geographic one (in degrees). Could that influence the results?

@marcosci

Relatively sure. You can't set a CRS in FRAGSTATS, only the number of cells and the resolution - and we use the same properties, not including the projection at all. It's just that FRAGSTATS rounds the resolution of podlasie to 0.003 instead of using 0.002777778.
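A quick back-of-the-envelope check of how much that rounding matters for area-type metrics:

```r
res_true    <- 0.002777778   # podlasie resolution in degrees
res_rounded <- 0.003         # what FRAGSTATS uses after rounding

(res_rounded / res_true)^2   # ~1.166: cell areas (and thus area metrics) ~17% too large
```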

@Nowosad

Nowosad commented Jul 24, 2018

Ok, makes sense.
The new question is how we should treat data in geographic coordinates. Does this influence the outcomes of the metrics? If so, should we (can we?) add a warning when a geographic CRS is used?

Nowosad reopened this Jul 24, 2018
@marcosci

marcosci commented Jul 24, 2018

Some of the metrics only make sense if you have a metric CRS.
We could check for a non-metric CRS and have these functions (the ones using metric units) print a warning?
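A minimal sketch of such a check, assuming the raster package (the helper name check_crs is made up, not an existing function in the package):

```r
# Warn when the input raster has a geographic (lon/lat) CRS, because metrics
# that report meters/hectares are then not meaningful (results would be in
# degrees / squared degrees instead).
check_crs <- function(landscape) {
  if (raster::isLonLat(landscape)) {
    warning("The input raster has a geographic (lon/lat) CRS: ",
            "results that assume metric units will be in map units instead.",
            call. = FALSE)
  }
  invisible(landscape)
}
```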

@jhollist

Just flying by here.

Big +1 on the geographic coord checks. Without thinking much about this... It feels like not many metrics make much sense without being projected. If that is actually the case (and not just my whim) then requiring projected and not geographic coordinates might make sense...

@marcosci

So, to structure that a bit:

  1. I don't think that FRAGSTATS is doing any conversion - you cannot specify that anywhere, and while playing around with it, it appeared to only read the number of cells and the resolution as meta information about your raster.

  2. The functions in the package should work no matter how your data is projected. However, you lose the information about the resulting units and cannot compare them to metrics from other rasters with a differing CRS. This is because FRAGSTATS returns some values in meters/hectares/10 hectares and we do the same for these metrics. This gives us two options (that I can think of):

  • We check for the CRS and, if it is not one in meters, we give a warning saying that the units of the results are not in meters; they are in whatever unit your raster is projected in. Furthermore, comparing to other studies is difficult, as they probably used another CRS.

  • We reproject internally to a metric CRS. Is that possible @Nowosad, @jhollist? I am not very familiar with that.

@Nowosad

Nowosad commented Jul 24, 2018

The second option is technically possible, but rather not recommended. There is no single metric CRS that fits all cases.
Therefore, I would be in favor of the first option.

@marcosci

OK :-)

We could also return everything in map units, but then we would lose the ability to compare against studies in the literature that used FRAGSTATS - I think this would be less desirable than the warning?

@mhesselbarth

So if we throw a warning and just return the values, wouldn't that somehow be using map units?

marcosci mentioned this issue Aug 1, 2018
@marcosci

marcosci commented Aug 1, 2018

Moved the units discussion to a new thread. Tests are looking good right now (> 1400 tests in total 😲) and I am just going through the ones with low coverage.

marcosci closed this as completed Aug 1, 2018