Dependencies #2

Closed
jacobjcastaneda opened this issue Apr 27, 2021 · 7 comments

Comments

@jacobjcastaneda

Hi!

I love the work you have done here. I have had some trouble with the dependencies, and I would also like to know if you would be open to getting in contact so I could ask more questions. Additionally, is it possible to review your publication, since it does not appear to be published yet?

Best,

Jacob

@ydoherty
Owner

Hi Jacob,

Yeah, for sure. What operating system are you using? It 100% works on Mac (that's what I wrote it on), but I had some issues running it on Windows which I haven't had time to resolve yet.

The best installation method is using the CoastSat.PlanetScope environment.yml file, if you haven't tried that yet.

Regarding the publication, we have had some delays but the plan is to release a pre-print shortly once we submit for peer review. I can let you know/provide a link once that happens!

Regards,
Yarran

@jacobjcastaneda
Author

jacobjcastaneda commented Apr 28, 2021

Hi Yarran,

I'm running it on a Mac, but I was wondering whether the new OS, Big Sur, could be causing issues.

First, I had issues with the version of GDAL you specify; I found others running into similar problems in other projects.

After fixing that, I had issues with scikit-learn; the only version that would work on my machine while still working with your code was 0.22.2.post1.

I was thinking the older packages didn't gel with the new OS, but it's also certainly possible there are underlying issues with previous installations I made.

After getting the code to run, I ran into an issue where I pointed the code at my PlanetScope folder (containing the .tif, .xml, etc. files); it read the files but produced no output. Have you come across this issue before?

I stepped through the code, and it seems that because there are 'udm2' files in my PlanetScope download, your program skips the cloud mask assignment altogether (without also checking whether 'udm' files exist alongside them), which causes my run to fail once it needs the cloud mask. My workaround so far has been to simply remove the 'udm2' files. I'm still learning, so I'm not sure what the difference between 'udm2' and 'udm' is, but I will look into it.
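
In case it's useful, my workaround amounts to something like this (just a sketch, not your repo's code; the folder path is a placeholder, and I'm assuming 'udm2' appears in the mask filenames):

    # Drop udm2 files from the input folder, keeping the original udm masks
    from pathlib import Path

    ps_dir = Path('/path/to/planetscope_download')  # placeholder path
    usable_files = [f for f in ps_dir.iterdir() if 'udm2' not in f.name]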

Next, in the function ref_im_select() it cannot pick either of my images as the reference image. Both images meet the condition np.sum(nan_mask) > 0 or np.sum(im_ms[:, :, 0] == 0) > nan_max, so the loop continues and never plots the corrected RGB below. This doesn't quite make sense to me; hoping you can help with that. Did you by chance use this function, or did you provide a reference image from the get-go?

Interestingly, when I commented out the code (see image below; I added breakpoints in a cluster to make it obvious which lines I'm talking about), it worked, at least as far as I've stepped through that function. Can you help me understand the point of those four lines? I'm not sure I get why they are there.

[Screenshot: the four commented-out filter lines in ref_im_select(), taken 2021-04-28]

Jacob

@ydoherty
Owner

ydoherty commented May 1, 2021

Hi Jacob,

I just updated to Big Sur this week and, annoyingly, have had to re-install Anaconda as a result. I tried installing my planetscope environment.yml file again, which worked; however, when running the code I am getting GDAL errors (I assume similar to what you had). I had run into this issue previously (and scikit-learn issues too), but the most recent environment.yml file had resolved both of those, from my end at least. Did you get past the installation steps with a modified .yml file? I'd be interested to know what you did so I can try to resolve the installation problems.

I have an older environment which I can still run the code from, so I'll try to export that as a workaround in the meantime.

Regarding udm2 masks: udm was the original data quality mask, which Planet then updated to udm2 in 2018. None of the images I downloaded for my thesis/paper (~1000 images from 2016-2020) had a udm2 mask, so I never got around to implementing any udm2 features. I had put some code in to flag udm2 masks as not yet compatible, but it may not be working. Deleting the udm2 files seems like a good workaround, as long as you also have the associated udm files.

I'll try to run you through the code you've commented out. Since I was batch downloading and processing ~1000 images, some of the images I'd downloaded for my AOI only covered a tiny fraction of it (say, 50 pixels in one corner). This was causing problems when selecting a reference image, so I wrote that code snippet to skip them. As a quick overview:

np.sum(im_ms[:,:,0] == 0) counts the number of zero-valued pixels in the image (i.e. regions of the AOI not actually covered by the image), while np.sum(nan_mask) counts the nan values (pixel errors etc.):

    # Skip images that only partially cover the AOI or contain any nan pixels
    if np.sum(im_ms[:,:,0] == 0) > nan_max or np.sum(nan_mask) > 0:
        continue  # move on to the next candidate image

^^^ This basically filters out any image that has a nan value or lots of zero-valued pixels. The 'continue' statement skips to the next loop iteration, so if both your images satisfy the criteria (i.e. evaluate to True), both will be skipped; it makes sense that it wasn't working for you. Since I was dealing with so many images I could be picky and have it only show me images that completely cover the AOI with no nan pixels. I can see this being an issue when you only have a few images, which may contain some nan pixels but still be usable. Commenting out that section is fine, since you'll manually select the image from the popup window.
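
If you'd rather relax the filter than remove it, something along these lines should work inside the same loop (a sketch, untested; max_bad_frac is an arbitrary tolerance I made up, to be tuned per site):

    # Relaxed variant: tolerate a small fraction of zero/nan pixels rather
    # than skipping an image outright
    max_bad_frac = 0.05  # assumed tolerance, tune per site
    n_bad = np.sum(im_ms[:,:,0] == 0) + np.sum(nan_mask)
    if n_bad / im_ms[:,:,0].size > max_bad_frac:
        continue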

Does it work after that? As long as you've selected a reasonably clear reference image with minimal nan pixels, the georectification/merging steps should work.

Hope you don't hit too many bugs, and I'm keen on feedback if it works! I only used two study sites (Duck, USA and Sydney, AUS) in developing the code, and it works for those two, so I'm keen to know how it goes elsewhere.

Cheers,
Yarran

@jacobjcastaneda
Author

jacobjcastaneda commented May 1, 2021

Hi Yarran,

It's great to know the issue isn't isolated to me. I ended up getting it to work, and it did great! I'm using it on San Francisco Bay, and it performed very well even in turbid water, without any retraining.

I stepped through every part of your code and now, surprisingly, know it pretty well, with the exception of the nuances of your classification algorithm.

I resolved the environment issues with GDAL 3.2.1 and scikit-learn 0.22.2.post1. I figured out GDAL 3.2.1 worked because another conda env of mine was using it and worked; I don't know why your environment.yml file installs the old version, but once I forced the update it worked. For scikit-learn, 0.22.2.post1 was the best choice because it's the last version that has not yet removed the package paths used in your pickled classification data, while still working with Python 3.x (also, joblib is no longer stored in sklearn.externals). A workaround would be to provide a secondary version of the pickled file with the imports updated to the new path for the MLP module you use. In the meantime, 0.22.2.post1 still lets you keep your existing syntax, albeit with a deprecation warning.
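
For what it's worth, the secondary-pickle idea should amount to a one-off script like this, run inside an environment with scikit-learn 0.22.2.post1 installed (filenames are placeholders; that version still resolves the legacy module paths on load and should write the current paths on re-save):

    # Re-pickle the classifier so the file stores current sklearn module paths
    import joblib  # standalone package; no longer sklearn.externals.joblib

    clf = joblib.load('classifier_old.pkl')      # placeholder filename
    joblib.dump(clf, 'classifier_updated.pkl')   # placeholder filename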

Attached you will find a .yml file of the conda environment I'm currently using to run your code.

For some reason, the snippet of code you used to filter your images was preventing me from assigning a reference image; most likely, it's because I was pretty careless in defining my AOI. For now I have commented that code out. I performed my run without co-registration; despite reading through your code, the co-registration process was something I could not figure out. Should the reference be a more complete image that the image fragments can be aligned to? Is it a tool for tracking through time? It's unclear to me. Also, does the image you co-register to have to be Planet, or could it be Landsat/Sentinel, for instance, where the swath is huge and guarantees a large, complete image (since PlanetScope produces so many small strips of imagery)? Any info you have on that would be helpful.

Additionally, it would be helpful to note in the README that, buried in the image-merging functions, there is a need to set the path to GDAL specific to your machine. For most Macs this will be /opt/anaconda3/{env name}/bin, but it could vary. Perhaps including a variable for this in the settings dictionary would be helpful in the future.
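
Something like this is what I had in mind (the 'gdal_path' key is just my suggestion, not an existing setting in your code):

    # Suggested (hypothetical) settings entry so the merge functions read the
    # GDAL location from the settings dict instead of a hard-coded string
    settings = {
        # ... existing CoastSat.PlanetScope settings ...
        'gdal_path': '/opt/anaconda3/{env name}/bin',  # adjust per machine
    }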

Lastly, I'm wondering how you went about establishing error bounds for your shoreline detection. Can you refer me to any resources for doing so?

See link for .yml file:

https://stanford.box.com/s/kchxsjsyt793zzim9tczctwgz00fwyup

Note: I set this to expire May 15, 2021.

Best,
Jacob

@ydoherty
Owner

Hi Jacob,

Apologies for the delayed response; I've been flat out recently. Good to hear it's getting some use and seems to be working!

Thanks for the efforts with resolving the environment/dependency issues. I'll implement them in the GitHub repo when I have some spare time, hopefully in the next few weeks.

Regarding the reference image selection: if you're only using a few images (say <10), there is a chance that all the PS images you're using either don't cover the entire AOI or have some nan/cloud pixels (sometimes the Planet cloud filter accidentally flags whitewater or bright roofs as cloud). Were you able to select a reference image with that filter section commented out? If so, co-registration should still work; as long as a clear image with minimal cloud is selected, it will work fine. I can tweak the code to allow selection of images with some nan/cloud pixels so users don't need to comment out code.

The co-registration is all a bit confusing. Essentially, in ref_im_select() you select a raw PS image to be used as the base reference image, which all the other PS images are aligned to using arosics. As you mentioned, raw PS images are relatively small (~25 x 15 km from memory), so they may not entirely cover the AOI. In my code, PS images with a similar timestamp (±1 s) from the same individual Dove satellite are merged to form a single scene for shoreline extraction. I found that sometimes these scenes were slightly offset from one another, hence I put the co-registration step before the merge step. The base image needs to be clear and cover the entire AOI so that all images can be aligned to it (hence the filter I had in place). The AOI for my study sites was ~5 km x 1 km, so I had no trouble finding images covering it entirely, but this may prove difficult for larger study regions without access to a large number of images.
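
Under the hood, the alignment boils down to arosics' global co-registration; a minimal standalone version looks roughly like this (paths are placeholders, and the parameters shown are arosics defaults rather than my exact settings):

    # Minimal global co-registration with arosics
    from arosics import COREG

    CR = COREG('/path/to/reference.tif',         # base image everything aligns to
               '/path/to/target_ps_image.tif',   # PS image to be shifted
               path_out='/path/to/target_coreg.tif')
    CR.calculate_spatial_shifts()  # detect the x/y offset in a matching window
    CR.correct_shifts()            # apply the shift and write the output file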

Regarding the reference image source: based on the arosics paper/readme, it is definitely possible to use a non-PlanetScope image as the reference. It may change the accuracy/reliability of the co-registration, but you can refer to the arosics paper for their assessment of that. Landsat/Sentinel may be too coarse to improve the geolocation accuracy, but a higher-resolution image should work. You'd need to modify the code, but hopefully all that's required is changing the variable storing the reference image filepath so that it points to the new image. You would also need to ensure that a non-PS reference image has the same number of bands (RGB + NIR) in the same order, and that it uses the same reference system. I can't recall if a nan mask is used, but if so you could just make a dummy mask.
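
If a mask is needed, the dummy version would just be an all-False array matching the reference image's dimensions, e.g. (shape and variable name below are illustrative):

    # Dummy 'no bad pixels' nan mask for a non-PS reference image
    import numpy as np

    ref_shape = (3000, 2000)                    # rows, cols of the reference image
    nan_mask = np.zeros(ref_shape, dtype=bool)  # all False = nothing masked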

On the accuracy error bounds: I had access to two large sets of in-situ transect surveys against which I could validate the PS shoreline accuracy. The RMSE I state in the README is based on the difference between the surveyed and PS-image shorelines. I found an RMSE of 3-5 m after correcting for tide based on a generic beach slope (a similar method to CoastSat). All the method/results details are outlined in the paper, which will be available as a pre-print soon. I'll follow up with my supervisor and let you know when it will be available.
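
For reference, the generic-slope tide correction is just a horizontal translation of each shoreline point along an assumed planar beach profile (all numbers below are illustrative; the sign convention assumes chainage increases seaward):

    # Horizontal tide correction along a planar slope (CoastSat-style)
    import numpy as np

    tan_beta = 0.1                        # generic beach slope (site-specific)
    z_tide = 0.6                          # tide level at image capture (m, datum)
    z_ref = 0.0                           # elevation contour to correct to (m)
    x_raw = np.array([45.2, 47.8, 50.1])  # raw cross-shore positions (m)

    x_corrected = x_raw + (z_tide - z_ref) / tan_beta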

Regards,
Yarran

@jacobjcastaneda
Author

Hey Yarran,

I wanted to clarify with you what is meant by "NmB". I was assuming this was shorthand for an NDWI-like index based on the difference between NIR and blue radiance at TOA.

Also, does your code take DN data or TOA? Planet Labs now provides TOA, so I was supplying that before I realized your code computes TOA itself; my TOA input therefore gets multiplied by the conversion twice (once by Planet and once by your code). I assume this still gives a reasonable result, albeit not an entirely correct one.

Last, do you save a .pkl, or provide a means of saving a .tif, of the classified pixels themselves? I see an NmB .tif, which I thought was just an NDWI-style output by pixel.

Thanks !

Jacob

@ydoherty
Owner

Hi Jacob,

NmB is near-infrared minus blue (TOA radiance). I tried out NDWI (normalised NIR-green) and five other band-combination indices, and NmB performed the best during validation studies for both Narrabeen (AUS) and Duck (USA). These are outlined in the paper, which should be available as a pre-print in mid-June.
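
In code, NmB is just a per-pixel difference, something like this (the band order below assumes the standard 4-band blue, green, red, NIR ordering, and the input array is a placeholder):

    # NmB: per-pixel NIR minus blue on the TOA image
    import numpy as np

    im_toa = np.random.rand(200, 200, 4)     # placeholder TOA image (B, G, R, NIR)
    nmb = im_toa[:, :, 3] - im_toa[:, :, 0]  # NIR minus blue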

TOA imagery was not available when I wrote the code, so it's based on raw DN images, which the code then converts to TOA. If Planet now offers TOA, you'll still need to download DN imagery until the code is tweaked. I'll update the readme to clarify this.
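
When I do tweak it, the fix should just be a guard along these lines (hypothetical, not in the repo yet; im_dn and refl_coeff stand for the raw image array and the per-band coefficient read from the scene's metadata .xml):

    # Hypothetical guard to avoid applying the DN -> TOA scaling twice
    already_toa = True  # set according to the Planet product downloaded
    im_out = im_dn if already_toa else im_dn * refl_coeff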

Yep, the classified image should be saved as a .tif file, in the same location as the intermediate nan/cloud/TOA images. The filepath should be something along these lines:
...CoastSat.PlanetScope/outputs/LOCATION_NAME/toa_image_data/merged_data/COREG_SETTING/2016-02-19 16_19_05 0c73 PS2 cropped class.tif
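
If you want to pull the class labels back into Python, any GeoTIFF reader will do, e.g. (rasterio is my choice here, not necessarily what the repo itself uses; the path is a placeholder):

    # Read the saved classification raster back as a 2-D array of class labels
    import rasterio

    with rasterio.open('/path/to/cropped_class.tif') as src:
        class_map = src.read(1)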

Regards,
Yarran
