Processing imagery with dask/xarray: example application for identifying outlier imagery #154
Do you think it would be useful to use the ratio of RMSE to PSNR to filter out images? For example, if the ratio for an image is much larger than the average, that image gets tossed?
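The ratio idea above can be sketched in a few lines of numpy. This is a minimal illustration, not code from the repository; the threshold `factor` and the function name are made up for the example.

```python
import numpy as np

def filter_by_ratio(rmse, psnr, factor=2.0):
    """Flag images whose RMSE/PSNR ratio is much larger than the average.

    `factor` is an illustrative threshold, not from the original code.
    Returns a boolean mask: True = keep, False = toss.
    """
    rmse = np.asarray(rmse, dtype=float)
    psnr = np.asarray(psnr, dtype=float)
    ratio = rmse / psnr
    return ratio <= factor * ratio.mean()

# The last image has an anomalously high RMSE/PSNR ratio and is tossed
keep = filter_by_ratio(rmse=[1.0, 1.2, 0.9, 8.0], psnr=[30, 29, 31, 10])
# → [True, True, True, False]
```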
I think that would be a good idea. I also liked your suggestion: "It might also be possible to adapt this idea for outputs ... you can imagine loading all the 'predseg' images into an array, making the average, and filtering out bad segmentations using a similar approach ... (for v2 maybe)"
Here is my script. I can now work on a combined label-filter and shoreline-detection workflow (from #168).
new_shoreline_detect_workflow.zip

This contains two scripts: one that filters out bad images automatically (it seems to work quite well), and the other is my latest attempt at a shoreline detection algorithm. This approach is working quite well, so we can discuss what steps may be required for its implementation. In the end, the part based on the distance transform was causing more problems than it was solving, and I ended up ditching the boundary tracing algorithm too because of bad results when the coastline is not linear.

I'll be out for most of the next week, but wanted to make sure I updated here first. Some outputs are pasted below.
Thanks for providing your code and testing these algorithms on a variety of sites.
This issue is under development. In coastseg I've already implemented the functionality to sort the model outputs into "good" and "bad" directories using the `filter_model_outputs` function. This logic has been incorporated into the main branch:

```python
for satname in satellites:
    # get all the model outputs that have the satellite in the filename
    files = glob(f"{session_path}{os.sep}*{satname}*.npz")
    if len(files) != 0:
        filter_model_outputs(satname, files, good_folder, bad_folder)

# for each satellite, get the list of files that were sorted as 'good'
filtered_files = get_filtered_files_dict(good_folder, "npz", sitename)
# keep only the metadata for the files that were sorted as 'good'
metadata = edit_metadata(metadata, filtered_files)
```
Problem: automatically identify bad satellite imagery
Potential solution?
Imports and Dask cluster:
Read in a folder of files from a particular sensor. We expect every image to be the same size; any image that is not is discarded.
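A minimal sketch of the size-check step, using `.npy` files as a stand-in for the real imagery (the actual workflow reads sensor GeoTIFFs); the synthetic folder and filenames are purely illustrative:

```python
import os
import tempfile
from glob import glob

import numpy as np

# Synthetic stand-in for a folder of sensor imagery
tmp = tempfile.mkdtemp()
np.save(os.path.join(tmp, "img0.npy"), np.zeros((4, 4)))
np.save(os.path.join(tmp, "img1.npy"), np.ones((4, 4)))
np.save(os.path.join(tmp, "img2.npy"), np.ones((3, 3)))  # wrong size -> discard

arrays = [np.load(f) for f in sorted(glob(os.path.join(tmp, "*.npy")))]

# Use the most common shape as the expected size; discard everything else
shapes = [a.shape for a in arrays]
expected = max(set(shapes), key=shapes.count)
kept = [a for a in arrays if a.shape == expected]
```

Only the two 4x4 images survive; the 3x3 image is dropped before stacking.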
Make xarray
Make a reference time-averaged image, ignoring NaNs
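The core of this step can be shown in plain numpy (the issue's workflow does the same thing with xarray/dask for larger stacks): average along the time axis while skipping NaNs, so cloud-masked pixels don't poison the reference. The tiny stack below is made-up example data.

```python
import numpy as np

# Stack of 3 tiny "images" (time, y, x) with NaNs for masked pixels
stack = np.array([
    [[1.0, np.nan], [3.0, 4.0]],
    [[3.0, 2.0],    [np.nan, 4.0]],
    [[5.0, 6.0],    [3.0, np.nan]],
])

# Time-averaged reference image; NaNs are ignored per pixel
reference = np.nanmean(stack, axis=0)
# → [[3., 4.], [3., 4.]]
```

With xarray the equivalent is a `mean` over the time dimension with `skipna=True`, which is the default for float data.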
Cycle through each image and compute RMSE and PSNR metrics. Good images should have low RMSE and high PSNR
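A hedged sketch of the two metrics against the reference image; the helper names and the synthetic `good`/`bad` images are illustrative, not taken from the original script:

```python
import numpy as np

def rmse(img, ref):
    """Root-mean-square error vs. the reference, ignoring NaNs."""
    return float(np.sqrt(np.nanmean((img - ref) ** 2)))

def psnr(img, ref, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = rmse(img, ref)
    return float(20 * np.log10(data_range / err)) if err > 0 else float("inf")

ref = np.full((4, 4), 100.0)
good = ref + 1.0   # small deviation from the time-averaged reference
bad = ref + 50.0   # large deviation, e.g. heavy cloud cover

# A good image has low RMSE and high PSNR; a bad image, the opposite
```

For instance, `rmse(good, ref)` is 1.0 while `rmse(bad, ref)` is 50.0, and `psnr(good, ref)` is correspondingly much higher than `psnr(bad, ref)`.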
Make an animated gif of all the imagery and values
Example (truncated) inputs:
ms.zip
Example output: