This repository has been archived by the owner on Sep 29, 2021. It is now read-only.

No module named 's5p_tools' #3

Closed
LillaBW2327 opened this issue Apr 19, 2021 · 12 comments

@LillaBW2327

Hello, I was wondering if I could have some guidance on an issue I have encountered when trying to use the s5p-request.py script. When I run the following command I get the error: "ModuleNotFoundError: No module named 's5p_tools'"

I have not altered the s5p-request.py script in any way and have followed the recommended setup instructions for the environment in conda:

conda create --override-channels -c conda-forge -c stcorp --file requirements.txt --name SP5

Any advice would be greatly appreciated. Thank you.

Command and error:

python s5p-request.py L2__O3____ --date NOW-1MONTH --aoi map.geojson
Traceback (most recent call last):
File "s5p-request.py", line 15, in
import s5p_tools
ModuleNotFoundError: No module named 's5p_tools'

@HichemOmr

Hi there,

Try with this command line: python s5p-request.py L2__O3____ --date yyyyMMdd1 yyyyMMdd2 --aoi map.geojson

where yyyyMMdd1 is the start date, e.g. 20210401, and yyyyMMdd2 is the end date, e.g. 20210420.
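For example, with those dates the full command would be:

python s5p-request.py L2__O3____ --date 20210401 20210420 --aoi map.geojson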

Note: the code is now working properly on my computer. Check again and let us know.

I hope this helps,

Cheers


@LillaBW2327
Author

Hi,

Thank you for getting back to me so quickly, I really appreciate it!

I have tried the format you suggested and I still get the same error unfortunately:

python s5p-request.py L2__O3____ --date 210301 210310 --aoi map.geojson
Traceback (most recent call last):
File "s5p-request.py", line 15, in
from s5p_tools import (
ModuleNotFoundError: No module named 's5p_tools'

My workflow is:

conda create --override-channels -c conda-forge -c stcorp --file requirements.txt --name SP5
pip install sentinelsat
s5p-request.py L2__O3____ --date 210301 210310 --aoi map.geojson

I was wondering if perhaps I have missed a step that is needed for 's5p_tools' to be found?

Thank you again for your help.

My s5p-request.py script:

import argparse
import sys
import warnings
from multiprocessing import cpu_count
from os import makedirs
from os.path import exists
from pathlib import Path

import numpy as np
import pandas as pd
import rioxarray
import xarray as xr
from tqdm import tqdm

from s5p_tools import (
    bounding_box,
    convert_to_l3_products,
    get_filenames_request,
    request_copernicus_hub,
)

def main(
    product,
    aoi,
    date,
    qa,
    unit,
    resolution,
    command,
    chunk_size,
    num_threads,
    num_workers,
):

tqdm.write("\nRequesting products\n")

_, products = request_copernicus_hub(
    login=DHUS_USER,
    password=DHUS_PASSWORD,
    hub=DHUS_URL,
    aoi=aoi,
    date=date,
    platformname="Sentinel-5 Precursor",
    producttype=product,
    download_directory=DOWNLOAD_DIR / product,
    checksum=CHECKSUM,
    num_threads=num_threads,
)

L2_files_urls = get_filenames_request(products, DOWNLOAD_DIR / product)

if len(L2_files_urls) == 0:
    tqdm.write("Done\n")
    sys.exit(0)

# PREPROCESS DATA

tqdm.write("Converting into L3 products\n")

# harpconvert commands :
# the source data is filtered + binning data by latitude/longitude

keep_general = [
    "latitude",
    "longitude",
    "sensor_altitude",
    "sensor_azimuth_angle",
    "sensor_zenith_angle",
    "solar_azimuth_angle",
    "solar_zenith_angle",
]

harp_dict = {
    "L2__O3____": {
        "keep": [
            "O3_column_number_density",
            "O3_effective_temperature",
            "cloud_fraction",
        ],
        "filter": [f"O3_column_number_density_validity>={qa}"],
        "convert": [f"derive(O3_column_number_density [{unit}])"],
    },
    "L2__NO2___": {
        "keep": [
            "tropospheric_NO2_column_number_density",
            "NO2_column_number_density",
            "stratospheric_NO2_column_number_density",
            "NO2_slant_column_number_density",
            "tropopause_pressure",
            "absorbing_aerosol_index",
            "cloud_fraction",
        ],
        "filter": [
            f"tropospheric_NO2_column_number_density_validity>={qa}",
            "tropospheric_NO2_column_number_density>=0",
        ],
        "convert": [
            f"derive(tropospheric_NO2_column_number_density [{unit}])",
            f"derive(stratospheric_NO2_column_number_density [{unit}])",
            f"derive(NO2_column_number_density [{unit}])",
            f"derive(NO2_slant_column_number_density [{unit}])",
        ],
    },
    "L2__SO2___": {
        "keep": [
            "SO2_column_number_density",
            "SO2_column_number_density_amf",
            "SO2_slant_column_number_density",
            "cloud_fraction",
        ],
        "filter": [f"SO2_column_number_density_validity>={qa}"],
        "convert": [
            f"derive(SO2_column_number_density [{unit}])",
            f"derive(SO2_slant_column_number_density [{unit}])",
        ],
    },
    "L2__CO____": {
        "keep": ["CO_column_number_density", "H2O_column_number_density"],
        "filter": [f"CO_column_number_density_validity>={qa}"],
        "convert": [
            f"derive(CO_column_number_density [{unit}])",
            f"derive(H2O_column_number_density [{unit}])",
        ],
    },
    "L2__CH4___": {
        "keep": [
            "CH4_column_volume_mixing_ratio_dry_air",
            "aerosol_height",
            "aerosol_optical_depth",
            "cloud_fraction",
        ],
        "filter": [f"CH4_column_volume_mixing_ratio_dry_air_validity>={qa}"],
        "convert": [],
    },
    "L2__HCHO__": {
        "keep": [
            "tropospheric_HCHO_column_number_density",
            "tropospheric_HCHO_column_number_density_amf",
            "HCHO_slant_column_number_density",
            "cloud_fraction",
        ],
        "filter": [f"tropospheric_HCHO_column_number_density_validity>={qa}"],
        "convert": [
            f"derive(tropospheric_HCHO_column_number_density [{unit}])",
            f"derive(HCHO_slant_column_number_density [{unit}])",
        ],
    },
    "L2__CLOUD_": {
        "keep": [
            "cloud_fraction",
            "cloud_top_pressure",
            "cloud_top_height",
            "cloud_base_pressure",
            "cloud_base_height",
            "cloud_optical_depth",
            "surface_albedo",
        ],
        "filter": [f"cloud_fraction_validity>={qa}"],
        "convert": [],
    },
    "L2__AER_AI": {
        "keep": [
            "absorbing_aerosol_index",
        ],
        "filter": [f"absorbing_aerosol_index_validity>={qa}"],
        "convert": [],
    },
    "L2__AER_LH": {
        "keep": [
            "aerosol_height",
            "aerosol_pressure",
            "aerosol_optical_depth",
            "cloud_fraction",
        ],
        "filter": [f"aerosol_height_validity>={qa}"],
        "convert": [],
    },
}

# Step size for spatial re-gridding (in degrees)
lon_step, lat_step = resolution

if aoi is None:
    extent = [-180, 180, -90, 90]
else:
    extent = bounding_box(Path(aoi))

# computes offsets and number of samples
lat_edge_length = int(abs(extent[3] - extent[2]) / lat_step + 1)
lat_edge_offset = extent[2]
lon_edge_length = int(abs(extent[1] - extent[0]) / lon_step + 1)
lon_edge_offset = extent[0]

# create HARP commands
if command is None:
    harp_commands = (
        ";".join(harp_dict[product]["filter"])
        + (";" if len(harp_dict[product]["filter"]) != 0 else "")
        + ";".join(harp_dict[product]["convert"])
        + (";" if len(harp_dict[product]["convert"]) != 0 else "")
        + "derive(datetime_stop {time});"
        + f"bin_spatial({lat_edge_length},{lat_edge_offset},{lat_step},{lon_edge_length},{lon_edge_offset},{lon_step});"
        + "derive(latitude {latitude});derive(longitude {longitude});"
        + f"keep({','.join(harp_dict[product]['keep'] + keep_general)})"
    )

else:
    harp_commands = command

# perform conversion
convert_to_l3_products(
    L2_files_urls,
    pre_commands=harp_commands,
    post_commands="",
    export_path=EXPORT_DIR / product.replace("L2", "L3"),
    num_workers=num_workers,
)

# Recover attributes
attributes = {
    filename.name: {
        "time_coverage_start": xr.open_dataset(filename).attrs[
            "time_coverage_start"
        ],
        "time_coverage_end": xr.open_dataset(filename).attrs["time_coverage_end"],
    }
    for filename in L2_files_urls
}

# AGGREGATE DATASET

tqdm.write("Processing data\n")

# Avoid lost attributes during conversion
xr.set_options(keep_attrs=True)

def preprocess(ds):
    ds["time"] = pd.to_datetime(
        np.array([attributes[ds.attrs["source_product"]]["time_coverage_start"]])
    ).values
    return ds

DS = xr.open_mfdataset(
    [
        str(filename.relative_to(".")).replace("L2", "L3")
        for filename in L2_files_urls
        if exists(str(filename.relative_to(".")).replace("L2", "L3"))
    ],
    combine="nested",
    concat_dim="time",
    parallel=True,
    preprocess=preprocess,
    decode_times=False,
    chunks={"time": chunk_size},
)
DS = DS.sortby("time")
DS.rio.write_crs("epsg:4326", inplace=True)
DS.rio.set_spatial_dims(x_dim="longitude", y_dim="latitude", inplace=True)

# EXPORT DATASET

tqdm.write("Exporting netCDF file\n")

start = min(products[uuid]["beginposition"] for uuid in products.keys())
end = max(products[uuid]["endposition"] for uuid in products.keys())
export_dir = PROCESSED_DIR / f"processed{product[2:]}"
makedirs(export_dir, exist_ok=True)
file_export_name = export_dir / (
    f"{product[4:]}{start.day}-{start.month}-{start.year}__"
    f"{end.day}-{end.month}-{end.year}.nc"
)

DS.to_netcdf(file_export_name)

tqdm.write("Done\n")

if __name__ == "__main__":

    # Ignore warnings
    warnings.filterwarnings("ignore", category=RuntimeWarning)
    warnings.filterwarnings("ignore", category=FutureWarning)

    # PARAMS

    # Perform checksum verification after each download
    CHECKSUM = True

    # CLI ARGUMENTS

    parser = argparse.ArgumentParser(
        description=(
            "Request, download and process Sentinel data from Copernicus access hub. "
            "Create a processed netCDF file binned by time, latitude and longitude"
        )
    )

    # Product type: Used to perform a product based search
    # Possible values are
    #   L2__O3____
    #   L2__NO2___
    #   L2__SO2___
    #   L2__CO____
    #   L2__CH4___
    #   L2__HCHO__
    #   L2__AER_AI
    #   L2__CLOUD_
    parser.add_argument("product", help="Product type", type=str)

    # Date: Used to perform a time interval search
    # The general form to be used is:
    #       date=(<timestamp>, <timestamp>)
    # where < timestamp > can be expressed in one of the following formats:
    #   yyyyMMdd
    #   yyyy-MM-ddThh:mm:ssZ
    #   yyyy-MM-ddThh:mm:ss.SSSZ(ISO8601 format)
    #   NOW
    #   NOW-<n>MINUTE(S)
    #   NOW-<n>HOUR(S)
    #   NOW-<n>DAY(S)
    #   NOW-<n>MONTH(S)
    parser.add_argument(
        "--date",
        help="date used to perform a time interval search",
        nargs=2,
        type=str,
        default=("NOW-24HOURS", "NOW"),
    )

    # Area of interest: The url of the area of interest (.geojson)
    parser.add_argument(
        "--aoi", help="path to the area of interest (.geojson)", type=str
    )

    # Harp command: Harp convert command used during import of products
    parser.add_argument(
        "--command",
        help="harp convert command used during import of products",
        type=str,
    )

    # Unit: Unit conversion
    parser.add_argument("--unit", help="unit conversion", type=str, default="mol/m2")

    # qa value: Quality value threshold
    parser.add_argument("--qa", help="quality value threshold", type=int, default=50)

    # resolution: Spatial resolution in arc degrees
    parser.add_argument(
        "--resolution",
        help="spatial resolution in arc degrees",
        nargs=2,
        type=float,
        default=(0.01, 0.01),
    )

    # chunk-size:
    parser.add_argument(
        "--chunk-size",
        help="dask chunk size along the time dimension",
        type=int,
        default=256,
    )

    # num-threads:
    parser.add_argument(
        "--num-threads",
        help="number of threads spawned for L2 download",
        type=int,
        default=4,
    )

    # num-workers:
    parser.add_argument(
        "--num-workers",
        help="number of workers spawned for L3 conversion",
        type=int,
        default=cpu_count(),
    )

    args = parser.parse_args()

    # PATHS

    # download_directory: directory for L2 products
    DOWNLOAD_DIR = Path("L2_data")

    # export_directory: directory for L3 products
    EXPORT_DIR = Path("L3_data")

    # processed_directory: directory for processed products (aggregated+masked)
    PROCESSED_DIR = Path("processed")

    # CREDENTIALS

    DHUS_USER = "s5pguest"
    DHUS_PASSWORD = "s5pguest"
    DHUS_URL = "https://s5phub.copernicus.eu/dhus"

    main(
        product=args.product,
        aoi=args.aoi,
        date=args.date,
        qa=args.qa,
        unit=args.unit,
        resolution=args.resolution,
        command=args.command,
        chunk_size=args.chunk_size,
        num_threads=args.num_threads,
        num_workers=args.num_workers,
    )

@HichemOmr

I'll let Bilel suggest a solution to fix this problem. I believe he is the perfect person to do so since he is the developer of this tool :)
Looking forward,

@LillaBW2327
Author

I'll let Bilel suggest a solution to fix this problem. I believe he is the perfect person to do so since he is the developer of this tool :)
Looking forward,

Thank you so much for your help! :)

@bilelomrani1
Owner

Hi @LillaBW2327, did you clone the repo in your working directory before running the script?

cd your/working/directory
conda activate S5P
git clone https://github.com/bilelomrani1/s5p-tools.git
python s5p-request.py L2__O3____ --date NOW-1MONTH --aoi map.geojson
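If it is unclear whether the package is visible from your current directory, here is a minimal diagnostic sketch (an illustration only, not part of the repo):

# Quick check: can Python find the s5p_tools package from the current working directory?
import importlib.util
import os

print("cwd:", os.getcwd())
print("s5p_tools importable:", importlib.util.find_spec("s5p_tools") is not None)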


@LillaBW2327
Author

Hi @bilelomrani1 , thank you so much for your quick reply, it is greatly appreciated.
I had not cloned the repo so thank you for highlighting that for me. While that solved my previous issue, I now have a new one I'm afraid!

My script is exactly the same as the one posted above; the only difference is that I have now cloned the repo. I have tried different versions of the s5p-request.py script and get the same error.
Any thoughts and ideas would be greatly appreciated. Thank you again.

(SP5) [bw43776@abics03 Documents]$ python s5p-request.py L2__O3____ --date NOW-1MONTH --aoi map.geojson
Traceback (most recent call last):
File "s5p-request.py", line 11, in
import rioxarray
File "/home/ab/bw43776/anaconda3/envs/SP5/lib/python3.7/site-packages/rioxarray/init.py", line 6, in
import rioxarray.raster_array # noqa
File "/home/ab/bw43776/anaconda3/envs/SP5/lib/python3.7/site-packages/rioxarray/raster_array.py", line 18, in
import rasterio
File "/home/ab/bw43776/anaconda3/envs/SP5/lib/python3.7/site-packages/rasterio/init.py", line 22, in
from rasterio._base import gdal_version
ImportError: libnsl.so.1: cannot open shared object file: No such file or directory

@HichemOmr

Hi there,
You will need to install some packages, e.g. rioxarray and gdal (the traceback stops at File "s5p-request.py", line 11: import rioxarray).
To install rioxarray, for instance, run:
conda install -c conda-forge rioxarray
I hope it helps.

@LillaBW2327
Author

Hi there,

You will need to install some packages, e.g. rioxarray and gdal (the traceback stops at File "s5p-request.py", line 11: import rioxarray).
To install rioxarray, for instance, run:
conda install -c conda-forge rioxarray
I hope it helps.

Hi, thank you so much for your help, but I am afraid I have already tried this, as a previous issue was related to not having the rioxarray package. When running, for example, conda install -c conda-forge rioxarray, I get the confirmation that "All requested packages already installed".

@LillaBW2327
Author

Hi @HichemOmr @bilelomrani1, thank you so much for your help on this issue. The last issue I posted has been resolved; it was linked to a problem with the libnsl package on my Linux system. (My issue is basically the same as this one: conda-forge/fiona-feedstock#138.) Once I had installed this package separately, the script ran perfectly. Thank you again.
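For anyone else hitting the same libnsl.so.1 error: depending on the distribution, the missing compatibility library can usually be installed either as a system package or into the conda environment, for example (assuming a RHEL/CentOS 8-style system, or a conda-forge-based environment):

sudo yum install libnsl
conda install -c conda-forge libnsl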

@HichemOmr

HichemOmr commented Apr 20, 2021

Awesome!
Let me know in case you would like to do a joint research study on the topic.
Here is my professional email: hichem.omrani@liser.lu
and here is my Skype name: hichem.omrani
Cheers

@bhushanpawar1707

(base) C:\Users\bhushan.pawar\my project\me>python s5p-request.py L2__NO2___
Traceback (most recent call last):
  File "C:\Users\bhushan.pawar\my project\me\s5p-request.py", line 15, in <module>
    from s5p_tools import (bounding_box,convert_to_l3_products,get_filenames_request,request_copernicus_hub,)
  File "C:\Users\bhushan.pawar\my project\me\s5p_tools\__init__.py", line 2, in <module>
    from .preprocess import bounding_box, convert_to_l3_products
  File "C:\Users\bhushan.pawar\my project\me\s5p_tools\preprocess.py", line 6, in <module>
    import geopandas
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\geopandas\__init__.py", line 1, in <module>
    from geopandas._config import options # noqa
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\geopandas\_config.py", line 126, in <module>
    default_value=_default_use_pygeos(),
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\geopandas\_config.py", line 112, in _default_use_pygeos
    import geopandas._compat as compat
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\geopandas\_compat.py", line 202, in <module>
    import rtree # noqa
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\rtree\__init__.py", line 9, in <module>
    from .index import Rtree, Index # noqa
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\rtree\index.py", line 6, in <module>
    from . import core
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\rtree\core.py", line 75, in <module>
    rt = finder.load()
  File "C:\Users\bhushan.pawar\Miniconda3\lib\site-packages\rtree\finder.py", line 67, in load
    raise OSError("could not find or load {}".format(lib_name))
OSError: could not find or load spatialindex_c-64.dll

I am facing this error. Kindly help me to solve it.
