I don't think it is very comfortable to source all the enumerations every time.
I think we can build a wrapper with a bit of fuzzy logic to determine the Parameter, periodType and TimeResolution.
This is quite similar to issue #8
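A minimal sketch of what such a fuzzy wrapper could look like, using `difflib.get_close_matches` from the standard library. The `TimeResolution` members below are hypothetical stand-ins for the library's actual enums, and `resolve_enum` is an assumed helper name, not an existing function:

```python
from difflib import get_close_matches
from enum import Enum


class TimeResolution(Enum):
    # hypothetical members; the real library defines its own values
    MINUTE_10 = "10_minutes"
    HOURLY = "hourly"
    DAILY = "daily"


def resolve_enum(value, enum_cls):
    """Map a free-form user string onto the closest enum member,
    so users do not have to import the enumerations themselves."""
    if isinstance(value, enum_cls):
        return value  # already an enum member, pass it through
    candidates = {member.value: member for member in enum_cls}
    matches = get_close_matches(
        value.strip().lower(), candidates.keys(), n=1, cutoff=0.6
    )
    if not matches:
        raise ValueError(f"{value!r} does not match any {enum_cls.__name__}")
    return candidates[matches[0]]
```

The same helper could then be applied to `Parameter` and `periodType` as well, so every public function accepts either a plain string or the enum member itself.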
My original thought with parsing parameters from the file was that you could start anywhere between checking metadata and parsing data into a dataframe. But you are probably right that nobody would use only one function; it is more like a pipe, meaning that only a combination of all those functions makes it work. So instead of a function I would suggest a class. Either this class calls our functions one after another, or our functions "complete" the class over time; if certain steps are not done, for example a link to the file is not given, it throws an error when trying to download the file.
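A rough sketch of the class-based variant described above, where each step stores its result and later steps raise if a prerequisite is missing. The class name, attributes, and URL pattern are all assumptions for illustration, not the library's actual API:

```python
class DwdPipeline:
    """Hypothetical pipeline object: each step records its result on the
    instance; a later step raises if a prerequisite step has not run yet."""

    def __init__(self, parameter, time_resolution, period_type):
        self.parameter = parameter
        self.time_resolution = time_resolution
        self.period_type = period_type
        self.file_link = None
        self.raw_bytes = None

    def create_file_link(self):
        # assumption: the link is derived from the three settings
        self.file_link = (
            f"https://example.invalid/{self.parameter}/"
            f"{self.time_resolution}/{self.period_type}.zip"
        )
        return self  # returning self allows chaining the steps

    def download(self):
        if self.file_link is None:
            raise RuntimeError(
                "No file link set - call create_file_link() first"
            )
        self.raw_bytes = b"..."  # placeholder for the real HTTP download
        return self
```

Calling `DwdPipeline(...).download()` without `create_file_link()` would then fail loudly, which is exactly the "throw an error when a step is missing" behaviour discussed here.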
This is a good point. Maybe we provide a script that manages the whole pipeline; there we need a defined interface. And if anyone would like to use one of the functions in the pipeline on its own, they have to check its usage themselves.
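Such a managing script could be as simple as one entry point that calls the individual step functions in a fixed order. The function names below are hypothetical placeholders for the library's per-step functions:

```python
def create_file_link(parameter, resolution, period):
    # assumption: the library exposes per-step functions like this one
    return f"https://example.invalid/{parameter}/{resolution}/{period}.zip"


def download_file(link):
    if not link:
        raise ValueError("A file link is required before downloading")
    return b"..."  # placeholder for the real download


def run_pipeline(parameter, resolution, period):
    """One script-level entry point that manages the whole pipeline,
    so the defined interface lives in a single place."""
    link = create_file_link(parameter, resolution, period)
    return download_file(link)
```

Anyone who wants only one step can still call, say, `download_file` directly, but then they are responsible for supplying a valid link themselves.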
Enumerations are helpful for keeping the code clean, but I don't think they are very user-friendly.