Hi Biosteam community, I'm a new Biosteam user, and the first issue I ran into with the Biorefineries Bioindustrial-park repo is its size: 1.7 GB. I understand the good practice of sharing xls files to compare ideas and validate results, but it is a bit unwieldy. I suggest moving data files such as .xls, .csv, etc. to some kind of cloud file-mirroring service (Google Drive, Dropbox, and so on). It would be fantastic to have routines that create or download these asset files on the fly during Python initialization; I can even help code some common routines to automate this kind of action. If anyone else thinks this would be an #enhancement, please let me know. This approach could also help from the start of the #Boilerplate-biorefinery. With my best gratitude.
Maybe the easiest way is to gitignore .csv and .xls files, then make a pull request adding a link (in a README file, for example) to a .zip file containing all these non-critical static assets in a suitable directory structure (this assumes advanced users). A slightly more complex approach is to implement simple load_data helper methods in the __init__.py files; a key question is which service/technology to choose for mirroring the files, and the main benefit is that it stays transparent for users. A rough sketch of such a helper is shown below.
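Just to make the idea concrete, here is a minimal sketch of what a lazy load_data helper in a biorefinery's __init__.py could look like. The mirror URL, data directory layout, and file names are placeholders I made up, not anything that exists in BioSTEAM today:

```python
# Hypothetical sketch: download data files on first use instead of
# committing them to the repository.
from pathlib import Path
from urllib.request import urlretrieve

import pandas as pd

# Placeholder mirror location for the non-critical static assets.
DATA_MIRROR = "https://example.com/bioindustrial-park-data/"
DATA_DIR = Path(__file__).parent / "data"


def load_data(filename):
    """Return the requested data file, downloading it on first use."""
    DATA_DIR.mkdir(exist_ok=True)
    local_path = DATA_DIR / filename
    if not local_path.exists():
        # Fetch the file once and cache it next to the package.
        urlretrieve(DATA_MIRROR + filename, local_path)
    if filename.endswith((".xls", ".xlsx")):
        return pd.read_excel(local_path)
    return pd.read_csv(local_path)
```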
We can add .csv and .xlsx to gitignore and have users add the files only when they are about to publish the biorefinery in a paper.
Using zip files may not be feasible considering we have many users who need quick access to load/save data through Python. Please feel free to suggest an implementation for easy saving and loading of xlsx, csv, and npy files from a cloud.
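One possible direction (just a suggestion, assuming a third-party dependency is acceptable) is the pooch library, which downloads individual files over HTTPS and caches them locally, so no zip archives are needed. The URLs in the usage comments are placeholders:

```python
# Sketch using the third-party "pooch" library (not currently a BioSTEAM
# dependency) to fetch and cache individual data files from any HTTPS host.
import numpy as np
import pandas as pd
import pooch


def fetch(url, known_hash=None):
    """Download a file once into a local cache and return its local path."""
    return pooch.retrieve(url=url, known_hash=known_hash)


# Hypothetical usage: each biorefinery lists the files it needs.
# table = pd.read_excel(fetch("https://example.com/assets/conversion_data.xlsx"))
# prices = pd.read_csv(fetch("https://example.com/assets/prices.csv"))
# samples = np.load(fetch("https://example.com/assets/samples.npy"))
```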