Adding a New Example Notebook #85
Conversation
I am adding a new example notebook that walks through creating models in SWAT and XGBoost and then using sasctl and pzmm to create the appropriate metadata before pushing everything into Model Manager. The notebook was designed for Viya 2020 environments. I haven't made any changes to the source code - just adding a new example.
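For orientation, here is a minimal sketch of the SWAT half of that flow, assuming the hmeq.csv data added in this PR and placeholder host names and credentials; the XGBoost and pzmm steps are omitted since their exact calls are not shown in this thread.

import swat
from sasctl import Session
from sasctl.tasks import register_model

# Connect to CAS (host, port, and credentials are placeholders)
conn = swat.CAS('example.sas.com', 5570, 'username', 'password')

# Upload the sample data and train a gradient boosting model in CAS,
# saving the fitted model as an analytic store (astore)
conn.loadactionset('decisionTree')
tbl = conn.upload_file('data/hmeq.csv', casout={'name': 'hmeq', 'replace': True})
conn.decisiontree.gbtreetrain(
    table=tbl,
    target='BAD',
    inputs=['LOAN', 'MORTDUE', 'VALUE', 'DEBTINC'],
    nominals=['BAD'],
    savestate={'name': 'gb_astore', 'replace': True},
)
astore = conn.CASTable('gb_astore')

# Register the astore in SAS Model Manager through sasctl
with Session('example.sas.com', 'username', 'password'):
    register_model(astore, 'HMEQ GBoost', project='HMEQ Models', force=True)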
Codecov Report
@@            Coverage Diff            @@
##           master      #85   +/-   ##
=======================================
  Coverage   74.53%   74.53%
=======================================
  Files          48       48
  Lines        3361     3361
=======================================
  Hits         2505     2505
  Misses        856      856

Continue to review full report at Codecov.
Looking good. Can you clean up the mm_swat() method slightly? The with syntax isn't actually needed, and the additional files can simply be passed using the files= parameter of register_model(). See example below.
model_content = [output_path / f for f in ('dmcas_fitstat.json', 'dmcas_lift.json', 'dmcas_roc.json')]
model_content = {f.name: f for f in model_content}
# Push the feature and model astores into Model Manager
register_model(feature_astore, feature_name, project=project_name, force=True, version='latest')
register_model(model_astore, model_name, project=project_name, force=True, version='latest', files=model_content)
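For illustration, folding that suggestion into the notebook's mm_swat() helper might look roughly like the sketch below; the helper's signature and the Path handling are assumptions rather than code taken from the notebook.

from pathlib import Path
from sasctl.tasks import register_model

def mm_swat(feature_astore, model_astore, output_path, project_name, feature_name, model_name):
    # Attach the fit statistic, lift, and ROC JSON files produced earlier in the notebook
    output_path = Path(output_path)
    model_content = [output_path / f for f in ('dmcas_fitstat.json', 'dmcas_lift.json', 'dmcas_roc.json')]
    model_content = {f.name: f for f in model_content}

    # Register both astores; the extra JSON files ride along via files=
    register_model(feature_astore, feature_name, project=project_name, force=True, version='latest')
    register_model(model_astore, model_name, project=project_name, force=True, version='latest', files=model_content)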
- Changed /Data to /data
- Swapped code in SWAT Model Manager function with suggested code
Thanks for the suggestions Jonathan! I added hmeq.csv into /data, changed /Data to /data in the notebook, and implemented your suggested method in mm_swat().