From 01c23b3acf3223afdfb2a7eb8aed6d75863d4978 Mon Sep 17 00:00:00 2001
From: Marcos Arancibia
Date: Wed, 17 Dec 2025 14:51:33 -0500
Subject: [PATCH] Large update to hundreds of Notebooks, dozens of Notebook additions, adjustments to folder names, updates to copyright

Large update to hundreds of Notebooks, dozens of Notebook additions,
adjustments to folder names, updates to copyright. Adding all Spatial AI
notebooks as new OML4Py notebooks.
---
 ...ed_with_OML4Py_on_Autonomous_Database.json |    2 +-
 ...ate_data_using_the_Transparency_Layer.json |   66 +-
 ...Use_in_database_algorithms_and_models.json |   56 +-
 ...on_objects_and_user_defined_functions.json |   40 +-
 ...tions_using_Embedded_Python_Execution.json |   26 +-
 .../oml4py-live-labs/Lab6_Use_AutoML.json     |   10 +-
 .../labs/oml4py-live-labs/README.md           |   14 +-
 .../python/OML Run-me-first.json              |    0
 ...Party Packages - Environment Creation.json |    0
 ... Packages - Python Environment Usage .json |    0
 .../python/OML4Py -0- Tour.json               |    0
 .../python/OML4Py -1- Introduction.json       |    0
 ...y -2- Data Selection and Manipulation.json |    0
 ...y -3- Datastore and Script Repository.json |    0
 .../OML4Py -4- Embedded Python Execution.json |    0
 .../python/OML4Py -5- AutoML.json             |    0
 .../python/OML4Py Anomaly Detection SVM.json  |    0
 .../OML4Py Association Rules Apriori.json     |    0
 .../OML4Py Attribute Importance MDL.json      |    0
 .../python/OML4Py Classification DT.json      |    0
 .../python/OML4Py Classification GLM.json     |    0
 .../python/OML4Py Classification NB.json      |    0
 .../python/OML4Py Classification NN.json      |    0
 .../python/OML4Py Classification RF.json      |    0
 .../python/OML4Py Classification SVM.json     |    0
 .../python/OML4Py Clustering EM.json          |    0
 .../python/OML4Py Clustering KM.json          |    0
 ...L4Py Data Cleaning Duplicates Removal.json |    0
 .../OML4Py Data Cleaning Missing Data.json    |    0
 .../OML4Py Data Cleaning Outlier Removal.json |    0
 ...ata Cleaning Recode Synonymous Values.json |    0
 .../OML4Py Data Transformation Binning.json   |    0
 ...ML4Py Data Transformation Categorical.json |    0
 ...nsformation Normalization and Scaling.json |    0
 ... Data Transformation One Hot Encoding.json |    0
 .../python/OML4Py Dataset Creation.json       |    0
 .../python/OML4Py Date and Time Classes.json  |    0
 ...ML4Py Feature Engineering Aggregation.json |    0
 ...4Py Feature Extraction ESA Wiki Model.json |    0
 .../python/OML4Py Feature Extraction ESA.json |    0
 .../python/OML4Py Feature Extraction SVD.json |    0
 ...4Py Feature Selection Algorithm-based.json |    0
 ...re Selection Using Summary Statistics.json |    0
 .../OML4Py Importing Wide Datasets.json       |    0
 .../python/OML4Py Partitioned Model SVM.json  |    0
 .../python/OML4Py REST API.json               |    0
 .../python/OML4Py Regression GLM.json         |    0
 .../python/OML4Py Regression NN.json          |    0
 .../python/OML4Py Regression SVM.json         |    0
 .../python/OML4Py Statistical Functions.json  |    0
 .../python/OML4Py Text Mining SVM.json        |    0
 .../python/OML4Py Time Series ESM.json        |    0
 ...rs_OML4Py_Cross_Validation_AUC_ML_101.json |    0
 ..._execution_using_third_party_packages.json |    0
 ...ours_OML4Py_Weight_of_Evidence_ML_101.json |    0
 ...eHours_OML_Datastore_for_R_and_Python.json |    0
 ...loy_an_XGBoost_model_in_OML_Services.ipynb |    0
 .../notebooks-classic/python/README.md        |    0
 .../r/OML Import Wiki ESA Model.json          |    0
 .../notebooks-classic/r/OML Run-me-first.json |    0
 ...Party Packages - Environment Creation.json |    0
 ...-Party Packages - R Environment Usage.json |    0
 .../r/OML4R -1- Introduction.json             |    0
 ...R -2- Data Selection and Manipulation.json |    0
 ...R -3- Datastore and Script Repository.json |    0
 .../r/OML4R -4- Embedded R Execution.json     |    0
 .../r/OML4R Anomaly Detection SVM.json        |    0
 .../r/OML4R Association Rules Apriori.json    |    0
 .../r/OML4R Attribute Importance MDL.json     |    0
 .../r/OML4R Classification DT.json            |    0
 .../r/OML4R Classification GLM.json           |    0
 .../r/OML4R Classification NB.json            |    0
 .../r/OML4R Classification RF.json            |    0
 .../r/OML4R Classification SVM.json           |    0
 .../r/OML4R Clustering EM.json                |    0
 .../r/OML4R Clustering KM.json                |    0
 .../r/OML4R Clustering OC.json                |    0
 ...OML4R Data Cleaning Duplicate Removal.json |    0
 .../r/OML4R Data Cleaning Missing Data.json   |    0
 .../OML4R Data Cleaning Outlier Removal.json  |    0
 ...ata Cleaning Recode Synonymous Values.json |    0
 .../r/OML4R Data Transformation Binning.json  |    0
 ...ata Transformation Categorical Recode.json |    0
 ...4R Data Transformation Date Datatypes.json |    0
 ...nsformation Normalization and Scaling.json |    0
 ... Data Transformation One-Hot Encoding.json |    0
 .../r/OML4R Dataset Creation.json             |    0
 ...OML4R Feature Engineering Aggregation.json |    0
 ...L4R Feature Extraction ESA Wiki Model.json |    0
 .../r/OML4R Feature Extraction ESA.json       |    0
 .../r/OML4R Feature Extraction SVD.json       |    0
 ...L4R Feature Selection Algorithm-based.json |    0
 ...re Selection Using Summary Statistics.json |    0
 .../r/OML4R Partitioned Model SVM.json        |    0
 .../notebooks-classic/r/OML4R REST API.json   |    0
 .../r/OML4R Regression GLM.json               |    0
 .../r/OML4R Regression NN .json               |    0
 .../r/OML4R Regression SVM.json               |    0
 .../r/OML4R Statistical Functions .json       |    0
 .../r/OML4R Text Mining SVM.json              |    0
 .../r/OfficeHours_OML4R_Demonstration.json    |    0
 ...eHours_OML_Datastore_for_R_and_Python.json |    0
 .../notebooks-classic/r/README.md             |    0
 .../Credit_Scoring_100K_SQL_Create_Table.sql  |    0
 ... Export and Import Serialized Models.json  |    0
 ...SQL 21c or 23c Anomaly Detection MSET.json |    0
 ...SQL 21c or 23c Classification XGBoost.json |    0
 ...OML4SQL 21c or 23c Regression XGBoost.json |    0
 .../sql/OML4SQL Anomaly Detection SVM.json    |    0
 .../OML4SQL Association Rules Apriori.json    |    0
 .../sql/OML4SQL Attribute Importance MDL.json |    0
 .../sql/OML4SQL Classification DT.json        |    0
 .../sql/OML4SQL Classification GLM.json       |    0
 .../sql/OML4SQL Classification NB.json        |    0
 .../sql/OML4SQL Classification NN.json        |    0
 .../sql/OML4SQL Classification RF.json        |    0
 .../sql/OML4SQL Classification SVM.json       |    0
 .../sql/OML4SQL Clustering EM.json            |    0
 .../sql/OML4SQL Clustering KM.json            |    0
 .../sql/OML4SQL Clustering OC.json            |    0
 ...4SQL Data Cleaning Duplicates Removal.json |    0
 .../OML4SQL Data Cleaning Missing Data.json   |    0
 ...OML4SQL Data Cleaning Outlier Removal.json |    0
 ...ata Cleaning Recode Synonymous Values.json |    0
 .../OML4SQL Data Transformation Binning.json  |    0
 ...L4SQL Data Transformation Categorical.json |    0
 ...ransformation Normalization and Scale.json |    0
 .../OML4SQL Exporting Serialized Models.json  |    0
 ...ture Engineering Aggregation and Time.json |    0
 ...SQL Feature Extraction ESA Wiki Model.json |    0
 .../sql/OML4SQL Feature Extraction NMF.json   |    0
 .../sql/OML4SQL Feature Extraction SVD.json   |    0
 ...SQL Feature Selection Algorithm Based.json |    0
 ...ion Unsupervised Attribute Importance.json |    0
 ...re Selection Using Summary Statistics.json |    0
 .../sql/OML4SQL Nested Columns.json           |    0
 .../sql/OML4SQL Partitioned Model SVM.json    |    0
 ...L Procedure for Importing Data to ADB.json |    0
 .../sql/OML4SQL Regression GLM.json           |    0
 .../sql/OML4SQL Regression NN.json            |    0
 .../sql/OML4SQL Regression SVM.json           |    0
 .../sql/OML4SQL Statistical Functions.json    |    0
 .../sql/OML4SQL Text Mining SVM.json          |    0
 .../sql/OML4SQL Time Series ESM.json          |    0
 ...OML4SQL_Automated_Text_Mining_Example.json |    0
 .../sql/OML4SQL_Credit_Score_Predictions.json |    0
 .../sql/OML4SQL_Insurance_Claims_Fraud.json   |    0
 .../sql/OML4SQL_My_First_Notebook.json        |    0
 ...Good_Wine_for_20_dollars_with_ADW_OML.json |    0
 ...redicting_Customer_Lifetime_Value_LTV.json |    0
 .../OML4SQL_SQLFORMAT_and_Forms_Examples.json |    0
 .../sql/OML4SQL_Targeting_Top_Customers.json  |    0
 .../OML4SQL_Targeting_Top_Customers_10k.json  |    0
 ...ng_Top_Customers_Desktop_Viz_companion.dva |  Bin
 .../notebooks-classic/sql/README.md           |    0
 .../python/OML Run-me-first.dsnb              |    1 +
 ...Party Packages - Environment Creation.dsnb | 2512 +++++++++++++++++
 ...y Packages - Python Environment Usage.dsnb | 1689 +++++++++++
 .../notebooks-oml/python/OML4Py -0- Tour.dsnb |    1 +
 .../python/OML4Py -1- Introduction.dsnb       |    1 +
 ...y -2- Data Selection and Manipulation.dsnb |    1 +
 ...y -3- Datastore and Script Repository.dsnb |    1 +
 .../OML4Py -4- Embedded Python Execution.dsnb |    1 +
 .../python/OML4Py -5- AutoML.dsnb             |    1 +
 .../python/OML4Py Anomaly Detection SVM.dsnb  |    1 +
 .../OML4Py Association Rules Apriori.dsnb     |    1 +
 .../OML4Py Attribute Importance MDL.dsnb      |    1 +
 .../python/OML4Py Classification DT.dsnb      |    1 +
 .../python/OML4Py Classification GLM.dsnb     |    1 +
 .../python/OML4Py Classification NB.dsnb      |    1 +
 .../python/OML4Py Classification NN.dsnb      |    1 +
 .../python/OML4Py Classification RF.dsnb      |    1 +
 .../python/OML4Py Classification SVM.dsnb     |    1 +
 .../python/OML4Py Clustering EM.dsnb          |    1 +
 .../python/OML4Py Clustering KM.dsnb          | 1046 +++++++
 ...L4Py Data Cleaning Duplicates Removal.dsnb |    1 +
 .../OML4Py Data Cleaning Missing Data.dsnb    |    1 +
 .../OML4Py Data Cleaning Outlier Removal.dsnb |    1 +
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 +
 .../OML4Py Data Transformation Binning.dsnb   |    1 +
 ...ML4Py Data Transformation Categorical.dsnb |    1 +
 ...Py Data Transformation Date Datatypes.dsnb |    1 +
 ...nsformation Normalization and Scaling.dsnb |    1 +
 ... Data Transformation One Hot Encoding.dsnb |    1 +
 .../python/OML4Py Dataset Creation.dsnb       |    1 +
 ...ML4Py Feature Engineering Aggregation.dsnb |    1 +
 ...4Py Feature Extraction ESA Wiki Model.dsnb |    1 +
 .../python/OML4Py Feature Extraction ESA.dsnb |    1 +
 .../python/OML4Py Feature Extraction NMF.dsnb |    1 +
 .../python/OML4Py Feature Extraction SVD.dsnb |    1 +
 ...4Py Feature Selection Algorithm-based.dsnb |    1 +
 ...re Selection Using Summary Statistics.dsnb |    1 +
 .../OML4Py Importing Wide Datasets.dsnb       |    1 +
 .../python/OML4Py Partitioned Model SVM.dsnb  |    1 +
 .../notebooks-oml/python/OML4Py REST API.dsnb |    1 +
 .../python/OML4Py Regression GLM.dsnb         |    1 +
 .../python/OML4Py Regression NN.dsnb          |    1 +
 .../python/OML4Py Regression SVM.dsnb         |    1 +
 ...SQL API for Embedded Python Execution.dsnb |    1 +
 ...rative Clustering and Regionalization.dsnb |    1 +
 ...patial AI Categorical Lag Transformer.dsnb |    1 +
 .../OML4Py Spatial AI DBSCAN Accidents.dsnb   |    1 +
 ...Spatial AI Embedded Execution for SQL.dsnb |  832 ++++++
 ...ML4Py Spatial AI Exploratory Analysis.dsnb |    1 +
 .../OML4Py Spatial AI GWR Regressor.dsnb      |    1 +
 ...Py Spatial AI Geographical Classifier.dsnb |    1 +
 ...4Py Spatial AI Geographical Regressor.dsnb |    1 +
 .../OML4Py Spatial AI Hotspot Clustering.dsnb |    1 +
 .../OML4Py Spatial AI KMeans Clustering.dsnb  |    1 +
 .../OML4Py Spatial AI LOF Accidents.dsnb      |    1 +
 .../OML4Py Spatial AI OLS Regressor.dsnb      |    1 +
 .../OML4Py Spatial AI PAR Object Store.dsnb   |    1 +
 .../OML4Py Spatial AI Run Me First.dsnb       |    1 +
 .../OML4Py Spatial AI SLX Classifier.dsnb     |    1 +
 .../OML4Py Spatial AI Save Load Run.dsnb      |  862 ++++++
 .../OML4Py Spatial AI Scoord Transformer.dsnb |    1 +
 ...al AI Spatial Fixed Effects Regressor.dsnb |    1 +
 .../OML4Py Spatial AI Spatial Imputer.dsnb    |    1 +
 ...Py Spatial AI Spatial Lag Transformer.dsnb |    1 +
 ...l AI Spatial Lag and Error Regressors.dsnb |    1 +
 .../OML4Py Spatial AI Spatial Operations.dsnb |    1 +
 .../OML4Py Spatial AI Spatial Pipeline.dsnb   |    1 +
 ... AI SpatialDataFrame to OML DataFrame.dsnb |    1 +
 .../python/OML4Py Statistical Functions.dsnb  |    1 +
 .../python/OML4Py Text Mining SVM.dsnb        |    1 +
 .../python/OML4Py Time Series ESM.dsnb        |    1 +
 .../python/README.md                          |    2 +-
 .../notebooks-oml/r/OML Run-me-first.dsnb     |    1 +
 ...Party Packages - Environment Creation.dsnb | 2512 +++++++++++++++++
 ...-Party Packages - R Environment Usage.dsnb | 1456 ++++++++++
 .../notebooks-oml/r/OML4R -0- Tour.dsnb       |    1 +
 .../r/OML4R -1- Introduction.dsnb             |    1 +
 ...R -2- Data Selection and Manipulation.dsnb |    1 +
 ...R -3- Datastore and Script Repository.dsnb |    1 +
 .../r/OML4R -4- Embedded R Execution.dsnb     |    1 +
 .../r/OML4R Anomaly Detection SVM.dsnb        |    1 +
 .../r/OML4R Association Rules Apriori.dsnb    |    1 +
 .../r/OML4R Attribute Importance MDL.dsnb     |    1 +
 .../r/OML4R Classification DT.dsnb            |    1 +
 .../r/OML4R Classification GLM.dsnb           |    1 +
 .../r/OML4R Classification NB.dsnb            |    1 +
 .../r/OML4R Classification NN.dsnb            |    1 +
 .../r/OML4R Classification RF.dsnb            |    1 +
 .../r/OML4R Classification SVM.dsnb           |    1 +
 .../notebooks-oml/r/OML4R Clustering EM.dsnb  |    1 +
 .../notebooks-oml/r/OML4R Clustering KM.dsnb  | 1145 ++++++++
 .../r/OML4R Clustering OC.dsnb                |    2 +-
 ...OML4R Data Cleaning Duplicate Removal.dsnb |    1 +
 .../r/OML4R Data Cleaning Missing Data.dsnb   |    1 +
 .../OML4R Data Cleaning Outlier Removal.dsnb  |    1 +
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 +
 .../r/OML4R Data Transformation Binning.dsnb  |    1 +
 ...ata Transformation Categorical Recode.dsnb |    1 +
 ...4R Data Transformation Date Datatypes.dsnb |    1 +
 ...nsformation Normalization and Scaling.dsnb |    1 +
 ... Data Transformation One-Hot Encoding.dsnb |    1 +
 .../r/OML4R Dataset Creation.dsnb             |    1 +
 ...OML4R Feature Engineering Aggregation.dsnb |    1 +
 ...L4R Feature Extraction ESA Wiki Model.dsnb |    1 +
 .../r/OML4R Feature Extraction ESA.dsnb       |    1 +
 .../r/OML4R Feature Extraction SVD.dsnb       |    1 +
 ...L4R Feature Selection Algorithm-based.dsnb |    1 +
 ...re Selection Using Summary Statistics.dsnb |    1 +
 .../r/OML4R Partitioned Model SVM.dsnb        |    1 +
 .../notebooks-oml/r/OML4R REST API.dsnb       |    1 +
 .../notebooks-oml/r/OML4R Regression GLM.dsnb |    1 +
 .../notebooks-oml/r/OML4R Regression NN.dsnb  |    1 +
 .../notebooks-oml/r/OML4R Regression SVM.dsnb |    1 +
 .../r/OML4R Statistical Functions.dsnb        |    1 +
 .../r/OML4R Text Mining SVM.dsnb              |    1 +
 .../r/OML4R Time Series ESM.dsnb              |    1 +
 .../{notebooks => notebooks-oml}/r/README.md  |    2 +-
 .../rest/OML Batch Scoring REST.dsnb          |    1 +
 .../rest/OML Data Bias Detector REST.dsnb     |    1 +
 .../rest/OML Data Drift Monitoring REST.dsnb  |    1 +
 .../rest/OML Data Monitoring REST.dsnb        |    1 +
 .../rest/OML Model Drift Monitoring REST.dsnb |    0
 .../rest/OML Model Monitoring REST.dsnb       |    1 +
 .../rest/OML Monitoring Model Drift REST.dsnb |    1 +
 .../rest/README.md                            |    2 +-
 ...L Export and Import Serialized Models.dsnb |    1 +
 ... Vector Search - Sentence Transformer.dsnb |    1 +
 .../sql/OML Import Wiki ESA Model.dsnb        |    1 +
 .../notebooks-oml/sql/OML Run-me-first.dsnb   |    1 +
 ... AI SpatialDataFrame to OML DataFrame.dsnb |    1 +
 .../sql/OML4SQL Anomaly Detection EM.dsnb     |    1 +
 .../sql/OML4SQL Anomaly Detection MSET.dsnb   |    1 +
 .../sql/OML4SQL Anomaly Detection SVM.dsnb    |    1 +
 .../OML4SQL Association Rules Apriori.dsnb    |    1 +
 .../sql/OML4SQL Attribute Importance MDL.dsnb |    1 +
 .../sql/OML4SQL Classification DT.dsnb        |    1 +
 .../sql/OML4SQL Classification GLM.dsnb       |    1 +
 .../sql/OML4SQL Classification NB.dsnb        |    1 +
 .../sql/OML4SQL Classification NN.dsnb        |    1 +
 .../sql/OML4SQL Classification RF.dsnb        |    1 +
 .../sql/OML4SQL Classification SVM.dsnb       |    1 +
 .../sql/OML4SQL Classification XGBoost.dsnb   |    1 +
 .../sql/OML4SQL Clustering EM.dsnb            | 1521 ++++++++++
 .../sql/OML4SQL Clustering KM.dsnb            |    1 +
 .../sql/OML4SQL Clustering OC.dsnb            |    1 +
 ...4SQL Data Cleaning Duplicates Removal.dsnb |    1 +
 .../OML4SQL Data Cleaning Missing Data.dsnb   |    1 +
 ...OML4SQL Data Cleaning Outlier Removal.dsnb |    1 +
 ...L4SQL Data Cleaning Outlier Removal_1.dsnb |    1 +
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 +
 .../OML4SQL Data Transformation Binning.dsnb  |    2 +-
 ...L4SQL Data Transformation Categorical.dsnb |    1 +
 ...ransformation Normalization and Scale.dsnb |    1 +
 ...ture Engineering Aggregation and Time.dsnb |    1 +
 ...SQL Feature Extraction ESA Wiki Model.dsnb |    1 +
 .../sql/OML4SQL Feature Extraction NMF.dsnb   |    1 +
 .../sql/OML4SQL Feature Extraction SVD.dsnb   |    1 +
 ...SQL Feature Selection Algorithm Based.dsnb |    1 +
 ...ion Unsupervised Attribute Importance.dsnb |    1 +
 ...re Selection Using Summary Statistics.dsnb |    1 +
 .../sql/OML4SQL Nested Columns.dsnb           |    1 +
 .../sql/OML4SQL Partitioned Model SVM.dsnb    |    1 +
 ...L Procedure for Importing Data to ADB.dsnb |    1 +
 .../sql/OML4SQL Regression GLM.dsnb           |    1 +
 .../sql/OML4SQL Regression NN.dsnb            |    1 +
 .../sql/OML4SQL Regression SVM.dsnb           |    1 +
 .../sql/OML4SQL Regression XGBoost.dsnb       |    1 +
 .../sql/OML4SQL Statistical Functions.dsnb    |    1 +
 .../sql/OML4SQL Text Mining SVM.dsnb          |    1 +
 .../sql/OML4SQL Time Series ESM.dsnb          |    1 +
 ...me Series Regression ESM plus GLM XGB.dsnb |    1 +
 .../sql/README.md                             |    0
 ...y -2- Data Selection and Manipulation.dsnb |    1 -
 .../OML4Py -4- Embedded Python Execution.dsnb |    1 -
 .../notebooks/python/OML4Py -5- AutoML.dsnb   |    1 -
 .../OML4Py Association Rules Apriori.dsnb     |    1 -
 .../OML4Py Attribute Importance MDL.dsnb      |    1 -
 .../python/OML4Py Classification SVM.dsnb     |    1 -
 .../python/OML4Py Clustering EM.dsnb          |    1 -
 .../python/OML4Py Clustering KM.dsnb          |    1 -
 ...L4Py Data Cleaning Duplicates Removal.dsnb |    1 -
 .../OML4Py Data Cleaning Missing Data.dsnb    |    1 -
 .../OML4Py Data Cleaning Outlier Removal.dsnb |    1 -
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 -
 .../OML4Py Data Transformation Binning.dsnb   |    1 -
 ...ML4Py Data Transformation Categorical.dsnb |    1 -
 ...Py Data Transformation Date Datatypes.dsnb |    1 -
 ...nsformation Normalization and Scaling.dsnb |    1 -
 ... Data Transformation One Hot Encoding.dsnb |    1 -
 .../python/OML4Py Dataset Creation.dsnb       |    1 -
 ...ML4Py Feature Engineering Aggregation.dsnb |    1 -
 ...4Py Feature Extraction ESA Wiki Model.dsnb |    1 -
 .../python/OML4Py Feature Extraction ESA.dsnb |    1 -
 ...4Py Feature Selection Algorithm-based.dsnb |    1 -
 ...re Selection Using Summary Statistics.dsnb |    1 -
 .../OML4Py Importing Wide Datasets.dsnb       |    1 -
 .../python/OML4Py Partitioned Model SVM.dsnb  |    1 -
 .../notebooks/python/OML4Py REST API.dsnb     |    1 -
 .../python/OML4Py Regression GLM.dsnb         |    1 -
 .../python/OML4Py Regression NN.dsnb          |    1 -
 .../python/OML4Py Regression SVM.dsnb         |    1 -
 .../python/OML4Py Time Series ESM.dsnb        |    1 -
 ...R -2- Data Selection and Manipulation.dsnb |    1 -
 ...R -3- Datastore and Script Repository.dsnb |    1 -
 .../r/OML4R -4- Embedded R Execution.dsnb     |    1 -
 .../r/OML4R Anomaly Detection SVM.dsnb        |    1 -
 .../r/OML4R Association Rules Apriori.dsnb    |    1 -
 .../r/OML4R Attribute Importance MDL.dsnb     |    1 -
 .../notebooks/r/OML4R Classification DT.dsnb  |    1 -
 .../notebooks/r/OML4R Classification GLM.dsnb |    1 -
 .../notebooks/r/OML4R Classification NB.dsnb  |    1 -
 .../notebooks/r/OML4R Classification NN.dsnb  |    1 -
 .../notebooks/r/OML4R Classification RF.dsnb  |    1 -
 .../notebooks/r/OML4R Classification SVM.dsnb |    1 -
 .../notebooks/r/OML4R Clustering EM.dsnb      |    1 -
 .../notebooks/r/OML4R Clustering KM.dsnb      |    1 -
 ...OML4R Data Cleaning Duplicate Removal.dsnb |    1 -
 .../r/OML4R Data Cleaning Missing Data.dsnb   |    1 -
 .../OML4R Data Cleaning Outlier Removal.dsnb  |    1 -
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 -
 .../r/OML4R Data Transformation Binning.dsnb  |    1 -
 ...ata Transformation Categorical Recode.dsnb |    1 -
 ...4R Data Transformation Date Datatypes.dsnb |    1 -
 ...nsformation Normalization and Scaling.dsnb |    1 -
 ... Data Transformation One-Hot Encoding.dsnb |    1 -
 .../notebooks/r/OML4R Dataset Creation.dsnb   |    1 -
 ...OML4R Feature Engineering Aggregation.dsnb |    1 -
 ...L4R Feature Extraction ESA Wiki Model.dsnb |    1 -
 .../r/OML4R Feature Extraction ESA.dsnb       |    1 -
 .../r/OML4R Feature Extraction SVD.dsnb       |    1 -
 ...L4R Feature Selection Algorithm-based.dsnb |    1 -
 ...re Selection Using Summary Statistics.dsnb |    1 -
 .../r/OML4R Partitioned Model SVM.dsnb        |    1 -
 .../notebooks/r/OML4R REST API.dsnb           |    1 -
 .../notebooks/r/OML4R Regression GLM.dsnb     |    1 -
 .../notebooks/r/OML4R Regression NN.dsnb      |    1 -
 .../notebooks/r/OML4R Regression SVM.dsnb     |    1 -
 .../r/OML4R Statistical Functions.dsnb        |    1 -
 .../notebooks/r/OML4R Time Series ESM.dsnb    |    1 -
 .../rest/OML Batch Scoring REST.dsnb          |    1 -
 .../rest/OML Data Drift Monitoring REST.dsnb  |    1 -
 .../sql/OML4SQL Anomaly Detection EM.dsnb     |    1 -
 .../sql/OML4SQL Anomaly Detection SVM.dsnb    |    1 -
 .../OML4SQL Association Rules Apriori.dsnb    |    1 -
 .../sql/OML4SQL Attribute Importance MDL.dsnb |    1 -
 .../sql/OML4SQL Classification DT.dsnb        |    1 -
 .../sql/OML4SQL Classification GLM.dsnb       |    1 -
 .../sql/OML4SQL Classification NB.dsnb        |    1 -
 .../sql/OML4SQL Classification NN.dsnb        |    1 -
 .../sql/OML4SQL Classification RF.dsnb        |    1 -
 .../sql/OML4SQL Classification SVM.dsnb       |    1 -
 .../notebooks/sql/OML4SQL Clustering EM.dsnb  |    1 -
 .../notebooks/sql/OML4SQL Clustering KM.dsnb  |    1 -
 .../notebooks/sql/OML4SQL Clustering OC.dsnb  |    1 -
 ...4SQL Data Cleaning Duplicates Removal.dsnb |    1 -
 .../OML4SQL Data Cleaning Missing Data.dsnb   |    1 -
 ...OML4SQL Data Cleaning Outlier Removal.dsnb |    1 -
 ...ata Cleaning Recode Synonymous Values.dsnb |    1 -
 ...L4SQL Data Transformation Categorical.dsnb |    1 -
 ...ransformation Normalization and Scale.dsnb |    1 -
 ...ture Engineering Aggregation and Time.dsnb |    1 -
 ...SQL Feature Extraction ESA Wiki Model.dsnb |    1 -
 .../sql/OML4SQL Feature Extraction SVD.dsnb   |    1 -
 ...SQL Feature Selection Algorithm Based.dsnb |    1 -
 ...ion Unsupervised Attribute Importance.dsnb |    1 -
 ...re Selection Using Summary Statistics.dsnb |    1 -
 .../notebooks/sql/OML4SQL Nested Columns.dsnb |    1 -
 .../sql/OML4SQL Partitioned Model SVM.dsnb    |    1 -
 ...L Procedure for Importing Data to ADB.dsnb |    1 -
 .../notebooks/sql/OML4SQL Regression GLM.dsnb |    1 -
 .../notebooks/sql/OML4SQL Regression NN.dsnb  |    1 -
 .../notebooks/sql/OML4SQL Regression SVM.dsnb |    1 -
 .../sql/OML4SQL Statistical Functions.dsnb    |    1 -
 .../sql/OML4SQL Time Series ESM.dsnb          |    1 -
 ...me Series Regression ESM plus GLM XGB.dsnb |    1 -
 machine-learning/sql/26ai/README.md           |   59 +
 machine-learning/sql/26ai/dmsh.sql            |  150 +
 machine-learning/sql/26ai/dmshgrants.sql      |   54 +
 .../oml4sql-anomaly-detection-1class-svm.sql  |  228 ++
 .../sql/26ai/oml4sql-anomaly-detection-em.sql |  159 ++
 .../sql/26ai/oml4sql-association-rules.sql    |  433 +++
 .../sql/26ai/oml4sql-attribute-importance.sql |   55 +
 .../oml4sql-classification-decision-tree.sql  |  350 +++
 .../sql/26ai/oml4sql-classification-glm.sql   |  523 ++++
 .../oml4sql-classification-naive-bayes.sql    |  605 ++++
 ...oml4sql-classification-neural-networks.sql |  439 +++
 .../oml4sql-classification-random-forest.sql  |  332 +++
 ...4sql-classification-regression-xgboost.sql |  357 +++
 .../sql/26ai/oml4sql-classification-svm.sql   |  470 +++
 ...oml4sql-classification-text-mining-svm.sql |  167 ++
 ...ql-clustering-expectation-maximization.sql |  335 +++
 .../oml4sql-clustering-kmeans-star-schema.sql |  223 ++
 .../sql/26ai/oml4sql-clustering-kmeans.sql    |  336 +++
 .../sql/26ai/oml4sql-clustering-ocluster.sql  |  307 ++
 ...oml4sql-cross-validation-decision-tree.sql |  223 ++
 .../26ai/oml4sql-feature-extraction-cur.sql   |  127 +
 .../26ai/oml4sql-feature-extraction-nmf.sql   |  178 ++
 .../26ai/oml4sql-feature-extraction-svd.sql   |  341 +++
 ...sql-feature-extraction-text-mining-esa.sql |  509 ++++
 ...sql-feature-extraction-text-mining-nmf.sql |  178 ++
 ...eature-extraction-text-term-extraction.sql |  197 ++
 .../26ai/oml4sql-partitioned-models-svm.sql   |  273 ++
 ...ql-r-extensible-algorithm-registration.sql |  265 ++
 ...oml4sql-r-extensible-association-rules.sql |  239 ++
 ...extensible-attribute-importance-via-rf.sql |  126 +
 .../sql/26ai/oml4sql-r-extensible-glm.sql     |  382 +++
 .../sql/26ai/oml4sql-r-extensible-kmeans.sql  |  338 +++
 ...4sql-r-extensible-principal-components.sql |  316 +++
 ...-extensible-regression-neural-networks.sql |  347 +++
 .../oml4sql-r-extensible-regression-tree.sql  |  357 +++
 .../sql/26ai/oml4sql-regression-glm.sql       |  438 +++
 .../oml4sql-regression-neural-networks.sql    |  223 ++
 .../26ai/oml4sql-regression-random-forest.sql |  391 +++
 .../sql/26ai/oml4sql-regression-svm.sql       |  237 ++
 .../oml4sql-singular-value-decomposition.sql  |  341 +++
 .../oml4sql-survival-analysis-xgboost.sql     |  451 +++
 ...4sql-time-series-esm-auto-model-search.sql |   88 +
 ...4sql-time-series-exponential-smoothing.sql |  112 +
 .../sql/26ai/oml4sql-time-series-mset.sql     |  180 ++
 ...oml4sql-time-series-regression-dataset.sql | 1889 +++++++++++++
 .../26ai/oml4sql-time-series-regression.sql   |  262 ++
 476 files changed, 28433 insertions(+), 215 deletions(-)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML Run-me-first.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML Third-Party Packages - Environment Creation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML Third-Party Packages - Python Environment Usage .json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -0- Tour.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -1- Introduction.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -2- Data Selection and Manipulation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -3- Datastore and Script Repository.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -4- Embedded Python Execution.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py -5- AutoML.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Anomaly Detection SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Association Rules Apriori.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Attribute Importance MDL.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification DT.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification GLM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification NB.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification NN.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification RF.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Classification SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Clustering EM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Clustering KM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Cleaning Duplicates Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Cleaning Missing Data.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Cleaning Outlier Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Cleaning Recode Synonymous Values.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Transformation Binning.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Transformation Categorical.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Transformation Normalization and Scaling.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Data Transformation One Hot Encoding.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Dataset Creation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Date and Time Classes.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Engineering Aggregation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Extraction ESA Wiki Model.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Extraction ESA.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Extraction SVD.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Selection Algorithm-based.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Feature Selection Using Summary Statistics.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Importing Wide Datasets.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Partitioned Model SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py REST API.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Regression GLM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Regression NN.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Regression SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Statistical Functions.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Text Mining SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OML4Py Time Series ESM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OfficeHours_OML4Py_Cross_Validation_AUC_ML_101.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OfficeHours_OML4Py_Embedded_execution_using_third_party_packages.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OfficeHours_OML4Py_Weight_of_Evidence_ML_101.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OfficeHours_OML_Datastore_for_R_and_Python.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/OfficeHours_Python_Deploy_an_XGBoost_model_in_OML_Services.ipynb (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/python/README.md (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML Import Wiki ESA Model.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML Run-me-first.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML Third-Party Packages - Environment Creation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML Third-Party Packages - R Environment Usage.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R -1- Introduction.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R -2- Data Selection and Manipulation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R -3- Datastore and Script Repository.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R -4- Embedded R Execution.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Anomaly Detection SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Association Rules Apriori.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Attribute Importance MDL.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Classification DT.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Classification GLM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Classification NB.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Classification RF.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Classification SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Clustering EM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Clustering KM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Clustering OC.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Cleaning Duplicate Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Cleaning Missing Data.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Cleaning Outlier Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Cleaning Recode Synonymous Values.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Transformation Binning.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Transformation Categorical Recode.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Transformation Date Datatypes.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Transformation Normalization and Scaling.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Data Transformation One-Hot Encoding.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Dataset Creation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Engineering Aggregation.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Extraction ESA Wiki Model.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Extraction ESA.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Extraction SVD.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Selection Algorithm-based.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Feature Selection Using Summary Statistics.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Partitioned Model SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R REST API.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Regression GLM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Regression NN .json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Regression SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Statistical Functions .json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OML4R Text Mining SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OfficeHours_OML4R_Demonstration.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/OfficeHours_OML_Datastore_for_R_and_Python.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/r/README.md (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/Credit_Scoring_100K_SQL_Create_Table.sql (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML Export and Import Serialized Models.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL 21c or 23c Anomaly Detection MSET.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL 21c or 23c Classification XGBoost.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL 21c or 23c Regression XGBoost.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Anomaly Detection SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Association Rules Apriori.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Attribute Importance MDL.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification DT.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification GLM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification NB.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification NN.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification RF.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Classification SVM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Clustering EM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Clustering KM.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Clustering OC.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Cleaning Duplicates Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Cleaning Missing Data.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Cleaning Outlier Removal.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Cleaning Recode Synonymous Values.json (100%)
 rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Transformation Binning.json (100%)
 rename
machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Transformation Categorical.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Data Transformation Normalization and Scale.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Exporting Serialized Models.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Engineering Aggregation and Time.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Extraction ESA Wiki Model.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Extraction NMF.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Extraction SVD.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Selection Algorithm Based.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Feature Selection Using Summary Statistics.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Nested Columns.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Partitioned Model SVM.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Procedure for Importing Data to ADB.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Regression GLM.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Regression NN.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Regression SVM.json (100%) rename 
machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Statistical Functions.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Text Mining SVM.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL Time Series ESM.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Automated_Text_Mining_Example.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Credit_Score_Predictions.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Insurance_Claims_Fraud.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_My_First_Notebook.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Picking_a_Good_Wine_for_20_dollars_with_ADW_OML.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Predicting_Customer_Lifetime_Value_LTV.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_SQLFORMAT_and_Forms_Examples.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_10k.json (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_Desktop_Viz_companion.dva (100%) rename machine-learning/{notebooks => notebooks-oml}/notebooks-classic/sql/README.md (100%) create mode 100644 machine-learning/notebooks-oml/python/OML Run-me-first.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML Third-Party Packages - Environment Creation.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML Third-Party Packages - Python Environment Usage.dsnb create mode 
100644 machine-learning/notebooks-oml/python/OML4Py -0- Tour.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py -1- Introduction.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py -2- Data Selection and Manipulation.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py -3- Datastore and Script Repository.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py -4- Embedded Python Execution.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py -5- AutoML.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Anomaly Detection SVM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Association Rules Apriori.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Attribute Importance MDL.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification DT.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification GLM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification NB.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification NN.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification RF.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Classification SVM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Clustering EM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Clustering KM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Cleaning Duplicates Removal.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Cleaning Missing Data.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Cleaning Outlier Removal.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Cleaning Recode Synonymous Values.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Transformation 
Binning.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Transformation Categorical.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Transformation Date Datatypes.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Transformation Normalization and Scaling.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Data Transformation One Hot Encoding.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Dataset Creation.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Engineering Aggregation.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Extraction ESA Wiki Model.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Extraction ESA.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Extraction NMF.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Extraction SVD.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Selection Algorithm-based.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Feature Selection Using Summary Statistics.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Importing Wide Datasets.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Partitioned Model SVM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py REST API.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Regression GLM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Regression NN.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Regression SVM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py SQL API for Embedded Python Execution.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Agglomerative Clustering and Regionalization.dsnb create mode 100644 
machine-learning/notebooks-oml/python/OML4Py Spatial AI Categorical Lag Transformer.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI DBSCAN Accidents.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Embedded Execution for SQL.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Exploratory Analysis.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI GWR Regressor.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Geographical Classifier.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Geographical Regressor.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Hotspot Clustering.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI KMeans Clustering.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI LOF Accidents.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI OLS Regressor.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI PAR Object Store.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Run Me First.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI SLX Classifier.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Save Load Run.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Scoord Transformer.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Fixed Effects Regressor.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Imputer.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Lag Transformer.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Lag and Error Regressors.dsnb create mode 
100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Operations.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI Spatial Pipeline.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Spatial AI SpatialDataFrame to OML DataFrame.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Statistical Functions.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Text Mining SVM.dsnb create mode 100644 machine-learning/notebooks-oml/python/OML4Py Time Series ESM.dsnb rename machine-learning/{notebooks => notebooks-oml}/python/README.md (97%) create mode 100644 machine-learning/notebooks-oml/r/OML Run-me-first.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML Third-Party Packages - Environment Creation.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML Third-Party Packages - R Environment Usage.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R -0- Tour.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R -1- Introduction.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R -2- Data Selection and Manipulation.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R -3- Datastore and Script Repository.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R -4- Embedded R Execution.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Anomaly Detection SVM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Association Rules Apriori.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Attribute Importance MDL.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Classification DT.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Classification GLM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Classification NB.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Classification NN.dsnb create mode 100644 
machine-learning/notebooks-oml/r/OML4R Classification RF.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Classification SVM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Clustering EM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Clustering KM.dsnb rename machine-learning/{notebooks => notebooks-oml}/r/OML4R Clustering OC.dsnb (89%) mode change 100755 => 100644 create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Cleaning Duplicate Removal.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Cleaning Missing Data.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Cleaning Outlier Removal.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Cleaning Recode Synonymous Values.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Transformation Binning.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Transformation Categorical Recode.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Transformation Date Datatypes.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Transformation Normalization and Scaling.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Data Transformation One-Hot Encoding.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Dataset Creation.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Engineering Aggregation.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Extraction ESA Wiki Model.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Extraction ESA.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Extraction SVD.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Selection Algorithm-based.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Feature Selection Using Summary Statistics.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R 
Partitioned Model SVM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R REST API.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Regression GLM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Regression NN.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Regression SVM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Statistical Functions.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Text Mining SVM.dsnb create mode 100644 machine-learning/notebooks-oml/r/OML4R Time Series ESM.dsnb rename machine-learning/{notebooks => notebooks-oml}/r/README.md (97%) create mode 100644 machine-learning/notebooks-oml/rest/OML Batch Scoring REST.dsnb create mode 100644 machine-learning/notebooks-oml/rest/OML Data Bias Detector REST.dsnb create mode 100644 machine-learning/notebooks-oml/rest/OML Data Drift Monitoring REST.dsnb create mode 100644 machine-learning/notebooks-oml/rest/OML Data Monitoring REST.dsnb rename machine-learning/{notebooks => notebooks-oml}/rest/OML Model Drift Monitoring REST.dsnb (100%) mode change 100755 => 100644 create mode 100644 machine-learning/notebooks-oml/rest/OML Model Monitoring REST.dsnb create mode 100644 machine-learning/notebooks-oml/rest/OML Monitoring Model Drift REST.dsnb rename machine-learning/{notebooks => notebooks-oml}/rest/README.md (95%) create mode 100644 machine-learning/notebooks-oml/sql/OML Export and Import Serialized Models.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML GPU Vector Search - Sentence Transformer.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML Import Wiki ESA Model.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML Run-me-first.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4Py Spatial AI SpatialDataFrame to OML DataFrame.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Anomaly Detection EM.dsnb create mode 100644 
machine-learning/notebooks-oml/sql/OML4SQL Anomaly Detection MSET.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Anomaly Detection SVM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Association Rules Apriori.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Attribute Importance MDL.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification DT.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification GLM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification NB.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification NN.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification RF.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification SVM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Classification XGBoost.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Clustering EM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Clustering KM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Clustering OC.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Cleaning Duplicates Removal.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Cleaning Missing Data.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Cleaning Outlier Removal.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Cleaning Outlier Removal_1.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Cleaning Recode Synonymous Values.dsnb rename machine-learning/{notebooks => notebooks-oml}/sql/OML4SQL Data Transformation Binning.dsnb (59%) mode change 100755 => 100644 create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Transformation Categorical.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Data Transformation 
Normalization and Scale.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Engineering Aggregation and Time.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Extraction ESA Wiki Model.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Extraction NMF.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Extraction SVD.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Selection Algorithm Based.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Feature Selection Using Summary Statistics.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Nested Columns.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Partitioned Model SVM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Procedure for Importing Data to ADB.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Regression GLM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Regression NN.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Regression SVM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Regression XGBoost.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Statistical Functions.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Text Mining SVM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Time Series ESM.dsnb create mode 100644 machine-learning/notebooks-oml/sql/OML4SQL Time Series Regression ESM plus GLM XGB.dsnb rename machine-learning/{notebooks => notebooks-oml}/sql/README.md (100%) delete mode 100755 machine-learning/notebooks/python/OML4Py -2- Data Selection and Manipulation.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py -4- Embedded Python Execution.dsnb delete mode 
100755 machine-learning/notebooks/python/OML4Py -5- AutoML.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Association Rules Apriori.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Attribute Importance MDL.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Classification SVM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Clustering EM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Clustering KM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Cleaning Duplicates Removal.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Cleaning Missing Data.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Cleaning Outlier Removal.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Cleaning Recode Synonymous Values.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Transformation Binning.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Transformation Categorical.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Transformation Date Datatypes.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Transformation Normalization and Scaling.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Data Transformation One Hot Encoding.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Dataset Creation.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Feature Engineering Aggregation.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Feature Extraction ESA Wiki Model.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Feature Extraction ESA.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Feature Selection Algorithm-based.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Feature Selection Using Summary Statistics.dsnb delete mode 100755 
machine-learning/notebooks/python/OML4Py Importing Wide Datasets.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Partitioned Model SVM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py REST API.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Regression GLM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Regression NN.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Regression SVM.dsnb delete mode 100755 machine-learning/notebooks/python/OML4Py Time Series ESM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R -2- Data Selection and Manipulation.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R -3- Datastore and Script Repository.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R -4- Embedded R Execution.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Anomaly Detection SVM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Association Rules Apriori.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Attribute Importance MDL.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification DT.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification GLM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification NB.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification NN.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification RF.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Classification SVM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Clustering EM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Clustering KM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Cleaning Duplicate Removal.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Cleaning Missing Data.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Cleaning Outlier Removal.dsnb delete mode 100755 
machine-learning/notebooks/r/OML4R Data Cleaning Recode Synonymous Values.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Transformation Binning.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Transformation Categorical Recode.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Transformation Date Datatypes.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Transformation Normalization and Scaling.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Data Transformation One-Hot Encoding.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Dataset Creation.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Engineering Aggregation.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Extraction ESA Wiki Model.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Extraction ESA.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Extraction SVD.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Selection Algorithm-based.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Feature Selection Using Summary Statistics.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Partitioned Model SVM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R REST API.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Regression GLM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Regression NN.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Regression SVM.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Statistical Functions.dsnb delete mode 100755 machine-learning/notebooks/r/OML4R Time Series ESM.dsnb delete mode 100755 machine-learning/notebooks/rest/OML Batch Scoring REST.dsnb delete mode 100755 machine-learning/notebooks/rest/OML Data Drift Monitoring REST.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Anomaly Detection EM.dsnb delete mode 100755 
machine-learning/notebooks/sql/OML4SQL Anomaly Detection SVM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Association Rules Apriori.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Attribute Importance MDL.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification DT.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification GLM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification NB.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification NN.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification RF.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Classification SVM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Clustering EM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Clustering KM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Clustering OC.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Cleaning Duplicates Removal.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Cleaning Missing Data.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Cleaning Outlier Removal.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Cleaning Recode Synonymous Values.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Transformation Categorical.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Data Transformation Normalization and Scale.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature Engineering Aggregation and Time.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature Extraction ESA Wiki Model.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature Extraction SVD.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature Selection Algorithm Based.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature 
Selection Unsupervised Attribute Importance.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Feature Selection Using Summary Statistics.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Nested Columns.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Partitioned Model SVM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Procedure for Importing Data to ADB.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Regression GLM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Regression NN.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Regression SVM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Statistical Functions.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Time Series ESM.dsnb delete mode 100755 machine-learning/notebooks/sql/OML4SQL Time Series Regression ESM plus GLM XGB.dsnb create mode 100644 machine-learning/sql/26ai/README.md create mode 100644 machine-learning/sql/26ai/dmsh.sql create mode 100644 machine-learning/sql/26ai/dmshgrants.sql create mode 100644 machine-learning/sql/26ai/oml4sql-anomaly-detection-1class-svm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-anomaly-detection-em.sql create mode 100644 machine-learning/sql/26ai/oml4sql-association-rules.sql create mode 100644 machine-learning/sql/26ai/oml4sql-attribute-importance.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-decision-tree.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-glm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-naive-bayes.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-neural-networks.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-random-forest.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-regression-xgboost.sql create mode 100644 machine-learning/sql/26ai/oml4sql-classification-svm.sql create 
mode 100644 machine-learning/sql/26ai/oml4sql-classification-text-mining-svm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-clustering-expectation-maximization.sql create mode 100644 machine-learning/sql/26ai/oml4sql-clustering-kmeans-star-schema.sql create mode 100644 machine-learning/sql/26ai/oml4sql-clustering-kmeans.sql create mode 100644 machine-learning/sql/26ai/oml4sql-clustering-ocluster.sql create mode 100644 machine-learning/sql/26ai/oml4sql-cross-validation-decision-tree.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-cur.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-nmf.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-svd.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-text-mining-esa.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-text-mining-nmf.sql create mode 100644 machine-learning/sql/26ai/oml4sql-feature-extraction-text-term-extraction.sql create mode 100644 machine-learning/sql/26ai/oml4sql-partitioned-models-svm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-algorithm-registration.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-association-rules.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-attribute-importance-via-rf.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-glm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-kmeans.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-principal-components.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-regression-neural-networks.sql create mode 100644 machine-learning/sql/26ai/oml4sql-r-extensible-regression-tree.sql create mode 100644 machine-learning/sql/26ai/oml4sql-regression-glm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-regression-neural-networks.sql create mode 100644 
machine-learning/sql/26ai/oml4sql-regression-random-forest.sql create mode 100644 machine-learning/sql/26ai/oml4sql-regression-svm.sql create mode 100644 machine-learning/sql/26ai/oml4sql-singular-value-decomposition.sql create mode 100644 machine-learning/sql/26ai/oml4sql-survival-analysis-xgboost.sql create mode 100644 machine-learning/sql/26ai/oml4sql-time-series-esm-auto-model-search.sql create mode 100644 machine-learning/sql/26ai/oml4sql-time-series-exponential-smoothing.sql create mode 100644 machine-learning/sql/26ai/oml4sql-time-series-mset.sql create mode 100644 machine-learning/sql/26ai/oml4sql-time-series-regression-dataset.sql create mode 100644 machine-learning/sql/26ai/oml4sql-time-series-regression.sql diff --git a/machine-learning/labs/oml4py-live-labs/Lab1_Get_Started_with_OML4Py_on_Autonomous_Database.json b/machine-learning/labs/oml4py-live-labs/Lab1_Get_Started_with_OML4Py_on_Autonomous_Database.json index 5e7adf71..a9651ff8 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab1_Get_Started_with_OML4Py_on_Autonomous_Database.json +++ b/machine-learning/labs/oml4py-live-labs/Lab1_Get_Started_with_OML4Py_on_Autonomous_Database.json @@ -1 +1 @@ -{"paragraphs":[{"text":"%md\n## **Initiate a call to the Python interpreter**\nTo run Python commands in a notebook, you must first connect to the Python interpreter. This occurs as a result of running your first `%python` paragraph. To use OML4Py, you must import the `oml` module, which automatically establishes a connection to your database. In an Oracle Machine Learning notebook, you can add multiple paragraphs, and each paragraph can be connected to different interpreters such as SQL or Python. 
This example shows you how to:\n\n* Connect to a Python interpreter to run Python commands in a notebook\n* Import the Python modules—oml, pandas, numpy, and matplotlib\n* Check if the oml module is connected to the database\n\nNote: `z` is a reserved keyword and must not be used as a variable in %python paragraphs in Oracle Machine Learning Notebooks. You will see the `z.show()` function used in the examples to display Python object and proxy object content.\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:30+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"md","editOnDblClick":false},"editorMode":"ace/mode/markdown","editorHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Initiate a call to the Python interpreter

\n

To run Python commands in a notebook, you must first connect to the Python interpreter. This occurs as a result of running your first %python paragraph. To use OML4Py, you must import the oml module, which automatically establishes a connection to your database. In an Oracle Machine Learning notebook, you can add multiple paragraphs, and each paragraph can be connected to different interpreters such as SQL or Python. This example shows you how to:

\n\n

Note: z is a reserved keyword and must not be used as a variable in %python paragraphs in Oracle Machine Learning Notebooks. You will see the z.show() function used in the examples to display Python object and proxy object content.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986446_2142145532","id":"20211001-190306_2044056072","dateCreated":"2021-09-21T20:48:10+0000","dateStarted":"2021-09-22T20:18:31+0000","dateFinished":"2021-09-22T20:18:33+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"focus":true,"$$hashKey":"object:376"},{"text":"%python\n\nimport oml","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:33+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true,"editorHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[]},"interrupted":false,"jobName":"paragraph_1633114986450_1302574953","id":"20211001-190306_576066479","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:34+0000","dateFinished":"2021-09-22T20:18:42+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:377"},{"text":"%md\n## **Verify Connection to the Autonomous Database**\nUsing the default interpreter bindings, OML Notebooks automatically establishes a database connection for the notebook. \n\nTo verify the Python interpreter has established a database connection through the `oml` module, run the command shown below. If the notebook is connected, the command returns `True`. \n\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:42+0000","config":{"tableHide":false,"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Verify Connection to the Autonomous Database

\n

Using the default interpreter bindings, OML Notebooks automatically establishes a database connection for the notebook.

\n

To verify the Python interpreter has established a database connection through the oml module, run the command shown below. If the notebook is connected, the command returns True.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_2092135881","id":"20211001-190306_1271780091","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:43+0000","dateFinished":"2021-09-22T20:18:43+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:378"},{"text":"%python\n\noml.isconnected()","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:43+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"True\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_371292639","id":"20211001-190306_2058125440","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:43+0000","dateFinished":"2021-09-22T20:18:44+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:379"},{"text":"%md\n\n## **View Help Files**\nThe Python help function is used to display the documentation of packages, modules, functions, classes, and keywords. The help function has the following syntax:","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:44+0000","config":{"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

View Help Files

\n

The Python help function is used to display the documentation of packages, modules, functions, classes, and keywords. The help function has the following syntax:

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_563552517","id":"20211001-190306_759056463","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:44+0000","dateFinished":"2021-09-22T20:18:44+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:380"},{"text":"%python\n\nhelp([object])","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:44+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on list object:\n\nclass list(object)\n | list(iterable=(), /)\n | \n | Built-in mutable sequence.\n | \n | If no argument is given, the constructor creates a new empty list.\n | The argument must be an iterable if specified.\n | \n | Methods defined here:\n | \n | __add__(self, value, /)\n | Return self+value.\n | \n | __contains__(self, key, /)\n | Return key in self.\n | \n | __delitem__(self, key, /)\n | Delete self[key].\n | \n | __eq__(self, value, /)\n | Return self==value.\n | \n | __ge__(self, value, /)\n | Return self>=value.\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __getitem__(...)\n | x.__getitem__(y) <==> x[y]\n | \n | __gt__(self, value, /)\n | Return self>value.\n | \n | __iadd__(self, value, /)\n | Implement self+=value.\n | \n | __imul__(self, value, /)\n | Implement self*=value.\n | \n | __init__(self, /, *args, **kwargs)\n | Initialize self. See help(type(self)) for accurate signature.\n | \n | __iter__(self, /)\n | Implement iter(self).\n | \n | __le__(self, value, /)\n | Return self<=value.\n | \n | __len__(self, /)\n | Return len(self).\n | \n | __lt__(self, value, /)\n | Return self\n

For example,

\n\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1567926762","id":"20211001-190306_437824049","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:46+0000","dateFinished":"2021-09-22T20:18:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:382"},{"text":"%python\n\nhelp(oml.create)","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:46+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on cython_function_or_method in module oml.core.methods:\n\ncreate(x, table, oranumber=True, dbtypes=None, append=False)\n Creates a table in Oracle Database from a Python data set.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n table : str\n A name for the table.\n oranumber : bool, True (default)\n If True, use SQL NUMBER for numeric columns. Otherwise, use BINARY_DOUBLE.\n Ignored if ``append`` is True.\n dbtypes : dict mapping str to str or list of str\n A list of SQL types to use on the new table. If a list, its length should\n be equal to the number of columns. If a dict, the keys are the names of the\n columns. Ignored if ``append`` is True.\n append : bool, False (default)\n Indicates whether to append the data to the existing table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. 
For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types,\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with less than 20 rows, all rows are used\n in column type determination. NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n new_table : oml.DataFrame\n A proxy object that represents the newly-created table.\n\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_904721421","id":"20211001-190306_70160754","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:46+0000","dateFinished":"2021-09-22T20:18:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:383"},{"text":"%md\n---\nTo view the help files for `oml` module, type the code below.","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:46+0000","config":{"tableHide":false,"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"
\n

To view the help files for oml module, type the code below.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1816292524","id":"20211001-190306_1536023944","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:47+0000","dateFinished":"2021-09-22T20:18:47+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:384"},{"text":"%python\n\nhelp(oml)","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:47+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on package oml:\n\nNAME\n oml - Oracle Machine Learning for Python\n\nDESCRIPTION\n A component of the Oracle Advanced Analytics Option, Oracle Machine Learning\n for Python makes the open source Python programming language and environment\n ready for enterprise in-database data. Designed for problems involving both\n large and small volumes of data, Oracle Machine Learning for Python integrates\n Python with Oracle Database. Python users can run Python commands and scripts\n for statistical, machine learning, and graphical analyses on data stored in\n Oracle Database. Python users can develop, refine, and deploy Python scripts\n that leverage the parallelism and scalability of Oracle Database to automate\n data analysis. Data analysts and data scientists can run Python modules and\n develop and operationalize Python scripts for machine learning applications\n in one step without having to learn SQL. Oracle Machine Learning for Python\n performs function pushdown for in-database execution of core Python and\n popular Python module functions. 
Being integrated with Oracle Database,\n Oracle Machine Learning for Python can run any Python module via embedded\n Python while the database manages the data served to the Python engines.\n\nPACKAGE CONTENTS\n algo (package)\n automl (package)\n core (package)\n ds (package)\n embed (package)\n graphics (package)\n mlx (package)\n script (package)\n\nCLASSES\n oml.algo.model.odmModel(builtins.object)\n oml.algo.ai.ai\n oml.algo.ar.ar\n oml.algo.dt.dt\n oml.algo.em.em\n oml.algo.esa.esa\n oml.algo.glm.glm\n oml.algo.km.km\n oml.algo.nb.nb\n oml.algo.nn.nn\n oml.algo.rf.rf\n oml.algo.svd.svd\n oml.algo.svm.svm\n oml.core.number._Number(oml.core.series._Series)\n oml.core.float.Float\n oml.core.series._Series(oml.core.vector._Vector)\n oml.core.boolean.Boolean\n oml.core.bytes.Bytes\n oml.core.string.String\n oml.core.vector._Vector(builtins.object)\n oml.core.frame.DataFrame\n \n class Boolean(oml.core.series._Series)\n | Boolean series data class.\n | \n | Represents a single column of 0, 1, and NULL values in Oracle Database.\n | \n | Method resolution order:\n | Boolean\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __and__(self, other)\n | \n | __init__(self)\n | \n | __invert__(self)\n | \n | __or__(self, other)\n | \n | all(self)\n | Checks whether all elements in the Boolean series data object are True.\n | \n | Returns\n | =======\n | all: bool\n | \n | any(self)\n | Checks whether any elements in the Boolean series data object are True.\n | \n | Returns\n | -------\n | any: bool\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of bool and None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data 
object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. 
Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), ``freq``,\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and the sum of them are no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there is any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is not a single nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class Bytes(oml.core.series._Series)\n | Bytes(other, dbtype)\n | \n | Binary series data class.\n | \n | Represents a single column of RAW or BLOB data in Oracle Database.\n | \n | Method resolution order:\n | Bytes\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, other, dbtype)\n | Convert underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.Bytes\n | dbtype : 'raw' or 'blob'\n | \n | len(self)\n | Computes the length of each byte string.\n | \n | Returns\n | -------\n | length : oml.Float\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of bytes and None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. 
Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), and ``freq``\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
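The ``KFold`` contract documented above (hash-based assignment, with each pair consisting of a train set and its complementary test set) can be illustrated with a small pure-Python sketch. The ``kfold`` helper below is hypothetical and is not the OML4Py implementation; it only mirrors the documented behavior that each fold partitions the rows.

```python
# Hypothetical pure-Python sketch of the KFold contract described above:
# rows are assigned to folds by hashing, and each (train, test) pair
# partitions the data, with test = all elements not in train.
import hashlib

def kfold(rows, n_splits=3, seed=12345):
    def bucket(r):
        # Deterministic hash of the seed and row value picks the fold.
        digest = hashlib.md5(f"{seed}:{r}".encode()).hexdigest()
        return int(digest, 16) % n_splits
    folds = []
    for i in range(n_splits):
        test = [r for r in rows if bucket(r) == i]
        train = [r for r in rows if bucket(r) != i]
        folds.append((train, test))
    return folds

rows = list(range(20))
for train, test in kfold(rows):
    assert sorted(train + test) == rows  # each fold partitions the data
```

Because the assignment is a deterministic hash of the seed and the row value, repeated calls with the same seed reproduce the same folds, which is the practical benefit of ``use_hash=True`` noted in the documentation.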
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaNs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each contains the portion of the data given by the corresponding ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns the number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines the current OML data object with the ``other`` data objects column-wise.\n | \n | The current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of a concatenated OML series object is replaced with the str key,\n | and the column names of a concatenated OML DataFrame object are prefixed with it.\n | Specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflicting column names.\n | If True, appends the suffix ``[column_index]`` to duplicated column names.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from the same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class DataFrame(oml.core.vector._Vector)\n | DataFrame(other)\n | \n | Tabular dataframe class.\n | \n | Represents multiple columns of Boolean, Bytes, Float, and/or String data.\n | \n | Method resolution order:\n | DataFrame\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | KFold(self, n_splits=3, seed=12345, strata_cols=None, use_hash=True, hash_cols=None, nvl=None)\n | Splits the oml.DataFrame object randomly into k consecutive folds.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | strata_cols : a list of string values or None (default)\n | Names of the columns used for stratification. If None, stratification\n | is not performed. Must be None when ``use_hash`` is False.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | hash_cols : a list of string values or None (default)\n | If a list of string values, use the values from these named columns\n | to hash to split the data. 
If None, use the values from the first 10\n | columns to hash.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of k 2-tuples of oml.DataFrame objects\n | \n | Raises\n | ------\n | ValueError\n | * If ``hash_cols`` refers to a single LOB column.\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean, str, list of str, 2-tuple\n | * oml.Boolean : select only the rows satisfying the condition. Must be from the same data\n | source as self.\n | * str : select the column of the same name.\n | * list of str : select the columns whose names match the elements in the list.\n | * 2-tuple : The first element in the tuple denotes which rows to select.\n | It can be either an oml.Boolean or ``slice(None)`` (this selects all\n | rows). The second element in the tuple denotes which columns to select.\n | It can be either ``slice(None)`` (this selects all columns), str, list\n | of str, int, or list of int. If int or list of int, selects the\n | column(s) in the corresponding position(s).\n | \n | Returns\n | -------\n | subset : OML data object\n | An oml.DataFrame if it has more than one column; otherwise, an OML series data object.\n | \n | __init__(self, other)\n | Convert OML series data object(s) to oml.DataFrame.\n | \n | Parameters\n | ----------\n | other : OML series data object or dict mapping str to OML series data objects\n | * OML series data object : initializes a single-column oml.DataFrame containing the\n | same data. \n | * dict : initializes an oml.DataFrame that comprises all the OML series data objects\n | in the dict in an arbitrary order. 
Each column in the resulting oml.DataFrame has as\n | its column name its corresponding key in the dict.\n | \n | corr(self, method='pearson', min_periods=1, skipna=True)\n | Computes pairwise correlation between all columns where possible,\n | given the type of coefficient.\n | \n | Parameters\n | ----------\n | method : 'pearson' (default), 'kendall', or 'spearman'\n | * pearson : Uses Pearson's correlation coefficient. Can only calculate\n | correlations between Float or Boolean columns.\n | * kendall : Uses Kendall's tau-b coefficient.\n | * spearman : Uses Spearman's rho coefficient.\n | min_periods : int, optional, 1 (default)\n | The minimum number of observations required per pair of columns to \n | have a valid result.\n | skipna : bool, True (default)\n | If True, NaN and (+/-)Inf values are mapped to NULL.\n | \n | Returns\n | -------\n | y : :py:class:`pandas.DataFrame`\n | \n | count(self, numeric_only=False)\n | Returns the number of elements that are not NULL for each column.\n | \n | Parameters\n | ----------\n | numeric_only : boolean, False (default)\n | Includes only Float and Boolean columns.\n | \n | Returns\n | -------\n | count : :py:class:`pandas.Series`\n | \n | crosstab(self, index, columns=None, values=None, rownames=None, colnames=None, aggfunc=None, margins=False, margins_name='All', dropna=True, normalize=False, pivot=False)\n | Computes a simple cross-tabulation of two or more columns. By default,\n | computes a frequency table for the columns unless a column and\n | an aggregation function have been passed.\n | \n | Parameters\n | ----------\n | index : str or list of str\n | Names of the column(s) of the DataFrame to group by. If ``pivot`` is\n | True, these columns are displayed in the rows of the result table.\n | columns : str or list of str, optional\n | Names of the other column(s) of the DataFrame to group by. 
If ``pivot``\n | is True, these columns are displayed in the columns of the result\n | table.\n | values : str, optional\n | The name of the column to aggregate according to the grouped columns.\n | Requires ``aggfunc`` to be specified.\n | aggfunc : OML DataFrame aggregation function object, optional\n | The supported oml.DataFrame aggregation functions include: count, \n | max, mean, median, min, nunique, std and sum. To use ``aggfunc``, \n | specify the function object using its full name, for example, \n | ``oml.DataFrame.sum``, ``oml.DataFrame.nunique``, and so on.\n | If specified, requires ``values`` to also be specified.\n | rownames : str or list of str, None (default)\n | If specified, must match the number of names in ``index``. If None, names in\n | ``index`` are used. \n | colnames : str or list of str, None (default)\n | If specified, must match the number of strings in ``columns``. If None,\n | names in ``columns`` are used. Ignored if ``pivot`` is True.\n | margins : bool, False (default)\n | Includes row and column margins (subtotals).\n | margins_name : str, 'All' (default)\n | Names of the row and column that contain the totals when ``margins``\n | is True. Should be a value not contained in any of the columns specified\n | by ``index`` and ``columns``. \n | dropna : bool, True (default)\n | If ``pivot`` is True, drops columns from the result\n | table if all the entries of the column are NaN.\n | normalize : boolean, {'all', 'index', 'columns'} or {0, 1}, False (default)\n | Normalizes by dividing the values by their sum.\n | \n | * If 'all' or True, normalizes over all values.\n | * If 'index' or 0, normalizes over each row.\n | * If 'columns' or 1, normalizes over each column.\n | * If ``margins`` is True, also normalizes margin values.\n | pivot : bool, False (default)\n | If True, returns results in pivot table format. 
Else, returns results in\n | relational table format.\n | \n | Returns\n | -------\n | crosstab : oml.DataFrame\n | \n | See Also\n | --------\n | DataFrame.pivot_table\n | \n | cumsum(self, by, ascending=True, na_position='last', skipna=True)\n | Gets the cumulative sum of each ``Float`` or ``Boolean`` column after the\n | ``DataFrame`` object is sorted.\n | \n | Parameters\n | ----------\n | by : str or list of str\n | A single column name or list of column names by which to sort the \n | DataFrame object. Columns in ``by`` do not have to be ``Float`` or \n | ``Boolean``.\n | ascending : bool or list of bool, True (default)\n | If True, sort is in ascending order, otherwise descending. Specify \n | list for multiple sort orders. If this is a list of bools, must match\n | the length of ``by``.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaN and None at the beginning, ``last`` places them \n | at the end.\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | cumsum : oml.DataFrame\n | \n | describe(self, percentiles=None, include=None, exclude=None)\n | Generates descriptive statistics that summarize the central tendency,\n | dispersion, and shape of the data in each column.\n | \n | Parameters\n | ----------\n | percentiles : bool, list-like of numbers, or None (default), optional \n | The percentiles to include in the output for `Float` columns. All\n | must be between 0 and 1. If ``percentiles`` is None or True,\n | ``percentiles`` is set to ``[.25, .5, .75]``, which corresponds\n | to the 25th, 50th, and 75th percentiles. If `percentiles` is False,\n | only ``min`` and ``max`` stats and no other percentiles are\n | included.\n | include : 'all', list-like of OML column types or None (default), optional\n | Types of columns to include in the result. 
Available options:\n | \n | - 'all': Includes all columns.\n | - List of OML column types : Only includes specified types in\n | the results.\n | - None (default) : If ``Float`` columns exist and ``exclude`` is\n | None, only includes ``Float`` columns. Otherwise, includes all\n | columns.\n | exclude : list of OML column types or None (default), optional\n | Types of columns to exclude from the result. Available options:\n | \n | - List of OML column types : Excludes specified types from\n | the results.\n | - None (default) : Result excludes nothing.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.DataFrame`\n | The concatenation of the summary statistics for each column.\n | \n | See Also\n | --------\n | DataFrame.count\n | DataFrame.max\n | DataFrame.min\n | DataFrame.mean\n | DataFrame.std\n | DataFrame.select_types\n | \n | drop(self, columns)\n | Drops specified columns.\n | \n | Parameters\n | ----------\n | columns : str or list of str\n | Columns to drop from the object.\n | \n | Returns\n | -------\n | dropped : oml.DataFrame\n | \n | drop_duplicates(self, subset=None)\n | Removes duplicated rows from oml.DataFrame object.\n | \n | Use ``subset`` to consider a set of rows duplicates if they have\n | identical values for only a subset of the columns. In this case, after\n | deduplication, each of the other columns contains the minimum value\n | found across the set.\n | \n | Parameters\n | ----------\n | subset : str or list of str, optional\n | Columns to consider for identifying duplicates. 
If None, use all\n | columns.\n | \n | Returns\n | -------\n | deduplicated : oml.DataFrame\n | \n | dropna(self, how='any', thresh=None, subset=None)\n | Removes rows containing missing values.\n | \n | Parameters\n | ----------\n | how : {'any', 'all'}, 'any' (default)\n | Determines if a row is removed from the DataFrame when at least one or all\n | values are missing.\n | thresh : int, optional\n | Requires that many missing values to drop a row from the DataFrame.\n | subset : list, optional\n | The names of the columns to check for missing values.\n | \n | Returns\n | -------\n | dropped : oml.DataFrame\n | DataFrame without missing values.\n | \n | kurtosis(self, skipna=True)\n | Returns the sample kurtosis of the values for each ``Float``\n | column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | kurt : :py:class:`pandas.Series`\n | \n | max(self, skipna=True, numeric_only=False)\n | Returns the maximum value in each column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | numeric_only : boolean, False (default)\n | Includes only ``Float`` and ``Boolean`` columns. 
\n | \n | Returns\n | -------\n | max : :py:class:`pandas.Series`\n | \n | mean(self, skipna=True)\n | Returns the mean of the values for each ``Float`` or ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | mean : :py:class:`pandas.Series`\n | \n | median(self, skipna=True)\n | Returns the median of the values for each ``Float`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Exclude NaN values when computing the result\n | \n | Returns\n | -------\n | median : :py:class:`pandas.Series`\n | \n | merge(self, other, on=None, left_on=None, right_on=None, how='left', suffixes=('_l', '_r'), nvl=True)\n | Joins data sets.\n | \n | Parameters\n | ----------\n | other : an OML data set object\n | on : str or list of str, optional\n | Column names to join on. Must be found in both ``self`` and ``other``.\n | left_on : str or list of str, optional\n | Column names of ``self`` to join on.\n | right_on : str or list of str, optional\n | Column names of ``other`` to join on. 
If specified, must have the same\n | number of columns as ``left_on``.\n | how : 'left' (default), 'right', 'inner', 'full'\n | * left : left outer join\n | * right : right outer join\n | * full : full outer join\n | * inner : inner join\n | \n | If ``on`` and ``left_on`` are both None, then ``how`` is ignored,\n | and a cross join is performed.\n | suffixes : sequence of length 2\n | Suffix to apply to column names on the left and right side,\n | respectively.\n | nvl : True (default), False, dict \n | * True : join condition includes NULL value\n | * False : join condition excludes NULL value\n | * dict : specifies the values that join columns use in place of NULL, with column names as keys\n | \n | Returns\n | -------\n | merged : oml.DataFrame\n | \n | min(self, skipna=True, numeric_only=False)\n | Returns the minimum value in each column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | numeric_only : boolean, False (default)\n | Includes only ``Float`` and ``Boolean`` columns\n | \n | Returns\n | -------\n | min : :py:class:`pandas.Series`\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values for each column of the DataFrame.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : pandas.Series\n | \n | pivot_table(self, index, columns=None, values=None, aggfunc=oml.DataFrame.mean, margins=False, dropna=True, margins_name='All')\n | Converts the data set to a spreadsheet-style pivot table. 
Due to the Oracle\n | 1000 column limit, pivot tables with more than 1000 columns are\n | automatically truncated to display the categories with the most entries\n | for each value column.\n | \n | Parameters\n | ----------\n | index : str or list of str\n | Names of columns containing the keys to group by on the pivot table\n | index.\n | columns : str or list of str, optional\n | Names of columns containing the keys to group by on the pivot table\n | columns. \n | values : str or list of str, optional\n | Names of columns to aggregate on. If None, values are inferred \n | as all columns not in ``index`` or ``columns``.\n | aggfunc : OML DataFrame aggregation function or a list of them, oml.DataFrame.mean (default)\n | The supported oml.DataFrame aggregation functions include: count, max,\n | mean, median, min, nunique, std and sum. When using aggregation\n | functions, specify the function object using its full name, for example,\n | ``oml.DataFrame.sum``, ``oml.DataFrame.nunique``, and so on.\n | If ``aggfunc`` contains more than one function, each function is \n | applied to each column in ``values``. If the function does not apply to\n | the type of a column in ``values``, the result table skips applying \n | the function to the particular column. \n | margins : bool, False (default)\n | Include row and column margins (subtotals)\n | dropna : bool, True (default)\n | Unless ``columns`` is None, drop column labels from the result table if\n | all the entries corresponding to the column label are NaN for all\n | aggregations.\n | margins_name : string, 'All' (default)\n | Names of the row and column that contain the totals when ``margins``\n | is True. Should be a value not contained in any of the columns specified\n | by ``index`` and ``columns``. 
\n | \n | Returns\n | -------\n | pivoted : oml.DataFrame\n | \n | See Also\n | --------\n | DataFrame.crosstab\n | \n | pull(self, aslist=False)\n | Pulls data represented by the DataFrame from Oracle Database\n | into an in-memory Python object.\n | \n | Parameters\n | ----------\n | aslist : bool\n | If False, returns a pandas.DataFrame. Otherwise, returns the data\n | as a list of tuples.\n | \n | Returns\n | -------\n | pulled_obj : :py:class:`pandas.DataFrame` or list of tuples\n | \n | rename(self, columns)\n | Renames columns.\n | \n | Parameters\n | ----------\n | columns : dict or list\n | ``dict`` contains old and new column names.\n | ``list`` contains the new names for all the columns in order.\n | \n | Notes\n | -----\n | The method changes the column names of the caller DataFrame object too.\n | \n | Returns\n | -------\n | renamed : DataFrame\n | \n | replace(self, old, new, default=None, columns=None)\n | Replace values given in `old` with `new` in specified columns.\n | \n | Parameters\n | ----------\n | columns : list of str or None (default)\n | Columns to look for values in `old`. If None, then all columns\n | of DataFrame will be replaced.\n | old : list of float, or list of str\n | Specifying the old values. When specified with a list of float, it \n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying \n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. 
Must be set when `old` and `new` contain \n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.DataFrame\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | in the target columns\n | * if `default` is specified with a non-None value which has data type\n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different \n | data types\n | \n | round(self, decimals=0)\n | Rounds oml.Float values in the oml.DataFrame object to \n | the specified decimal place.\n | \n | Parameters\n | ----------\n | decimals : non-negative int\n | \n | Returns\n | -------\n | rounded : oml.DataFrame\n | \n | sample(self, frac=None, n=None, random_state=None)\n | Returns a random sample of rows from an oml.DataFrame object.\n | \n | Parameters\n | ----------\n | frac : a float value\n | Fraction of rows to return. The value should be between 0 and 1.\n | Cannot be used with n.\n | n : an integer value\n | Number of rows to return. Default = 1 if frac = None.\n | Cannot be used with frac. \n | random_state : int or 12345 (default)\n | The seed to use for random sampling.\n | \n | Returns\n | -------\n | sample_data : an oml.DataFrame object\n | Contains the randomly sampled rows from the oml.DataFrame object.\n | The fraction of returned rows is specified by the frac parameter.\n | \n | select_types(self, include=None, exclude=None)\n | Returns a subset of columns, included/excluded based on their OML\n | type.\n | \n | Parameters\n | ----------\n | include, exclude : list of OML column types\n | A selection of OML column types to be included/excluded. At least one of\n | these parameters must be supplied. 
\n | \n | Raises\n | ------\n | ValueError\n | * If both ``include`` and ``exclude`` are None.\n | * If ``include`` and ``exclude`` have overlapping elements.\n | \n | Returns\n | -------\n | subset : oml.DataFrame\n | \n | skew(self, skipna=True)\n | Returns the sample skewness of the values for each ``Float``\n | column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | skew : :py:class:`pandas.Series`\n | \n | sort_values(self, by, ascending=True, na_position='last')\n | Specifies the order in which rows appear in the result set.\n | \n | Parameters\n | ----------\n | by : str or list of str\n | Column names or list of column names.\n | ascending : bool or list of bool, True (default)\n | If True, sort is in ascending order. Sort is in descending order\n | otherwise. Specify list for multiple sort orders. If this is a list of\n | bools, must match the length of ``by``.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them \n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : oml.DataFrame\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, strata_cols=None, use_hash=True, hash_cols=None, nvl=None)\n | Splits the oml.DataFrame object randomly into multiple data sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of data placed in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | strata_cols : a list of string values or None (default)\n | Names of the columns used for stratification. If None, stratification\n | is not performed. Must be None when use_hash is False.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | hash_cols : a list of string values or None (default)\n | If a list of string values, use the values from these named columns\n | to hash to split the data. If None, use the values from the first 10\n | columns to hash.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of oml.DataFrame objects\n | Each contains the portion of data specified by the corresponding ratio.\n | \n | Raises\n | ------\n | ValueError\n | * If ``hash_cols`` refers to a single LOB column.\n | \n | std(self, skipna=True)\n | Returns the sample standard deviation of the values of each ``Float`` or\n | ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | std : :py:class:`pandas.Series`\n | \n | sum(self, skipna=True)\n | Returns the sum of the values of each ``Float`` or ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | sum : :py:class:`pandas.Series`\n | \n | t_dot(self, other=None, skipna=True, pull_from_db=True)\n | Calculates the matrix cross-product of self with other.\n | \n | Equivalent to transposing self first, then multiplying it with other. \n | \n | Parameters\n | ----------\n | other : oml.DataFrame, optional\n | If not specified, self is used.\n | skipna : bool, True (default)\n | Treats NaN entries as 0.\n | pull_from_db : bool, True (default)\n | If True, returns a pandas.DataFrame. 
If False, returns a\n | oml.DataFrame consisting of three columns:\n | \n | - ROWID: the row number of the resulting matrix \n | - COLID: the column number of the resulting matrix \n | - VALUE: the value at the corresponding position of the matrix \n | \n | Returns\n | -------\n | prod : float, :py:class:`pandas.Series`, or :py:class:`pandas.DataFrame`\n | \n | See Also\n | --------\n | oml.Float.dot\n | \n | ----------------------------------------------------------------------\n | Readonly properties defined here:\n | \n | columns\n | The column names of the data set.\n | \n | dtypes\n | The types of the columns of the data set.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __len__(self)\n | Returns number of rows. Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and 
DataFrame objects,\n | the column name of a concatenated OML series object is replaced with str, and\n | column names of a concatenated OML DataFrame object are prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflicting column names.\n | If True, append the suffix ``[column_index]`` to duplicated column names.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not from the same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. 
Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class Float(oml.core.number._Number)\n | Float(other, dbtype=None)\n | \n | Numeric series data class.\n | \n | Represents a single column of NUMBER, BINARY_DOUBLE or BINARY_FLOAT data \n | in Oracle Database.\n | \n | Method 
resolution order:\n | Float\n | oml.core.number._Number\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __abs__(self)\n | Return the absolute value of every element in ``self``.\n | \n | Equivalent to ``abs(self)``.\n | \n | Returns\n | -------\n | absval : oml.Float\n | \n | __add__(self, other)\n | Equivalent to ``self + other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : add the scalar to each element in ``self``. \n | * oml.Float : must come from the same data source. Add corresponding\n | elements in ``self`` and ``other``.\n | \n | Returns\n | -------\n | sum : oml.Float\n | \n | __contains__(self, item)\n | Check whether all elements in ``item`` exist in the Float series.\n | \n | Equivalent to ``item in self``.\n | \n | Parameters\n | ----------\n | item : int/float, list of int/float, oml.Float\n | Values to check for in the series.\n | \n | Returns\n | -------\n | contains : bool\n | Returns `True` if all elements exist, otherwise `False`.\n | \n | __divmod__(self, other)\n | Equivalent to ``divmod(self, other)``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Find the quotient and remainder when each element in ``self`` is\n | divided by the scalar.\n | * oml.Float : must come from the same data source. Find the quotient and\n | remainder when each element in ``self`` is divided by the corresponding element\n | in ``other``.\n | \n | Returns\n | -------\n | divrem : oml.DataFrame\n | The first column contains the floor of the quotient, and the second column\n | contains the remainder.\n | \n | __floordiv__(self, other)\n | Equivalent to ``self // other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : divide each element in ``self`` by the scalar. \n | * oml.Float : must come from the same data source. 
Divide each element in\n | ``self`` by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | quotient : oml.Float\n | \n | __init__(self, other, dbtype=None)\n | Convert to oml.Float, or convert underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.Boolean or oml.Float\n | * oml.Boolean : initialize a oml.Float object that has value 1 (resp. 0)\n | wherever ``other`` has value True (resp. False).\n | * oml.Float : initialize a oml.Float object with the same data as \n | ``other``, except the underlying Oracle Database type has been converted\n | to the one specified by ``dbtype``. \n | dbtype : 'number' or 'binary_double'\n | Ignored if ``other`` is type ``oml.Boolean``. Must be specified if ``other``\n | is type ``oml.Float``.\n | \n | __matmul__(self, other)\n | Equivalent to ``self @ other`` and ``self.dot(other)``.\n | \n | Returns the inner product with an oml.Float. Matrix multiplication with a\n | oml.DataFrame.\n | \n | Parameters\n | ----------\n | other : oml.Float or oml.DataFrame\n | \n | Returns\n | -------\n | matprod : oml.Float\n | \n | See Also\n | --------\n | Float.dot\n | \n | __mod__(self, other)\n | Equivalent to ``self % other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Find the remainder when each element in ``self`` is divided by the\n | scalar.\n | * oml.Float : must come from the same data source. Find the remainder when each\n | element in ``self`` is divided by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | remainder : oml.Float\n | \n | __mul__(self, other)\n | Equivalent to ``self * other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : multiply the scalar with each element in ``self``. \n | * oml.Float : must come from the same data source. 
Multiply corresponding\n | elements in ``self`` and ``other``.\n | \n | Returns\n | -------\n | product : oml.Float\n | \n | __neg__(self)\n | Return the negation of every element in ``self``. Equivalent to ``-self``.\n | \n | Returns\n | -------\n | negation : oml.Float\n | \n | __pow__(self, other)\n | Equivalent to ``self ** other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Raise each element in ``self`` to the power of the scalar. \n | * oml.Float : must come from the same data source. Raise each element in \n | ``self`` to the power of the corresponding element in ``other``.\n | \n | Returns\n | -------\n | power : oml.Float\n | \n | __sub__(self, other)\n | Equivalent to ``self - other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : subtract the scalar from each element in ``self``. \n | * oml.Float : must come from the same data source. From each element in\n | ``self``, subtract the corresponding element in ``other``.\n | \n | Returns\n | -------\n | difference : oml.Float\n | \n | __truediv__(self, other)\n | Equivalent to ``self / other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : divide each element in ``self`` by the scalar. \n | * oml.Float : must come from the same data source. 
Divide each element in\n | ``self`` by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | quotient : oml.Float\n | \n | ceil(self)\n | Returns the ceiling of each element in the Float series data object.\n | \n | Returns\n | -------\n | ceil : oml.Float\n | \n | cut(self, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False)\n | Returns the indices of half-open bins to which each value belongs.\n | \n | Parameters\n | ----------\n | bins : int or strictly monotonically increasing sequence of float/int\n | If int, defines number of equal-width bins in the range of this column.\n | In this case, to include the min and max value, the range is extended by\n | .1% on each side where the bin does not include the endpoint.\n | If a sequence, defines bin edges allowing for non-uniform bin-widths. In \n | this case, the range of x is not extended.\n | right : bool, True (default)\n | Indicates whether the bins include the rightmost edge or the leftmost\n | edge.\n | labels : sequence of unique str, int, or float values, False, or None (default)\n | If a sequence, must be the same length as the resulting number of bins\n | and must have values of same type. If False, bins are sequentially\n | labeled with integers. If None, bins are labeled with the intervals\n | they correspond to.\n | retbins : bool, False (default)\n | Indicates whether to return the bin edges or not. 
\n | precision : int, 3 (default)\n | When ``labels`` is None, determines the precision of the bin labels.\n | include_lowest : bool, False (default) \n | Indicates whether the first interval should be left-inclusive.\n | \n | Returns\n | -------\n | out : oml.Float or oml.String\n | If labels are ints or floats, return oml.Float.\n | If labels are str, return oml.String.\n | bins : :py:class:`numpy.ndarray` of floats\n | Returned only if ``retbins`` is True.\n | \n | describe(self, percentiles=None)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of the OML series data distribution.\n | \n | Parameters\n | ----------\n | percentiles : list-like of numbers, optional \n | The percentiles to include in the output. All must be between 0 and 1.\n | The default is [.25, .5, .75], which corresponds to the inclusion of \n | the 25th, 50th, and 75th percentiles.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``mean``, ``std``,\n | ``min``, ``max``, and the specified ``percentiles``. The 50th\n | percentile is always included.\n | \n | dot(self, other=None, skipna=True)\n | Returns the inner product with an oml.Float. 
Matrix multiplication with a\n | oml.DataFrame.\n | \n | Can be called using self @ other.\n | \n | Parameters\n | ----------\n | other : oml.Float or oml.DataFrame, optional\n | If not specified, self is used.\n | skipna : bool, True (default)\n | Treats NaN entries as 0.\n | \n | Returns\n | -------\n | dot_product : :py:class:`pandas.Series` or float\n | \n | exp(self)\n | Returns element-wise e to the power of values in the Float series data object.\n | \n | Returns\n | -------\n | exp : oml.Float\n | \n | floor(self)\n | Returns the floor of each element in the Float series data object.\n | \n | Returns\n | -------\n | floor : oml.Float\n | \n | isinf(self)\n | Detects infinite values element-wise in the Float series data object.\n | \n | Returns\n | -------\n | isinf : oml.Boolean\n | \n | isnan(self)\n | Detects a NaN (not a number) element from Float object.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates NaN for each element.\n | \n | log(self, base=None)\n | Returns element-wise logarithm, to the given ``base``, of values\n | in the Float series data object.\n | \n | Parameters\n | ----------\n | base : int, float, optional\n | The base of the logarithm, by default natural logarithm\n | \n | Returns\n | -------\n | log : oml.Float\n | \n | replace(self, old, new, default=None)\n | Replace values given in `old` with `new`.\n | \n | Parameters\n | ----------\n | old : list of float, or list of str\n | Specifying the old values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying\n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. 
If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. Must be set when `old` and `new` contain\n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.Float\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | * if `default` is specified with a non-None value which has data type \n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different\n | data types\n | \n | round(self, decimals=0)\n | Rounds oml.Float values to the specified decimal place.\n | \n | Parameters\n | ----------\n | decimals : non-negative int\n | \n | Returns\n | -------\n | rounded : oml.Float\n | \n | sqrt(self)\n | Returns the square root of each element in the Float series data object.\n | \n | Returns\n | -------\n | sqrt : oml.Float\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.number._Number:\n | \n | cumsum(self, ascending=True, na_position='last', skipna=True)\n | Gets the cumulative sum after the OML series data object is sorted.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | Sorts ascending, otherwise descending.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaN and None at the beginning, ``last`` places them \n | at the end.\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | cumsum : oml.Float\n | \n | kurtosis(self, skipna=True)\n | Returns the sample kurtosis of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | kurt : float or numpy.nan\n | \n | mean(self, skipna=True)\n | Returns the mean of the values.\n | \n | Parameters\n | ----------\n | 
skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | mean : float or numpy.nan\n | \n | median(self, skipna=True)\n | Returns the median of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | median : float or numpy.nan\n | \n | skew(self, skipna=True)\n | Returns the sample skewness of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | skew : float or nan\n | \n | std(self, skipna=True)\n | Returns the sample standard deviation of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | std : float or numpy.nan\n | \n | sum(self, skipna=True)\n | Returns the sum of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | sum : float or numpy.nan\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | \n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, 
True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | pull(self)\n | Pulls data represented by the series data object from Oracle Database\n | into an in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of data placed in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there is any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is not a single nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class String(oml.core.series._Series)\n | String(other, dbtype)\n | \n | Character series data class.\n | \n | Represents a single column of VARCHAR2, CHAR, or CLOB data in Oracle Database.\n | \n | Method resolution order:\n | String\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __contains__(self, item)\n | Check whether all elements in ``item`` exist in the String series.\n | \n | Equivalent to ``item in self``.\n | \n | Parameters\n | ----------\n | item : str, list of str, oml.String\n | Values to check in series\n | \n | Returns\n | -------\n | contains : bool\n | Returns ``True`` if all elements exist, otherwise ``False``.\n | \n | __init__(self, other, dbtype)\n | Convert underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.String\n | dbtype : 'varchar2' or 'clob'\n | \n | count_pattern(self, pat, flags=0)\n | Counts the number of occurrences of the pattern in each string.
\n | \n | Parameters\n | ----------\n | pat : str that is a valid regular expression conforming to the POSIX standard\n | flags : int, 0 (default, no flags)\n | The following :py:mod:`python:re` module flags are supported:\n | \n | - :py:data:`python:re.I`/:py:data:`python:re.IGNORECASE` : Performs case-insensitive matching.\n | - :py:data:`python:re.M`/:py:data:`python:re.MULTILINE` : Treats the source string as multiple lines.\n | Interprets the caret (^) and dollar sign ($) as the start and end,\n | respectively, of any line anywhere in source string. Without this flag,\n | the caret and dollar sign match only the start and end, respectively, of\n | the source string.\n | - :py:data:`python:re.S`/:py:data:`python:re.DOTALL` : Allows the period (.) to match all characters,\n | including the newline character. Without this flag, the period matches all\n | characters except the newline character.\n | \n | Multiple flags can be specified by bitwise OR-ing them.\n | \n | Returns\n | -------\n | counts : oml.Float\n | \n | find(self, sub, start=0)\n | Returns the lowest index in each string where substring is found that is\n | greater than or equal to ``start``. Returns -1 on failure.\n | \n | Parameters\n | ----------\n | sub : str\n | The text expression to search.\n | start : int\n | A nonnegative integer indicating where the function begins the search. \n | \n | Returns\n | -------\n | found : oml.Float\n | \n | len(self)\n | Computes the length of each string.\n | \n | Returns\n | -------\n | length : oml.Float\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of str and None\n | \n | replace(self, old, new, default=None)\n | Replaces values given in `old` with `new`.\n | \n | Parameters\n | ----------\n | old : list of float, or list of str\n | Specifying the old values.
When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying\n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. Must be set when `old` and `new` contain \n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.String\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | * if `default` is specified with a non-None value which has data type\n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different\n | data types\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data.
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other.
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other.
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), and ``freq``\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
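The ``replace(old, new, default)`` semantics documented above can be illustrated with a plain-Python mirror that runs without a database connection. ``replace_values`` is a hypothetical helper written for this illustration, not part of the OML4Py API; in OML4Py the work happens in-database on the proxy object.

```python
def replace_values(values, old, new, default=None):
    # Mirror of the documented behavior: values matching an entry in `old`
    # map to the corresponding entry in `new`; non-matched values keep
    # their original value unless `default` is given.
    if len(old) != len(new):
        raise ValueError("`old` and `new` must have the same length")
    mapping = dict(zip(old, new))
    return [mapping.get(v, v if default is None else default) for v in values]

colors = ["red", "green", "blue", "green", None]
# Map 'green' -> 'g'; non-matched values (including None) are preserved.
print(replace_values(colors, ["green"], ["g"]))
# ['red', 'g', 'blue', 'g', None]
# With default='other', every non-matched value is replaced wholesale.
print(replace_values(colors, ["green"], ["g"], "other"))
# ['other', 'g', 'other', 'g', 'other']
```

The same mapping on an ``oml.String`` column would be expressed as ``s.replace(['green'], ['g'])``, with the substitution pushed down to the database rather than pulled into memory.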
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaNs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data.
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each contains the portion of the data given by the corresponding ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows.
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflicting column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not
from same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class ai(oml.algo.model.odmModel)\n | ai(model_name=None, model_owner=None, **params)\n | \n | In-database `Attribute Importance `_ Model\n | \n | Computes the relative importance of variables (aka attributes or columns) when predicting\n | a target variable (numeric or categorical column). This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. \n | Oracle Machine Learning does not support the prediction functions\n | for attribute importance. The results of attribute importance are the attributes\n | of the build data ranked according to their predictive influence. 
The ranking and\n | the measure of importance can be used for selecting attributes.\n | \n | :Attributes:\n | \n | **importance** : oml.DataFrame\n | \n | Relative importance of predictor variables for predicting a response variable.\n | It includes the following components:\n | \n | - variable: The name of the predictor variable\n | - importance: The importance of the predictor variable\n | - rank: The predictor variable rank based on the importance value.\n | \n | Method resolution order:\n | ai\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of Attribute Importance object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Attribute Importance model to create an oml.ai object from.\n | The specified database model is not dropped when the oml.ai object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Attribute Importance model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings or Algorithm-specific Settings are not\n | applicable to Attribute Importance model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None)\n | Fits an Attribute Importance Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.ai object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.ai object is deleted\n | unless oml.ai object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable 
of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Whether to include the computed and default parameters.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoting\n \n class ar(oml.algo.model.odmModel)\n | ar(model_name=None, model_owner=None, **params)\n | \n | In-database `Association Rules `_ Model\n | \n | Builds an Association Rules Model used to discover the probability of item co-occurrence\n | in a collection. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. The relationships between co-occurring items are expressed as \n | association rules.\n | Oracle Machine Learning does not support the prediction functions for association modeling.\n | The results of an association model are the rules that identify patterns of\n | association within the data.
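The support, confidence, and lift measures that rank association rules follow the standard definitions; a minimal plain-Python sketch of how they relate for a rule A -> B over transactions, assuming a hypothetical ``rule_metrics`` helper (the in-database Apriori algorithm computes these internally and exposes them in the ``rules`` attribute):

```python
def rule_metrics(transactions, antecedent, consequent):
    # transactions: list of sets of items; antecedent/consequent: sets.
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)            # A occurs
    ab = sum(1 for t in transactions if (antecedent | consequent) <= t)  # A and B co-occur
    b = sum(1 for t in transactions if consequent <= t)            # B occurs
    support = ab / n                          # fraction of transactions satisfying the rule
    confidence = ab / a if a else 0.0         # likelihood of B given A
    lift = confidence / (b / n) if b else 0.0 # improvement over random chance
    return support, confidence, lift

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk"}]
s, c, l = rule_metrics(baskets, {"milk"}, {"bread"})
# milk in 3 of 4 baskets, milk+bread in 2 of 4, bread in 3 of 4:
print(round(s, 2), round(c, 2), round(l, 2))  # 0.5 0.67 0.89
```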
Association rules can be ranked by support\n | (How often do these items occur together in the data?) and confidence\n | (How likely are these items to occur together in the data?).\n | \n | :Attributes:\n | \n | **rules** : oml.DataFrame\n | \n | Details of each rule that shows \n | how the appearance of a set of items in a transactional\n | record implies the existence of another set of items.\n | It includes the following components:\n | \n | - rule_id: The identifier of the rule\n | - number_of_items: The total number of attributes referenced in the antecedent and consequent of the rule\n | - lhs_name: The name of the antecedent.\n | - lhs_value: The value of the antecedent.\n | - rhs_name: The name of the consequent.\n | - rhs_value: The value of the consequent.\n | - support: The number of transactions that satisfy the rule.\n | - confidence: The likelihood of a transaction satisfying the rule.\n | - revconfidence: The number of transactions in which the rule occurs divided by the number of transactions in which the consequent occurs.\n | - lift: The degree of improvement in the prediction over random chance when the rule is satisfied.\n | \n | **itemsets** : oml.DataFrame\n | \n | Description of the item sets from the model built.\n | It includes the following components:\n | \n | - itemset_id: The itemset identifier\n | - support: The support of the itemset\n | - number_of_items: The number of items in the itemset\n | - item_name: The name of the item\n | - item_value: The value of the item\n | \n | Method resolution order:\n | ar\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of Association Rules object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Association Rules model to create an oml.ar object from.\n | The specified database model is not dropped when the 
oml.ar object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Association Rules model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic\n | Data Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Association are applicable.\n | No algorithm-specific Settings are applicable to Association model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None)\n | Fits an Association Rules Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.ar object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.ar object is deleted\n | unless oml.ar object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are 
exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Whether to include the computed and default parameters.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoting\n \n class dt(oml.algo.model.odmModel)\n | dt(model_name=None, model_owner=None, **params)\n | \n | In-database `Decision Tree `_ Model\n | \n | Builds a Decision Tree Model used to generate rules (conditional statements \n | that can easily be understood by humans and be used within a database to identify \n | a set of records) to predict a target value (numeric or categorical column).
This \n | function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | A decision tree predicts a target value by asking a sequence of questions. \n | At a given stage in the sequence, the question that is asked depends upon the \n | answers to the previous questions. The goal is to ask questions that, taken \n | together, uniquely identify specific target values. Graphically, this process \n | forms a tree structure. During the training process, the Decision Tree algorithm \n | must repeatedly find the most efficient way to split a set of cases (records) \n | into two child nodes. The model offers two homogeneity metrics, gini and entropy, \n | for calculating the splits. The default metric is gini.\n | \n | \n | :Attributes:\n | \n | **nodes** : oml.DataFrame\n | \n | The node summary information with tree node details.\n | It includes the following components:\n | \n | - parent: The node ID of the parent\n | - node.id: The node ID\n | - row.count: The number of records in the training set that belong to the node\n | - prediction: The predicted Target value\n | - split: The main split\n | - surrogate: The surrogate split\n | - full.splits: The full splitting criterion\n | \n | **distributions** : oml.DataFrame\n | \n | The target class distributions at each tree node.\n | It includes the following components:\n | \n | - node_id: The node ID\n | - target_value: The target value\n | - target_count: The number of rows for a given target_value\n | \n | Method resolution order:\n | dt\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of dt object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Decision Tree model to create an oml.dt object from.\n | The specified database model is not dropped when the oml.dt object is deleted.\n | model_owner: string or 
None (default)\n | The owner name of the existing Decision Tree model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Decision Tree model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, cost_matrix=None, case_id=None)\n | Fits a decision tree model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.dt object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.dt object is deleted\n | unless oml.dt object is saved into a datastore.\n | cost_matrix : OML DataFrame, list of ints, floats or None (default)\n | An optional numerical matrix that specifies the costs for incorrectly\n | predicting the target values. The first value represents the actual target value.\n | The second value represents the predicted target value. The third value is the cost.\n | In general, the diagonal entries of the matrix are zeros. 
Refer to `Oracle Data\n | Mining User's Guide `_\n | for more details about cost matrix.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if proba is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols``, and the\n | results. 
The results include the most likely target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n 
| Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class em(oml.algo.model.odmModel)\n | em(n_clusters=None, model_name=None, model_owner=None, **params)\n | \n | In-database `Expectation Maximization `_ Model\n | \n | Builds an Expectation Maximization (EM) Model used to perform probabilistic \n | clustering based on a density estimation algorithm. This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. In density estimation, \n | the goal is to construct a density function that captures how a given population is \n | distributed. 
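As a hedged illustration of the clustering workflow described here, the sketch below fits an EM model on a toy two-cluster data set. It assumes an active oml.connect() session and uses oml.push() to stage the data; all values are illustrative:

```python
import oml
import pandas as pd

# Assumption: oml.connect(...) has already established a database session.
df = pd.DataFrame({"V1": [1.0, 1.1, 0.9, 8.0, 8.2, 7.9],
                   "V2": [0.2, 0.1, 0.3, 5.0, 5.1, 4.9]})
dat = oml.push(df)

# Fit an Expectation Maximization model, asking for two clusters.
em_mod = oml.em(n_clusters=2).fit(dat)

print(em_mod.clusters)                             # per-cluster summary information
pred = em_mod.predict(dat, supplemental_cols=dat)  # most likely cluster per row
proba = em_mod.predict_proba(dat, topN=2)          # per-cluster probabilities
```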
The density estimate is based on observed data that represents a \n | sample of the population.\n | \n | :Attributes:\n | \n | **clusters** : oml.DataFrame\n | \n | The general per-cluster information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - cluster_name: The name of a cluster in the model\n | - record_count: The number of rows used in the build\n | - parent: The ID of the parent\n | - tree_level: The number of splits from the root\n | - left_child_id: The ID of the left child\n | - right_child_id: The ID of the right child\n | \n | **taxonomy**: oml.DataFrame\n | \n | The parent/child cluster relationship.\n | It includes the following components:\n | \n | - parent_cluster_id: The ID of the parent cluster\n | - child_cluster_id: The ID of the child cluster\n | \n | **centroids**: oml.DataFrame\n | \n | Per cluster-attribute center (centroid) information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - attribute_name: The attribute name\n | - mean: The average value of a numeric attribute\n | - mode_value: The most frequent value of a categorical attribute\n | - variance: The variance of a numeric attribute\n | \n | **leaf_cluster_counts**: pandas.DataFrame\n | \n | Leaf clusters with support.\n | It includes the following components:\n | \n | - cluster_id: The ID of a leaf cluster in the model\n | - cnt: The number of records in a leaf cluster\n | \n | **attribute_importance**: oml.DataFrame\n | \n | Attribute importance of the fitted model.\n | It includes the following components:\n | \n | - attribute_name: The attribute name\n | - attribute_importance_value: The attribute importance for an attribute\n | - attribute_rank: The rank of the attribute based on importance\n | \n | **projection**: oml.DataFrame\n | \n | The coefficients used by random projections to map nested columns to a lower dimensional space.\n | It exists only when nested or 
text data is present in the build data.\n | It includes the following components:\n | \n | - feature_name: The name of feature\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - coefficient: The projection coefficient for an attribute\n | \n | **components**: oml.DataFrame\n | \n | EM components information about their prior probabilities and what cluster they map to.\n | It includes the following components:\n | \n | - component_id: The unique identifier of a component\n | - cluster_id: The ID of a cluster in the model\n | - prior_probability: The component prior probability\n | \n | **cluster_hists**: oml.DataFrame\n | Cluster histogram information.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - variable: The attribute name\n | - bin.id: The ID of a bin\n | - lower.bound: The numeric lower bin boundary\n | - upper.bound: The numeric upper bin boundary\n | - label: The label of the cluster\n | - count: The histogram count\n | \n | **rules**: oml.DataFrame\n | \n | Conditions for a case to be assigned with some probability to a cluster.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - rhs.support: The record count\n | - rhs.conf: The record confidence\n | - lhs.support: The rule support\n | - lhs.conf: The rule confidence\n | - lhs.var: The attribute predicate name\n | - lhs.var.support: The attribute predicate support\n | - lhs.var.conf: The attribute predicate confidence\n | - predicate: The attribute predicate\n | \n | Method resolution order:\n | em\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, n_clusters=None, model_name=None, model_owner=None, **params)\n | Initializes an instance of em object.\n | \n | Parameters\n | ----------\n | n_clusters : positive integer, None (default)\n | The number of clusters. 
If n_clusters is None, the number of clusters will be determined\n | either by current setting parameters or automatically by the algorithm.\n | model_name : string or None (default)\n | The name of an existing database Expectation Maximization model to create an oml.em object from.\n | The specified database model is not dropped when the oml.em object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Expectation Maximization model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Clustering and `Algorithm-specific\n | Settings `_\n | are applicable to the Expectation Maximization model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None)\n | Fits an Expectation Maximization Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.em object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.em object is deleted\n | unless oml.em object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate 
scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. If the mode is 'class', the results include the most likely\n | target class and its probability. If mode is 'raw', the results\n | include for each target class, the probability belonging\n | to that class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each cluster on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned clusters to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include for each cluster, the probability\n | belonging to that cluster.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | 
model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class esa(oml.algo.model.odmModel)\n | esa(model_name=None, model_owner=None, **params)\n | \n | In-database `Explicit Semantic Analysis `_ Model\n | \n | Builds an Explicit Semantic Analysis (ESA) Model to be used for feature extraction. \n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | ESA uses concepts of an existing knowledge base as features rather than latent\n | features derived by latent semantic analysis methods such as Singular\n | Value Decomposition and Latent Dirichlet Allocation. Each row in the training data,\n | for example a document, maps to a feature, that is, a concept.\n | ESA works best with concepts represented by text documents.\n | It has multiple applications in the area of text processing, most\n | notably semantic relatedness (similarity) and explicit topic modeling.\n | Text similarity use cases might involve, for example, resume matching, searching\n | for similar blog postings, and so on.\n | \n | :Attributes:\n | \n | **features** : oml.DataFrame\n | \n | Description of each feature extracted. 
\n | It includes the following components:\n | \n | - feature_id: The unique identifier of a feature as it appears in the training data\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - coefficient: The coefficient (weight) associated with the attribute in a particular feature.\n | \n | Method resolution order:\n | esa\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of esa object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Explicit Semantic Analysis model to create an oml.esa object from.\n | The specified database model is not dropped when the oml.esa object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Explicit Semantic Analysis model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Feature Extraction and `Algorithm-specific\n | Settings `_\n | are applicable to the Explicit Semantic Analysis model.\n | \n | __repr__(self)\n | \n | feature_compare(self, x, compare_cols=None, supplemental_cols=None)\n | Compares features of data and generates relatedness.\n | \n | Parameters\n | ----------\n | x : an OML object\n | The data used to measure relatedness.\n | compare_cols : str, a list of str or None (default)\n | The column(s) used to measure data relatedness.\n | If None, all the columns of ``x`` are compared to measure relatedness.\n | supplemental_cols : a list of str or None (default)\n | A list of columns to display along with the resulting 'SIMILARITY' column.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains a 'SIMILARITY' column that measures relatedness and supplementary columns if specified.\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits an ESA Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.esa object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.esa object is deleted\n | unless oml.esa object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | ctx_settings : dict or None (default)\n | A dict to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | Each dict key refers to the text column while the dict value\n | is a scalar string specifying the attribute-specific text 
transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include the most likely feature and its probability.\n | \n | transform(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns relevancy for each feature on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned values to\n | the specified number of features that have the highest values.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the relevancy for each feature on new data and the specified ``supplemental_cols``.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are 
exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class glm(oml.algo.model.odmModel)\n | glm(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Generalized Linear Models `_\n | \n | Builds Generalized Linear Models (GLM), which include and extend the class of \n | linear models (linear regression), to be used for classification or regression. 
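A minimal sketch of the regression path through this API, assuming an active oml.connect() session and oml.push() for staging a toy pandas frame (column names and values are illustrative):

```python
import oml
import pandas as pd

# Assumption: oml.connect(...) has already established a database session.
df = pd.DataFrame({"X": [1.0, 2.0, 3.0, 4.0, 5.0],
                   "Y": [2.1, 3.9, 6.2, 8.1, 9.8]})
dat = oml.push(df)

# mining_function defaults to 'CLASSIFICATION'; request a regression fit.
glm_mod = oml.glm("REGRESSION").fit(dat[["X"]], dat["Y"])

print(glm_mod.coef)        # coefficients with std errors, t and p values
print(glm_mod.converged)   # whether the fit converged
r2 = glm_mod.score(dat[["X"]], dat["Y"])  # coefficient of determination R^2
```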
\n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | Generalized linear models relax the restrictions on linear models, which are often \n | violated in practice. For example, binary (yes/no or 0/1) responses do not have the same \n | variance across classes. This model uses a parametric modeling technique. Parametric \n | models make assumptions about the distribution of the data. When the assumptions are \n | met, parametric models can be more efficient than non-parametric models.\n | \n | :Attributes:\n | \n | **coef** : oml.DataFrame\n | \n | The coefficients of the GLM model, one for each predictor variable.\n | It includes the following components:\n | \n | - nonreference: The target value used as nonreference\n | - attribute name: The attribute name\n | - attribute value: The attribute value\n | - coefficient: The estimated coefficient\n | - std error: The standard error\n | - t value: The test statistic\n | - p value: The statistical significance\n | \n | **fit_details**: oml.DataFrame\n | \n | The model fit details such as adjusted_r_square, error_mean_square and so on.\n | It includes the following components:\n | \n | - name: The fit detail name\n | - value: The fit detail value\n | \n | **deviance**: float\n | \n | Minus twice the maximized log-likelihood, up to a constant.\n | \n | **null_deviance**: float\n | \n | The deviance for the null (intercept only) model.\n | \n | **aic**: float\n | \n | Akaike information criterion.\n | \n | **rank**: integer\n | \n | The numeric rank of the fitted model.\n | \n | **df_residual**: float\n | \n | The residual degrees of freedom.\n | \n | **df_null**: float\n | \n | The residual degrees of freedom for the null model.\n | \n | **converged**: bool\n | \n | The indicator for whether the model converged.\n | \n | **nonreference**: int or str\n | \n | For logistic regression, the response value that represents success.\n | \n | Method resolution order:\n | glm\n | 
oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of glm object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' or 'REGRESSION', 'CLASSIFICATION' (default)\n | Type of model mining functionality\n | model_name : string or None (default)\n | The name of an existing database Generalized Linear Model to create an oml.glm object from.\n | The specified database model is not dropped when the oml.glm object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Generalized Linear Model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to the Generalized Linear Model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, ctx_settings=None)\n | Fits a GLM Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.glm object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.glm object is deleted\n | unless oml.glm object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A dict to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | Each dict key refers to the text column while the dict value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None, confint=None, level=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data 
set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | confint : bool, False (default)\n | A logical indicator for whether to produce confidence intervals\n | for the predicted values.\n | level : float between 0 and 1 or None (default)\n | A numeric value within [0, 1] to use for the confidence level.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True for classification.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and optionally its probability and confidence\n | intervals. For a linear regression model, the results consist of a column\n | for the prediction and optionally its confidence intervals.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include for each target class, the probability\n | belonging to that class.\n | \n | residuals(self, x, y)\n | Return the deviance residuals, which includes the following components:\n | - deviance: The deviance residual\n | - pearson: The Pearson residual\n | - response: The residual of the working response.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | \n | Return: oml.DataFrame\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | 
Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class km(oml.algo.model.odmModel)\n | km(n_clusters=None, model_name=None, model_owner=None, **params)\n | \n | In-database `k-means `_ Model\n | \n | Builds a K-Means (KM) Model that uses a distance-based clustering algorithm to \n | partition data into a specified number of clusters. This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. Distance-based \n | algorithms rely on a distance function to measure the similarity between cases. 
\n | Cases are assigned to the nearest cluster according to the distance function used.\n | \n | :Attributes:\n | \n | **clusters** : oml.DataFrame\n | \n | The general per-cluster information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - row_cnt: The number of rows used in the build\n | - parent_cluster_id: The ID of the parent\n | - tree_level: The number of splits from the root\n | - dispersion: The measure of the quality of the cluster, and computationally, the sum of square errors\n | \n | **taxonomy**: oml.DataFrame\n | \n | The parent/child cluster relationship.\n | It includes the following components:\n | \n | - parent_cluster_id: The ID of the parent cluster\n | - child_cluster_id: The ID of the child cluster\n | \n | **centroids**: oml.DataFrame\n | \n | Per cluster-attribute center (centroid) information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - attribute_name: The attribute name\n | - mean: The average value of a numeric attribute\n | - mode_value: The most frequent value of a categorical attribute\n | - variance: The variance of a numeric attribute\n | \n | **leaf_cluster_counts**: pandas.DataFrame\n | \n | Leaf clusters with support.\n | It includes the following components:\n | \n | - cluster_id: The ID of a leaf cluster in the model\n | - cnt: The number of records in a leaf cluster\n | \n | **cluster_hists**: oml.DataFrame\n | \n | Cluster histogram information.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - variable: The attribute name\n | - bin.id: The ID of a bin\n | - lower.bound: The numeric lower bin boundary\n | - upper.bound: The numeric upper bin boundary\n | - label: The label of the cluster\n | - count: The histogram count\n | \n | **rules**: oml.DataFrame\n | \n | Conditions for a case to be assigned with some probability to a cluster.\n | It includes the 
following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - rhs.support: The record count\n | - rhs.conf: The record confidence\n | - lhs.support: The rule support\n | - lhs.conf: The rule confidence\n | - lhs.var: The attribute predicate name\n | - lhs.var.support: The attribute predicate support\n | - lhs.var.conf: The attribute predicate confidence\n | - predicate: The attribute predicate\n | \n | Method resolution order:\n | km\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, n_clusters=None, model_name=None, model_owner=None, **params)\n | Initializes an instance of km object.\n | \n | Parameters\n | ----------\n | n_clusters : positive integer, default None\n | Number of clusters. If n_clusters is None, the number of clusters will be determined\n | either by current setting parameters or automatically by the internal algorithm.\n | model_name : string or None (default)\n | The name of an existing database K-Means model to create an oml.km object from.\n | The specified database model is not dropped when the oml.km object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing K-Means model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Clustering and `Algorithm-specific\n | Settings `_\n | are applicable to K-Means model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits a K-Means Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.km object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.km object is deleted\n | unless oml.km object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A dict to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | Each dict key refers to the text column while the dict value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns 
probability for each cluster on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer\n | A positive integer that restricts the returned clusters to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include for each cluster, the probability\n | belonging to that cluster.\n | \n | score(self, x)\n | Calculates the score value based on the input data ``x``.\n | \n | Parameters\n | ----------\n | x : an OML object\n | A new data set used to calculate score value.\n | \n | Returns\n | -------\n | pred : float\n | Score values, that is, opposite of the value of ``x`` on the K-means objective.\n | \n | transform(self, x)\n | Transforms ``x`` to a cluster-distance space.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the distance to each cluster.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | 
get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class nb(oml.algo.model.odmModel)\n | nb(model_name=None, model_owner=None, **params)\n | \n | In-database `Naive Bayes `_ Model\n | \n | Builds a Naive Bayes Model that uses conditional probabilities to predict a target \n | variable (numeric or categorical column). Naive Bayes looks at the historical data \n | and calculates conditional probabilities for the target values by observing the \n | frequency of attribute values and of combinations of attribute values. Naive Bayes \n | assumes that each predictor is conditionally independent of the others. 
(This "naive" \n | independence assumption gives the algorithm its name.)\n | \n | :Attributes:\n | \n | **priors** : oml.DataFrame\n | \n | An optional named numerical vector that specifies the priors for the target classes.\n | It includes the following components:\n | \n | - target_name: The name of the target column\n | - target_value: The target value\n | - prior_probability: The prior probability for a given target_value\n | - count: The number of rows for a given target_value\n | \n | **conditionals** : oml.DataFrame\n | \n | Conditional probabilities for each predictor variable.\n | It includes the following components:\n | \n | - target_name: The name of the target column\n | - target_value: The target value\n | - attribute_name: The column name\n | - attribute_subname: The nested column subname.\n | - attribute_value: The mining attribute value\n | - conditional_probability: The conditional probability of a mining attribute for a given target\n | - count: The number of rows for a given mining attribute and a given target\n | \n | Method resolution order:\n | nb\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of nb object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Naive Bayes model to create an oml.nb object from.\n | The specified database model is not dropped when the oml.nb object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Naive Bayes model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Naive Bayes model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, priors=None, case_id=None)\n | Fits a Naive Bayes Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.nb object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.nb object is deleted\n | unless oml.nb object is saved into a datastore.\n | priors : OML DataFrame or dict or list of ints or floats or None (default)\n | The priors represent the overall distribution of the target in the\n | population. By default, the priors are computed from the sample.\n | If the sample is known to be a distortion of the population target\n | distribution, then the user can override the default by providing\n | a priors table as a setting for model creation. For OML DataFrame\n | input, the first value represents the target value. The second value\n | represents the prior probability. For dictionary type input, the key\n | represents the target value. The value represents the prior probability.\n | For list type input, the first value represents target value. The\n | second value represents the prior probability. 
See `Oracle Data\n | Mining Concepts Guide `_\n | for more details.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include the most likely target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. 
If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class nn(oml.algo.model.odmModel)\n | nn(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Neural Network `_ Model\n | \n | Builds a Neural Network (NN) Model that uses an algorithm inspired by biological\n | neural networks for classification and regression. Neural Network is used to estimate\n | or approximate functions that depend on a large number of generally unknown inputs.\n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | An artificial neural network is composed of a large number of interconnected neurons\n | which exchange messages between each other to solve specific problems. 
They learn by\n | examples and tune the weights of the connections among the neurons during the learning\n | process. Neural Network is capable of solving a wide variety of tasks such as computer\n | vision, speech recognition, and various complex business problems.\n | \n | :Attributes:\n | \n | **weights** : oml.DataFrame\n | \n | Weights of fitted model between nodes in different layers.\n | It includes the following components:\n | \n | - layer: The layer ID, 0 as an input layer\n | - idx_from: The node index that the weight connects from (attribute id for input layer)\n | - idx_to: The node index that the weights connects to\n | - attribute_name: The attribute name (only for the input layer)\n | - attribute_subname: The attribute subname\n | - attribute_value: The attribute value\n | - target_value: The target value.\n | - weight: The value of weight\n | \n | **topology** : oml.DataFrame\n | \n | Topology of the fitted model including number of nodes and hidden layers.\n | It includes the following components:\n | \n | - hidden_layer_id: The id number of the hidden layer\n | - num_node: The number of nodes in each layer\n | - activation_function: The activation function in each layer\n | \n | Method resolution order:\n | nn\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of nn object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' or 'REGRESSION', 'CLASSIFICATION' (default)\n | Type of model mining functionality\n | model_name : string or None (default)\n | The name of an existing database Neural Network model to create an oml.nn object from.\n | The specified database model is not dropped when the oml.nn object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Neural Network model\n | The current database user by default\n | params : 
key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Neural Network model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, class_weight=None)\n | Fits a Neural Network Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.nn object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.nn object is deleted\n | unless oml.nn object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | class_weight : OML DataFrame or dict or list of ints or floats or None (default)\n | An optional matrix that is used to influence the weighting of\n | target classes during model creation. For OML DataFrame input, the first\n | value represents the target value. The second value represents the class weight.\n | For dictionary type input, the key represents the target value. The value\n | represents the class weight. 
For list type input, the first value represents the\n | target value. The second value represents the class weight.\n | Refer to `Oracle Data Mining User's Guide `_\n | for more details about class weights.\n | \n | get_params(self, params=None, deep=False)\n | Fetches settings of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and its probability. For a regression\n | model, the results consist of a column for the prediction. For an\n | anomaly detection model, the results include a prediction and its\n | probability. 
If the prediction is 1, the case is considered typical.\n | If the prediction is 0, the case is considered anomalous. This\n | behavior reflects the fact that the model is trained with normal data.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None 
(default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class rf(oml.algo.model.odmModel)\n | rf(model_name=None, model_owner=None, **params)\n | \n | In-database `Random Forest `_ Model\n | \n | Builds a Random Forest (RF) Model that uses an ensemble (also called forest) of trees \n | for classification. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. Random Forest is a popular ensemble learning technique for \n | classification. 
By combining the ideas of bagging and random selection of variables, \n | the algorithm produces a collection of decision trees with controlled variance, while \n | avoiding overfitting - a common problem for decision trees.\n | \n | :Attributes:\n | \n | **importance** : oml.DataFrame\n | \n | Attribute importance of the fitted model.\n | It includes the following components:\n | \n | - attribute_name: The attribute name\n | - attribute_subname: The attribute subname \n | - attribute_importance: The attribute importance for an attribute in the forest\n | \n | Method resolution order:\n | rf\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of rf object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Random Forest model to create an oml.rf object from.\n | The specified database model is not dropped when the oml.rf object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Random Forest model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Random Forest model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, cost_matrix=None, case_id=None)\n | Fits a Random Forest Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.rf object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.rf object is deleted\n | unless oml.rf object is saved into a datastore.\n | cost_matrix : OML DataFrame or list of ints or floats or None (default)\n | An optional numerical square matrix that specifies the costs for incorrectly\n | predicting the target values. The first value represents the actual target value.\n | The second value represents the predicted target value. The third value is the cost.\n | In general, the diagonal entries of the matrix are zeros. 
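The cost-matrix layout described above (actual value, predicted value, cost, with a zero diagonal) can be illustrated with a small client-side sketch. This is a hedged pure-Python analogue of cost-sensitive scoring, not the in-database implementation; the class names and probabilities are made up for the example:

```python
# Illustrative sketch (not the in-database scoring code): pick the class
# that minimizes expected misclassification cost under the predicted
# probabilities. `cost[(actual, predicted)]` mirrors the cost-matrix
# layout described above, with zeros on the diagonal.

def min_cost_class(proba, cost, classes):
    """proba: dict class -> probability; cost: dict (actual, predicted) -> cost."""
    def expected_cost(predicted):
        return sum(proba[actual] * cost[(actual, predicted)] for actual in classes)
    return min(classes, key=expected_cost)

classes = ["no", "yes"]
# Missing a "yes" is five times as costly as a false alarm.
cost = {("no", "no"): 0, ("no", "yes"): 1,
        ("yes", "no"): 5, ("yes", "yes"): 0}
proba = {"no": 0.7, "yes": 0.3}

# Plain argmax would predict "no", but the expected cost of "no" is
# 0.3 * 5 = 1.5 versus 0.7 * 1 = 0.7 for "yes", so "yes" wins.
print(min_cost_class(proba, cost, classes))  # yes
```

This shows why the diagonal entries are normally zero: a correct prediction contributes nothing to the expected cost.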
Refer to `Oracle Data\n | Mining User's Guide `_\n | for more details about cost matrix.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols``, and the most likely\n | target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. 
If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class svd(oml.algo.model.odmModel)\n | svd(model_name=None, model_owner=None, **params)\n | \n | In-database `Singular Value Decomposition `_ Model\n | \n | Builds a Singular Value Decomposition (SVD) Model that can be used for feature \n | extraction. SVD provides orthogonal linear transformations that capture the \n | underlying variance of the data by decomposing a rectangular matrix into three \n | matrices: U, D, and V. Matrix D is a diagonal matrix and its singular values \n | reflect the amount of data variance captured by the bases. 
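The decomposition just described, X = U·D·Vᵀ with the singular values of D capturing data variance, can be illustrated client-side with numpy's SVD. This is only an analogue of what the in-database model computes; the input matrix is made up for the example:

```python
# Client-side illustration of the decomposition described above, using
# numpy's SVD rather than the in-database model: X = U @ diag(d) @ V.T
import numpy as np

X = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # a rectangular (3 x 2) input matrix

U, d, Vt = np.linalg.svd(X, full_matrices=False)

# d holds the singular values in descending order; U and Vt.T hold the
# left and right singular vectors, matching the U, D, V roles in the text.
reconstructed = U @ np.diag(d) @ Vt
print(np.allclose(reconstructed, X))  # True
```

The leading singular values dominate the reconstruction, which is why truncating the decomposition gives a low-rank feature extraction.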
Columns of matrix V \n | contain the right singular vectors and columns of matrix U contain the left singular\n | vectors.\n | \n | :Attributes:\n | \n | **features** : oml.DataFrame\n | \n | Features extracted by the fitted model including feature id and associated coefficient.\n | It includes the following components:\n | \n | - feature_id: The ID of a feature in the model\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - value: The matrix entry value\n | \n | **u** : oml.DataFrame\n | \n | A dataframe whose columns contain the left singular vectors.\n | The column name is the corresponding feature id.\n | \n | **v** : oml.DataFrame\n | \n | A dataframe whose columns contain the right singular vectors.\n | The column name is the corresponding feature id.\n | \n | **d** : oml.DataFrame\n | \n | A dataframe containing the singular values of the input data.\n | It includes the following components:\n | \n | - feature_id: The ID of a feature in the model\n | - value: The singular values of the input data\n | \n | Method resolution order:\n | svd\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of svd object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Singular Value Decomposition model to create an oml.svd object from.\n | The specified database model is not dropped when the oml.svd object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Singular Value Decomposition model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. 
Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Feature Extraction and `Algorithm-specific\n | Settings `_\n | are applicable to Singular Value Decomposition model.\n | \n | __repr__(self)\n | \n | feature_compare(self, x, compare_cols=None, supplemental_cols=None)\n | Compares features of data and generates relatedness.\n | \n | Parameters\n | ----------\n | x : an OML object\n | The data used to measure relatedness.\n | compare_cols : str, a list of str or None (default)\n | The column(s) used to measure data relatedness.\n | If None, all the columns of ``x`` are compared to measure relatedness.\n | supplemental_cols : a list of str or None (default)\n | A list of columns to display along with the resulting 'SIMILARITY' column.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains a 'SIMILARITY' column that measures relatedness and supplementary columns if specified.\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits an SVD Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.svd object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.svd object is deleted\n | unless oml.svd object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model. 
\n | ``case_id`` and SVDS_U_MATRIX_OUTPUT in ``odm_settings`` \n | must be specified in order to produce matrix U.\n | ctx_settings : dict or None (default)\n | A list to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each list element refers to the text column while the list value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the predicted feature index on the new data and the specified ``supplemental_cols``.\n | \n | transform(self, x, supplemental_cols=None, topN=None)\n | Performs dimensionality reduction and returns value for each feature on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned values to\n | the specified number of features that have the highest topN values.\n | If None, all features will be returned.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the values of new data after the SVD transform and the specified ``supplemental_cols``.\n | \n | ----------------------------------------------------------------------\n | Methods 
inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented 
after pivoted\n \n class svm(oml.algo.model.odmModel)\n | svm(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Support Vector Machine `_ Model\n | \n | Builds a Support Vector Machine (SVM) Model to be used for regression, classification, \n | or anomaly detection. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. SVM is a powerful, state-of-the-art algorithm with strong \n | theoretical foundations based on the Vapnik-Chervonenkis theory. SVM has strong \n | regularization properties. Regularization refers to the generalization of the model to \n | new data.\n | \n | \n | :Attributes:\n | \n | **coef** : oml.DataFrame\n | \n | The coefficients of the SVM model, one for each predictor variable.\n | It includes the following components:\n | \n | - target_value: The target value\n | - attribute_name: The attribute name\n | - attribute_subname: The attribute subname\n | - attribute_value: The attribute value\n | - coef: The projection coefficient value\n | \n | Method resolution order:\n | svm\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of svm object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' (default), 'REGRESSION' or 'ANOMALY_DETECTION'\n | Type of model mining functionality.\n | model_name : string or None (default)\n | The name of an existing database Support Vector Machine model to create an oml.svm object from.\n | The specified database model is not dropped when the oml.svm object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Support Vector Machine model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. 
Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Support Vector Machine model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, ctx_settings=None)\n | Fits an SVM Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object or None, or string\n | Target values.\n | Must be specified when SVM algorithm is used for classification\n | or regression and must be None when used for anomaly detection.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.svm object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.svm object is deleted\n | unless oml.svm object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A list to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each list element refers to the text column while the list value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, 
POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : oml.DataFrame\n | Predictor values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and its probability. For a regression\n | model, the results consist of a column for the prediction. For an\n | anomaly detection model, the results include a prediction and its\n | probability. If the prediction is 1, the case is considered normal.\n | If the prediction is 0, the case is considered anomalous. 
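The anomaly-detection output convention just described (prediction 1 = normal, 0 = anomalous, each with a probability) can be sketched in plain Python. This is an illustrative interpretation helper, not part of the OML API; the sample rows are invented:

```python
# Illustrative sketch of the anomaly-detection output convention described
# above (1 = normal, 0 = anomalous); not the in-database scoring code.

def interpret_anomaly(prediction, probability):
    """Map a one-class-SVM style (prediction, probability) pair to a label."""
    label = "normal" if prediction == 1 else "anomalous"
    return {"label": label, "probability": probability}

# Hypothetical scored rows: (prediction, probability) pairs.
rows = [(1, 0.93), (0, 0.88), (1, 0.51)]
results = [interpret_anomaly(p, pr) for p, pr in rows]
print([r["label"] for r in results])  # ['normal', 'anomalous', 'normal']
```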
This\n | behavior reflects the fact that the model is trained with normal data.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | 
Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n\nFUNCTIONS\n boxplot(x, notch=None, sym=None, vert=None, whis=None, positions=None, widths=None, patch_artist=None, usermedians=None, conf_intervals=None, meanline=None, showmeans=None, showcaps=None, showbox=None, showfliers=None, boxprops=None, labels=None, flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None, manage_ticks=True, autorange=False, zorder=None)\n Makes a box and whisker plot.\n \n For every column of ``x`` or for 
every column object in ``x``, makes a box and whisker plot.\n \n Parameters\n ----------\n x : oml.DataFrame or oml.Float or list of oml.Float\n The data to plot.\n notch : bool, False (default), optional\n If True, produces a notched box plot. Otherwise, a rectangular\n boxplot is produced. By default, the confidence intervals are approximated\n as ``median +/-1.57 IQR/sqrt(n)`` where ``n`` is the number of not-null/NA\n values in the column. \n conf_intervals : array-like, optional\n Array or sequence whose first dimension is equal to the number of columns\n in ``x`` and whose second dimension is 2.\n labels : sequence, optional\n Length must be equal to the number of columns in ``x``. When an element of\n ``labels`` is not None, the default label of the column, which is the name\n of the column, is overridden.\n \n Notes\n -----\n For information on the other parameters, see documentation for :py:func:`matplotlib.pyplot.boxplot`.\n \n Returns\n -------\n ax : :py:class:`matplotlib.axes.Axes`\n The :py:class:`matplotlib.axes.Axes` instance of the boxplot figure.\n result : dict\n A dict mapping each component of the boxplot to the corresponding list of\n :py:class:`matplotlib.lines.Lines2D` instances created.\n \n check_embed()\n Indicates whether embedded Python is set up in the connected Oracle Database.\n \n Returns\n -------\n embed_status : bool or None\n None when not connected.\n \n connect(user=None, password=None, host=None, port=None, sid=None, service_name=None, dsn=None, encoding='UTF-8', nencoding='UTF-8', automl=None, **kwargs)\n Establishes an Oracle Database connection.\n \n Just as with :py:func:`cx_Oracle.connect`, the user, password, and data\n source name can be provided separately or with host, port, sid or\n service_name.\n \n There can be only one active connection. Calling this method when an\n active connection already exists replaces the active connection with\n a new one. 
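The notch confidence-interval approximation quoted in the boxplot description above, median ± 1.57·IQR/√n, is easy to compute with the standard library alone. A sketch under the stated formula (the quartile method is an assumption; matplotlib's internals may differ):

```python
# Sketch of the notch approximation quoted in the boxplot description:
# median +/- 1.57 * IQR / sqrt(n), where n counts the non-null values.
import math
import statistics

def notch_interval(values):
    values = [v for v in values if v is not None]   # drop null/NA values
    n = len(values)
    med = statistics.median(values)
    q1, _, q3 = statistics.quantiles(values, n=4)   # quartiles -> IQR
    half_width = 1.57 * (q3 - q1) / math.sqrt(n)
    return med - half_width, med + half_width

lo, hi = notch_interval([1, 2, 3, 4, 5, 6, 7, 8, 9])
print(lo, hi)   # interval centered on the median, 5
```

If the notches of two boxes do not overlap, the medians differ at roughly the 95% level, which is the usual reason to set `notch=True`.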
This results in the previous connection being implicitly\n disconnected with the corresponding release of resources.\n \n Parameters\n ----------\n user : str or None (default)\n password : str or None (default)\n host : str or None (default)\n Host name of the Oracle Database.\n port : int, str or None (default)\n The Oracle Database port number.\n sid : str or None (default)\n The Oracle Database SID.\n service_name : str or None (default)\n The service name to be used in the connection identifier for\n the Oracle Database.\n dsn : str or None (default)\n Data source name. The TNS entry of the database, or a TNS\n alias in the Oracle Wallet.\n encoding : str, 'UTF-8' (default)\n Encoding to use for regular database strings.\n nencoding : str, 'UTF-8' (default)\n Encoding to use for national character set database strings.\n automl : str, or bool or None (default)\n To enable automl, specify:\n * True: if ``host``, ``port``, ``sid`` or ``service_name``\n are specified and a connection pool is running for this\n (``host``, ``port``, ``sid`` or ``service_name``).\n * Data source name: for a running connection pool\n if ``dsn`` is specified with a data source name.\n * TNS alias in an Oracle Wallet: for a running connection pool\n if ``dsn`` is also specified with Wallet TNS alias.\n Otherwise, automl is disabled.\n \n Notes\n -----\n * Parameters ``sid`` and ``service_name`` are exclusive.\n * Parameters (``host``, ``port``, ``sid`` or ``service_name``),\n and ``dsn`` can only be specified exclusively.\n * Parameters ``user`` and ``password`` must be provided when\n (``host``, ``port``, ``sid`` or ``service_name``) is specified,\n or ``dsn`` (and optionally ``automl``) is specified with\n a data source name.\n * Parameters ``user`` and ``password`` should be set to empty str \"\",\n when ``dsn`` (and optionally ``automl``) is specified with\n Wallet TNS alias, to establish connection with Oracle Wallet.\n * Automl requires `Database Resident Connection Pooling 
(DRCP)\n `_\n running on the Database server.\n \n create(x, table, oranumber=True, dbtypes=None, append=False)\n Creates a table in Oracle Database from a Python data set.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n table : str\n A name for the table.\n oranumber : bool, True (default)\n If True, use SQL NUMBER for numeric columns. Otherwise, use BINARY_DOUBLE.\n Ignored if ``append`` is True.\n dbtypes : dict mapping str to str or list of str\n A list of SQL types to use on the new table. If a list, its length should\n be equal to the number of columns. If a dict, the keys are the names of the\n columns. Ignored if ``append`` is True.\n append : bool, False (default)\n Indicates whether to append the data to the existing table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types;\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with fewer than 20 rows, all rows are used\n in column type determination. 
NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n new_table : oml.DataFrame\n A proxy object that represents the newly-created table.\n \n cursor()\n Returns a cx_Oracle cursor object of the current OML database connection.\n It can be used to execute queries against Oracle Database.\n \n Returns\n -------\n cursor_obj : a cx_Oracle :ref:`cx:cursorobj`.\n \n dir()\n Returns the names of OML objects in the workspace.\n \n Returns\n -------\n obj_names : list of str\n \n disconnect(cleanup=True)\n Terminates the Oracle Database connection. By default, the OML\n objects created through this connection will be deleted.\n \n Parameters\n ----------\n cleanup : bool, True (default)\n Cleans up OML objects defined in Python's main module before\n disconnecting from the database.\n \n do_eval(func, func_owner=None, graphics=False, **kwargs)\n Runs the user-defined Python function using a Python engine spawned and \n controlled by the database environment.\n \n Parameters\n ----------\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that, if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combination of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n 
Returns\n -------\n result : Python object or oml.embed.data_image._DataImage\n If no image is rendered in the script, returns whatever Python object returned\n by the function. Otherwise, returns an oml.embed.data_image._DataImage object.\n See :ref:`more-output`.\n \n drop(table=None, view=None, model=None)\n Drops a database table, view, or model.\n \n Parameters\n ----------\n table : str or None (default)\n The name of the table to drop.\n view : str or None (default)\n The name of the view to drop.\n model : str or None (default)\n The name of the model to drop.\n \n grant(name, typ='datastore', user=None)\n Grants read privilege for a Python script or datastore.\n Requires the user to have the `PYQADMIN` Oracle Database role.\n \n Parameters\n ----------\n name : str\n The name of Python script in the Python script repository or the name of\n a datastore. The current user must be the owner of the Python script or\n datastore.\n typ : 'datastore' (default) or 'pyqscript'\n A str specifying either 'datastore' or 'pyqscript' to grant the\n read privilege. 'pyqscript' requires Embedded Python.\n user : str or None (default)\n The user to grant read privilege of the named Python script or datastore\n to. Treated as case-sensitive if wrapped in double quotes. Treated as\n case-insensitive otherwise. 
If None, grant read privilege to public.\n \n group_apply(data, index, func, func_owner=None, parallel=None, orderby=None, graphics=False, **kwargs)\n Partitions database data by the column(s) specified in ``index``\n and runs the user-defined Python function on each partition using \n Python engines spawned and controlled by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The OML DataFrame that represents the in-database data that ``func`` is\n applied on.\n index : OML data object\n The columns to partition the ``data`` before sending it to ``func``.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by \n service level in ADW.\n orderby : oml.DataFrame, oml.Float, or oml.String\n An optional argument used to specify the ordering of group partitions.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : dict \n If no image is rendered in the script, returns a dict of Python objects\n returned 
by the function. Otherwise, returns a dict of \n oml.embed.data_image._DataImage objects. See :ref:`more-output`.\n \n hist(x, bins=None, range=None, density=False, weights=None, cumulative=False, bottom=None, align='mid', orientation='vertical', rwidth=None, log=False, color=None, label=None, **kwargs)\n Plots a histogram.\n \n Computes and draws a histogram for every data set column contained in ``x``.\n \n Parameters\n ----------\n x : oml.Float\n bins : int, strictly monotonic increasing sequence, 'auto', 'doane', 'fd', 'rice', 'scott', 'sqrt', or 'sturges', optional\n * If an integer, denotes the number of equal width bins to generate.\n * If a sequence, denotes bin edges and overrides the values of ``range``.\n * If a string, denotes the estimator to use calculate the optimal number\n of bins. 'auto' is the maximum of the 'fd' and 'sturges' estimators.\n * Default is taken from the matplotlib rcParam ``hist.bins``.\n weights : oml.Float\n Must come from the same table as ``x``.\n cumulative : int, float, or boolean, False (Default)\n If greater than zero, then a histogram is computed where each bin gives\n the counts in that bin plus all bins for smaller values. If ``density``\n is also True, then the histogram is normalized so the last bin equals 1.\n If less than zero, the direction of accumulation is reversed. In this\n case, if ``density`` is True, then the histogram is normalized so that the\n first bin equals 1. \n rwidth : int, float, or None (default)\n Ratio of the width of the bars to the bin widths. Values less than 0\n is treated as 0. Values more than 1 is treated as 1. If None,\n defaults to 1.\n color : str that indicates a color spec or None (default)\n If None, use the standard line color sequence. 
\n label : str or None (default) \n The label that is applied to the first patch of the histogram.\n \n Notes\n -----\n For information on the other parameters, see documentation for :py:func:`matplotlib.pyplot.hist`.\n \n Returns\n -------\n n : :py:class:`numpy.ndarray`\n The values of the histogram bins. \n bins : :py:class:`numpy.ndarray`\n The edges of the bins. An array of length #bins + 1.\n patches : list of :py:class:`matplotlib.patches.Rectangle`\n Individual patches used to create the histogram.\n \n index_apply(times, func, func_owner=None, parallel=None, graphics=False, **kwargs)\n Runs the user-defined Python function multiple times, passing the \n run index as first argument, using Python engines spawned and controlled \n by the database environment.\n \n Parameters\n ----------\n times : int\n The number of times to execute the function.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by\n service level in ADW.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional 
arguments to ``func``.\n \n Returns\n -------\n result : list\n If no image is rendered in the script, returns a list of Python objects\n returned by the function. Otherwise, returns a list of \n oml.embed.data_image._DataImage objects. See :ref:`more-output`.\n \n isconnected(check_automl=False)\n Indicates whether an active Oracle Database connection exists.\n \n Parameters\n ----------\n check_automl: bool, False (default)\n Indicates whether to check the connection is automl-enabled.\n \n Returns\n -------\n connected : bool\n \n push(x, oranumber=True, dbtypes=None)\n Pushes data into Oracle Database.\n \n Creates an internal table in Oracle Database and inserts the data\n into the table. The table exists as long as an OML object (either\n in the Python client or saved in the datastore) references the table.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n oranumber : bool\n If True (default), use SQL NUMBER for numeric columns. Otherwise\n use BINARY_DOUBLE. Ignored if ``append`` is True.\n dbtypes : dict or list of str\n The SQL data types to use in the table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. 
VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types,\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with less than 20 rows, all rows are used\n in column type determination. NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n temp_table : oml.DataFrame\n \n revoke(name, typ='datastore', user=None)\n Revokes read privilege for a Python script or datastore.\n Requires the user to have the `PYQADMIN` Oracle Database role.\n \n Parameters\n ----------\n name : str\n The name of Python script in the Python script repository or the name of\n a datastore. The current user must be the owner of the Python script or\n datastore.\n typ : 'datastore' (default) or 'pyqscript'\n A str specifying either 'datastore' or 'pyqscript' to revoke the\n read privilege. 'pyqscript' requires Embedded Python.\n user : str or None (default)\n The user to revoke read privilege of the named Python script or datastore\n from. Treated as case-sensitive if wrapped in double quotes. Treated as\n case-insensitive otherwise. 
If None, revoke read privilege from public.\n \n row_apply(data, func, func_owner=None, rows=1, parallel=None, graphics=False, **kwargs)\n Partitions database data into chunks of rows and runs the user-defined \n Python function on each chunk using Python engines spawned and controlled \n by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The OML DataFrame that represents the in-database data that ``func``\n is applied on.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n rows : int, 1 (default)\n The maximum number of rows in each chunk.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by \n service level in ADW.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : pandas.DataFrame or a list of oml.embed.data_image._DataImage\n If no image is rendered in the script, returns a :py:class:`pandas.DataFrame`.\n Otherwise, returns a list of oml.embed.data_image._DataImage objects.\n See :ref:`more-output`.\n \n sync(schema=None, 
regex_match=False, **kwargs)\n Creates a DataFrame proxy object in Python that represents an Oracle\n Database data set.\n \n The data set can be one of the following: a database table, view, or query.\n \n Parameters\n ----------\n schema : str or None (default)\n The name of the schema where the database object exists;\n if None, then the current schema is used.\n regex_match : bool, False (default)\n Synchronizes tables or views that match a regular expression.\n Ignored if ``query`` is used.\n table, view, query : str or None (default)\n The name of a table, of a view, or of an Oracle SQL query to select\n from the database. When ``regex_match`` is True, this specifies the\n name pattern. Exactly one of these parameters must be a str and the\n other two must be None.\n \n Notes\n -----\n When ``regex_match`` is True, synchronizes the matched tables or views\n to a dict with the table or view name as the key.\n \n Returns\n -------\n data_set : oml.DataFrame, or if ``regex_match`` is used, returns\n a dict of oml.DataFrame\n \n table_apply(data, func, func_owner=None, graphics=False, **kwargs)\n Runs the user-defined Python function with data pulled from \n a database table or view using a Python engine spawned and \n controlled by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The oml.DataFrame that represents the data ``func`` is applied on.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are 
included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : Python object or oml.embed.data_image._DataImage\n If no image is rendered in the script, returns whatever Python object returned\n by the function. Otherwise, returns an oml.embed.data_image._DataImage object.\n See :ref:`more-output`.\n\nDATA\n __all__ = ['connect', 'disconnect', 'isconnected', 'check_embed', 'cur...\n __build_serial__ = '1.0_08122021_1858'\n\nVERSION\n 1.0\n\nFILE\n /usr/local/lib/python3.9/site-packages/oml/__init__.py\n\n\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1634618100","id":"20211001-190306_1526941016","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:47+0000","dateFinished":"2021-09-22T20:18:48+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:385"},{"text":"%md\n## Learn More\n\n* Get Started with OML4Py and OML Notebooks\n* Oracle Machine Learning Notebooks\n \n**Last Updated Date** - September 2021\n \nCopyright (c) 2021 Oracle Corporation \n###### The Universal Permissive License (UPL), Version 1.0\n---","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:48+0000","config":{"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"results":{},"enabled":true,"editorHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Learn More

\n\n

Last Updated Date - September 2021

\n

Copyright (c) 2021 Oracle Corporation

\n
The Universal Permissive License (UPL), Version 1.0
\n
\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_1016371808","id":"20211001-190306_68494635","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:48+0000","dateFinished":"2021-09-22T20:18:48+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:386"},{"text":"%md\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:48+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"md","editOnDblClick":false},"editorMode":"ace/mode/markdown"},"settings":{"params":{},"forms":{}},"interrupted":false,"jobName":"paragraph_1633114986450_-1060179910","id":"20211001-190306_1958773210","dateCreated":"2021-04-23T13:27:18+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":false,"$$hashKey":"object:387"}],"id":"79628","noteParams":{},"noteForms":{},"angularObjects":{"ORA221A1BC345:OMLUSER:78556":[],"ORA36CEFD120D:OMLUSER:78556":[],"ORA9405AD2E1E:OMLUSER:78556":[],"MDW276C5A6A4D:shared_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{},"name":"Lab 1: Get Started with OML4Py on Autonomous Database"} \ No newline at end of file +{"paragraphs":[{"text":"%md\n## **Initiate a call to the Python interpreter**\nTo run Python commands in a notebook, you must first connect to the Python interpreter. This occurs as a result of running your first `%python` paragraph. To use OML4Py, you must import the `oml` module, which automatically establishes a connection to your database. In an Oracle Machine Learning notebook, you can add multiple paragraphs, and each paragraph can be connected to different interpreters such as SQL or Python. 
This example shows you how to:\n\n* Connect to a Python interpreter to run Python commands in a notebook\n* Import the Python modules—oml, pandas, numpy, and matplotlib\n* Check if the oml module is connected to the database\n\nNote: `z` is a reserved keyword and must not be used as a variable in %python paragraphs in Oracle Machine Learning Notebooks. You will see the `z.show()` function used in the examples to display Python object and proxy object content.\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:30+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"md","editOnDblClick":false},"editorMode":"ace/mode/markdown","editorHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Initiate a call to the Python interpreter

\n

To run Python commands in a notebook, you must first connect to the Python interpreter. This occurs as a result of running your first %python paragraph. To use OML4Py, you must import the oml module, which automatically establishes a connection to your database. In an Oracle Machine Learning notebook, you can add multiple paragraphs, and each paragraph can be connected to different interpreters such as SQL or Python. This example shows you how to:

\n
    \n
  • Connect to a Python interpreter to run Python commands in a notebook
  • \n
  • Import the Python modules—oml, pandas, numpy, and matplotlib
  • \n
  • Check if the oml module is connected to the database
  • \n
\n

Note: z is a reserved keyword and must not be used as a variable in %python paragraphs in Oracle Machine Learning Notebooks. You will see the z.show() function used in the examples to display Python object and proxy object content.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986446_2142145532","id":"20211001-190306_2044056072","dateCreated":"2021-09-21T20:48:10+0000","dateStarted":"2021-09-22T20:18:31+0000","dateFinished":"2021-09-22T20:18:33+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"focus":true,"$$hashKey":"object:376"},{"text":"%python\n\nimport oml","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:33+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true,"editorHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[]},"interrupted":false,"jobName":"paragraph_1633114986450_1302574953","id":"20211001-190306_576066479","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:34+0000","dateFinished":"2021-09-22T20:18:42+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:377"},{"text":"%md\n## **Verify Connection to the Autonomous Database**\nUsing the default interpreter bindings, OML Notebooks automatically establishes a database connection for the notebook. \n\nTo verify the Python interpreter has established a database connection through the `oml` module, run the command shown below. If the notebook is connected, the command returns `True`. \n\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:42+0000","config":{"tableHide":false,"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Verify Connection to the Autonomous Database

\n

Using the default interpreter bindings, OML Notebooks automatically establishes a database connection for the notebook.

\n

To verify the Python interpreter has established a database connection through the oml module, run the command shown below. If the notebook is connected, the command returns True.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_2092135881","id":"20211001-190306_1271780091","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:43+0000","dateFinished":"2021-09-22T20:18:43+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:378"},{"text":"%python\n\noml.isconnected()","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:43+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"True\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_371292639","id":"20211001-190306_2058125440","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:43+0000","dateFinished":"2021-09-22T20:18:44+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:379"},{"text":"%md\n\n## **View Help Files**\nThe Python help function is used to display the documentation of packages, modules, functions, classes, and keywords. The help function has the following syntax:","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:44+0000","config":{"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

View Help Files

\n

The Python help function is used to display the documentation of packages, modules, functions, classes, and keywords. The help function has the following syntax:

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_563552517","id":"20211001-190306_759056463","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:44+0000","dateFinished":"2021-09-22T20:18:44+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:380"},{"text":"%python\n\nhelp([object])","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:44+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on list object:\n\nclass list(object)\n | list(iterable=(), /)\n | \n | Built-in mutable sequence.\n | \n | If no argument is given, the constructor creates a new empty list.\n | The argument must be an iterable if specified.\n | \n | Methods defined here:\n | \n | __add__(self, value, /)\n | Return self+value.\n | \n | __contains__(self, key, /)\n | Return key in self.\n | \n | __delitem__(self, key, /)\n | Delete self[key].\n | \n | __eq__(self, value, /)\n | Return self==value.\n | \n | __ge__(self, value, /)\n | Return self>=value.\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __getitem__(...)\n | x.__getitem__(y) <==> x[y]\n | \n | __gt__(self, value, /)\n | Return self>value.\n | \n | __iadd__(self, value, /)\n | Implement self+=value.\n | \n | __imul__(self, value, /)\n | Implement self*=value.\n | \n | __init__(self, /, *args, **kwargs)\n | Initialize self. See help(type(self)) for accurate signature.\n | \n | __iter__(self, /)\n | Implement iter(self).\n | \n | __le__(self, value, /)\n | Return self<=value.\n | \n | __len__(self, /)\n | Return len(self).\n | \n | __lt__(self, value, /)\n | Return self\n

For example,

\n
    \n
  • To view the help files for the oml.create function, type the below code.
  • \n
\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1567926762","id":"20211001-190306_437824049","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:46+0000","dateFinished":"2021-09-22T20:18:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:382"},{"text":"%python\n\nhelp(oml.create)","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:46+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on cython_function_or_method in module oml.core.methods:\n\ncreate(x, table, oranumber=True, dbtypes=None, append=False)\n Creates a table in Oracle Database from a Python data set.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n table : str\n A name for the table.\n oranumber : bool, True (default)\n If True, use SQL NUMBER for numeric columns. Otherwise, use BINARY_DOUBLE.\n Ignored if ``append`` is True.\n dbtypes : dict mapping str to str or list of str\n A list of SQL types to use on the new table. If a list, its length should\n be equal to the number of columns. If a dict, the keys are the names of the\n columns. Ignored if ``append`` is True.\n append : bool, False (default)\n Indicates whether to append the data to the existing table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. 
For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types,\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with less than 20 rows, all rows are used\n in column type determination. NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n new_table : oml.DataFrame\n A proxy object that represents the newly-created table.\n\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_904721421","id":"20211001-190306_70160754","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:46+0000","dateFinished":"2021-09-22T20:18:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:383"},{"text":"%md\n\nTo view the help files for `oml` module, type the code below.","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:46+0000","config":{"tableHide":false,"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"
\n

To view the help files for oml module, type the code below.

\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1816292524","id":"20211001-190306_1536023944","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:47+0000","dateFinished":"2021-09-22T20:18:47+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:384"},{"text":"%python\n\nhelp(oml)","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:47+0000","config":{"editorSetting":{"language":"text","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/undefined","fontSize":9,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Help on package oml:\n\nNAME\n oml - Oracle Machine Learning for Python\n\nDESCRIPTION\n A component of the Oracle Advanced Analytics Option, Oracle Machine Learning\n for Python makes the open source Python programming language and environment\n ready for enterprise in-database data. Designed for problems involving both\n large and small volumes of data, Oracle Machine Learning for Python integrates\n Python with Oracle Database. Python users can run Python commands and scripts\n for statistical, machine learning, and graphical analyses on data stored in\n Oracle Database. Python users can develop, refine, and deploy Python scripts\n that leverage the parallelism and scalability of Oracle Database to automate\n data analysis. Data analysts and data scientists can run Python modules and\n develop and operationalize Python scripts for machine learning applications\n in one step without having to learn SQL. Oracle Machine Learning for Python\n performs function pushdown for in-database execution of core Python and\n popular Python module functions. 
Being integrated with Oracle Database,\n Oracle Machine Learning for Python can run any Python module via embedded\n Python while the database manages the data served to the Python engines.\n\nPACKAGE CONTENTS\n algo (package)\n automl (package)\n core (package)\n ds (package)\n embed (package)\n graphics (package)\n mlx (package)\n script (package)\n\nCLASSES\n oml.algo.model.odmModel(builtins.object)\n oml.algo.ai.ai\n oml.algo.ar.ar\n oml.algo.dt.dt\n oml.algo.em.em\n oml.algo.esa.esa\n oml.algo.glm.glm\n oml.algo.km.km\n oml.algo.nb.nb\n oml.algo.nn.nn\n oml.algo.rf.rf\n oml.algo.svd.svd\n oml.algo.svm.svm\n oml.core.number._Number(oml.core.series._Series)\n oml.core.float.Float\n oml.core.series._Series(oml.core.vector._Vector)\n oml.core.boolean.Boolean\n oml.core.bytes.Bytes\n oml.core.string.String\n oml.core.vector._Vector(builtins.object)\n oml.core.frame.DataFrame\n \n class Boolean(oml.core.series._Series)\n | Boolean series data class.\n | \n | Represents a single column of 0, 1, and NULL values in Oracle Database.\n | \n | Method resolution order:\n | Boolean\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __and__(self, other)\n | \n | __init__(self)\n | \n | __invert__(self)\n | \n | __or__(self, other)\n | \n | all(self)\n | Checks whether all elements in the Boolean series data object are True.\n | \n | Returns\n | =======\n | all: bool\n | \n | any(self)\n | Checks whether any elements in the Boolean series data object are True.\n | \n | Returns\n | -------\n | any: bool\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of bool and None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data 
object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. 
Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), ``freq``,\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and the sum of them are no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there is any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is not a single nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class Bytes(oml.core.series._Series)\n | Bytes(other, dbtype)\n | \n | Binary series data class.\n | \n | Represents a single column of RAW or BLOB data in Oracle Database.\n | \n | Method resolution order:\n | Bytes\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, other, dbtype)\n | Convert underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.Bytes\n | dbtype : 'raw' or 'blob'\n | \n | len(self)\n | Computes the length of each byte string.\n | \n | Returns\n | -------\n | length : oml.Float\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of bytes and None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. 
Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * a OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), ``freq``,\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and the sum of them are no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there is any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is not a single nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class DataFrame(oml.core.vector._Vector)\n | DataFrame(other)\n | \n | Tabular dataframe class.\n | \n | Represents multiple columns of Boolean, Bytes, Float, and/or String data.\n | \n | Method resolution order:\n | DataFrame\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | KFold(self, n_splits=3, seed=12345, strata_cols=None, use_hash=True, hash_cols=None, nvl=None)\n | Splits the oml.DataFrame object randomly into k consecutive folds.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | strata_cols : a list of string values or None (default)\n | Names of the columns used for stratification. If None, stratification\n | is not performed. Must be None when ``use_hash`` is False.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. If False, use a random\n | number to split the data.\n | hash_cols : a list of string values or None (default)\n | If a list of string values, use the values from these named columns\n | to hash to split the data. 
If None, use the values from the 1st 10\n | columns to hash.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of k 2-tuples of oml.DataFrame objects\n | \n | Raises\n | ------\n | ValueError\n | * If ``hash_cols`` refers to a single LOB column.\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean, str, list of str, 2-tuple\n | * oml.Boolean : select only the rows satisfying the condition. Must be from the same data\n | source as self.\n | * str : select the column of the same name\n | * list of str : select the columns whose names matches the elements in the list.\n | * 2-tuple : The first element in the tuple denotes which rows to select.\n | It can be either a oml.Boolean or ``slice(None)`` (this selects all\n | rows). The second element in the tuple denotes which columns to select.\n | It can be either ``slice(None)`` (this selects all columns), str, list\n | of str, int, or list of int. If int or list of int, selects the\n | column(s) in the corresponding position(s).\n | \n | Returns\n | -------\n | subset : OML data object\n | Is a oml.DataFrame if has more than one column, otherwise is a OML series data object.\n | \n | __init__(self, other)\n | Convert OML series data object(s) to oml.DataFrame.\n | \n | Parameters\n | ----------\n | other : OML series data object or dict mapping str to OML series data objects\n | * OML series data object : initializes a single-column oml.DataFrame containing the\n | same data. \n | * dict : initializes a oml.DataFrame that comprises all the OML series data objects\n | in the dict in an arbitrary order. 
Each column in the resulting oml.DataFrame has as\n | its column name its corresponding key in the dict.\n | \n | corr(self, method='pearson', min_periods=1, skipna=True)\n | Computes pairwise correlation between all columns where possible,\n | given the type of coefficient.\n | \n | Parameters\n | ----------\n | method : 'pearson' (default), 'kendall', or 'spearman'\n | * pearson : Uses Pearson's correlation coefficient. Can only calculate\n | correlations between Float or Boolean columns.\n | * kendall : Uses Kendall's tau-b coefficient.\n | * spearman : Uses Spearman's rho coefficient.\n | min_periods : int, optional, 1 (default)\n | The minimum number of observations required per pair of columns to \n | have a valid result.\n | skipna : bool, True (default)\n | If True, NaN and (+/-)Inf values are mapped to NULL.\n | \n | Returns\n | -------\n | y : :py:class:`pandas.DataFrame`\n | \n | count(self, numeric_only=False)\n | Returns the number of elements that are not NULL for each column.\n | \n | Parameters\n | ----------\n | numeric_only : boolean, False (default)\n | Includes only Float and Boolean columns.\n | \n | Returns\n | -------\n | count : :py:class:`pandas.Series`\n | \n | crosstab(self, index, columns=None, values=None, rownames=None, colnames=None, aggfunc=None, margins=False, margins_name='All', dropna=True, normalize=False, pivot=False)\n | Computes a simple cross-tabulation of two or more columns. By default,\n | computes a frequency table for the columns unless a column and\n | an aggregation function have been passed.\n | \n | Parameters\n | ----------\n | index : str or list of str\n | Names of the column(s) of the DataFrame to group by. If ``pivot`` is\n | True, these columns are displayed in the rows of the result table.\n | columns : str or list of str, optional\n | Names of the other column(s) of the Dataframe to group by. 
If ``pivot``\n | is True, these columns are displayed in the columns of the result\n | table.\n | values : str, optional\n | The name of the column to aggregate according to the grouped columns.\n | Requires ``aggfunc`` to be specified.\n | aggfunc : OML DataFrame aggregation function object, optional\n | The supported oml.DataFrame aggregation functions include: count, \n | max, mean, median, min, nunique, std and sum. To use ``aggfunc``, \n | specify the function object using its full name, for example, \n | ``oml.DataFrame.sum``, ``oml.DataFrame.nunique``, and so on.\n | If specified, requires ``values`` to also be specified.\n | rownames : str or list of str, None (default)\n | If specified, must match number of names in ``index``. If None, names in\n | ``index`` are used. \n | colnames : str or list of str, None (default)\n | If specified, must match number of strings in ``columns``. If None,\n | names in ``columns`` are used. Ignored if ``pivot`` is True.\n | margins : bool, False (default)\n | Includes row and column margins (subtotals)\n | margins_name : str, 'All' (default)\n | Names of the row and column that contain the totals when ``margins``\n | is True. Should be a value not contained in any of the columns specified\n | by ``index`` and ``columns``. \n | dropna : bool, True (default)\n | In addition, if ``pivot`` is True, drops columns from the result\n | table if all the entries of the column are NaN.\n | normalize : boolean, {'all', 'index', 'columns'} or {0, 1}, False (default)\n | Normalizes by dividing the values by their sum.\n | \n | * If 'all' or True, normalizes over all values.\n | * If 'index' or 0, normalizes over each row.\n | * If 'columns' or 1, normalizes over each column.\n | * If ``margins`` is True, also normalizes margin values.\n | pivot : bool, False (default)\n | If True, returns results in pivot table format. 
Else, returns results in\n | relational table format.\n | \n | Returns\n | -------\n | crosstab : oml.DataFrame\n | \n | See Also\n | --------\n | DataFrame.pivot_table\n | \n | cumsum(self, by, ascending=True, na_position='last', skipna=True)\n | Gets the cumulative sum of each ``Float`` or ``Boolean`` column after the\n | ``DataFrame`` object is sorted.\n | \n | Parameters\n | ----------\n | by : str or list of str\n | A single column name or list of column names by which to sort the \n | DataFrame object. Columns in ``by`` do not have to be ``Float`` or \n | ``Boolean``.\n | ascending : bool or list of bool, True (default)\n | If True, sort is in ascending order, otherwise descending. Specify \n | list for multiple sort orders. If this is a list of bools, must match\n | the length of ``by``.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaN and None at the beginning, ``last`` places them \n | at the end.\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | cumsum : oml.DataFrame\n | \n | describe(self, percentiles=None, include=None, exclude=None)\n | Generates descriptive statistics that summarize the central tendency,\n | dispersion, and shape of the data in each column.\n | \n | Parameters\n | ----------\n | percentiles : bool, list-like of numbers, or None (default), optional \n | The percentiles to include in the output for `Float` columns. All\n | must be between 0 and 1. If ``percentiles`` is None or True,\n | ``percentiles`` is set to ``[.25, .5, .75]``, which corresponds\n | to the 25th, 50th, and 75th percentiles. If `percentiles` is False,\n | only the ``min`` and ``max`` stats and no other percentiles are\n | included.\n | include : 'all', list-like of OML column types or None (default), optional\n | Types of columns to include in the result. 
Available options:\n | \n | - 'all': Includes all columns.\n | - List of OML column types : Only includes specified types in\n | the results.\n | - None (default) : If ``Float`` columns exist and ``exclude`` is\n | None, only includes ``Float`` columns. Otherwise, includes all\n | columns.\n | exclude : list of OML column types or None (default), optional\n | Types of columns to exclude from the result. Available options:\n | \n | - List of OML column types : Excludes specified types from\n | the results.\n | - None (default) : Result excludes nothing.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.DataFrame`\n | The concatenation of the summary statistics for each column.\n | \n | See Also\n | --------\n | DataFrame.count\n | DataFrame.max\n | DataFrame.min\n | DataFrame.mean\n | DataFrame.std\n | DataFrame.select_types\n | \n | drop(self, columns)\n | Drops specified columns.\n | \n | Parameters\n | ----------\n | columns : str or list of str\n | Columns to drop from the object.\n | \n | Returns\n | -------\n | dropped : oml.DataFrame\n | \n | drop_duplicates(self, subset=None)\n | Removes duplicated rows from oml.DataFrame object.\n | \n | Use ``subset`` to consider a set of rows duplicates if they have\n | identical values for only a subset of the columns. In this case, after\n | deduplication, each of the other columns contains the minimum value\n | found across the set.\n | \n | Parameters\n | ----------\n | subset : str or list of str, optional\n | Columns to consider for identifying duplicates. 
If None, use all\n | columns.\n | \n | Returns\n | -------\n | deduplicated : oml.DataFrame\n | \n | dropna(self, how='any', thresh=None, subset=None)\n | Removes rows containing missing values.\n | \n | Parameters\n | ----------\n | how : {'any', 'all'}, 'any' (default)\n | Determines if a row is removed from the DataFrame when at least one or all\n | values are missing.\n | thresh : int, optional\n | Requires that many missing values to drop a row from the DataFrame.\n | subset : list, optional\n | The names of the columns to check for missing values.\n | \n | Returns\n | -------\n | dropped : oml.DataFrame\n | DataFrame without missing values.\n | \n | kurtosis(self, skipna=True)\n | Returns the sample kurtosis of the values for each ``Float``\n | column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | kurt : :py:class:`pandas.Series`\n | \n | max(self, skipna=True, numeric_only=False)\n | Returns the maximum value in each column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | numeric_only : boolean, False (default)\n | Includes only ``Float`` and ``Boolean`` columns. 
\n | \n | Returns\n | -------\n | max : :py:class:`pandas.Series`\n | \n | mean(self, skipna=True)\n | Returns the mean of the values for each ``Float`` or ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | mean : :py:class:`pandas.Series`\n | \n | median(self, skipna=True)\n | Returns the median of the values for each ``Float`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Exclude NaN values when computing the result\n | \n | Returns\n | -------\n | median : :py:class:`pandas.Series`\n | \n | merge(self, other, on=None, left_on=None, right_on=None, how='left', suffixes=('_l', '_r'), nvl=True)\n | Joins data sets.\n | \n | Parameters\n | ----------\n | other : an OML data set object\n | on : str or list of str, optional\n | Column names to join on. Must be found in both ``self`` and ``other``.\n | left_on : str or list of str, optional\n | Column names of ``self`` to join on.\n | right_on : str or list of str, optional\n | Column names of ``other`` to join on. 
If specified, must specify the same\n | number of columns as ``left_on``.\n | how : 'left' (default), 'right', 'inner', 'full'\n | * left : left outer join\n | * right : right outer join\n | * full : full outer join\n | * inner : inner join\n | \n | If ``on`` and ``left_on`` are both None, then ``how`` is ignored,\n | and a cross join is performed.\n | suffixes : sequence of length 2\n | Suffix to apply to column names on the left and right side,\n | respectively.\n | nvl : True (default), False, dict \n | * True : join condition includes NULL values\n | * False : join condition excludes NULL values\n | * dict : specifies, keyed by column name, the values that join columns use in place of NULL values\n | \n | Returns\n | -------\n | merged : oml.DataFrame\n | \n | min(self, skipna=True, numeric_only=False)\n | Returns the minimum value in each column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | numeric_only : boolean, False (default)\n | Includes only ``Float`` and ``Boolean`` columns\n | \n | Returns\n | -------\n | min : :py:class:`pandas.Series`\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values for each column of the DataFrame.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : :py:class:`pandas.Series`\n | \n | pivot_table(self, index, columns=None, values=None, aggfunc=oml.DataFrame.mean, margins=False, dropna=True, margins_name='All')\n | Converts data set to a spreadsheet-style pivot table. 
Due to the Oracle\n | 1000 column limit, pivot tables with more than 1000 columns are\n | automatically truncated to display the categories with the most entries\n | for each value column.\n | \n | Parameters\n | ----------\n | index : str or list of str\n | Names of columns containing the keys to group by on the pivot table\n | index.\n | columns : str or list of str, optional\n | Names of columns containing the keys to group by on the pivot table\n | columns. \n | values : str or list of str, optional\n | Names of columns to aggregate on. If None, values are inferred \n | as all columns not in ``index`` or ``columns``.\n | aggfunc : OML DataFrame aggregation function or a list of them, oml.DataFrame.mean (default)\n | The supported oml.DataFrame aggregation functions include: count, max,\n | mean, median, min, nunique, std and sum. When using aggregation\n | functions, specify the function object using its full name, for example,\n | ``oml.DataFrame.sum``, ``oml.DataFrame.nunique``, and so on.\n | If ``aggfunc`` contains more than one function, each function is \n | applied to each column in ``values``. If the function does not apply to\n | the type of a column in ``values``, the result table skips applying \n | the function to the particular column. \n | margins : bool, False (default)\n | Include row and column margins (subtotals)\n | dropna : bool, True (default)\n | Unless ``columns`` is None, drop column labels from the result table if\n | all the entries corresponding to the column label are NaN for all\n | aggregations.\n | margins_name : string, 'All' (default)\n | Names of the row and column that contain the totals when ``margins``\n | is True. Should be a value not contained in any of the columns specified\n | by ``index`` and ``columns``. 
\n | \n | Returns\n | -------\n | pivoted : oml.DataFrame\n | \n | See Also\n | --------\n | DataFrame.crosstab\n | \n | pull(self, aslist=False)\n | Pulls data represented by the DataFrame from Oracle Database\n | into an in-memory Python object.\n | \n | Parameters\n | ----------\n | aslist : bool\n | If False, returns a pandas.DataFrame. Otherwise, returns the data\n | as a list of tuples.\n | \n | Returns\n | -------\n | pulled_obj : :py:class:`pandas.DataFrame` or list of tuples\n | \n | rename(self, columns)\n | Renames columns.\n | \n | Parameters\n | ----------\n | columns : dict or list\n | ``dict`` contains old and new column names.\n | ``list`` contains the new names for all the columns in order.\n | \n | Notes\n | -----\n | The method changes the column names of the caller DataFrame object too.\n | \n | Returns\n | -------\n | renamed : DataFrame\n | \n | replace(self, old, new, default=None, columns=None)\n | Replace values given in `old` with `new` in specified columns.\n | \n | Parameters\n | ----------\n | columns : list of str or None (default)\n | Columns to look for values in `old`. If None, then all columns\n | of DataFrame will be replaced.\n | old : list of float, or list of str\n | Specifying the old values. When specified with a list of float, it \n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying \n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. 
Must be set when `old` and `new` contain \n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.DataFrame\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | in the target columns\n | * if `default` is specified with a non-None value which has data type\n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different \n | data types\n | \n | round(self, decimals=0)\n | Rounds oml.Float values in the oml.DataFrame object to \n | the specified decimal place.\n | \n | Parameters\n | ----------\n | decimals : non-negative int\n | \n | Returns\n | -------\n | rounded : oml.DataFrame\n | \n | sample(self, frac=None, n=None, random_state=None)\n | Returns a random sample of rows from an oml.DataFrame object.\n | \n | Parameters\n | ----------\n | frac : a float value\n | Fraction of rows to return. The value should be between 0 and 1.\n | Cannot be used with ``n``.\n | n : an integer value\n | Number of rows to return. Default = 1 if ``frac`` is None.\n | Cannot be used with ``frac``. \n | random_state : int, 12345 (default)\n | The seed to use for random sampling.\n | \n | Returns\n | -------\n | sample_data : an oml.DataFrame object\n | Contains the randomly sampled rows from the oml.DataFrame object.\n | The fraction of returned rows is specified by the ``frac`` parameter.\n | \n | select_types(self, include=None, exclude=None)\n | Returns the subset of columns, including or excluding columns based on their OML\n | type.\n | \n | Parameters\n | ----------\n | include, exclude : list of OML column types\n | A selection of OML column types to be included/excluded. At least one of\n | these parameters must be supplied. 
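As an aside, the `old`/`new`/`default` matching rule documented above for ``replace`` can be sketched locally in plain Python. This is only an illustration of the documented semantics on an ordinary list (the helper name ``replace_values`` is hypothetical); the real method runs in-database on an oml.DataFrame:

```python
def replace_values(column, old, new, default=None):
    # Sketch of the documented replace() matching rule: values found in
    # `old` map to the corresponding entry in `new`; non-matched values
    # keep their original value unless `default` is given.
    if len(old) != len(new):
        raise ValueError("`old` and `new` must have the same length")
    mapping = dict(zip(old, new))
    return [mapping.get(v, v if default is None else default) for v in column]

# 1.0 and 2.0 are recoded; 3.0 is untouched because default is None.
print(replace_values([1.0, 2.0, 3.0], old=[1.0, 2.0], new=[10.0, 20.0]))
# With a default, every non-matched value is replaced by it instead.
print(replace_values([1.0, 2.0, 3.0], old=[1.0], new=[10.0], default=0.0))
```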
\n | \n | Raises\n | ------\n | ValueError\n | * If both ``include`` and ``exclude`` are None.\n | * If ``include`` and ``exclude`` have overlapping elements.\n | \n | Returns\n | -------\n | subset : oml.DataFrame\n | \n | skew(self, skipna=True)\n | Returns the sample skewness of the values for each ``Float``\n | column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | skew : :py:class:`pandas.Series`\n | \n | sort_values(self, by, ascending=True, na_position='last')\n | Specifies the order in which rows appear in the result set.\n | \n | Parameters\n | ----------\n | by : str or list of str\n | Column names or list of column names.\n | ascending : bool or list of bool, True (default)\n | If True, sort is in ascending order. Sort is in descending order\n | otherwise. Specify list for multiple sort orders. If this is a list of\n | bools, must match the length of ``by``.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaNs and Nones at the beginning; ``last`` places them \n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : oml.DataFrame\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, strata_cols=None, use_hash=True, hash_cols=None, nvl=None)\n | Splits the oml.DataFrame object randomly into multiple data sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values, (0.7, 0.3) (default)\n | All the numbers must be positive, and their sum must be no more than \n | 1. Each number represents the ratio of the split data in one set. \n | seed : int, 12345 (default)\n | The seed to use for random splitting.\n | strata_cols : a list of string values or None (default)\n | Names of the columns used for stratification. If None, stratification\n | is not performed. Must be None when ``use_hash`` is False.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use random\n | numbers to split the data.\n | hash_cols : a list of string values or None (default)\n | If a list of string values, hash the values from these named columns\n | to split the data. If None, hash the values from the first 10\n | columns.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are hashed in place of NULL.\n | \n | Returns\n | -------\n | split_data : a list of oml.DataFrame objects\n | Each contains the portion of the data determined by the corresponding ratio.\n | \n | Raises\n | ------\n | ValueError\n | * If ``hash_cols`` refers to a single LOB column.\n | \n | std(self, skipna=True)\n | Returns the sample standard deviation of the values of each ``Float`` or\n | ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | std : :py:class:`pandas.Series`\n | \n | sum(self, skipna=True)\n | Returns the sum of the values of each ``Float`` or ``Boolean`` column.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result\n | \n | Returns\n | -------\n | sum : :py:class:`pandas.Series`\n | \n | t_dot(self, other=None, skipna=True, pull_from_db=True)\n | Calculates the matrix cross-product of self with other.\n | \n | Equivalent to transposing self first, then multiplying it with other. \n | \n | Parameters\n | ----------\n | other : oml.DataFrame, optional\n | If not specified, self is used.\n | skipna : bool, True (default)\n | Treats NaN entries as 0.\n | pull_from_db : bool, True (default)\n | If True, returns a pandas.DataFrame. 
If False, returns an\n | oml.DataFrame consisting of three columns:\n | \n | - ROWID: the row number of the resulting matrix \n | - COLID: the column number of the resulting matrix \n | - VALUE: the value at the corresponding position of the matrix \n | \n | Returns\n | -------\n | prod : float, :py:class:`pandas.Series`, or :py:class:`pandas.DataFrame`\n | \n | See Also\n | --------\n | oml.Float.dot\n | \n | ----------------------------------------------------------------------\n | Readonly properties defined here:\n | \n | columns\n | The column names of the data set.\n | \n | dtypes\n | The types of the columns of the data set.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __len__(self)\n | Returns the number of rows. Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines the current OML data object with the ``other`` data objects column-wise.\n | \n | The current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and 
DataFrame objects,\n | the column name of the concatenated OML series object is replaced with the str,\n | and the column names of the concatenated OML DataFrame object are prefixed with the str.\n | Specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflicting column names.\n | If True, appends the suffix ``[column_index]`` to duplicated column names.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they are renamed to ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not from the same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. 
Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class Float(oml.core.number._Number)\n | Float(other, dbtype=None)\n | \n | Numeric series data class.\n | \n | Represents a single column of NUMBER, BINARY_DOUBLE or BINARY_FLOAT data \n | in Oracle Database.\n | \n | Method 
resolution order:\n | Float\n | oml.core.number._Number\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __abs__(self)\n | Return the absolute value of every element in ``self``.\n | \n | Equivalent to ``abs(self)``.\n | \n | Returns\n | -------\n | absval : oml.Float\n | \n | __add__(self, other)\n | Equivalent to ``self + other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : add the scalar to each element in ``self``. \n | * oml.Float : must come from the same data source. Add corresponding\n | elements in ``self`` and ``other``.\n | \n | Returns\n | -------\n | sum : oml.Float\n | \n | __contains__(self, item)\n | Check whether all elements in ``item`` exist in the Float series.\n | \n | Equivalent to ``item in self``.\n | \n | Parameters\n | ----------\n | item : int/float, list of int/float, oml.Float\n | Values to check for in the series\n | \n | Returns\n | -------\n | contains : bool\n | Returns `True` if all elements exist, otherwise `False`\n | \n | __divmod__(self, other)\n | Equivalent to ``divmod(self, other)``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Find the quotient and remainder when each element in ``self`` is\n | divided by the scalar.\n | * oml.Float : must come from the same data source. Find the quotient and\n | remainder when each element in ``self`` is divided by the corresponding element\n | in ``other``.\n | \n | Returns\n | -------\n | divrem : oml.DataFrame\n | The first column contains the floor of the quotient, and the second column\n | contains the remainder.\n | \n | __floordiv__(self, other)\n | Equivalent to ``self // other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : divide each element in ``self`` by the scalar. \n | * oml.Float : must come from the same data source. 
Divide each element in\n | ``self`` by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | quotient : oml.Float\n | \n | __init__(self, other, dbtype=None)\n | Convert to oml.Float, or convert the underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.Boolean or oml.Float\n | * oml.Boolean : initialize an oml.Float object that has value 1 (resp. 0)\n | wherever ``other`` has value True (resp. False).\n | * oml.Float : initialize an oml.Float object with the same data as \n | ``other``, except the underlying Oracle Database type has been converted\n | to the one specified by ``dbtype``. \n | dbtype : 'number' or 'binary_double'\n | Ignored if ``other`` is type ``oml.Boolean``. Must be specified if ``other``\n | is type ``oml.Float``.\n | \n | __matmul__(self, other)\n | Equivalent to ``self @ other`` and ``self.dot(other)``.\n | \n | Returns the inner product with an oml.Float, or matrix multiplication with an\n | oml.DataFrame.\n | \n | Parameters\n | ----------\n | other : oml.Float or oml.DataFrame\n | \n | Returns\n | -------\n | matprod : oml.Float\n | \n | See Also\n | --------\n | Float.dot\n | \n | __mod__(self, other)\n | Equivalent to ``self % other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Find the remainder when each element in ``self`` is divided by the\n | scalar.\n | * oml.Float : must come from the same data source. Find the remainder when each\n | element in ``self`` is divided by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | remainder : oml.Float\n | \n | __mul__(self, other)\n | Equivalent to ``self * other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : multiply the scalar with each element in ``self``. \n | * oml.Float : must come from the same data source. 
Multiply corresponding\n | elements in ``self`` and ``other``.\n | \n | Returns\n | -------\n | product : oml.Float\n | \n | __neg__(self)\n | Return the negation of every element in ``self``. Equivalent to ``-self``.\n | \n | Returns\n | -------\n | negation : oml.Float\n | \n | __pow__(self, other)\n | Equivalent to ``self ** other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : Raise each element in ``self`` to the power of the scalar. \n | * oml.Float : must come from the same data source. Raise each element in \n | ``self`` to the power of the corresponding element in ``other``.\n | \n | Returns\n | -------\n | power : oml.Float\n | \n | __sub__(self, other)\n | Equivalent to ``self - other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : subtract the scalar from each element in ``self``. \n | * oml.Float : must come from the same data source. From each element in\n | ``self``, subtract the corresponding element in ``other``.\n | \n | Returns\n | -------\n | difference : oml.Float\n | \n | __truediv__(self, other)\n | Equivalent to ``self / other``.\n | \n | Parameters\n | ----------\n | other : int, float, or oml.Float\n | * scalar : divide each element in ``self`` by the scalar. \n | * oml.Float : must come from the same data source. 
Divide each element in\n | ``self`` by the corresponding element in ``other``.\n | \n | Returns\n | -------\n | quotient : oml.Float\n | \n | ceil(self)\n | Returns the ceiling of each element in the Float series data object.\n | \n | Returns\n | -------\n | ceil : oml.Float\n | \n | cut(self, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False)\n | Returns the indices of half-open bins to which each value belongs.\n | \n | Parameters\n | ----------\n | bins : int or strictly monotonically increasing sequence of float/int\n | If int, defines number of equal-width bins in the range of this column.\n | In this case, to include the min and max value, the range is extended by\n | .1% on each side where the bin does not include the endpoint.\n | If a sequence, defines bin edges allowing for non-uniform bin-widths. In \n | this case, the range of x is not extended.\n | right : bool, True (default)\n | Indicates whether the bins include the rightmost edge or the leftmost\n | edge.\n | labels : sequence of unique str, int, or float values, False, or None (default)\n | If a sequence, must be the same length as the resulting number of bins\n | and must have values of same type. If False, bins are sequentially\n | labeled with integers. If None, bins are labeled with the intervals\n | they correspond to.\n | retbins : bool, False (default)\n | Indicates whether to return the bin edges or not. 
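The equal-width case described above for ``cut`` (an int ``bins``, with the range extended by 0.1% so the endpoint on the open side still falls inside a bin) can be sketched in plain Python. This is only an illustration of the documented behavior (the helper name ``equal_width_bin_indices`` is hypothetical), not the in-database implementation:

```python
import math

def equal_width_bin_indices(values, bins, right=True):
    # Equal-width bins over the data range, with the range extended by
    # 0.1% on the side whose endpoint would otherwise fall outside a
    # half-open bin (mirroring the documented behavior of an int `bins`).
    lo, hi = min(values), max(values)
    span = hi - lo
    if right:
        lo -= span * 0.001   # bins are (edge, edge]; pull lo below the min
    else:
        hi += span * 0.001   # bins are [edge, edge); push hi above the max
    width = (hi - lo) / bins
    if right:
        return [math.ceil((v - lo) / width) - 1 for v in values]
    return [math.floor((v - lo) / width) for v in values]

# Four values, two equal-width bins: the lower half maps to bin 0,
# the upper half to bin 1.
print(equal_width_bin_indices([1.0, 2.0, 3.0, 4.0], bins=2))
```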
\n | precision : int, 3 (default)\n | When ``labels`` is None, determines the precision of the bin labels.\n | include_lowest : bool, False (default) \n | Indicates whether the first interval should be left-inclusive.\n | \n | Returns\n | -------\n | out : oml.Float or oml.String\n | If labels are ints or floats, return oml.Float.\n | If labels are str, return oml.String.\n | bins : :py:class:`numpy.ndarray` of floats\n | Returned only if ``retbins`` is True.\n | \n | describe(self, percentiles=None)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of the OML series data distribution.\n | \n | Parameters\n | ----------\n | percentiles : list-like of numbers, optional \n | The percentiles to include in the output. All must be between 0 and 1.\n | The default is [.25, .5, .75], which corresponds to the inclusion of \n | the 25th, 50th, and 75th percentiles.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``mean``, ``std``,\n | ``min``, ``max``, and the specified ``percentiles``. The 50th\n | percentile is always included.\n | \n | dot(self, other=None, skipna=True)\n | Returns the inner product with an oml.Float. 
Matrix multiplication with an\n | oml.DataFrame.\n | \n | Can be called using ``self @ other``.\n | \n | Parameters\n | ----------\n | other : oml.Float or oml.DataFrame, optional\n | If not specified, self is used.\n | skipna : bool, True (default)\n | Treats NaN entries as 0.\n | \n | Returns\n | -------\n | dot_product : :py:class:`pandas.Series` or float\n | \n | exp(self)\n | Returns element-wise e to the power of values in the Float series data object.\n | \n | Returns\n | -------\n | exp : oml.Float\n | \n | floor(self)\n | Returns the floor of each element in the Float series data object.\n | \n | Returns\n | -------\n | floor : oml.Float\n | \n | isinf(self)\n | Detects infinite values element-wise in the Float series data object.\n | \n | Returns\n | -------\n | isinf : oml.Boolean\n | \n | isnan(self)\n | Detects NaN (not a number) elements in the Float object.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates NaN for each element.\n | \n | log(self, base=None)\n | Returns element-wise logarithm, to the given ``base``, of values\n | in the Float series data object.\n | \n | Parameters\n | ----------\n | base : int, float, optional\n | The base of the logarithm; by default, the natural logarithm.\n | \n | Returns\n | -------\n | log : oml.Float\n | \n | replace(self, old, new, default=None)\n | Replace values given in `old` with `new`.\n | \n | Parameters\n | ----------\n | old : list of float, or list of str\n | Specifying the old values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying\n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. 
If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. Must be set when `old` and `new` contain\n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.Float\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | * if `default` is specified with a non-None value which has data type \n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different\n | data types\n | \n | round(self, decimals=0)\n | Rounds oml.Float values to the specified decimal place.\n | \n | Parameters\n | ----------\n | decimals : non-negative int\n | \n | Returns\n | -------\n | rounded : oml.Float\n | \n | sqrt(self)\n | Returns the square root of each element in the Float series data object.\n | \n | Returns\n | -------\n | sqrt : oml.Float\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.number._Number:\n | \n | cumsum(self, ascending=True, na_position='last', skipna=True)\n | Gets the cumulative sum after the OML series data object is sorted.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | Sorts ascending, otherwise descending.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NaN and None at the beginning, ``last`` places them \n | at the end.\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | cumsum : oml.Float\n | \n | kurtosis(self, skipna=True)\n | Returns the sample kurtosis of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | kurt : float or nan\n | \n | mean(self, skipna=True)\n | Returns the mean of the values.\n | \n | Parameters\n | ----------\n | 
skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | mean : float or numpy.nan\n | \n | median(self, skipna=True)\n | Returns the median of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | median : float or numpy.nan\n | \n | skew(self, skipna=True)\n | Returns the sample skewness of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | skew : float or nan\n | \n | std(self, skipna=True)\n | Returns the sample standard deviation of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | std : float or numpy.nan\n | \n | sum(self, skipna=True)\n | Returns the sum of the values.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | sum : float or numpy.nan\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | \n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, 
True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | pull(self)\n | Pulls data represented by the series data object from Oracle Database\n | into an in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from the same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class String(oml.core.series._Series)\n | String(other, dbtype)\n | \n | Character series data class.\n | \n | Represents a single column of VARCHAR2, CHAR, or CLOB data in Oracle Database.\n | \n | Method resolution order:\n | String\n | oml.core.series._Series\n | oml.core.vector._Vector\n | builtins.object\n | \n | Methods defined here:\n | \n | __contains__(self, item)\n | Check whether all elements in ``item`` exist in the String series.\n | \n | Equivalent to ``item in self``.\n | \n | Parameters\n | ----------\n | item : str, list of str, oml.String\n | Values to check in series\n | \n | Returns\n | -------\n | contains : bool\n | Returns ``True`` if all elements exist, otherwise ``False``.\n | \n | __init__(self, other, dbtype)\n | Convert underlying Oracle Database type.\n | \n | Parameters\n | ----------\n | other : oml.String\n | dbtype : 'varchar2' or 'clob'\n | \n | count_pattern(self, pat, flags=0)\n | Counts the number of occurrences of the pattern in each string. 
\n | \n | Parameters\n | ----------\n | pat : str that is a valid regular expression conforming to the POSIX standard\n | flags : int, 0 (default, no flags)\n | The following :py:mod:`python:re` module flags are supported:\n | \n | - :py:data:`python:re.I`/:py:data:`python:re.IGNORECASE` : Performs case-insensitive matching.\n | - :py:data:`python:re.M`/:py:data:`python:re.MULTILINE` : Treats the source string as multiple lines.\n | Interprets the caret (^) and dollar sign ($) as the start and end,\n | respectively, of any line anywhere in source string. Without this flag,\n | the caret and dollar sign match only the start and end, respectively, of\n | the source string.\n | - :py:data:`python:re.S`/:py:data:`python:re.DOTALL` : Allows the period (.) to match all characters,\n | including the newline character. Without this flag, the period matches all\n | characters except the newline character.\n | \n | Multiple flags can be specified by bitwise OR-ing them.\n | \n | Returns\n | -------\n | counts : oml.Float\n | \n | find(self, sub, start=0)\n | Returns the lowest index in each string where substring is found that is\n | greater than or equal to ``start``. Returns -1 on failure.\n | \n | Parameters\n | ----------\n | sub : str\n | The text expression to search.\n | start : int\n | A nonnegative integer indicating where the function begins the search. \n | \n | Returns\n | -------\n | found : oml.Float\n | \n | len(self)\n | Computes the length of each string.\n | \n | Returns\n | -------\n | length : oml.Float\n | \n | pull(self)\n | Pulls data represented by this object from Oracle Database into an\n | in-memory Python object.\n | \n | Returns\n | -------\n | pulled_obj : list of str and None\n | \n | replace(self, old, new, default=None)\n | Replace values given in `old` with `new`.\n | \n | Parameters\n | ----------\n | old : list of float, or list of str\n | Specifying the old values. 
When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | new : list of float, or list of str\n | A list of the same length as argument `old` specifying\n | the new values. When specified with a list of float, it\n | can contain float('nan') and None. When specified with a list of str, it\n | can contain None.\n | default : float, str, or None (default)\n | A single value to use for the non-matched elements in argument\n | `old`. If None, non-matched elements will preserve their\n | original values. If not None, data type should be consistent\n | with values in `new`. Must be set when `old` and `new` contain \n | values of different data types.\n | \n | Returns\n | -------\n | replaced : oml.String\n | \n | Raises\n | ------\n | ValueError\n | * if values in `old` have data types inconsistent with original values\n | * if `default` is specified with a non-None value which has data type\n | inconsistent with values in `new`\n | * if `default` is None when `old` and `new` contain values of different\n | data types\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.series._Series:\n | \n | KFold(self, n_splits=3, seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into k consecutive folds \n | for use with k-fold cross validation.\n | \n | Parameters\n | ----------\n | n_splits : int, 3 (default)\n | The number of folds. Must be greater than or equal to 2.\n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default):\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | kfold_data : a list of pairs of series objects of the same type as caller\n | Each pair within the list is a fold. The first element of the pair is the\n | train set, and the second element is the test set, which consists of all\n | elements not in the train set.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | __eq__(self, other)\n | Equivalent to ``self == other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | equal : oml.Boolean\n | \n | __ge__(self, other)\n | Equivalent to ``self >= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterequal : oml.Boolean\n | \n | __gt__(self, other)\n | Equivalent to ``self > other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | greaterthan : oml.Boolean\n | \n | __le__(self, other)\n | Equivalent to ``self <= other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessequal : oml.Boolean\n | \n | __lt__(self, other)\n | Equivalent to ``self < other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. 
oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | lessthan : oml.Boolean\n | \n | __ne__(self, other)\n | Equivalent to ``self != other``.\n | \n | Parameters\n | ----------\n | other : OML series data object of compatible type or corresponding built-in python scalar\n | * scalar : every element in ``self`` will be compared to the scalar.\n | * an OML series : must come from the same data source. Every element in ``self`` will be\n | compared to the corresponding element in ``other``. oml.Float and oml.Boolean series\n | can be compared to each other. oml.String and oml.Bytes series can be compared to each\n | other.\n | \n | Returns\n | -------\n | notequal : oml.Boolean\n | \n | count(self)\n | Returns the number of elements that are not NULL.\n | \n | Returns\n | -------\n | nobs : int\n | \n | describe(self)\n | Generates descriptive statistics that summarize the central tendency, \n | dispersion, and shape of an OML series data distribution.\n | \n | Returns\n | -------\n | summary : :py:class:`pandas.Series`\n | Includes ``count`` (number of non-null entries), ``unique``\n | (number of unique entries), ``top`` (most common value), and ``freq``\n | (frequency of the most common value).\n | \n | drop_duplicates(self)\n | Removes duplicated elements.\n | \n | Returns\n | -------\n | deduplicated : type of caller\n | \n | dropna(self)\n | Removes missing values.\n | \n | Missing values include None and/or nan if applicable.\n | \n | Returns\n | -------\n | dropped : type of caller\n | \n | isnull(self)\n | Detects the missing value None.\n | \n | Returns\n | -------\n | isnull : oml.Boolean\n | Indicates missing value None for each element.\n | \n | max(self, skipna=True)\n | Returns the maximum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | max : Python type corresponding to the column or numpy.nan\n | 
\n | min(self, skipna=True)\n | Returns the minimum value.\n | \n | Parameters\n | ----------\n | skipna : boolean, True (default)\n | Excludes NaN values when computing the result.\n | \n | Returns\n | -------\n | min : Python type corresponding to the column or numpy.nan\n | \n | nunique(self, dropna=True)\n | Returns the number of unique values.\n | \n | Parameters\n | ----------\n | dropna : bool, True (default)\n | If True, NULL values are not included in the count. \n | \n | Returns\n | -------\n | nunique : int\n | \n | sort_values(self, ascending=True, na_position='last')\n | Sorts the values in the series data object.\n | \n | Parameters\n | ----------\n | ascending : bool, True (default)\n | If True, sorts in ascending order. Otherwise, sorts in descending order.\n | na_position : {'first', 'last'}, 'last' (default)\n | ``first`` places NANs and Nones at the beginning; ``last`` places them\n | at the end.\n | \n | Returns\n | -------\n | sorted_obj : type of caller\n | \n | split(self, ratio=(0.7, 0.3), seed=12345, use_hash=True, nvl=None)\n | Splits the series data object randomly into multiple sets.\n | \n | Parameters\n | ----------\n | ratio : a list of float values or (0.7, 0.3) (default)\n | All the numbers must be positive and their sum must be no more than \n | 1. Each number represents the ratio of split data in one set. \n | seed : int or 12345 (default)\n | The seed to use for random splitting.\n | use_hash : boolean, True (default)\n | If True, use hashing to randomly split the data. 
If False, use a random\n | number to split the data.\n | nvl : numeric value, str, or None (default)\n | If not None, the specified values are used to hash in place of Null.\n | \n | Returns\n | -------\n | split_data : a list of series objects of the same type as caller\n | Each of which contains the portion of data by the specified ratio.\n | \n | Raises\n | ------\n | TypeError\n | * If ``use_hash`` is True, and the underlying database column type of\n | ``self`` is a LOB.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from oml.core.series._Series:\n | \n | __hash__ = None\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.core.vector._Vector:\n | \n | __del__(self)\n | \n | __getitem__(self, key)\n | Index self. Equivalent to ``self[key]``.\n | \n | Parameters\n | ----------\n | key : oml.Boolean\n | Must be from the same data source as self.\n | \n | Returns\n | -------\n | subset : same type as self\n | Contains only the rows satisfying the condition in ``key``.\n | \n | __len__(self)\n | Returns number of rows. 
Equivalent to ``len(self)``.\n | \n | Returns\n | -------\n | rownum : int\n | \n | __repr__(self)\n | \n | append(self, other, all=True)\n | Appends the ``other`` OML data object of the same class to this data object.\n | \n | Parameters\n | ----------\n | other : An OML data object of the same class\n | all : boolean, True (default)\n | Keeps the duplicated elements from the two data objects.\n | \n | Returns\n | -------\n | appended : type of caller\n | A new data object containing the elements from both objects.\n | \n | concat(self, other, auto_name=False)\n | Combines current OML data object with the ``other`` data objects column-wise.\n | \n | Current object and the ``other`` data objects must be combinable, that is,\n | they both represent data from the same underlying database table, view, or query.\n | \n | Parameters\n | ----------\n | other : an OML data object, a list of OML data objects, or a dict mapping str to OML data objects.\n | * OML data object: an OML series data object or an oml.DataFrame\n | * list: a sequence of OML series and DataFrame objects to concat.\n | * dict: a dict mapping str to OML series and DataFrame objects,\n | the column name of concatenated OML series object is replaced with str,\n | column names of concatenated OML DataFrame object is prefixed with str.\n | Need to specify a :py:obj:`python:collections.OrderedDict` if the \n | concatenation order is expected to follow the key insertion order.\n | auto_name : boolean, False (default)\n | Indicates whether to automatically resolve conflict column names.\n | If True, append duplicated column names with suffix ``[column_index]``.\n | \n | Notes\n | -----\n | After concatenation is done, if there are any empty column names in the resulting \n | oml.DataFrame, they will be renamed with ``COL[column_index]``.\n | \n | Raises\n | ------\n | ValueError\n | * If ``other`` is neither a single OML object nor a list/dict of OML objects, or if ``other`` is empty.\n | * If objects in ``other`` are not 
from the same data source.\n | * If ``auto_name`` is False and duplicated column names are detected.\n | \n | Returns\n | -------\n | concat_table : oml.DataFrame\n | An oml.DataFrame data object with its original columns followed by the \n | columns of ``other``.\n | \n | create_view(self, view=None, use_colname=False)\n | Creates an Oracle Database view for the data represented by the OML data \n | object.\n | \n | Parameters\n | ----------\n | view : str or None (default)\n | The name of a database view. If ``view`` is None, the created view is managed\n | by OML and the view is automatically dropped when no longer needed.\n | If a ``view`` is specified, then it is up to the user to drop the view.\n | use_colname : bool, False (default)\n | Indicates whether to create the view with the same column names as the\n | DataFrame object. Ignored if ``view`` is specified.\n | \n | Raises\n | ------\n | TypeError\n | * If the object represents data from a temporary table or view, and the view\n | to create is meant to persist past the current session.\n | \n | Returns\n | -------\n | new_view : oml.DataFrame\n | \n | head(self, n=5)\n | Returns the first n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_head : type of caller\n | \n | materialize(self, table=None)\n | Pushes the contents represented by an OML proxy object (a view, a table and so on) \n | into a table in Oracle Database.\n | \n | Parameters\n | ----------\n | table : str or None (default)\n | The name of a table. If ``table`` is None, an OML-managed table is \n | created, that is, OML drops the table when it is no longer used by \n | any OML object or when you invoke ``oml.disconnect(cleanup=True)`` to \n | terminate the database connection. 
If a table is specified, \n | then it's up to the user to drop the named table.\n | \n | Returns\n | -------\n | new_table : same type as self\n | \n | tail(self, n=5)\n | Returns the last n elements.\n | \n | Parameters\n | ----------\n | n : int, 5 (default)\n | The number of elements to return.\n | \n | Returns\n | -------\n | obj_tail : type of caller\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.core.vector._Vector:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | shape\n | The dimensions of the data set.\n | The first element is the number of rows and the second is the number of\n | columns.\n \n class ai(oml.algo.model.odmModel)\n | ai(model_name=None, model_owner=None, **params)\n | \n | In-database `Attribute Importance `_ Model\n | \n | Computes the relative importance of variables (aka attributes or columns) when predicting\n | a target variable (numeric or categorical column). This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. \n | Oracle Machine Learning does not support the prediction functions\n | for attribute importance. The results of attribute importance are the attributes\n | of the build data ranked according to their predictive influence. 
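For intuition about this ranked output, here is a plain-Python stand-in that ranks predictors by absolute Pearson correlation with the target and emits (variable, importance, rank) rows shaped like the model's ``importance`` attribute. This is only an analogy, not the in-database algorithm, and the data values are invented:

```python
# Illustrative stand-in for attribute-importance ranking (NOT the
# in-database algorithm): score each predictor by |Pearson r| against
# the target, then emit (variable, importance, rank) rows.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / sqrt(vx * vy)

def rank_attributes(columns, target):
    scores = {name: abs(pearson(vals, target)) for name, vals in columns.items()}
    ordered = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(name, round(score, 3), rank + 1)
            for rank, (name, score) in enumerate(ordered)]

cols = {"AGE": [20, 30, 40, 50], "NOISE": [1, -2, 2, -1]}
target = [10, 20, 30, 40]
print(rank_attributes(cols, target))
```

The strongly correlated hypothetical column ranks first; the noise column ranks last.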
The ranking and\n | the measure of importance can be used for selecting attributes.\n | \n | :Attributes:\n | \n | **importance** : oml.DataFrame\n | \n | Relative importance of predictor variables for predicting a response variable.\n | It includes the following components:\n | \n | - variable: The name of the predictor variable\n | - importance: The importance of the predictor variable\n | - rank: The predictor variable rank based on the importance value.\n | \n | Method resolution order:\n | ai\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of Attribute Importance object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Attribute Importance model to create an oml.ai object from.\n | The specified database model is not dropped when the oml.ai object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Attribute Importance model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings or Algorithm-specific Settings are not\n | applicable to Attribute Importance model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None)\n | Fits an Attribute Importance Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.ai object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.ai object is deleted\n | unless oml.ai object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable 
of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class ar(oml.algo.model.odmModel)\n | ar(model_name=None, model_owner=None, **params)\n | \n | In-database `Association Rules `_ Model\n | \n | Builds an Association Rules Model used to discover the probability of item co-occurrence\n | in a collection. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. The relationships between co-occurring items are expressed as \n | association rules.\n | Oracle Machine Learning does not support the prediction functions for association modeling.\n | The results of an association model are the rules that identify patterns of\n | association within the data. 
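For intuition, the standard rule metrics can be computed by hand on a toy basket of transactions. This sketch (invented data, not the in-database Apriori implementation) evaluates a single rule {milk} -> {bread}:

```python
# Toy calculation of association-rule metrics for {milk} -> {bread}:
# support, confidence, and lift, computed from raw transactions.
transactions = [
    {"milk", "bread"}, {"milk", "bread", "eggs"},
    {"milk"}, {"bread"}, {"eggs"},
]
n = len(transactions)
lhs, rhs = {"milk"}, {"bread"}

support_both = sum(1 for t in transactions if lhs | rhs <= t) / n  # P(lhs and rhs)
support_lhs  = sum(1 for t in transactions if lhs <= t) / n        # P(lhs)
support_rhs  = sum(1 for t in transactions if rhs <= t) / n        # P(rhs)

confidence = support_both / support_lhs  # P(rhs | lhs)
lift = confidence / support_rhs          # improvement over random chance

print(support_both, confidence, lift)
```

A lift above 1.0 indicates the antecedent and consequent co-occur more often than chance would predict.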
Association rules can be ranked by support\n | (How often do these items occur together in the data?) and confidence\n | (How likely are these items to occur together in the data?).\n | \n | :Attributes:\n | \n | **rules** : oml.DataFrame\n | \n | Details of each rule that shows \n | how the appearance of a set of items in a transactional\n | record implies the existence of another set of items.\n | It includes the following components:\n | \n | - rule_id: The identifier of the rule\n | - number_of_items: The total number of attributes referenced in the antecedent and consequent of the rule\n | - lhs_name: The name of the antecedent.\n | - lhs_value: The value of the antecedent.\n | - rhs_name: The name of the consequent.\n | - rhs_value: The value of the consequent.\n | - support: The number of transactions that satisfy the rule.\n | - confidence: The likelihood of a transaction satisfying the rule.\n | - revconfidence: The number of transactions in which the rule occurs divided by the number of transactions in which the consequent occurs.\n | - lift: The degree of improvement in the prediction over random chance when the rule is satisfied.\n | \n | **itemsets** : oml.DataFrame\n | \n | Description of the item sets from the model built.\n | It includes the following components:\n | \n | - itemset_id: The itemset identifier\n | - support: The support of the itemset\n | - number_of_items: The number of items in the itemset\n | - item_name: The name of the item\n | - item_value: The value of the item\n | \n | Method resolution order:\n | ar\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of Association Rules object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Association Rules model to create an oml.ar object from.\n | The specified database model is not dropped when the 
oml.ar object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Association Rules model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic\n | Data Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Association are applicable.\n | No algorithm-specific Settings are applicable to Association model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None)\n | Fits an Association Rules Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.ar object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.ar object is deleted\n | unless oml.ar object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are 
exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class dt(oml.algo.model.odmModel)\n | dt(model_name=None, model_owner=None, **params)\n | \n | In-database `Decision Tree `_ Model\n | \n | Builds a Decision Tree Model used to generate rules (conditional statements \n | that can easily be understood by humans and be used within a database to identify \n | a set of records) to predict a target value (numeric or categorical column). 
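The split quality at each tree node is judged with an impurity (homogeneity) metric; a minimal sketch of the two standard metrics, gini and entropy, computed from one node's class counts (this shows the metrics only, not Oracle's split-search procedure):

```python
# Gini impurity and entropy for a node's class counts: both are 0 for
# a pure node and maximal for an evenly mixed node.
from math import log2

def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

# A pure node scores 0 under both metrics; a 50/50 node scores
# 0.5 (gini) and 1.0 (entropy).
print(gini([10, 0]), entropy([10, 0]))
print(gini([5, 5]), entropy([5, 5]))
```

A split is preferred when it lowers the weighted impurity of the child nodes relative to the parent.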
This \n | function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | A decision tree predicts a target value by asking a sequence of questions. \n | At a given stage in the sequence, the question that is asked depends upon the \n | answers to the previous questions. The goal is to ask questions that, taken \n | together, uniquely identify specific target values. Graphically, this process \n | forms a tree structure. During the training process, the Decision Tree algorithm \n | must repeatedly find the most efficient way to split a set of cases (records) \n | into two child nodes. The model offers two homogeneity metrics, gini and entropy, \n | for calculating the splits. The default metric is gini.\n | \n | \n | :Attributes:\n | \n | **nodes** : oml.DataFrame\n | \n | The node summary information with tree node details.\n | It includes the following components:\n | \n | - parent: The node ID of the parent\n | - node.id: The node ID\n | - row.count: The number of records in the training set that belong to the node\n | - prediction: The predicted Target value\n | - split: The main split\n | - surrogate: The surrogate split\n | - full.splits: The full splitting criterion\n | \n | **distributions** : oml.DataFrame\n | \n | The target class distributions at each tree node.\n | It includes the following components:\n | \n | - node_id: The node ID\n | - target_value: The target value\n | - target_count: The number of rows for a given target_value\n | \n | Method resolution order:\n | dt\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of dt object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Decision Tree model to create an oml.dt object from.\n | The specified database model is not dropped when the oml.dt object is deleted.\n | model_owner: string or 
None (default)\n | The owner name of the existing Decision Tree model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Decision Tree model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, cost_matrix=None, case_id=None)\n | Fits a decision tree model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.dt object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.dt object is deleted\n | unless oml.dt object is saved into a datastore.\n | cost_matrix : OML DataFrame, list of ints, floats or None (default)\n | An optional numerical matrix that specifies the costs for incorrectly\n | predicting the target values. The first value represents the actual target value.\n | The second value represents the predicted target value. The third value is the cost.\n | In general, the diagonal entries of the matrix are zeros. 
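The (actual, predicted, cost) triples described above can change which class a cost-aware prediction picks. A toy sketch with invented costs and probabilities: the minimum-expected-cost choice need not be the most probable class.

```python
# Toy cost-matrix calculation: each key is (actual, predicted) and the
# value is the cost; diagonal (correct) entries cost 0. Costs and
# probabilities here are made up for illustration.
cost = {
    ("yes", "yes"): 0, ("yes", "no"): 5,  # missing a "yes" is expensive
    ("no", "no"): 0,   ("no", "yes"): 1,
}
proba = {"yes": 0.3, "no": 0.7}  # a model's class probabilities for one case

def expected_cost(predicted):
    return sum(p * cost[(actual, predicted)] for actual, p in proba.items())

best = min(proba, key=expected_cost)
print(best, expected_cost("yes"), expected_cost("no"))
```

Here "yes" wins despite being the less probable class, because predicting "no" risks the costly (yes, no) error.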
Refer to `Oracle Data\n | Mining User's Guide `_\n | for more details about the cost matrix.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if proba is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols``, and the\n | results. 
The results include the most likely target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include for each target class, the probability\n | belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n 
| Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class em(oml.algo.model.odmModel)\n | em(n_clusters=None, model_name=None, model_owner=None, **params)\n | \n | In-database `Expectation Maximization `_ Model\n | \n | Builds an Expectation Maximization (EM) Model used to perform probabilistic \n | clustering based on a density estimation algorithm. This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. In density estimation, \n | the goal is to construct a density function that captures how a given population is \n | distributed. 
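For intuition about what "probabilistic clustering" means here, one E-step of a tiny one-dimensional Gaussian mixture (invented parameters, not the in-database algorithm) assigns each point a probability per cluster rather than a hard label:

```python
# One E-step of a 1-D two-component Gaussian mixture: compute each
# cluster's "responsibility" for a point, i.e. a probability per
# cluster that sums to 1. Priors, means, and variances are made up.
from math import exp, pi, sqrt

def gauss_pdf(x, mean, var):
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

components = [  # (prior, mean, variance) per cluster
    (0.5, 0.0, 1.0),
    (0.5, 5.0, 1.0),
]

def responsibilities(x):
    weighted = [prior * gauss_pdf(x, m, v) for prior, m, v in components]
    total = sum(weighted)
    return [w / total for w in weighted]

print(responsibilities(0.2))  # mostly cluster 0
print(responsibilities(4.8))  # mostly cluster 1
```

The full EM algorithm alternates this E-step with an M-step that re-estimates the component parameters from the responsibilities.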
The density estimate is based on observed data that represents a \n | sample of the population.\n | \n | :Attributes:\n | \n | **clusters** : oml.DataFrame\n | \n | The general per-cluster information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - cluster_name: The name of a cluster in the model\n | - record_count: The number of rows used in the build\n | - parent: The ID of the parent\n | - tree_level: The number of splits from the root\n | - left_child_id: The ID of the left child\n | - right_child_id: The ID of the right child\n | \n | **taxonomy**: oml.DataFrame\n | \n | The parent/child cluster relationship.\n | It includes the following components:\n | \n | - parent_cluster_id: The ID of the parent cluster\n | - child_cluster_id: The ID of the child cluster\n | \n | **centroids**: oml.DataFrame\n | \n | Per cluster-attribute center (centroid) information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - attribute_name: The attribute name\n | - mean: The average value of a numeric attribute\n | - mode_value: The most frequent value of a categorical attribute\n | - variance: The variance of a numeric attribute\n | \n | **leaf_cluster_counts**: pandas.DataFrame\n | \n | Leaf clusters with support.\n | It includes the following components:\n | \n | - cluster_id: The ID of a leaf cluster in the model\n | - cnt: The number of records in a leaf cluster\n | \n | **attribute_importance**: oml.DataFrame\n | \n | Attribute importance of the fitted model.\n | It includes the following components:\n | \n | - attribute_name: The attribute name\n | - attribute_importance_value: The attribute importance for an attribute\n | - attribute_rank: The rank of the attribute based on importance\n | \n | **projection**: oml.DataFrame\n | \n | The coefficients used by random projections to map nested columns to a lower dimensional space.\n | It exists only when nested or 
text data is present in the build data.\n | It includes the following components:\n | \n | - feature_name: The name of the feature\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - coefficient: The projection coefficient for an attribute\n | \n | **components**: oml.DataFrame\n | \n | Information about the EM components: their prior probabilities and the cluster each maps to.\n | It includes the following components:\n | \n | - component_id: The unique identifier of a component\n | - cluster_id: The ID of a cluster in the model\n | - prior_probability: The component prior probability\n | \n | **cluster_hists**: oml.DataFrame\n | Cluster histogram information.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - variable: The attribute name\n | - bin.id: The ID of a bin\n | - lower.bound: The numeric lower bin boundary\n | - upper.bound: The numeric upper bin boundary\n | - label: The label of the cluster\n | - count: The histogram count\n | \n | **rules**: oml.DataFrame\n | \n | Conditions for a case to be assigned with some probability to a cluster.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - rhs.support: The record count\n | - rhs.conf: The record confidence\n | - lhs.support: The rule support\n | - lhs.conf: The rule confidence\n | - lhs.var: The attribute predicate name\n | - lhs.var.support: The attribute predicate support\n | - lhs.var.conf: The attribute predicate confidence\n | - predicate: The attribute predicate\n | \n | Method resolution order:\n | em\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, n_clusters=None, model_name=None, model_owner=None, **params)\n | Initializes an instance of em object.\n | \n | Parameters\n | ----------\n | n_clusters : positive integer, None (default)\n | The number of clusters. 
If n_clusters is None, the number of clusters will be determined\n | either by current setting parameters or automatically by the algorithm.\n | model_name : string or None (default)\n | The name of an existing database Expectation Maximization model to create an oml.em object from.\n | The specified database model is not dropped when the oml.em object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Expectation Maximization model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Clustering and `Algorithm-specific\n | Settings `_\n | are applicable to Expectation Maximization model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None)\n | Fits an Expectation Maximization Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.em object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.em object is deleted\n | unless oml.em object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate 
scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. If the mode is 'class', the results include the most likely\n | target class and its probability. If mode is 'raw', the results\n | include for each target class, the probability belonging\n | to that class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each cluster on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned clusters to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include for each cluster, the probability\n | belonging to that cluster.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | 
model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class esa(oml.algo.model.odmModel)\n | esa(model_name=None, model_owner=None, **params)\n | \n | In-database `Explicit Semantic Analysis `_ Model\n | \n | Builds an Explicit Semantic Analysis (ESA) Model to be used for feature extraction. \n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | ESA uses concepts of an existing knowledge base as features rather than latent\n | features derived by latent semantic analysis methods such as Singular\n | Value Decomposition and Latent Dirichlet Allocation. Each row, for example,\n | a document in the training data maps to a feature, that is, a concept.\n | ESA works best with concepts represented by text documents.\n | It has multiple applications in the area of text processing, most\n | notably semantic relatedness (similarity) and explicit topic modeling.\n | Text similarity use cases might involve, for example, resume matching, searching\n | for similar blog postings, and so on.\n | \n | :Attributes:\n | \n | **features** : oml.DataFrame\n | \n | Description of each feature extracted. 
\n | It includes the following components:\n | \n | - feature_id: The unique identifier of a feature as it appears in the training data\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - coefficient: The coefficient (weight) associated with the attribute in a particular feature.\n | \n | Method resolution order:\n | esa\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of esa object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Explicit Semantic Analysis model to create an oml.esa object from.\n | The specified database model is not dropped when the oml.esa object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Explicit Semantic Analysis model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Feature Extraction and `Algorithm-specific\n | Settings `_\n | are applicable to Explicit Semantic Analysis model.\n | \n | __repr__(self)\n | \n | feature_compare(self, x, compare_cols=None, supplemental_cols=None)\n | Compares features of data and generates relatedness.\n | \n | Parameters\n | ----------\n | x : an OML object\n | The data used to measure relatedness.\n | compare_cols : str, a list of str or None (default)\n | The column(s) used to measure data relatedness.\n | If None, all the columns of ``x`` are compared to measure relatedness.\n | supplemental_cols : a list of str or None (default)\n | A list of columns to display along with the resulting 'SIMILARITY' column.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains a 'SIMILARITY' column that measures relatedness and supplementary columns if specified.\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits an ESA Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.esa object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.esa object is deleted\n | unless oml.esa object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | ctx_settings : dict or None (default)\n | A list to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each list element refers to the text column while the list value\n | is a scalar string specifying the attribute-specific text 
transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include the most likely feature and its probability.\n | \n | transform(self, x, supplemental_cols=None, topN=None)\n | Make predictions and return relevancy for each feature on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned values to\n | the specified number of features that have the highest topN values.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the relevancy for each feature on new data and the specified ``supplemental_cols``.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are 
exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class glm(oml.algo.model.odmModel)\n | glm(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Generalized Linear Models `_\n | \n | Builds Generalized Linear Models (GLM), which include and extend the class of \n | linear models (linear regression), to be used for classification or regression. 
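To ground the GLM description that follows: classification with a generalized linear model corresponds to logistic regression, where a linear combination of predictors is passed through a link function to produce a probability. A minimal pure-Python sketch of that idea (the coefficients and input values are made up for illustration; this is a concept sketch, not the oml.glm API):

```python
import math

def linear_predictor(coefficients, intercept, x):
    # Linear combination of the predictors, as in an ordinary linear model.
    return intercept + sum(c * xi for c, xi in zip(coefficients, x))

def inverse_logit(eta):
    # Inverse logit link: maps the unbounded linear predictor to a
    # probability in (0, 1), suitable for a binary (yes/no) response.
    return 1.0 / (1.0 + math.exp(-eta))

# A hypothetical fitted model with two coefficients and an intercept.
eta = linear_predictor([0.8, -0.5], 0.1, [1.0, 2.0])  # 0.1 + 0.8 - 1.0 = -0.1
p = inverse_logit(eta)                                # probability of the positive class
```

The link function is what "generalizes" the linear model: regression uses the identity link, while classification uses the logit link shown here.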
\n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | Generalized linear models relax the restrictions on linear models, which are often \n | violated in practice. For example, binary (yes/no or 0/1) responses do not have same \n | variance across classes. This model uses a parametric modeling technique. Parametric \n | models make assumptions about the distribution of the data. When the assumptions are \n | met, parametric models can be more efficient than non-parametric models.\n | \n | :Attributes:\n | \n | **coef** : oml.DataFrame\n | \n | The coefficients of the GLM model, one for each predictor variable.\n | It includes the following components:\n | \n | - nonreference: The target value used as nonreference\n | - attribute name: The attribute name\n | - attribute value: The attribute value\n | - coefficient: The estimated coefficient\n | - std error: The standard error\n | - t value: The test statistics\n | - p value: The statistical significance\n | \n | **fit_details**: oml.DataFrame\n | \n | The model fit details such as adjusted_r_square, error_mean_square and so on.\n | It includes the following components:\n | \n | - name: The fit detail name\n | - value: The fit detail value\n | \n | **deviance**: float\n | \n | Minus twice the maximized log-likelihood, up to a constant.\n | \n | **null_deviance**: float\n | \n | The deviance for the null (intercept only) model.\n | \n | **aic**: float\n | \n | Akaike information criterion.\n | \n | **rank**: integer\n | \n | The numeric rank of the fitted model.\n | \n | **df_residual**: float\n | \n | The residual degrees of freedom.\n | \n | **df_null**: float\n | \n | The residual degrees of freedom for the null model.\n | \n | **converged**: bool\n | \n | The indicator for whether the model converged.\n | \n | **nonreference**: int or str\n | \n | For logistic regression, the response values that represents success.\n | \n | Method resolution order:\n | glm\n | 
oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of glm object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' or 'REGRESSION', 'CLASSIFICATION' (default)\n | Type of model mining functionality\n | model_name : string or None (default)\n | The name of an existing database Generalized Linear Model to create an oml.glm object from.\n | The specified database model is not dropped when the oml.glm object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Generalized Linear Model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to the Generalized Linear Model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, ctx_settings=None)\n | Fits a GLM Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.glm object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.glm object is deleted\n | unless oml.glm object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A list to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each list element refers to the text column while the list value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None, confint=None, level=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data 
set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | confint : bool, False (default)\n | A logical indicator for whether to produce confidence intervals\n | for the predicted values.\n | level : float between 0 and 1 or None (default)\n | A numeric value within [0, 1] to use for the confidence level.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True for classification.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and optionally its probability and confidence\n | intervals. For a linear regression model, the results consist of a column\n | for the prediction and optionally its confidence intervals.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include, for each target class, the probability of\n | belonging to that class.\n | \n | residuals(self, x, y)\n | Return the deviance residuals, which include the following components:\n | - deviance: The deviance residual\n | - pearson: The Pearson residual\n | - response: The residual of the working response.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | \n | Return: oml.DataFrame\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | 
Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class km(oml.algo.model.odmModel)\n | km(n_clusters=None, model_name=None, model_owner=None, **params)\n | \n | In-database `k-means `_ Model\n | \n | Builds a K-Means (KM) Model that uses a distance-based clustering algorithm to \n | partition data into a specified number of clusters. This function exposes the \n | corresponding Oracle Machine Learning in-database algorithm. Distance-based \n | algorithms rely on a distance function to measure the similarity between cases. 
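The distance-based principle just stated can be sketched in plain Python (an illustration of the concept, not the oml.km API; the Euclidean distance function and example centroids are assumptions made up for the sketch):

```python
import math

def euclidean(a, b):
    # The distance function used to measure similarity between cases.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def assign_cluster(case, centroids):
    # A case belongs to the cluster whose centroid is nearest.
    distances = [euclidean(case, c) for c in centroids]
    return distances.index(min(distances))

centroids = [(0.0, 0.0), (10.0, 10.0)]        # two hypothetical cluster centers
label = assign_cluster((1.0, 2.0), centroids)  # nearest to the first centroid
```

K-Means alternates this assignment step with recomputing each centroid as the mean of its assigned cases until the partition stabilizes.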
\n | Cases are assigned to the nearest cluster according to the distance function used.\n | \n | :Attributes:\n | \n | **clusters** : oml.DataFrame\n | \n | The general per-cluster information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - row_cnt: The number of rows used in the build\n | - parent_cluster_id: The ID of the parent\n | - tree_level: The number of splits from the root\n | - dispersion: The measure of the quality of the cluster, and computationally, the sum of square errors\n | \n | **taxonomy**: oml.DataFrame\n | \n | The parent/child cluster relationship.\n | It includes the following components:\n | \n | - parent_cluster_id: The ID of the parent cluster\n | - child_cluster_id: The ID of the child cluster\n | \n | **centroids**: oml.DataFrame\n | \n | Per cluster-attribute center (centroid) information.\n | It includes the following components:\n | \n | - cluster_id: The ID of a cluster in the model\n | - attribute_name: The attribute name\n | - mean: The average value of a numeric attribute\n | - mode_value: The most frequent value of a categorical attribute\n | - variance: The variance of a numeric attribute\n | \n | **leaf_cluster_counts**: pandas.DataFrame\n | \n | Leaf clusters with support.\n | It includes the following components:\n | \n | - cluster_id: The ID of a leaf cluster in the model\n | - cnt: The number of records in a leaf cluster\n | \n | **cluster_hists**: oml.DataFrame\n | \n | Cluster histogram information.\n | It includes the following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - variable: The attribute name\n | - bin.id: The ID of a bin\n | - lower.bound: The numeric lower bin boundary\n | - upper.bound: The numeric upper bin boundary\n | - label: The label of the cluster\n | - count: The histogram count\n | \n | **rules**: oml.DataFrame\n | \n | Conditions for a case to be assigned with some probability to a cluster.\n | It includes the 
following components:\n | \n | - cluster.id: The ID of a cluster in the model\n | - rhs.support: The record count\n | - rhs.conf: The record confidence\n | - lhs.support: The rule support\n | - lhs.conf: The rule confidence\n | - lhs.var: The attribute predicate name\n | - lhs.var.support: The attribute predicate support\n | - lhs.var.conf: The attribute predicate confidence\n | - predicate: The attribute predicate\n | \n | Method resolution order:\n | km\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, n_clusters=None, model_name=None, model_owner=None, **params)\n | Initializes an instance of km object.\n | \n | Parameters\n | ----------\n | n_clusters : positive integer, default None\n | Number of clusters. If n_clusters is None, the number of clusters will be determined\n | either by current setting parameters or automatically by the internal algorithm.\n | model_name : string or None (default)\n | The name of an existing database K-Means model to create an oml.km object from.\n | The specified database model is not dropped when the oml.km object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing K-Means model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each list element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Clustering and `Algorithm-specific\n | Settings `_\n | are applicable to K-Means model.\n | \n | __repr__(self)\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits a K-Means Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.km object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.km object is deleted\n | unless oml.km object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A list to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each list element refers to the text column while the list value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns 
probability for each cluster on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer\n | A positive integer that restricts the returned clusters to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include, for each cluster, the probability of\n | belonging to that cluster.\n | \n | score(self, x)\n | Calculates the score value based on the input data ``x``.\n | \n | Parameters\n | ----------\n | x : an OML object\n | A new data set used to calculate score value.\n | \n | Returns\n | -------\n | pred : float\n | Score values, that is, the opposite of the value of ``x`` on the K-means objective.\n | \n | transform(self, x)\n | Transforms ``x`` to a cluster-distance space.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the distance to each cluster.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | 
get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n \n class nb(oml.algo.model.odmModel)\n | nb(model_name=None, model_owner=None, **params)\n | \n | In-database `Naive Bayes `_ Model\n | \n | Builds a Naive Bayes Model that uses conditional probabilities to predict a target \n | variable (numeric or categorical column). Naive Bayes looks at the historical data \n | and calculates conditional probabilities for the target values by observing the \n | frequency of attribute values and of combinations of attribute values. Naive Bayes \n | assumes that each predictor is conditionally independent of the others. 
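The frequency-based estimation described above, where conditional probabilities are read off observed counts, can be sketched in plain Python (an illustration only, not the oml.nb API; the tiny data set and column names are made up):

```python
from collections import Counter

def conditional_probabilities(rows, attribute, target):
    # Estimate P(attribute value | target value) from observed frequencies,
    # as Naive Bayes does for each predictor independently.
    target_counts = Counter(r[target] for r in rows)
    pair_counts = Counter((r[target], r[attribute]) for r in rows)
    return {(t, a): n / target_counts[t] for (t, a), n in pair_counts.items()}

rows = [
    {"buys": "yes", "age": "young"},
    {"buys": "yes", "age": "young"},
    {"buys": "yes", "age": "old"},
    {"buys": "no",  "age": "old"},
]
probs = conditional_probabilities(rows, "age", "buys")
# e.g. P(age=young | buys=yes) = 2/3
```

Scoring then multiplies these per-predictor probabilities together with the class prior, which is exactly where the conditional-independence assumption enters.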
(Bayes' \n | Theorem requires that the predictors be independent.)\n | \n | :Attributes:\n | \n | **priors** : oml.DataFrame\n | \n | An optional named numerical vector that specifies the priors for the target classes.\n | It includes the following components:\n | \n | - target_name: The name of the target column\n | - target_value: The target value\n | - prior_probability: The prior probability for a given target_value\n | - count: The number of rows for a given target_value\n | \n | **conditionals** : oml.DataFrame\n | \n | Conditional probabilities for each predictor variable.\n | It includes the following components:\n | \n | - target_name: The name of the target column\n | - target_value: The target value\n | - attribute_name: The column name\n | - attribute_subname: The nested column subname.\n | - attribute_value: The mining attribute value\n | - conditional_probability: The conditional probability of a mining attribute for a given target\n | - count: The number of rows for a given mining attribute and a given target\n | \n | Method resolution order:\n | nb\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of nb object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Naive Bayes model to create an oml.nb object from.\n | The specified database model is not dropped when the oml.nb object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Naive Bayes model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Naive Bayes model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, priors=None, case_id=None)\n | Fits a Naive Bayes Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.nb object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.nb object is deleted\n | unless oml.nb object is saved into a datastore.\n | priors : OML DataFrame or dict or list of ints or floats or None (default)\n | The priors represent the overall distribution of the target in the\n | population. By default, the priors are computed from the sample.\n | If the sample is known to be a distortion of the population target\n | distribution, then the user can override the default by providing\n | a priors table as a setting for model creation. For OML DataFrame\n | input, the first value represents the target value. The second value\n | represents the prior probability. For dictionary type input, the key\n | represents the target value. The value represents the prior probability.\n | For list type input, the first value represents target value. The\n | second value represents the prior probability. 
See `Oracle Data\n | Mining Concepts Guide `_\n | for more details.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include the most likely target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include, for each target class, the probability\n | of belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported.\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. 
If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoting\n \n class nn(oml.algo.model.odmModel)\n | nn(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Neural Network `_ Model\n | \n | Builds a Neural Network (NN) Model that uses an algorithm inspired by biological\n | neural networks for classification and regression. Neural Network is used to estimate\n | or approximate functions that depend on a large number of generally unknown inputs.\n | This function exposes the corresponding Oracle Machine Learning in-database algorithm.\n | An artificial neural network is composed of a large number of interconnected neurons\n | which exchange messages between each other to solve specific problems. 
They learn by\n | examples and tune the weights of the connections among the neurons during the learning\n | process. Neural Network is capable of solving a wide variety of tasks such as computer\n | vision, speech recognition, and various complex business problems.\n | \n | :Attributes:\n | \n | **weights** : oml.DataFrame\n | \n | Weights of fitted model between nodes in different layers.\n | It includes the following components:\n | \n | - layer: The layer ID, 0 as an input layer\n | - idx_from: The node index that the weight connects from (attribute id for input layer)\n | - idx_to: The node index that the weight connects to\n | - attribute_name: The attribute name (only for the input layer)\n | - attribute_subname: The attribute subname\n | - attribute_value: The attribute value\n | - target_value: The target value.\n | - weight: The value of weight\n | \n | **topology** : oml.DataFrame\n | \n | Topology of the fitted model including number of nodes and hidden layers.\n | It includes the following components:\n | \n | - hidden_layer_id: The id number of the hidden layer\n | - num_node: The number of nodes in each layer\n | - activation_function: The activation function in each layer\n | \n | Method resolution order:\n | nn\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of nn object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' or 'REGRESSION', 'CLASSIFICATION' (default)\n | Type of model mining functionality\n | model_name : string or None (default)\n | The name of an existing database Neural Network model to create an oml.nn object from.\n | The specified database model is not dropped when the oml.nn object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Neural Network model\n | The current database user by default\n | params : 
key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Neural Network model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, class_weight=None)\n | Fits a Neural Network Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.nn object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.nn object is deleted\n | unless oml.nn object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | class_weight : OML DataFrame or dict or list of ints or floats or None (default)\n | An optional matrix that is used to influence the weighting of\n | target classes during model creation. For OML DataFrame input, the first\n | value represents the target value. The second value represents the class weight.\n | For dictionary type input, the key represents the target value. The value\n | represents the class weight. 
For list type input, the first value represents\n | the target value. The second value represents the class weight.\n | Refer to `Oracle Data Mining User's Guide `_\n | for more details about class weights.\n | \n | get_params(self, params=None, deep=False)\n | Fetches settings of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and its probability. For a regression\n | model, the results consist of a column for the prediction. For an\n | anomaly detection model, the results include a prediction and its\n | probability. 
If the prediction is 1, the case is considered typical.\n | If the prediction is 0, the case is considered anomalous. This\n | behavior reflects the fact that the model is trained with normal data.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include, for each target class, the probability\n | of belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None 
(default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported.\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoting\n \n class rf(oml.algo.model.odmModel)\n | rf(model_name=None, model_owner=None, **params)\n | \n | In-database `Random Forest `_ Model\n | \n | Builds a Random Forest (RF) Model that uses an ensemble (also called forest) of trees \n | for classification. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. Random Forest is a popular ensemble learning technique for \n | classification. 
By combining the ideas of bagging and random selection of variables, \n | the algorithm produces a collection of decision trees with controlled variance, while \n | avoiding overfitting - a common problem for decision trees.\n | \n | :Attributes:\n | \n | **importance** : oml.DataFrame\n | \n | Attribute importance of the fitted model.\n | It includes the following components:\n | \n | - attribute_name: The attribute name\n | - attribute_subname: The attribute subname \n | - attribute_importance: The attribute importance for an attribute in the forest\n | \n | Method resolution order:\n | rf\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of rf object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Random Forest model to create an oml.rf object from.\n | The specified database model is not dropped when the oml.rf object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Random Forest model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. 
Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Random Forest model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, cost_matrix=None, case_id=None)\n | Fits a Random Forest Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object, or string\n | Target values.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.rf object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.rf object is deleted\n | unless oml.rf object is saved into a datastore.\n | cost_matrix : OML DataFrame or list of ints or floats or None (default)\n | An optional numerical square matrix that specifies the costs for incorrectly\n | predicting the target values. The first value represents the actual target value.\n | The second value represents the predicted target value. The third value is the cost.\n | In general, the diagonal entries of the matrix are zeros. 
Refer to `Oracle Data\n | Mining User's Guide `_\n | for more details about cost matrix.\n | case_id : string or None (default)\n | The column name used as case id for building the model.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols``, and the most likely\n | target class.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. 
The results include, for each target class, the probability\n | of belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data and returns the mean accuracy.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported.\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. 
If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoting\n \n class svd(oml.algo.model.odmModel)\n | svd(model_name=None, model_owner=None, **params)\n | \n | In-database `Singular Value Decomposition `_ Model\n | \n | Builds a Singular Value Decomposition (SVD) Model that can be used for feature \n | extraction. SVD provides orthogonal linear transformations that capture the \n | underlying variance of the data by decomposing a rectangular matrix into three \n | matrices: U, D, and V. Matrix D is a diagonal matrix and its singular values \n | reflect the amount of data variance captured by the bases. 
Columns of matrix V \n | contain the right singular vectors and columns of matrix U contain the left singular\n | vectors.\n | \n | :Attributes:\n | \n | **features** : oml.DataFrame\n | \n | Features extracted by the fitted model including feature id and associated coefficient.\n | It includes the following components:\n | \n | - feature_id: The ID of a feature in the model\n | - attribute_name: The attribute name\n | - attribute_value: The attribute value\n | - value: The matrix entry value\n | \n | **u** : oml.DataFrame\n | \n | A dataframe whose columns contain the left singular vectors.\n | The column name is the corresponding feature id.\n | \n | **v** : oml.DataFrame\n | \n | A dataframe whose columns contain the right singular vectors.\n | The column name is the corresponding feature id.\n | \n | **d** : oml.DataFrame\n | \n | A dataframe containing the singular values of the input data.\n | It includes the following components:\n | \n | - feature_id: The ID of a feature in the model\n | - value: The singular values of the input data\n | \n | Method resolution order:\n | svd\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, model_name=None, model_owner=None, **params)\n | Initializes an instance of svd object.\n | \n | Parameters\n | ----------\n | model_name : string or None (default)\n | The name of an existing database Singular Value Decomposition model to create an oml.svd object from.\n | The specified database model is not dropped when the oml.svd object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Singular Value Decomposition model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. 
Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Feature Extraction and `Algorithm-specific\n | Settings `_\n | are applicable to Singular Value Decomposition model.\n | \n | __repr__(self)\n | \n | feature_compare(self, x, compare_cols=None, supplemental_cols=None)\n | Compares features of data and generates relatedness.\n | \n | Parameters\n | ----------\n | x : an OML object\n | The data used to measure relatedness.\n | compare_cols : str, a list of str or None (default)\n | The column(s) used to measure data relatedness.\n | If None, all the columns of ``x`` are compared to measure relatedness.\n | supplemental_cols : a list of str or None (default)\n | A list of columns to display along with the resulting 'SIMILARITY' column.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains a 'SIMILARITY' column that measures relatedness and supplementary columns if specified.\n | \n | fit(self, x, model_name=None, case_id=None, ctx_settings=None)\n | Fits an SVD Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.svd object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.svd object is deleted\n | unless oml.svd object is saved into a datastore.\n | case_id : string or None (default)\n | The column name used as case id for building the model. 
\n | ``case_id`` and SVDS_U_MATRIX_OUTPUT in ``odm_settings`` \n | must be specified in order to produce matrix U.\n | ctx_settings : dict or None (default)\n | A dict to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each dict element refers to the text column while the dict value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the predicted feature index on the new data and the specified ``supplemental_cols``.\n | \n | transform(self, x, supplemental_cols=None, topN=None)\n | Performs dimensionality reduction and returns value for each feature on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned values to\n | the specified number of features that have the highest values.\n | If None, all features will be returned.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the values of new data after the SVD transform and the specified ``supplemental_cols``.\n | \n | ----------------------------------------------------------------------\n | Methods 
inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported.\n | \n | Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented 
after pivoting\n \n class svm(oml.algo.model.odmModel)\n | svm(mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | \n | In-database `Support Vector Machine `_ Model\n | \n | Builds a Support Vector Machine (SVM) Model to be used for regression, classification, \n | or anomaly detection. This function exposes the corresponding Oracle Machine Learning \n | in-database algorithm. SVM is a powerful, state-of-the-art algorithm with strong \n | theoretical foundations based on the Vapnik-Chervonenkis theory. SVM has strong \n | regularization properties. Regularization refers to the generalization of the model to \n | new data.\n | \n | \n | :Attributes:\n | \n | **coef** : oml.DataFrame\n | \n | The coefficients of the SVM model, one for each predictor variable.\n | It includes the following components:\n | \n | - target_value: The target value\n | - attribute_name: The attribute name\n | - attribute_subname: The attribute subname\n | - attribute_value: The attribute value\n | - coef: The projection coefficient value\n | \n | Method resolution order:\n | svm\n | oml.algo.model.odmModel\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, mining_function='CLASSIFICATION', model_name=None, model_owner=None, **params)\n | Initializes an instance of svm object.\n | \n | Parameters\n | ----------\n | mining_function : 'CLASSIFICATION' (default), 'REGRESSION' or 'ANOMALY_DETECTION'\n | Type of model mining functionality.\n | model_name : string or None (default)\n | The name of an existing database Support Vector Machine model to create an oml.svm object from.\n | The specified database model is not dropped when the oml.svm object is deleted.\n | model_owner: string or None (default)\n | The owner name of the existing Support Vector Machine model\n | The current database user by default\n | params : key-value pairs or dict\n | Oracle Machine Learning parameter settings. 
Each dict element's name and\n | value refer to the parameter setting name and value, respectively.\n | The setting value must be numeric or string. Refer to `Oracle Data Mining\n | Model Settings `_\n | for applicable parameters and valid values. Global and Automatic Data\n | Preparation Settings in Table 5-5 apply generally to the model.\n | Mining Function Settings for Classification and `Algorithm-specific\n | Settings `_\n | are applicable to Support Vector Machine model.\n | \n | __repr__(self)\n | \n | fit(self, x, y, model_name=None, case_id=None, ctx_settings=None)\n | Fits an SVM Model according to the training data and parameter settings.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object or None, or string\n | Target values.\n | Must be specified when SVM algorithm is used for classification\n | or regression and must be None when used for anomaly detection.\n | If y is a single column OML object, target values specified by y must be combinable with x.\n | If y is a string, y is the name of the column in x that specifies the target values.\n | model_name : string or None (default)\n | User-specified model name.\n | The user-specified database model is not dropped when oml.svm object is deleted.\n | If None, a system-generated model name will be used.\n | The system-generated model is dropped when oml.svm object is deleted\n | unless oml.svm object is saved into a datastore.\n | case_id : string or None (default)\n | The name of a column that contains unique case identifiers.\n | ctx_settings : dict or None (default)\n | A dict to specify Oracle Text attribute-specific settings.\n | This argument is applicable to building models in Oracle Database 12.2 or later.\n | The name of each dict element refers to the text column while the dict value\n | is a scalar string specifying the attribute-specific text transformation.\n | The valid entries in the string include TEXT, 
POLICY_NAME, TOKEN_TYPE, and\n | MAX_FEATURES.\n | \n | predict(self, x, supplemental_cols=None, proba=False, topN_attrs=False)\n | Makes predictions on new data.\n | \n | Parameters\n | ----------\n | x : oml.DataFrame\n | Predictor values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | proba : boolean, False (default)\n | Returns prediction probability if ``proba`` is True.\n | topN_attrs : boolean, positive integer, False (default)\n | Returns the top N most influential attributes of the predicted target value\n | for regression if topN_attrs is not False.\n | Returns the top N most influential attributes of the highest probability class\n | for classification if topN_attrs is not False.\n | N is equal to the specified positive integer or 5 if topN_attrs is True.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. For a classification model, the results include the most\n | likely target class and its probability. For a regression\n | model, the results consist of a column for the prediction. For an\n | anomaly detection model, the results include a prediction and its\n | probability. If the prediction is 1, the case is considered normal.\n | If the prediction is 0, the case is considered anomalous. 
This\n | behavior reflects the fact that the model is trained with normal data.\n | \n | predict_proba(self, x, supplemental_cols=None, topN=None)\n | Makes predictions and returns probability for each class on new data.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values used by the model to generate scores.\n | supplemental_cols : oml.DataFrame, oml.Float, oml.String, or None (default)\n | Data set presented with the prediction result.\n | It must be concatenatable with ``x``.\n | topN : positive integer or None (default)\n | A positive integer that restricts the returned target classes to\n | the specified number of those that have the highest probability.\n | \n | Returns\n | -------\n | pred : oml.DataFrame\n | Contains the features specified by ``supplemental_cols`` and the\n | results. The results include, for each target class, the probability\n | of belonging to that class.\n | \n | score(self, x, y)\n | Makes predictions on new data, returns the mean accuracy for classifications\n | or the coefficient of determination R^2 of the prediction for regressions.\n | \n | Parameters\n | ----------\n | x : an OML object\n | Attribute values for building the model.\n | y : a single column OML object\n | Target values.\n | \n | Returns\n | -------\n | score : float\n | Mean accuracy for classifications or the coefficient of\n | determination R^2 of the prediction for regressions.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from oml.algo.model.odmModel:\n | \n | export_sermodel(self, table=None, partition=None)\n | Export model.\n | \n | Parameters\n | ----------\n | table : string or None (default)\n | A name for the new table where the serialized model is saved.\n | If None, the serialized model will be saved to a temporary table.\n | partition : string or None (default)\n | Name of the partition that needs to be exported.\n | If partition is None, all partitions are exported.\n | \n | 
Returns\n | -------\n | oml_bytes : an oml.Bytes object\n | Contains the BLOB content from the model export\n | \n | get_params(self, params=None, deep=False)\n | Fetches parameters of the model.\n | \n | Parameters\n | ----------\n | params : iterable of strings, None (default)\n | Names of parameters to fetch. If ``params`` is None,\n | fetches all settings.\n | deep : boolean, False (default)\n | Includes the computed and default parameters or not.\n | \n | Returns\n | -------\n | settings : dict mapping str to str\n | \n | set_params(self, **params)\n | Changes parameters of the model.\n | \n | Parameters\n | ----------\n | params : dict object mapping str to str\n | The key should be the name of the setting, and the value should be\n | the new setting.\n | \n | Returns\n | -------\n | model : the model itself.\n | \n | ----------------------------------------------------------------------\n | Readonly properties inherited from oml.algo.model.odmModel:\n | \n | model_owner\n | The owner name of database mining model\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from oml.algo.model.odmModel:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | model_name\n | The given name of database mining model\n | \n | pivot_limit\n | The maximum number of classes, clusters, or features for which the predicted probabilities are presented after pivoted\n\nFUNCTIONS\n boxplot(x, notch=None, sym=None, vert=None, whis=None, positions=None, widths=None, patch_artist=None, usermedians=None, conf_intervals=None, meanline=None, showmeans=None, showcaps=None, showbox=None, showfliers=None, boxprops=None, labels=None, flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None, manage_ticks=True, autorange=False, zorder=None)\n Makes a box and whisker plot.\n \n For every column of ``x`` or for 
every column object in ``x``, makes a box and whisker plot.\n \n Parameters\n ----------\n x : oml.DataFrame or oml.Float or list of oml.Float\n The data to plot.\n notch : bool, False (default), optional\n If True, produces a notched box plot. Otherwise, a rectangular\n boxplot is produced. By default, the confidence intervals are approximated\n as ``median +/-1.57 IQR/sqrt(n)`` where ``n`` is the number of not-null/NA\n values in the column. \n conf_intervals : array-like, optional\n Array or sequence whose first dimension is equal to the number of columns\n in ``x`` and whose second dimension is 2.\n labels : sequence, optional\n Length must be equal to the number of columns in ``x``. When an element of\n ``labels`` is not None, the default label of the column, which is the name\n of the column, is overridden.\n \n Notes\n -----\n For information on the other parameters, see documentation for :py:func:`matplotlib.pyplot.boxplot`.\n \n Returns\n -------\n ax : :py:class:`matplotlib.axes.Axes`\n The :py:class:`matplotlib.axes.Axes` instance of the boxplot figure.\n result : dict\n A dict mapping each component of the boxplot to the corresponding list of\n :py:class:`matplotlib.lines.Line2D` instances created.\n \n check_embed()\n Indicates whether embedded Python is set up in the connected Oracle Database.\n \n Returns\n -------\n embed_status : bool or None\n None when not connected.\n \n connect(user=None, password=None, host=None, port=None, sid=None, service_name=None, dsn=None, encoding='UTF-8', nencoding='UTF-8', automl=None, **kwargs)\n Establishes an Oracle Database connection.\n \n Just as with :py:func:`cx_Oracle.connect`, the user, password, and data\n source name can be provided separately or with host, port, sid or\n service_name.\n \n There can be only one active connection. Calling this method when an\n active connection already exists replaces the active connection with\n a new one. 
This results in the previous connection being implicitly\n disconnected with the corresponding release of resources.\n \n Parameters\n ----------\n user : str or None (default)\n password : str or None (default)\n host : str or None (default)\n Host name of the Oracle Database.\n port : int, str or None (default)\n The Oracle Database port number.\n sid : str or None (default)\n The Oracle Database SID.\n service_name : str or None (default)\n The service name to be used in the connection identifier for\n the Oracle Database.\n dsn : str or None (default)\n Data source name. The TNS entry of the database, or a TNS\n alias in the Oracle Wallet.\n encoding : str, 'UTF-8' (default)\n Encoding to use for regular database strings.\n nencoding : str, 'UTF-8' (default)\n Encoding to use for national character set database strings.\n automl : str, or bool or None (default)\n To enable automl, specify:\n * True: if ``host``, ``port``, ``sid`` or ``service_name``\n are specified and a connection pool is running for this\n (``host``, ``port``, ``sid`` or ``service_name``).\n * Data source name: for a running connection pool\n if ``dsn`` is specified with a data source name.\n * TNS alias in an Oracle Wallet: for a running connection pool\n if ``dsn`` is also specified with Wallet TNS alias.\n Otherwise, automl is disabled.\n \n Notes\n -----\n * Parameters ``sid`` and ``service_name`` are exclusive.\n * Parameters (``host``, ``port``, ``sid`` or ``service_name``),\n and ``dsn`` can only be specified exclusively.\n * Parameters ``user`` and ``password`` must be provided when\n (``host``, ``port``, ``sid`` or ``service_name``) is specified,\n or ``dsn`` (and optionally ``automl``) is specified with\n a data source name.\n * Parameters ``user`` and ``password`` should be set to empty str \"\",\n when ``dsn`` (and optionally ``automl``) is specified with\n Wallet TNS alias, to establish connection with Oracle Wallet.\n * Automl requires `Database Resident Connection Pooling 
(DRCP)\n `_\n running on the Database server.\n \n create(x, table, oranumber=True, dbtypes=None, append=False)\n Creates a table in Oracle Database from a Python data set.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n table : str\n A name for the table.\n oranumber : bool, True (default)\n If True, use SQL NUMBER for numeric columns. Otherwise, use BINARY_DOUBLE.\n Ignored if ``append`` is True.\n dbtypes : dict mapping str to str or list of str\n A list of SQL types to use on the new table. If a list, its length should\n be equal to the number of columns. If a dict, the keys are the names of the\n columns. Ignored if ``append`` is True.\n append : bool, False (default)\n Indicates whether to append the data to the existing table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types,\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with less than 20 rows, all rows are used\n in column type determination. 
NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n new_table : oml.DataFrame\n A proxy object that represents the newly-created table.\n \n cursor()\n Returns a cx_Oracle cursor object of the current OML database connection.\n It can be used to execute queries against Oracle Database.\n \n Returns\n -------\n cursor_obj : a cx_Oracle :ref:`cx:cursorobj`.\n \n dir()\n Returns the names of OML objects in the workspace.\n \n Returns\n -------\n obj_names : list of str\n \n disconnect(cleanup=True)\n Terminates the Oracle Database connection. By default, the OML\n objects created through this connection will be deleted.\n \n Parameters\n ----------\n cleanup : bool, True (default)\n Cleans up OML objects defined in Python's main module before\n disconnecting from the database.\n \n do_eval(func, func_owner=None, graphics=False, **kwargs)\n Runs the user-defined Python function using a Python engine spawned and \n controlled by the database environment.\n \n Parameters\n ----------\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n 
Returns\n -------\n result : Python object or oml.embed.data_image._DataImage\n If no image is rendered in the script, returns whatever Python object returned\n by the function. Otherwise, returns an oml.embed.data_image._DataImage object.\n See :ref:`more-output`.\n \n drop(table=None, view=None, model=None)\n Drops a database table, view, or model.\n \n Parameters\n ----------\n table : str or None (default)\n The name of the table to drop.\n view : str or None (default)\n The name of the view to drop.\n model : str or None (default)\n The name of the model to drop.\n \n grant(name, typ='datastore', user=None)\n Grants read privilege for a Python script or datastore.\n Requires the user to have the `PYQADMIN` Oracle Database role.\n \n Parameters\n ----------\n name : str\n The name of Python script in the Python script repository or the name of\n a datastore. The current user must be the owner of the Python script or\n datastore.\n typ : 'datastore' (default) or 'pyqscript'\n A str specifying either 'datastore' or 'pyqscript' to grant the\n read privilege. 'pyqscript' requires Embedded Python.\n user : str or None (default)\n The user to grant read privilege of the named Python script or datastore\n to. Treated as case-sensitive if wrapped in double quotes. Treated as\n case-insensitive otherwise. 
If None, grant read privilege to public.\n \n group_apply(data, index, func, func_owner=None, parallel=None, orderby=None, graphics=False, **kwargs)\n Partitions database data by the column(s) specified in ``index``\n and runs the user-defined Python function on each partition using \n Python engines spawned and controlled by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The OML DataFrame that represents the in-database data that ``func`` is\n applied on.\n index : OML data object\n The columns to partition the ``data`` before sending it to ``func``.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by \n service level in ADW.\n orderby : oml.DataFrame, oml.Float, or oml.String\n An optional argument used to specify the ordering of group partitions.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : dict \n If no image is rendered in the script, returns a dict of Python objects\n returned 
by the function. Otherwise, returns a dict of \n oml.embed.data_image._DataImage objects. See :ref:`more-output`.\n \n hist(x, bins=None, range=None, density=False, weights=None, cumulative=False, bottom=None, align='mid', orientation='vertical', rwidth=None, log=False, color=None, label=None, **kwargs)\n Plots a histogram.\n \n Computes and draws a histogram for every data set column contained in ``x``.\n \n Parameters\n ----------\n x : oml.Float\n bins : int, strictly monotonic increasing sequence, 'auto', 'doane', 'fd', 'rice', 'scott', 'sqrt', or 'sturges', optional\n * If an integer, denotes the number of equal width bins to generate.\n * If a sequence, denotes bin edges and overrides the values of ``range``.\n * If a string, denotes the estimator to use to calculate the optimal number\n of bins. 'auto' is the maximum of the 'fd' and 'sturges' estimators.\n * Default is taken from the matplotlib rcParam ``hist.bins``.\n weights : oml.Float\n Must come from the same table as ``x``.\n cumulative : int, float, or boolean, False (default)\n If greater than zero, then a histogram is computed where each bin gives\n the counts in that bin plus all bins for smaller values. If ``density``\n is also True, then the histogram is normalized so the last bin equals 1.\n If less than zero, the direction of accumulation is reversed. In this\n case, if ``density`` is True, then the histogram is normalized so that the\n first bin equals 1. \n rwidth : int, float, or None (default)\n Ratio of the width of the bars to the bin widths. Values less than 0\n are treated as 0. Values more than 1 are treated as 1. If None,\n defaults to 1.\n color : str that indicates a color spec or None (default)\n If None, use the standard line color sequence. 
\n label : str or None (default) \n The label that is applied to the first patch of the histogram.\n \n Notes\n -----\n For information on the other parameters, see documentation for :py:func:`matplotlib.pyplot.hist`.\n \n Returns\n -------\n n : :py:class:`numpy.ndarray`\n The values of the histogram bins. \n bins : :py:class:`numpy.ndarray`\n The edges of the bins. An array of length #bins + 1.\n patches : list of :py:class:`matplotlib.patches.Rectangle`\n Individual patches used to create the histogram.\n \n index_apply(times, func, func_owner=None, parallel=None, graphics=False, **kwargs)\n Runs the user-defined Python function multiple times, passing the \n run index as first argument, using Python engines spawned and controlled \n by the database environment.\n \n Parameters\n ----------\n times : int\n The number of times to execute the function.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by\n service level in ADW.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional 
arguments to ``func``.\n \n Returns\n -------\n result : list\n If no image is rendered in the script, returns a list of Python objects\n returned by the function. Otherwise, returns a list of \n oml.embed.data_image._DataImage objects. See :ref:`more-output`.\n \n isconnected(check_automl=False)\n Indicates whether an active Oracle Database connection exists.\n \n Parameters\n ----------\n check_automl: bool, False (default)\n Indicates whether to check the connection is automl-enabled.\n \n Returns\n -------\n connected : bool\n \n push(x, oranumber=True, dbtypes=None)\n Pushes data into Oracle Database.\n \n Creates an internal table in Oracle Database and inserts the data\n into the table. The table exists as long as an OML object (either\n in the Python client or saved in the datastore) references the table.\n \n Parameters\n ----------\n x : pandas.DataFrame or a list of tuples of equal size\n If ``x`` is a list of tuples of equal size, each tuple represents\n a row in the table. The column names are set to COL1, COL2, ... and so on.\n oranumber : bool\n If True (default), use SQL NUMBER for numeric columns. Otherwise\n use BINARY_DOUBLE. Ignored if ``append`` is True.\n dbtypes : dict or list of str\n The SQL data types to use in the table.\n \n Notes\n -----\n * When creating a new table, for columns whose SQL types are not specified in\n ``dbtypes``, NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. Users should set\n ``oranumber`` to False when the data contains NaN values. For string columns,\n the default type is VARCHAR2(4000), and for bytes columns, the default type\n is BLOB.\n * When ``x`` is specified with an empty pandas.DataFrame, OML creates an\n empty table. NUMBER is used for numeric columns when ``oranumber`` is True\n and BINARY_DOUBLE is used when ``oranumber`` is False. 
VARCHAR2(4000) is\n used for columns of object dtype in the pandas.DataFrame.\n * OML does not support columns containing values of multiple data types,\n data conversion is needed or a TypeError may be raised.\n * OML determines default column types by looking at 20 random rows sampled\n from the table. For tables with less than 20 rows, all rows are used\n in column type determination. NaN values are considered as float type.\n If a column has all Nones, or has inconsistent data types that are not\n None in the sampled rows, a default column type cannot be determined,\n and a ValueError is raised unless a SQL type for the column is specified\n in ``dbtypes``.\n \n Returns\n -------\n temp_table : oml.DataFrame\n \n revoke(name, typ='datastore', user=None)\n Revokes read privilege for a Python script or datastore.\n Requires the user to have the `PYQADMIN` Oracle Database role.\n \n Parameters\n ----------\n name : str\n The name of Python script in the Python script repository or the name of\n a datastore. The current user must be the owner of the Python script or\n datastore.\n typ : 'datastore' (default) or 'pyqscript'\n A str specifying either 'datastore' or 'pyqscript' to revoke the\n read privilege. 'pyqscript' requires Embedded Python.\n user : str or None (default)\n The user to revoke read privilege of the named Python script or datastore\n from. Treated as case-sensitive if wrapped in double quotes. Treated as\n case-insensitive otherwise. 
If None, revoke read privilege from public.\n \n row_apply(data, func, func_owner=None, rows=1, parallel=None, graphics=False, **kwargs)\n Partitions database data into chunks of rows and runs the user-defined \n Python function on each chunk using Python engines spawned and controlled \n by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The OML DataFrame that represents the in-database data that ``func``\n is applied on.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n rows : int, 1 (default)\n The maximum number of rows in each chunk.\n parallel : bool or int or None (default)\n A preferred degree of parallelism to use in the embedded Python job;\n either a positive integer greater than or equal to 1\n for a specific degree of parallelism,\n a value of 'None', 'False' or '0' for no parallelism,\n a value of 'True' for the ``data`` default parallelism.\n Cannot exceed the degree of parallelism limit controlled by \n service level in ADW.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : pandas.DataFrame or a list of oml.embed.data_image._DataImage\n If no image is rendered in the script, returns a :py:class:`pandas.DataFrame`.\n Otherwise, returns a list of oml.embed.data_image._DataImage objects.\n See :ref:`more-output`.\n \n sync(schema=None, 
regex_match=False, **kwargs)\n Creates a DataFrame proxy object in Python that represents an Oracle\n Database data set.\n \n The data set can be one of the following: a database table, view, or query.\n \n Parameters\n ----------\n schema : str or None (default)\n The name of the schema where the database object exists;\n if None, then the current schema is used.\n regex_match : bool, False (default)\n Synchronizes tables or views that match a regular expression.\n Ignored if ``query`` is used.\n table, view, query : str or None (default)\n The name of a table, of a view, or of an Oracle SQL query to select\n from the database. When ``regex_match`` is True, this specifies the\n name pattern. Exactly one of these parameters must be a str and the\n other two must be None.\n \n Notes\n -----\n When ``regex_match`` is True, synchronizes the matched tables or views\n to a dict with the table or view name as the key.\n \n Returns\n -------\n data_set : oml.DataFrame, or if ``regex_match`` is used, returns\n a dict of oml.DataFrame\n \n table_apply(data, func, func_owner=None, graphics=False, **kwargs)\n Runs the user-defined Python function with data pulled from \n a database table or view using a Python engine spawned and \n controlled by the database environment.\n \n Parameters\n ----------\n data : oml.DataFrame\n The oml.DataFrame that represents the data ``func`` is applied on.\n func : function, str or :py:func:`oml.script.Callable `\n ``func`` can be one of the following:\n \n * A Python function.\n * Name of a registered script that defines a Python function.\n * A string that if evaluated, defines a Python function.\n * A callable object returned from ``script_load`` function.\n func_owner : str or None (default)\n An optional value specifying the owner of the registered script\n when argument ``func`` is set to a registered script name.\n graphics : bool, False (default)\n If True, images rendered from extant :py:class:`matplotlib.figure.Figure`\n objects are 
included in the result.\n **kwargs :\n Contains any combinaton of the following:\n \n * :ref:`special-control`\n * additional arguments to ``func``.\n \n Returns\n -------\n result : Python object or oml.embed.data_image._DataImage\n If no image is rendered in the script, returns whatever Python object returned\n by the function. Otherwise, returns an oml.embed.data_image._DataImage object.\n See :ref:`more-output`.\n\nDATA\n __all__ = ['connect', 'disconnect', 'isconnected', 'check_embed', 'cur...\n __build_serial__ = '1.0_08122021_1858'\n\nVERSION\n 1.0\n\nFILE\n /usr/local/lib/python3.9/site-packages/oml/__init__.py\n\n\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_-1634618100","id":"20211001-190306_1526941016","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:47+0000","dateFinished":"2021-09-22T20:18:48+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:385"},{"text":"%md\n## Learn More\n\n* Get Started with OML4Py and OML Notebooks\n* Oracle Machine Learning Notebooks\n \n**Last Updated Date** - September 2021\n \nCopyright (c) 2021 Oracle Corporation \n###### The Universal Permissive License (UPL), Version 1.0\n---","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:48+0000","config":{"editorSetting":{"language":"md","editOnDblClick":false},"colWidth":12,"editorMode":"ace/mode/markdown","fontSize":9,"results":{},"enabled":true,"editorHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"

Learn More

\n\n

Last Updated Date - September 2021

\n

Copyright (c) 2021 Oracle Corporation

\n
The Universal Permissive License (UPL), Version 1.0
\n
\n"}]},"interrupted":false,"jobName":"paragraph_1633114986450_1016371808","id":"20211001-190306_68494635","dateCreated":"2021-03-22T18:18:00+0000","dateStarted":"2021-09-22T20:18:48+0000","dateFinished":"2021-09-22T20:18:48+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":true,"$$hashKey":"object:386"},{"text":"%md\n","user":"OMLUSER","dateUpdated":"2021-09-22T20:18:48+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"md","editOnDblClick":false},"editorMode":"ace/mode/markdown"},"settings":{"params":{},"forms":{}},"interrupted":false,"jobName":"paragraph_1633114986450_-1060179910","id":"20211001-190306_1958773210","dateCreated":"2021-04-23T13:27:18+0000","status":"FINISHED","progressUpdateIntervalMs":500,"commited":false,"$$hashKey":"object:387"}],"id":"79628","noteParams":{},"noteForms":{},"angularObjects":{"ORA221A1BC345:OMLUSER:78556":[],"ORA36CEFD120D:OMLUSER:78556":[],"ORA9405AD2E1E:OMLUSER:78556":[],"MDW276C5A6A4D:shared_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{},"name":"Lab 1: Get Started with OML4Py on Autonomous Database"} \ No newline at end of file diff --git a/machine-learning/labs/oml4py-live-labs/Lab2_Select_and_manipulate_data_using_the_Transparency_Layer.json b/machine-learning/labs/oml4py-live-labs/Lab2_Select_and_manipulate_data_using_the_Transparency_Layer.json index 1ea73aba..5d2a3527 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab2_Select_and_manipulate_data_using_the_Transparency_Layer.json +++ b/machine-learning/labs/oml4py-live-labs/Lab2_Select_and_manipulate_data_using_the_Transparency_Layer.json @@ -84,7 +84,7 @@ "commited": true }, { - "text": "%md\n---\n1.2. Here, you load the IRIS data and combine the target and predictors into a single DataFrame, which matches the form the data would have as a database table. 
You use the `oml.push` function to load this Pandas DataFrame into the database, which creates a temporary table and returns a proxy object that you assign to IRIS_TMP.\n\nSuch temporary tables will be automatically deleted when the database connection is terminated unless saved in a datastore. You learn more about datastore in lab 4.\nIn OML Notebooks, you use the notebook-context `z.show` method to display Python objects and proxy object content. \n \nDisplay the first few rows of IRIS_TMP using `z.show` for displaying DataFrame results in the notebook viewer.", + "text": "%md\n\n1.2. Here, you load the IRIS data and combine the target and predictors into a single DataFrame, which matches the form the data would have as a database table. You use the `oml.push` function to load this Pandas DataFrame into the database, which creates a temporary table and returns a proxy object that you assign to IRIS_TMP.\n\nSuch temporary tables will be automatically deleted when the database connection is terminated unless saved in a datastore. You learn more about datastore in lab 4.\nIn OML Notebooks, you use the notebook-context `z.show` method to display Python objects and proxy object content. \n \nDisplay the first few rows of IRIS_TMP using `z.show` for displaying DataFrame results in the notebook viewer.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:18+0000", "config": { @@ -338,7 +338,7 @@ "commited": true }, { - "text": "%md\n---\n1.4. Use the transparency layer function `shape` to view the shape, or number of rows and columns, of the IRIS table.", + "text": "%md\n\n1.4. Use the transparency layer function `shape` to view the shape, or number of rows and columns, of the IRIS table.", "user": "OMLUSER", "dateUpdated": "2021-10-01T19:04:33+0000", "config": { @@ -416,7 +416,7 @@ "commited": true }, { - "text": "%md\n---\n1.5. Use the transparency layer function `head()` to view the first three rows of the IRIS table.\n", + "text": "%md\n\n1.5. 
Use the transparency layer function `head()` to view the first three rows of the IRIS table.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:23+0000", "config": { @@ -493,7 +493,7 @@ "commited": true }, { - "text": "%md\n---\n1.6. Use the transparency layer function `describe()` to calculate descriptive statistics that summarize the central tendency, dispersion, and shape of the IRIS table in each numeric column.\n", + "text": "%md\n\n1.6. Use the transparency layer function `describe()` to calculate descriptive statistics that summarize the central tendency, dispersion, and shape of the IRIS table in each numeric column.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:25+0000", "config": { @@ -706,7 +706,7 @@ "commited": true }, { - "text": "%md\n---\n2.2 Run the following script to select specific columns by name and return the first three records with the specified column names.\n", + "text": "%md\n\n2.2 Run the following script to select specific columns by name and return the first three records with the specified column names.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:29+0000", "config": { @@ -812,7 +812,7 @@ "commited": true }, { - "text": "%md\n---\n2.3. Select the first four columns in `CUST_DF` using an index range. Note that Python uses `0` as a starting element for indexing. The starting element is the first column, `MARITAL_STATUS`. The second element is the second column, `STATE`, and so on. The script returns the columns by specified by the index range.", + "text": "%md\n\n2.3. Select the first four columns in `CUST_DF` using an index range. Note that Python uses `0` as a starting element for indexing. The starting element is the first column, `MARITAL_STATUS`. The second element is the second column, `STATE`, and so on. 
The script returns the columns specified by the index range.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:30+0000", "config": { @@ -922,7 +922,7 @@ "commited": true }, { - "text": "%md\n---\n2.4. Run the following script to select columns by data type, specifying the data type as `oml.string`. The script returns the MARITAL_STATUS, STATE, CUSTOMER_ID, GENDER, PROFESSION, REGION, BUY_INSURANCE, LTV_BIN, FIRST_NAME, LAST_NAME columns.", + "text": "%md\n\n2.4. Run the following script to select columns by data type, specifying the data type as `oml.string`. The script returns the MARITAL_STATUS, STATE, CUSTOMER_ID, GENDER, PROFESSION, REGION, BUY_INSURANCE, LTV_BIN, FIRST_NAME, LAST_NAME columns.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:32+0000", "config": { @@ -1148,7 +1148,7 @@ "commited": true }, { - "text": "%md\n---\n3.2. This step shows an example of a compound row selection using the `OR` filtering condition, denoted by the `|` symbol. Run the following scripts to select all rows in which `MORTGAGE_AMOUNT` is `less than 1500` OR `STATE` is `equal to MA`:", + "text": "%md\n\n3.2. This step shows an example of a compound row selection using the `OR` filtering condition, denoted by the `|` symbol. Run the following script to select all rows in which `MORTGAGE_AMOUNT` is `less than 1500` OR `STATE` is `equal to MA`:", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:35+0000", "config": { @@ -1288,7 +1288,7 @@ "commited": true }, { - "text": "%md\n---\n3.3. This step shows an example of a compound row selection using `AND`, denoted by the `\u0026` symbol. Run the following script to select all rows in which `MORTGAGE_AMOUNT` is `less than 1500` AND `BANK_FUNDS` is `greater than 5000`:\n", + "text": "%md\n\n3.3. This step shows an example of a compound row selection using `AND`, denoted by the `\u0026` symbol. 
Run the following script to select all rows in which `MORTGAGE_AMOUNT` is `less than 1500` AND `BANK_FUNDS` is `greater than 5000`:\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:36+0000", "config": { @@ -1468,7 +1468,7 @@ "commited": true }, { - "text": "%md\n---\n4.1.1. In this step, you use the `append()` function to append an `oml.Float` series object to another, and then append an `oml.DataFrame` object to another.\n\n**Note:** An `oml.Float` is numeric series data class that represents a single column of `NUMBER`, `BINARY_DOUBLE`, or `BINARY_FLOAT` database data types.\n", + "text": "%md\n\n4.1.1. In this step, you use the `append()` function to append an `oml.Float` series object to another, and then append an `oml.DataFrame` object to another.\n\n**Note:** An `oml.Float` is a numeric series data class that represents a single column of `NUMBER`, `BINARY_DOUBLE`, or `BINARY_FLOAT` database data types.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:38+0000", "config": { @@ -1579,7 +1579,7 @@ "commited": true }, { - "text": "%md\n---\n### 4.2. Use the replace () function\n\n4.2.1. In this step, you use the `replace()` function to replace a particular value in columns of an OML dataframe. ", + "text": "%md\n\n### 4.2. Use the replace() function\n\n4.2.1. In this step, you use the `replace()` function to replace a particular value in columns of an OML dataframe. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:40+0000", "config": { @@ -1865,7 +1865,7 @@ "commited": true }, { - "text": "%md\n---\n4.3.2. This step shows how to create an `oml.Float` object by taking product of `MORTGAGE_AMOUNT` column and `NUM_MORTGAGES` column, and then concatenate it with the preivous `oml.DataFrame`, with a new column name `MORTGAGE_TOTAL`. \n**Note:** An oml.Float is a numeric series data class that represents a single column of `NUMBER`, `BINARY_DOUBLE`, or `BINARY_FLOAT` database data types.\n\n", + "text": "%md\n\n4.3.2. 
This step shows how to create an `oml.Float` object by taking the product of the `MORTGAGE_AMOUNT` and `NUM_MORTGAGES` columns, and then concatenating it with the previous `oml.DataFrame`, with a new column name `MORTGAGE_TOTAL`. \n**Note:** An oml.Float is a numeric series data class that represents a single column of `NUMBER`, `BINARY_DOUBLE`, or `BINARY_FLOAT` database data types.\n\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:44+0000", "config": { @@ -1974,7 +1974,7 @@ "commited": true }, { - "text": "%md\n---\n4.3.3. Concatenate object CUST_DF with ID2 and turn on automatic name conflict resolution. In this example, `auto_name\u003dTrue` controls whether to call automatic name conflict resolution if one or more column names are duplicates in the two data frames:\n", + "text": "%md\n\n4.3.3. Concatenate object CUST_DF with ID2 and turn on automatic name conflict resolution. In this example, `auto_name\u003dTrue` controls whether to call automatic name conflict resolution if one or more column names are duplicates in the two data frames:\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:45+0000", "config": { @@ -2081,7 +2081,7 @@ "commited": true }, { - "text": "%md\n---\n4.3.4. Run the following script to concatenate multiple OML data objects and perform customized renaming. ", + "text": "%md\n\n4.3.4. Run the following script to concatenate multiple OML data objects and perform customized renaming. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:47+0000", "config": { @@ -2299,7 +2299,7 @@ "commited": true }, { - "text": "%md\n---\n4.4.2. Run the following script to perform a left outer join on SUBSET1 with the `oml.DataFrame` object on the shared column `CUSTOMER_ID` and apply the suffixes `.l` and `.r` to column names on the left and right side, respectively.\n", + "text": "%md\n\n4.4.2. 
Run the following script to perform a left outer join on SUBSET1 with the `oml.DataFrame` object on the shared column `CUSTOMER_ID` and apply the suffixes `.l` and `.r` to column names on the left and right side, respectively.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:55+0000", "config": { @@ -2583,7 +2583,7 @@ "commited": true }, { - "text": "%md\n---\n4.5.2 Run the script below to check the number of rows with `NULL` for `MARITAL_STATUS` in the `CUSTOMER_INSURANCE_LTV_NOISE` proxy object. Then we check the top 5 records.", + "text": "%md\n\n4.5.2 Run the script below to check the number of rows with `NULL` for `MARITAL_STATUS` in the `CUSTOMER_INSURANCE_LTV_NOISE` proxy object. Then we check the top 5 records.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:23:58+0000", "config": { @@ -2779,7 +2779,7 @@ "commited": true }, { - "text": "%md\n---\n4.5.3. Run the script to drop rows with any missing values and create a new proxy object `RES_DF`. Verify its shape and first 100 records.", + "text": "%md\n\n4.5.3. Run the script to drop rows with any missing values and create a new proxy object `RES_DF`. Verify its shape and first 100 records.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:24:00+0000", "config": { @@ -2976,7 +2976,7 @@ "commited": true }, { - "text": "%md\n---\n4.5.4. Run the following script to drop rows with missing values for a particular subset of colummns and create a new proxy object `RES_DF`. Verify its shape and first 100 records. ", + "text": "%md\n\n4.5.4. Run the following script to drop rows with missing values for a particular subset of colummns and create a new proxy object `RES_DF`. Verify its shape and first 100 records. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:24:01+0000", "config": { @@ -3313,7 +3313,7 @@ "commited": true }, { - "text": "%md\n---\n4.5.6. Use the `drop_duplicates()` function to drop duplicate rows. 
In this case, because we are requesting only the columns `STATE` and `MARITAL_STATUS`, the final unique combination of these columns has a much smaller count than the original data. ", + "text": "%md\n\n4.5.6. Use the `drop_duplicates()` function to drop duplicate rows. In this case, because we are requesting only the columns `STATE` and `MARITAL_STATUS`, the final unique combination of these columns has a much smaller count than the original data. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:24:04+0000", "config": { @@ -3452,7 +3452,7 @@ "commited": true }, { - "text": "%md\n---\n4.5.7. Run the following script to drop specific columns and create a new proxy object `DROPPED_DF`. Verify its shape and first 100 records.", + "text": "%md\n\n4.5.7. Run the following script to drop specific columns and create a new proxy object `DROPPED_DF`. Verify its shape and first 100 records.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:24:06+0000", "config": { @@ -3722,7 +3722,7 @@ "commited": true }, { - "text": "%md\n---\n5.2. Run the following script to split the data set into samples of 20% and 80% size.", + "text": "%md\n\n5.2. Run the following script to split the data set into samples of 20% and 80% size.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:11:29+0000", "config": { @@ -3800,7 +3800,7 @@ "commited": true }, { - "text": "%md\n---\n5.3. Run the following script to perform stratified sampling on the column `BUY_INSURANCE`. In this example, the column in which you perform stratified sampling is `BUY_INSURANCE`. Stratified sampling divides the data into different groups based on shared characteristics prior to sampling, ensuring that members from each subgroup is included in the analysis.", + "text": "%md\n\n5.3. Run the following script to perform stratified sampling on the column `BUY_INSURANCE`. In this example, the column in which you perform stratified sampling is `BUY_INSURANCE`. 
Stratified sampling divides the data into different groups based on shared characteristics prior to sampling, ensuring that members from each subgroup are included in the analysis.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:11:40+0000", "config": { @@ -3878,7 +3878,7 @@ "commited": true }, { - "text": "%md\n---\n5.4. Verify that the stratified sampling generates splits in which all of the different categories of `BUY_INSURANCE` (\u0027No\u0027 and \u0027Yes\u0027) are present in each split", + "text": "%md\n\n5.4. Verify that the stratified sampling generates splits in which all of the different categories of `BUY_INSURANCE` (\u0027No\u0027 and \u0027Yes\u0027) are present in each split", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:11:48+0000", "config": { @@ -3957,7 -3957,7 @@ "commited": true }, { - "text": "%md\n---\n5.5. Split by computing hash on the target column.", + "text": "%md\n\n5.5. Split by computing hash on the target column.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:11:56+0000", "config": { @@ -4035,7 +4035,7 @@ "commited": true }, { - "text": "%md\n---\n5.6. Verify that the different categories of `BUY_INSURANCE` are present in only one of the splits generated by hashing on the category column:", + "text": "%md\n\n5.6. Verify that the different categories of `BUY_INSURANCE` are present in only one of the splits generated by hashing on the category column:", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:12:05+0000", "config": { @@ -4113,7 +4113,7 @@ "commited": true }, { - "text": "%md\n---\n5.7. Split the data randomly into 4 consecutive folds using the `KFold` function.", + "text": "%md\n\n5.7. Split the data randomly into 4 consecutive folds using the `KFold` function.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:12:12+0000", "config": { @@ -4304,7 +4304,7 @@ "commited": true }, { - "text": "%md\n---\n6.2. Use the `crosstab` function to find the categories that the most entries belonged to. 
This example shows how to use the `crosstab` function to find the count of state and gender customers in descending order in the CUST_DF OML frame", + "text": "%md\n\n6.2. Use the `crosstab` function to find the categories that the most entries belonged to. This example shows how to use the `crosstab` function to find the count of customers by state and gender, in descending order, in the CUST_DF OML DataFrame.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:12:38+0000", "config": { @@ -4412,7 +4412,7 @@ "commited": true }, { - "text": "%md\n---\n6.3. Use the `crosstab` function to find the ratio of the genders for State and across entries in the `CUST_DF` dataframe.", + "text": "%md\n\n6.3. Use the `crosstab` function to find the ratio of the genders for State and across entries in the `CUST_DF` dataframe.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:12:49+0000", "config": { @@ -4520,7 +4520,7 @@ "commited": true }, { - "text": "%md\n---\n6.4. Use the `pivot_table` function to find the mean bank funds across all State and gender combinations in the `CUST_DF` dataframe.", + "text": "%md\n\n6.4. Use the `pivot_table` function to find the mean bank funds across all State and gender combinations in the `CUST_DF` dataframe.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:13:01+0000", "config": { @@ -4628,7 +4628,7 @@ "commited": true }, { - "text": "%md\n---\n6.5. Use `pivot_table` function to find the maximum and minimum bank funds for every State and gender combination (plus the overall for all genders) in the `CUST_DF` dataframe:\n", + "text": "%md\n\n6.5. Use the `pivot_table` function to find the maximum and minimum bank funds for every State and gender combination (plus the overall for all genders) in the `CUST_DF` dataframe:\n", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:13:10+0000", "config": { @@ -4818,7 +4818,7 @@ "commited": true }, { - "text": "%md\n---\n7.2. 
Run the following script to render the data in a histogram for `MORTGAGE_AMOUNT`.", + "text": "%md\n\n7.2. Run the following script to render the data in a histogram for `MORTGAGE_AMOUNT`.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:13:33+0000", "config": { @@ -5010,7 +5010,7 @@ "commited": true }, { - "text": "%md\n---\n8.1.2. Run the following script to view all persistent database tables that match the name `IRIS`.", + "text": "%md\n\n8.1.2. Run the following script to view all persistent database tables that match the name `IRIS`.", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:14:05+0000", "config": { @@ -5167,7 +5167,7 @@ "commited": true }, { - "text": "%md\n---\n9.2. To close the cursor, run `cr.close`.\n", + "text": "%md\n\n9.2. To close the cursor, run `cr.close`.\n", "user": "OMLUSER", "dateUpdated": "2021-09-24T17:18:30+0000", "config": { diff --git a/machine-learning/labs/oml4py-live-labs/Lab3_Use_in_database_algorithms_and_models.json b/machine-learning/labs/oml4py-live-labs/Lab3_Use_in_database_algorithms_and_models.json index ada4e6ec..4018b3dd 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab3_Use_in_database_algorithms_and_models.json +++ b/machine-learning/labs/oml4py-live-labs/Lab3_Use_in_database_algorithms_and_models.json @@ -147,7 +147,7 @@ "commited": true }, { - "text": "%md\n---\n2.2. Run the following command to display the first few rows of table `CUST_DF`:", + "text": "%md\n\n2.2. Run the following command to display the first few rows of table `CUST_DF`:", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:09+0000", "config": { @@ -283,7 +283,7 @@ "commited": true }, { - "text": "%md\n---\n2.3. Run the following script to randomly split and select the data into 60% for train and 40% for test.", + "text": "%md\n\n2.3. 
Run the following script to randomly split and select the data into 60% for train and 40% for test.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:11+0000", "config": { @@ -356,7 +356,7 @@ "commited": true }, { - "text": "%md\n---\n2.4. Now, build a GLM regression model for predicting lifetime value `LTV` using the `oml.glm` function.\n\nThis method runs the `oml.glm` algorithm in-database using the given settings. The settings are supplied as key-value pairs. In this example, feature generation and feature selection are specified.\n\n**Note:** For a complete list of settings, refer to the OML4Py product documentation. You may also refer to the `oml.glm` help file.\n", + "text": "%md\n\n2.4. Now, build a GLM regression model for predicting lifetime value `LTV` using the `oml.glm` function.\n\nThis method runs the `oml.glm` algorithm in-database using the given settings. The settings are supplied as key-value pairs. In this example, feature generation and feature selection are specified.\n\n**Note:** For a complete list of settings, refer to the OML4Py product documentation. You may also refer to the `oml.glm` help file.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:13+0000", "config": { @@ -434,7 +434,7 @@ "commited": true }, { - "text": "%md\n---\n2.5. Run the following script to view model fit details to understand the key statistics of the model. Locate the values of Root Mean Square Error `ROOT_MEAN_SQ` and R-squared `R_SQ` from the output. RMSE and R-squared are used to evaluate baseline performance of the model.\n\n* RMSE is a measure of the differences between values predicted by a model and the values observed. A good model should have a low RMSE. But at the same time, a model with very low RMSE has the potential to overfit.\n* R-Squared is a statistical measure that represents the goodness of fit of a regression model. The ideal value for R-squared is 1. The closer the value of R-squared is to 1, the better the model fit. 
For instance, if the R-squared of a model is 0.50, then approximately half of the observed variation can be explained by the model\u0027s inputs", + "text": "%md\n\n2.5. Run the following script to view model fit details to understand the key statistics of the model. Locate the values of Root Mean Square Error `ROOT_MEAN_SQ` and R-squared `R_SQ` from the output. RMSE and R-squared are used to evaluate baseline performance of the model.\n\n* RMSE is a measure of the differences between values predicted by a model and the values observed. A good model should have a low RMSE. But at the same time, a model with very low RMSE has the potential to overfit.\n* R-Squared is a statistical measure that represents the goodness of fit of a regression model. The ideal value for R-squared is 1. The closer the value of R-squared is to 1, the better the model fit. For instance, if the R-squared of a model is 0.50, then approximately half of the observed variation can be explained by the model\u0027s inputs", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:30+0000", "config": { @@ -541,7 +541,7 @@ "commited": true }, { - "text": "%md\n---\n2.6. Run the following command to display and view the model coefficients:", + "text": "%md\n\n2.6. Run the following command to display and view the model coefficients:", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:32+0000", "config": { @@ -652,7 +652,7 @@ "commited": true }, { - "text": "%md\n---\n2.7. Run the following script to make predictions using the test data and display the results. From the RES_DF DataFrame proxy object, the predicted values and lifetime value are displayed in the `PREDICTION` and `LTV` columns respectively.\n", + "text": "%md\n\n2.7. Run the following script to make predictions using the test data and display the results. 
From the RES_DF DataFrame proxy object, the predicted values and lifetime value are displayed in the `PREDICTION` and `LTV` columns respectively.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:34+0000", "config": { @@ -760,7 +760,7 @@ "commited": true }, { - "text": "%md\n---\n2.8. Run the following command to plot the predicted versus the actual years of residence and then click the **Scatter Chart** icon to see the visualization. Click **Settings** to see how the plot was specified.", + "text": "%md\n\n2.8. Run the following command to plot the predicted versus the actual years of residence and then click the **Scatter Chart** icon to see the visualization. Click **Settings** to see how the plot was specified.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:36+0000", "config": { @@ -868,7 +868,7 @@ "commited": true }, { - "text": "%md\n---\n2.9. Using matplotlib, plot the predicted and actual years of residence and visually compare it against the perfect fit line, `y\u003dx` in `red`. The plot indicates how far the prediction deviated from actual value, which is known as the prediction error. Ideally, the predictions will converge to the perfect fit line.", + "text": "%md\n\n2.9. Using matplotlib, plot the predicted and actual years of residence and visually compare it against the perfect fit line, `y\u003dx` in `red`. The plot indicates how far the prediction deviated from actual value, which is known as the prediction error. Ideally, the predictions will converge to the perfect fit line.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:38+0000", "config": { @@ -908,7 +908,7 @@ "commited": true }, { - "text": "%md\n---\n\n\n2.10. Run the following script to plot the residuals using matplotlib. Ideally prediction errors (the residuals) would show up as small random noise around the perfect fit `y\u003d0` line in `red`. 
When there are strange patterns found, usually that means that some other confounding effect is at play, and further investigation is required to identify more attributes that might not be considered currently.", + "text": "%md\n\n\n\n2.10. Run the following script to plot the residuals using matplotlib. Ideally prediction errors (the residuals) would show up as small random noise around the perfect fit `y\u003d0` line in `red`. When strange patterns are found, it usually means that some other confounding effect is at play, and further investigation is required to identify more attributes that might not currently be considered.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:39+0000", "config": { @@ -956,7 +956,7 @@ "commited": true }, { - "text": "%md\n---", + "text": "%md\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:40+0000", "config": { @@ -1071,7 +1071,7 @@ "commited": true }, { - "text": "%md\n---\n2.11. Run the following script to calculate the RMSE manually on the prediction results on the testing test and the R- squared on the testing set using the score method.\n\n**Note:** Both the RMSE and R-squared calculations are similar to the values produced by `oml.glm`.", + "text": "%md\n\n2.11. Run the following script to calculate the RMSE manually on the prediction results on the testing set and the R-squared on the testing set using the score method.\n\n**Note:** Both the RMSE and R-squared calculations are similar to the values produced by `oml.glm`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:45+0000", "config": { @@ -1222,7 +1222,7 @@ "commited": true }, { - "text": "%md\n---\n3.2. To display and view the K-means model details, run the following command:", + "text": "%md\n\n3.2. To display and view the K-means model details, run the following command:", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:53+0000", "config": { @@ -1300,7 +1300,7 @@ "commited": true }, { - "text": "%md\n---\n3.3. 
To view the cluster details, run the following command. The command displays the cluster details for all clusters in the hierarchy with row counts and dispersion.\nThe dispersion value is a measure of how compact or how spread out the data is within a cluster. The dispersion value is a number greater than 0. The lower the dispersion value, the more compact the cluster, that is, the data points are closer to the centroid of the cluster. A larger dispersion value indicates that the data points are more disperse or spread out from the centroid.\n", + "text": "%md\n\n3.3. To view the cluster details, run the following command. The command displays the cluster details for all clusters in the hierarchy with row counts and dispersion.\nThe dispersion value is a measure of how compact or how spread out the data is within a cluster. The dispersion value is a number greater than 0. The lower the dispersion value, the more compact the cluster, that is, the data points are closer to the centroid of the cluster. A larger dispersion value indicates that the data points are more disperse or spread out from the centroid.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:55+0000", "config": { @@ -1410,7 +1410,7 @@ "commited": true }, { - "text": "%md\n---\n3.4. Run the following script to display the taxonomy. The taxonomy shows the hierarchy of the child clusters in relationship to the parent clusters.", + "text": "%md\n\n3.4. Run the following script to display the taxonomy. The taxonomy shows the hierarchy of the child clusters in relationship to the parent clusters.", "user": "OMLUSER", "dateUpdated": "2021-10-01T19:24:47+0000", "config": { @@ -1517,7 +1517,7 @@ "commited": true }, { - "text": "%md\n---\n3.5. Run the following command to predict the cluster membership. The `supplemental_cols` argument carries the target column to the output to retain the relationship between the predictions and their original preditor values. 
These predictors may include a case id, for example to join with other tables, or multiple (or all) columns of the scoring data. You should be aware that unlike Pandas DataFrames, which are explicitly ordered in memory, results from relational databases do not have a specific order unless explicitly specified by an `ORDER BY` clause. As such, you cannot rely on results to maintain the same order across different data sets (tables and DataFrame proxy objects).", + "text": "%md\n\n3.5. Run the following command to predict the cluster membership. The `supplemental_cols` argument carries the target column to the output to retain the relationship between the predictions and their original predictor values. These predictors may include a case id, for example to join with other tables, or multiple (or all) columns of the scoring data. You should be aware that unlike Pandas DataFrames, which are explicitly ordered in memory, results from relational databases do not have a specific order unless explicitly specified by an `ORDER BY` clause. As such, you cannot rely on results to maintain the same order across different data sets (tables and DataFrame proxy objects).", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:50:59+0000", "config": { @@ -1590,7 +1590,7 @@ "commited": true }, { - "text": "%md\n---\n3.6. Run the following command to view the pred computed in step 5.", + "text": "%md\n\n3.6. Run the following command to view the pred computed in step 5.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:51:01+0000", "config": { @@ -1727,7 +1727,7 @@ "commited": true }, { - "text": "%md\n---\n3.7. Run the following command to view the cluster results using a matplotlib scatterplot. We are using a sample of 10% of the total data (by using `sample(0.1)`) to make it more visible.", + "text": "%md\n\n3.7. Run the following command to view the cluster results using a matplotlib scatterplot. 
We are using a sample of 10% of the total data (by using `sample(0.1)`) to make it more visible.", "user": "OMLUSER", "dateUpdated": "2021-09-22T20:59:36+0000", "config": { @@ -1878,7 +1878,7 @@ "commited": true }, { - "text": "%md\n---\n4.2. Build a partitioned model using the SVM algorithm to predict `LTV`, partitioned by `GENDER`. The script builds an SVM partitioned model. Scroll down the notebook paragraph for complete details of the model.\n", + "text": "%md\n\n4.2. Build a partitioned model using the SVM algorithm to predict `LTV`, partitioned by `GENDER`. The script builds an SVM partitioned model. Scroll down the notebook paragraph for complete details of the model.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:34+0000", "config": { @@ -1956,7 +1956,7 @@ "commited": true }, { - "text": "%md\n---\n4.3. Run the following script to predict on the test set and display prediction result. Note the use of the top level model only.\n\nThe script makes predictions using the test data returning an OML DataFrame proxy object and displays the result in a table. The predicted values are listed in the PREDICTION column.", + "text": "%md\n\n4.3. Run the following script to predict on the test set and display prediction result. Note the use of the top level model only.\n\nThe script makes predictions using the test data returning an OML DataFrame proxy object and displays the result in a table. The predicted values are listed in the PREDICTION column.", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:40+0000", "config": { @@ -2094,7 +2094,7 @@ "commited": true }, { - "text": "%md\n---\n4.4. Run the following command to show the model global statistics for each partitioned sub-model. The partition name column contains the values from the partition column. If multiple columns were specified, then there would be one column for each with corresponding value.\n", + "text": "%md\n\n4.4. 
Run the following command to show the model global statistics for each partitioned sub-model. The partition name column contains the values from the partition column. If multiple columns were specified, then there would be one column for each with corresponding value.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:42+0000", "config": { @@ -2202,7 +2202,7 @@ "commited": true }, { - "text": "%md\n---\nUse Python to score data and display prediction details for every customer by using the `topN_attrs \u003d True` argument to the `.predict` call. The prediction details return the actual value of attributes used for scoring and the relative importance of the attributes in determining the score of each customer, in descending order of importance. ", + "text": "%md\n\nUse Python to score data and display prediction details for every customer by using the `topN_attrs \u003d True` argument to the `.predict` call. The prediction details return the actual value of attributes used for scoring and the relative importance of the attributes in determining the score of each customer, in descending order of importance. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:44+0000", "config": { @@ -2325,7 +2325,7 @@ "commited": true }, { - "text": "%md\n---\n4.5. Run the following command to materialize the test dataset. The `materialize` method creates a new database table based on the content specified by an OML DataFrame proxy object.\n\nHere, you materialize the data to table `TEST_DATA` so that it can be queried from SQL.", + "text": "%md\n\n4.5. Run the following command to materialize the test dataset. The `materialize` method creates a new database table based on the content specified by an OML DataFrame proxy object.\n\nHere, you materialize the data to table `TEST_DATA` so that it can be queried from SQL.", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:56+0000", "config": { @@ -2398,7 +2398,7 @@ "commited": true }, { - "text": "%md\n---\n4.6. 
Use SQL to score data and display prediction details.", + "text": "%md\n\n4.6. Use SQL to score data and display prediction details.", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:03:59+0000", "config": { @@ -2581,7 +2581,7 @@ "commited": true }, { - "text": "%md\n---\n5.2. Split the data set into train and test sets.", + "text": "%md\n\n5.2. Split the data set into train and test sets.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:15:22+0000", "config": { @@ -2654,7 +2654,7 @@ "commited": true }, { - "text": "%md\n---\n5.3. Train an SVM model.", + "text": "%md\n\n5.3. Train an SVM model.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:15:28+0000", "config": { @@ -2732,7 +2732,7 @@ "commited": true }, { - "text": "%md\n---\n5.4. Create the MLX Global Feature Importance explainer gfi, using the `f1_weighted` metric.", + "text": "%md\n\n5.4. Create the MLX Global Feature Importance explainer gfi, using the `f1_weighted` metric.", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:04:17+0000", "config": { @@ -2805,7 +2805,7 @@ "commited": true }, { - "text": "%md\n---\n5.5. Run the explainer `gfi.explain` to generate the global feature importance for the test data. The explainer returns the list of attributes in order of importance to the model.\n\nNote: this can take almost a minute, depending on your configuration.", + "text": "%md\n\n5.5. Run the explainer `gfi.explain` to generate the global feature importance for the test data. 
The explainer returns the list of attributes in order of importance to the model.\n\nNote: this can take almost a minute, depending on your configuration.", "user": "OMLUSER", "dateUpdated": "2021-10-01T19:26:56+0000", "config": { @@ -2883,7 +2883,7 @@ "commited": true }, { - "text": "%md\n---\nAnother way of seeing the Global Feature Importance results with the `z.show` function in Zeppelin", + "text": "%md\n\nAnother way of seeing the Global Feature Importance results with the `z.show` function in Zeppelin", "user": "OMLUSER", "dateUpdated": "2021-09-22T19:05:02+0000", "config": { diff --git a/machine-learning/labs/oml4py-live-labs/Lab4_Store_and_manage_Python_objects_and_user_defined_functions.json b/machine-learning/labs/oml4py-live-labs/Lab4_Store_and_manage_Python_objects_and_user_defined_functions.json index 3bc56d60..0bedba07 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab4_Store_and_manage_Python_objects_and_user_defined_functions.json +++ b/machine-learning/labs/oml4py-live-labs/Lab4_Store_and_manage_Python_objects_and_user_defined_functions.json @@ -152,7 +152,7 @@ "commited": true }, { - "text": "%md\n---\n2.2. Run the following script to create the temporary Diabetes table:", + "text": "%md\n\n2.2. Run the following script to create the temporary Diabetes table:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:30:49+0000", "config": { @@ -230,7 +230,7 @@ "commited": true }, { - "text": "%md\n---\n2.3. Run the following script to create the Boston table:", + "text": "%md\n\n2.3. Run the following script to create the Boston table:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:30:50+0000", "config": { @@ -381,7 +381,7 @@ "commited": true }, { - "text": "%md\n---\n3.2. Save the `DIABETES_TMP` tables into the database.\n\n**Note:** The condition `append\u003dTRUE` adds the object to the datastore, if it already exists. 
The default is `append\u003dFalse`, and in that case, you will receive an error stating that the datastore exists and it won\u0027t be able to create it again.", + "text": "%md\n\n3.2. Save the `DIABETES_TMP` tables into the database.\n\n**Note:** The argument `append\u003dTrue` adds the object to the datastore if the datastore already exists. The default is `append\u003dFalse`, and in that case, you will receive an error stating that the datastore exists and it won\u0027t be able to create it again.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:00+0000", "config": { @@ -454,7 +454,7 @@ "commited": true }, { - "text": "%md\n---\n3.3. Save the `IRIS` table to a new datastore, and then list the datastores. Notice that you see the datastore name, the number of objects in the datastore, the size in bytes consumed, when the datastore was create/updated, and any description provided by the user. The two datastores `ds_iris_data` and `ds_pydata` are present, with the latter containing the three objects you added.", + "text": "%md\n\n3.3. Save the `IRIS` table to a new datastore, and then list the datastores. Notice that you see the datastore name, the number of objects in the datastore, the size in bytes consumed, when the datastore was created/updated, and any description provided by the user. The two datastores `ds_iris_data` and `ds_pydata` are present, with the latter containing the three objects you added.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:03+0000", "config": { @@ -637,7 +637,7 @@ "commited": true }, { - "text": "%md\n---\n4.2. Run the following script to save the objects `regr1` and `regr2` to the datastore `ds_pymodels`, and allow the read privilege to be granted to them.\n\n**Note:** `overwrite\u003dTrue` indicates that the contents of the datastore should be replaced.", + "text": "%md\n\n4.2. 
Run the following script to save the objects `regr1` and `regr2` to the datastore `ds_pymodels`, and allow the read privilege to be granted to them.\n\n**Note:** `overwrite\u003dTrue` indicates that the contents of the datastore should be replaced.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:08+0000", "config": { @@ -747,7 +747,7 @@ "commited": true }, { - "text": "%md\n---\n4.3. Now grant the read privilege to all users by specifying `user\u003dNone`. Finally, list the datastores to which the read privilege has been granted.", + "text": "%md\n\n4.3. Now grant the read privilege to all users by specifying `user\u003dNone`. Finally, list the datastores to which the read privilege has been granted.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:09+0000", "config": { @@ -903,7 +903,7 @@ "commited": true }, { - "text": "%md\n---\n5.2. Run the following script to load the named Python object `regr2` (regression model), from the datastore to the global workspace.\n\n**Note:** Using the boolean `to_globals` parameter, you can specify whether the objects are loaded to a global workspace or to a dictionary object. If the argument is `to_globals\u003dTrue`, then `oml.ds.load` function loads the objects into the global workspace. If the argument is `to_globals\u003dFalse`, then the function returns a dict object that contains pairs of object names and values.", + "text": "%md\n\n5.2. Run the following script to load the named Python object `regr2` (regression model), from the datastore to the global workspace.\n\n**Note:** Using the boolean `to_globals` parameter, you can specify whether the objects are loaded to a global workspace or to a dictionary object. If the argument is `to_globals\u003dTrue`, then `oml.ds.load` function loads the objects into the global workspace. 
If the argument is `to_globals\u003dFalse`, then the function returns a dict object that contains pairs of object names and values.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:22+0000", "config": { @@ -981,7 +981,7 @@ "commited": true }, { - "text": "%md\n---\n5.3. Run the following script to view the model details", + "text": "%md\n\n5.3. Run the following script to view the model details", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:29+0000", "config": { @@ -1059,7 +1059,7 @@ "commited": true }, { - "text": "%md\n---\n5.4. Run the following script to load the named Python object `regr1`, from the datastore to the user\u0027s workspace.\n\n**Note:** Using the boolean `to_globals` parameter, you can specify whether the objects are loaded to a global workspace or to a dictionary object. If the argument is `to_globals\u003dTrue`, then `oml.ds.load` function loads the objects into the global workspace. If the argument is `to_globals\u003dFalse`, then the function returns a dict object that contains pairs of object names and values.", + "text": "%md\n\n5.4. Run the following script to load the named Python object `regr1`, from the datastore to the user\u0027s workspace.\n\n**Note:** Using the boolean `to_globals` parameter, you can specify whether the objects are loaded to a global workspace or to a dictionary object. If the argument is `to_globals\u003dTrue`, then `oml.ds.load` function loads the objects into the global workspace. If the argument is `to_globals\u003dFalse`, then the function returns a dict object that contains pairs of object names and values.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:30+0000", "config": { @@ -1245,7 +1245,7 @@ "commited": true }, { - "text": "%md\n---\n6.2. Run the following script to list the datastores to which other users have been granted the read privilege:", + "text": "%md\n\n6.2. 
Run the following script to list the datastores to which other users have been granted the read privilege:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:38+0000", "config": { @@ -1512,7 +1512,7 @@ "commited": true }, { - "text": "%md\n---\n8.2. Run the following script to grant read privilege to `OMLUSER2`.\n\nNote: If you are running this Notebook on your own tenancy, make sure to follow the instructions on creationg a new OMLUSER2, otherwise you will get an error. ", + "text": "%md\n\n8.2. Run the following script to grant read privilege to `OMLUSER2`.\n\nNote: If you are running this Notebook on your own tenancy, make sure to follow the instructions on creating a new OMLUSER2, otherwise you will get an error. ", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:46+0000", "config": { @@ -1772,7 +1772,7 @@ "commited": true }, { - "text": "%md\n---\n10.2. Run the following script to view the string that you just created:", + "text": "%md\n\n10.2. Run the following script to view the string that you just created:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:55+0000", "config": { @@ -1921,7 +1921,7 @@ "commited": true }, { - "text": "%md\n---\n11.2. Run the `oml.script.dir` script to list the scripts to which the read privilege has been granted, and where `sctype` is set to `grant`.", + "text": "%md\n\n11.2. Run the `oml.script.dir` script to list the scripts to which the read privilege has been granted, and where `sctype` is set to `grant`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:57+0000", "config": { @@ -1998,7 +1998,7 @@ "commited": true }, { - "text": "%md\n---\n11.3. 
Run the following script to load the named function `MyLM_function` into the Python engine for use as a typical Python function using `oml.script.load`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:31:59+0000", "config": { @@ -2075,7 +2075,7 @@ "commited": true }, { - "text": "%md\n---\n11.4. Extract the function text string from the function object and use this to save in the script repository using `oml.script_create`.", + "text": "%md\n\n11.4. Extract the function text string from the function object and use it to save the function in the script repository using `oml.script.create`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:00+0000", "config": { @@ -2152,7 +2152,7 @@ "commited": true }, { - "text": "%md\n---\n11.5. Run the script `oml.script.create` to create a test function `MyTEST_function`:", + "text": "%md\n\n11.5. Run the script `oml.script.create` to create a test function `MyTEST_function`:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:01+0000", "config": { @@ -2224,7 +2224,7 @@ "commited": true }, { - "text": "%md\n---\n11.6. Use `oml.script.dir` to list all the available scripts.", + "text": "%md\n\n11.6. Use `oml.script.dir` to list all the available scripts.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:02+0000", "config": { @@ -2333,7 +2333,7 @@ "commited": true }, { - "text": "%md\n---\n11.7. Call the `table_apply` on `build_lm_str` and `loaded_str` functions. Note that these strings represent the same function `build_lm_str` that was saved to the script repository after assigning the function to a string object. 
The `loaded_str` is the string representation of the function extracted using `get_source().read()`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:04+0000", "config": { @@ -2410,7 +2410,7 @@ "commited": true }, { - "text": "%md\n---\nRun the same function on `loaded_str`:", + "text": "%md\n\nRun the same function on `loaded_str`:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:05+0000", "config": { @@ -2559,7 +2559,7 @@ "commited": true }, { - "text": "%md\n---\n12.2. Call the function `build_lm3` to build the model or model `MyGlobalML_function`:", + "text": "%md\n\n12.2. Call the function `build_lm3` to build the model or model `MyGlobalML_function`:", "user": "OMLUSER", "dateUpdated": "2021-09-22T21:32:07+0000", "config": { diff --git a/machine-learning/labs/oml4py-live-labs/Lab5_Run_user_defined_functions_using_Embedded_Python_Execution.json b/machine-learning/labs/oml4py-live-labs/Lab5_Run_user_defined_functions_using_Embedded_Python_Execution.json index e7f489d9..19749839 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab5_Run_user_defined_functions_using_Embedded_Python_Execution.json +++ b/machine-learning/labs/oml4py-live-labs/Lab5_Run_user_defined_functions_using_Embedded_Python_Execution.json @@ -75,7 +75,7 @@ "commited": true }, { - "text": "%md\n---\n1.2. Run the following script to obtain a proxy object to the `IRIS` table.", + "text": "%md\n\n1.2. Run the following script to obtain a proxy object to the `IRIS` table.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:38+0000", "config": { @@ -233,7 +233,7 @@ "commited": true }, { - "text": "%md\n---\n2.2. Run the script to predict the petal length using the `predict` function, and show the first 10 records.", + "text": "%md\n\n2.2. Run the script to predict the petal length using the `predict` function, and show the first 10 records.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:42+0000", "config": { @@ -312,7 +312,7 @@ "commited": true }, { - "text": "%md\n---\n2.3. 
Run the following script to assess model quality using mean squared error and R^2:", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:44+0000", "config": { @@ -391,7 +391,7 @@ "commited": true }, { - "text": "%md\n---\n2.4. Run the following script to generate a scatterplot of the data along with a plot of the regression line:", + "text": "%md\n\n2.4. Run the following script to generate a scatterplot of the data along with a plot of the regression line:", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:46+0000", "config": { @@ -544,7 +544,7 @@ "commited": true }, { - "text": "%md\n---\n3.2. Now, call the user-defined function `build_lm_1` to build the model and plot the petal length predictions.:", + "text": "%md\n\n3.2. Now, call the user-defined function `build_lm_1` to build the model and plot the petal length predictions:", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:51+0000", "config": { @@ -697,7 +697,7 @@ "commited": true }, { - "text": "%md\n---\n3.4. By calling the `table_apply`, a Python engine is spawned and the user-defined function `build_lm_1` is called on that engine with the data referenced by IRIS being passed in as a pandas DataFrame. Part of the return value is the image, which is automatically displayed. In this example, we are passing the function object to the `table_apply` function.", + "text": "%md\n\n3.4. By calling the `table_apply`, a Python engine is spawned and the user-defined function `build_lm_1` is called on that engine with the data referenced by IRIS being passed in as a pandas DataFrame. Part of the return value is the image, which is automatically displayed. In this example, we are passing the function object to the `table_apply` function.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:11:56+0000", "config": { @@ -810,7 +810,7 @@ "commited": true }, { - "text": "%md\n---\n3.5. 
Run the following script to print the object, model, type and coefficient.", + "text": "%md\n\n3.5. Run the following script to print the object, model, type and coefficient.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:01+0000", "config": { @@ -963,7 +963,7 @@ "commited": true }, { - "text": "%md\n---\n3.7. Use the `row_apply` to call this user-defined function and return a single DataFrame proxy object as the result. The `row_apply` function takes as arguments the proxy object `IRIS`, that we want 10 rows scored at a time (resulting in 15 function calls), the user-defined function, the linear model object, and that we want the result to be returned as a single table by specifying the table definition.", + "text": "%md\n\n3.7. Use the `row_apply` to call this user-defined function and return a single DataFrame proxy object as the result. The `row_apply` function takes as arguments the proxy object `IRIS`, that we want 10 rows scored at a time (resulting in 15 function calls), the user-defined function, the linear model object, and that we want the result to be returned as a single table by specifying the table definition.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:05+0000", "config": { @@ -1151,7 +1151,7 @@ "commited": true }, { - "text": "%md\n---\n4.2. Change the user-defined function to save the models in a datastore. The datastore allows storing Python objects in the database under the provided name. The object assumes the name it is assigned in the Python environment. Here, you construct a name concatenating `mod_` as a prefix and the corresponding `Species` value.", + "text": "%md\n\n4.2. Change the user-defined function to save the models in a datastore. The datastore allows storing Python objects in the database under the provided name. The object assumes the name it is assigned in the Python environment. 
Here, you construct a name concatenating `mod_` as a prefix and the corresponding `Species` value.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:11+0000", "config": { @@ -1225,7 +1225,7 @@ "commited": true }, { - "text": "%md\n---\n4.3. Use `group_apply` to call the user-defined function and list the resulting models, which are a dictionary of three elements each assigned the model object name. The `group_apply` function takes the data, the index parameter that specifies the column or columns to partition on, the user-defined function, and the database to which you connect from the Python engine. Connecting to the database is necessary when using the datastore functionality.\n\nHere, the model object names are `mod_versicolor`, `mod_virginica`, and `mod_setosa`.\nWhen you load the datastore, you get the three models loaded into the client Python engine, assigned to their respective variables.\n\n**Note:** If the datastore exists, then delete it so that the `group_apply` function completes successfully.\n**Note:** Embedded Python execution can also leverage functions from third-party packages, for example, sklearn and matplotlib, as provided with the Autonomous Database environment. These packages can be used inside the user-defined function as shown here using LinearSVC.", + "text": "%md\n\n4.3. Use `group_apply` to call the user-defined function and list the resulting models, which are a dictionary of three elements each assigned the model object name. The `group_apply` function takes the data, the index parameter that specifies the column or columns to partition on, the user-defined function, and the database to which you connect from the Python engine. 
Connecting to the database is necessary when using the datastore functionality.\n\nHere, the model object names are `mod_versicolor`, `mod_virginica`, and `mod_setosa`.\nWhen you load the datastore, you get the three models loaded into the client Python engine, assigned to their respective variables.\n\n**Note:** If the datastore exists, then delete it so that the `group_apply` function completes successfully.\n**Note:** Embedded Python execution can also leverage functions from third-party packages, for example, sklearn and matplotlib, as provided with the Autonomous Database environment. These packages can be used inside the user-defined function as shown here using LinearSVC.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:13+0000", "config": { @@ -1542,7 +1542,7 @@ "commited": true }, { - "text": "%md\n---\n5.2. Use the `oml.do_eval` function to call the function `RandomRedDots` that you created in step 1:", + "text": "%md\n\n5.2. Use the `oml.do_eval` function to call the function `RandomRedDots` that you created in step 1:", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:24+0000", "config": { @@ -1664,7 +1664,7 @@ "commited": true }, { - "text": "%md\n---\n5.3. In this example, you modify the function to use subplots, thereby creating separate figure objects for the scatter plots. Store this in the script repository as `RandomRedDots2` and call the function to see the results. As expected, you get both plots.\nRun the following script to define the `RandomRedDots2` function that generates two scatter plots, and returns a two column DataFrame. Note that you can pass arguments to these functions here, `num_dots_1` and `num_dots_2`.", + "text": "%md\n\n5.3. In this example, you modify the function to use subplots, thereby creating separate figure objects for the scatter plots. Store this in the script repository as `RandomRedDots2` and call the function to see the results. 
As expected, you get both plots.\nRun the following script to define the `RandomRedDots2` function that generates two scatter plots, and returns a two column DataFrame. Note that you can pass arguments to these functions here, `num_dots_1` and `num_dots_2`.", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:32+0000", "config": { @@ -1743,7 +1743,7 @@ "commited": true }, { - "text": "%md\n---\n**Note:** When you call `RandomRedDots2` using embedded Python execution, you will get both plots as shown in the result.\n\n5.4. Use the `oml.do_eval` function to call the function `RandomRedDots2`. Here, you are specifying arguments to `do_eval` for `num_dots_1` and `num_dots_2`. These are specified as you would any other argument to `do_eval`. This applies to the other embedded Python functions as well.\n", + "text": "%md\n\n**Note:** When you call `RandomRedDots2` using embedded Python execution, you will get both plots as shown in the result.\n\n5.4. Use the `oml.do_eval` function to call the function `RandomRedDots2`. Here, you are specifying arguments to `do_eval` for `num_dots_1` and `num_dots_2`. These are specified as you would any other argument to `do_eval`. This applies to the other embedded Python functions as well.\n", "user": "OMLUSER", "dateUpdated": "2021-09-22T23:12:34+0000", "config": { diff --git a/machine-learning/labs/oml4py-live-labs/Lab6_Use_AutoML.json b/machine-learning/labs/oml4py-live-labs/Lab6_Use_AutoML.json index 95e06c34..26c303dc 100755 --- a/machine-learning/labs/oml4py-live-labs/Lab6_Use_AutoML.json +++ b/machine-learning/labs/oml4py-live-labs/Lab6_Use_AutoML.json @@ -74,7 +74,7 @@ "commited": true }, { - "text": "%md\n---\n1.2. Use the `oml.sync` function to create an OML Dataframe `CUST_DF` as a proxy for the database table `CUSTOMER_INSURANCE_LTV`.", + "text": "%md\n\n1.2. 
Use the `oml.sync` function to create an OML Dataframe `CUST_DF` as a proxy for the database table `CUSTOMER_INSURANCE_LTV`.", "user": "OMLUSER", "dateUpdated": "2021-09-30T04:59:31+0000", "config": { @@ -226,7 +226,7 @@ "commited": true }, { - "text": "%md\n---\n2.2. Run the following script to select the top `classification` algorithms for predicting whether a customer will buy insurance (`BUY_INSURANCE`). It displays the top 4 ranked algorithms (requested by the `k\u003d4`) and their `accuracy` (the metric we requested in `score_metric`). \n\nThe script returns Random Forest (rf), Decision Tree (dt), Neural Networks (nn) and Generalized Linear Models (glm) as the Top 4 algorithms, and among these the Random Forest is ranked highest for this problem, and the string `selected_insur_alg` will be used in subsequent AutoML function calls.\n \n**Note**: AutoML also supports regression, and the AlgorithmSelection function has other options and metrics that can be found in the documentation links at the bottom of this notebook.", + "text": "%md\n\n2.2. Run the following script to select the top `classification` algorithms for predicting whether a customer will buy insurance (`BUY_INSURANCE`). It displays the top 4 ranked algorithms (requested by the `k\u003d4`) and their `accuracy` (the metric we requested in `score_metric`). \n\nThe script returns Random Forest (rf), Decision Tree (dt), Neural Networks (nn) and Generalized Linear Models (glm) as the Top 4 algorithms, and among these the Random Forest is ranked highest for this problem, and the string `selected_insur_alg` will be used in subsequent AutoML function calls.\n \n**Note**: AutoML also supports regression, and the AlgorithmSelection function has other options and metrics that can be found in the documentation links at the bottom of this notebook.", "user": "OMLUSER", "dateUpdated": "2021-09-30T04:59:35+0000", "config": { @@ -500,7 +500,7 @@ "commited": true }, { - "text": "%md\n---\n4.2. 
Run the following script to **list the hyperparameters and their values** that were tried for the `top two models`, along with the corresponding model\u0027s score metric value (`accuracy`).", "user": "OMLUSER", "dateUpdated": "2021-09-30T05:01:48+0000", "config": { @@ -578,7 +578,7 @@ "commited": true }, { - "text": "%md\n---\n4.3. Run the following script to **list the hyperparameters and their values** that were tried for the `top ten models`, along with the corresponding model\u0027s score metric value (`accuracy`) in a better formatted way.", + "text": "%md\n\n4.3. Run the following script to **list the hyperparameters and their values** that were tried for the `top ten models`, along with the corresponding model\u0027s score metric value (`accuracy`) in a better-formatted way.", "user": "OMLUSER", "dateUpdated": "2021-09-30T05:01:50+0000", "config": { @@ -730,7 +730,7 @@ "commited": true }, { - "text": "%md\n---\n4.3. Run the following script to specify a custom search space to explore for model building using the `param_space` argument to the `tune` function. With this specification, model tuning will narrow the set of important hyperparameter values.\n**Note**: For illustration purposes we are using a different scoring metric, `f1_macro`.", + "text": "%md\n\n4.4. Run the following script to specify a custom search space to explore for model building using the `param_space` argument to the `tune` function. 
With this specification, model tuning will narrow the set of important hyperparameter values.\n**Note**: For illustration purposes we are using a different scoring metric, `f1_macro`.", "user": "OMLUSER", "dateUpdated": "2021-09-30T05:01:53+0000", "config": { diff --git a/machine-learning/labs/oml4py-live-labs/README.md b/machine-learning/labs/oml4py-live-labs/README.md index e5109c82..e39a7230 100644 --- a/machine-learning/labs/oml4py-live-labs/README.md +++ b/machine-learning/labs/oml4py-live-labs/README.md @@ -1,9 +1,9 @@ # Oracle Machine Learning for Python -This set of notebooks from the OML4Py workshop [Introduction to Oracle Machine Learning for Python on Autonomous Database](https://bit.ly/oml4pyhol) introduces you to Oracle Machine Learning for Python (OML4Py) on Oracle Autonomous Database. +This set of notebooks from the OML4Py workshop [Introduction to Oracle Machine Learning for Python on Autonomous AI Database](https://livelabs.oracle.com/ords/r/dbpm/livelabs/view-workshop?wid=786) introduces you to Oracle Machine Learning for Python (OML4Py) on Oracle Autonomous Database. Oracle Machine Learning for Python (OML4Py) supports scalable in-database data exploration and preparation using native Python syntax, scalable in-database algorithms for machine learning model building and scoring, and automated machine learning (AutoML). Users can also invoke user-defined Python functions from Python and REST APIs using database-spawned Python engines. OML4Py increases data scientist productivity and reduces solution deployment complexity. Join us for this tour of OML4Py. -Python is a major programming language used for data science and machine learning. OML4Py is a feature on Oracle Autonomous Database that provides Python users access to powerful in-database functionality supporting data scientists for both scalability, performance, and ease of solution deployment. +Python is a major programming language used for data science and machine learning. 
OML4Py is a feature on Oracle Autonomous AI Database that provides Python users access to powerful in-database functionality supporting data scientists with scalability, performance, and ease of solution deployment. Oracle Machine Learning Notebooks is a collaborative user interface for data scientists and business and data analysts who perform machine learning in Oracle Autonomous Database. @@ -13,7 +13,7 @@ Key Features: * Enables sharing of notebooks and templates with permissions and execution scheduling * Access to 30+ parallel, scalable in-Database implementations of machine learning algorithms * Python, SQL and PL/SQL scripting language supported -* Enables and supports deployments of enterprise machine learning methodologies in Autonomous Data Warehouse (ADW), Autonomous Transactional Database (ATP) and Autonomous JSON Database (AJD) +* Enables and supports deployments of enterprise machine learning methodologies in Autonomous AI Lakehouse, Autonomous AI Transactional Database and Autonomous AI JSON Database The current folder contains the examples based on Oracle Machine Learning for Python (OML4Py) used for the Live Labs "Python Users: Build intelligent applications faster with Oracle Machine Learning": @@ -25,12 +25,12 @@ The current folder contains the examples based on Oracle Machine Learning for Py * **Lab 5 OML4Py Embedded Python Execution** - Examples of using open-source Python scripts and algorithms (like SciKit-Learn) with OML4Py * **Lab 6 OML4Py AutoML** - Examples of how to execute OML AutoML processes to identify the ideal algorithms and tune the models. 
-More information on the Live Labs workshop [Introduction to Oracle Machine Learning for Python on Autonomous Database](https://bit.ly/oml4pyhol) +More information on the Live Labs workshop [Introduction to Oracle Machine Learning for Python on Autonomous Database](https://livelabs.oracle.com/ords/r/dbpm/livelabs/view-workshop?wid=786) -See also [Announcing next generation OML Notebooks on Oracle Autonomous Database](https://blogs.oracle.com/machinelearning/post/announcing-next-generation-to-oml-notebooks-on-oracle-autonomous-database) blog post for more information on OML Notebooks. +See also [Announcing next generation OML Notebooks on Oracle Autonomous AI Database](https://blogs.oracle.com/machinelearning/post/announcing-next-generation-to-oml-notebooks-on-oracle-autonomous-database) blog post for more information on OML Notebooks. -Last updated: April 2023 +Last updated: November 2025 -#### Copyright (c) 2023 Oracle Corporation and/or its affilitiates. +#### Copyright (c) 2025 Oracle Corporation and/or its affiliates. 
###### [The Universal Permissive License (UPL), Version 1.0](https://oss.oracle.com/licenses/upl/) diff --git a/machine-learning/notebooks/notebooks-classic/python/OML Run-me-first.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML Run-me-first.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML Run-me-first.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML Run-me-first.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML Third-Party Packages - Environment Creation.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML Third-Party Packages - Environment Creation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML Third-Party Packages - Environment Creation.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML Third-Party Packages - Environment Creation.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML Third-Party Packages - Python Environment Usage .json b/machine-learning/notebooks-oml/notebooks-classic/python/OML Third-Party Packages - Python Environment Usage .json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML Third-Party Packages - Python Environment Usage .json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML Third-Party Packages - Python Environment Usage .json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -0- Tour.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -0- Tour.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -0- Tour.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -0- Tour.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -1- Introduction.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -1- Introduction.json 
similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -1- Introduction.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -1- Introduction.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -2- Data Selection and Manipulation.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -2- Data Selection and Manipulation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -2- Data Selection and Manipulation.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -2- Data Selection and Manipulation.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -3- Datastore and Script Repository.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -3- Datastore and Script Repository.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -3- Datastore and Script Repository.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -3- Datastore and Script Repository.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -4- Embedded Python Execution.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -4- Embedded Python Execution.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -4- Embedded Python Execution.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -4- Embedded Python Execution.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py -5- AutoML.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -5- AutoML.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py -5- AutoML.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py -5- AutoML.json diff --git 
a/machine-learning/notebooks/notebooks-classic/python/OML4Py Anomaly Detection SVM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Anomaly Detection SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Anomaly Detection SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Anomaly Detection SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Association Rules Apriori.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Association Rules Apriori.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Association Rules Apriori.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Association Rules Apriori.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Attribute Importance MDL.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Attribute Importance MDL.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Attribute Importance MDL.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Attribute Importance MDL.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification DT.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification DT.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification DT.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification DT.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification GLM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification GLM.json rename to 
machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification NB.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification NB.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification NB.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification NB.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification NN.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification NN.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification NN.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification NN.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification RF.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification RF.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification RF.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification RF.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Classification SVM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Classification SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Classification SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Clustering EM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Clustering EM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Clustering EM.json rename to 
machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Clustering EM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Clustering KM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Clustering KM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Clustering KM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Clustering KM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Duplicates Removal.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Duplicates Removal.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Duplicates Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Duplicates Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Missing Data.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Missing Data.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Missing Data.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Missing Data.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Outlier Removal.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Outlier Removal.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Outlier Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Outlier Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Recode Synonymous Values.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Recode 
Synonymous Values.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Cleaning Recode Synonymous Values.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Cleaning Recode Synonymous Values.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Binning.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Binning.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Binning.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Binning.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Categorical.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Categorical.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Categorical.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Categorical.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Normalization and Scaling.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Normalization and Scaling.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation Normalization and Scaling.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation Normalization and Scaling.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Data Transformation One Hot Encoding.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation One Hot Encoding.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Data 
Transformation One Hot Encoding.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Data Transformation One Hot Encoding.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Dataset Creation.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Dataset Creation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Dataset Creation.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Dataset Creation.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Date and Time Classes.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Date and Time Classes.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Date and Time Classes.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Date and Time Classes.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Engineering Aggregation.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Engineering Aggregation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Engineering Aggregation.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Engineering Aggregation.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction ESA Wiki Model.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction ESA Wiki Model.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction ESA Wiki Model.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction ESA Wiki Model.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction ESA.json 
b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction ESA.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction ESA.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction ESA.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction SVD.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction SVD.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Extraction SVD.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Extraction SVD.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Selection Algorithm-based.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Selection Algorithm-based.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Selection Algorithm-based.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Selection Algorithm-based.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Selection Using Summary Statistics.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Selection Using Summary Statistics.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Feature Selection Using Summary Statistics.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Feature Selection Using Summary Statistics.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Importing Wide Datasets.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Importing Wide Datasets.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Importing Wide 
Datasets.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Importing Wide Datasets.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Partitioned Model SVM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Partitioned Model SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Partitioned Model SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Partitioned Model SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py REST API.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py REST API.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py REST API.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py REST API.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Regression GLM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Regression GLM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Regression NN.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression NN.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Regression NN.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression NN.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Regression SVM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Regression SVM.json rename to 
machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Regression SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Statistical Functions.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Statistical Functions.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Statistical Functions.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Statistical Functions.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Text Mining SVM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Text Mining SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Text Mining SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Text Mining SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OML4Py Time Series ESM.json b/machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Time Series ESM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OML4Py Time Series ESM.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OML4Py Time Series ESM.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Cross_Validation_AUC_ML_101.json b/machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Cross_Validation_AUC_ML_101.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Cross_Validation_AUC_ML_101.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Cross_Validation_AUC_ML_101.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Embedded_execution_using_third_party_packages.json 
b/machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Embedded_execution_using_third_party_packages.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Embedded_execution_using_third_party_packages.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Embedded_execution_using_third_party_packages.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Weight_of_Evidence_ML_101.json b/machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Weight_of_Evidence_ML_101.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML4Py_Weight_of_Evidence_ML_101.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML4Py_Weight_of_Evidence_ML_101.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML_Datastore_for_R_and_Python.json b/machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML_Datastore_for_R_and_Python.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OfficeHours_OML_Datastore_for_R_and_Python.json rename to machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_OML_Datastore_for_R_and_Python.json diff --git a/machine-learning/notebooks/notebooks-classic/python/OfficeHours_Python_Deploy_an_XGBoost_model_in_OML_Services.ipynb b/machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_Python_Deploy_an_XGBoost_model_in_OML_Services.ipynb similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/OfficeHours_Python_Deploy_an_XGBoost_model_in_OML_Services.ipynb rename to machine-learning/notebooks-oml/notebooks-classic/python/OfficeHours_Python_Deploy_an_XGBoost_model_in_OML_Services.ipynb diff --git a/machine-learning/notebooks/notebooks-classic/python/README.md 
b/machine-learning/notebooks-oml/notebooks-classic/python/README.md similarity index 100% rename from machine-learning/notebooks/notebooks-classic/python/README.md rename to machine-learning/notebooks-oml/notebooks-classic/python/README.md diff --git a/machine-learning/notebooks/notebooks-classic/r/OML Import Wiki ESA Model.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML Import Wiki ESA Model.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML Import Wiki ESA Model.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML Import Wiki ESA Model.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML Run-me-first.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML Run-me-first.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML Run-me-first.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML Run-me-first.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML Third-Party Packages - Environment Creation.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML Third-Party Packages - Environment Creation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML Third-Party Packages - Environment Creation.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML Third-Party Packages - Environment Creation.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML Third-Party Packages - R Environment Usage.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML Third-Party Packages - R Environment Usage.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML Third-Party Packages - R Environment Usage.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML Third-Party Packages - R Environment Usage.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R -1- Introduction.json 
b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R -1- Introduction.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R -1- Introduction.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R -1- Introduction.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R -2- Data Selection and Manipulation.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R -2- Data Selection and Manipulation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R -2- Data Selection and Manipulation.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R -2- Data Selection and Manipulation.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R -3- Datastore and Script Repository.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R -3- Datastore and Script Repository.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R -3- Datastore and Script Repository.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R -3- Datastore and Script Repository.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R -4- Embedded R Execution.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R -4- Embedded R Execution.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R -4- Embedded R Execution.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R -4- Embedded R Execution.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Anomaly Detection SVM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Anomaly Detection SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Anomaly Detection SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Anomaly Detection SVM.json diff --git 
a/machine-learning/notebooks/notebooks-classic/r/OML4R Association Rules Apriori.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Association Rules Apriori.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Association Rules Apriori.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Association Rules Apriori.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Attribute Importance MDL.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Attribute Importance MDL.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Attribute Importance MDL.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Attribute Importance MDL.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Classification DT.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification DT.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Classification DT.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification DT.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Classification GLM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Classification GLM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Classification NB.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification NB.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Classification NB.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification NB.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Classification RF.json 
b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification RF.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Classification RF.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification RF.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Classification SVM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Classification SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Classification SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Clustering EM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering EM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Clustering EM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering EM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Clustering KM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering KM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Clustering KM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering KM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Clustering OC.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering OC.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Clustering OC.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Clustering OC.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Duplicate Removal.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Duplicate Removal.json similarity index 100% rename from 
machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Duplicate Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Duplicate Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Missing Data.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Missing Data.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Missing Data.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Missing Data.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Outlier Removal.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Outlier Removal.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Outlier Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Outlier Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Recode Synonymous Values.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Recode Synonymous Values.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Cleaning Recode Synonymous Values.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Cleaning Recode Synonymous Values.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Binning.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Binning.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Binning.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Binning.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Categorical Recode.json 
b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Categorical Recode.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Categorical Recode.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Categorical Recode.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Date Datatypes.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Date Datatypes.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Date Datatypes.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Date Datatypes.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Normalization and Scaling.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Normalization and Scaling.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation Normalization and Scaling.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation Normalization and Scaling.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation One-Hot Encoding.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation One-Hot Encoding.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Data Transformation One-Hot Encoding.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Data Transformation One-Hot Encoding.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Dataset Creation.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Dataset Creation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Dataset Creation.json rename 
to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Dataset Creation.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Engineering Aggregation.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Engineering Aggregation.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Feature Engineering Aggregation.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Engineering Aggregation.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction ESA Wiki Model.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction ESA Wiki Model.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction ESA Wiki Model.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction ESA Wiki Model.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction ESA.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction ESA.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction ESA.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction ESA.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction SVD.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction SVD.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Feature Extraction SVD.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Extraction SVD.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Selection Algorithm-based.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Selection Algorithm-based.json similarity index 100% rename from 
machine-learning/notebooks/notebooks-classic/r/OML4R Feature Selection Algorithm-based.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Selection Algorithm-based.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Feature Selection Using Summary Statistics.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Selection Using Summary Statistics.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Feature Selection Using Summary Statistics.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Feature Selection Using Summary Statistics.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Partitioned Model SVM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Partitioned Model SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Partitioned Model SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Partitioned Model SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R REST API.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R REST API.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R REST API.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R REST API.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Regression GLM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Regression GLM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Regression NN .json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression NN .json similarity index 100% rename from 
machine-learning/notebooks/notebooks-classic/r/OML4R Regression NN .json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression NN .json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Regression SVM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Regression SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Regression SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Statistical Functions .json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Statistical Functions .json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Statistical Functions .json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Statistical Functions .json diff --git a/machine-learning/notebooks/notebooks-classic/r/OML4R Text Mining SVM.json b/machine-learning/notebooks-oml/notebooks-classic/r/OML4R Text Mining SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OML4R Text Mining SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OML4R Text Mining SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OfficeHours_OML4R_Demonstration.json b/machine-learning/notebooks-oml/notebooks-classic/r/OfficeHours_OML4R_Demonstration.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/OfficeHours_OML4R_Demonstration.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OfficeHours_OML4R_Demonstration.json diff --git a/machine-learning/notebooks/notebooks-classic/r/OfficeHours_OML_Datastore_for_R_and_Python.json b/machine-learning/notebooks-oml/notebooks-classic/r/OfficeHours_OML_Datastore_for_R_and_Python.json similarity index 100% rename from 
machine-learning/notebooks/notebooks-classic/r/OfficeHours_OML_Datastore_for_R_and_Python.json rename to machine-learning/notebooks-oml/notebooks-classic/r/OfficeHours_OML_Datastore_for_R_and_Python.json diff --git a/machine-learning/notebooks/notebooks-classic/r/README.md b/machine-learning/notebooks-oml/notebooks-classic/r/README.md similarity index 100% rename from machine-learning/notebooks/notebooks-classic/r/README.md rename to machine-learning/notebooks-oml/notebooks-classic/r/README.md diff --git a/machine-learning/notebooks/notebooks-classic/sql/Credit_Scoring_100K_SQL_Create_Table.sql b/machine-learning/notebooks-oml/notebooks-classic/sql/Credit_Scoring_100K_SQL_Create_Table.sql similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/Credit_Scoring_100K_SQL_Create_Table.sql rename to machine-learning/notebooks-oml/notebooks-classic/sql/Credit_Scoring_100K_SQL_Create_Table.sql diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML Export and Import Serialized Models.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML Export and Import Serialized Models.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML Export and Import Serialized Models.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML Export and Import Serialized Models.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Anomaly Detection MSET.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 23c Anomaly Detection MSET.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Anomaly Detection MSET.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 23c Anomaly Detection MSET.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Classification XGBoost.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 
23c Classification XGBoost.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Classification XGBoost.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 23c Classification XGBoost.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Regression XGBoost.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 23c Regression XGBoost.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL 21c or 23c Regression XGBoost.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL 21c or 23c Regression XGBoost.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Anomaly Detection SVM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Anomaly Detection SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Anomaly Detection SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Anomaly Detection SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Association Rules Apriori.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Association Rules Apriori.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Association Rules Apriori.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Association Rules Apriori.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Attribute Importance MDL.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Attribute Importance MDL.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Attribute Importance MDL.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Attribute Importance MDL.json diff --git 
a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification DT.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification DT.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification DT.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification DT.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification GLM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification GLM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification NB.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification NB.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification NB.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification NB.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification NN.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification NN.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification NN.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification NN.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification RF.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification RF.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification RF.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification RF.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL 
Classification SVM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Classification SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Classification SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering EM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering EM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering EM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering EM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering KM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering KM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering KM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering KM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering OC.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering OC.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Clustering OC.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Clustering OC.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Duplicates Removal.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Duplicates Removal.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Duplicates Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Duplicates Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Missing Data.json 
b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Missing Data.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Missing Data.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Missing Data.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Outlier Removal.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Outlier Removal.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Outlier Removal.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Outlier Removal.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Recode Synonymous Values.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Recode Synonymous Values.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Cleaning Recode Synonymous Values.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Cleaning Recode Synonymous Values.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Binning.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Binning.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Binning.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Binning.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Categorical.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Categorical.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Categorical.json rename to 
machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Categorical.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Normalization and Scale.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Normalization and Scale.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Data Transformation Normalization and Scale.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Data Transformation Normalization and Scale.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Exporting Serialized Models.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Exporting Serialized Models.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Exporting Serialized Models.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Exporting Serialized Models.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Engineering Aggregation and Time.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Engineering Aggregation and Time.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Engineering Aggregation and Time.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Engineering Aggregation and Time.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction ESA Wiki Model.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction ESA Wiki Model.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction ESA Wiki Model.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction ESA Wiki Model.json diff --git 
a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction NMF.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction NMF.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction NMF.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction NMF.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction SVD.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction SVD.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Extraction SVD.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Extraction SVD.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Algorithm Based.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Algorithm Based.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Algorithm Based.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Algorithm Based.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Unsupervised Attribute Importance.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Using Summary Statistics.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Using Summary 
Statistics.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Feature Selection Using Summary Statistics.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Feature Selection Using Summary Statistics.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Nested Columns.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Nested Columns.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Nested Columns.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Nested Columns.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Partitioned Model SVM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Partitioned Model SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Partitioned Model SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Partitioned Model SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Procedure for Importing Data to ADB.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Procedure for Importing Data to ADB.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Procedure for Importing Data to ADB.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Procedure for Importing Data to ADB.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression GLM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression GLM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression GLM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression GLM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression NN.json 
b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression NN.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression NN.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression NN.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression SVM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Regression SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Regression SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Statistical Functions.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Statistical Functions.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Statistical Functions.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Statistical Functions.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Text Mining SVM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Text Mining SVM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Text Mining SVM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Text Mining SVM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL Time Series ESM.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Time Series ESM.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL Time Series ESM.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL Time Series ESM.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Automated_Text_Mining_Example.json 
b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Automated_Text_Mining_Example.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Automated_Text_Mining_Example.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Automated_Text_Mining_Example.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Credit_Score_Predictions.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Credit_Score_Predictions.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Credit_Score_Predictions.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Credit_Score_Predictions.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Insurance_Claims_Fraud.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Insurance_Claims_Fraud.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Insurance_Claims_Fraud.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Insurance_Claims_Fraud.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_My_First_Notebook.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_My_First_Notebook.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_My_First_Notebook.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_My_First_Notebook.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Picking_a_Good_Wine_for_20_dollars_with_ADW_OML.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Picking_a_Good_Wine_for_20_dollars_with_ADW_OML.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Picking_a_Good_Wine_for_20_dollars_with_ADW_OML.json rename to 
machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Picking_a_Good_Wine_for_20_dollars_with_ADW_OML.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Predicting_Customer_Lifetime_Value_LTV.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Predicting_Customer_Lifetime_Value_LTV.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Predicting_Customer_Lifetime_Value_LTV.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Predicting_Customer_Lifetime_Value_LTV.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_SQLFORMAT_and_Forms_Examples.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_SQLFORMAT_and_Forms_Examples.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_SQLFORMAT_and_Forms_Examples.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_SQLFORMAT_and_Forms_Examples.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_10k.json b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_10k.json similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_10k.json rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_10k.json diff --git a/machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_Desktop_Viz_companion.dva 
b/machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_Desktop_Viz_companion.dva similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_Desktop_Viz_companion.dva rename to machine-learning/notebooks-oml/notebooks-classic/sql/OML4SQL_Targeting_Top_Customers_Desktop_Viz_companion.dva diff --git a/machine-learning/notebooks/notebooks-classic/sql/README.md b/machine-learning/notebooks-oml/notebooks-classic/sql/README.md similarity index 100% rename from machine-learning/notebooks/notebooks-classic/sql/README.md rename to machine-learning/notebooks-oml/notebooks-classic/sql/README.md diff --git a/machine-learning/notebooks-oml/python/OML Run-me-first.dsnb b/machine-learning/notebooks-oml/python/OML Run-me-first.dsnb new file mode 100644 index 00000000..1b4dbbab --- /dev/null +++ b/machine-learning/notebooks-oml/python/OML Run-me-first.dsnb @@ -0,0 +1 @@ +[{"layout":null,"template":null,"templateConfig":null,"name":"OML Run-me-first","description":null,"readOnly":false,"type":"low","paragraphs":[{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":null,"title":null,"message":["%md"," "],"enabled":true,"result":{"startTime":1737138360559,"interpreter":"md.low","endTime":1737138361104,"results":[],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":true,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","","# OML Run-me-first: Data preparation and creation for examples","","In this notebook, we will load data used in other notebooks that require the `CUSTOMER_INSURANCE_LTV` table, and will also prepare the dataset used by the `Data Preparation` notebooks, in particular those for data cleaning and feature 
selection.","","The original dataset contains customer insurance lifetime value information, including customer financial information, lifetime value, and whether or not the customer bought insurance.","","We will take the original table and tweak the data, replacing some original values with `NULL` and artificially adding `duplicated rows` to generate a resulting table we will call `CUSTOMER_INSURANCE_LTV_NOISE`.","","The source of the data is a CSV file stored in the Oracle Machine Learning GitHub folder<\/a>, which will be imported into the current user's schema using the `DBMS_CLOUD.CREATE_EXTERNAL_TABLE` function, among others.","","###### `IMPORTANT` Please run this notebook before running any of the Data Preparation notebooks.","","Copyright (c) 2025 Oracle Corporation ","###### The Universal Permissive License (UPL), Version 1.0<\/a>","---"],"enabled":true,"result":{"startTime":1737138361570,"interpreter":"md.low","endTime":1737138362088,"results":[{"message":"

OML Run-me-first: Data preparation and creation for examples<\/h1>\n

In this notebook, we will load data used in other notebooks that require the CUSTOMER_INSURANCE_LTV<\/code> table, and will also prepare the dataset used by the Data Preparation<\/code> notebooks, in particular those for data cleaning and feature selection.<\/p>\n

The original dataset contains customer insurance lifetime value information, including customer financial information, lifetime value, and whether or not the customer bought insurance.<\/p>\n

We will take the original table and tweak the data, replacing some original values with NULL<\/code> and artificially adding duplicated rows<\/code> to generate a resulting table we will call CUSTOMER_INSURANCE_LTV_NOISE<\/code>.<\/p>\n

The source of the data is a CSV file stored in the Oracle Machine Learning GitHub folder<\/a>, which will be imported into the current user's schema using the DBMS_CLOUD.CREATE_EXTERNAL_TABLE<\/code> function, among others.<\/p>\n

IMPORTANT<\/code> Please run this notebook before running any of the Data Preparation notebooks.<\/h6>\n

Copyright (c) 2025 Oracle Corporation<\/p>\n

The Universal Permissive License (UPL), Version 1.0<\/a><\/h6>\n
\n","type":"HTML"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Cleanup if tables exist","message":["%script","","-- DELETE PREVIOUS DATA IF IT EXISTS","BEGIN "," BEGIN"," EXECUTE IMMEDIATE 'DROP TABLE EXT_CUSTOMER_INSURANCE_LTV';"," EXCEPTION WHEN OTHERS THEN NULL;"," END;"," BEGIN"," EXECUTE IMMEDIATE 'DROP TABLE CUSTOMER_INSURANCE_LTV';"," EXCEPTION WHEN OTHERS THEN NULL;"," END;"," BEGIN"," EXECUTE IMMEDIATE 'DROP TABLE CUSTOMER_INSURANCE_LTV_NOISE';"," EXCEPTION WHEN OTHERS THEN NULL;"," END;"," BEGIN"," EXECUTE IMMEDIATE 'DROP TABLE CUSTOMER_INSURANCE_LTV_SQL';"," EXCEPTION WHEN OTHERS THEN NULL;"," END;"," BEGIN"," EXECUTE IMMEDIATE 'DROP TABLE CUSTOMER_INSURANCE_LTV_PY';"," EXCEPTION WHEN OTHERS THEN NULL;"," END;","END; "],"enabled":true,"result":{"startTime":1737138362545,"interpreter":"script.low","endTime":1737138365251,"results":[{"message":"\nPL/SQL procedure successfully completed.\n\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","","#### Create the `CUSTOMER_INSURANCE_LTV` table ","","This table will be used in other notebooks and allow us to build the `CUSTOMER_INSURANCE_LTV_NOISE` that will also be used.","","We will pull the data from OML dataset storage in GitHub using the `DBMS_CLOUD` capabilities.","","More details on `DBMS_CLOUD` can be found
in the Documentation<\/a>"],"enabled":true,"result":{"startTime":1737138365721,"interpreter":"md.low","endTime":1737138366172,"results":[{"message":"

Create the CUSTOMER_INSURANCE_LTV<\/code> table<\/h4>\n

This table will be used in other notebooks and will allow us to build the CUSTOMER_INSURANCE_LTV_NOISE<\/code> table that will also be used.<\/p>\n

We will pull the data from the OML dataset storage on GitHub using the DBMS_CLOUD<\/code> capabilities.<\/p>\n

More details on DBMS_CLOUD<\/code> can be found in the Documentation<\/a><\/p>\n","type":"HTML"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Load the data from Object Storage","message":["%script","","-- LOADS THE DATA FROM GITHUB BY CREATING AN EXTERNAL TABLE AND THEN CREATING A TABLE WITH THAT","DECLARE"," uri_data varchar2(1000) := 'https://raw.githubusercontent.com/oracle/oracle-db-examples/master/machine-learning/datasets/CUST_INSUR_LTV_APPLY.csv';"," csv_format varchar2(1000) := '{\"dateformat\":\"YYYY-MM-DD\", \"skipheaders\":\"1\", \"delimiter\":\",\", \"ignoreblanklines\":\"true\", \"removequotes\":\"true\", \"blankasnull\":\"true\", \"trimspaces\":\"lrtrim\", \"truncatecol\":\"true\", \"ignoremissingcolumns\":\"true\"}';","BEGIN","DBMS_CLOUD.CREATE_EXTERNAL_TABLE("," TABLE_NAME => 'EXT_CUSTOMER_INSURANCE_LTV',"," FILE_URI_LIST => uri_data,"," FORMAT => csv_format,"," COLUMN_LIST => 'MARITAL_STATUS VARCHAR2(26),"," STATE CHAR(26),"," CREDIT_BALANCE NUMBER(8,0),"," CUSTOMER_TENURE NUMBER(3,0),"," MORTGAGE_AMOUNT NUMBER(7,0),"," BANK_FUNDS NUMBER(7,0),"," NUM_DEPENDENTS NUMBER(3,0),"," HAS_CHILDREN NUMBER(3,0),"," INCOME NUMBER(7,0),"," CUSTOMER_ID CHAR(26),"," GENDER CHAR(26),"," PROFESSION VARCHAR2(35),"," CREDIT_CARD_LIMITS NUMBER(6,0),"," REGION VARCHAR2(26),"," HOME_OWNERSHIP NUMBER(3,0),"," NUM_ONLINE_TRANS NUMBER(6,0),"," BUY_INSURANCE VARCHAR2(26),"," MONTHLY_CHECKS NUMBER(4,0),"," NUM_TRANS_KIOSK NUMBER(3,0),"," AGE NUMBER(4,0),"," MONEY_MONTLY_OVERDRAWN NUMBER(6,2),"," LTV NUMBER(9,2),"," TOTAL_AUTOM_PAYMENTS NUMBER(8,0),"," NUM_TRANS_TELLER NUMBER(3,0),"," CHECKING_BALANCE NUMBER(7,0),"," NUM_TRANS_ATM NUMBER(3,0),"," LTV_BIN VARCHAR2(26),"," FIRST_NAME VARCHAR2(26),"," 
NUM_MORTGAGES NUMBER(3,0),"," CAR_OWNERSHIP NUMBER(3,0),"," LAST_NAME VARCHAR2(26)');"," ","-- WRITE A TABLE INTO THE CURRENT USER WITH THE NAME \"CUSTOMER_INSURANCE_LTV\" "," EXECUTE IMMEDIATE 'create table CUSTOMER_INSURANCE_LTV as select * from EXT_CUSTOMER_INSURANCE_LTV';"," EXCEPTION WHEN OTHERS THEN NULL;","","END;"],"enabled":true,"result":{"startTime":1737138366645,"interpreter":"script.low","endTime":1737138369552,"results":[{"message":"\nPL/SQL procedure successfully completed.\n\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Count the number of records","message":["%sql","","SELECT count(*) FROM CUSTOMER_INSURANCE_LTV;"],"enabled":true,"result":{"startTime":1737138370067,"interpreter":"sql.low","endTime":1737138370613,"results":[{"message":"COUNT(*)\n13880\n","type":"TABLE"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":4,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"table","title":"Review the contents of the CUSTOMER_INSURANCE_LTV table","message":["%sql","","SELECT * ","FROM CUSTOMER_INSURANCE_LTV","FETCH FIRST 10 ROWS 
ONLY;"],"enabled":true,"result":{"startTime":1737138371093,"interpreter":"sql.low","endTime":1737138371596,"results":[{"message":"MARITAL_STATUS\tSTATE\tCREDIT_BALANCE\tCUSTOMER_TENURE\tMORTGAGE_AMOUNT\tBANK_FUNDS\tNUM_DEPENDENTS\tHAS_CHILDREN\tINCOME\tCUSTOMER_ID\tGENDER\tPROFESSION\tCREDIT_CARD_LIMITS\tREGION\tHOME_OWNERSHIP\tNUM_ONLINE_TRANS\tBUY_INSURANCE\tMONTHLY_CHECKS\tNUM_TRANS_KIOSK\tAGE\tMONEY_MONTLY_OVERDRAWN\tLTV\tTOTAL_AUTOM_PAYMENTS\tNUM_TRANS_TELLER\tCHECKING_BALANCE\tNUM_TRANS_ATM\tLTV_BIN\tFIRST_NAME\tNUM_MORTGAGES\tCAR_OWNERSHIP\tLAST_NAME\nSINGLE\tCA \t0\t3\t0\t0\t3\t0\t65871\tCU15154 \tM \tNurse\t1000\tWest\t0\t0\tNo\t0\t1\t24\t53.06\t14367.75\t0\t0\t25\t0\tMEDIUM\tGAYLE\t0\t0\tDURANT\nSINGLE\tNY \t0\t4\t0\t290\t4\t0\t68747\tCU15155 \tM \tProgrammer/Developer\t700\tNorthEast\t0\t0\tYes\t1\t1\t35\t53.84\t14686.75\t287\t2\t25\t4\tMEDIUM\tQUINTON\t0\t1\tMASSEY\nMARRIED\tMI \t0\t3\t1000\t550\t3\t0\t68684\tCU15157 \tM \tProgrammer/Developer\t1000\tMidwest\t1\t1000\tYes\t14\t1\t26\t53.48\t25271\t132\t2\t25\t4\tHIGH\tANIBAL\t1\t1\tJIMENEZ\nMARRIED\tUT \t0\t5\t1200\t1000\t5\t0\t59354\tCU15286 \tF \tFireman\t2500\tSouthwest\t1\t1200\tNo\t4\t5\t24\t53.08\t19738.5\t628\t3\t619\t1\tMEDIUM\tJUNITA\t1\t1\tROBERTSON\nMARRIED\tUT \t0\t4\t1800\t0\t3\t0\t84801\tCU15287 \tF \tPROF-26\t2500\tSouthwest\t1\t1800\tNo\t0\t5\t47\t53.06\t31900.25\t0\t0\t25\t0\tVERY HIGH\tCHASITY\t1\t1\tELLIS\nMARRIED\tUT \t0\t1\t1400\t0\t1\t0\t73987\tCU15289 \tM \tProfessor\t2500\tSouthwest\t1\t1400\tNo\t0\t5\t46\t53.06\t31596.75\t0\t0\t25\t0\tVERY HIGH\tFRANKLIN\t1\t1\tKNOX\nSINGLE\tUT \t0\t3\t578\t0\t3\t0\t51452\tCU15290 \tM \tSales Representative\t2500\tSouthwest\t1\t578\tNo\t0\t5\t33\t53.06\t21663\t0\t0\t25\t0\tMEDIUM\tLINCOLN\t1\t1\tMATTSON\nSINGLE\tUT \t0\t3\t0\t0\t3\t0\t63181\tCU15291 \tM \tConstruction Laborer\t2500\tSouthwest\t0\t0\tNo\t1\t5\t49\t53.07\t16195.25\t0\t0\t25\t1\tMEDIUM\tSTEPHEN\t0\t0\tCARROLL\nSINGLE\tUT \t0\t5\t117\t0\t5\t0\t66654\tCU15292 \tF 
\tPROF-3\t2500\tSouthwest\t1\t117\tNo\t0\t5\t21\t53.06\t21263.5\t0\t0\t25\t0\tMEDIUM\tCEOLA\t1\t1\tHARRISON\nSINGLE\tUT \t0\t3\t0\t250\t3\t0\t61716\tCU15294 \tM \tProgrammer/Developer\t1500\tSouthwest\t0\t0\tNo\t3\t5\t26\t53.04\t13529\t0\t2\t25\t2\tLOW\tLLOYD\t0\t0\tHOLLEY\n","type":"TABLE"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":8,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","","#### Create the `CUSTOMER_INSURANCE_LTV_NOISE` table","","---"],"enabled":true,"result":{"startTime":1737138372065,"interpreter":"md.low","endTime":1737138372531,"results":[{"message":"

Create the CUSTOMER_INSURANCE_LTV_NOISE<\/code> table<\/h4>\n
\n","type":"HTML"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Create table with noise from the customer insurance LTV table","message":["%script","","CREATE TABLE CUSTOMER_INSURANCE_LTV_NOISE AS","SELECT * FROM CUSTOMER_INSURANCE_LTV"],"enabled":true,"result":{"startTime":1737138372995,"interpreter":"script.low","endTime":1737138374786,"results":[{"message":"\nTable CUSTOMER_INSURANCE_LTV_NOISE created.\n\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Insert duplicated rows into the table with noise","message":["%script","","BEGIN "," INSERT INTO CUSTOMER_INSURANCE_LTV_NOISE"," SELECT * FROM CUSTOMER_INSURANCE_LTV "," WHERE ORA_HASH(CUSTOMER_ID, 13, 10) = 0"," FETCH FIRST 1000 ROWS ONLY;"," COMMIT;","END;"],"enabled":true,"result":{"startTime":1737138375250,"interpreter":"script.low","endTime":1737138376373,"results":[{"message":"\nPL/SQL procedure successfully completed.\n\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":"For customer id < 'CU2', change INCOME, BANK_FUNDS, CREDIT_BALANCE, MARITAL_STATUS to null ","message":["%sql","","UPDATE CUSTOMER_INSURANCE_LTV_NOISE","SET INCOME = NULL, BANK_FUNDS 
= NULL, CREDIT_BALANCE = NULL, MARITAL_STATUS = NULL","WHERE CUSTOMER_ID < 'CU2';"],"enabled":true,"result":{"startTime":1737138376837,"interpreter":"sql.low","endTime":1737138377592,"results":[],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":"For customer id < CU8, change the LAST_NAME to 'THANOS'","message":["%sql","","UPDATE CUSTOMER_INSURANCE_LTV_NOISE","SET LAST_NAME = 'THANOS'","WHERE CUSTOMER_ID < 'CU8';"],"enabled":true,"result":{"startTime":1737138378180,"interpreter":"sql.low","endTime":1737138379016,"results":[],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"For customer id >= 'CU3' and < 'CU4', use synonyms","message":["%script","","BEGIN"," UPDATE CUSTOMER_INSURANCE_LTV_NOISE"," SET MARITAL_STATUS = 'DIV'"," WHERE CUSTOMER_ID >= 'CU3' and CUSTOMER_ID < 'CU4'"," AND MARITAL_STATUS = 'DIVORCED';"," "," COMMIT;"," "," UPDATE CUSTOMER_INSURANCE_LTV_NOISE"," SET MARITAL_STATUS = 'M'"," WHERE CUSTOMER_ID >= 'CU3' and CUSTOMER_ID < 'CU4'"," AND MARITAL_STATUS = 'MARRIED';"," "," COMMIT;","END;"],"enabled":true,"result":{"startTime":1737138379481,"interpreter":"script.low","endTime":1737138380033,"results":[{"message":"\nPL/SQL procedure successfully 
completed.\n\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","","#### Create the `CUSTOMER_INSURANCE_LTV_SQL` and the `CUSTOMER_INSURANCE_LTV_PY` tables ","","---"],"enabled":true,"result":{"startTime":1737138380501,"interpreter":"md.low","endTime":1737138380947,"results":[{"message":"

Create the CUSTOMER_INSURANCE_LTV_SQL<\/code> and the CUSTOMER_INSURANCE_LTV_PY<\/code> tables<\/h4>\n
\n","type":"HTML"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"raw","title":"Create noise tables for SQL and Python","message":["%script","","CREATE TABLE CUSTOMER_INSURANCE_LTV_SQL AS SELECT * FROM CUSTOMER_INSURANCE_LTV_NOISE;","CREATE TABLE CUSTOMER_INSURANCE_LTV_PY AS SELECT * FROM CUSTOMER_INSURANCE_LTV_NOISE;"],"enabled":true,"result":{"startTime":1737138381425,"interpreter":"script.low","endTime":1737138384411,"results":[{"message":"\nTable CUSTOMER_INSURANCE_LTV_SQL created.\n\n\n---------------------------\n\nTable CUSTOMER_INSURANCE_LTV_PY created.\n\n\n---------------------------\n","type":"TEXT"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":false,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":true,"hideVizConfig":false,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","","## End of Script"],"enabled":true,"result":{"startTime":1737138384885,"interpreter":"md.low","endTime":1737138385400,"results":[{"message":"

End of Script<\/h2>\n","type":"HTML"}],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":false,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%sql"],"enabled":true,"result":{"startTime":1737138385871,"interpreter":"sql.low","endTime":1737138386333,"results":[],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":12,"hideResult":true,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"}],"version":"6","snapshot":false,"tags":null}] \ No newline at end of file diff --git a/machine-learning/notebooks-oml/python/OML Third-Party Packages - Environment Creation.dsnb b/machine-learning/notebooks-oml/python/OML Third-Party Packages - Environment Creation.dsnb new file mode 100644 index 00000000..03576052 --- /dev/null +++ b/machine-learning/notebooks-oml/python/OML Third-Party Packages - Environment Creation.dsnb @@ -0,0 +1,2512 @@ +[ + { + "name" : "OML Third-Party Packages - Environment Creation", + "description" : null, + "tags" : null, + "version" : "7", + "layout" : "jupyter", + "type" : "medium", + "snapshot" : false, + "isEditable" : true, + "isRunnable" : true, + "template" : null, + "templateConfig" : null, + "paragraphs" : [ + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + " " + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : true, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988358548, + "endTime" : 1739988359568, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : 
"SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "# OML Third-Party Packages - Environment Creation", + "", + "Oracle Machine Learning Notebooks provide a conda interpreter to install third-party Python and R packages in a conda environment for use within OML Notebooks sessions, as well as within OML4R and OML4Py embedded execution invocations. Conda is an open source package and environment management system that enables the use of virtual environments containing third-party R and Python packages. With conda environments, you can install and update packages and their dependencies, and switch between environments to use project-specific packages. ", + "", + "Administrators create conda environments and install packages that can then be accessed by non-administrator users and loaded into their OML Notebooks session. The conda environments can be used in the OML4Py Python, SQL, and REST APIs, and the OML4R R, SQL, and REST APIs.", + "", + "In this notebook, we demonstrate a typical workflow for third-party environment creation and package installation in OML notebooks. Section 1 contains common commands used by ADMIN while creating and testing conda environments. In section 2, the ADMIN user creates a conda environment, installs packages, and uploads the conda environment to an Object Storage bucket associated with the Autonomous Database. 
", + "", + "In the template notebooks titled, *OML Third-Party Packages - R Environment Usage* and *OML Third-Party Packages - Python Environment Usage*, the OML user downloads and activates the environment, and uses the packages in their notebook session.", + "", + "---", + "Copyright (c) 2025 Oracle Corporation ", + "###### The Universal Permissive License (UPL), Version 1.0" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988360092, + "endTime" : 1739988360575, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

OML Third-Party Packages - Environment Creation

\n

Oracle Machine Learning Notebooks provide a conda interpreter to install third-party Python and R packages in a conda environment for use within OML Notebooks sessions, as well as within OML4R and OML4Py embedded execution invocations. Conda is an open source package and environment management system that enables the use of virtual environments containing third-party R and Python packages. With conda environments, you can install and update packages and their dependencies, and switch between environments to use project-specific packages.

\n

Administrators create conda environments and install packages that can then be accessed by non-administrator users and loaded into their OML Notebooks session. The conda environments can be used in the OML4Py Python, SQL, and REST APIs, and the OML4R R, SQL, and REST APIs.

\n

In this notebook, we demonstrate a typical workflow for third-party environment creation and package installation in OML Notebooks. Section 1 contains common commands used by the ADMIN user while creating and testing conda environments. In Section 2, the ADMIN user creates a conda environment, installs packages, and uploads the conda environment to an Object Storage bucket associated with the Autonomous Database.

\n

In the template notebooks titled OML Third-Party Packages - R Environment Usage and OML Third-Party Packages - Python Environment Usage, the OML user downloads and activates the environment, and uses the packages in their notebook session.

\n
\n

Copyright (c) 2025 Oracle Corporation

\n
The Universal Permissive License (UPL), Version 1.0
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Supported conda commands", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Refer to [Using Oracle Machine Learning on Autonomous Database](https://docs.oracle.com/en/database/oracle/machine-learning/oml-notebooks/omlug/notebooks-classic.html#GUID-5A206265-9EB0-4E49-A882-BFBF2DB5DB71) for a table of supported conda commands.", + "", + "This notebook reviews the following conda commands:", + "", + "- `--help`", + "- `install`", + "- `info`", + "- `search`", + "- `env list`", + "- `create`", + "- `activate`", + "- `list`", + "- `install`", + "- `uninstall`", + "- `remove`", + "- `list-saved-envs`", + "- `upload`", + "- `delete`" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988361065, + "endTime" : 1739988361522, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Refer to Using Oracle Machine Learning on Autonomous Database for a table of supported conda commands.

\n

This notebook reviews the following conda commands:

\n
    \n
  • --help
  • \n
  • install
  • \n
  • info
  • \n
  • search
  • \n
  • env list
  • \n
  • create
  • \n
  • activate
  • \n
  • list
  • \n
  • install
  • \n
  • uninstall
  • \n
  • remove
  • \n
  • list-saved-envs
  • \n
  • upload
  • \n
  • delete
  • \n
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "### Section 1: Commands for Creating and Managing Environments" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988362001, + "endTime" : 1739988362457, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Section 1: Commands for Creating and Managing Environments

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "---", + "### Conda Help", + "---", + "", + "To get help for conda commands, run the command name followed by the `--help` flag. The `conda` command is not run explicitly because the %conda interpreter provides the conda context." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988362933, + "endTime" : 1739988363373, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

To get help for conda commands, run the command name followed by the --help flag. The conda command is not run explicitly because the %conda interpreter provides the conda context.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Get help for all conda commands", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "--help" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988375735, + "endTime" : 1739988379219, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "usage: conda [-h] [-v] [--no-plugins] [-V] COMMAND ...\n\nconda is a tool for managing and deploying applications, environments and packages.\n\noptions:\n -h, --help Show this help message and exit.\n -v, --verbose Can be used multiple times. 
Once for detailed output,\n twice for INFO logging, thrice for DEBUG logging, four\n times for TRACE logging.\n --no-plugins Disable all plugins that are not built into conda.\n -V, --version Show the conda version number and exit.\n\ncommands:\n The following built-in and plugins subcommands are available.\n\n COMMAND\n activate Activate a conda environment.\n clean Remove unused packages and caches.\n compare Compare packages between conda environments.\n config Modify configuration values in .condarc.\n content-trust Signing and verification tools for Conda\n create Create a new conda environment from a list of specified\n packages.\n deactivate Deactivate the current active conda environment.\n doctor Display a health report for your environment.\n env-lcm See `conda env-lcm --help`.\n info Display information about current conda install.\n init Initialize conda for shell interaction.\n install Install a list of packages into a specified conda\n environment.\n list List installed packages in a conda environment.\n notices Retrieve latest channel notifications.\n pack See `conda pack --help`.\n package Create low-level conda packages. 
(EXPERIMENTAL)\n remove (uninstall)\n Remove a list of packages from a specified conda\n environment.\n rename Rename an existing environment.\n repoquery Advanced search for repodata.\n run Run an executable in a conda environment.\n search Search for packages and display associated information\n using the MatchSpec format.\n update (upgrade) Update conda packages to the latest compatible version.\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Get help for a specific conda command", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "install --help" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988379732, + "endTime" : 1739988381813, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "usage: conda install [-h] [--revision REVISION] [-n ENVIRONMENT | -p PATH]\n [-c CHANNEL] [--use-local] [--override-channels]\n [--repodata-fn REPODATA_FNS] [--experimental {jlap,lock}]\n [--no-lock] [--repodata-use-zst | --no-repodata-use-zst]\n [--strict-channel-priority] [--no-channel-priority]\n [--no-deps | --only-deps] [--no-pin] [--copy]\n [--no-shortcuts] [--shortcuts-only SHORTCUTS_ONLY] [-C]\n [-k] [--offline] [--json] [-v] [-q] [-d] [-y]\n [--download-only] [--show-channel-urls] [--file FILE]\n [--solver {classic,libmamba}] [--force-reinstall]\n [--freeze-installed | --update-deps | -S | --update-all | --update-specs]\n [-m] [--clobber] [--dev]\n [package_spec ...]\n\nInstall a list of packages into a specified conda environment.\n\nThis command accepts a list of package 
specifications (e.g, bitarray=0.8)\nand installs a set of packages consistent with those specifications and\ncompatible with the underlying environment. If full compatibility cannot\nbe assured, an error is reported and the environment is not changed.\n\nConda attempts to install the newest versions of the requested packages. To\naccomplish this, it may update some packages that are already installed, or\ninstall additional packages. To prevent existing packages from updating,\nuse the --freeze-installed option. This may force conda to install older\nversions of the requested packages, and it does not prevent additional\ndependency packages from being installed.\n\nIf you wish to skip dependency checking altogether, use the '--no-deps'\noption. This may result in an environment with incompatible packages, so\nthis option must be used with great caution.\n\nconda can also be called with a list of explicit conda package filenames\n(e.g. ./lxml-3.2.0-py27_0.tar.bz2). Using conda in this mode implies the\n--no-deps option, and should likewise be used with great caution. Explicit\nfilenames and package specifications cannot be mixed in a single command.\n\npositional arguments:\n package_spec List of packages to install or update in the conda\n environment.\n\noptions:\n -h, --help Show this help message and exit.\n --revision REVISION Revert to the specified REVISION.\n --file FILE Read package versions from the given file. Repeated\n file specifications can be passed (e.g. --file=file1\n --file=file2).\n --dev Use `sys.executable -m conda` in wrapper scripts\n instead of CONDA_EXE. This is mainly for use during\n tests where we test new conda sources against old\n Python versions.\n\nTarget Environment Specification:\n -n ENVIRONMENT, --name ENVIRONMENT\n Name of environment.\n -p PATH, --prefix PATH\n Full path to environment location (i.e. prefix).\n\nChannel Customization:\n -c CHANNEL, --channel CHANNEL\n Additional channel to search for packages. 
These are\n URLs searched in the order they are given (including\n local directories using the 'file://' syntax or simply\n a path like '/home/conda/mychan' or '../mychan').\n Then, the defaults or channels from .condarc are\n searched (unless --override-channels is given). You\n can use 'defaults' to get the default packages for\n conda. You can also use any name and the .condarc\n channel_alias value will be prepended. The default\n channel_alias is https://conda.anaconda.org/.\n --use-local Use locally built packages. Identical to '-c local'.\n --override-channels Do not search default or .condarc channels. Requires\n --channel.\n --repodata-fn REPODATA_FNS\n Specify file name of repodata on the remote server\n where your channels are configured or within local\n backups. Conda will try whatever you specify, but will\n ultimately fall back to repodata.json if your specs\n are not satisfiable with what you specify here. This\n is used to employ repodata that is smaller and reduced\n in time scope. You may pass this flag more than once.\n Leftmost entries are tried first, and the fallback to\n repodata.json is added for you automatically. For more\n information, see conda config --describe repodata_fns.\n --experimental {jlap,lock}\n jlap: Download incremental package index data from\n repodata.jlap; implies 'lock'. lock: use locking when\n reading, updating index (repodata.json) cache. Now\n enabled.\n --no-lock Disable locking when reading, updating index\n (repodata.json) cache.\n --repodata-use-zst, --no-repodata-use-zst\n Check for/do not check for repodata.json.zst. Enabled\n by default.\n\nSolver Mode Modifiers:\n --strict-channel-priority\n Packages in lower priority channels are not considered\n if a package with the same name appears in a higher\n priority channel.\n --no-channel-priority\n Package version takes precedence over channel\n priority. 
Overrides the value given by `conda config\n --show channel_priority`.\n --no-deps Do not install, update, remove, or change\n dependencies. This WILL lead to broken environments\n and inconsistent behavior. Use at your own risk.\n --only-deps Only install dependencies.\n --no-pin Ignore pinned file.\n --solver {classic,libmamba}\n Choose which solver backend to use.\n --force-reinstall Ensure that any user-requested package for the current\n operation is uninstalled and reinstalled, even if that\n package already exists in the environment.\n --freeze-installed, --no-update-deps\n Do not update or change already-installed\n dependencies.\n --update-deps Update dependencies that have available updates.\n -S, --satisfied-skip-solve\n Exit early and do not run the solver if the requested\n specs are satisfied. Also skips aggressive updates as\n configured by the 'aggressive_update_packages' config\n setting. Use 'conda info --describe\n aggressive_update_packages' to view your setting.\n --satisfied-skip-solve is similar to the default\n behavior of 'pip install'.\n --update-all, --all Update all installed packages in the environment.\n --update-specs Update based on provided specifications.\n\nPackage Linking and Install-time Options:\n --copy Install all packages using copies instead of hard- or\n soft-linking.\n --no-shortcuts Don't install start menu shortcuts\n --shortcuts-only SHORTCUTS_ONLY\n Install shortcuts only for this package name. Can be\n used several times.\n -m, --mkdir Create the environment directory, if necessary.\n --clobber Allow clobbering (i.e. overwriting) of overlapping\n file paths within packages and suppress related\n warnings.\n\nNetworking Options:\n -C, --use-index-cache\n Use cache of channel index files, even if it has\n expired. 
This is useful if you don't want conda to\n check whether a new version of the repodata file\n exists, which will save bandwidth.\n -k, --insecure Allow conda to perform \"insecure\" SSL connections and\n transfers. Equivalent to setting 'ssl_verify' to\n 'false'.\n --offline Offline mode. Don't connect to the Internet.\n\nOutput, Prompt, and Flow Control Options:\n --json Report all output as json. Suitable for using conda\n programmatically.\n -v, --verbose Can be used multiple times. Once for detailed output,\n twice for INFO logging, thrice for DEBUG logging, four\n times for TRACE logging.\n -q, --quiet Do not display progress bar.\n -d, --dry-run Only display what would have been done.\n -y, --yes Sets any confirmation values to 'yes' automatically.\n Users will not be asked to confirm any adding,\n deleting, backups, etc.\n --download-only Solve an environment and ensure package caches are\n populated, but exit prior to unlinking and linking\n packages into the prefix.\n --show-channel-urls Show channel urls. Overrides the value given by `conda\n config --show show_channel_urls`.\n\nExamples:\n\nInstall the package 'scipy' into the currently-active environment::\n\n conda install scipy\n\nInstall a list of packages into an environment, myenv::\n\n conda install -n myenv scipy curl wheel\n\nInstall a specific version of 'python' into an environment, myenv::\n\n conda install -p path/to/myenv python=3.11\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "---", + "### Conda Info", + "---", + "The `info` command provides information about the conda installation, including the conda version and available channels." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988382297, + "endTime" : 1739988382750, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

The info command provides information about the conda installation, including the conda version and available channels.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda ", + "", + "info" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988383230, + "endTime" : 1739988385409, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\n active environment : base\n active env location : /opt/conda\n shell level : 1\n user config file : /u01/.condarc\n populated config files : /u01/.condarc\n conda version : 24.1.2\n conda-build version : not installed\n python version : 3.12.1.final.0\n solver : libmamba (default)\n virtual packages : __archspec=1=zen3\n __conda=24.1.2=0\n __glibc=2.17=0\n __linux=5.4.17=0\n __unix=0=0\n base environment : /opt/conda (read only)\n conda av data dir : /opt/conda/etc/conda\n conda av metadata url : None\n channel URLs : https://repo.anaconda.com/pkgs/main/linux-64\n https://repo.anaconda.com/pkgs/main/noarch\n https://repo.anaconda.com/pkgs/r/linux-64\n https://repo.anaconda.com/pkgs/r/noarch\n package cache : /u01/.conda/pkgs\n envs directories : /u01/.conda/envs\n /opt/conda/envs\n platform : linux-64\n user-agent : conda/24.1.2 requests/2.32.3 CPython/3.12.1 Linux/5.4.17-2136.338.4.2.el7uek.x86_64 oracle/7.9 glibc/2.17 solver/libmamba conda-libmamba-solver/23.12.0 libmambapy/1.5.3\n UID:GID : 1001:1001\n netrc file : None\n offline mode : False\n\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" 
: null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "---", + "### Conda Search", + "---", + "The `search` command allows the user to search for packages and display associated information, including the package version and the channel where it resides." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988385886, + "endTime" : 1739988386329, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

The search command allows the user to search for packages and display associated information, including the package version and the channel where it resides.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Search for a specific package", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "search scikit-learn" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988386829, + "endTime" : 1739988390371, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Loading channels: ...working... done\n# Name Version Build Channel \nscikit-learn 0.19.0 py27_nomklh0ffebdf_2 pkgs/main \nscikit-learn 0.19.0 py27hd893acb_2 pkgs/main \nscikit-learn 0.19.0 py35_nomklh375dd1d_2 pkgs/main \nscikit-learn 0.19.0 py35h25e8076_2 pkgs/main \nscikit-learn 0.19.0 py36_nomklh41feb14_2 pkgs/main \nscikit-learn 0.19.0 py36h97ac459_2 pkgs/main \nscikit-learn 0.19.1 py27_nomklh6479e79_0 pkgs/main \nscikit-learn 0.19.1 py27_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py27h445a80a_0 pkgs/main \nscikit-learn 0.19.1 py27hedc7406_0 pkgs/main \nscikit-learn 0.19.1 py35_nomklh26d41a3_0 pkgs/main \nscikit-learn 0.19.1 py35hbf1f462_0 pkgs/main \nscikit-learn 0.19.1 py36_nomklh27f7947_0 pkgs/main \nscikit-learn 0.19.1 py36_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py36h7aa7ec6_0 pkgs/main \nscikit-learn 0.19.1 py36hedc7406_0 pkgs/main \nscikit-learn 0.19.1 py37_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py37hedc7406_0 pkgs/main \nscikit-learn 0.19.2 py27h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py27h4989274_0 pkgs/main \nscikit-learn 0.19.2 py35h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py35h4989274_0 pkgs/main \nscikit-learn 0.19.2 py36h22eb022_0 pkgs/main 
\nscikit-learn 0.19.2 py36h4989274_0 pkgs/main \nscikit-learn 0.19.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py37h4989274_0 pkgs/main \nscikit-learn 0.20.0 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py27h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py27h4989274_0 pkgs/main \nscikit-learn 0.20.0 py27h4989274_1 pkgs/main \nscikit-learn 0.20.0 py35h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py35h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py35h4989274_0 pkgs/main \nscikit-learn 0.20.0 py35h4989274_1 pkgs/main \nscikit-learn 0.20.0 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py36h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py36h4989274_0 pkgs/main \nscikit-learn 0.20.0 py36h4989274_1 pkgs/main \nscikit-learn 0.20.0 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py37h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py37h4989274_0 pkgs/main \nscikit-learn 0.20.0 py37h4989274_1 pkgs/main \nscikit-learn 0.20.1 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py27h4989274_0 pkgs/main \nscikit-learn 0.20.1 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py36h4989274_0 pkgs/main \nscikit-learn 0.20.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.1 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py37h4989274_0 pkgs/main \nscikit-learn 0.20.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.2 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.2 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.2 py37hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 
py37h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 py38h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py38hd81dba3_0 pkgs/main \nscikit-learn 0.21.2 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.2 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.21.2 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.3 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.3 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.3 py37h22eb022_0 pkgs/main \nscikit-learn 0.21.3 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22 py36h22eb022_0 pkgs/main \nscikit-learn 0.22 py36hd81dba3_0 pkgs/main \nscikit-learn 0.22 py37h22eb022_0 pkgs/main \nscikit-learn 0.22 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22 py38h22eb022_0 pkgs/main \nscikit-learn 0.22 py38hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py37h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py38h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py38hd81dba3_0 pkgs/main \nscikit-learn 0.23.1 py36h423224d_0 pkgs/main \nscikit-learn 0.23.1 py36h7ea95a0_0 pkgs/main \nscikit-learn 0.23.1 py37h423224d_0 pkgs/main \nscikit-learn 0.23.1 py37h7ea95a0_0 pkgs/main \nscikit-learn 0.23.1 py38h423224d_0 pkgs/main \nscikit-learn 0.23.1 py38h7ea95a0_0 pkgs/main \nscikit-learn 0.23.2 py36h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py37h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py38h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py39ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py36ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py37ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py38ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py39ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py36ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py37ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py38ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py39ha9443f7_0 pkgs/main \nscikit-learn 1.0.1 py310h00e6091_0 pkgs/main 
\nscikit-learn 1.0.1 py37h51133e4_0 pkgs/main \nscikit-learn 1.0.1 py38h51133e4_0 pkgs/main \nscikit-learn 1.0.1 py39h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py37h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py37h51133e4_1 pkgs/main \nscikit-learn 1.0.2 py38h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py38h51133e4_1 pkgs/main \nscikit-learn 1.0.2 py39h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py39h51133e4_1 pkgs/main \nscikit-learn 1.1.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.1 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.1 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py310h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py311h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py38h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py39h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py310h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py38h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py39h6a678d5_1 pkgs/main \nscikit-learn 1.2.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 py311h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py310h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py311h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py311h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py38h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py39h6a678d5_1 pkgs/main \nscikit-learn 1.3.0 py310h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py310h1128e8f_1 pkgs/main \nscikit-learn 1.3.0 py311ha02d727_0 pkgs/main 
\nscikit-learn 1.3.0 py311ha02d727_1 pkgs/main \nscikit-learn 1.3.0 py312h526ad5a_2 pkgs/main \nscikit-learn 1.3.0 py38h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py38h1128e8f_1 pkgs/main \nscikit-learn 1.3.0 py39h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py39h1128e8f_1 pkgs/main \nscikit-learn 1.4.2 py310h1128e8f_1 pkgs/main \nscikit-learn 1.4.2 py311ha02d727_1 pkgs/main \nscikit-learn 1.4.2 py312h526ad5a_1 pkgs/main \nscikit-learn 1.4.2 py39h1128e8f_1 pkgs/main \nscikit-learn 1.5.1 py310h1128e8f_0 pkgs/main \nscikit-learn 1.5.1 py311ha02d727_0 pkgs/main \nscikit-learn 1.5.1 py312h526ad5a_0 pkgs/main \nscikit-learn 1.5.1 py39h1128e8f_0 pkgs/main \nscikit-learn 1.5.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py311h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py312h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py313h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py311h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py312h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py313h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py39h6a678d5_0 pkgs/main \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Search for packages containing 'scikit' in the package name", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "search '*scikit*'" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988390852, + "endTime" : 1739988397641, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Loading channels: ...working... 
done\n# Name Version Build Channel \nscikit-bio 0.5.2 py35h3010b51_0 pkgs/main \nscikit-bio 0.5.2 py36h3010b51_0 pkgs/main \nscikit-bio 0.5.4 py35hdd07704_0 pkgs/main \nscikit-bio 0.5.4 py36hdd07704_0 pkgs/main \nscikit-bio 0.5.6 py36h6323ea4_0 pkgs/main \nscikit-bio 0.5.6 py37h6323ea4_0 pkgs/main \nscikit-bio 0.5.6 py38h6323ea4_0 pkgs/main \nscikit-bio 0.5.6 py39h6323ea4_0 pkgs/main \nscikit-build 0.11.1 py310h295c915_2 pkgs/main \nscikit-build 0.11.1 py36h2531618_2 pkgs/main \nscikit-build 0.11.1 py37h2531618_2 pkgs/main \nscikit-build 0.11.1 py38h2531618_2 pkgs/main \nscikit-build 0.11.1 py39h2531618_2 pkgs/main \nscikit-build 0.15.0 py310h6a678d5_0 pkgs/main \nscikit-build 0.15.0 py311h6a678d5_0 pkgs/main \nscikit-build 0.15.0 py312h6a678d5_0 pkgs/main \nscikit-build 0.15.0 py37h6a678d5_0 pkgs/main \nscikit-build 0.15.0 py38h6a678d5_0 pkgs/main \nscikit-build 0.15.0 py39h6a678d5_0 pkgs/main \nscikit-build 0.18.1 py310h6a678d5_0 pkgs/main \nscikit-build 0.18.1 py311h6a678d5_0 pkgs/main \nscikit-build 0.18.1 py312h6a678d5_0 pkgs/main \nscikit-build 0.18.1 py313h6a678d5_0 pkgs/main \nscikit-build 0.18.1 py39h6a678d5_0 pkgs/main \nscikit-build-core 0.6.1 py310h1128e8f_0 pkgs/main \nscikit-build-core 0.6.1 py310h1128e8f_1 pkgs/main \nscikit-build-core 0.6.1 py311ha02d727_0 pkgs/main \nscikit-build-core 0.6.1 py311ha02d727_1 pkgs/main \nscikit-build-core 0.6.1 py312h526ad5a_1 pkgs/main \nscikit-build-core 0.6.1 py38h1128e8f_0 pkgs/main \nscikit-build-core 0.6.1 py38h1128e8f_1 pkgs/main \nscikit-build-core 0.6.1 py39h1128e8f_0 pkgs/main \nscikit-build-core 0.6.1 py39h1128e8f_1 pkgs/main \nscikit-build-core 0.10.1 py310h1128e8f_0 pkgs/main \nscikit-build-core 0.10.1 py311ha02d727_0 pkgs/main \nscikit-build-core 0.10.1 py312h526ad5a_0 pkgs/main \nscikit-build-core 0.10.1 py313h06d7b56_0 pkgs/main \nscikit-build-core 0.10.1 py38h1128e8f_0 pkgs/main \nscikit-build-core 0.10.1 py39h1128e8f_0 pkgs/main \nscikit-build-core 0.10.7 py310h1128e8f_0 pkgs/main \nscikit-build-core 
0.10.7 py311ha02d727_0 pkgs/main \nscikit-build-core 0.10.7 py312h526ad5a_0 pkgs/main \nscikit-build-core 0.10.7 py313h06d7b56_0 pkgs/main \nscikit-build-core 0.10.7 py39h1128e8f_0 pkgs/main \nscikit-image 0.13.0 py27h06cb35d_1 pkgs/main \nscikit-image 0.13.0 py35h3573165_1 pkgs/main \nscikit-image 0.13.0 py36had3c07a_1 pkgs/main \nscikit-image 0.13.1 py27h14c3975_1 pkgs/main \nscikit-image 0.13.1 py27h44232b9_0 pkgs/main \nscikit-image 0.13.1 py35h14c3975_1 pkgs/main \nscikit-image 0.13.1 py35h7a281a6_0 pkgs/main \nscikit-image 0.13.1 py36h14c3975_1 pkgs/main \nscikit-image 0.13.1 py36ha4a0841_0 pkgs/main \nscikit-image 0.14.0 py27hf484d3e_1 pkgs/main \nscikit-image 0.14.0 py35hf484d3e_1 pkgs/main \nscikit-image 0.14.0 py36hf484d3e_1 pkgs/main \nscikit-image 0.14.0 py37hf484d3e_1 pkgs/main \nscikit-image 0.14.1 py27he6710b0_0 pkgs/main \nscikit-image 0.14.1 py36he6710b0_0 pkgs/main \nscikit-image 0.14.1 py37he6710b0_0 pkgs/main \nscikit-image 0.14.2 py27he6710b0_0 pkgs/main \nscikit-image 0.14.2 py36he6710b0_0 pkgs/main \nscikit-image 0.14.2 py37he6710b0_0 pkgs/main \nscikit-image 0.15.0 py36he6710b0_0 pkgs/main \nscikit-image 0.15.0 py37he6710b0_0 pkgs/main \nscikit-image 0.15.0 py38he6710b0_0 pkgs/main \nscikit-image 0.16.2 py310h6a678d5_1 pkgs/main \nscikit-image 0.16.2 py36h0573a6f_0 pkgs/main \nscikit-image 0.16.2 py37h0573a6f_0 pkgs/main \nscikit-image 0.16.2 py37h6a678d5_1 pkgs/main \nscikit-image 0.16.2 py38h0573a6f_0 pkgs/main \nscikit-image 0.16.2 py38h6a678d5_1 pkgs/main \nscikit-image 0.16.2 py39h6a678d5_1 pkgs/main \nscikit-image 0.16.2 py39ha9443f7_0 pkgs/main \nscikit-image 0.17.2 py36hdf5156a_0 pkgs/main \nscikit-image 0.17.2 py37hdf5156a_0 pkgs/main \nscikit-image 0.17.2 py38hdf5156a_0 pkgs/main \nscikit-image 0.17.2 py39ha9443f7_0 pkgs/main \nscikit-image 0.18.1 py37ha9443f7_0 pkgs/main \nscikit-image 0.18.1 py38ha9443f7_0 pkgs/main \nscikit-image 0.18.1 py39ha9443f7_0 pkgs/main \nscikit-image 0.18.3 py310h00e6091_0 pkgs/main \nscikit-image 
0.18.3 py37h51133e4_0 pkgs/main \nscikit-image 0.18.3 py38h51133e4_0 pkgs/main \nscikit-image 0.18.3 py39h51133e4_0 pkgs/main \nscikit-image 0.19.2 py310h00e6091_0 pkgs/main \nscikit-image 0.19.2 py37h51133e4_0 pkgs/main \nscikit-image 0.19.2 py38h51133e4_0 pkgs/main \nscikit-image 0.19.2 py39h51133e4_0 pkgs/main \nscikit-image 0.19.3 py310h6a678d5_1 pkgs/main \nscikit-image 0.19.3 py311h6a678d5_2 pkgs/main \nscikit-image 0.19.3 py37h6a678d5_1 pkgs/main \nscikit-image 0.19.3 py38h6a678d5_1 pkgs/main \nscikit-image 0.19.3 py39h6a678d5_1 pkgs/main \nscikit-image 0.20.0 py310h6a678d5_0 pkgs/main \nscikit-image 0.20.0 py311h6a678d5_0 pkgs/main \nscikit-image 0.20.0 py38h6a678d5_0 pkgs/main \nscikit-image 0.20.0 py39h6a678d5_0 pkgs/main \nscikit-image 0.21.0 py312h526ad5a_0 pkgs/main \nscikit-image 0.22.0 py310h1128e8f_0 pkgs/main \nscikit-image 0.22.0 py311ha02d727_0 pkgs/main \nscikit-image 0.22.0 py312h526ad5a_0 pkgs/main \nscikit-image 0.22.0 py39h1128e8f_0 pkgs/main \nscikit-image 0.23.2 py310h1128e8f_0 pkgs/main \nscikit-image 0.23.2 py311ha02d727_0 pkgs/main \nscikit-image 0.23.2 py312h526ad5a_0 pkgs/main \nscikit-image 0.24.0 py310h1128e8f_0 pkgs/main \nscikit-image 0.24.0 py311ha02d727_0 pkgs/main \nscikit-image 0.24.0 py312h526ad5a_0 pkgs/main \nscikit-image 0.24.0 py39h1128e8f_0 pkgs/main \nscikit-image 0.25.0 py310h6a678d5_0 pkgs/main \nscikit-image 0.25.0 py311h6a678d5_0 pkgs/main \nscikit-image 0.25.0 py312h6a678d5_0 pkgs/main \nscikit-image 0.25.0 py313h6a678d5_0 pkgs/main \nscikit-learn 0.19.0 py27_nomklh0ffebdf_2 pkgs/main \nscikit-learn 0.19.0 py27hd893acb_2 pkgs/main \nscikit-learn 0.19.0 py35_nomklh375dd1d_2 pkgs/main \nscikit-learn 0.19.0 py35h25e8076_2 pkgs/main \nscikit-learn 0.19.0 py36_nomklh41feb14_2 pkgs/main \nscikit-learn 0.19.0 py36h97ac459_2 pkgs/main \nscikit-learn 0.19.1 py27_nomklh6479e79_0 pkgs/main \nscikit-learn 0.19.1 py27_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py27h445a80a_0 pkgs/main \nscikit-learn 0.19.1 py27hedc7406_0 
pkgs/main \nscikit-learn 0.19.1 py35_nomklh26d41a3_0 pkgs/main \nscikit-learn 0.19.1 py35hbf1f462_0 pkgs/main \nscikit-learn 0.19.1 py36_nomklh27f7947_0 pkgs/main \nscikit-learn 0.19.1 py36_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py36h7aa7ec6_0 pkgs/main \nscikit-learn 0.19.1 py36hedc7406_0 pkgs/main \nscikit-learn 0.19.1 py37_nomklh6cfcb94_0 pkgs/main \nscikit-learn 0.19.1 py37hedc7406_0 pkgs/main \nscikit-learn 0.19.2 py27h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py27h4989274_0 pkgs/main \nscikit-learn 0.19.2 py35h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py35h4989274_0 pkgs/main \nscikit-learn 0.19.2 py36h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py36h4989274_0 pkgs/main \nscikit-learn 0.19.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.19.2 py37h4989274_0 pkgs/main \nscikit-learn 0.20.0 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py27h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py27h4989274_0 pkgs/main \nscikit-learn 0.20.0 py27h4989274_1 pkgs/main \nscikit-learn 0.20.0 py35h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py35h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py35h4989274_0 pkgs/main \nscikit-learn 0.20.0 py35h4989274_1 pkgs/main \nscikit-learn 0.20.0 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py36h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py36h4989274_0 pkgs/main \nscikit-learn 0.20.0 py36h4989274_1 pkgs/main \nscikit-learn 0.20.0 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.0 py37h22eb022_1 pkgs/main \nscikit-learn 0.20.0 py37h4989274_0 pkgs/main \nscikit-learn 0.20.0 py37h4989274_1 pkgs/main \nscikit-learn 0.20.1 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py27h4989274_0 pkgs/main \nscikit-learn 0.20.1 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py36h4989274_0 pkgs/main \nscikit-learn 0.20.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.1 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.1 py37h4989274_0 pkgs/main \nscikit-learn 0.20.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py27h22eb022_0 
pkgs/main \nscikit-learn 0.20.2 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.2 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.2 py37hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py27h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py27hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py36h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py36hd81dba3_0 pkgs/main \nscikit-learn 0.20.3 py37h22eb022_0 pkgs/main \nscikit-learn 0.20.3 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 py37h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.1 py38h22eb022_0 pkgs/main \nscikit-learn 0.21.1 py38hd81dba3_0 pkgs/main \nscikit-learn 0.21.2 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.2 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.2 py37h22eb022_0 pkgs/main \nscikit-learn 0.21.2 py37hd81dba3_0 pkgs/main \nscikit-learn 0.21.3 py36h22eb022_0 pkgs/main \nscikit-learn 0.21.3 py36hd81dba3_0 pkgs/main \nscikit-learn 0.21.3 py37h22eb022_0 pkgs/main \nscikit-learn 0.21.3 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22 py36h22eb022_0 pkgs/main \nscikit-learn 0.22 py36hd81dba3_0 pkgs/main \nscikit-learn 0.22 py37h22eb022_0 pkgs/main \nscikit-learn 0.22 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22 py38h22eb022_0 pkgs/main \nscikit-learn 0.22 py38hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py36h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py36hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py37h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py37hd81dba3_0 pkgs/main \nscikit-learn 0.22.1 py38h22eb022_0 pkgs/main \nscikit-learn 0.22.1 py38hd81dba3_0 pkgs/main \nscikit-learn 0.23.1 py36h423224d_0 pkgs/main \nscikit-learn 0.23.1 py36h7ea95a0_0 pkgs/main \nscikit-learn 0.23.1 py37h423224d_0 pkgs/main \nscikit-learn 0.23.1 py37h7ea95a0_0 pkgs/main \nscikit-learn 0.23.1 py38h423224d_0 pkgs/main \nscikit-learn 0.23.1 
py38h7ea95a0_0 pkgs/main \nscikit-learn 0.23.2 py36h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py37h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py38h0573a6f_0 pkgs/main \nscikit-learn 0.23.2 py39ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py36ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py37ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py38ha9443f7_0 pkgs/main \nscikit-learn 0.24.1 py39ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py36ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py37ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py38ha9443f7_0 pkgs/main \nscikit-learn 0.24.2 py39ha9443f7_0 pkgs/main \nscikit-learn 1.0.1 py310h00e6091_0 pkgs/main \nscikit-learn 1.0.1 py37h51133e4_0 pkgs/main \nscikit-learn 1.0.1 py38h51133e4_0 pkgs/main \nscikit-learn 1.0.1 py39h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py37h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py37h51133e4_1 pkgs/main \nscikit-learn 1.0.2 py38h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py38h51133e4_1 pkgs/main \nscikit-learn 1.0.2 py39h51133e4_0 pkgs/main \nscikit-learn 1.0.2 py39h51133e4_1 pkgs/main \nscikit-learn 1.1.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.1 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.1 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py310h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py310h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py311h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py38h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py38h6a678d5_1 pkgs/main \nscikit-learn 1.1.3 py39h6a678d5_0 pkgs/main \nscikit-learn 1.1.3 py39h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py310h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py38h6a678d5_1 pkgs/main \nscikit-learn 1.2.0 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.0 py39h6a678d5_1 pkgs/main \nscikit-learn 1.2.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 
py311h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.1 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py310h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py311h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py311h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py38h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py38h6a678d5_1 pkgs/main \nscikit-learn 1.2.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.2.2 py39h6a678d5_1 pkgs/main \nscikit-learn 1.3.0 py310h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py310h1128e8f_1 pkgs/main \nscikit-learn 1.3.0 py311ha02d727_0 pkgs/main \nscikit-learn 1.3.0 py311ha02d727_1 pkgs/main \nscikit-learn 1.3.0 py312h526ad5a_2 pkgs/main \nscikit-learn 1.3.0 py38h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py38h1128e8f_1 pkgs/main \nscikit-learn 1.3.0 py39h1128e8f_0 pkgs/main \nscikit-learn 1.3.0 py39h1128e8f_1 pkgs/main \nscikit-learn 1.4.2 py310h1128e8f_1 pkgs/main \nscikit-learn 1.4.2 py311ha02d727_1 pkgs/main \nscikit-learn 1.4.2 py312h526ad5a_1 pkgs/main \nscikit-learn 1.4.2 py39h1128e8f_1 pkgs/main \nscikit-learn 1.5.1 py310h1128e8f_0 pkgs/main \nscikit-learn 1.5.1 py311ha02d727_0 pkgs/main \nscikit-learn 1.5.1 py312h526ad5a_0 pkgs/main \nscikit-learn 1.5.1 py39h1128e8f_0 pkgs/main \nscikit-learn 1.5.2 py310h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py311h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py312h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py313h6a678d5_0 pkgs/main \nscikit-learn 1.5.2 py39h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py310h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py311h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py312h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py313h6a678d5_0 pkgs/main \nscikit-learn 1.6.1 py39h6a678d5_0 pkgs/main \nscikit-learn-intelex 2021.2.2 py36h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.2.2 py37h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.2.2 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.2.2 py39h06a4308_0 pkgs/main 
\nscikit-learn-intelex 2021.3.0 py36h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.3.0 py37h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.3.0 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.3.0 py39h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.4.0 py310h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.4.0 py37h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.4.0 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.4.0 py39h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.5.0 py37h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.5.0 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.5.0 py39h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.6.0 py310h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.6.0 py37h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.6.0 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2021.6.0 py39h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.0.2 py310h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.0.2 py311h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.0.2 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.0.2 py39h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.1.1 py310h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.1.1 py311h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.1.1 py38h06a4308_0 pkgs/main \nscikit-learn-intelex 2023.1.1 py39h06a4308_0 pkgs/main \nscikit-plot 0.3.7 py310h06a4308_0 pkgs/main \nscikit-plot 0.3.7 py311h06a4308_0 pkgs/main \nscikit-plot 0.3.7 py312h06a4308_0 pkgs/main \nscikit-plot 0.3.7 py38h06a4308_0 pkgs/main \nscikit-plot 0.3.7 py39h06a4308_0 pkgs/main \nscikit-rf 0.14.5 py27h6f8029f_0 pkgs/main \nscikit-rf 0.14.5 py35hc292ca9_0 pkgs/main \nscikit-rf 0.14.5 py36h9a41a6d_0 pkgs/main \nscikit-rf 0.14.8 py27_0 pkgs/main \nscikit-rf 0.14.8 py35_0 pkgs/main \nscikit-rf 0.14.8 py36_0 pkgs/main \nscikit-rf 0.14.9 py27_0 pkgs/main \nscikit-rf 0.14.9 py35_0 pkgs/main \nscikit-rf 0.14.9 py36_0 pkgs/main \nscikit-rf 0.15.4 py36_0 pkgs/main \nscikit-rf 0.15.4 py37_0 pkgs/main \nscikit-rf 0.15.4 py38_0 
pkgs/main \nscikit-rf 0.16.0 py36h06a4308_0 pkgs/main \nscikit-rf 0.16.0 py37h06a4308_0 pkgs/main \nscikit-rf 0.16.0 py38h06a4308_0 pkgs/main \nscikit-rf 0.16.0 py39h06a4308_0 pkgs/main \nscikit-rf 0.17.0 pyhd3eb1b0_0 pkgs/main \nscikit-rf 0.18.1 pyhd3eb1b0_0 pkgs/main \nscikit-rf 1.2.0 py310h06a4308_0 pkgs/main \nscikit-rf 1.2.0 py311h06a4308_0 pkgs/main \nscikit-rf 1.2.0 py312h06a4308_0 pkgs/main \nscikit-rf 1.2.0 py313h06a4308_0 pkgs/main \nscikit-rf 1.2.0 py38h06a4308_0 pkgs/main \nscikit-rf 1.2.0 py39h06a4308_0 pkgs/main \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Search for a specific version of a package", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "search 'numpy==1.25'" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988398125, + "endTime" : 1739988401507, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Loading channels: ...working... 
done\n# Name Version Build Channel \nnumpy 1.25.0 py310h5f9d8c6_0 pkgs/main \nnumpy 1.25.0 py310heeff2f4_0 pkgs/main \nnumpy 1.25.0 py311h08b1b3b_0 pkgs/main \nnumpy 1.25.0 py311h24aa872_0 pkgs/main \nnumpy 1.25.0 py39h5f9d8c6_0 pkgs/main \nnumpy 1.25.0 py39heeff2f4_0 pkgs/main \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "search 'numpy>=1.21'" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988401976, + "endTime" : 1739988405356, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Loading channels: ...working... 
done\n# Name Version Build Channel \nnumpy 1.21.2 py310h20f2e39_0 pkgs/main \nnumpy 1.21.2 py310hd8d4704_0 pkgs/main \nnumpy 1.21.2 py37h20f2e39_0 pkgs/main \nnumpy 1.21.2 py37hd8d4704_0 pkgs/main \nnumpy 1.21.2 py38h20f2e39_0 pkgs/main \nnumpy 1.21.2 py38hd8d4704_0 pkgs/main \nnumpy 1.21.2 py39h20f2e39_0 pkgs/main \nnumpy 1.21.2 py39hd8d4704_0 pkgs/main \nnumpy 1.21.5 py310h1794996_3 pkgs/main \nnumpy 1.21.5 py310h4f1e569_1 pkgs/main \nnumpy 1.21.5 py310h4f1e569_2 pkgs/main \nnumpy 1.21.5 py310h5f9d8c6_4 pkgs/main \nnumpy 1.21.5 py310hac523dd_3 pkgs/main \nnumpy 1.21.5 py310hfa59a62_1 pkgs/main \nnumpy 1.21.5 py310hfa59a62_2 pkgs/main \nnumpy 1.21.5 py37h6c91a56_3 pkgs/main \nnumpy 1.21.5 py37h7a5d4dd_1 pkgs/main \nnumpy 1.21.5 py37h7a5d4dd_2 pkgs/main \nnumpy 1.21.5 py37he7a7128_1 pkgs/main \nnumpy 1.21.5 py37he7a7128_2 pkgs/main \nnumpy 1.21.5 py37hf838250_3 pkgs/main \nnumpy 1.21.5 py38h6c91a56_3 pkgs/main \nnumpy 1.21.5 py38h7a5d4dd_1 pkgs/main \nnumpy 1.21.5 py38h7a5d4dd_2 pkgs/main \nnumpy 1.21.5 py38he7a7128_1 pkgs/main \nnumpy 1.21.5 py38he7a7128_2 pkgs/main \nnumpy 1.21.5 py38hf6e8229_4 pkgs/main \nnumpy 1.21.5 py38hf838250_3 pkgs/main \nnumpy 1.21.5 py39h6c91a56_3 pkgs/main \nnumpy 1.21.5 py39h7a5d4dd_1 pkgs/main \nnumpy 1.21.5 py39h7a5d4dd_2 pkgs/main \nnumpy 1.21.5 py39he7a7128_1 pkgs/main \nnumpy 1.21.5 py39he7a7128_2 pkgs/main \nnumpy 1.21.5 py39hf6e8229_4 pkgs/main \nnumpy 1.21.5 py39hf838250_3 pkgs/main \nnumpy 1.21.6 py310h5f9d8c6_0 pkgs/main \nnumpy 1.21.6 py310h5f9d8c6_1 pkgs/main \nnumpy 1.21.6 py310hac523dd_0 pkgs/main \nnumpy 1.21.6 py310hac523dd_1 pkgs/main \nnumpy 1.21.6 py38h5f9d8c6_0 pkgs/main \nnumpy 1.21.6 py38h5f9d8c6_1 pkgs/main \nnumpy 1.21.6 py38hac523dd_0 pkgs/main \nnumpy 1.21.6 py38hac523dd_1 pkgs/main \nnumpy 1.21.6 py39h5f9d8c6_0 pkgs/main \nnumpy 1.21.6 py39h5f9d8c6_1 pkgs/main \nnumpy 1.21.6 py39hac523dd_0 pkgs/main \nnumpy 1.21.6 py39hac523dd_1 pkgs/main \nnumpy 1.22.3 py310h4f1e569_0 pkgs/main \nnumpy 1.22.3 py310h5f9d8c6_2 
pkgs/main \nnumpy 1.22.3 py310hfa59a62_0 pkgs/main \nnumpy 1.22.3 py311h5585df3_1 pkgs/main \nnumpy 1.22.3 py311h75bd12f_1 pkgs/main \nnumpy 1.22.3 py38h7a5d4dd_0 pkgs/main \nnumpy 1.22.3 py38he7a7128_0 pkgs/main \nnumpy 1.22.3 py38hf6e8229_2 pkgs/main \nnumpy 1.22.3 py39h7a5d4dd_0 pkgs/main \nnumpy 1.22.3 py39he7a7128_0 pkgs/main \nnumpy 1.22.3 py39hf6e8229_2 pkgs/main \nnumpy 1.23.1 py310h1794996_0 pkgs/main \nnumpy 1.23.1 py310hac523dd_0 pkgs/main \nnumpy 1.23.1 py38h6c91a56_0 pkgs/main \nnumpy 1.23.1 py38hf838250_0 pkgs/main \nnumpy 1.23.1 py39h6c91a56_0 pkgs/main \nnumpy 1.23.1 py39hf838250_0 pkgs/main \nnumpy 1.23.3 py310hac523dd_0 pkgs/main \nnumpy 1.23.3 py310hac523dd_1 pkgs/main \nnumpy 1.23.3 py310hd5efca6_0 pkgs/main \nnumpy 1.23.3 py310hd5efca6_1 pkgs/main \nnumpy 1.23.3 py38h14f4228_0 pkgs/main \nnumpy 1.23.3 py38h14f4228_1 pkgs/main \nnumpy 1.23.3 py38hf838250_0 pkgs/main \nnumpy 1.23.3 py38hf838250_1 pkgs/main \nnumpy 1.23.3 py39h14f4228_0 pkgs/main \nnumpy 1.23.3 py39h14f4228_1 pkgs/main \nnumpy 1.23.3 py39hf838250_0 pkgs/main \nnumpy 1.23.3 py39hf838250_1 pkgs/main \nnumpy 1.23.4 py310hac523dd_0 pkgs/main \nnumpy 1.23.4 py310hd5efca6_0 pkgs/main \nnumpy 1.23.4 py38h14f4228_0 pkgs/main \nnumpy 1.23.4 py38hf838250_0 pkgs/main \nnumpy 1.23.4 py39h14f4228_0 pkgs/main \nnumpy 1.23.4 py39hf838250_0 pkgs/main \nnumpy 1.23.5 py310h5f9d8c6_1 pkgs/main \nnumpy 1.23.5 py310hac523dd_0 pkgs/main \nnumpy 1.23.5 py310hd5efca6_0 pkgs/main \nnumpy 1.23.5 py311h08b1b3b_1 pkgs/main \nnumpy 1.23.5 py311h5585df3_0 pkgs/main \nnumpy 1.23.5 py311h75bd12f_0 pkgs/main \nnumpy 1.23.5 py38h14f4228_0 pkgs/main \nnumpy 1.23.5 py38hf6e8229_1 pkgs/main \nnumpy 1.23.5 py38hf838250_0 pkgs/main \nnumpy 1.23.5 py39h14f4228_0 pkgs/main \nnumpy 1.23.5 py39hf6e8229_1 pkgs/main \nnumpy 1.23.5 py39hf838250_0 pkgs/main \nnumpy 1.24.3 py310h5f9d8c6_1 pkgs/main \nnumpy 1.24.3 py310hac523dd_0 pkgs/main \nnumpy 1.24.3 py310hd5efca6_0 pkgs/main \nnumpy 1.24.3 py311h08b1b3b_1 pkgs/main \nnumpy 
1.24.3 py311h434b4ae_0 pkgs/main \nnumpy 1.24.3 py311hc206e33_0 pkgs/main \nnumpy 1.24.3 py38h14f4228_0 pkgs/main \nnumpy 1.24.3 py38hf6e8229_1 pkgs/main \nnumpy 1.24.3 py38hf838250_0 pkgs/main \nnumpy 1.24.3 py39h14f4228_0 pkgs/main \nnumpy 1.24.3 py39hf6e8229_1 pkgs/main \nnumpy 1.24.3 py39hf838250_0 pkgs/main \nnumpy 1.25.0 py310h5f9d8c6_0 pkgs/main \nnumpy 1.25.0 py310heeff2f4_0 pkgs/main \nnumpy 1.25.0 py311h08b1b3b_0 pkgs/main \nnumpy 1.25.0 py311h24aa872_0 pkgs/main \nnumpy 1.25.0 py39h5f9d8c6_0 pkgs/main \nnumpy 1.25.0 py39heeff2f4_0 pkgs/main \nnumpy 1.25.2 py310h5f9d8c6_0 pkgs/main \nnumpy 1.25.2 py310heeff2f4_0 pkgs/main \nnumpy 1.25.2 py311h08b1b3b_0 pkgs/main \nnumpy 1.25.2 py311h24aa872_0 pkgs/main \nnumpy 1.25.2 py39h5f9d8c6_0 pkgs/main \nnumpy 1.25.2 py39heeff2f4_0 pkgs/main \nnumpy 1.26.0 py310h5f9d8c6_0 pkgs/main \nnumpy 1.26.0 py310heeff2f4_0 pkgs/main \nnumpy 1.26.0 py311h08b1b3b_0 pkgs/main \nnumpy 1.26.0 py311h24aa872_0 pkgs/main \nnumpy 1.26.0 py312h2809609_0 pkgs/main \nnumpy 1.26.0 py312hc5e2394_0 pkgs/main \nnumpy 1.26.0 py39h5f9d8c6_0 pkgs/main \nnumpy 1.26.0 py39heeff2f4_0 pkgs/main \nnumpy 1.26.2 py310h5f9d8c6_0 pkgs/main \nnumpy 1.26.2 py310heeff2f4_0 pkgs/main \nnumpy 1.26.2 py311h08b1b3b_0 pkgs/main \nnumpy 1.26.2 py311h24aa872_0 pkgs/main \nnumpy 1.26.2 py312h2809609_0 pkgs/main \nnumpy 1.26.2 py312hc5e2394_0 pkgs/main \nnumpy 1.26.2 py39h5f9d8c6_0 pkgs/main \nnumpy 1.26.2 py39heeff2f4_0 pkgs/main \nnumpy 1.26.3 py310h5f9d8c6_0 pkgs/main \nnumpy 1.26.3 py310heeff2f4_0 pkgs/main \nnumpy 1.26.3 py311h08b1b3b_0 pkgs/main \nnumpy 1.26.3 py311h24aa872_0 pkgs/main \nnumpy 1.26.3 py312h2809609_0 pkgs/main \nnumpy 1.26.3 py312hc5e2394_0 pkgs/main \nnumpy 1.26.3 py39h5f9d8c6_0 pkgs/main \nnumpy 1.26.3 py39heeff2f4_0 pkgs/main \nnumpy 1.26.4 py310h5f9d8c6_0 pkgs/main \nnumpy 1.26.4 py310heeff2f4_0 pkgs/main \nnumpy 1.26.4 py311h08b1b3b_0 pkgs/main \nnumpy 1.26.4 py311h24aa872_0 pkgs/main \nnumpy 1.26.4 py312h2809609_0 pkgs/main \nnumpy 1.26.4 
py312hc5e2394_0 pkgs/main \nnumpy 1.26.4 py39h5f9d8c6_0 pkgs/main \nnumpy 1.26.4 py39heeff2f4_0 pkgs/main \nnumpy 2.0.0 py310h5f9d8c6_1 pkgs/main \nnumpy 2.0.0 py310heeff2f4_1 pkgs/main \nnumpy 2.0.0 py311h08b1b3b_1 pkgs/main \nnumpy 2.0.0 py311h24aa872_1 pkgs/main \nnumpy 2.0.0 py312h2809609_1 pkgs/main \nnumpy 2.0.0 py312hc5e2394_1 pkgs/main \nnumpy 2.0.0 py39h5f9d8c6_1 pkgs/main \nnumpy 2.0.0 py39heeff2f4_1 pkgs/main \nnumpy 2.0.1 py310h5f9d8c6_1 pkgs/main \nnumpy 2.0.1 py310heeff2f4_1 pkgs/main \nnumpy 2.0.1 py311h08b1b3b_1 pkgs/main \nnumpy 2.0.1 py311h24aa872_1 pkgs/main \nnumpy 2.0.1 py312h2809609_1 pkgs/main \nnumpy 2.0.1 py312hc5e2394_1 pkgs/main \nnumpy 2.0.1 py39h5f9d8c6_1 pkgs/main \nnumpy 2.0.1 py39heeff2f4_1 pkgs/main \nnumpy 2.0.2 py310h5f9d8c6_0 pkgs/main \nnumpy 2.0.2 py310heeff2f4_0 pkgs/main \nnumpy 2.0.2 py311h08b1b3b_0 pkgs/main \nnumpy 2.0.2 py311h24aa872_0 pkgs/main \nnumpy 2.0.2 py312h2809609_0 pkgs/main \nnumpy 2.0.2 py312hc5e2394_0 pkgs/main \nnumpy 2.0.2 py39h5f9d8c6_0 pkgs/main \nnumpy 2.0.2 py39heeff2f4_0 pkgs/main \nnumpy 2.1.1 py310h5f9d8c6_0 pkgs/main \nnumpy 2.1.1 py310heeff2f4_0 pkgs/main \nnumpy 2.1.1 py311h08b1b3b_0 pkgs/main \nnumpy 2.1.1 py311h24aa872_0 pkgs/main \nnumpy 2.1.1 py312h2809609_0 pkgs/main \nnumpy 2.1.1 py312hc5e2394_0 pkgs/main \nnumpy 2.1.3 py310h5f9d8c6_0 pkgs/main \nnumpy 2.1.3 py310heeff2f4_0 pkgs/main \nnumpy 2.1.3 py311h08b1b3b_0 pkgs/main \nnumpy 2.1.3 py311h24aa872_0 pkgs/main \nnumpy 2.1.3 py312h2809609_0 pkgs/main \nnumpy 2.1.3 py312hc5e2394_0 pkgs/main \nnumpy 2.1.3 py313h3a69d60_0 pkgs/main \nnumpy 2.1.3 py313hf4aebb8_0 pkgs/main \nnumpy 2.2.1 py310h5f9d8c6_0 pkgs/main \nnumpy 2.2.1 py310heeff2f4_0 pkgs/main \nnumpy 2.2.1 py311h08b1b3b_0 pkgs/main \nnumpy 2.2.1 py311h24aa872_0 pkgs/main \nnumpy 2.2.1 py312h2809609_0 pkgs/main \nnumpy 2.2.1 py312hc5e2394_0 pkgs/main \nnumpy 2.2.1 py313h3a69d60_0 pkgs/main \nnumpy 2.2.1 py313hf4aebb8_0 pkgs/main \nnumpy 2.2.2 py310h5f9d8c6_0 pkgs/main \nnumpy 2.2.2 
py310heeff2f4_0 pkgs/main \nnumpy 2.2.2 py311h08b1b3b_0 pkgs/main \nnumpy 2.2.2 py311h24aa872_0 pkgs/main \nnumpy 2.2.2 py312h2809609_0 pkgs/main \nnumpy 2.2.2 py312hc5e2394_0 pkgs/main \nnumpy 2.2.2 py313h3a69d60_0 pkgs/main \nnumpy 2.2.2 py313hf4aebb8_0 pkgs/main \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Search for a package on a specific channel", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "search conda-forge::numpy" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":305,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988405836, + "endTime" : 1739988423436, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Loading channels: ...working... 
done\n# Name Version Build Channel \nnumpy 1.7.2 py27_blas_openblas_201 conda-forge \nnumpy 1.7.2 py27_blas_openblas_202 conda-forge \nnumpy 1.7.2 py34_blas_openblas_202 conda-forge \nnumpy 1.8.2 py27_blas_openblas_200 conda-forge \nnumpy 1.8.2 py27_blas_openblas_201 conda-forge \nnumpy 1.8.2 py34_blas_openblas_200 conda-forge \nnumpy 1.8.2 py34_blas_openblas_201 conda-forge \nnumpy 1.8.2 py35_blas_openblas_200 conda-forge \nnumpy 1.8.2 py35_blas_openblas_201 conda-forge \nnumpy 1.8.2 py36_blas_openblas_200 conda-forge \nnumpy 1.8.2 py36_blas_openblas_201 conda-forge \nnumpy 1.9.3 py27_blas_openblas_200 conda-forge \nnumpy 1.9.3 py27_blas_openblas_201 conda-forge \nnumpy 1.9.3 py27_blas_openblas_202 conda-forge \nnumpy 1.9.3 py27_blas_openblas_203 conda-forge \nnumpy 1.9.3 py27_blas_openblash1522bff_1006 conda-forge \nnumpy 1.9.3 py27_blas_openblash1522bff_1007 conda-forge \nnumpy 1.9.3 py27_blas_openblash1522bff_1207 conda-forge \nnumpy 1.9.3 py27_blas_openblashb06ca3d_207 conda-forge \nnumpy 1.9.3 py27_blas_openblashb06ca3d_6 conda-forge \nnumpy 1.9.3 py27_blas_openblashb06ca3d_7 conda-forge \nnumpy 1.9.3 py27_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.9.3 py27_blas_openblashd3ea46f_206 conda-forge \nnumpy 1.9.3 py27h8b7e671_1208 conda-forge \nnumpy 1.9.3 py27he5ce36f_1209 conda-forge \nnumpy 1.9.3 py34_blas_openblas_200 conda-forge \nnumpy 1.9.3 py34_blas_openblas_201 conda-forge \nnumpy 1.9.3 py34_blas_openblas_202 conda-forge \nnumpy 1.9.3 py34_blas_openblas_203 conda-forge \nnumpy 1.9.3 py35_blas_openblas_200 conda-forge \nnumpy 1.9.3 py35_blas_openblas_201 conda-forge \nnumpy 1.9.3 py35_blas_openblas_202 conda-forge \nnumpy 1.9.3 py35_blas_openblas_203 conda-forge \nnumpy 1.9.3 py35_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.9.3 py35_blas_openblashd3ea46f_206 conda-forge \nnumpy 1.9.3 py36_blas_openblas_200 conda-forge \nnumpy 1.9.3 py36_blas_openblas_201 conda-forge \nnumpy 1.9.3 py36_blas_openblas_202 conda-forge \nnumpy 1.9.3 
py36_blas_openblas_203 conda-forge \nnumpy 1.9.3 py36_blas_openblash1522bff_1006 conda-forge \nnumpy 1.9.3 py36_blas_openblash1522bff_1007 conda-forge \nnumpy 1.9.3 py36_blas_openblash1522bff_1207 conda-forge \nnumpy 1.9.3 py36_blas_openblashb06ca3d_207 conda-forge \nnumpy 1.9.3 py36_blas_openblashb06ca3d_6 conda-forge \nnumpy 1.9.3 py36_blas_openblashb06ca3d_7 conda-forge \nnumpy 1.9.3 py36_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.9.3 py36_blas_openblashd3ea46f_206 conda-forge \nnumpy 1.9.3 py36h8b7e671_1208 conda-forge \nnumpy 1.9.3 py36he5ce36f_1209 conda-forge \nnumpy 1.9.3 py37_blas_openblash1522bff_1006 conda-forge \nnumpy 1.9.3 py37_blas_openblash1522bff_1007 conda-forge \nnumpy 1.9.3 py37_blas_openblash1522bff_1207 conda-forge \nnumpy 1.9.3 py37_blas_openblashb06ca3d_207 conda-forge \nnumpy 1.9.3 py37_blas_openblashb06ca3d_6 conda-forge \nnumpy 1.9.3 py37_blas_openblashb06ca3d_7 conda-forge \nnumpy 1.9.3 py37_blas_openblashd3ea46f_206 conda-forge \nnumpy 1.9.3 py37h8b7e671_1208 conda-forge \nnumpy 1.9.3 py37he5ce36f_1209 conda-forge \nnumpy 1.10.4 py27_blas_openblas_200 conda-forge \nnumpy 1.10.4 py27_blas_openblas_201 conda-forge \nnumpy 1.10.4 py27_blas_openblas_202 conda-forge \nnumpy 1.10.4 py27_blas_openblas_203 conda-forge \nnumpy 1.10.4 py27_blas_openblas_204 conda-forge \nnumpy 1.10.4 py27_blas_openblas_205 conda-forge \nnumpy 1.10.4 py34_blas_openblas_200 conda-forge \nnumpy 1.10.4 py34_blas_openblas_201 conda-forge \nnumpy 1.10.4 py34_blas_openblas_202 conda-forge \nnumpy 1.10.4 py34_blas_openblas_203 conda-forge \nnumpy 1.10.4 py34_blas_openblas_204 conda-forge \nnumpy 1.10.4 py34_blas_openblas_205 conda-forge \nnumpy 1.10.4 py35_blas_openblas_200 conda-forge \nnumpy 1.10.4 py35_blas_openblas_201 conda-forge \nnumpy 1.10.4 py35_blas_openblas_202 conda-forge \nnumpy 1.10.4 py35_blas_openblas_203 conda-forge \nnumpy 1.10.4 py35_blas_openblas_204 conda-forge \nnumpy 1.10.4 py35_blas_openblas_205 conda-forge \nnumpy 1.10.4 
py36_blas_openblas_205 conda-forge \nnumpy 1.11.0 py27_blas_openblas_200 conda-forge \nnumpy 1.11.0 py27_blas_openblas_201 conda-forge \nnumpy 1.11.0 py34_blas_openblas_200 conda-forge \nnumpy 1.11.0 py34_blas_openblas_201 conda-forge \nnumpy 1.11.0 py35_blas_openblas_200 conda-forge \nnumpy 1.11.0 py35_blas_openblas_201 conda-forge \nnumpy 1.11.1 py27_blas_openblas_200 conda-forge \nnumpy 1.11.1 py27_blas_openblas_201 conda-forge \nnumpy 1.11.1 py27_blas_openblas_202 conda-forge \nnumpy 1.11.1 py34_blas_openblas_200 conda-forge \nnumpy 1.11.1 py34_blas_openblas_201 conda-forge \nnumpy 1.11.1 py34_blas_openblas_202 conda-forge \nnumpy 1.11.1 py35_blas_openblas_200 conda-forge \nnumpy 1.11.1 py35_blas_openblas_201 conda-forge \nnumpy 1.11.1 py35_blas_openblas_202 conda-forge \nnumpy 1.11.2 py27_blas_openblas_200 conda-forge \nnumpy 1.11.2 py27_blas_openblas_201 conda-forge \nnumpy 1.11.2 py27_blas_openblas_202 conda-forge \nnumpy 1.11.2 py34_blas_openblas_200 conda-forge \nnumpy 1.11.2 py34_blas_openblas_201 conda-forge \nnumpy 1.11.2 py34_blas_openblas_202 conda-forge \nnumpy 1.11.2 py35_blas_openblas_200 conda-forge \nnumpy 1.11.2 py35_blas_openblas_201 conda-forge \nnumpy 1.11.2 py35_blas_openblas_202 conda-forge \nnumpy 1.11.3 py27_blas_openblas_200 conda-forge \nnumpy 1.11.3 py27_blas_openblas_201 conda-forge \nnumpy 1.11.3 py27_blas_openblas_202 conda-forge \nnumpy 1.11.3 py27_blas_openblas_203 conda-forge \nnumpy 1.11.3 py27_blas_openblash1522bff_1205 conda-forge \nnumpy 1.11.3 py27_blas_openblashb06ca3d_205 conda-forge \nnumpy 1.11.3 py27_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.11.3 py27h8b7e671_1206 conda-forge \nnumpy 1.11.3 py27he5ce36f_1207 conda-forge \nnumpy 1.11.3 py34_blas_openblas_200 conda-forge \nnumpy 1.11.3 py34_blas_openblas_201 conda-forge \nnumpy 1.11.3 py34_blas_openblas_202 conda-forge \nnumpy 1.11.3 py34_blas_openblas_203 conda-forge \nnumpy 1.11.3 py35_blas_openblas_200 conda-forge \nnumpy 1.11.3 py35_blas_openblas_201 conda-forge 
\nnumpy 1.11.3 py35_blas_openblas_202 conda-forge \nnumpy 1.11.3 py35_blas_openblas_203 conda-forge \nnumpy 1.11.3 py35_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.11.3 py36_blas_openblas_200 conda-forge \nnumpy 1.11.3 py36_blas_openblas_201 conda-forge \nnumpy 1.11.3 py36_blas_openblas_202 conda-forge \nnumpy 1.11.3 py36_blas_openblas_203 conda-forge \nnumpy 1.11.3 py36_blas_openblash1522bff_1205 conda-forge \nnumpy 1.11.3 py36_blas_openblashb06ca3d_205 conda-forge \nnumpy 1.11.3 py36_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.11.3 py36h8b7e671_1206 conda-forge \nnumpy 1.11.3 py36he5ce36f_1207 conda-forge \nnumpy 1.11.3 py37_blas_openblash1522bff_1205 conda-forge \nnumpy 1.11.3 py37_blas_openblashb06ca3d_205 conda-forge \nnumpy 1.11.3 py37_blas_openblashd3ea46f_205 conda-forge \nnumpy 1.11.3 py37h8b7e671_1206 conda-forge \nnumpy 1.11.3 py37he5ce36f_1207 conda-forge \nnumpy 1.12.0 py27_blas_openblas_200 conda-forge \nnumpy 1.12.0 py34_blas_openblas_200 conda-forge \nnumpy 1.12.0 py35_blas_openblas_200 conda-forge \nnumpy 1.12.0 py36_blas_openblas_200 conda-forge \nnumpy 1.12.1 py27_blas_openblas_200 conda-forge \nnumpy 1.12.1 py27_blas_openblas_201 conda-forge \nnumpy 1.12.1 py27_blas_openblash1522bff_1001 conda-forge \nnumpy 1.12.1 py27_blas_openblash24bf2e0_201 conda-forge \nnumpy 1.12.1 py27_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.12.1 py34_blas_openblas_200 conda-forge \nnumpy 1.12.1 py34_blas_openblas_201 conda-forge \nnumpy 1.12.1 py35_blas_openblas_200 conda-forge \nnumpy 1.12.1 py35_blas_openblas_201 conda-forge \nnumpy 1.12.1 py36_blas_openblas_200 conda-forge \nnumpy 1.12.1 py36_blas_openblas_201 conda-forge \nnumpy 1.12.1 py36_blas_openblash1522bff_1001 conda-forge \nnumpy 1.12.1 py36_blas_openblash24bf2e0_201 conda-forge \nnumpy 1.12.1 py36_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.13.0 py27_blas_openblas_200 conda-forge \nnumpy 1.13.0 py34_blas_openblas_200 conda-forge \nnumpy 1.13.0 py35_blas_openblas_200 conda-forge \nnumpy 1.13.0 
py36_blas_openblas_200 conda-forge \nnumpy 1.13.1 py27_blas_openblas_200 conda-forge \nnumpy 1.13.1 py27_blas_openblas_201 conda-forge \nnumpy 1.13.1 py34_blas_openblas_200 conda-forge \nnumpy 1.13.1 py34_blas_openblas_201 conda-forge \nnumpy 1.13.1 py35_blas_openblas_200 conda-forge \nnumpy 1.13.1 py35_blas_openblas_201 conda-forge \nnumpy 1.13.1 py36_blas_openblas_200 conda-forge \nnumpy 1.13.1 py36_blas_openblas_201 conda-forge \nnumpy 1.13.2 py27_blas_openblas_200 conda-forge \nnumpy 1.13.2 py35_blas_openblas_200 conda-forge \nnumpy 1.13.2 py36_blas_openblas_200 conda-forge \nnumpy 1.13.3 py27_blas_openblas_200 conda-forge \nnumpy 1.13.3 py27_blas_openblas_201 conda-forge \nnumpy 1.13.3 py27_blas_openblash1522bff_1001 conda-forge \nnumpy 1.13.3 py27_blas_openblash1522bff_1201 conda-forge \nnumpy 1.13.3 py27_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.13.3 py27_blas_openblashb06ca3d_201 conda-forge \nnumpy 1.13.3 py35_blas_openblas_200 conda-forge \nnumpy 1.13.3 py35_blas_openblas_201 conda-forge \nnumpy 1.13.3 py36_blas_openblas_200 conda-forge \nnumpy 1.13.3 py36_blas_openblas_201 conda-forge \nnumpy 1.13.3 py36_blas_openblash1522bff_1001 conda-forge \nnumpy 1.13.3 py36_blas_openblash1522bff_1201 conda-forge \nnumpy 1.13.3 py36_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.13.3 py36_blas_openblashb06ca3d_201 conda-forge \nnumpy 1.14.0 py27_blas_openblas_200 conda-forge \nnumpy 1.14.0 py34_blas_openblas_200 conda-forge \nnumpy 1.14.0 py35_blas_openblas_200 conda-forge \nnumpy 1.14.0 py36_blas_openblas_200 conda-forge \nnumpy 1.14.1 py27_blas_openblas_200 conda-forge \nnumpy 1.14.1 py35_blas_openblas_200 conda-forge \nnumpy 1.14.1 py36_blas_openblas_200 conda-forge \nnumpy 1.14.2 py27_blas_openblas_200 conda-forge \nnumpy 1.14.2 py35_blas_openblas_200 conda-forge \nnumpy 1.14.2 py36_blas_openblas_200 conda-forge \nnumpy 1.14.3 py27_blas_openblas_200 conda-forge \nnumpy 1.14.3 py35_blas_openblas_200 conda-forge \nnumpy 1.14.3 py36_blas_openblas_200 conda-forge 
\nnumpy 1.14.4 py27_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.4 py35_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.4 py36_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.5 py27_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.5 py27_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.5 py27_blas_openblashd3ea46f_201 conda-forge \nnumpy 1.14.5 py27_blas_openblashd3ea46f_202 conda-forge \nnumpy 1.14.5 py35_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.5 py35_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.5 py35_blas_openblashd3ea46f_201 conda-forge \nnumpy 1.14.5 py35_blas_openblashd3ea46f_202 conda-forge \nnumpy 1.14.5 py36_blas_openblash24bf2e0_200 conda-forge \nnumpy 1.14.5 py36_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.5 py36_blas_openblashd3ea46f_201 conda-forge \nnumpy 1.14.5 py36_blas_openblashd3ea46f_202 conda-forge \nnumpy 1.14.5 py37_blas_openblashd3ea46f_202 conda-forge \nnumpy 1.14.6 py27_blas_openblash1522bff_1000 conda-forge \nnumpy 1.14.6 py27_blas_openblash1522bff_1200 conda-forge \nnumpy 1.14.6 py27_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.14.6 py27_blas_openblashb06ca3d_200 conda-forge \nnumpy 1.14.6 py27_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.6 py27h95a1406_1201 conda-forge \nnumpy 1.14.6 py27he5ce36f_1201 conda-forge \nnumpy 1.14.6 py35_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.6 py36_blas_openblash1522bff_1000 conda-forge \nnumpy 1.14.6 py36_blas_openblash1522bff_1200 conda-forge \nnumpy 1.14.6 py36_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.14.6 py36_blas_openblashb06ca3d_200 conda-forge \nnumpy 1.14.6 py36_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.6 py36h95a1406_1201 conda-forge \nnumpy 1.14.6 py36he5ce36f_1201 conda-forge \nnumpy 1.14.6 py37_blas_openblash1522bff_1000 conda-forge \nnumpy 1.14.6 py37_blas_openblash1522bff_1200 conda-forge \nnumpy 1.14.6 py37_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.14.6 py37_blas_openblashb06ca3d_200 conda-forge \nnumpy 1.14.6 
py37_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.14.6 py37h95a1406_1201 conda-forge \nnumpy 1.14.6 py37he5ce36f_1201 conda-forge \nnumpy 1.14.6 py38h95a1406_1201 conda-forge \nnumpy 1.15.0 py27_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.15.0 py35_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.15.0 py36_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.15.0 py37_blas_openblashd3ea46f_200 conda-forge \nnumpy 1.15.1 py27_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.1 py27_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.1 py35_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.1 py35_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.1 py36_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.1 py36_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.1 py37_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.2 py27_blas_openblash1522bff_1001 conda-forge \nnumpy 1.15.2 py27_blas_openblash1522bff_1201 conda-forge \nnumpy 1.15.2 py27_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.15.2 py27_blas_openblashb06ca3d_201 conda-forge \nnumpy 1.15.2 py27_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.2 py27_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.2 py35_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.2 py36_blas_openblash1522bff_1001 conda-forge \nnumpy 1.15.2 py36_blas_openblash1522bff_1201 conda-forge \nnumpy 1.15.2 py36_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.15.2 py36_blas_openblashb06ca3d_201 conda-forge \nnumpy 1.15.2 py36_blas_openblashd3ea46f_0 conda-forge \nnumpy 1.15.2 py36_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.2 py37_blas_openblash1522bff_1001 conda-forge \nnumpy 1.15.2 py37_blas_openblash1522bff_1201 conda-forge \nnumpy 1.15.2 py37_blas_openblashb06ca3d_1 conda-forge \nnumpy 1.15.2 py37_blas_openblashb06ca3d_201 conda-forge \nnumpy 1.15.2 py37_blas_openblashd3ea46f_1 conda-forge \nnumpy 1.15.3 py27_blas_openblash1522bff_1000 conda-forge \nnumpy 1.15.3 py27_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.3 py36_blas_openblash1522bff_1000 
conda-forge \nnumpy 1.15.3 py36_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.3 py37_blas_openblash1522bff_1000 conda-forge \nnumpy 1.15.3 py37_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.4 py27_blas_openblash1522bff_1000 conda-forge \nnumpy 1.15.4 py27_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.4 py27h8b7e671_1001 conda-forge \nnumpy 1.15.4 py27h8b7e671_1002 conda-forge \nnumpy 1.15.4 py36_blas_openblash1522bff_1000 conda-forge \nnumpy 1.15.4 py36_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.4 py36h8b7e671_1001 conda-forge \nnumpy 1.15.4 py36h8b7e671_1002 conda-forge \nnumpy 1.15.4 py37_blas_openblash1522bff_1000 conda-forge \nnumpy 1.15.4 py37_blas_openblashb06ca3d_0 conda-forge \nnumpy 1.15.4 py37h8b7e671_1001 conda-forge \nnumpy 1.15.4 py37h8b7e671_1002 conda-forge \nnumpy 1.16.0 py27_blas_openblash1522bff_1000 conda-forge \nnumpy 1.16.0 py36_blas_openblash1522bff_1000 conda-forge \nnumpy 1.16.0 py37_blas_openblash1522bff_1000 conda-forge \nnumpy 1.16.1 py27_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.1 py36_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.1 py37_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.2 py27_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.2 py27h7fc5cc3_0 conda-forge \nnumpy 1.16.2 py27h8b7e671_1 conda-forge \nnumpy 1.16.2 py36_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.2 py36h7fc5cc3_0 conda-forge \nnumpy 1.16.2 py36h8b7e671_1 conda-forge \nnumpy 1.16.2 py37_blas_openblash1522bff_0 conda-forge \nnumpy 1.16.2 py37h7fc5cc3_0 conda-forge \nnumpy 1.16.2 py37h8b7e671_1 conda-forge \nnumpy 1.16.3 py27he5ce36f_0 conda-forge \nnumpy 1.16.3 py36he5ce36f_0 conda-forge \nnumpy 1.16.3 py37he5ce36f_0 conda-forge \nnumpy 1.16.4 py27h95a1406_0 conda-forge \nnumpy 1.16.4 py36h95a1406_0 conda-forge \nnumpy 1.16.4 py37h95a1406_0 conda-forge \nnumpy 1.16.5 py27h95a1406_0 conda-forge \nnumpy 1.16.5 py36h2aa4a07_1 conda-forge \nnumpy 1.16.5 py36h95a1406_0 conda-forge \nnumpy 1.16.5 py37h95a1406_0 conda-forge \nnumpy 1.16.5 
py37haa41c4c_1 conda-forge \nnumpy 1.16.5 py38h18fd61f_1 conda-forge \nnumpy 1.16.5 py38h95a1406_0 conda-forge \nnumpy 1.16.6 py36h2aa4a07_0 conda-forge \nnumpy 1.16.6 py37haa41c4c_0 conda-forge \nnumpy 1.16.6 py38h18fd61f_0 conda-forge \nnumpy 1.17.0 py36h95a1406_0 conda-forge \nnumpy 1.17.0 py37h95a1406_0 conda-forge \nnumpy 1.17.1 py36h95a1406_0 conda-forge \nnumpy 1.17.1 py37h95a1406_0 conda-forge \nnumpy 1.17.2 py36h95a1406_0 conda-forge \nnumpy 1.17.2 py37h95a1406_0 conda-forge \nnumpy 1.17.3 py36h95a1406_0 conda-forge \nnumpy 1.17.3 py37h95a1406_0 conda-forge \nnumpy 1.17.3 py38h95a1406_0 conda-forge \nnumpy 1.17.5 py36h2aa4a07_1 conda-forge \nnumpy 1.17.5 py36h95a1406_0 conda-forge \nnumpy 1.17.5 py37h95a1406_0 conda-forge \nnumpy 1.17.5 py37haa41c4c_1 conda-forge \nnumpy 1.17.5 py38h18fd61f_1 conda-forge \nnumpy 1.17.5 py38h95a1406_0 conda-forge \nnumpy 1.18.1 py36h7314795_1 conda-forge \nnumpy 1.18.1 py36h95a1406_0 conda-forge \nnumpy 1.18.1 py36he0f5f23_1 conda-forge \nnumpy 1.18.1 py37h8960a57_1 conda-forge \nnumpy 1.18.1 py37h95a1406_0 conda-forge \nnumpy 1.18.1 py38h8854b6b_1 conda-forge \nnumpy 1.18.1 py38h95a1406_0 conda-forge \nnumpy 1.18.4 py36h7314795_0 conda-forge \nnumpy 1.18.4 py36he0f5f23_0 conda-forge \nnumpy 1.18.4 py37h8960a57_0 conda-forge \nnumpy 1.18.4 py38h8854b6b_0 conda-forge \nnumpy 1.18.5 py36h7314795_0 conda-forge \nnumpy 1.18.5 py36he0f5f23_0 conda-forge \nnumpy 1.18.5 py37h8960a57_0 conda-forge \nnumpy 1.18.5 py38h8854b6b_0 conda-forge \nnumpy 1.19.0 py36h7314795_0 conda-forge \nnumpy 1.19.0 py36he0f5f23_0 conda-forge \nnumpy 1.19.0 py37h8960a57_0 conda-forge \nnumpy 1.19.0 py38h8854b6b_0 conda-forge \nnumpy 1.19.1 py36h3849536_1 conda-forge \nnumpy 1.19.1 py36h3849536_2 conda-forge \nnumpy 1.19.1 py36h7314795_0 conda-forge \nnumpy 1.19.1 py36he0f5f23_0 conda-forge \nnumpy 1.19.1 py36he0f5f23_1 conda-forge \nnumpy 1.19.1 py36he0f5f23_2 conda-forge \nnumpy 1.19.1 py37h7ea13bd_1 conda-forge \nnumpy 1.19.1 py37h7ea13bd_2 
conda-forge \nnumpy 1.19.1 py37h8960a57_0 conda-forge \nnumpy 1.19.1 py38h8854b6b_0 conda-forge \nnumpy 1.19.1 py38hbc27379_1 conda-forge \nnumpy 1.19.1 py38hbc27379_2 conda-forge \nnumpy 1.19.2 py36h3849536_0 conda-forge \nnumpy 1.19.2 py36h3849536_1 conda-forge \nnumpy 1.19.2 py36h68c22af_1 conda-forge \nnumpy 1.19.2 py36h865be6f_1 conda-forge \nnumpy 1.19.2 py36he0f5f23_1 conda-forge \nnumpy 1.19.2 py37h7008fea_1 conda-forge \nnumpy 1.19.2 py37h7ea13bd_0 conda-forge \nnumpy 1.19.2 py37h7ea13bd_1 conda-forge \nnumpy 1.19.2 py38hbc27379_0 conda-forge \nnumpy 1.19.2 py38hbc27379_1 conda-forge \nnumpy 1.19.2 py38hf89b668_1 conda-forge \nnumpy 1.19.2 py39h2bb7b6c_1 conda-forge \nnumpy 1.19.2 py39hb68c0c8_1 conda-forge \nnumpy 1.19.4 py36h2aa4a07_2 conda-forge \nnumpy 1.19.4 py36h8732dcd_0 conda-forge \nnumpy 1.19.4 py36h8732dcd_1 conda-forge \nnumpy 1.19.4 py36hf5aa452_0 conda-forge \nnumpy 1.19.4 py37h7e9df27_0 conda-forge \nnumpy 1.19.4 py37h7e9df27_1 conda-forge \nnumpy 1.19.4 py37haa41c4c_2 conda-forge \nnumpy 1.19.4 py38h18fd61f_2 conda-forge \nnumpy 1.19.4 py38hf0fd68c_0 conda-forge \nnumpy 1.19.4 py38hf0fd68c_1 conda-forge \nnumpy 1.19.4 py39h57d35e7_0 conda-forge \nnumpy 1.19.4 py39h57d35e7_1 conda-forge \nnumpy 1.19.4 py39hdbf815f_2 conda-forge \nnumpy 1.19.5 py36h2aa4a07_0 conda-forge \nnumpy 1.19.5 py36h2aa4a07_1 conda-forge \nnumpy 1.19.5 py36h7e87304_0 conda-forge \nnumpy 1.19.5 py36h7e87304_1 conda-forge \nnumpy 1.19.5 py36hfc0c790_2 conda-forge \nnumpy 1.19.5 py37h038b26d_2 conda-forge \nnumpy 1.19.5 py37h3e96413_3 conda-forge \nnumpy 1.19.5 py37h620df1f_1 conda-forge \nnumpy 1.19.5 py37h620df1f_2 conda-forge \nnumpy 1.19.5 py37haa41c4c_0 conda-forge \nnumpy 1.19.5 py37haa41c4c_1 conda-forge \nnumpy 1.19.5 py37hf0d26b2_3 conda-forge \nnumpy 1.19.5 py38h18fd61f_0 conda-forge \nnumpy 1.19.5 py38h18fd61f_1 conda-forge \nnumpy 1.19.5 py38h8246c76_3 conda-forge \nnumpy 1.19.5 py38h9894fe3_2 conda-forge \nnumpy 1.19.5 py38hd7c341c_3 conda-forge \nnumpy 
1.19.5 py39hbb6b2ec_3 conda-forge \nnumpy 1.19.5 py39hd249d9e_3 conda-forge \nnumpy 1.19.5 py39hdbf815f_0 conda-forge \nnumpy 1.19.5 py39hdbf815f_1 conda-forge \nnumpy 1.19.5 py39hdbf815f_2 conda-forge \nnumpy 1.20.0 py37h620df1f_0 conda-forge \nnumpy 1.20.0 py37haa41c4c_0 conda-forge \nnumpy 1.20.0 py38h18fd61f_0 conda-forge \nnumpy 1.20.0 py39hdbf815f_0 conda-forge \nnumpy 1.20.1 py37h620df1f_0 conda-forge \nnumpy 1.20.1 py37haa41c4c_0 conda-forge \nnumpy 1.20.1 py38h18fd61f_0 conda-forge \nnumpy 1.20.1 py39hdbf815f_0 conda-forge \nnumpy 1.20.2 py37h038b26d_0 conda-forge \nnumpy 1.20.2 py37h620df1f_0 conda-forge \nnumpy 1.20.2 py38h9894fe3_0 conda-forge \nnumpy 1.20.2 py39hdbf815f_0 conda-forge \nnumpy 1.20.3 py37h038b26d_0 conda-forge \nnumpy 1.20.3 py37h038b26d_1 conda-forge \nnumpy 1.20.3 py37h3e96413_2 conda-forge \nnumpy 1.20.3 py37h620df1f_0 conda-forge \nnumpy 1.20.3 py37h620df1f_1 conda-forge \nnumpy 1.20.3 py37hf0d26b2_2 conda-forge \nnumpy 1.20.3 py38h8246c76_2 conda-forge \nnumpy 1.20.3 py38h9894fe3_0 conda-forge \nnumpy 1.20.3 py38h9894fe3_1 conda-forge \nnumpy 1.20.3 py38hd7c341c_2 conda-forge \nnumpy 1.20.3 py39hbb6b2ec_2 conda-forge \nnumpy 1.20.3 py39hd249d9e_2 conda-forge \nnumpy 1.20.3 py39hdbf815f_0 conda-forge \nnumpy 1.20.3 py39hdbf815f_1 conda-forge \nnumpy 1.21.0 py37h038b26d_0 conda-forge \nnumpy 1.21.0 py37h620df1f_0 conda-forge \nnumpy 1.21.0 py38h9894fe3_0 conda-forge \nnumpy 1.21.0 py39hdbf815f_0 conda-forge \nnumpy 1.21.1 py37h038b26d_0 conda-forge \nnumpy 1.21.1 py37h620df1f_0 conda-forge \nnumpy 1.21.1 py38h9894fe3_0 conda-forge \nnumpy 1.21.1 py39hdbf815f_0 conda-forge \nnumpy 1.21.2 py37h31617e3_0 conda-forge \nnumpy 1.21.2 py37h620df1f_0 conda-forge \nnumpy 1.21.2 py38he2449b9_0 conda-forge \nnumpy 1.21.2 py39hdbf815f_0 conda-forge \nnumpy 1.21.3 py310h57288b1_1 conda-forge \nnumpy 1.21.3 py37h31617e3_0 conda-forge \nnumpy 1.21.3 py37h31617e3_1 conda-forge \nnumpy 1.21.3 py37h620df1f_0 conda-forge \nnumpy 1.21.3 py37h620df1f_1 
conda-forge \nnumpy 1.21.3 py38he2449b9_0 conda-forge \nnumpy 1.21.3 py38he2449b9_1 conda-forge \nnumpy 1.21.3 py39hdbf815f_0 conda-forge \nnumpy 1.21.3 py39hdbf815f_1 conda-forge \nnumpy 1.21.4 py310h57288b1_0 conda-forge \nnumpy 1.21.4 py37h31617e3_0 conda-forge \nnumpy 1.21.4 py37h620df1f_0 conda-forge \nnumpy 1.21.4 py38he2449b9_0 conda-forge \nnumpy 1.21.4 py39hdbf815f_0 conda-forge \nnumpy 1.21.5 py310h45f3432_1 conda-forge \nnumpy 1.21.5 py310h647a097_0 conda-forge \nnumpy 1.21.5 py37h18e8e3d_0 conda-forge \nnumpy 1.21.5 py37h976b520_1 conda-forge \nnumpy 1.21.5 py37hf2998dd_0 conda-forge \nnumpy 1.21.5 py38h1d589f8_1 conda-forge \nnumpy 1.21.5 py38h87f13fb_0 conda-forge \nnumpy 1.21.5 py38hf95d648_1 conda-forge \nnumpy 1.21.5 py39h18676bf_1 conda-forge \nnumpy 1.21.5 py39h264d414_1 conda-forge \nnumpy 1.21.5 py39haac66dc_0 conda-forge \nnumpy 1.21.6 py310h45f3432_0 conda-forge \nnumpy 1.21.6 py37h976b520_0 conda-forge \nnumpy 1.21.6 py38h1d589f8_0 conda-forge \nnumpy 1.21.6 py38hf95d648_0 conda-forge \nnumpy 1.21.6 py39h18676bf_0 conda-forge \nnumpy 1.21.6 py39h264d414_0 conda-forge \nnumpy 1.22.0 py310h454958d_0 conda-forge \nnumpy 1.22.0 py310h454958d_1 conda-forge \nnumpy 1.22.0 py38h6ae9a64_0 conda-forge \nnumpy 1.22.0 py38h6ae9a64_1 conda-forge \nnumpy 1.22.0 py39h91f2184_0 conda-forge \nnumpy 1.22.0 py39h91f2184_1 conda-forge \nnumpy 1.22.1 py310h454958d_0 conda-forge \nnumpy 1.22.1 py38h6ae9a64_0 conda-forge \nnumpy 1.22.1 py39h91f2184_0 conda-forge \nnumpy 1.22.2 py310h454958d_0 conda-forge \nnumpy 1.22.2 py38h6ae9a64_0 conda-forge \nnumpy 1.22.2 py39h91f2184_0 conda-forge \nnumpy 1.22.3 py310h45f3432_0 conda-forge \nnumpy 1.22.3 py310h45f3432_1 conda-forge \nnumpy 1.22.3 py310h45f3432_2 conda-forge \nnumpy 1.22.3 py310h4ef5377_2 conda-forge \nnumpy 1.22.3 py38h05e7239_0 conda-forge \nnumpy 1.22.3 py38h05e7239_1 conda-forge \nnumpy 1.22.3 py38h1d589f8_2 conda-forge \nnumpy 1.22.3 py38h649d9f0_2 conda-forge \nnumpy 1.22.3 py38h99721a1_2 conda-forge 
\nnumpy 1.22.3 py38hf95d648_2 conda-forge \nnumpy 1.22.3 py39h18676bf_0 conda-forge \nnumpy 1.22.3 py39h18676bf_1 conda-forge \nnumpy 1.22.3 py39h18676bf_2 conda-forge \nnumpy 1.22.3 py39h264d414_2 conda-forge \nnumpy 1.22.3 py39hc58783e_2 conda-forge \nnumpy 1.22.3 py39hceb6dda_2 conda-forge \nnumpy 1.22.4 py310h4ef5377_0 conda-forge \nnumpy 1.22.4 py38h649d9f0_0 conda-forge \nnumpy 1.22.4 py38h99721a1_0 conda-forge \nnumpy 1.22.4 py39hc58783e_0 conda-forge \nnumpy 1.22.4 py39hceb6dda_0 conda-forge \nnumpy 1.23.0 py310h53a5b5f_0 conda-forge \nnumpy 1.23.0 py38h10123e4_0 conda-forge \nnumpy 1.23.0 py38h3a7f9d9_0 conda-forge \nnumpy 1.23.0 py39hba7629e_0 conda-forge \nnumpy 1.23.0 py39hf05da4a_0 conda-forge \nnumpy 1.23.1 py310h53a5b5f_0 conda-forge \nnumpy 1.23.1 py38h10123e4_0 conda-forge \nnumpy 1.23.1 py38h3a7f9d9_0 conda-forge \nnumpy 1.23.1 py39hba7629e_0 conda-forge \nnumpy 1.23.1 py39hf05da4a_0 conda-forge \nnumpy 1.23.2 py310h53a5b5f_0 conda-forge \nnumpy 1.23.2 py38h10123e4_0 conda-forge \nnumpy 1.23.2 py38h3a7f9d9_0 conda-forge \nnumpy 1.23.2 py39hba7629e_0 conda-forge \nnumpy 1.23.2 py39hf05da4a_0 conda-forge \nnumpy 1.23.3 py310h53a5b5f_0 conda-forge \nnumpy 1.23.3 py38h10123e4_0 conda-forge \nnumpy 1.23.3 py38h3a7f9d9_0 conda-forge \nnumpy 1.23.3 py39hba7629e_0 conda-forge \nnumpy 1.23.3 py39hf05da4a_0 conda-forge \nnumpy 1.23.4 py310h53a5b5f_0 conda-forge \nnumpy 1.23.4 py310h53a5b5f_1 conda-forge \nnumpy 1.23.4 py311h7d28db0_1 conda-forge \nnumpy 1.23.4 py38h7042d01_0 conda-forge \nnumpy 1.23.4 py38h7042d01_1 conda-forge \nnumpy 1.23.4 py38h8e54316_0 conda-forge \nnumpy 1.23.4 py38h8e54316_1 conda-forge \nnumpy 1.23.4 py39h3d75532_0 conda-forge \nnumpy 1.23.4 py39h3d75532_1 conda-forge \nnumpy 1.23.4 py39h4fa106f_0 conda-forge \nnumpy 1.23.4 py39h4fa106f_1 conda-forge \nnumpy 1.23.5 py310h53a5b5f_0 conda-forge \nnumpy 1.23.5 py311h7d28db0_0 conda-forge \nnumpy 1.23.5 py38h7042d01_0 conda-forge \nnumpy 1.23.5 py38h8e54316_0 conda-forge \nnumpy 1.23.5 
py39h3d75532_0 conda-forge \nnumpy 1.23.5 py39h4fa106f_0 conda-forge \nnumpy 1.24.0 py310h08bbf29_0 conda-forge \nnumpy 1.24.0 py311hbde0eaa_0 conda-forge \nnumpy 1.24.0 py38h5571e69_0 conda-forge \nnumpy 1.24.0 py38hab0fcb9_0 conda-forge \nnumpy 1.24.0 py39h223a676_0 conda-forge \nnumpy 1.24.0 py39hb10b683_0 conda-forge \nnumpy 1.24.1 py310h08bbf29_0 conda-forge \nnumpy 1.24.1 py310h8deb116_0 conda-forge \nnumpy 1.24.1 py311h8e6699e_0 conda-forge \nnumpy 1.24.1 py311hbde0eaa_0 conda-forge \nnumpy 1.24.1 py38h10c12cc_0 conda-forge \nnumpy 1.24.1 py38h5571e69_0 conda-forge \nnumpy 1.24.1 py38hab0fcb9_0 conda-forge \nnumpy 1.24.1 py38hab853d4_0 conda-forge \nnumpy 1.24.1 py39h223a676_0 conda-forge \nnumpy 1.24.1 py39h60c9533_0 conda-forge \nnumpy 1.24.1 py39h7360e5f_0 conda-forge \nnumpy 1.24.1 py39hb10b683_0 conda-forge \nnumpy 1.24.2 py310h8deb116_0 conda-forge \nnumpy 1.24.2 py311h8e6699e_0 conda-forge \nnumpy 1.24.2 py38h10c12cc_0 conda-forge \nnumpy 1.24.2 py38hab853d4_0 conda-forge \nnumpy 1.24.2 py39h60c9533_0 conda-forge \nnumpy 1.24.2 py39h7360e5f_0 conda-forge \nnumpy 1.24.3 py310ha4c1d20_0 conda-forge \nnumpy 1.24.3 py311h64a7726_0 conda-forge \nnumpy 1.24.3 py38h59b608b_0 conda-forge \nnumpy 1.24.3 py38hdd4dd61_0 conda-forge \nnumpy 1.24.3 py39h129f8d9_0 conda-forge \nnumpy 1.24.3 py39h6183b62_0 conda-forge \nnumpy 1.24.4 py310ha4c1d20_0 conda-forge \nnumpy 1.24.4 py311h64a7726_0 conda-forge \nnumpy 1.24.4 py38h59b608b_0 conda-forge \nnumpy 1.24.4 py38hdd4dd61_0 conda-forge \nnumpy 1.24.4 py39h129f8d9_0 conda-forge \nnumpy 1.24.4 py39h6183b62_0 conda-forge \nnumpy 1.25.0 py310ha4c1d20_0 conda-forge \nnumpy 1.25.0 py311h64a7726_0 conda-forge \nnumpy 1.25.0 py39h129f8d9_0 conda-forge \nnumpy 1.25.0 py39h6183b62_0 conda-forge \nnumpy 1.25.1 py310ha4c1d20_0 conda-forge \nnumpy 1.25.1 py311h64a7726_0 conda-forge \nnumpy 1.25.1 py39h129f8d9_0 conda-forge \nnumpy 1.25.1 py39h6183b62_0 conda-forge \nnumpy 1.25.2 py310ha4c1d20_0 conda-forge \nnumpy 1.25.2 
py311h64a7726_0 conda-forge \nnumpy 1.25.2 py39h129f8d9_0 conda-forge \nnumpy 1.25.2 py39h6183b62_0 conda-forge \nnumpy 1.26.0 py310ha4c1d20_0 conda-forge \nnumpy 1.26.0 py310hb13e2d6_0 conda-forge \nnumpy 1.26.0 py311h64a7726_0 conda-forge \nnumpy 1.26.0 py312heda63a1_0 conda-forge \nnumpy 1.26.0 py39h129f8d9_0 conda-forge \nnumpy 1.26.0 py39h474f0d3_0 conda-forge \nnumpy 1.26.0 py39h6183b62_0 conda-forge \nnumpy 1.26.0 py39h6dedee3_0 conda-forge \nnumpy 1.26.2 py310hb13e2d6_0 conda-forge \nnumpy 1.26.2 py311h64a7726_0 conda-forge \nnumpy 1.26.2 py312heda63a1_0 conda-forge \nnumpy 1.26.2 py39h474f0d3_0 conda-forge \nnumpy 1.26.2 py39h6dedee3_0 conda-forge \nnumpy 1.26.3 py310hb13e2d6_0 conda-forge \nnumpy 1.26.3 py311h64a7726_0 conda-forge \nnumpy 1.26.3 py312heda63a1_0 conda-forge \nnumpy 1.26.3 py39h474f0d3_0 conda-forge \nnumpy 1.26.3 py39h6dedee3_0 conda-forge \nnumpy 1.26.4 py310hb13e2d6_0 conda-forge \nnumpy 1.26.4 py311h64a7726_0 conda-forge \nnumpy 1.26.4 py312heda63a1_0 conda-forge \nnumpy 1.26.4 py39h474f0d3_0 conda-forge \nnumpy 1.26.4 py39h6dedee3_0 conda-forge \nnumpy 2.0.0rc1 py310h515e003_0 conda-forge \nnumpy 2.0.0rc1 py311h1461c94_0 conda-forge \nnumpy 2.0.0rc1 py312h22e1c76_0 conda-forge \nnumpy 2.0.0rc1 py39ha0965c0_0 conda-forge \nnumpy 2.0.0rc1 py39hb0d58de_0 conda-forge \nnumpy 2.0.0rc2 py310h515e003_0 conda-forge \nnumpy 2.0.0rc2 py311h1461c94_0 conda-forge \nnumpy 2.0.0rc2 py312h22e1c76_0 conda-forge \nnumpy 2.0.0rc2 py39ha0965c0_0 conda-forge \nnumpy 2.0.0rc2 py39hb0d58de_0 conda-forge \nnumpy 2.0.0 py310h515e003_0 conda-forge \nnumpy 2.0.0 py311h1461c94_0 conda-forge \nnumpy 2.0.0 py312h22e1c76_0 conda-forge \nnumpy 2.0.0 py39ha0965c0_0 conda-forge \nnumpy 2.0.1 py310hf9f9071_0 conda-forge \nnumpy 2.0.1 py311hed25524_0 conda-forge \nnumpy 2.0.1 py312h1103770_0 conda-forge \nnumpy 2.0.1 py39h2fd3214_0 conda-forge \nnumpy 2.0.2 py310hd6e36ab_0 conda-forge \nnumpy 2.0.2 py310hd6e36ab_1 conda-forge \nnumpy 2.0.2 py311h71ddf71_0 conda-forge 
\nnumpy 2.0.2 py311h71ddf71_1 conda-forge \nnumpy 2.0.2 py312h58c1407_0 conda-forge \nnumpy 2.0.2 py312h58c1407_1 conda-forge \nnumpy 2.0.2 py39h9cb892a_0 conda-forge \nnumpy 2.0.2 py39h9cb892a_1 conda-forge \nnumpy 2.1.0rc1 py310hf9f9071_0 conda-forge \nnumpy 2.1.0rc1 py311hed25524_0 conda-forge \nnumpy 2.1.0rc1 py312h1103770_0 conda-forge \nnumpy 2.1.0 py310hd6e36ab_1 conda-forge \nnumpy 2.1.0 py310hf9f9071_0 conda-forge \nnumpy 2.1.0 py311h71ddf71_1 conda-forge \nnumpy 2.1.0 py311hed25524_0 conda-forge \nnumpy 2.1.0 py312h1103770_0 conda-forge \nnumpy 2.1.0 py312h58c1407_1 conda-forge \nnumpy 2.1.0 py313h4bf6692_1 conda-forge \nnumpy 2.1.1 py310hd6e36ab_0 conda-forge \nnumpy 2.1.1 py311h71ddf71_0 conda-forge \nnumpy 2.1.1 py312h58c1407_0 conda-forge \nnumpy 2.1.1 py313h4bf6692_0 conda-forge \nnumpy 2.1.2 py310hd6e36ab_0 conda-forge \nnumpy 2.1.2 py311h71ddf71_0 conda-forge \nnumpy 2.1.2 py312h58c1407_0 conda-forge \nnumpy 2.1.2 py313h4bf6692_0 conda-forge \nnumpy 2.1.2 py313hb01392b_0 conda-forge \nnumpy 2.1.3 py310hd6e36ab_0 conda-forge \nnumpy 2.1.3 py311h71ddf71_0 conda-forge \nnumpy 2.1.3 py312h58c1407_0 conda-forge \nnumpy 2.1.3 py313h4bf6692_0 conda-forge \nnumpy 2.1.3 py313hb01392b_0 conda-forge \nnumpy 2.2.0rc1 py310h5851e9f_0 conda-forge \nnumpy 2.2.0rc1 py311hf916aec_0 conda-forge \nnumpy 2.2.0rc1 py312h7e784f5_0 conda-forge \nnumpy 2.2.0rc1 py313h151ba9f_0 conda-forge \nnumpy 2.2.0rc1 py313hb30382a_0 conda-forge \nnumpy 2.2.0 py310h5851e9f_0 conda-forge \nnumpy 2.2.0 py311hf916aec_0 conda-forge \nnumpy 2.2.0 py312h7e784f5_0 conda-forge \nnumpy 2.2.0 py313h151ba9f_0 conda-forge \nnumpy 2.2.0 py313hb30382a_0 conda-forge \nnumpy 2.2.1 py310h5851e9f_0 conda-forge \nnumpy 2.2.1 py311hf916aec_0 conda-forge \nnumpy 2.2.1 py312h7e784f5_0 conda-forge \nnumpy 2.2.1 py313h151ba9f_0 conda-forge \nnumpy 2.2.1 py313hb30382a_0 conda-forge \nnumpy 2.2.2 py310hefbff90_0 conda-forge \nnumpy 2.2.2 py311h5d046bc_0 conda-forge \nnumpy 2.2.2 py312h72c5963_0 conda-forge 
\nnumpy 2.2.2 py313h103f029_0 conda-forge \nnumpy 2.2.2 py313h17eae1a_0 conda-forge \nnumpy 2.2.3 py310hefbff90_0 conda-forge \nnumpy 2.2.3 py311h5d046bc_0 conda-forge \nnumpy 2.2.3 py312h72c5963_0 conda-forge \nnumpy 2.2.3 py313h103f029_0 conda-forge \nnumpy 2.2.3 py313h17eae1a_0 conda-forge \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "### Enhanced Conda Commands", + "", + "A set of enhanced conda commands in the conda environment lifecycle management package `env-lcm` supports the management of environments saved to Object Storage, including uploading, downloading, listing, and deleting available environments." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988423923, + "endTime" : 1739988424366, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Enhanced Conda Commands

\n

A set of enhanced conda commands in the conda environment lifecycle management package env-lcm supports the management of environments saved to Object Storage, including uploading, downloading, listing, and deleting available environments.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Help for conda lifecycle environment commands", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "env-lcm --help " + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988424857, + "endTime" : 1739988427045, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Usage: conda-env-lcm [OPTIONS] COMMAND [ARGS]...\n\n ADB-S Command Line Interface (CLI) to manage persistence of conda\n environments\n\nOptions:\n -v, --version Show the version and exit.\n --help Show this message and exit.\n\nCommands:\n delete Delete a saved conda environment\n download Download a saved conda environment\n import Create or update a conda environment from saved metadata\n list-local-envs List locally available environments for use\n list-saved-envs List saved conda environments\n upload Save conda environment for later use\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "### Creating Conda Environments", + "", + "The ADMIN user has the permissions to create conda environments and install packages.", + "", + "Start by listing the environments available by default. Conda contains default environments with some core system libraries and conda dependencies. The active environment is marked with an asterisk (*)." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988427522, + "endTime" : 1739988427965, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Creating Conda Environments

\n

The ADMIN user has permission to create conda environments and install packages.

\n

Start by listing the environments available by default. Conda contains default environments with some core system libraries and conda dependencies. The active environment is marked with an asterisk (*).

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "List environments", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "env list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988428467, + "endTime" : 1739988430351, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# conda environments:\n#\nbase * /opt/conda\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Installing and removing packages from conda environments", + "hasTitle" : true, + "message" : [ + "%md", + "---", + "### Create a Conda Environment", + "---", + "", + "This section demonstrates creating a conda environment, installing packages into it, and then removing the environment. We illustrate commonly used options for environment creation and testing. The environment exists for the duration of the notebook session and does not persist between sessions unless it is saved to Object Storage. For instructions that cover both creating and persisting an environment for OML users, refer to Section 2, *Create a Conda Environment and Upload to Object Storage*.", + "", + "As ADMIN user:", + "", + "- Use the `create` command to create an environment and install the Python *keras* package. 
", + "- Verify that the new environment is created, and activate the environment.", + "- Install, then uninstall an additional Python package into the environment, *pytorch*.", + "- Deactivate and remove the environment.", + "", + "Notes:", + "", + "- When conda installs a package into an environment it also installs any required dependencies. While we are demonstrating that it's possible to install packages to an existing environment, it is a best practice to install all the packages that you want in a specific environment at the same time to avoid package dependency conflicts.", + "", + "- The ADMIN user can access the conda environment from Python and R, but does not have the capability to run embedded Python and R execution commands.", + "", + "For help with the conda `create` command, enter `create --help` in a %conda paragraph." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988430843, + "endTime" : 1739988431290, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

This section demonstrates creating a conda environment, installing packages into it, and then removing the environment. We illustrate commonly used options for environment creation and testing. The environment exists for the duration of the notebook session and does not persist between sessions unless it is saved to Object Storage. For instructions that cover both creating and persisting an environment for OML users, refer to Section 2, Create a Conda Environment and Upload to Object Storage.

\n

As ADMIN user:

\n
    \n
  • Use the create command to create an environment and install the Python keras package.
  • \n
  • Verify that the new environment is created, and activate the environment.
  • \n
  • Install an additional Python package, pytorch, into the environment, then uninstall it.
  • \n
  • Deactivate and remove the environment.
  • \n
\n

Notes:

\n
    \n
  • \n

    When conda installs a package into an environment, it also installs any required dependencies. While we are demonstrating that it's possible to install packages into an existing environment, it is a best practice to install all the packages you want in an environment at the same time to avoid package dependency conflicts.

    \n
  • \n
  • \n

    The ADMIN user can access the conda environment from Python and R, but does not have the capability to run embedded Python and R execution commands.

    \n
  • \n
\n

For help with the conda create command, enter create --help in a %conda paragraph.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create conda environment", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Create conda environment called *myenv* with Python 3.12 for OML4Py compatibility and install the *keras* package." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988431792, + "endTime" : 1739988432236, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Create a conda environment called myenv with Python 3.12 for OML4Py compatibility and install the keras package.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda ", + "", + "create -n myenv python=3.12.6 keras " + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988432743, + "endTime" : 1739988451473, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/myenv\n\n added / updated specs:\n - keras\n - python=3.12.6\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n _libgcc_mutex-0.1 | main 3 KB\n _openmp_mutex-5.1 | 1_gnu 21 KB\n absl-py-2.1.0 | py312h06a4308_0 239 KB\n blas-1.0 | mkl 6 KB\n bzip2-1.0.8 | h5eee18b_6 262 KB\n c-ares-1.19.1 | h5eee18b_0 118 KB\n ca-certificates-2024.12.31 | h06a4308_0 128 KB\n expat-2.6.4 | h6a678d5_0 180 KB\n h5py-3.12.1 | py312h5842655_1 1.4 MB\n hdf5-1.14.5 | h2b7332f_2 5.9 MB\n intel-openmp-2023.1.0 | hdb19cb5_46306 17.2 MB\n keras-3.6.0 | py312h06a4308_0 2.9 MB\n krb5-1.20.1 | h143b758_1 1.3 MB\n ld_impl_linux-64-2.40 | h12ee557_0 710 KB\n libcurl-8.12.0 | hc9e6f67_0 468 KB\n libedit-3.1.20230828 | h5eee18b_0 179 KB\n libev-4.33 | h7f8727e_1 111 KB\n libffi-3.4.4 | h6a678d5_1 141 KB\n libgcc-ng-11.2.0 | h1234567_1 5.3 MB\n libgfortran-ng-11.2.0 | h00389a5_1 20 KB\n libgfortran5-11.2.0 | h1234567_1 2.0 MB\n libgomp-11.2.0 | h1234567_1 474 
KB\n libnghttp2-1.57.0 | h2d74bed_0 674 KB\n libssh2-1.11.1 | h251f7ec_0 308 KB\n libstdcxx-ng-11.2.0 | h1234567_1 4.7 MB\n libuuid-1.41.5 | h5eee18b_0 27 KB\n lz4-c-1.9.4 | h6a678d5_1 156 KB\n markdown-it-py-2.2.0 | py312h06a4308_1 134 KB\n mdurl-0.1.0 | py312h06a4308_0 22 KB\n mkl-2023.1.0 | h213fc3f_46344 171.5 MB\n mkl-service-2.4.0 | py312h5eee18b_2 67 KB\n mkl_fft-1.3.11 | py312h5eee18b_0 205 KB\n mkl_random-1.2.8 | py312h526ad5a_0 324 KB\n ml_dtypes-0.5.0 | py312h6a678d5_0 302 KB\n namex-0.0.7 | py312h06a4308_0 15 KB\n ncurses-6.4 | h6a678d5_0 914 KB\n numpy-2.0.1 | py312hc5e2394_1 11 KB\n numpy-base-2.0.1 | py312h0da6c21_1 8.5 MB\n openssl-3.0.15 | h5eee18b_0 5.2 MB\n optree-0.12.1 | py312hdb19cb5_0 326 KB\n packaging-24.2 | py312h06a4308_0 195 KB\n pip-25.0 | py312h06a4308_0 2.8 MB\n pygments-2.15.1 | py312h06a4308_1 1.7 MB\n python-3.12.6 | h5148396_1 34.6 MB\n readline-8.2 | h5eee18b_0 357 KB\n rich-13.9.4 | py312h06a4308_0 615 KB\n setuptools-75.8.0 | py312h06a4308_0 2.2 MB\n sqlite-3.45.3 | h5eee18b_0 1.2 MB\n tbb-2021.8.0 | hdb19cb5_0 1.6 MB\n tk-8.6.14 | h39e8969_0 3.4 MB\n typing-extensions-4.12.2 | py312h06a4308_0 9 KB\n typing_extensions-4.12.2 | py312h06a4308_0 79 KB\n tzdata-2025a | h04d1e81_0 117 KB\n wheel-0.45.1 | py312h06a4308_0 147 KB\n xz-5.6.4 | h5eee18b_1 567 KB\n zlib-1.2.13 | h5eee18b_1 111 KB\n zstd-1.5.6 | hc292b87_0 664 KB\n ------------------------------------------------------------\n Total: 282.8 MB\n\nThe following NEW packages will be INSTALLED:\n\n _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main \n _openmp_mutex pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu \n absl-py pkgs/main/linux-64::absl-py-2.1.0-py312h06a4308_0 \n blas pkgs/main/linux-64::blas-1.0-mkl \n bzip2 pkgs/main/linux-64::bzip2-1.0.8-h5eee18b_6 \n c-ares pkgs/main/linux-64::c-ares-1.19.1-h5eee18b_0 \n ca-certificates pkgs/main/linux-64::ca-certificates-2024.12.31-h06a4308_0 \n expat pkgs/main/linux-64::expat-2.6.4-h6a678d5_0 \n h5py 
pkgs/main/linux-64::h5py-3.12.1-py312h5842655_1 \n hdf5 pkgs/main/linux-64::hdf5-1.14.5-h2b7332f_2 \n intel-openmp pkgs/main/linux-64::intel-openmp-2023.1.0-hdb19cb5_46306 \n keras pkgs/main/linux-64::keras-3.6.0-py312h06a4308_0 \n krb5 pkgs/main/linux-64::krb5-1.20.1-h143b758_1 \n ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.40-h12ee557_0 \n libcurl pkgs/main/linux-64::libcurl-8.12.0-hc9e6f67_0 \n libedit pkgs/main/linux-64::libedit-3.1.20230828-h5eee18b_0 \n libev pkgs/main/linux-64::libev-4.33-h7f8727e_1 \n libffi pkgs/main/linux-64::libffi-3.4.4-h6a678d5_1 \n libgcc-ng pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1 \n libgfortran-ng pkgs/main/linux-64::libgfortran-ng-11.2.0-h00389a5_1 \n libgfortran5 pkgs/main/linux-64::libgfortran5-11.2.0-h1234567_1 \n libgomp pkgs/main/linux-64::libgomp-11.2.0-h1234567_1 \n libnghttp2 pkgs/main/linux-64::libnghttp2-1.57.0-h2d74bed_0 \n libssh2 pkgs/main/linux-64::libssh2-1.11.1-h251f7ec_0 \n libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1 \n libuuid pkgs/main/linux-64::libuuid-1.41.5-h5eee18b_0 \n lz4-c pkgs/main/linux-64::lz4-c-1.9.4-h6a678d5_1 \n markdown-it-py pkgs/main/linux-64::markdown-it-py-2.2.0-py312h06a4308_1 \n mdurl pkgs/main/linux-64::mdurl-0.1.0-py312h06a4308_0 \n mkl pkgs/main/linux-64::mkl-2023.1.0-h213fc3f_46344 \n mkl-service pkgs/main/linux-64::mkl-service-2.4.0-py312h5eee18b_2 \n mkl_fft pkgs/main/linux-64::mkl_fft-1.3.11-py312h5eee18b_0 \n mkl_random pkgs/main/linux-64::mkl_random-1.2.8-py312h526ad5a_0 \n ml_dtypes pkgs/main/linux-64::ml_dtypes-0.5.0-py312h6a678d5_0 \n namex pkgs/main/linux-64::namex-0.0.7-py312h06a4308_0 \n ncurses pkgs/main/linux-64::ncurses-6.4-h6a678d5_0 \n numpy pkgs/main/linux-64::numpy-2.0.1-py312hc5e2394_1 \n numpy-base pkgs/main/linux-64::numpy-base-2.0.1-py312h0da6c21_1 \n openssl pkgs/main/linux-64::openssl-3.0.15-h5eee18b_0 \n optree pkgs/main/linux-64::optree-0.12.1-py312hdb19cb5_0 \n packaging pkgs/main/linux-64::packaging-24.2-py312h06a4308_0 \n 
pip pkgs/main/linux-64::pip-25.0-py312h06a4308_0 \n pygments pkgs/main/linux-64::pygments-2.15.1-py312h06a4308_1 \n python pkgs/main/linux-64::python-3.12.6-h5148396_1 \n readline pkgs/main/linux-64::readline-8.2-h5eee18b_0 \n rich pkgs/main/linux-64::rich-13.9.4-py312h06a4308_0 \n setuptools pkgs/main/linux-64::setuptools-75.8.0-py312h06a4308_0 \n sqlite pkgs/main/linux-64::sqlite-3.45.3-h5eee18b_0 \n tbb pkgs/main/linux-64::tbb-2021.8.0-hdb19cb5_0 \n tk pkgs/main/linux-64::tk-8.6.14-h39e8969_0 \n typing-extensions pkgs/main/linux-64::typing-extensions-4.12.2-py312h06a4308_0 \n typing_extensions pkgs/main/linux-64::typing_extensions-4.12.2-py312h06a4308_0 \n tzdata pkgs/main/noarch::tzdata-2025a-h04d1e81_0 \n wheel pkgs/main/linux-64::wheel-0.45.1-py312h06a4308_0 \n xz pkgs/main/linux-64::xz-5.6.4-h5eee18b_1 \n zlib pkgs/main/linux-64::zlib-1.2.13-h5eee18b_1 \n zstd pkgs/main/linux-64::zstd-1.5.6-hc292b87_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n#\n# To activate this environment, use\n#\n# $ conda activate myenv\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Verify environment creation", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Verify the *myenv* environment is in the list of environments. The asterisk (*) indicates active environments. The new environment is created but not activated." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988451958, + "endTime" : 1739988452402, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Verify the myenv environment is in the list of environments. The asterisk (*) indicates active environments. The new environment is created but not activated.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "env list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988452892, + "endTime" : 1739988454778, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# conda environments:\n#\nbase * /opt/conda\nmyenv /u01/.conda/envs/myenv\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Activate the environment", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Activate the *myenv* environment and list the environments to verify the activation. The asterisk (*) next to the environment name confirms the activation." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988455261, + "endTime" : 1739988455710, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Activate the myenv environment and list the environments to verify the activation. The asterisk (*) next to the environment name confirms the activation.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "activate myenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988456189, + "endTime" : 1739988458975, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nConda environment 'myenv' activated\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Show the activated environment in the environment listing", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "env list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988459462, + "endTime" : 1739988461341, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# conda environments:\n#\nbase /opt/conda\nmyenv * /u01/.conda/envs/myenv\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "### Installing and Uninstalling Libraries", + "", + "The ADMIN user can install and uninstall libraries into an environment using the `install` and `uninstall` commands.", + "", + "For help with the 
conda `install` and `uninstall` commands, type `install --help` and `uninstall --help` in a %conda paragraph." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988461824, + "endTime" : 1739988462273, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Installing and Uninstalling Libraries

\n

The ADMIN user can install and uninstall libraries into an environment using the install and uninstall commands.

\n

For help with the conda install and uninstall commands, type install --help and uninstall --help in a %conda paragraph.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Install additional packages", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Install the *pytorch* package into the activated *myenv* environment. Note: to avoid dependency conflicts, install all packages when the environment is built." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988462750, + "endTime" : 1739988463207, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Install the pytorch package into the activated myenv environment. Note: to avoid dependency conflicts, install all packages when the environment is built.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "install pytorch -c pytorch" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988463685, + "endTime" : 1739988643949, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - pytorch\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/myenv\n\n added / updated specs:\n - pytorch\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n cuda-cudart-12.4.127 | h99ab3db_0 21 KB\n cuda-cudart_linux-64-12.4.127| hd681fbe_0 207 KB\n cuda-cupti-12.4.127 | h6a678d5_1 2.0 MB\n cuda-libraries-12.4.1 | h06a4308_1 19 KB\n cuda-nvrtc-12.4.127 | h99ab3db_1 19.8 MB\n cuda-nvtx-12.4.127 | h6a678d5_1 30 KB\n cuda-opencl-12.4.127 | h6a678d5_0 28 KB\n cuda-runtime-12.4.1 | hb982923_0 18 KB\n cuda-version-12.4 | hbda6634_3 19 KB\n filelock-3.13.1 | py312h06a4308_0 24 KB\n gmp-6.3.0 | h6a678d5_0 608 KB\n gmpy2-2.2.1 | py312h5eee18b_0 258 KB\n jinja2-3.1.5 | py312h06a4308_0 352 KB\n libcublas-12.4.5.8 | h99ab3db_1 247.6 MB\n libcufft-11.2.1.3 | h99ab3db_1 174.5 MB\n libcufile-1.9.1.3 | h99ab3db_1 995 KB\n libcurand-10.3.5.147 | h99ab3db_1 42.4 MB\n libcusolver-11.6.1.9 | h99ab3db_1 83.5 MB\n libcusparse-12.3.1.170 | h99ab3db_1 120.0 MB\n libnpp-12.2.5.30 | h99ab3db_1 100.5 MB\n 
libnvfatbin-12.4.127 | h7934f7d_2 857 KB\n libnvjitlink-12.4.127 | h99ab3db_1 17.4 MB\n libnvjpeg-12.3.1.117 | h6a678d5_1 2.6 MB\n llvm-openmp-14.0.6 | h9e868ea_0 4.4 MB\n markupsafe-3.0.2 | py312h5eee18b_0 26 KB\n mpc-1.3.1 | h5eee18b_0 129 KB\n mpfr-4.2.1 | h5eee18b_0 821 KB\n mpmath-1.3.0 | py312h06a4308_0 988 KB\n networkx-3.4.2 | py312h06a4308_0 3.1 MB\n ocl-icd-2.3.2 | h5eee18b_1 136 KB\n pytorch-2.5.1 |py3.12_cuda12.4_cudnn9.1.0_0 1.46 GB pytorch\n pytorch-cuda-12.4 | hc786d27_7 7 KB pytorch\n pytorch-mutex-1.0 | cuda 3 KB pytorch\n pyyaml-6.0.2 | py312h5eee18b_0 217 KB\n sympy-1.13.3 | py312h06a4308_1 15.0 MB\n torchtriton-3.1.0 | py312 233.6 MB pytorch\n yaml-0.2.5 | h7b6447c_0 75 KB\n ------------------------------------------------------------\n Total: 2.51 GB\n\nThe following NEW packages will be INSTALLED:\n\n cuda-cudart pkgs/main/linux-64::cuda-cudart-12.4.127-h99ab3db_0 \n cuda-cudart_linux~ pkgs/main/noarch::cuda-cudart_linux-64-12.4.127-hd681fbe_0 \n cuda-cupti pkgs/main/linux-64::cuda-cupti-12.4.127-h6a678d5_1 \n cuda-libraries pkgs/main/linux-64::cuda-libraries-12.4.1-h06a4308_1 \n cuda-nvrtc pkgs/main/linux-64::cuda-nvrtc-12.4.127-h99ab3db_1 \n cuda-nvtx pkgs/main/linux-64::cuda-nvtx-12.4.127-h6a678d5_1 \n cuda-opencl pkgs/main/linux-64::cuda-opencl-12.4.127-h6a678d5_0 \n cuda-runtime pkgs/main/noarch::cuda-runtime-12.4.1-hb982923_0 \n cuda-version pkgs/main/noarch::cuda-version-12.4-hbda6634_3 \n filelock pkgs/main/linux-64::filelock-3.13.1-py312h06a4308_0 \n gmp pkgs/main/linux-64::gmp-6.3.0-h6a678d5_0 \n gmpy2 pkgs/main/linux-64::gmpy2-2.2.1-py312h5eee18b_0 \n jinja2 pkgs/main/linux-64::jinja2-3.1.5-py312h06a4308_0 \n libcublas pkgs/main/linux-64::libcublas-12.4.5.8-h99ab3db_1 \n libcufft pkgs/main/linux-64::libcufft-11.2.1.3-h99ab3db_1 \n libcufile pkgs/main/linux-64::libcufile-1.9.1.3-h99ab3db_1 \n libcurand pkgs/main/linux-64::libcurand-10.3.5.147-h99ab3db_1 \n libcusolver pkgs/main/linux-64::libcusolver-11.6.1.9-h99ab3db_1 \n libcusparse 
pkgs/main/linux-64::libcusparse-12.3.1.170-h99ab3db_1 \n libnpp pkgs/main/linux-64::libnpp-12.2.5.30-h99ab3db_1 \n libnvfatbin pkgs/main/linux-64::libnvfatbin-12.4.127-h7934f7d_2 \n libnvjitlink pkgs/main/linux-64::libnvjitlink-12.4.127-h99ab3db_1 \n libnvjpeg pkgs/main/linux-64::libnvjpeg-12.3.1.117-h6a678d5_1 \n llvm-openmp pkgs/main/linux-64::llvm-openmp-14.0.6-h9e868ea_0 \n markupsafe pkgs/main/linux-64::markupsafe-3.0.2-py312h5eee18b_0 \n mpc pkgs/main/linux-64::mpc-1.3.1-h5eee18b_0 \n mpfr pkgs/main/linux-64::mpfr-4.2.1-h5eee18b_0 \n mpmath pkgs/main/linux-64::mpmath-1.3.0-py312h06a4308_0 \n networkx pkgs/main/linux-64::networkx-3.4.2-py312h06a4308_0 \n ocl-icd pkgs/main/linux-64::ocl-icd-2.3.2-h5eee18b_1 \n pytorch pytorch/linux-64::pytorch-2.5.1-py3.12_cuda12.4_cudnn9.1.0_0 \n pytorch-cuda pytorch/linux-64::pytorch-cuda-12.4-hc786d27_7 \n pytorch-mutex pytorch/noarch::pytorch-mutex-1.0-cuda \n pyyaml pkgs/main/linux-64::pyyaml-6.0.2-py312h5eee18b_0 \n sympy pkgs/main/linux-64::sympy-1.13.3-py312h06a4308_1 \n torchtriton pytorch/linux-64::torchtriton-3.1.0-py312 \n yaml pkgs/main/linux-64::yaml-0.2.5-h7b6447c_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "List packages in the current environment", + "hasTitle" : true, + "message" : [ + "%md", + "", + "List the packages installed in the current environment, and confirm that *keras* and *pytorch* are installed." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988644452, + "endTime" : 1739988644897, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

List the packages installed in the current environment, and confirm that keras and pytorch are installed.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988645384, + "endTime" : 1739988647672, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# packages in environment at /u01/.conda/envs/myenv:\n#\n# Name Version Build Channel\n_libgcc_mutex 0.1 main \n_openmp_mutex 5.1 1_gnu \nabsl-py 2.1.0 py312h06a4308_0 \nblas 1.0 mkl \nbzip2 1.0.8 h5eee18b_6 \nc-ares 1.19.1 h5eee18b_0 \nca-certificates 2024.12.31 h06a4308_0 \ncuda-cudart 12.4.127 h99ab3db_0 \ncuda-cudart_linux-64 12.4.127 hd681fbe_0 \ncuda-cupti 12.4.127 h6a678d5_1 \ncuda-libraries 12.4.1 h06a4308_1 \ncuda-nvrtc 12.4.127 h99ab3db_1 \ncuda-nvtx 12.4.127 h6a678d5_1 \ncuda-opencl 12.4.127 h6a678d5_0 \ncuda-runtime 12.4.1 hb982923_0 \ncuda-version 12.4 hbda6634_3 \nexpat 2.6.4 h6a678d5_0 \nfilelock 3.13.1 py312h06a4308_0 \ngmp 6.3.0 h6a678d5_0 \ngmpy2 2.2.1 py312h5eee18b_0 \nh5py 3.12.1 py312h5842655_1 \nhdf5 1.14.5 h2b7332f_2 \nintel-openmp 2023.1.0 hdb19cb5_46306 \njinja2 3.1.5 py312h06a4308_0 \nkeras 3.6.0 py312h06a4308_0 \nkrb5 1.20.1 h143b758_1 \nld_impl_linux-64 2.40 h12ee557_0 \nlibcublas 12.4.5.8 h99ab3db_1 \nlibcufft 11.2.1.3 h99ab3db_1 \nlibcufile 1.9.1.3 h99ab3db_1 \nlibcurand 10.3.5.147 h99ab3db_1 \nlibcurl 8.12.0 hc9e6f67_0 \nlibcusolver 11.6.1.9 h99ab3db_1 \nlibcusparse 12.3.1.170 h99ab3db_1 \nlibedit 3.1.20230828 h5eee18b_0 \nlibev 4.33 h7f8727e_1 \nlibffi 3.4.4 h6a678d5_1 \nlibgcc-ng 11.2.0 h1234567_1 
\nlibgfortran-ng 11.2.0 h00389a5_1 \nlibgfortran5 11.2.0 h1234567_1 \nlibgomp 11.2.0 h1234567_1 \nlibnghttp2 1.57.0 h2d74bed_0 \nlibnpp 12.2.5.30 h99ab3db_1 \nlibnvfatbin 12.4.127 h7934f7d_2 \nlibnvjitlink 12.4.127 h99ab3db_1 \nlibnvjpeg 12.3.1.117 h6a678d5_1 \nlibssh2 1.11.1 h251f7ec_0 \nlibstdcxx-ng 11.2.0 h1234567_1 \nlibuuid 1.41.5 h5eee18b_0 \nllvm-openmp 14.0.6 h9e868ea_0 \nlz4-c 1.9.4 h6a678d5_1 \nmarkdown-it-py 2.2.0 py312h06a4308_1 \nmarkupsafe 3.0.2 py312h5eee18b_0 \nmdurl 0.1.0 py312h06a4308_0 \nmkl 2023.1.0 h213fc3f_46344 \nmkl-service 2.4.0 py312h5eee18b_2 \nmkl_fft 1.3.11 py312h5eee18b_0 \nmkl_random 1.2.8 py312h526ad5a_0 \nml_dtypes 0.5.0 py312h6a678d5_0 \nmpc 1.3.1 h5eee18b_0 \nmpfr 4.2.1 h5eee18b_0 \nmpmath 1.3.0 py312h06a4308_0 \nnamex 0.0.7 py312h06a4308_0 \nncurses 6.4 h6a678d5_0 \nnetworkx 3.4.2 py312h06a4308_0 \nnumpy 2.0.1 py312hc5e2394_1 \nnumpy-base 2.0.1 py312h0da6c21_1 \nocl-icd 2.3.2 h5eee18b_1 \nopenssl 3.0.15 h5eee18b_0 \noptree 0.12.1 py312hdb19cb5_0 \npackaging 24.2 py312h06a4308_0 \npip 25.0 py312h06a4308_0 \npygments 2.15.1 py312h06a4308_1 \npython 3.12.6 h5148396_1 \npytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch\npytorch-cuda 12.4 hc786d27_7 pytorch\npytorch-mutex 1.0 cuda pytorch\npyyaml 6.0.2 py312h5eee18b_0 \nreadline 8.2 h5eee18b_0 \nrich 13.9.4 py312h06a4308_0 \nsetuptools 75.8.0 py312h06a4308_0 \nsqlite 3.45.3 h5eee18b_0 \nsympy 1.13.3 py312h06a4308_1 \ntbb 2021.8.0 hdb19cb5_0 \ntk 8.6.14 h39e8969_0 \ntorchtriton 3.1.0 py312 pytorch\ntyping-extensions 4.12.2 py312h06a4308_0 \ntyping_extensions 4.12.2 py312h06a4308_0 \ntzdata 2025a h04d1e81_0 \nwheel 0.45.1 py312h06a4308_0 \nxz 5.6.4 h5eee18b_1 \nyaml 0.2.5 h7b6447c_0 \nzlib 1.2.13 h5eee18b_1 \nzstd 1.5.6 hc292b87_0 \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Uninstall package", + "hasTitle" : true, + "message" : [ + "%md", + "", 
+ "Libraries can be uninstalled from an environment using the `uninstall` command. Let's uninstall the *pytorch* package from the current environment." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988648156, + "endTime" : 1739988648609, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Libraries can be uninstalled from an environment using the uninstall command. Let's uninstall the pytorch package from the current environment.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "uninstall pytorch" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988649097, + "endTime" : 1739988658710, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - defaults\n - pytorch\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... WARNING conda.conda_libmamba_solver.solver:_export_solved_records(858): Tried to unlink __linux but it is not installed or manageable?\ndone\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/myenv\n\n removed specs:\n - pytorch\n\n\nThe following packages will be REMOVED:\n\n cuda-cudart-12.4.127-h99ab3db_0\n cuda-cudart_linux-64-12.4.127-hd681fbe_0\n cuda-cupti-12.4.127-h6a678d5_1\n cuda-libraries-12.4.1-h06a4308_1\n cuda-nvrtc-12.4.127-h99ab3db_1\n cuda-nvtx-12.4.127-h6a678d5_1\n cuda-opencl-12.4.127-h6a678d5_0\n cuda-runtime-12.4.1-hb982923_0\n cuda-version-12.4-hbda6634_3\n filelock-3.13.1-py312h06a4308_0\n gmp-6.3.0-h6a678d5_0\n gmpy2-2.2.1-py312h5eee18b_0\n jinja2-3.1.5-py312h06a4308_0\n libcublas-12.4.5.8-h99ab3db_1\n libcufft-11.2.1.3-h99ab3db_1\n libcufile-1.9.1.3-h99ab3db_1\n libcurand-10.3.5.147-h99ab3db_1\n libcusolver-11.6.1.9-h99ab3db_1\n libcusparse-12.3.1.170-h99ab3db_1\n libnpp-12.2.5.30-h99ab3db_1\n libnvfatbin-12.4.127-h7934f7d_2\n libnvjitlink-12.4.127-h99ab3db_1\n libnvjpeg-12.3.1.117-h6a678d5_1\n llvm-openmp-14.0.6-h9e868ea_0\n 
markupsafe-3.0.2-py312h5eee18b_0\n mpc-1.3.1-h5eee18b_0\n mpfr-4.2.1-h5eee18b_0\n mpmath-1.3.0-py312h06a4308_0\n networkx-3.4.2-py312h06a4308_0\n ocl-icd-2.3.2-h5eee18b_1\n pytorch-2.5.1-py3.12_cuda12.4_cudnn9.1.0_0\n pytorch-cuda-12.4-hc786d27_7\n pytorch-mutex-1.0-cuda\n pyyaml-6.0.2-py312h5eee18b_0\n sympy-1.13.3-py312h06a4308_1\n torchtriton-3.1.0-py312\n yaml-0.2.5-h7b6447c_0\n\n\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Verify package was uninstalled", + "hasTitle" : true, + "message" : [ + "%md", + "", + "List packages in current environment and verify that the *pytorch* package was uninstalled." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988659194, + "endTime" : 1739988659655, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

List packages in the current environment and verify that the pytorch package was uninstalled.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988660467, + "endTime" : 1739988662565, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# packages in environment at /u01/.conda/envs/myenv:\n#\n# Name Version Build Channel\n_libgcc_mutex 0.1 main \n_openmp_mutex 5.1 1_gnu \nabsl-py 2.1.0 py312h06a4308_0 \nblas 1.0 mkl \nbzip2 1.0.8 h5eee18b_6 \nc-ares 1.19.1 h5eee18b_0 \nca-certificates 2024.12.31 h06a4308_0 \nexpat 2.6.4 h6a678d5_0 \nh5py 3.12.1 py312h5842655_1 \nhdf5 1.14.5 h2b7332f_2 \nintel-openmp 2023.1.0 hdb19cb5_46306 \nkeras 3.6.0 py312h06a4308_0 \nkrb5 1.20.1 h143b758_1 \nld_impl_linux-64 2.40 h12ee557_0 \nlibcurl 8.12.0 hc9e6f67_0 \nlibedit 3.1.20230828 h5eee18b_0 \nlibev 4.33 h7f8727e_1 \nlibffi 3.4.4 h6a678d5_1 \nlibgcc-ng 11.2.0 h1234567_1 \nlibgfortran-ng 11.2.0 h00389a5_1 \nlibgfortran5 11.2.0 h1234567_1 \nlibgomp 11.2.0 h1234567_1 \nlibnghttp2 1.57.0 h2d74bed_0 \nlibssh2 1.11.1 h251f7ec_0 \nlibstdcxx-ng 11.2.0 h1234567_1 \nlibuuid 1.41.5 h5eee18b_0 \nlz4-c 1.9.4 h6a678d5_1 \nmarkdown-it-py 2.2.0 py312h06a4308_1 \nmdurl 0.1.0 py312h06a4308_0 \nmkl 2023.1.0 h213fc3f_46344 \nmkl-service 2.4.0 py312h5eee18b_2 \nmkl_fft 1.3.11 py312h5eee18b_0 \nmkl_random 1.2.8 py312h526ad5a_0 \nml_dtypes 0.5.0 py312h6a678d5_0 \nnamex 0.0.7 py312h06a4308_0 \nncurses 6.4 h6a678d5_0 \nnumpy 2.0.1 py312hc5e2394_1 \nnumpy-base 2.0.1 py312h0da6c21_1 \nopenssl 3.0.15 h5eee18b_0 
\noptree 0.12.1 py312hdb19cb5_0 \npackaging 24.2 py312h06a4308_0 \npip 25.0 py312h06a4308_0 \npygments 2.15.1 py312h06a4308_1 \npython 3.12.6 h5148396_1 \nreadline 8.2 h5eee18b_0 \nrich 13.9.4 py312h06a4308_0 \nsetuptools 75.8.0 py312h06a4308_0 \nsqlite 3.45.3 h5eee18b_0 \ntbb 2021.8.0 hdb19cb5_0 \ntk 8.6.14 h39e8969_0 \ntyping-extensions 4.12.2 py312h06a4308_0 \ntyping_extensions 4.12.2 py312h06a4308_0 \ntzdata 2025a h04d1e81_0 \nwheel 0.45.1 py312h06a4308_0 \nxz 5.6.4 h5eee18b_1 \nzlib 1.2.13 h5eee18b_1 \nzstd 1.5.6 hc292b87_0 \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "---", + "### Removing Environments", + "---", + "If you don't intend to upload the environment to Object Storage for the OML users in the database, you can simply exit the notebook session and it will go out of scope. Alternatively, it can be explicitly removed using the `env remove` command. Remove the *myenv* environment and verify it was removed. A best practice is to deactivate the environment prior to removal.", + "", + "For help on the `env remove` command, type `env remove --help` in the %conda interpreter." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988663052, + "endTime" : 1739988663516, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

If you don't intend to upload the environment to Object Storage for the OML users in the database, you can simply exit the notebook session and it will go out of scope. Alternatively, it can be explicitly removed using the env remove command. Remove the myenv environment and verify it was removed. A best practice is to deactivate the environment prior to removal.

\n

For help on the env remove command, type env remove --help in the %conda interpreter.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Deactivate the environment ", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "deactivate" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988664013, + "endTime" : 1739988666803, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nConda environment 'conda' deactivated\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Remove the environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "env remove -n myenv", + "", + "env list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988667287, + "endTime" : 1739988672396, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nRemove all packages in environment /u01/.conda/envs/myenv:\n\n\n\nusage: conda [-h] [-v] [--no-plugins] [-V] COMMAND ...\n\nconda is a tool for managing and deploying applications, environments and packages.\n\noptions:\n -h, --help Show this help message and exit.\n -v, --verbose Can be used multiple times. 
Once for detailed output,\n twice for INFO logging, thrice for DEBUG logging, four\n times for TRACE logging.\n --no-plugins Disable all plugins that are not built into conda.\n -V, --version Show the conda version number and exit.\n\ncommands:\n The following built-in and plugins subcommands are available.\n\n COMMAND\n activate Activate a conda environment.\n clean Remove unused packages and caches.\n compare Compare packages between conda environments.\n config Modify configuration values in .condarc.\n content-trust Signing and verification tools for Conda\n create Create a new conda environment from a list of specified\n packages.\n deactivate Deactivate the current active conda environment.\n doctor Display a health report for your environment.\n env-lcm See `conda env-lcm --help`.\n info Display information about current conda install.\n init Initialize conda for shell interaction.\n install Install a list of packages into a specified conda\n environment.\n list List installed packages in a conda environment.\n notices Retrieve latest channel notifications.\n pack See `conda pack --help`.\n package Create low-level conda packages. 
(EXPERIMENTAL)\n remove (uninstall)\n Remove a list of packages from a specified conda\n environment.\n rename Rename an existing environment.\n repoquery Advanced search for repodata.\n run Run an executable in a conda environment.\n search Search for packages and display associated information\n using the MatchSpec format.\n update (upgrade) Update conda packages to the latest compatible version.\n\n\n# conda environments:\n#\nbase * /opt/conda\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "", + "### Specify Packages for Installation", + "", + "", + "", + "#### Install Packages from the `conda-forge` Channel", + "* **Conda channels** are the locations where packages are stored. They serve as the base for hosting and managing packages. Conda packages are downloaded from remote channels, which are URLs to directories containing conda packages. The `conda` command searches a set of channels. By default, packages are automatically downloaded and updated from the [`default` channel] (https://repo.anaconda.com/pkgs/).", + "* The **conda-forge** channel is free for all to use. You can modify what remote channels are automatically searched. You might want to do this to maintain a private or internal channel. 
We use the [`conda-forge` channel](https://conda-forge.org/), a community channel made up of thousands of contributors, in the following examples.", + "", + "", + "#### Install a Specific Version of a Package", + "* To install a specific version of a package, use `<package_name>=<version>`" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988672882, + "endTime" : 1739988673332, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Specify Packages for Installation

\n

Install Packages from the conda-forge Channel

\n
    \n
  • Conda channels are the locations where packages are stored. They serve as the base for hosting and managing packages. Conda packages are downloaded from remote channels, which are URLs to directories containing conda packages. The conda command searches a set of channels. By default, packages are automatically downloaded and updated from the default channel (https://repo.anaconda.com/pkgs/).
  • \n
  • The conda-forge channel is free for all to use. You can modify what remote channels are automatically searched. You might want to do this to maintain a private or internal channel. We use the conda-forge channel (https://conda-forge.org/), a community channel made up of thousands of contributors, in the following examples.
  • \n
\n

Install a Specific Version of a Package

\n
    \n
  • To install a specific version of a package, use <package_name>=<version>
  • \n
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create an environment using conda-forge ", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "create -n mychannelenv -c conda-forge python=3.12.6", + "", + "activate mychannelenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988673812, + "endTime" : 1739988701037, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - conda-forge\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/mychannelenv\n\n added / updated specs:\n - python=3.12.6\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n _libgcc_mutex-0.1 | conda_forge 3 KB conda-forge\n _openmp_mutex-4.5 | 2_gnu 23 KB conda-forge\n bzip2-1.0.8 | h4bc722e_7 247 KB conda-forge\n ca-certificates-2025.1.31 | hbcca054_0 154 KB conda-forge\n ld_impl_linux-64-2.43 | h712a8e2_2 654 KB conda-forge\n libexpat-2.6.4 | h5888daf_0 72 KB conda-forge\n libffi-3.4.6 | h2dba641_0 52 KB conda-forge\n libgcc-14.2.0 | h77fa898_1 829 KB conda-forge\n libgcc-ng-14.2.0 | h69a702a_1 53 KB conda-forge\n libgomp-14.2.0 | h77fa898_1 450 KB conda-forge\n liblzma-5.6.4 | hb9d3cd8_0 109 KB conda-forge\n liblzma-devel-5.6.4 | hb9d3cd8_0 370 KB conda-forge\n libnsl-2.0.1 | hd590300_0 33 KB conda-forge\n libsqlite-3.49.1 | hee588c1_1 894 KB conda-forge\n libuuid-2.38.1 | h0b41bf4_0 33 KB conda-forge\n libxcrypt-4.4.36 | hd590300_1 98 KB conda-forge\n libzlib-1.3.1 | hb9d3cd8_2 60 KB conda-forge\n ncurses-6.5 | h2d0b736_3 871 KB conda-forge\n openssl-3.4.1 | h7b32b05_0 2.8 MB conda-forge\n pip-25.0.1 | pyh8b19718_0 1.2 MB conda-forge\n python-3.12.6 |hc5c86c4_2_cpython 30.1 MB conda-forge\n readline-8.2 | h8228510_1 275 KB conda-forge\n setuptools-75.8.0 | pyhff2d567_0 757 KB conda-forge\n tk-8.6.13 |noxft_h4845f30_101 3.2 MB conda-forge\n tzdata-2025a | h78e105d_0 120 KB conda-forge\n wheel-0.45.1 | pyhd8ed1ab_1 61 KB conda-forge\n xz-5.6.4 | hbcc6ac9_0 23 KB conda-forge\n xz-gpl-tools-5.6.4 | hbcc6ac9_0 33 KB conda-forge\n xz-tools-5.6.4 | hb9d3cd8_0 88 KB conda-forge\n ------------------------------------------------------------\n Total: 43.4 MB\n\nThe following NEW packages will be INSTALLED:\n\n _libgcc_mutex conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge \n _openmp_mutex conda-forge/linux-64::_openmp_mutex-4.5-2_gnu \n bzip2 conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7 \n 
ca-certificates conda-forge/linux-64::ca-certificates-2025.1.31-hbcca054_0 \n ld_impl_linux-64 conda-forge/linux-64::ld_impl_linux-64-2.43-h712a8e2_2 \n libexpat conda-forge/linux-64::libexpat-2.6.4-h5888daf_0 \n libffi conda-forge/linux-64::libffi-3.4.6-h2dba641_0 \n libgcc conda-forge/linux-64::libgcc-14.2.0-h77fa898_1 \n libgcc-ng conda-forge/linux-64::libgcc-ng-14.2.0-h69a702a_1 \n libgomp conda-forge/linux-64::libgomp-14.2.0-h77fa898_1 \n liblzma conda-forge/linux-64::liblzma-5.6.4-hb9d3cd8_0 \n liblzma-devel conda-forge/linux-64::liblzma-devel-5.6.4-hb9d3cd8_0 \n libnsl conda-forge/linux-64::libnsl-2.0.1-hd590300_0 \n libsqlite conda-forge/linux-64::libsqlite-3.49.1-hee588c1_1 \n libuuid conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0 \n libxcrypt conda-forge/linux-64::libxcrypt-4.4.36-hd590300_1 \n libzlib conda-forge/linux-64::libzlib-1.3.1-hb9d3cd8_2 \n ncurses conda-forge/linux-64::ncurses-6.5-h2d0b736_3 \n openssl conda-forge/linux-64::openssl-3.4.1-h7b32b05_0 \n pip conda-forge/noarch::pip-25.0.1-pyh8b19718_0 \n python conda-forge/linux-64::python-3.12.6-hc5c86c4_2_cpython \n readline conda-forge/linux-64::readline-8.2-h8228510_1 \n setuptools conda-forge/noarch::setuptools-75.8.0-pyhff2d567_0 \n tk conda-forge/linux-64::tk-8.6.13-noxft_h4845f30_101 \n tzdata conda-forge/noarch::tzdata-2025a-h78e105d_0 \n wheel conda-forge/noarch::wheel-0.45.1-pyhd8ed1ab_1 \n xz conda-forge/linux-64::xz-5.6.4-hbcc6ac9_0 \n xz-gpl-tools conda-forge/linux-64::xz-gpl-tools-5.6.4-hbcc6ac9_0 \n xz-tools conda-forge/linux-64::xz-tools-5.6.4-hb9d3cd8_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... 
done\n#\n# To activate this environment, use\n#\n# $ conda activate mychannelenv\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\n\n\nusage: conda [-h] [-v] [--no-plugins] [-V] COMMAND ...\n\nconda is a tool for managing and deploying applications, environments and packages.\n\noptions:\n -h, --help Show this help message and exit.\n -v, --verbose Can be used multiple times. Once for detailed output,\n twice for INFO logging, thrice for DEBUG logging, four\n times for TRACE logging.\n --no-plugins Disable all plugins that are not built into conda.\n -V, --version Show the conda version number and exit.\n\ncommands:\n The following built-in and plugins subcommands are available.\n\n COMMAND\n activate Activate a conda environment.\n clean Remove unused packages and caches.\n compare Compare packages between conda environments.\n config Modify configuration values in .condarc.\n content-trust Signing and verification tools for Conda\n create Create a new conda environment from a list of specified\n packages.\n deactivate Deactivate the current active conda environment.\n doctor Display a health report for your environment.\n env-lcm See `conda env-lcm --help`.\n info Display information about current conda install.\n init Initialize conda for shell interaction.\n install Install a list of packages into a specified conda\n environment.\n list List installed packages in a conda environment.\n notices Retrieve latest channel notifications.\n pack See `conda pack --help`.\n package Create low-level conda packages. 
(EXPERIMENTAL)\n remove (uninstall)\n Remove a list of packages from a specified conda\n environment.\n rename Rename an existing environment.\n repoquery Advanced search for repodata.\n run Run an executable in a conda environment.\n search Search for packages and display associated information\n using the MatchSpec format.\n update (upgrade) Update conda packages to the latest compatible version.\n\n\n\nConda environment 'mychannelenv' activated\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Install a package from conda-forge by specifying the channel", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "install seaborn --channel conda-forge" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988701560, + "endTime" : 1739988730874, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - conda-forge\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/mychannelenv\n\n added / updated specs:\n - seaborn\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n brotli-1.1.0 | hb9d3cd8_2 19 KB conda-forge\n brotli-bin-1.1.0 | hb9d3cd8_2 18 KB conda-forge\n contourpy-1.3.1 | py312h68727a3_0 270 KB conda-forge\n cycler-0.12.1 | pyhd8ed1ab_1 13 KB conda-forge\n fonttools-4.56.0 | py312h178313f_0 2.7 MB conda-forge\n freetype-2.12.1 | h267a509_2 620 KB conda-forge\n kiwisolver-1.4.8 | py312h84d6215_0 70 KB conda-forge\n lcms2-2.17 | h717163a_0 242 KB conda-forge\n lerc-4.0.0 | h27087fc_0 275 KB conda-forge\n libblas-3.9.0 |28_h59b9bed_openblas 16 KB conda-forge\n libbrotlicommon-1.1.0 | hb9d3cd8_2 67 KB conda-forge\n libbrotlidec-1.1.0 | hb9d3cd8_2 32 KB conda-forge\n libbrotlienc-1.1.0 | hb9d3cd8_2 275 KB conda-forge\n libcblas-3.9.0 |28_he106b2a_openblas 16 KB conda-forge\n libdeflate-1.23 | h4ddbbb0_0 71 KB conda-forge\n libgfortran-14.2.0 | h69a702a_1 53 KB conda-forge\n libgfortran5-14.2.0 | hd5240d6_1 1.4 MB conda-forge\n libjpeg-turbo-3.0.0 | hd590300_1 604 KB conda-forge\n liblapack-3.9.0 |28_h7ac8fdf_openblas 16 KB conda-forge\n libopenblas-0.3.28 |pthreads_h94d23a6_1 5.3 MB conda-forge\n libpng-1.6.47 | h943b412_0 282 KB conda-forge\n libstdcxx-14.2.0 | hc0a3c3a_1 3.7 MB conda-forge\n libstdcxx-ng-14.2.0 | h4852527_1 53 KB conda-forge\n libtiff-4.7.0 | hd9ff511_3 418 KB conda-forge\n libwebp-base-1.5.0 | h851e524_0 420 KB conda-forge\n libxcb-1.17.0 | h8a09558_0 387 KB conda-forge\n matplotlib-base-3.10.0 | py312hd3ec401_0 7.8 MB conda-forge\n munkres-1.1.4 | pyh9f0ad1d_0 12 KB conda-forge\n numpy-2.2.3 | py312h72c5963_0 8.1 MB conda-forge\n openjpeg-2.5.3 | h5fbd93e_0 335 KB conda-forge\n packaging-24.2 | pyhd8ed1ab_2 59 KB conda-forge\n pandas-2.2.3 | py312hf9745cd_1 14.7 MB conda-forge\n patsy-1.0.1 | pyhd8ed1ab_1 182 KB conda-forge\n pillow-11.1.0 | py312h80c1187_0 40.8 MB 
conda-forge\n pthread-stubs-0.4 | hb9d3cd8_1002 8 KB conda-forge\n pyparsing-3.2.1 | pyhd8ed1ab_0 91 KB conda-forge\n python-dateutil-2.9.0.post0| pyhff2d567_1 217 KB conda-forge\n python-tzdata-2025.1 | pyhd8ed1ab_0 140 KB conda-forge\n python_abi-3.12 | 5_cp312 6 KB conda-forge\n pytz-2024.1 | pyhd8ed1ab_0 184 KB conda-forge\n qhull-2020.2 | h434a139_5 540 KB conda-forge\n scipy-1.15.2 | py312ha707e6e_0 16.3 MB conda-forge\n seaborn-0.13.2 | hd8ed1ab_3 7 KB conda-forge\n seaborn-base-0.13.2 | pyhd8ed1ab_3 223 KB conda-forge\n six-1.17.0 | pyhd8ed1ab_0 16 KB conda-forge\n statsmodels-0.14.4 | py312hc0a28a1_0 11.5 MB conda-forge\n unicodedata2-16.0.0 | py312h66e93f0_0 395 KB conda-forge\n xorg-libxau-1.0.12 | hb9d3cd8_0 14 KB conda-forge\n xorg-libxdmcp-1.1.5 | hb9d3cd8_0 19 KB conda-forge\n zstd-1.5.6 | ha6fb4c9_0 542 KB conda-forge\n ------------------------------------------------------------\n Total: 119.4 MB\n\nThe following NEW packages will be INSTALLED:\n\n brotli conda-forge/linux-64::brotli-1.1.0-hb9d3cd8_2 \n brotli-bin conda-forge/linux-64::brotli-bin-1.1.0-hb9d3cd8_2 \n contourpy conda-forge/linux-64::contourpy-1.3.1-py312h68727a3_0 \n cycler conda-forge/noarch::cycler-0.12.1-pyhd8ed1ab_1 \n fonttools conda-forge/linux-64::fonttools-4.56.0-py312h178313f_0 \n freetype conda-forge/linux-64::freetype-2.12.1-h267a509_2 \n kiwisolver conda-forge/linux-64::kiwisolver-1.4.8-py312h84d6215_0 \n lcms2 conda-forge/linux-64::lcms2-2.17-h717163a_0 \n lerc conda-forge/linux-64::lerc-4.0.0-h27087fc_0 \n libblas conda-forge/linux-64::libblas-3.9.0-28_h59b9bed_openblas \n libbrotlicommon conda-forge/linux-64::libbrotlicommon-1.1.0-hb9d3cd8_2 \n libbrotlidec conda-forge/linux-64::libbrotlidec-1.1.0-hb9d3cd8_2 \n libbrotlienc conda-forge/linux-64::libbrotlienc-1.1.0-hb9d3cd8_2 \n libcblas conda-forge/linux-64::libcblas-3.9.0-28_he106b2a_openblas \n libdeflate conda-forge/linux-64::libdeflate-1.23-h4ddbbb0_0 \n libgfortran 
conda-forge/linux-64::libgfortran-14.2.0-h69a702a_1 \n libgfortran5 conda-forge/linux-64::libgfortran5-14.2.0-hd5240d6_1 \n libjpeg-turbo conda-forge/linux-64::libjpeg-turbo-3.0.0-hd590300_1 \n liblapack conda-forge/linux-64::liblapack-3.9.0-28_h7ac8fdf_openblas \n libopenblas conda-forge/linux-64::libopenblas-0.3.28-pthreads_h94d23a6_1 \n libpng conda-forge/linux-64::libpng-1.6.47-h943b412_0 \n libstdcxx conda-forge/linux-64::libstdcxx-14.2.0-hc0a3c3a_1 \n libstdcxx-ng conda-forge/linux-64::libstdcxx-ng-14.2.0-h4852527_1 \n libtiff conda-forge/linux-64::libtiff-4.7.0-hd9ff511_3 \n libwebp-base conda-forge/linux-64::libwebp-base-1.5.0-h851e524_0 \n libxcb conda-forge/linux-64::libxcb-1.17.0-h8a09558_0 \n matplotlib-base conda-forge/linux-64::matplotlib-base-3.10.0-py312hd3ec401_0 \n munkres conda-forge/noarch::munkres-1.1.4-pyh9f0ad1d_0 \n numpy conda-forge/linux-64::numpy-2.2.3-py312h72c5963_0 \n openjpeg conda-forge/linux-64::openjpeg-2.5.3-h5fbd93e_0 \n packaging conda-forge/noarch::packaging-24.2-pyhd8ed1ab_2 \n pandas conda-forge/linux-64::pandas-2.2.3-py312hf9745cd_1 \n patsy conda-forge/noarch::patsy-1.0.1-pyhd8ed1ab_1 \n pillow conda-forge/linux-64::pillow-11.1.0-py312h80c1187_0 \n pthread-stubs conda-forge/linux-64::pthread-stubs-0.4-hb9d3cd8_1002 \n pyparsing conda-forge/noarch::pyparsing-3.2.1-pyhd8ed1ab_0 \n python-dateutil conda-forge/noarch::python-dateutil-2.9.0.post0-pyhff2d567_1 \n python-tzdata conda-forge/noarch::python-tzdata-2025.1-pyhd8ed1ab_0 \n python_abi conda-forge/linux-64::python_abi-3.12-5_cp312 \n pytz conda-forge/noarch::pytz-2024.1-pyhd8ed1ab_0 \n qhull conda-forge/linux-64::qhull-2020.2-h434a139_5 \n scipy conda-forge/linux-64::scipy-1.15.2-py312ha707e6e_0 \n seaborn conda-forge/noarch::seaborn-0.13.2-hd8ed1ab_3 \n seaborn-base conda-forge/noarch::seaborn-base-0.13.2-pyhd8ed1ab_3 \n six conda-forge/noarch::six-1.17.0-pyhd8ed1ab_0 \n statsmodels conda-forge/linux-64::statsmodels-0.14.4-py312hc0a28a1_0 \n unicodedata2 
conda-forge/linux-64::unicodedata2-16.0.0-py312h66e93f0_0 \n xorg-libxau conda-forge/linux-64::xorg-libxau-1.0.12-hb9d3cd8_0 \n xorg-libxdmcp conda-forge/linux-64::xorg-libxdmcp-1.1.5-hb9d3cd8_0 \n zstd conda-forge/linux-64::zstd-1.5.6-ha6fb4c9_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Install a specific version of a package", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "install seaborn=0.13.2" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988731396, + "endTime" : 1739988749603, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - defaults\n - conda-forge\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/mychannelenv\n\n added / updated specs:\n - seaborn=0.13.2\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n blas-2.128 | openblas 16 KB conda-forge\n blas-devel-3.9.0 |28_h1ea3ea9_openblas 16 KB conda-forge\n liblapacke-3.9.0 |28_he2f377e_openblas 16 KB conda-forge\n numpy-1.26.4 | py312h2809609_0 10 KB\n numpy-base-1.26.4 | py312he1a6c75_0 7.7 MB\n openblas-0.3.28 |pthreads_h6ec200e_1 5.5 MB conda-forge\n seaborn-0.13.2 | py312h06a4308_0 723 KB\n seaborn-base-0.13.2 | pyhd8ed1ab_0 229 KB conda-forge\n ------------------------------------------------------------\n Total: 14.1 MB\n\nThe following NEW packages will be INSTALLED:\n\n blas conda-forge/linux-64::blas-2.128-openblas \n blas-devel conda-forge/linux-64::blas-devel-3.9.0-28_h1ea3ea9_openblas \n liblapacke conda-forge/linux-64::liblapacke-3.9.0-28_he2f377e_openblas \n numpy-base pkgs/main/linux-64::numpy-base-1.26.4-py312he1a6c75_0 \n openblas conda-forge/linux-64::openblas-0.3.28-pthreads_h6ec200e_1 \n\nThe following packages will be SUPERSEDED by a higher-priority channel:\n\n numpy conda-forge::numpy-2.2.3-py312h72c596~ --> pkgs/main::numpy-1.26.4-py312h2809609_0 \n seaborn conda-forge/noarch::seaborn-0.13.2-hd~ --> pkgs/main/linux-64::seaborn-0.13.2-py312h06a4308_0 \n\nThe following packages will be DOWNGRADED:\n\n seaborn-base 0.13.2-pyhd8ed1ab_3 --> 0.13.2-pyhd8ed1ab_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... 
done\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "", + "## Section 2: Create a Conda Environment and Upload to Object Storage", + "", + "", + "This section demonstrates creating two conda environments and uploading them to Object Storage. An environment saved in Object Storage can be used by all of the Autonomous Database users in the database, and will remain in Object Storage until ADMIN deletes it.", + "", + "As ADMIN user:", + "", + "- Create one environment for Python packages named *mypyenv*, and one for R packages named *myrenv*.", + "- Install the *tensorflow* and *seaborn* packages into the Python environment and the *forecast* and *ggplot2* packages into the R environment.", + "- Upload the environments to Object Storage. ", + "- Delete the environments from Object Storage. " + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988750089, + "endTime" : 1739988750539, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

<h2>Section 2: Create a Conda Environment and Upload to Object Storage</h2>\n<p>This section demonstrates creating two conda environments and uploading them to Object Storage. An environment saved in Object Storage can be used by all of the Autonomous Database users in the database, and will remain in Object Storage until ADMIN deletes it.</p>\n<p>As ADMIN user:</p>\n<ul>\n<li>Create one environment for Python packages named <em>mypyenv</em>, and one for R packages named <em>myrenv</em>.</li>\n<li>Install the <em>tensorflow</em> and <em>seaborn</em> packages into the Python environment and the <em>forecast</em> and <em>ggplot2</em> packages into the R environment.</li>\n<li>Upload the environments to Object Storage.</li>\n<li>Delete the environments from Object Storage.</li>\n</ul>
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "Create a conda environment named *mypyenv* with Python 3.12 for OML4Py compatibility and install the `tensorflow` and `seaborn` packages." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988751018, + "endTime" : 1739988751468, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

<p>Create a conda environment named <em>mypyenv</em> with Python 3.12 for OML4Py compatibility and install the <code>tensorflow</code> and <code>seaborn</code> packages.</p>
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create Python conda environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "create -n mypyenv -c conda-forge --strict-channel-priority python=3.12.6 tensorflow seaborn" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":500,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988751944, + "endTime" : 1739988812127, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - conda-forge\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/mypyenv\n\n added / updated specs:\n - python=3.12.6\n - seaborn\n - tensorflow\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n _openmp_mutex-4.5 | 2_kmp_llvm 6 KB conda-forge\n absl-py-2.1.0 | pyhd8ed1ab_1 105 KB conda-forge\n astunparse-1.6.3 | pyhd8ed1ab_3 18 KB conda-forge\n brotli-python-1.1.0 | py312h2ec8cdc_2 342 KB conda-forge\n c-ares-1.34.4 | hb9d3cd8_0 201 KB conda-forge\n cached-property-1.5.2 | hd8ed1ab_1 4 KB conda-forge\n cached_property-1.5.2 | pyha770c72_1 11 KB conda-forge\n certifi-2025.1.31 | pyhd8ed1ab_0 159 KB conda-forge\n cffi-1.17.1 | py312h06ac9bb_0 288 KB conda-forge\n charset-normalizer-3.4.1 | pyhd8ed1ab_0 46 KB conda-forge\n flatbuffers-24.12.23 | h8f4948b_0 1.5 MB conda-forge\n gast-0.6.0 | pyhd8ed1ab_0 24 KB conda-forge\n giflib-5.2.2 | hd590300_0 75 KB conda-forge\n google-pasta-0.2.0 | pyhd8ed1ab_2 48 KB conda-forge\n grpcio-1.67.1 | py312hacea422_1 879 KB conda-forge\n h2-4.2.0 | pyhd8ed1ab_0 53 KB conda-forge\n h5py-3.13.0 |nompi_py312hedeef09_100 1.3 MB conda-forge\n hdf5-1.14.3 |nompi_h2d575fe_109 3.7 MB conda-forge\n hpack-4.1.0 | pyhd8ed1ab_0 30 KB conda-forge\n hyperframe-6.1.0 | pyhd8ed1ab_0 17 KB conda-forge\n icu-75.1 | he02047a_0 11.6 MB conda-forge\n idna-3.10 | pyhd8ed1ab_1 49 KB conda-forge\n importlib-metadata-8.6.1 | pyha770c72_0 28 KB conda-forge\n keras-3.8.0 | pyh753f3f9_0 716 KB conda-forge\n keyutils-1.6.1 | h166bdaf_0 115 KB conda-forge\n krb5-1.21.3 | h659f571_0 1.3 MB conda-forge\n libabseil-20240722.0 | cxx17_hbbce691_4 1.3 MB conda-forge\n libaec-1.1.3 | h59595ed_0 35 KB conda-forge\n libblas-3.9.0 |30_h59b9bed_openblas 17 KB conda-forge\n libcblas-3.9.0 |30_he106b2a_openblas 16 KB conda-forge\n libcurl-8.12.1 | h332b0f4_0 417 KB conda-forge\n libedit-3.1.20250104 | pl5321h7949ede_0 132 KB conda-forge\n libev-4.33 | hd590300_2 110 KB conda-forge\n libgrpc-1.67.1 | 
h25350d4_1 7.4 MB conda-forge\n libhwloc-2.11.2 |default_h0d58e46_1001 2.3 MB conda-forge\n libiconv-1.18 | h4ce23a2_0 697 KB conda-forge\n liblapack-3.9.0 |30_h7ac8fdf_openblas 16 KB conda-forge\n libnghttp2-1.64.0 | h161d5f1_0 632 KB conda-forge\n libopenblas-0.3.29 |pthreads_h94d23a6_0 5.6 MB conda-forge\n libprotobuf-5.28.3 | h6128344_1 2.8 MB conda-forge\n libre2-11-2024.07.02 | hbbce691_2 205 KB conda-forge\n libssh2-1.11.1 | hf672d98_0 297 KB conda-forge\n libxml2-2.13.6 | h8d12d68_0 674 KB conda-forge\n llvm-openmp-19.1.7 | h024ca30_0 3.0 MB conda-forge\n markdown-3.6 | pyhd8ed1ab_0 76 KB conda-forge\n markdown-it-py-3.0.0 | pyhd8ed1ab_1 63 KB conda-forge\n markupsafe-3.0.2 | py312h178313f_1 24 KB conda-forge\n mdurl-0.1.2 | pyhd8ed1ab_1 14 KB conda-forge\n mkl-2024.2.2 | ha957f24_16 118.9 MB conda-forge\n ml_dtypes-0.4.0 | py312hf9745cd_2 163 KB conda-forge\n namex-0.0.8 | pyhd8ed1ab_1 11 KB conda-forge\n opt_einsum-3.4.0 | pyhd8ed1ab_1 61 KB conda-forge\n optree-0.14.0 | py312h68727a3_1 362 KB conda-forge\n protobuf-5.28.3 | py312h2ec8cdc_0 454 KB conda-forge\n pycparser-2.22 | pyh29332c3_1 108 KB conda-forge\n pygments-2.19.1 | pyhd8ed1ab_0 868 KB conda-forge\n pysocks-1.7.1 | pyha55dd90_7 21 KB conda-forge\n python-flatbuffers-25.2.10 | pyhbc23db3_0 34 KB conda-forge\n re2-2024.07.02 | h9925aae_2 26 KB conda-forge\n requests-2.32.3 | pyhd8ed1ab_1 57 KB conda-forge\n rich-13.9.4 | pyhd8ed1ab_1 181 KB conda-forge\n snappy-1.2.1 | h8bd8927_1 42 KB conda-forge\n tbb-2021.13.0 | hceb3a55_1 172 KB conda-forge\n tensorboard-2.18.0 | pyhd8ed1ab_1 4.9 MB conda-forge\n tensorboard-data-server-0.7.0| py312hda17c39_2 3.4 MB conda-forge\n tensorflow-2.18.0 |cpu_py312h69ecde4_0 44 KB conda-forge\n tensorflow-base-2.18.0 |cpu_py312h099d1c6_0 150.7 MB conda-forge\n tensorflow-estimator-2.18.0|cpu_py312hc0a35a6_0 683 KB conda-forge\n termcolor-2.5.0 | pyhd8ed1ab_1 12 KB conda-forge\n typing-extensions-4.12.2 | hd8ed1ab_1 10 KB conda-forge\n typing_extensions-4.12.2 | 
pyha770c72_1 39 KB conda-forge\n urllib3-2.3.0 | pyhd8ed1ab_0 98 KB conda-forge\n werkzeug-3.1.3 | pyhd8ed1ab_1 238 KB conda-forge\n wrapt-1.17.2 | py312h66e93f0_0 62 KB conda-forge\n zipp-3.21.0 | pyhd8ed1ab_1 21 KB conda-forge\n zstandard-0.23.0 | py312hef9b889_1 410 KB conda-forge\n ------------------------------------------------------------\n Total: 330.4 MB\n\nThe following NEW packages will be INSTALLED:\n\n _libgcc_mutex conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge \n _openmp_mutex conda-forge/linux-64::_openmp_mutex-4.5-2_kmp_llvm \n absl-py conda-forge/noarch::absl-py-2.1.0-pyhd8ed1ab_1 \n astunparse conda-forge/noarch::astunparse-1.6.3-pyhd8ed1ab_3 \n brotli conda-forge/linux-64::brotli-1.1.0-hb9d3cd8_2 \n brotli-bin conda-forge/linux-64::brotli-bin-1.1.0-hb9d3cd8_2 \n brotli-python conda-forge/linux-64::brotli-python-1.1.0-py312h2ec8cdc_2 \n bzip2 conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7 \n c-ares conda-forge/linux-64::c-ares-1.34.4-hb9d3cd8_0 \n ca-certificates conda-forge/linux-64::ca-certificates-2025.1.31-hbcca054_0 \n cached-property conda-forge/noarch::cached-property-1.5.2-hd8ed1ab_1 \n cached_property conda-forge/noarch::cached_property-1.5.2-pyha770c72_1 \n certifi conda-forge/noarch::certifi-2025.1.31-pyhd8ed1ab_0 \n cffi conda-forge/linux-64::cffi-1.17.1-py312h06ac9bb_0 \n charset-normalizer conda-forge/noarch::charset-normalizer-3.4.1-pyhd8ed1ab_0 \n contourpy conda-forge/linux-64::contourpy-1.3.1-py312h68727a3_0 \n cycler conda-forge/noarch::cycler-0.12.1-pyhd8ed1ab_1 \n flatbuffers conda-forge/linux-64::flatbuffers-24.12.23-h8f4948b_0 \n fonttools conda-forge/linux-64::fonttools-4.56.0-py312h178313f_0 \n freetype conda-forge/linux-64::freetype-2.12.1-h267a509_2 \n gast conda-forge/noarch::gast-0.6.0-pyhd8ed1ab_0 \n giflib conda-forge/linux-64::giflib-5.2.2-hd590300_0 \n google-pasta conda-forge/noarch::google-pasta-0.2.0-pyhd8ed1ab_2 \n grpcio conda-forge/linux-64::grpcio-1.67.1-py312hacea422_1 \n h2 
conda-forge/noarch::h2-4.2.0-pyhd8ed1ab_0 \n h5py conda-forge/linux-64::h5py-3.13.0-nompi_py312hedeef09_100 \n hdf5 conda-forge/linux-64::hdf5-1.14.3-nompi_h2d575fe_109 \n hpack conda-forge/noarch::hpack-4.1.0-pyhd8ed1ab_0 \n hyperframe conda-forge/noarch::hyperframe-6.1.0-pyhd8ed1ab_0 \n icu conda-forge/linux-64::icu-75.1-he02047a_0 \n idna conda-forge/noarch::idna-3.10-pyhd8ed1ab_1 \n importlib-metadata conda-forge/noarch::importlib-metadata-8.6.1-pyha770c72_0 \n keras conda-forge/noarch::keras-3.8.0-pyh753f3f9_0 \n keyutils conda-forge/linux-64::keyutils-1.6.1-h166bdaf_0 \n kiwisolver conda-forge/linux-64::kiwisolver-1.4.8-py312h84d6215_0 \n krb5 conda-forge/linux-64::krb5-1.21.3-h659f571_0 \n lcms2 conda-forge/linux-64::lcms2-2.17-h717163a_0 \n ld_impl_linux-64 conda-forge/linux-64::ld_impl_linux-64-2.43-h712a8e2_2 \n lerc conda-forge/linux-64::lerc-4.0.0-h27087fc_0 \n libabseil conda-forge/linux-64::libabseil-20240722.0-cxx17_hbbce691_4 \n libaec conda-forge/linux-64::libaec-1.1.3-h59595ed_0 \n libblas conda-forge/linux-64::libblas-3.9.0-30_h59b9bed_openblas \n libbrotlicommon conda-forge/linux-64::libbrotlicommon-1.1.0-hb9d3cd8_2 \n libbrotlidec conda-forge/linux-64::libbrotlidec-1.1.0-hb9d3cd8_2 \n libbrotlienc conda-forge/linux-64::libbrotlienc-1.1.0-hb9d3cd8_2 \n libcblas conda-forge/linux-64::libcblas-3.9.0-30_he106b2a_openblas \n libcurl conda-forge/linux-64::libcurl-8.12.1-h332b0f4_0 \n libdeflate conda-forge/linux-64::libdeflate-1.23-h4ddbbb0_0 \n libedit conda-forge/linux-64::libedit-3.1.20250104-pl5321h7949ede_0 \n libev conda-forge/linux-64::libev-4.33-hd590300_2 \n libexpat conda-forge/linux-64::libexpat-2.6.4-h5888daf_0 \n libffi conda-forge/linux-64::libffi-3.4.6-h2dba641_0 \n libgcc conda-forge/linux-64::libgcc-14.2.0-h77fa898_1 \n libgcc-ng conda-forge/linux-64::libgcc-ng-14.2.0-h69a702a_1 \n libgfortran conda-forge/linux-64::libgfortran-14.2.0-h69a702a_1 \n libgfortran5 conda-forge/linux-64::libgfortran5-14.2.0-hd5240d6_1 \n libgrpc 
conda-forge/linux-64::libgrpc-1.67.1-h25350d4_1 \n libhwloc conda-forge/linux-64::libhwloc-2.11.2-default_h0d58e46_1001 \n libiconv conda-forge/linux-64::libiconv-1.18-h4ce23a2_0 \n libjpeg-turbo conda-forge/linux-64::libjpeg-turbo-3.0.0-hd590300_1 \n liblapack conda-forge/linux-64::liblapack-3.9.0-30_h7ac8fdf_openblas \n liblzma conda-forge/linux-64::liblzma-5.6.4-hb9d3cd8_0 \n liblzma-devel conda-forge/linux-64::liblzma-devel-5.6.4-hb9d3cd8_0 \n libnghttp2 conda-forge/linux-64::libnghttp2-1.64.0-h161d5f1_0 \n libnsl conda-forge/linux-64::libnsl-2.0.1-hd590300_0 \n libopenblas conda-forge/linux-64::libopenblas-0.3.29-pthreads_h94d23a6_0 \n libpng conda-forge/linux-64::libpng-1.6.47-h943b412_0 \n libprotobuf conda-forge/linux-64::libprotobuf-5.28.3-h6128344_1 \n libre2-11 conda-forge/linux-64::libre2-11-2024.07.02-hbbce691_2 \n libsqlite conda-forge/linux-64::libsqlite-3.49.1-hee588c1_1 \n libssh2 conda-forge/linux-64::libssh2-1.11.1-hf672d98_0 \n libstdcxx conda-forge/linux-64::libstdcxx-14.2.0-hc0a3c3a_1 \n libstdcxx-ng conda-forge/linux-64::libstdcxx-ng-14.2.0-h4852527_1 \n libtiff conda-forge/linux-64::libtiff-4.7.0-hd9ff511_3 \n libuuid conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0 \n libwebp-base conda-forge/linux-64::libwebp-base-1.5.0-h851e524_0 \n libxcb conda-forge/linux-64::libxcb-1.17.0-h8a09558_0 \n libxcrypt conda-forge/linux-64::libxcrypt-4.4.36-hd590300_1 \n libxml2 conda-forge/linux-64::libxml2-2.13.6-h8d12d68_0 \n libzlib conda-forge/linux-64::libzlib-1.3.1-hb9d3cd8_2 \n llvm-openmp conda-forge/linux-64::llvm-openmp-19.1.7-h024ca30_0 \n markdown conda-forge/noarch::markdown-3.6-pyhd8ed1ab_0 \n markdown-it-py conda-forge/noarch::markdown-it-py-3.0.0-pyhd8ed1ab_1 \n markupsafe conda-forge/linux-64::markupsafe-3.0.2-py312h178313f_1 \n matplotlib-base conda-forge/linux-64::matplotlib-base-3.10.0-py312hd3ec401_0 \n mdurl conda-forge/noarch::mdurl-0.1.2-pyhd8ed1ab_1 \n mkl conda-forge/linux-64::mkl-2024.2.2-ha957f24_16 \n ml_dtypes 
conda-forge/linux-64::ml_dtypes-0.4.0-py312hf9745cd_2 \n munkres conda-forge/noarch::munkres-1.1.4-pyh9f0ad1d_0 \n namex conda-forge/noarch::namex-0.0.8-pyhd8ed1ab_1 \n ncurses conda-forge/linux-64::ncurses-6.5-h2d0b736_3 \n numpy conda-forge/linux-64::numpy-2.2.3-py312h72c5963_0 \n openjpeg conda-forge/linux-64::openjpeg-2.5.3-h5fbd93e_0 \n openssl conda-forge/linux-64::openssl-3.4.1-h7b32b05_0 \n opt_einsum conda-forge/noarch::opt_einsum-3.4.0-pyhd8ed1ab_1 \n optree conda-forge/linux-64::optree-0.14.0-py312h68727a3_1 \n packaging conda-forge/noarch::packaging-24.2-pyhd8ed1ab_2 \n pandas conda-forge/linux-64::pandas-2.2.3-py312hf9745cd_1 \n patsy conda-forge/noarch::patsy-1.0.1-pyhd8ed1ab_1 \n pillow conda-forge/linux-64::pillow-11.1.0-py312h80c1187_0 \n pip conda-forge/noarch::pip-25.0.1-pyh8b19718_0 \n protobuf conda-forge/linux-64::protobuf-5.28.3-py312h2ec8cdc_0 \n pthread-stubs conda-forge/linux-64::pthread-stubs-0.4-hb9d3cd8_1002 \n pycparser conda-forge/noarch::pycparser-2.22-pyh29332c3_1 \n pygments conda-forge/noarch::pygments-2.19.1-pyhd8ed1ab_0 \n pyparsing conda-forge/noarch::pyparsing-3.2.1-pyhd8ed1ab_0 \n pysocks conda-forge/noarch::pysocks-1.7.1-pyha55dd90_7 \n python conda-forge/linux-64::python-3.12.6-hc5c86c4_2_cpython \n python-dateutil conda-forge/noarch::python-dateutil-2.9.0.post0-pyhff2d567_1 \n python-flatbuffers conda-forge/noarch::python-flatbuffers-25.2.10-pyhbc23db3_0 \n python-tzdata conda-forge/noarch::python-tzdata-2025.1-pyhd8ed1ab_0 \n python_abi conda-forge/linux-64::python_abi-3.12-5_cp312 \n pytz conda-forge/noarch::pytz-2024.1-pyhd8ed1ab_0 \n qhull conda-forge/linux-64::qhull-2020.2-h434a139_5 \n re2 conda-forge/linux-64::re2-2024.07.02-h9925aae_2 \n readline conda-forge/linux-64::readline-8.2-h8228510_1 \n requests conda-forge/noarch::requests-2.32.3-pyhd8ed1ab_1 \n rich conda-forge/noarch::rich-13.9.4-pyhd8ed1ab_1 \n scipy conda-forge/linux-64::scipy-1.15.2-py312ha707e6e_0 \n seaborn 
conda-forge/noarch::seaborn-0.13.2-hd8ed1ab_3 \n seaborn-base conda-forge/noarch::seaborn-base-0.13.2-pyhd8ed1ab_3 \n setuptools conda-forge/noarch::setuptools-75.8.0-pyhff2d567_0 \n six conda-forge/noarch::six-1.17.0-pyhd8ed1ab_0 \n snappy conda-forge/linux-64::snappy-1.2.1-h8bd8927_1 \n statsmodels conda-forge/linux-64::statsmodels-0.14.4-py312hc0a28a1_0 \n tbb conda-forge/linux-64::tbb-2021.13.0-hceb3a55_1 \n tensorboard conda-forge/noarch::tensorboard-2.18.0-pyhd8ed1ab_1 \n tensorboard-data-~ conda-forge/linux-64::tensorboard-data-server-0.7.0-py312hda17c39_2 \n tensorflow conda-forge/linux-64::tensorflow-2.18.0-cpu_py312h69ecde4_0 \n tensorflow-base conda-forge/linux-64::tensorflow-base-2.18.0-cpu_py312h099d1c6_0 \n tensorflow-estima~ conda-forge/linux-64::tensorflow-estimator-2.18.0-cpu_py312hc0a35a6_0 \n termcolor conda-forge/noarch::termcolor-2.5.0-pyhd8ed1ab_1 \n tk conda-forge/linux-64::tk-8.6.13-noxft_h4845f30_101 \n typing-extensions conda-forge/noarch::typing-extensions-4.12.2-hd8ed1ab_1 \n typing_extensions conda-forge/noarch::typing_extensions-4.12.2-pyha770c72_1 \n tzdata conda-forge/noarch::tzdata-2025a-h78e105d_0 \n unicodedata2 conda-forge/linux-64::unicodedata2-16.0.0-py312h66e93f0_0 \n urllib3 conda-forge/noarch::urllib3-2.3.0-pyhd8ed1ab_0 \n werkzeug conda-forge/noarch::werkzeug-3.1.3-pyhd8ed1ab_1 \n wheel conda-forge/noarch::wheel-0.45.1-pyhd8ed1ab_1 \n wrapt conda-forge/linux-64::wrapt-1.17.2-py312h66e93f0_0 \n xorg-libxau conda-forge/linux-64::xorg-libxau-1.0.12-hb9d3cd8_0 \n xorg-libxdmcp conda-forge/linux-64::xorg-libxdmcp-1.1.5-hb9d3cd8_0 \n xz conda-forge/linux-64::xz-5.6.4-hbcc6ac9_0 \n xz-gpl-tools conda-forge/linux-64::xz-gpl-tools-5.6.4-hbcc6ac9_0 \n xz-tools conda-forge/linux-64::xz-tools-5.6.4-hb9d3cd8_0 \n zipp conda-forge/noarch::zipp-3.21.0-pyhd8ed1ab_1 \n zstandard conda-forge/linux-64::zstandard-0.23.0-py312hef9b889_1 \n zstd conda-forge/linux-64::zstd-1.5.6-ha6fb4c9_0 \n\n\n\nDownloading and Extracting Packages: 
...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n#\n# To activate this environment, use\n#\n# $ conda activate mypyenv\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "Create a conda environment named *myrenv* with R-4.0.5 for OML4R compatibility and install the `forecast` and `ggplot2` packages." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988812620, + "endTime" : 1739988813073, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

<p>Create a conda environment named <em>myrenv</em> with R-4.0.5 for OML4R compatibility and install the <code>forecast</code> and <code>ggplot2</code> packages.</p>
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create R conda environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "create -n myrenv -c conda-forge --strict-channel-priority r-base=4.0.5 r-forecast r-ggplot2" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988813566, + "endTime" : 1739988849893, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Channels:\n - conda-forge\n - defaults\nPlatform: linux-64\nCollecting package metadata (repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /u01/.conda/envs/myrenv\n\n added / updated specs:\n - r-base=4.0.5\n - r-forecast\n - r-ggplot2\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n _r-mutex-1.0.1 | anacondar_1 3 KB conda-forge\n binutils_impl_linux-64-2.43| h4bf12b8_2 5.4 MB conda-forge\n bwidget-1.10.1 | ha770c72_0 126 KB conda-forge\n cairo-1.16.0 | ha61ee94_1014 1.5 MB conda-forge\n curl-7.86.0 | h2283fc2_1 91 KB conda-forge\n expat-2.6.4 | h5888daf_0 135 KB conda-forge\n font-ttf-dejavu-sans-mono-2.37| hab24e00_0 388 KB conda-forge\n font-ttf-inconsolata-3.000 | h77eed37_0 94 KB conda-forge\n font-ttf-source-code-pro-2.038| h77eed37_0 684 KB conda-forge\n font-ttf-ubuntu-0.83 | h77eed37_3 1.5 MB conda-forge\n fontconfig-2.14.2 | h14ed4e7_0 266 KB conda-forge\n fonts-conda-ecosystem-1 | 0 4 KB conda-forge\n fonts-conda-forge-1 | 0 4 KB conda-forge\n fribidi-1.0.10 | h36c2ea0_0 112 KB conda-forge\n gcc_impl_linux-64-14.2.0 | h6b349bd_1 69.1 MB conda-forge\n gettext-0.23.1 | h5888daf_0 473 KB conda-forge\n gettext-tools-0.23.1 | h5888daf_0 2.8 MB conda-forge\n gfortran_impl_linux-64-14.2.0| hc73f493_1 16.4 MB conda-forge\n graphite2-1.3.13 | h59595ed_1003 95 KB conda-forge\n gsl-2.7 | he838d99_0 3.2 MB conda-forge\n gxx_impl_linux-64-14.2.0 | h2c03514_1 13.7 MB conda-forge\n harfbuzz-6.0.0 | h8e241bc_0 1.2 MB conda-forge\n icu-70.1 | h27087fc_0 13.5 MB conda-forge\n jpeg-9e | h0b41bf4_3 235 KB conda-forge\n kernel-headers_linux-64-3.10.0| he073ed8_18 921 KB conda-forge\n krb5-1.19.3 | h08a2579_0 1.4 MB conda-forge\n libasprintf-0.23.1 | h8e693c7_0 42 KB conda-forge\n libasprintf-devel-0.23.1 | h8e693c7_0 33 KB conda-forge\n libcurl-7.86.0 | h2283fc2_1 351 KB conda-forge\n libdeflate-1.14 | h166bdaf_0 81 KB conda-forge\n libgcc-devel_linux-64-14.2.0| h41c2201_101 2.6 MB conda-forge\n libgettextpo-0.23.1 | h5888daf_0 163 KB conda-forge\n libgettextpo-devel-0.23.1 | h5888daf_0 36 
KB conda-forge\n libgfortran-ng-14.2.0 | h69a702a_1 53 KB conda-forge\n libglib-2.78.1 | hebfc3b9_0 2.6 MB conda-forge\n libhwloc-2.9.1 | hd6dc26d_0 2.5 MB conda-forge\n libnghttp2-1.58.0 | h47da74e_1 617 KB conda-forge\n libpng-1.6.43 | h2797004_0 281 KB conda-forge\n libsanitizer-14.2.0 | h2a3dede_1 4.3 MB conda-forge\n libssh2-1.11.0 | h0841786_0 265 KB conda-forge\n libstdcxx-devel_linux-64-14.2.0| h41c2201_101 12.9 MB conda-forge\n libtiff-4.4.0 | h82bc61c_5 473 KB conda-forge\n libxcb-1.13 | h7f98852_1004 391 KB conda-forge\n libxml2-2.10.3 | hca2bb57_4 697 KB conda-forge\n libzlib-1.2.13 | h4ab18f5_6 60 KB conda-forge\n make-4.4.1 | hb9d3cd8_2 501 KB conda-forge\n pango-1.50.14 | hd33c08f_0 427 KB conda-forge\n pcre2-10.40 | hc3806b6_0 2.3 MB conda-forge\n pixman-0.44.2 | h29eaf8c_0 372 KB conda-forge\n r-backports-1.4.1 | r40hcfec24a_0 110 KB conda-forge\n r-base-4.0.5 | hb87df5d_8 23.8 MB conda-forge\n r-brio-1.1.3 | r40hcfec24a_0 40 KB conda-forge\n r-callr-3.7.2 | r40hc72bb7e_0 427 KB conda-forge\n r-cli-3.4.1 | r40h7525677_0 1.2 MB conda-forge\n r-colorspace-2.0_3 | r40h06615bd_0 2.5 MB conda-forge\n r-crayon-1.5.1 | r40hc72bb7e_0 164 KB conda-forge\n r-curl-4.3.2 | r40hcfec24a_0 777 KB conda-forge\n r-desc-1.4.2 | r40hc72bb7e_0 332 KB conda-forge\n r-diffobj-0.3.5 | r40hcfec24a_0 1.0 MB conda-forge\n r-digest-0.6.29 | r40h03ef668_0 208 KB conda-forge\n r-ellipsis-0.3.2 | r40hcfec24a_0 42 KB conda-forge\n r-evaluate-0.16 | r40hc72bb7e_0 85 KB conda-forge\n r-fansi-1.0.3 | r40h06615bd_0 323 KB conda-forge\n r-farver-2.1.1 | r40h7525677_0 1.4 MB conda-forge\n r-forecast-8.17.0 | r40h37cf8d7_0 1.5 MB conda-forge\n r-fracdiff-1.5_1 | r40hb699f27_1 116 KB conda-forge\n r-fs-1.5.2 | r40h7525677_1 554 KB conda-forge\n r-generics-0.1.3 | r40hc72bb7e_0 93 KB conda-forge\n r-ggplot2-3.3.6 | r40hc72bb7e_0 4.0 MB conda-forge\n r-glue-1.6.2 | r40h06615bd_0 154 KB conda-forge\n r-gtable-0.3.1 | r40hc72bb7e_0 174 KB conda-forge\n r-isoband-0.2.5 | r40h03ef668_0 1.8 MB 
conda-forge\n r-jsonlite-1.8.0 | r40h06615bd_0 960 KB conda-forge\n r-labeling-0.4.2 | r40hc72bb7e_1 67 KB conda-forge\n r-lattice-0.20_45 | r40hcfec24a_0 1.1 MB conda-forge\n r-lifecycle-1.0.2 | r40hc72bb7e_0 110 KB conda-forge\n r-lmtest-0.9_40 | r40h8da6f51_0 413 KB conda-forge\n r-magrittr-2.0.3 | r40h06615bd_0 215 KB conda-forge\n r-mass-7.3_58.1 | r40h06615bd_0 1.1 MB conda-forge\n r-matrix-1.4_1 | r40h0154571_0 4.8 MB conda-forge\n r-mgcv-1.8_40 | r40h0154571_0 3.0 MB conda-forge\n r-munsell-0.5.0 | r40hc72bb7e_1004 247 KB conda-forge\n r-nlme-3.1_159 | r40h8da6f51_0 2.3 MB conda-forge\n r-nnet-7.3_17 | r40hcfec24a_0 132 KB conda-forge\n r-pillar-1.8.1 | r40hc72bb7e_0 672 KB conda-forge\n r-pkgconfig-2.0.3 | r40hc72bb7e_1 25 KB conda-forge\n r-pkgload-1.3.0 | r40hc72bb7e_0 191 KB conda-forge\n r-praise-1.0.0 | r40hc72bb7e_1005 24 KB conda-forge\n r-processx-3.7.0 | r40h06615bd_0 330 KB conda-forge\n r-ps-1.7.1 | r40h06615bd_0 323 KB conda-forge\n r-quadprog-1.5_8 | r40h742201e_3 47 KB conda-forge\n r-quantmod-0.4.20 | r40hc72bb7e_0 1.0 MB conda-forge\n r-r6-2.5.1 | r40hc72bb7e_0 89 KB conda-forge\n r-rcolorbrewer-1.1_3 | r40h785f33e_0 65 KB conda-forge\n r-rcpp-1.0.9 | r40h7525677_1 2.0 MB conda-forge\n r-rcpparmadillo-0.11.2.3.1 | r40h9f5de39_0 950 KB conda-forge\n r-rematch2-2.1.2 | r40hc72bb7e_1 52 KB conda-forge\n r-rlang-1.0.6 | r40h7525677_0 1.5 MB conda-forge\n r-rprojroot-2.0.3 | r40hc72bb7e_0 115 KB conda-forge\n r-scales-1.2.1 | r40hc72bb7e_0 611 KB conda-forge\n r-testthat-3.1.4 | r40h7525677_0 1.6 MB conda-forge\n r-tibble-3.1.8 | r40h06615bd_0 695 KB conda-forge\n r-timedate-4021.104 | r40hc72bb7e_0 1.6 MB conda-forge\n r-tseries-0.10_51 | r40h1463581_0 391 KB conda-forge\n r-ttr-0.24.3 | r40h06615bd_0 531 KB conda-forge\n r-urca-1.3_0 | r40h8da6f51_1006 1.1 MB conda-forge\n r-utf8-1.2.2 | r40hcfec24a_0 162 KB conda-forge\n r-vctrs-0.4.1 | r40h7525677_0 1.2 MB conda-forge\n r-viridislite-0.4.1 | r40hc72bb7e_0 1.3 MB conda-forge\n r-waldo-0.4.0 | 
r40hc72bb7e_0 109 KB conda-forge\n r-withr-2.5.0 | r40hc72bb7e_0 240 KB conda-forge\n r-xts-0.12.1 | r40h06615bd_0 906 KB conda-forge\n r-zoo-1.8_11 | r40h06615bd_0 1015 KB conda-forge\n sed-4.8 | he412f7d_0 264 KB conda-forge\n sysroot_linux-64-2.17 | h0157908_18 14.5 MB conda-forge\n tbb-2021.9.0 | hf52228f_0 1.5 MB conda-forge\n tktable-2.10 | h8bc8fbc_6 89 KB conda-forge\n xorg-kbproto-1.0.7 | hb9d3cd8_1003 30 KB conda-forge\n xorg-libice-1.0.10 | h7f98852_0 58 KB conda-forge\n xorg-libsm-1.2.3 | hd9c2040_1000 26 KB conda-forge\n xorg-libx11-1.8.4 | h0b41bf4_0 810 KB conda-forge\n xorg-libxext-1.3.4 | h0b41bf4_2 49 KB conda-forge\n xorg-libxrender-0.9.10 | h7f98852_1003 32 KB conda-forge\n xorg-libxt-1.3.0 | hd590300_0 371 KB conda-forge\n xorg-renderproto-0.11.1 | hb9d3cd8_1003 12 KB conda-forge\n xorg-xextproto-7.3.0 | hb9d3cd8_1004 30 KB conda-forge\n xorg-xproto-7.0.31 | hb9d3cd8_1008 72 KB conda-forge\n zlib-1.2.13 | h4ab18f5_6 91 KB conda-forge\n ------------------------------------------------------------\n Total: 256.9 MB\n\nThe following NEW packages will be INSTALLED:\n\n _libgcc_mutex conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge \n _openmp_mutex conda-forge/linux-64::_openmp_mutex-4.5-2_kmp_llvm \n _r-mutex conda-forge/noarch::_r-mutex-1.0.1-anacondar_1 \n binutils_impl_lin~ conda-forge/linux-64::binutils_impl_linux-64-2.43-h4bf12b8_2 \n bwidget conda-forge/linux-64::bwidget-1.10.1-ha770c72_0 \n bzip2 conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7 \n c-ares conda-forge/linux-64::c-ares-1.34.4-hb9d3cd8_0 \n ca-certificates conda-forge/linux-64::ca-certificates-2025.1.31-hbcca054_0 \n cairo conda-forge/linux-64::cairo-1.16.0-ha61ee94_1014 \n curl conda-forge/linux-64::curl-7.86.0-h2283fc2_1 \n expat conda-forge/linux-64::expat-2.6.4-h5888daf_0 \n font-ttf-dejavu-s~ conda-forge/noarch::font-ttf-dejavu-sans-mono-2.37-hab24e00_0 \n font-ttf-inconsol~ conda-forge/noarch::font-ttf-inconsolata-3.000-h77eed37_0 \n font-ttf-source-c~ 
conda-forge/noarch::font-ttf-source-code-pro-2.038-h77eed37_0 \n font-ttf-ubuntu conda-forge/noarch::font-ttf-ubuntu-0.83-h77eed37_3 \n fontconfig conda-forge/linux-64::fontconfig-2.14.2-h14ed4e7_0 \n fonts-conda-ecosy~ conda-forge/noarch::fonts-conda-ecosystem-1-0 \n fonts-conda-forge conda-forge/noarch::fonts-conda-forge-1-0 \n freetype conda-forge/linux-64::freetype-2.12.1-h267a509_2 \n fribidi conda-forge/linux-64::fribidi-1.0.10-h36c2ea0_0 \n gcc_impl_linux-64 conda-forge/linux-64::gcc_impl_linux-64-14.2.0-h6b349bd_1 \n gettext conda-forge/linux-64::gettext-0.23.1-h5888daf_0 \n gettext-tools conda-forge/linux-64::gettext-tools-0.23.1-h5888daf_0 \n gfortran_impl_lin~ conda-forge/linux-64::gfortran_impl_linux-64-14.2.0-hc73f493_1 \n graphite2 conda-forge/linux-64::graphite2-1.3.13-h59595ed_1003 \n gsl conda-forge/linux-64::gsl-2.7-he838d99_0 \n gxx_impl_linux-64 conda-forge/linux-64::gxx_impl_linux-64-14.2.0-h2c03514_1 \n harfbuzz conda-forge/linux-64::harfbuzz-6.0.0-h8e241bc_0 \n icu conda-forge/linux-64::icu-70.1-h27087fc_0 \n jpeg conda-forge/linux-64::jpeg-9e-h0b41bf4_3 \n kernel-headers_li~ conda-forge/noarch::kernel-headers_linux-64-3.10.0-he073ed8_18 \n keyutils conda-forge/linux-64::keyutils-1.6.1-h166bdaf_0 \n krb5 conda-forge/linux-64::krb5-1.19.3-h08a2579_0 \n ld_impl_linux-64 conda-forge/linux-64::ld_impl_linux-64-2.43-h712a8e2_2 \n lerc conda-forge/linux-64::lerc-4.0.0-h27087fc_0 \n libasprintf conda-forge/linux-64::libasprintf-0.23.1-h8e693c7_0 \n libasprintf-devel conda-forge/linux-64::libasprintf-devel-0.23.1-h8e693c7_0 \n libblas conda-forge/linux-64::libblas-3.9.0-30_h59b9bed_openblas \n libcblas conda-forge/linux-64::libcblas-3.9.0-30_he106b2a_openblas \n libcurl conda-forge/linux-64::libcurl-7.86.0-h2283fc2_1 \n libdeflate conda-forge/linux-64::libdeflate-1.14-h166bdaf_0 \n libedit conda-forge/linux-64::libedit-3.1.20250104-pl5321h7949ede_0 \n libev conda-forge/linux-64::libev-4.33-hd590300_2 \n libexpat 
conda-forge/linux-64::libexpat-2.6.4-h5888daf_0 \n libffi conda-forge/linux-64::libffi-3.4.6-h2dba641_0 \n libgcc conda-forge/linux-64::libgcc-14.2.0-h77fa898_1 \n libgcc-devel_linu~ conda-forge/noarch::libgcc-devel_linux-64-14.2.0-h41c2201_101 \n libgcc-ng conda-forge/linux-64::libgcc-ng-14.2.0-h69a702a_1 \n libgettextpo conda-forge/linux-64::libgettextpo-0.23.1-h5888daf_0 \n libgettextpo-devel conda-forge/linux-64::libgettextpo-devel-0.23.1-h5888daf_0 \n libgfortran conda-forge/linux-64::libgfortran-14.2.0-h69a702a_1 \n libgfortran-ng conda-forge/linux-64::libgfortran-ng-14.2.0-h69a702a_1 \n libgfortran5 conda-forge/linux-64::libgfortran5-14.2.0-hd5240d6_1 \n libglib conda-forge/linux-64::libglib-2.78.1-hebfc3b9_0 \n libgomp conda-forge/linux-64::libgomp-14.2.0-h77fa898_1 \n libhwloc conda-forge/linux-64::libhwloc-2.9.1-hd6dc26d_0 \n libiconv conda-forge/linux-64::libiconv-1.18-h4ce23a2_0 \n liblapack conda-forge/linux-64::liblapack-3.9.0-30_h7ac8fdf_openblas \n liblzma conda-forge/linux-64::liblzma-5.6.4-hb9d3cd8_0 \n liblzma-devel conda-forge/linux-64::liblzma-devel-5.6.4-hb9d3cd8_0 \n libnghttp2 conda-forge/linux-64::libnghttp2-1.58.0-h47da74e_1 \n libopenblas conda-forge/linux-64::libopenblas-0.3.29-pthreads_h94d23a6_0 \n libpng conda-forge/linux-64::libpng-1.6.43-h2797004_0 \n libsanitizer conda-forge/linux-64::libsanitizer-14.2.0-h2a3dede_1 \n libssh2 conda-forge/linux-64::libssh2-1.11.0-h0841786_0 \n libstdcxx conda-forge/linux-64::libstdcxx-14.2.0-hc0a3c3a_1 \n libstdcxx-devel_l~ conda-forge/noarch::libstdcxx-devel_linux-64-14.2.0-h41c2201_101 \n libstdcxx-ng conda-forge/linux-64::libstdcxx-ng-14.2.0-h4852527_1 \n libtiff conda-forge/linux-64::libtiff-4.4.0-h82bc61c_5 \n libuuid conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0 \n libwebp-base conda-forge/linux-64::libwebp-base-1.5.0-h851e524_0 \n libxcb conda-forge/linux-64::libxcb-1.13-h7f98852_1004 \n libxml2 conda-forge/linux-64::libxml2-2.10.3-hca2bb57_4 \n libzlib 
conda-forge/linux-64::libzlib-1.2.13-h4ab18f5_6 \n llvm-openmp conda-forge/linux-64::llvm-openmp-19.1.7-h024ca30_0 \n make conda-forge/linux-64::make-4.4.1-hb9d3cd8_2 \n mkl conda-forge/linux-64::mkl-2024.2.2-ha957f24_16 \n ncurses conda-forge/linux-64::ncurses-6.5-h2d0b736_3 \n openssl conda-forge/linux-64::openssl-3.4.1-h7b32b05_0 \n pango conda-forge/linux-64::pango-1.50.14-hd33c08f_0 \n pcre2 conda-forge/linux-64::pcre2-10.40-hc3806b6_0 \n pixman conda-forge/linux-64::pixman-0.44.2-h29eaf8c_0 \n pthread-stubs conda-forge/linux-64::pthread-stubs-0.4-hb9d3cd8_1002 \n r-backports conda-forge/linux-64::r-backports-1.4.1-r40hcfec24a_0 \n r-base conda-forge/linux-64::r-base-4.0.5-hb87df5d_8 \n r-brio conda-forge/linux-64::r-brio-1.1.3-r40hcfec24a_0 \n r-callr conda-forge/noarch::r-callr-3.7.2-r40hc72bb7e_0 \n r-cli conda-forge/linux-64::r-cli-3.4.1-r40h7525677_0 \n r-colorspace conda-forge/linux-64::r-colorspace-2.0_3-r40h06615bd_0 \n r-crayon conda-forge/noarch::r-crayon-1.5.1-r40hc72bb7e_0 \n r-curl conda-forge/linux-64::r-curl-4.3.2-r40hcfec24a_0 \n r-desc conda-forge/noarch::r-desc-1.4.2-r40hc72bb7e_0 \n r-diffobj conda-forge/linux-64::r-diffobj-0.3.5-r40hcfec24a_0 \n r-digest conda-forge/linux-64::r-digest-0.6.29-r40h03ef668_0 \n r-ellipsis conda-forge/linux-64::r-ellipsis-0.3.2-r40hcfec24a_0 \n r-evaluate conda-forge/noarch::r-evaluate-0.16-r40hc72bb7e_0 \n r-fansi conda-forge/linux-64::r-fansi-1.0.3-r40h06615bd_0 \n r-farver conda-forge/linux-64::r-farver-2.1.1-r40h7525677_0 \n r-forecast conda-forge/linux-64::r-forecast-8.17.0-r40h37cf8d7_0 \n r-fracdiff conda-forge/linux-64::r-fracdiff-1.5_1-r40hb699f27_1 \n r-fs conda-forge/linux-64::r-fs-1.5.2-r40h7525677_1 \n r-generics conda-forge/noarch::r-generics-0.1.3-r40hc72bb7e_0 \n r-ggplot2 conda-forge/noarch::r-ggplot2-3.3.6-r40hc72bb7e_0 \n r-glue conda-forge/linux-64::r-glue-1.6.2-r40h06615bd_0 \n r-gtable conda-forge/noarch::r-gtable-0.3.1-r40hc72bb7e_0 \n r-isoband 
conda-forge/linux-64::r-isoband-0.2.5-r40h03ef668_0 \n r-jsonlite conda-forge/linux-64::r-jsonlite-1.8.0-r40h06615bd_0 \n r-labeling conda-forge/noarch::r-labeling-0.4.2-r40hc72bb7e_1 \n r-lattice conda-forge/linux-64::r-lattice-0.20_45-r40hcfec24a_0 \n r-lifecycle conda-forge/noarch::r-lifecycle-1.0.2-r40hc72bb7e_0 \n r-lmtest conda-forge/linux-64::r-lmtest-0.9_40-r40h8da6f51_0 \n r-magrittr conda-forge/linux-64::r-magrittr-2.0.3-r40h06615bd_0 \n r-mass conda-forge/linux-64::r-mass-7.3_58.1-r40h06615bd_0 \n r-matrix conda-forge/linux-64::r-matrix-1.4_1-r40h0154571_0 \n r-mgcv conda-forge/linux-64::r-mgcv-1.8_40-r40h0154571_0 \n r-munsell conda-forge/noarch::r-munsell-0.5.0-r40hc72bb7e_1004 \n r-nlme conda-forge/linux-64::r-nlme-3.1_159-r40h8da6f51_0 \n r-nnet conda-forge/linux-64::r-nnet-7.3_17-r40hcfec24a_0 \n r-pillar conda-forge/noarch::r-pillar-1.8.1-r40hc72bb7e_0 \n r-pkgconfig conda-forge/noarch::r-pkgconfig-2.0.3-r40hc72bb7e_1 \n r-pkgload conda-forge/noarch::r-pkgload-1.3.0-r40hc72bb7e_0 \n r-praise conda-forge/noarch::r-praise-1.0.0-r40hc72bb7e_1005 \n r-processx conda-forge/linux-64::r-processx-3.7.0-r40h06615bd_0 \n r-ps conda-forge/linux-64::r-ps-1.7.1-r40h06615bd_0 \n r-quadprog conda-forge/linux-64::r-quadprog-1.5_8-r40h742201e_3 \n r-quantmod conda-forge/noarch::r-quantmod-0.4.20-r40hc72bb7e_0 \n r-r6 conda-forge/noarch::r-r6-2.5.1-r40hc72bb7e_0 \n r-rcolorbrewer conda-forge/noarch::r-rcolorbrewer-1.1_3-r40h785f33e_0 \n r-rcpp conda-forge/linux-64::r-rcpp-1.0.9-r40h7525677_1 \n r-rcpparmadillo conda-forge/linux-64::r-rcpparmadillo-0.11.2.3.1-r40h9f5de39_0 \n r-rematch2 conda-forge/noarch::r-rematch2-2.1.2-r40hc72bb7e_1 \n r-rlang conda-forge/linux-64::r-rlang-1.0.6-r40h7525677_0 \n r-rprojroot conda-forge/noarch::r-rprojroot-2.0.3-r40hc72bb7e_0 \n r-scales conda-forge/noarch::r-scales-1.2.1-r40hc72bb7e_0 \n r-testthat conda-forge/linux-64::r-testthat-3.1.4-r40h7525677_0 \n r-tibble conda-forge/linux-64::r-tibble-3.1.8-r40h06615bd_0 \n r-timedate 
conda-forge/noarch::r-timedate-4021.104-r40hc72bb7e_0 \n r-tseries conda-forge/linux-64::r-tseries-0.10_51-r40h1463581_0 \n r-ttr conda-forge/linux-64::r-ttr-0.24.3-r40h06615bd_0 \n r-urca conda-forge/linux-64::r-urca-1.3_0-r40h8da6f51_1006 \n r-utf8 conda-forge/linux-64::r-utf8-1.2.2-r40hcfec24a_0 \n r-vctrs conda-forge/linux-64::r-vctrs-0.4.1-r40h7525677_0 \n r-viridislite conda-forge/noarch::r-viridislite-0.4.1-r40hc72bb7e_0 \n r-waldo conda-forge/noarch::r-waldo-0.4.0-r40hc72bb7e_0 \n r-withr conda-forge/noarch::r-withr-2.5.0-r40hc72bb7e_0 \n r-xts conda-forge/linux-64::r-xts-0.12.1-r40h06615bd_0 \n r-zoo conda-forge/linux-64::r-zoo-1.8_11-r40h06615bd_0 \n readline conda-forge/linux-64::readline-8.2-h8228510_1 \n sed conda-forge/linux-64::sed-4.8-he412f7d_0 \n sysroot_linux-64 conda-forge/noarch::sysroot_linux-64-2.17-h0157908_18 \n tbb conda-forge/linux-64::tbb-2021.9.0-hf52228f_0 \n tk conda-forge/linux-64::tk-8.6.13-noxft_h4845f30_101 \n tktable conda-forge/linux-64::tktable-2.10-h8bc8fbc_6 \n tzdata conda-forge/noarch::tzdata-2025a-h78e105d_0 \n xorg-kbproto conda-forge/linux-64::xorg-kbproto-1.0.7-hb9d3cd8_1003 \n xorg-libice conda-forge/linux-64::xorg-libice-1.0.10-h7f98852_0 \n xorg-libsm conda-forge/linux-64::xorg-libsm-1.2.3-hd9c2040_1000 \n xorg-libx11 conda-forge/linux-64::xorg-libx11-1.8.4-h0b41bf4_0 \n xorg-libxau conda-forge/linux-64::xorg-libxau-1.0.12-hb9d3cd8_0 \n xorg-libxdmcp conda-forge/linux-64::xorg-libxdmcp-1.1.5-hb9d3cd8_0 \n xorg-libxext conda-forge/linux-64::xorg-libxext-1.3.4-h0b41bf4_2 \n xorg-libxrender conda-forge/linux-64::xorg-libxrender-0.9.10-h7f98852_1003 \n xorg-libxt conda-forge/linux-64::xorg-libxt-1.3.0-hd590300_0 \n xorg-renderproto conda-forge/linux-64::xorg-renderproto-0.11.1-hb9d3cd8_1003 \n xorg-xextproto conda-forge/linux-64::xorg-xextproto-7.3.0-hb9d3cd8_1004 \n xorg-xproto conda-forge/linux-64::xorg-xproto-7.0.31-hb9d3cd8_1008 \n xz conda-forge/linux-64::xz-5.6.4-hbcc6ac9_0 \n xz-gpl-tools 
conda-forge/linux-64::xz-gpl-tools-5.6.4-hbcc6ac9_0 \n xz-tools conda-forge/linux-64::xz-tools-5.6.4-hb9d3cd8_0 \n zlib conda-forge/linux-64::zlib-1.2.13-h4ab18f5_6 \n zstd conda-forge/linux-64::zstd-1.5.6-ha6fb4c9_0 \n\n\n\nDownloading and Extracting Packages: ...working... done\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\n#\n# To activate this environment, use\n#\n# $ conda activate myrenv\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Upload environment to Object Storage", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Upload the environments to the Object Storage bucket associated with the Autonomous Database instance using the `upload` command. We provide environment descriptions and two tags, one for the user name and one for the application name, and we overwrite any environment with the same name if it exists.", + "", + "The application tag is required for use with embedded execution. For example, OML4Py embedded Python execution works with conda environments containing the OML4PY tag, and OML4R embedded R execution works with conda environments containing the OML4R tag.", + "", + "There is one Object Storage bucket for each data center region. The conda environments are saved to a folder in Object Storage corresponding to the tenancy and database. The folder is managed by Autonomous Database and only available to users through OML notebooks. There is a 8G maximum size for a single conda environment, and no size limit on Object Storage.", + "", + "To get help for the `upload` command, type `upload --help` in the %conda interpreter." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988850378, + "endTime" : 1739988850820, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Upload the environments to the Object Storage bucket associated with the Autonomous Database instance using the upload command. We provide environment descriptions and two tags, one for the user name and one for the application name, and we overwrite any environment with the same name if it exists.

\n

The application tag is required for use with embedded execution. For example, OML4Py embedded Python execution works with conda environments containing the OML4PY tag, and OML4R embedded R execution works with conda environments containing the OML4R tag.

\n

There is one Object Storage bucket for each data center region. The conda environments are saved to a folder in Object Storage corresponding to the tenancy and database. The folder is managed by Autonomous Database and available only to users through OML notebooks. There is an 8 GB maximum size for a single conda environment, and no size limit on Object Storage.

\n

To get help for the upload command, type upload --help in the %conda interpreter.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Save Python environment to Object Storage", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "upload mypyenv --overwrite --description 'Install Python seaborn and tensorflow packages' -t user 'OMLUSER' -t application OML4PY" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988851305, + "endTime" : 1739988900143, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\nUploading conda environment mypyenv\nUpload successful for conda environment mypyenv\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Save R environment to Object Storage", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "upload myrenv --overwrite --description 'Install R forecast and ggplot2 packages' -t user 'OMLUSER' -t application 'OML4R'" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988900630, + "endTime" : 1739988941791, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + 
"message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\nUploading conda environment myrenv\nUpload successful for conda environment myrenv\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "The environments are now available for the OML user to download and will remain in Object Storage until it is deleted. ", + "", + "Verify the environments are saved to Object storage using the `list-saved-envs`, passing the environment name to the `-e` flag." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988942307, + "endTime" : 1739988942795, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

The environments are now available for the OML user to download and will remain in Object Storage until they are deleted.

\n

Verify that the environments are saved to Object Storage using the list-saved-envs command, passing the environment name to the -e flag.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Search for the saved environments", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "list-saved-envs -e mypyenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988943282, + "endTime" : 1739988945466, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\n{\n \"name\": \"mypyenv\",\n \"size\": \"2.2 GiB\",\n \"description\": \"Install Python seaborn and tensorflow packages\",\n \"tags\": {\n \"application\": \"OML4PY\",\n \"user\": \"OMLUSER\"\n },\n \"number_of_installed_packages\": 149\n}\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "list-saved-envs -e myrenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988945956, + "endTime" : 1739988948142, + "interpreter" : "conda.conda_medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: 
UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\n{\n \"name\": \"myrenv\",\n \"size\": \"1.5 GiB\",\n \"description\": \"Install R forecast and ggplot2 packages\",\n \"tags\": {\n \"application\": \"OML4R\",\n \"user\": \"OMLUSER\"\n },\n \"number_of_installed_packages\": 171\n}\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "---", + "### Delete Environment from Object Storage", + "---", + "", + "If an environment is no longer in use, it can be deleted from Object Storage using the `delete` command. ", + "", + "For help on the conda `delete` command, type `delete --help` in a %conda paragraph.", + "", + "Note, do not run the paragraphs containing the `delete` commands if you want to use them in an OML user session." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988948625, + "endTime" : 1739988949077, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

If an environment is no longer in use, it can be deleted from Object Storage using the delete command.

\n

For help on the conda delete command, type delete --help in a %conda paragraph.

\n

Note: do not run the paragraphs containing the delete commands if you want to continue using these environments in an OML user session.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Delete R environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "delete myrenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1718053975458, + "endTime" : 1718053977055, + "interpreter" : "conda.conda_medium", + "taskStatus" : null, + "status" : null, + "results" : null, + "forms" : null + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Delete Python environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "delete mypyenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1718053977127, + "endTime" : 1718053978722, + "interpreter" : "conda.conda_medium", + "taskStatus" : null, + "status" : null, + "results" : null, + "forms" : null + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Confirm environment deletion", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "list-saved-envs " + ], + "selectedVisualization" : "html", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1718053978796, + "endTime" : 1718053980390, + "interpreter" : 
"conda.conda_medium", + "taskStatus" : null, + "status" : null, + "results" : null, + "forms" : null + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Access Control List", + "hasTitle" : true, + "message" : [ + "%md\r", + "---\r", + "## Optional: Add OML user to the Access Control List\r", + "---\r", + "\r", + "A requirement of the SQL API for embedded R and Python execution on Autonomous Database is adding the OML user to the Access Control List (ACL). The ACL provides additional protection to your Autonomous Database by allowing only the client with specific IP addresses to connect to the database. \r", + "\r", + "As ADMIN, run the network access control list `rqAppendHostAce` function (OML4R) or `pyqAppendHostAce` (OML4Py) to enable the OML user to access network services and resources from the database, where the root domain is the data center region where Autonomous Database resides. \r", + "\r", + "For example, if your username is *OMLUSER* and your Autonomous Database resides in the Ashburn region, the root domain is *adb.us-ashburn-1.oraclecloudapps.com*. \r" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988949566, + "endTime" : 1739988950024, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

A requirement of the SQL API for embedded R and Python execution on Autonomous Database is adding the OML user to the Access Control List (ACL). The ACL provides additional protection to your Autonomous Database by allowing only clients with specific IP addresses to connect to the database.

\n

As ADMIN, run the network access control list function rqAppendHostAce (OML4R) or pyqAppendHostAce (OML4Py) to enable the OML user to access network services and resources from the database. The root domain corresponds to the data center region where the Autonomous Database resides.

\n

For example, if your username is OMLUSER and your Autonomous Database resides in the Ashburn region, the root domain is adb.us-ashburn-1.oraclecloudapps.com.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "OML4R", + "hasTitle" : true, + "message" : [ + "%script", + "", + "-- replace username and root domain with your database values", + "", + "exec rqAppendHostAce('OMLUSER','adb.us-ashburn-1.oraclecloudapps.com')" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1739989149503, + "endTime" : 1739989150161, + "interpreter" : "script.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nPL/SQL procedure successfully completed.\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "OML4Py", + "hasTitle" : true, + "message" : [ + "%script", + "", + "-- replace username and root domain with your database values", + "", + "exec pyqAppendHostAce('OMLUSER','adb.us-ashburn-1.oraclecloudapps.com')" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1739985092571, + "endTime" : 1739985094835, + "interpreter" : "script.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nPL/SQL procedure successfully completed.\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + 
"hasTitle" : false, + "message" : [ + "%md", + "---", + "## Use Conda Environments", + "---", + "", + "Refer to the template notebooks, *OML Third-Party Packages - Python Environment Usage* and *OML Third-Party Packages - R Environment Usage* for the steps to download and use the saved environments, *mypyenv* and *myrenv*. " + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988950510, + "endTime" : 1739988950959, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Refer to the template notebooks, OML Third-Party Packages - Python Environment Usage and OML Third-Party Packages - R Environment Usage for the steps to download and use the saved environments, mypyenv and myrenv.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "## End of Script" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739988951458, + "endTime" : 1739988951907, + "interpreter" : "md.medium", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

End of Script

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + } + ] + } +] \ No newline at end of file diff --git a/machine-learning/notebooks-oml/python/OML Third-Party Packages - Python Environment Usage.dsnb b/machine-learning/notebooks-oml/python/OML Third-Party Packages - Python Environment Usage.dsnb new file mode 100644 index 00000000..09361e6f --- /dev/null +++ b/machine-learning/notebooks-oml/python/OML Third-Party Packages - Python Environment Usage.dsnb @@ -0,0 +1,1689 @@ +[ + { + "name" : "OML Third-Party Packages - Python Environment Usage", + "description" : null, + "tags" : null, + "version" : "7", + "layout" : null, + "type" : "low", + "snapshot" : false, + "isEditable" : true, + "isRunnable" : true, + "template" : null, + "templateConfig" : null, + "paragraphs" : [ + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 0, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + " " + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : true, + "hideGutter" : true, + "hideVizConfig" : true, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991635401, + "endTime" : 1739991647669, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "# OML Third-Party Packages - Python Environment Usage", + "", + "Oracle Machine Learning Notebooks provide a conda interpreter to install third-party Python and R packages in a conda environment for use within OML Notebooks sessions, as well as within OML4R and OML4Py embedded execution invocations. 
Conda is an open source package and environment management system that enables the use of virtual environments containing third-party R and Python packages. With conda environments, you can install and update packages and their dependencies, and switch between environments to use project-specific packages. ", + "", + "Administrators create conda environments and install packages that can then be accessed by non-administrator users and loaded into their OML Notebooks session. The conda environments can be used with Python and the OML4Py Python, SQL, and REST APIs, as well as with R and the OML4R R, SQL, and REST APIs.", + "", + "In this notebook, we demonstrate a typical workflow for third-party environment usage in OML notebooks using Python and OML4Py. The OML user downloads and uses the packages in conda environments that were previously created and saved to an Object Storage bucket folder associated with the Autonomous Database.", + "", + "In the template notebook titled *OML Third-Party Packages - Environment Creation*, the ADMIN user creates a conda environment, installs packages, and uploads the conda environments to Object Storage.", + "", + "", + "Copyright (c) 2025 Oracle Corporation ", + "The Universal Permissive License (UPL), Version 1.0" ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991648241, + "endTime" : 1739991648787, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

OML Third-Party Packages - Python Environment Usage

\n

Oracle Machine Learning Notebooks provide a conda interpreter to install third-party Python and R packages in a conda environment for use within OML Notebooks sessions, as well as within OML4R and OML4Py embedded execution invocations. Conda is an open source package and environment management system that enables the use of virtual environments containing third-party R and Python packages. With conda environments, you can install and update packages and their dependencies, and switch between environments to use project-specific packages.

\n

Administrators create conda environments and install packages that can then be accessed by non-administrator users and loaded into their OML Notebooks session. The conda environments can be used with Python and the OML4Py Python, SQL, and REST APIs, as well as with R and the OML4R R, SQL, and REST APIs.

\n

In this notebook, we demonstrate a typical workflow for third-party environment usage in OML notebooks using Python and OML4Py. The OML user downloads and uses the packages in conda environments that were previously created and saved to an Object Storage bucket folder associated with the Autonomous Database.

\n

In the template notebook titled OML Third-Party Packages - Environment Creation, the ADMIN user creates a conda environment, installs packages, and uploads the conda environments to Object Storage.

\n

Copyright (c) 2025 Oracle Corporation\nThe Universal Permissive License (UPL), Version 1.0

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "List a named environment in Object Storage", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Use `list-saved-envs` to list and return the details for a named conda environment in Object Storage. The environment name, size, and number of packages are returned along with the description and tags provided by the ADMIN when uploading the environment.", + "", + "The application tag indicates how the environment can be used. For example, the environment `mypyenv` was created for use with Python and OML4Py." ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991649372, + "endTime" : 1739991649839, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Use list-saved-envs to list and return the details for a named conda environment in Object Storage. The environment name, size, and number of packages are returned along with the description and tags provided by the ADMIN when uploading the environment.

\n

The application tag indicates how the environment can be used. For example, the environment mypyenv was created for use with Python and OML4Py.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "list-saved-envs -e mypyenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991662579, + "endTime" : 1739991666782, + "interpreter" : "conda.conda_low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\n{\n \"name\": \"mypyenv\",\n \"size\": \"2.2 GiB\",\n \"description\": \"Install Python seaborn and tensorflow packages\",\n \"tags\": {\n \"application\": \"OML4PY\",\n \"user\": \"OMLUSER\"\n },\n \"number_of_installed_packages\": 149\n}\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Using a Python environment", + "hasTitle" : true, + "message" : [ + "%md", + "", + "In this example, we build a machine learning model that accurately predicts an iris flower species when given flower measurements. The Python packages used are *Keras*, *TensorFlow*, *Seaborn*, *Pandas*, *Numpy*, and *Scikit-Learn*, and the conda environment is *mypyenv*. 
We first run the code and create a UDF using Python, and then invoke the UDF through OML4Py embedded Python execution from the Python, SQL, and REST APIs.", + "", + "Note that *Pandas*, *Numpy*, and *Scikit-Learn* are included in OML Notebooks in Oracle Autonomous Database, so it is not necessary for ADMIN to install them when creating the conda environment." ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991667287, + "endTime" : 1739991667751, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

In this example, we build a machine learning model that accurately predicts an iris flower species when given flower measurements. The Python packages used are Keras, TensorFlow, Seaborn, Pandas, Numpy, and Scikit-Learn, and the conda environment is mypyenv. We first run the code and create a UDF using Python, and then invoke the UDF through OML4Py embedded Python execution from the Python, SQL, and REST APIs.

\n

Note that Pandas, Numpy, and Scikit-Learn are included in OML Notebooks in Oracle Autonomous Database, so it is not necessary for ADMIN to install them when creating the conda environment.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Download and activate the environment", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Conda environments are available after they are downloaded and activated using the `download` and `activate` functions in a %conda paragraph. An activated environment is available until it is deactivated." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991668405, + "endTime" : 1739991668913, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Conda environments are available after they are downloaded and activated using the download and activate functions in a %conda paragraph. An activated environment is available until it is deactivated.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%conda", + "", + "download mypyenv --overwrite", + "", + "activate mypyenv" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991669458, + "endTime" : 1739991701105, + "interpreter" : "conda.conda_low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "/opt/conda/lib/python3.12/site-packages/charset_normalizer/api.py:105: UserWarning: Trying to detect encoding from a tiny portion of (15) byte(s).\n warn('Trying to detect encoding from a tiny portion of ({}) byte(s).'.format(length))\nDownloading conda environment mypyenv\nDownload successful for conda environment mypyenv\n\n\nusage: conda [-h] [-v] [--no-plugins] [-V] COMMAND ...\n\nconda is a tool for managing and deploying applications, environments and packages.\n\noptions:\n -h, --help Show this help message and exit.\n -v, --verbose Can be used multiple times. 
Once for detailed output,\n twice for INFO logging, thrice for DEBUG logging, four\n times for TRACE logging.\n --no-plugins Disable all plugins that are not built into conda.\n -V, --version Show the conda version number and exit.\n\ncommands:\n The following built-in and plugins subcommands are available.\n\n COMMAND\n activate Activate a conda environment.\n clean Remove unused packages and caches.\n compare Compare packages between conda environments.\n config Modify configuration values in .condarc.\n content-trust Signing and verification tools for Conda\n create Create a new conda environment from a list of specified\n packages.\n deactivate Deactivate the current active conda environment.\n doctor Display a health report for your environment.\n env-lcm See `conda env-lcm --help`.\n info Display information about current conda install.\n init Initialize conda for shell interaction.\n install Install a list of packages into a specified conda\n environment.\n list List installed packages in a conda environment.\n notices Retrieve latest channel notifications.\n pack See `conda pack --help`.\n package Create low-level conda packages. 
(EXPERIMENTAL)\n remove (uninstall)\n Remove a list of packages from a specified conda\n environment.\n rename Rename an existing environment.\n repoquery Advanced search for repodata.\n run Run an executable in a conda environment.\n search Search for packages and display associated information\n using the MatchSpec format.\n update (upgrade) Update conda packages to the latest compatible version.\n\n\n\nConda environment 'mypyenv' activated\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "List the packages available in the conda environment", + "hasTitle" : true, + "message" : [ + "%conda", + "", + "list" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991701733, + "endTime" : 1739991704406, + "interpreter" : "conda.conda_low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "# packages in environment at /u01/.conda/envs/mypyenv:\n#\n# Name Version Build Channel\n_libgcc_mutex 0.1 conda_forge conda-forge\n_openmp_mutex 4.5 2_kmp_llvm conda-forge\nabsl-py 2.1.0 pyhd8ed1ab_1 conda-forge\nastunparse 1.6.3 pyhd8ed1ab_3 conda-forge\nbrotli 1.1.0 hb9d3cd8_2 conda-forge\nbrotli-bin 1.1.0 hb9d3cd8_2 conda-forge\nbrotli-python 1.1.0 py312h2ec8cdc_2 conda-forge\nbzip2 1.0.8 h4bc722e_7 conda-forge\nc-ares 1.34.4 hb9d3cd8_0 conda-forge\nca-certificates 2025.1.31 hbcca054_0 conda-forge\ncached-property 1.5.2 hd8ed1ab_1 conda-forge\ncached_property 1.5.2 pyha770c72_1 conda-forge\ncertifi 2025.1.31 pyhd8ed1ab_0 conda-forge\ncffi 1.17.1 py312h06ac9bb_0 conda-forge\ncharset-normalizer 3.4.1 pyhd8ed1ab_0 conda-forge\ncontourpy 1.3.1 
py312h68727a3_0 conda-forge\ncycler 0.12.1 pyhd8ed1ab_1 conda-forge\nflatbuffers 24.12.23 h8f4948b_0 conda-forge\nfonttools 4.56.0 py312h178313f_0 conda-forge\nfreetype 2.12.1 h267a509_2 conda-forge\ngast 0.6.0 pyhd8ed1ab_0 conda-forge\ngiflib 5.2.2 hd590300_0 conda-forge\ngoogle-pasta 0.2.0 pyhd8ed1ab_2 conda-forge\ngrpcio 1.67.1 py312hacea422_1 conda-forge\nh2 4.2.0 pyhd8ed1ab_0 conda-forge\nh5py 3.13.0 nompi_py312hedeef09_100 conda-forge\nhdf5 1.14.3 nompi_h2d575fe_109 conda-forge\nhpack 4.1.0 pyhd8ed1ab_0 conda-forge\nhyperframe 6.1.0 pyhd8ed1ab_0 conda-forge\nicu 75.1 he02047a_0 conda-forge\nidna 3.10 pyhd8ed1ab_1 conda-forge\nimportlib-metadata 8.6.1 pyha770c72_0 conda-forge\nkeras 3.8.0 pyh753f3f9_0 conda-forge\nkeyutils 1.6.1 h166bdaf_0 conda-forge\nkiwisolver 1.4.8 py312h84d6215_0 conda-forge\nkrb5 1.21.3 h659f571_0 conda-forge\nlcms2 2.17 h717163a_0 conda-forge\nld_impl_linux-64 2.43 h712a8e2_2 conda-forge\nlerc 4.0.0 h27087fc_0 conda-forge\nlibabseil 20240722.0 cxx17_hbbce691_4 conda-forge\nlibaec 1.1.3 h59595ed_0 conda-forge\nlibblas 3.9.0 30_h59b9bed_openblas conda-forge\nlibbrotlicommon 1.1.0 hb9d3cd8_2 conda-forge\nlibbrotlidec 1.1.0 hb9d3cd8_2 conda-forge\nlibbrotlienc 1.1.0 hb9d3cd8_2 conda-forge\nlibcblas 3.9.0 30_he106b2a_openblas conda-forge\nlibcurl 8.12.1 h332b0f4_0 conda-forge\nlibdeflate 1.23 h4ddbbb0_0 conda-forge\nlibedit 3.1.20250104 pl5321h7949ede_0 conda-forge\nlibev 4.33 hd590300_2 conda-forge\nlibexpat 2.6.4 h5888daf_0 conda-forge\nlibffi 3.4.6 h2dba641_0 conda-forge\nlibgcc 14.2.0 h77fa898_1 conda-forge\nlibgcc-ng 14.2.0 h69a702a_1 conda-forge\nlibgfortran 14.2.0 h69a702a_1 conda-forge\nlibgfortran5 14.2.0 hd5240d6_1 conda-forge\nlibgrpc 1.67.1 h25350d4_1 conda-forge\nlibhwloc 2.11.2 default_h0d58e46_1001 conda-forge\nlibiconv 1.18 h4ce23a2_0 conda-forge\nlibjpeg-turbo 3.0.0 hd590300_1 conda-forge\nliblapack 3.9.0 30_h7ac8fdf_openblas conda-forge\nliblzma 5.6.4 hb9d3cd8_0 conda-forge\nliblzma-devel 5.6.4 hb9d3cd8_0 
conda-forge\nlibnghttp2 1.64.0 h161d5f1_0 conda-forge\nlibnsl 2.0.1 hd590300_0 conda-forge\nlibopenblas 0.3.29 pthreads_h94d23a6_0 conda-forge\nlibpng 1.6.47 h943b412_0 conda-forge\nlibprotobuf 5.28.3 h6128344_1 conda-forge\nlibre2-11 2024.07.02 hbbce691_2 conda-forge\nlibsqlite 3.49.1 hee588c1_1 conda-forge\nlibssh2 1.11.1 hf672d98_0 conda-forge\nlibstdcxx 14.2.0 hc0a3c3a_1 conda-forge\nlibstdcxx-ng 14.2.0 h4852527_1 conda-forge\nlibtiff 4.7.0 hd9ff511_3 conda-forge\nlibuuid 2.38.1 h0b41bf4_0 conda-forge\nlibwebp-base 1.5.0 h851e524_0 conda-forge\nlibxcb 1.17.0 h8a09558_0 conda-forge\nlibxcrypt 4.4.36 hd590300_1 conda-forge\nlibxml2 2.13.6 h8d12d68_0 conda-forge\nlibzlib 1.3.1 hb9d3cd8_2 conda-forge\nllvm-openmp 19.1.7 h024ca30_0 conda-forge\nmarkdown 3.6 pyhd8ed1ab_0 conda-forge\nmarkdown-it-py 3.0.0 pyhd8ed1ab_1 conda-forge\nmarkupsafe 3.0.2 py312h178313f_1 conda-forge\nmatplotlib-base 3.10.0 py312hd3ec401_0 conda-forge\nmdurl 0.1.2 pyhd8ed1ab_1 conda-forge\nmkl 2024.2.2 ha957f24_16 conda-forge\nml_dtypes 0.4.0 py312hf9745cd_2 conda-forge\nmunkres 1.1.4 pyh9f0ad1d_0 conda-forge\nnamex 0.0.8 pyhd8ed1ab_1 conda-forge\nncurses 6.5 h2d0b736_3 conda-forge\nnumpy 2.2.3 py312h72c5963_0 conda-forge\nopenjpeg 2.5.3 h5fbd93e_0 conda-forge\nopenssl 3.4.1 h7b32b05_0 conda-forge\nopt_einsum 3.4.0 pyhd8ed1ab_1 conda-forge\noptree 0.14.0 py312h68727a3_1 conda-forge\npackaging 24.2 pyhd8ed1ab_2 conda-forge\npandas 2.2.3 py312hf9745cd_1 conda-forge\npatsy 1.0.1 pyhd8ed1ab_1 conda-forge\npillow 11.1.0 py312h80c1187_0 conda-forge\npip 25.0.1 pyh8b19718_0 conda-forge\nprotobuf 5.28.3 py312h2ec8cdc_0 conda-forge\npthread-stubs 0.4 hb9d3cd8_1002 conda-forge\npycparser 2.22 pyh29332c3_1 conda-forge\npygments 2.19.1 pyhd8ed1ab_0 conda-forge\npyparsing 3.2.1 pyhd8ed1ab_0 conda-forge\npysocks 1.7.1 pyha55dd90_7 conda-forge\npython 3.12.6 hc5c86c4_2_cpython conda-forge\npython-dateutil 2.9.0.post0 pyhff2d567_1 conda-forge\npython-flatbuffers 25.2.10 pyhbc23db3_0 conda-forge\npython-tzdata 
2025.1 pyhd8ed1ab_0 conda-forge\npython_abi 3.12 5_cp312 conda-forge\npytz 2024.1 pyhd8ed1ab_0 conda-forge\nqhull 2020.2 h434a139_5 conda-forge\nre2 2024.07.02 h9925aae_2 conda-forge\nreadline 8.2 h8228510_1 conda-forge\nrequests 2.32.3 pyhd8ed1ab_1 conda-forge\nrich 13.9.4 pyhd8ed1ab_1 conda-forge\nscipy 1.15.2 py312ha707e6e_0 conda-forge\nseaborn 0.13.2 hd8ed1ab_3 conda-forge\nseaborn-base 0.13.2 pyhd8ed1ab_3 conda-forge\nsetuptools 75.8.0 pyhff2d567_0 conda-forge\nsix 1.17.0 pyhd8ed1ab_0 conda-forge\nsnappy 1.2.1 h8bd8927_1 conda-forge\nstatsmodels 0.14.4 py312hc0a28a1_0 conda-forge\ntbb 2021.13.0 hceb3a55_1 conda-forge\ntensorboard 2.18.0 pyhd8ed1ab_1 conda-forge\ntensorboard-data-server 0.7.0 py312hda17c39_2 conda-forge\ntensorflow 2.18.0 cpu_py312h69ecde4_0 conda-forge\ntensorflow-base 2.18.0 cpu_py312h099d1c6_0 conda-forge\ntensorflow-estimator 2.18.0 cpu_py312hc0a35a6_0 conda-forge\ntermcolor 2.5.0 pyhd8ed1ab_1 conda-forge\ntk 8.6.13 noxft_h4845f30_101 conda-forge\ntyping-extensions 4.12.2 hd8ed1ab_1 conda-forge\ntyping_extensions 4.12.2 pyha770c72_1 conda-forge\ntzdata 2025a h78e105d_0 conda-forge\nunicodedata2 16.0.0 py312h66e93f0_0 conda-forge\nurllib3 2.3.0 pyhd8ed1ab_0 conda-forge\nwerkzeug 3.1.3 pyhd8ed1ab_1 conda-forge\nwheel 0.45.1 pyhd8ed1ab_1 conda-forge\nwrapt 1.17.2 py312h66e93f0_0 conda-forge\nxorg-libxau 1.0.12 hb9d3cd8_0 conda-forge\nxorg-libxdmcp 1.1.5 hb9d3cd8_0 conda-forge\nxz 5.6.4 hbcc6ac9_0 conda-forge\nxz-gpl-tools 5.6.4 hbcc6ac9_0 conda-forge\nxz-tools 5.6.4 hb9d3cd8_0 conda-forge\nzipp 3.21.0 pyhd8ed1ab_1 conda-forge\nzstandard 0.23.0 py312hef9b889_1 conda-forge\nzstd 1.5.6 ha6fb4c9_0 conda-forge\n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Import libraries", + "hasTitle" : true, + "message" : [ + "%python", + "", + "import warnings", + "warnings.filterwarnings(\"ignore\")", + "", + "import 
keras", + "from keras.models import Sequential", + "from keras.layers import Dense", + "from keras.optimizers import Adam", + "", + "import seaborn as sns", + "import pandas as pd", + "import numpy as np", + "", + "from sklearn.metrics import classification_report, confusion_matrix" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991704959, + "endTime" : 1739991709191, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Load the iris data", + "hasTitle" : true, + "message" : [ + "%python", + "", + "import pandas as pd", + "", + "import ssl", + "ssl._create_default_https_context = ssl._create_unverified_context", + "", + "df = sns.load_dataset(\"iris\")", + "", + "z.show(df)" + ], + "selectedVisualization" : "table", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991709781, + "endTime" : 1739991710932, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : 
"sepal_length\tsepal_width\tpetal_length\tpetal_width\tspecies\n5.1\t3.5\t1.4\t0.2\tsetosa\n4.9\t3.0\t1.4\t0.2\tsetosa\n4.7\t3.2\t1.3\t0.2\tsetosa\n4.6\t3.1\t1.5\t0.2\tsetosa\n5.0\t3.6\t1.4\t0.2\tsetosa\n5.4\t3.9\t1.7\t0.4\tsetosa\n4.6\t3.4\t1.4\t0.3\tsetosa\n5.0\t3.4\t1.5\t0.2\tsetosa\n4.4\t2.9\t1.4\t0.2\tsetosa\n4.9\t3.1\t1.5\t0.1\tsetosa\n5.4\t3.7\t1.5\t0.2\tsetosa\n4.8\t3.4\t1.6\t0.2\tsetosa\n4.8\t3.0\t1.4\t0.1\tsetosa\n4.3\t3.0\t1.1\t0.1\tsetosa\n5.8\t4.0\t1.2\t0.2\tsetosa\n5.7\t4.4\t1.5\t0.4\tsetosa\n5.4\t3.9\t1.3\t0.4\tsetosa\n5.1\t3.5\t1.4\t0.3\tsetosa\n5.7\t3.8\t1.7\t0.3\tsetosa\n5.1\t3.8\t1.5\t0.3\tsetosa\n5.4\t3.4\t1.7\t0.2\tsetosa\n5.1\t3.7\t1.5\t0.4\tsetosa\n4.6\t3.6\t1.0\t0.2\tsetosa\n5.1\t3.3\t1.7\t0.5\tsetosa\n4.8\t3.4\t1.9\t0.2\tsetosa\n5.0\t3.0\t1.6\t0.2\tsetosa\n5.0\t3.4\t1.6\t0.4\tsetosa\n5.2\t3.5\t1.5\t0.2\tsetosa\n5.2\t3.4\t1.4\t0.2\tsetosa\n4.7\t3.2\t1.6\t0.2\tsetosa\n4.8\t3.1\t1.6\t0.2\tsetosa\n5.4\t3.4\t1.5\t0.4\tsetosa\n5.2\t4.1\t1.5\t0.1\tsetosa\n5.5\t4.2\t1.4\t0.2\tsetosa\n4.9\t3.1\t1.5\t0.2\tsetosa\n5.0\t3.2\t1.2\t0.2\tsetosa\n5.5\t3.5\t1.3\t0.2\tsetosa\n4.9\t3.6\t1.4\t0.1\tsetosa\n4.4\t3.0\t1.3\t0.2\tsetosa\n5.1\t3.4\t1.5\t0.2\tsetosa\n5.0\t3.5\t1.3\t0.3\tsetosa\n4.5\t2.3\t1.3\t0.3\tsetosa\n4.4\t3.2\t1.3\t0.2\tsetosa\n5.0\t3.5\t1.6\t0.6\tsetosa\n5.1\t3.8\t1.9\t0.4\tsetosa\n4.8\t3.0\t1.4\t0.3\tsetosa\n5.1\t3.8\t1.6\t0.2\tsetosa\n4.6\t3.2\t1.4\t0.2\tsetosa\n5.3\t3.7\t1.5\t0.2\tsetosa\n5.0\t3.3\t1.4\t0.2\tsetosa\n7.0\t3.2\t4.7\t1.4\tversicolor\n6.4\t3.2\t4.5\t1.5\tversicolor\n6.9\t3.1\t4.9\t1.5\tversicolor\n5.5\t2.3\t4.0\t1.3\tversicolor\n6.5\t2.8\t4.6\t1.5\tversicolor\n5.7\t2.8\t4.5\t1.3\tversicolor\n6.3\t3.3\t4.7\t1.6\tversicolor\n4.9\t2.4\t3.3\t1.0\tversicolor\n6.6\t2.9\t4.6\t1.3\tversicolor\n5.2\t2.7\t3.9\t1.4\tversicolor\n5.0\t2.0\t3.5\t1.0\tversicolor\n5.9\t3.0\t4.2\t1.5\tversicolor\n6.0\t2.2\t4.0\t1.0\tversicolor\n6.1\t2.9\t4.7\t1.4\tversicolor\n5.6\t2.9\t3.6\t1.3\tversicolor\n6.7\t3.1\t4.4\t1.4\tversicolor\n5.6\t3.0\t4.5\t1.5\tvers
icolor\n5.8\t2.7\t4.1\t1.0\tversicolor\n6.2\t2.2\t4.5\t1.5\tversicolor\n5.6\t2.5\t3.9\t1.1\tversicolor\n5.9\t3.2\t4.8\t1.8\tversicolor\n6.1\t2.8\t4.0\t1.3\tversicolor\n6.3\t2.5\t4.9\t1.5\tversicolor\n6.1\t2.8\t4.7\t1.2\tversicolor\n6.4\t2.9\t4.3\t1.3\tversicolor\n6.6\t3.0\t4.4\t1.4\tversicolor\n6.8\t2.8\t4.8\t1.4\tversicolor\n6.7\t3.0\t5.0\t1.7\tversicolor\n6.0\t2.9\t4.5\t1.5\tversicolor\n5.7\t2.6\t3.5\t1.0\tversicolor\n5.5\t2.4\t3.8\t1.1\tversicolor\n5.5\t2.4\t3.7\t1.0\tversicolor\n5.8\t2.7\t3.9\t1.2\tversicolor\n6.0\t2.7\t5.1\t1.6\tversicolor\n5.4\t3.0\t4.5\t1.5\tversicolor\n6.0\t3.4\t4.5\t1.6\tversicolor\n6.7\t3.1\t4.7\t1.5\tversicolor\n6.3\t2.3\t4.4\t1.3\tversicolor\n5.6\t3.0\t4.1\t1.3\tversicolor\n5.5\t2.5\t4.0\t1.3\tversicolor\n5.5\t2.6\t4.4\t1.2\tversicolor\n6.1\t3.0\t4.6\t1.4\tversicolor\n5.8\t2.6\t4.0\t1.2\tversicolor\n5.0\t2.3\t3.3\t1.0\tversicolor\n5.6\t2.7\t4.2\t1.3\tversicolor\n5.7\t3.0\t4.2\t1.2\tversicolor\n5.7\t2.9\t4.2\t1.3\tversicolor\n6.2\t2.9\t4.3\t1.3\tversicolor\n5.1\t2.5\t3.0\t1.1\tversicolor\n5.7\t2.8\t4.1\t1.3\tversicolor\n6.3\t3.3\t6.0\t2.5\tvirginica\n5.8\t2.7\t5.1\t1.9\tvirginica\n7.1\t3.0\t5.9\t2.1\tvirginica\n6.3\t2.9\t5.6\t1.8\tvirginica\n6.5\t3.0\t5.8\t2.2\tvirginica\n7.6\t3.0\t6.6\t2.1\tvirginica\n4.9\t2.5\t4.5\t1.7\tvirginica\n7.3\t2.9\t6.3\t1.8\tvirginica\n6.7\t2.5\t5.8\t1.8\tvirginica\n7.2\t3.6\t6.1\t2.5\tvirginica\n6.5\t3.2\t5.1\t2.0\tvirginica\n6.4\t2.7\t5.3\t1.9\tvirginica\n6.8\t3.0\t5.5\t2.1\tvirginica\n5.7\t2.5\t5.0\t2.0\tvirginica\n5.8\t2.8\t5.1\t2.4\tvirginica\n6.4\t3.2\t5.3\t2.3\tvirginica\n6.5\t3.0\t5.5\t1.8\tvirginica\n7.7\t3.8\t6.7\t2.2\tvirginica\n7.7\t2.6\t6.9\t2.3\tvirginica\n6.0\t2.2\t5.0\t1.5\tvirginica\n6.9\t3.2\t5.7\t2.3\tvirginica\n5.6\t2.8\t4.9\t2.0\tvirginica\n7.7\t2.8\t6.7\t2.0\tvirginica\n6.3\t2.7\t4.9\t1.8\tvirginica\n6.7\t3.3\t5.7\t2.1\tvirginica\n7.2\t3.2\t6.0\t1.8\tvirginica\n6.2\t2.8\t4.8\t1.8\tvirginica\n6.1\t3.0\t4.9\t1.8\tvirginica\n6.4\t2.8\t5.6\t2.1\tvirginica\n7.2\t3.0\t5.8\t1.6\tvirginica\n7.4\t2
.8\t6.1\t1.9\tvirginica\n7.9\t3.8\t6.4\t2.0\tvirginica\n6.4\t2.8\t5.6\t2.2\tvirginica\n6.3\t2.8\t5.1\t1.5\tvirginica\n6.1\t2.6\t5.6\t1.4\tvirginica\n7.7\t3.0\t6.1\t2.3\tvirginica\n6.3\t3.4\t5.6\t2.4\tvirginica\n6.4\t3.1\t5.5\t1.8\tvirginica\n6.0\t3.0\t4.8\t1.8\tvirginica\n6.9\t3.1\t5.4\t2.1\tvirginica\n6.7\t3.1\t5.6\t2.4\tvirginica\n6.9\t3.1\t5.1\t2.3\tvirginica\n5.8\t2.7\t5.1\t1.9\tvirginica\n6.8\t3.2\t5.9\t2.3\tvirginica\n6.7\t3.3\t5.7\t2.5\tvirginica\n6.7\t3.0\t5.2\t2.3\tvirginica\n6.3\t2.5\t5.0\t1.9\tvirginica\n6.5\t3.0\t5.2\t2.0\tvirginica\n6.2\t3.4\t5.4\t2.3\tvirginica\n5.9\t3.0\t5.1\t1.8\tvirginica\n", + "type" : "TABLE" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Plot the pairwise relationships in the iris dataset", + "hasTitle" : true, + "message" : [ + "%python", + "", + "sns.set(style=\"ticks\")", + "sns.set_palette(\"husl\")", + "sns.pairplot(df.iloc[:,0:6], hue=\"species\")" + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991711520, + "endTime" : 1739991714466, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\n", + "type" : "TEXT" + }, + { + "message" : "
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Split the data into test and train sets", + "hasTitle" : true, + "message" : [ + "%python", + "", + "X = df.iloc[:,0:4].values", + "y = df.iloc[:,4].values", + "", + "from sklearn.preprocessing import LabelEncoder", + "encoder = LabelEncoder()", + "y1 = encoder.fit_transform(y)", + "Y = pd.get_dummies(y1).values", + "", + "from sklearn.model_selection import train_test_split", + "X_train,X_test, y_train,y_test = train_test_split(X,Y,test_size=0.2,random_state=0) " + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991714999, + "endTime" : 1739991715540, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Build the model", + "hasTitle" : true, + "message" : [ + "%python", + "", + "model = Sequential()", + "", + "model.add(Dense(4,input_shape=(4,), activation='relu'))", + "model.add(Dense(3,activation='softmax'))", + "", + "# compile the model using the atom optimizer with a learning rate of 0.04 and a loss function of ", + "# categorical cross-entropy and the parameter set to optimize is accuracy.", + "model.compile(Adam(learning_rate=0.04),'categorical_crossentropy', metrics=['accuracy'])", + "", + "model.fit(X_train,y_train, epochs=50)" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":200,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + 
"hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991716075, + "endTime" : 1739991719083, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "Epoch 1/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m1s\u001B[0m 534ms/step - accuracy: 0.5312 - loss: 2.0064\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m1s\u001B[0m 7ms/step - accuracy: 0.3629 - loss: 2.2617 \nEpoch 2/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.2500 - loss: 1.5031\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3167 - loss: 1.3985 \nEpoch 3/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 18ms/step - accuracy: 0.2500 - loss: 1.2553\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.1269 - loss: 1.2588 \nEpoch 4/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 18ms/step - accuracy: 0.0000e+00 - loss: 1.2702\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m 
\u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.1183 - loss: 1.1952 \nEpoch 5/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3438 - loss: 1.1394\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3310 - loss: 1.1394 \nEpoch 6/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3125 - loss: 1.1985\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3185 - loss: 1.1562 \nEpoch 7/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.1875 - loss: 1.1558\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.2810 - loss: 1.1411 \nEpoch 8/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3750 - loss: 1.1013\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3300 - loss: 1.1194 \nEpoch 9/50\n\r\u001B[1m1/4\u001B[0m 
\u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3125 - loss: 1.0840\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3371 - loss: 1.1059 \nEpoch 10/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3438 - loss: 1.1178\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3748 - loss: 1.1044 \nEpoch 11/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.2500 - loss: 1.1282\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3467 - loss: 1.1058 \nEpoch 12/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3438 - loss: 1.1010\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3665 - loss: 1.0972 \nEpoch 13/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.2500 - loss: 
1.1225\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3165 - loss: 1.1085 \nEpoch 14/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4062 - loss: 1.0936\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3779 - loss: 1.0957 \nEpoch 15/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3125 - loss: 1.1051\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3446 - loss: 1.0999 \nEpoch 16/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.4375 - loss: 1.0853\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3935 - loss: 1.0927 \nEpoch 17/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.5938 - loss: 1.0599\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m 
\u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.4310 - loss: 1.0868 \nEpoch 18/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.2812 - loss: 1.1112\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3310 - loss: 1.1028 \nEpoch 19/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4688 - loss: 1.0822\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3935 - loss: 1.0927 \nEpoch 20/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4062 - loss: 1.0897\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3665 - loss: 1.0961 \nEpoch 21/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3125 - loss: 1.1005\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3519 - loss: 1.0974 \nEpoch 22/50\n\r\u001B[1m1/4\u001B[0m 
\u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3125 - loss: 1.1019\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3363 - loss: 1.1002 \nEpoch 23/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4375 - loss: 1.0830\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3862 - loss: 1.0925 \nEpoch 24/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.4062 - loss: 1.0896\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3852 - loss: 1.0940 \nEpoch 25/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3750 - loss: 1.0930\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3842 - loss: 1.0925 \nEpoch 26/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4375 - loss: 
1.0845\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3685 - loss: 1.0960 \nEpoch 27/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3750 - loss: 1.0898\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3738 - loss: 1.0933 \nEpoch 28/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3750 - loss: 1.0973\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3706 - loss: 1.0966 \nEpoch 29/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.2812 - loss: 1.1060\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3310 - loss: 1.1008 \nEpoch 30/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.2500 - loss: 1.1040\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m 
\u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3519 - loss: 1.0969 \nEpoch 31/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3750 - loss: 1.0954\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3633 - loss: 1.0959 \nEpoch 32/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.5000 - loss: 1.0782\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.4060 - loss: 1.0910 \nEpoch 33/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 19ms/step - accuracy: 0.3125 - loss: 1.1076\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3540 - loss: 1.0988 \nEpoch 34/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 19ms/step - accuracy: 0.4688 - loss: 1.0789\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3862 - loss: 1.0933 \nEpoch 35/50\n\r\u001B[1m1/4\u001B[0m 
\u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3438 - loss: 1.0998\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3498 - loss: 1.0989 \nEpoch 36/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3438 - loss: 1.0972\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3727 - loss: 1.0947 \nEpoch 37/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 20ms/step - accuracy: 0.4062 - loss: 1.0876\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3665 - loss: 1.0956 \nEpoch 38/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.4688 - loss: 1.0794\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3904 - loss: 1.0914 \nEpoch 39/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3438 - loss: 
1.1082\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3602 - loss: 1.0998 \nEpoch 40/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.2500 - loss: 1.1164\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3383 - loss: 1.1012 \nEpoch 41/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4062 - loss: 1.0883\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3696 - loss: 1.0952 \nEpoch 42/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.4688 - loss: 1.0770\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.4123 - loss: 1.0884 \nEpoch 43/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.2500 - loss: 1.1157\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m 
\u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3217 - loss: 1.1035 \nEpoch 44/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3438 - loss: 1.0942\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3529 - loss: 1.0971 \nEpoch 45/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3438 - loss: 1.1020\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3477 - loss: 1.0998 \nEpoch 46/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.3750 - loss: 1.0917\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3706 - loss: 1.0949 \nEpoch 47/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3750 - loss: 1.0921\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3717 - loss: 1.0950 \nEpoch 48/50\n\r\u001B[1m1/4\u001B[0m 
\u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4375 - loss: 1.0784\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3790 - loss: 1.0931 \nEpoch 49/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 17ms/step - accuracy: 0.3125 - loss: 1.0980\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 6ms/step - accuracy: 0.3508 - loss: 1.0972 \nEpoch 50/50\n\r\u001B[1m1/4\u001B[0m \u001B[32m━━━━━\u001B[0m\u001B[37m━━━━━━━━━━━━━━━\u001B[0m \u001B[1m0s\u001B[0m 16ms/step - accuracy: 0.4688 - loss: 1.0814\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r\u001B[1m4/4\u001B[0m \u001B[32m━━━━━━━━━━━━━━━━━━━━\u001B[0m\u001B[37m\u001B[0m \u001B[1m0s\u001B[0m 5ms/step - accuracy: 0.3967 - loss: 1.0920 \n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Score the test data using the model", + "hasTitle" : true, + "message" : [ + "%python", + "", + "y_pred = model.predict(X_test, verbose=0)", + "y_pred" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":300,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991719697, + "endTime" 
: 1739991720319, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "array([[0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842],\n [0.32338503, 0.30626655, 0.37034842]], dtype=float32)\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Get the accuracy of the predicted values", + "hasTitle" : true, + "message" : [ + "%python", + "", + "y_test_class = np.argmax(y_test,axis=1)", + "y_pred_class = np.argmax(y_pred,axis=1)", + "", + "report = classification_report(y_test_class,y_pred_class, output_dict=True)", + "", + "res = pd.DataFrame(report)", + "res" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : 
true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991720912, + "endTime" : 1739991721422, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : " 0 1 2 accuracy macro avg weighted avg\nprecision 0.0 0.0 0.200000 0.2 0.066667 0.040000\nrecall 0.0 0.0 1.000000 0.2 0.333333 0.200000\nf1-score 0.0 0.0 0.333333 0.2 0.111111 0.066667\nsupport 11.0 13.0 6.000000 0.2 30.000000 30.000000\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Create Python UDF", + "hasTitle" : true, + "message" : [ + "%python", + "", + "def build_mod(df):", + " import oml", + " import keras", + " from keras.models import Sequential", + " from keras.layers import Dense", + " from keras.optimizers import Adam", + " import seaborn as sns", + " import pandas as pd", + " import numpy as np", + " import matplotlib.pyplot as plt", + "", + " sns.set(style=\"ticks\")", + " sns.set_palette(\"husl\")", + " sns.pairplot(df.iloc[:,0:6], hue=\"species\")", + "", + " # split the data into test and train sets", + " X = df.iloc[:,0:4].values", + " y = df.iloc[:,4].values", + "", + " from sklearn.preprocessing import LabelEncoder", + " encoder = LabelEncoder()", + " y1 = encoder.fit_transform(y)", + " Y = pd.get_dummies(y1).values", + "", + " from sklearn.model_selection import train_test_split", + " X_train,X_test, y_train,y_test = train_test_split(X,Y,test_size=0.2,", + " random_state=0) ", + "", + " model = Sequential()", + " model.add(Dense(4,input_shape=(4,),activation='relu'))", + " model.add(Dense(3,activation='softmax'))", + " model.compile(Adam(learning_rate=0.04),'categorical_crossentropy',metrics=['accuracy'])", + " model.fit(X_train,y_train, epochs=50, verbose=0)", + "", + " y_pred = model.predict(X_test, verbose=0)", + " 
y_test_class = np.argmax(y_test,axis=1)", + " y_pred_class = np.argmax(y_pred,axis=1)", + " ", + " from sklearn.metrics import classification_report,confusion_matrix", + " report = classification_report(y_test_class,y_pred_class, output_dict=True)", + " res = pd.DataFrame(report)", + " return print(res)", + " plt.show()", + " " + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991721914, + "endTime" : 1739991722407, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Run the UDF in Python", + "hasTitle" : true, + "message" : [ + "%python", + "", + "build_mod(df)" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991722883, + "endTime" : 1739991728259, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : " 0 1 2 accuracy macro avg weighted avg\nprecision 1.0 1.0 1.0 1.0 1.0 1.0\nrecall 1.0 1.0 1.0 1.0 1.0 1.0\nf1-score 1.0 1.0 1.0 1.0 1.0 1.0\nsupport 11.0 13.0 6.0 1.0 30.0 30.0\n", + "type" : "TEXT" + }, + { + "message" : "
\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : " Create a database table from a pandas DataFrame and get a proxy object", + "hasTitle" : true, + "message" : [ + "%python", + "", + "import oml", + "", + "try:", + " oml.drop(table=\"DF\")", + "except: ", + " pass", + "", + "MY_DF = oml.create(df, table=\"DF\")", + "", + "z.show(MY_DF)" + ], + "selectedVisualization" : "table", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991728763, + "endTime" : 1739991729516, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "sepal_length\tsepal_width\tpetal_length\tpetal_width\tspecies\n5.1\t3.5\t1.4\t0.2\tsetosa\n4.9\t3.0\t1.4\t0.2\tsetosa\n4.7\t3.2\t1.3\t0.2\tsetosa\n4.6\t3.1\t1.5\t0.2\tsetosa\n5.0\t3.6\t1.4\t0.2\tsetosa\n5.4\t3.9\t1.7\t0.4\tsetosa\n4.6\t3.4\t1.4\t0.3\tsetosa\n5.0\t3.4\t1.5\t0.2\tsetosa\n4.4\t2.9\t1.4\t0.2\tsetosa\n4.9\t3.1\t1.5\t0.1\tsetosa\n5.4\t3.7\t1.5\t0.2\tsetosa\n4.8\t3.4\t1.6\t0.2\tsetosa\n4.8\t3.0\t1.4\t0.1\tsetosa\n4.3\t3.0\t1.1\t0.1\tsetosa\n5.8\t4.0\t1.2\t0.2\tsetosa\n5.7\t4.4\t1.5\t0.4\tsetosa\n5.4\t3.9\t1.3\t0.4\tsetosa\n5.1\t3.5\t1.4\t0.3\tsetosa\n5.7\t3.8\t1.7\t0.3\tsetosa\n5.1\t3.8\t1.5\t0.3\tsetosa\n5.4\t3.4\t1.7\t0.2\tsetosa\n5.1\t3.7\t1.5\t0.4\tsetosa\n4.6\t3.6\t1.0\t0.2\tsetosa\n5.1\t3.3\t1.7\t0.5\tsetosa\n4.8\t3.4\t1.9\t0.2\tsetosa\n5.0\t3.0\t1.6\t0.2\tsetosa\n5.0\t3.4\t1.6\t0.4\tsetosa\n5.2\t3.5\t1.5\t0.2\tsetosa\n5.2\t3.4\t1.4\t0.2\tsetosa\n4.7\t3.2\t1.6\t0.2\tsetosa\n4.8\t3.1\t1.6\t0.2\tsetosa\n5.4\t3.4\t1.5\t0.4\tsetosa\n5.2\t4.1\t1.5\t0.1\tsetosa\n5.5\t4.2\t1.4\t0.2\tsetosa\n4.9\t3.1\t1.5\t0.2\tsetosa\n5.0\t3.2\t1.2\t0.2\tsetosa\n5.5\t3.5\t1.3\t0.2\tsetosa\n4.9
\t3.6\t1.4\t0.1\tsetosa\n4.4\t3.0\t1.3\t0.2\tsetosa\n5.1\t3.4\t1.5\t0.2\tsetosa\n5.0\t3.5\t1.3\t0.3\tsetosa\n4.5\t2.3\t1.3\t0.3\tsetosa\n4.4\t3.2\t1.3\t0.2\tsetosa\n5.0\t3.5\t1.6\t0.6\tsetosa\n5.1\t3.8\t1.9\t0.4\tsetosa\n4.8\t3.0\t1.4\t0.3\tsetosa\n5.1\t3.8\t1.6\t0.2\tsetosa\n4.6\t3.2\t1.4\t0.2\tsetosa\n5.3\t3.7\t1.5\t0.2\tsetosa\n5.0\t3.3\t1.4\t0.2\tsetosa\n7.0\t3.2\t4.7\t1.4\tversicolor\n6.4\t3.2\t4.5\t1.5\tversicolor\n6.9\t3.1\t4.9\t1.5\tversicolor\n5.5\t2.3\t4.0\t1.3\tversicolor\n6.5\t2.8\t4.6\t1.5\tversicolor\n5.7\t2.8\t4.5\t1.3\tversicolor\n6.3\t3.3\t4.7\t1.6\tversicolor\n4.9\t2.4\t3.3\t1.0\tversicolor\n6.6\t2.9\t4.6\t1.3\tversicolor\n5.2\t2.7\t3.9\t1.4\tversicolor\n5.0\t2.0\t3.5\t1.0\tversicolor\n5.9\t3.0\t4.2\t1.5\tversicolor\n6.0\t2.2\t4.0\t1.0\tversicolor\n6.1\t2.9\t4.7\t1.4\tversicolor\n5.6\t2.9\t3.6\t1.3\tversicolor\n6.7\t3.1\t4.4\t1.4\tversicolor\n5.6\t3.0\t4.5\t1.5\tversicolor\n5.8\t2.7\t4.1\t1.0\tversicolor\n6.2\t2.2\t4.5\t1.5\tversicolor\n5.6\t2.5\t3.9\t1.1\tversicolor\n5.9\t3.2\t4.8\t1.8\tversicolor\n6.1\t2.8\t4.0\t1.3\tversicolor\n6.3\t2.5\t4.9\t1.5\tversicolor\n6.1\t2.8\t4.7\t1.2\tversicolor\n6.4\t2.9\t4.3\t1.3\tversicolor\n6.6\t3.0\t4.4\t1.4\tversicolor\n6.8\t2.8\t4.8\t1.4\tversicolor\n6.7\t3.0\t5.0\t1.7\tversicolor\n6.0\t2.9\t4.5\t1.5\tversicolor\n5.7\t2.6\t3.5\t1.0\tversicolor\n5.5\t2.4\t3.8\t1.1\tversicolor\n5.5\t2.4\t3.7\t1.0\tversicolor\n5.8\t2.7\t3.9\t1.2\tversicolor\n6.0\t2.7\t5.1\t1.6\tversicolor\n5.4\t3.0\t4.5\t1.5\tversicolor\n6.0\t3.4\t4.5\t1.6\tversicolor\n6.7\t3.1\t4.7\t1.5\tversicolor\n6.3\t2.3\t4.4\t1.3\tversicolor\n5.6\t3.0\t4.1\t1.3\tversicolor\n5.5\t2.5\t4.0\t1.3\tversicolor\n5.5\t2.6\t4.4\t1.2\tversicolor\n6.1\t3.0\t4.6\t1.4\tversicolor\n5.8\t2.6\t4.0\t1.2\tversicolor\n5.0\t2.3\t3.3\t1.0\tversicolor\n5.6\t2.7\t4.2\t1.3\tversicolor\n5.7\t3.0\t4.2\t1.2\tversicolor\n5.7\t2.9\t4.2\t1.3\tversicolor\n6.2\t2.9\t4.3\t1.3\tversicolor\n5.1\t2.5\t3.0\t1.1\tversicolor\n5.7\t2.8\t4.1\t1.3\tversicolor\n6.3\t3.3\t6.0\t2.5\tvirginica\n5.8\t2.7
\t5.1\t1.9\tvirginica\n7.1\t3.0\t5.9\t2.1\tvirginica\n6.3\t2.9\t5.6\t1.8\tvirginica\n6.5\t3.0\t5.8\t2.2\tvirginica\n7.6\t3.0\t6.6\t2.1\tvirginica\n4.9\t2.5\t4.5\t1.7\tvirginica\n7.3\t2.9\t6.3\t1.8\tvirginica\n6.7\t2.5\t5.8\t1.8\tvirginica\n7.2\t3.6\t6.1\t2.5\tvirginica\n6.5\t3.2\t5.1\t2.0\tvirginica\n6.4\t2.7\t5.3\t1.9\tvirginica\n6.8\t3.0\t5.5\t2.1\tvirginica\n5.7\t2.5\t5.0\t2.0\tvirginica\n5.8\t2.8\t5.1\t2.4\tvirginica\n6.4\t3.2\t5.3\t2.3\tvirginica\n6.5\t3.0\t5.5\t1.8\tvirginica\n7.7\t3.8\t6.7\t2.2\tvirginica\n7.7\t2.6\t6.9\t2.3\tvirginica\n6.0\t2.2\t5.0\t1.5\tvirginica\n6.9\t3.2\t5.7\t2.3\tvirginica\n5.6\t2.8\t4.9\t2.0\tvirginica\n7.7\t2.8\t6.7\t2.0\tvirginica\n6.3\t2.7\t4.9\t1.8\tvirginica\n6.7\t3.3\t5.7\t2.1\tvirginica\n7.2\t3.2\t6.0\t1.8\tvirginica\n6.2\t2.8\t4.8\t1.8\tvirginica\n6.1\t3.0\t4.9\t1.8\tvirginica\n6.4\t2.8\t5.6\t2.1\tvirginica\n7.2\t3.0\t5.8\t1.6\tvirginica\n7.4\t2.8\t6.1\t1.9\tvirginica\n7.9\t3.8\t6.4\t2.0\tvirginica\n6.4\t2.8\t5.6\t2.2\tvirginica\n6.3\t2.8\t5.1\t1.5\tvirginica\n6.1\t2.6\t5.6\t1.4\tvirginica\n7.7\t3.0\t6.1\t2.3\tvirginica\n6.3\t3.4\t5.6\t2.4\tvirginica\n6.4\t3.1\t5.5\t1.8\tvirginica\n6.0\t3.0\t4.8\t1.8\tvirginica\n6.9\t3.1\t5.4\t2.1\tvirginica\n6.7\t3.1\t5.6\t2.4\tvirginica\n6.9\t3.1\t5.1\t2.3\tvirginica\n5.8\t2.7\t5.1\t1.9\tvirginica\n6.8\t3.2\t5.9\t2.3\tvirginica\n6.7\t3.3\t5.7\t2.5\tvirginica\n6.7\t3.0\t5.2\t2.3\tvirginica\n6.3\t2.5\t5.0\t1.9\tvirginica\n6.5\t3.0\t5.2\t2.0\tvirginica\n6.2\t3.4\t5.4\t2.3\tvirginica\n5.9\t3.0\t5.1\t1.8\tvirginica\n", + "type" : "TABLE" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Run the UDF using the Python API for embedded Python execution", + "hasTitle" : true, + "message" : [ + "%python", + "", + "oml.table_apply(MY_DF, func=build_mod, graphics=True)" + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + 
"hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991730002, + "endTime" : 1739991735518, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : " 0 1 2 accuracy macro avg weighted avg\nprecision 1.0 1.0 1.0 1.0 1.0 1.0\nrecall 1.0 1.0 1.0 1.0 1.0 1.0\nf1-score 1.0 1.0 1.0 1.0 1.0 1.0\nsupport 11.0 13.0 6.0 1.0 30.0 30.0\n", + "type" : "TEXT" + }, + { + "message" : "
\nNone\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create a string representation of the UDF", + "hasTitle" : true, + "message" : [ + "%python", + "", + "build_mod = \"\"\"def build_mod(df):", + " import oml", + " import warnings", + " warnings.filterwarnings(\"ignore\")", + " import keras", + " from keras.models import Sequential", + " from keras.layers import Dense", + " from keras.optimizers import Adam", + " import seaborn as sns", + " import pandas as pd", + " import numpy as np", + " import matplotlib.pyplot as plt", + " ", + " sns.set(style=\"ticks\")", + " sns.set_palette(\"husl\")", + " sns.pairplot(df.iloc[:,0:6], hue=\"species\")", + "", + " # split the data into test and train sets", + " X = df.iloc[:,0:4].values", + " y = df.iloc[:,4].values", + "", + " from sklearn.preprocessing import LabelEncoder", + " encoder = LabelEncoder()", + " y1 = encoder.fit_transform(y)", + " Y = pd.get_dummies(y1).values", + "", + " from sklearn.model_selection import train_test_split", + " X_train,X_test, y_train,y_test = train_test_split(X,Y,test_size=0.2,random_state=0) ", + "", + " model = Sequential()", + " model.add(Dense(4,input_shape=(4,),activation='relu'))", + " model.add(Dense(3,activation='softmax'))", + " model.compile(Adam(learning_rate=0.04),'categorical_crossentropy',metrics=['accuracy'])", + " model.fit(X_train,y_train, epochs=50, verbose=0)", + "", + " y_pred = model.predict(X_test, verbose=0)", + " y_test_class = np.argmax(y_test,axis=1)", + " y_pred_class = np.argmax(y_pred,axis=1)", + "", + " from sklearn.metrics import classification_report,confusion_matrix", + " report = classification_report(y_test_class,y_pred_class, output_dict=True)", + " res = pd.DataFrame(report)", + " plt.show()", + " return(res)\"\"\"", + " " + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + 
"hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991736012, + "endTime" : 1739991736467, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Save the string representation of the UDF to the script repository", + "hasTitle" : true, + "message" : [ + "%python", + "", + "oml.script.create(\"build_mod\", func=build_mod, overwrite=True)" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991736956, + "endTime" : 1739991737443, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 6, + "title" : "Run the UDF by referencing it as a named script", + "hasTitle" : true, + "message" : [ + "%python", + "", + "oml.table_apply(MY_DF, func=\"build_mod\", graphics=True)" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991737938, + "endTime" : 1739991742923, + "interpreter" : "python.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "
\n 0 1 2 accuracy macro avg weighted avg\nprecision 1.0 1.0 1.0 1.0 1.0 1.0\nrecall 1.0 1.0 1.0 1.0 1.0 1.0\nf1-score 1.0 1.0 1.0 1.0 1.0 1.0\nsupport 11.0 13.0 6.0 1.0 30.0 30.0\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Access Control List", + "hasTitle" : true, + "message" : [ + "%md", + "---", + "### Running UDFs in the SQL and REST APIs for embedded Python execution", + "---", + "", + "The Access Control List (ACL) provides additional protection to your Autonomous Database by allowing only the client with specific IP addresses to connect to the database. ", + "", + "As ADMIN, run the network access control list `pyqAppendHostAce` function to enable the OML user to access network services and resources from the database, where the root domain is the data center region where Autonomous Database resides. For example, if your username is *OMLUSER* and your Autonomous Database resides in the Ashburn region, the root domain is *adb.us-ashburn-1.oraclecloudapps.com*.", + "", + "Then run the token store `pyqSetAuthToken` function to persist the authorization token issued by a cloud host for use with the upcoming SQL calls." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991743405, + "endTime" : 1739991743878, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

The Access Control List (ACL) provides additional protection to your Autonomous Database by allowing only clients with specific IP addresses to connect to the database.

\n

As ADMIN, run the network access control list pyqAppendHostAce function to enable the OML user to access network services and resources from the database, where the root domain is the data center region where Autonomous Database resides. For example, if your username is OMLUSER and your Autonomous Database resides in the Ashburn region, the root domain is adb.us-ashburn-1.oraclecloudapps.com.

\n

Then run the token store pyqSetAuthToken function to persist the authorization token issued by a cloud host for use with the upcoming SQL calls.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Add user to the Access Control List", + "hasTitle" : true, + "message" : [ + "", + "exec pyqAppendHostAce('OMLUSER','adb.us-ashburn-1.oraclecloudapps.com')", + "", + " PL/SQL procedure successfully completed.", + " Elapsed: 00:00:00.370" + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : null, + "result" : null, + "relations" : [ ], + "dynamicFormParams" : null + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Define function to get authorization token", + "hasTitle" : true, + "message" : [ + "%md", + "", + "To use the SQL and REST APIs for embedded Python execution, the UDF must be stored in the script repository, and an Oracle Machine Learning (OML) cloud account USERNAME, PASSWORD, and URL must be provided to obtain an authentication token. To use a conda environment when calling OML4Py script execution endpoints, specify the conda environment in the `env_name` field when using SQL, and the `envName` field when using REST.", + "", + "Obtain the token URL from the OML service console in Autonomous Database by first signing into your Oracle Cloud Infrastructure account with your OCI user name and password:", + "", + "- Click the “three line menu” and select the desired Autonomous Database instance.", + "- Click Database Actions and scroll down to the Oracle Machine Learning RESTful Services tile.", + "- Click “Copy” to get the REST authentication token for REST APIs URL of the form: /omlusers/api/oauth2/v1/token", + "", + "The URL includes the location, tenancy ID, and database name separated by dashes. For example, https://qtraya2braestch-omldb.adb.us-sanjose-1.oraclecloudapps.com." 
+ ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991744377, + "endTime" : 1739991744839, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

To use the SQL and REST APIs for embedded Python execution, the UDF must be stored in the script repository, and an Oracle Machine Learning (OML) cloud account USERNAME, PASSWORD, and URL must be provided to obtain an authentication token. To use a conda environment when calling OML4Py script execution endpoints, specify the conda environment in the env_name field when using SQL, and the envName field when using REST.

\n

Obtain the token URL from the OML service console in Autonomous Database by first signing into your Oracle Cloud Infrastructure account with your OCI user name and password:

\n
    \n
  • Click the “three line menu” and select the desired Autonomous Database instance.
  • \n
  • Click Database Actions and scroll down to the Oracle Machine Learning RESTful Services tile.
  • \n
  • Click “Copy” to get the REST authentication token for REST APIs URL of the form: /omlusers/api/oauth2/v1/token
  • \n
\n

The URL includes the location, tenancy ID, and database name separated by dashes. For example, https://qtraya2braestch-omldb.adb.us-sanjose-1.oraclecloudapps.com.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Define get_token2 function", + "hasTitle" : true, + "message" : [ + "%script", + "", + "create or replace function get_token2 (", + " p_url varchar2,", + " p_username varchar2,", + " p_password varchar2,", + " p_verbose number default 0) ", + " return varchar2", + " authid definer as pragma autonomous_transaction;", + " l_buffer varchar2(4000);", + " l_request_content clob;", + " l_response_content clob;", + " l_request utl_http.req;", + " l_response utl_http.resp;", + " l_url varchar2(1000);", + " l_smt varchar2(1000);", + " l_access_token varchar2(4000);", + "begin", + " l_url := p_url;", + " l_request_content := q'[{\"grant_type\":\"password\", \"username\":\"]' || p_username || q'[\", \"password\":\"]' || p_password ||", + " q'[\"}]';", + " begin", + " utl_http.set_wallet('');", + " l_request := utl_http.begin_request(l_url,'POST',' HTTP/1.1');", + " utl_http.set_header(l_request,'Content-Type','application/json');", + " utl_http.set_header(l_request,'Accept','application/json');", + " utl_http.set_header(l_request,'Content-Length',length(l_request_content));", + " utl_http.write_text(l_request,l_request_content);", + " l_response := utl_http.get_response(l_request);", + " if (l_response.status_code != utl_http.HTTP_OK) THEN", + " RAISE_APPLICATION_ERROR(-20999, 'Token request failed with status code: '||l_response.status_code||', reason: '||l_response.reason_phrase);", + " end if;", + " if p_verbose > 0 then", + " dbms_output.put_line('HTTP response status code: ' || l_response.status_code);", + " dbms_output.put_line('HTTP response reason phrase: ' || l_response.reason_phrase);", + " end if;", + " begin", + " loop", + " utl_http.read_line(l_response,l_buffer);", + " l_access_token := regexp_substr(l_buffer,'\"accessToken\":\"([^\"]+)\"',1,1,'i',1);", + " if p_verbose > 0 then", + 
" dbms_output.put_line('l_access_token: [' || l_access_token || ']');", + " end if;", + " end loop;", + " utl_http.end_response(l_response);", + " exception", + " when utl_http.end_of_body then utl_http.end_response(l_response);", + " end;", + " end;", + " return(l_access_token);", + " dbms_output.put_line('Access token is: ' || l_access_token);", + "end get_token2;", + "/" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991800603, + "endTime" : 1739991801083, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nFunction GET_TOKEN2 compiled\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Create access details table to support token acquisition", + "hasTitle" : true, + "message" : [ + "%script", + "", + "BEGIN EXECUTE IMMEDIATE 'DROP TABLE ACCESS_DETAILS';", + "EXCEPTION WHEN OTHERS THEN NULL; ", + "END;", + "/", + "", + "CREATE TABLE ACCESS_DETAILS (", + " NAME VARCHAR2(20),", + " VALUE VARCHAR2(1000)", + " );", + "/", + "", + "-- Important! 
Replace the OML URL, username and password with your database-specific values", + "", + "INSERT INTO ACCESS_DETAILS VALUES ('URL', '/omlusers/api/oauth2/v1/token');", + "INSERT INTO ACCESS_DETAILS VALUES ('USERNAME', 'OMLUSER');", + "INSERT INTO ACCESS_DETAILS VALUES ('PASSWORD', 'YOUR PASSWORD');" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : "[]", + "result" : { + "startTime" : 1739991806248, + "endTime" : 1739991806788, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\nPL/SQL procedure successfully completed.\n\n\n---------------------------\n\nTable ACCESS_DETAILS created.\n\n\n---------------------------\n\n---------------------------\n\n1 row inserted.\n\n\n---------------------------\n\n1 row inserted.\n\n\n---------------------------\n\n1 row inserted.\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Obtain and set token", + "hasTitle" : true, + "message" : [ + "%script", + "", + "DECLARE", + " access_token VARCHAR2(2000);", + " is_set BOOLEAN;", + "BEGIN", + " with url_string as (select VALUE from ACCESS_DETAILS where NAME = 'URL'), ", + " username as (select VALUE from ACCESS_DETAILS where NAME = 'USERNAME'),", + " pswd as (select VALUE from ACCESS_DETAILS where NAME = 'PASSWORD')", + " SELECT get_token2(url_string.VALUE, username.VALUE, pswd.VALUE) INTO access_token ", + " FROM url_string, username, pswd ", + " WHERE localtimestamp=localtimestamp;", + "", + " pyqSetAuthToken(access_token);", + " is_set := pyqIsTokenSet();", + " IF (is_set) THEN", + " DBMS_OUTPUT.put_line ('token is set');", + " -- enable to return token", + " -- 
DBMS_OUTPUT.put_line (''|| access_token);", + " END IF;", + "END;" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991814893, + "endTime" : 1739991816375, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "token is set\n\n\nPL/SQL procedure successfully completed.\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Show the list of available OML4Py conda environments", + "hasTitle" : true, + "message" : [ + "%script", + "", + "SELECT * from table(pyqListEnvs())" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : "[{\"raw\":{\"height\":200,\"lastColumns\":[],\"version\":1}}]", + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991822775, + "endTime" : 1739991834874, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "name size description number_of_installed_packages \nmypyenv 2.2 GiB Install Python seaborn and tensorflow packages 149 \n\n\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Run the Python UDF using the SQL API for embedded Python execution - synchronous mode", + "hasTitle" : true, + "message" : [ + "%script", + "", + "set long 2000", + "", + "SELECT ID, VALUE FROM table(pyqTableEval( ", + " inp_nam => 'DF', ", + " par_lst => '{\"oml_graphics_flag\":true}', ", + " out_fmt => 'PNG', ", + " 
scr_name => 'build_mod',", + " scr_owner=> NULL,", + " env_name => 'mypyenv'));" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739991906479, + "endTime" : 1739991990282, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\n---------------------------\nID VALUE \n 1 [{\"0\":0.0,\"1\":0.0,\"2\":0.2,\"accuracy\":0.2,\"macro avg\":0.0666666667,\"weighted avg\":0.04},{\"0\":0.0,\"1\":0.0,\"2\":1.0,\"accuracy\":0.2,\"macro avg\":0.3333333333,\"weighted avg\":0.2},{\"0\":0.0,\"1\":0.0,\"2\":0.3333333333,\"accuracy\":0.2,\"macro avg\":0.1111111111,\"weighted avg\":0.0666666667},{\"0\":11.0,\"1\":13.0,\"2\":6.0,\"accuracy\":0.2,\"macro avg\":30.0,\"weighted avg\":30.0}] \n\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Run the Python UDF using the SQL API for embedded Python execution - asynchronous mode", + "hasTitle" : true, + "message" : [ + "%script", + "", + "set long 2000", + "", + "SELECT VALUE FROM table(pyqTableEval( ", + " inp_nam => 'DF', ", + " par_lst => '{\"oml_graphics_flag\":true, \"oml_async_flag\":true}', ", + " out_fmt => 'PNG', ", + " scr_name => 'build_mod',", + " scr_owner=> NULL,", + " env_name => 'mypyenv'));" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739992093748, + "endTime" : 1739992106356, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" 
: [ + { + "message" : "\n---------------------------\nVALUE \nhttps://hmnsuoc10zlzdol-moviestreamworkshop.adb.ap-tokyo-1.oraclecloudapps.com/oml/api/py-scripts/v1/jobs/2a98c846-22db-4f4f-9b01-4f2cb0284555 \n\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Get the job status", + "hasTitle" : true, + "message" : [ + "%md", + "", + "Poll the job status using the `pyqJobStatus` function. If the job is still running, the return value will be *job is still running*. When the job completes, a job ID and result location are returned. ", + "", + "Note: replace the job ID shown here with your own job ID." + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739992115147, + "endTime" : 1739992115596, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

Poll the job status using the pyqJobStatus function. If the job is still running, the return value will be job is still running. When the job completes, a job ID and result location are returned.

\n

Note: replace the job ID shown here with your own job ID.

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%script", + "", + "set long 1000", + "", + "SELECT VALUE from pyqJobStatus(job_id => '2a98c846-22db-4f4f-9b01-4f2cb0284555');" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739992209207, + "endTime" : 1739992214854, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\n---------------------------\nVALUE \nhttps://hmnsuoc10zlzdol-moviestreamworkshop.adb.ap-tokyo-1.oraclecloudapps.com/oml/api/py-scripts/v1/jobs/2a98c846-22db-4f4f-9b01-4f2cb0284555/result \n\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Retrieve the result", + "hasTitle" : true, + "message" : [ + "%script", + "", + "set long 500", + "", + "SELECT NAME, ID, VALUE, dbms_lob.substr(image,100,1) image FROM pyqJobResult(job_id => '2a98c846-22db-4f4f-9b01-4f2cb0284555', out_fmt=>'PNG');" + ], + "selectedVisualization" : "raw", + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739992224450, + "endTime" : 1739992225359, + "interpreter" : "script.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "\n---------------------------\nNAME ID VALUE IMAGE \n 1 [{\"0\":0.0,\"1\":0.0,\"2\":0.2,\"accuracy\":0.2,\"macro 
avg\":0.0666666667,\"weighted avg\":0.04},{\"0\":0.0,\"1\":0.0,\"2\":1.0,\"accuracy\":0.2,\"macro avg\":0.3333333333,\"weighted avg\":0.2},{\"0\":0.0,\"1\":0.0,\"2\":0.3333333333,\"accuracy\":0.2,\"macro avg\":0.1111111111,\"weighted avg\":0.0666666667},{\"0\":11.0,\"1\":13.0,\"2\":6.0,\"accuracy\":0.2,\"macro avg\":30.0,\"weighted avg\":30.0}] 89504E470D0A1A0A0000000D494844520000046A000003E808060000008668185B0000003A74455874536F667477617265004D6174706C6F746C69622076657273696F6E332E31302E302C2068747470733A2F2F6D6174706C6F746C69622E6F72672F94 \n\n\n\n---------------------------\n", + "type" : "TEXT" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Run the Python UDF using the REST API for embedded Python execution - synchronous mode", + "hasTitle" : true, + "message" : [ + "", + "", + "$ curl -i -X POST --header \"Authorization: Bearer \" --header 'Content-Type: application/json' \\", + " --header 'Accept: application/json' -d '{\"input\":\"select * from DF\",\"envName\":\"mypyenv\", \"graphicsFlag\":true}' \\", + " \"/oml/api/py-scripts/v1/table-apply/build_mod\"", + "", + "Returns the result:", + "", + "[{\"0\":0.0,\"1\":0.0,\"2\":0.2,\"accuracy\":0.2,\"macro avg\":0.0666666667,\"weighted avg\":0.04},...}] 89504E470D0A1A0A0000000D494844520000046A00000..." 
+ ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : null, + "result" : null, + "relations" : [ ], + "dynamicFormParams" : null + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Run the Python UDF using the REST API for embedded Python execution - asynchronous mode", + "hasTitle" : true, + "message" : [ + "", + "", + "$ curl -i -X POST --header \"Authorization: Bearer \" --header 'Content-Type: application/json' \\", + " --header 'Accept: application/json' -d '{\"input\":\"select * from DF\",\"envName\":\"mypyenv\", \"graphicsFlag\":true, \"asyncFlag\":true}' \\", + " \"/oml/api/py-scripts/v1/table-apply/build_mod\"", + "", + "Returns the job ID:", + "", + "Location: /oml/api/py-scripts/v1/jobs/3887d8a2-9b91-448c-940e-542b39205049" + ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : null, + "result" : null, + "relations" : [ ], + "dynamicFormParams" : null + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : "Retrieve the result", + "hasTitle" : true, + "message" : [ + "", + "$ curl -i -X GET --header \"Authorization: Bearer \" \\", + " --header 'Accept: application/json' \\", + " \"/oml/api/py-scripts/v1/jobs/3887d8a2-9b91-448c-940e-542b39205049/result\"", + "", + "Returns:", + "", + "[{\"0\":0.0,\"1\":0.0,\"2\":0.2,\"accuracy\":0.2,\"macro avg\":0.0666666667,\"weighted avg\":0.04},...}] 89504E470D0A1A0A0000000D494844520000046A00000..." 
+ ], + "selectedVisualization" : null, + "visualizationConfig" : null, + "hideCode" : false, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : false, + "forms" : null, + "result" : null, + "relations" : [ ], + "dynamicFormParams" : null + }, + { + "row" : 0, + "col" : 0, + "sizeX" : 0, + "width" : 12, + "title" : null, + "hasTitle" : false, + "message" : [ + "%md", + "", + "## End of script" + ], + "selectedVisualization" : "html", + "visualizationConfig" : null, + "hideCode" : true, + "hideResult" : false, + "hideGutter" : true, + "hideVizConfig" : false, + "hideInIFrame" : false, + "enabled" : true, + "forms" : "[]", + "result" : { + "startTime" : 1739992251007, + "endTime" : 1739992251454, + "interpreter" : "md.low", + "taskStatus" : "SUCCESS", + "status" : "SUCCESS", + "results" : [ + { + "message" : "

End of script

\n", + "type" : "HTML" + } + ], + "forms" : "[]" + }, + "relations" : [ ], + "dynamicFormParams" : "{}" + } + ] + } +] \ No newline at end of file diff --git a/machine-learning/notebooks-oml/python/OML4Py -0- Tour.dsnb b/machine-learning/notebooks-oml/python/OML4Py -0- Tour.dsnb new file mode 100644 index 00000000..72e587d0 --- /dev/null +++ b/machine-learning/notebooks-oml/python/OML4Py -0- Tour.dsnb @@ -0,0 +1 @@ +[{"layout":null,"template":null,"templateConfig":null,"name":"OML4Py -0- Tour","description":null,"readOnly":false,"type":"low","paragraphs":[{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":null,"title":null,"message":[],"enabled":true,"result":{"startTime":1737140124407,"interpreter":"md.low","endTime":1737140124936,"results":[],"taskStatus":"SUCCESS","forms":"[]","status":"SUCCESS"},"sizeX":0,"hideCode":true,"width":0,"hideResult":true,"dynamicFormParams":"{}","row":0,"hasTitle":false,"hideVizConfig":true,"hideGutter":true,"relations":[],"forms":"[]"},{"col":0,"visualizationConfig":null,"hideInIFrame":false,"selectedVisualization":"html","title":null,"message":["%md","# Oracle Machine Learning for Python (OML4Py)","","***Oracle Machine Learning for Python*** (OML4Py) makes the open source Python scripting language and environment ready for the enterprise and big data. Designed for problems involving both large and small data volumes, OML4Py integrates Python with Oracle Autonomous Database, allowing users to run Python commands and scripts for statistical, machine learning, and visualization analyses on database tables and views using Python syntax. Many familiar Python functions are overloaded that translate Python behavior into SQL for running in-database, as well as automated machine learning capabilities. 
In this notebook, we highlight the range of OML4Py features.","","* Automated Machine Learning (AutoML)","* Machine learning model building and scoring","* Creating database tables","* Transparency layer functionality","* Embedded Python execution","* REST API invocation of embedded Python execution","","This notebook is the first of a series (0 through 5) that is intended to introduce the range of OML4Py functionality through short examples. ","","Copyright (c) 2025 Oracle Corporation ","","###### The Universal Permissive License (UPL), Version 1.0<\/a>","---"],"enabled":true,"result":{"startTime":1737140125443,"interpreter":"md.low","endTime":1737140125909,"results":[{"message":"

Oracle Machine Learning for Python (OML4Py)<\/h1>\n

Oracle Machine Learning for Python<\/strong><\/em> (OML4Py) makes the open source Python scripting language and environment ready for the enterprise and big data. Designed for problems involving both large and small data volumes, OML4Py integrates Python with Oracle Autonomous Database, allowing users to run Python commands and scripts for statistical, machine learning, and visualization analyses on database tables and views using Python syntax. Many familiar Python functions are overloaded to translate Python behavior into SQL for in-database execution, and automated machine learning capabilities are also provided. In this notebook, we highlight the range of OML4Py features.<\/p>\n