[featurestore sample] Update instructions (#2380)
* Update instructions

* Bump version

* Update
bastrik committed Jun 15, 2023
1 parent d93f9cb commit 34eb5e2
Showing 9 changed files with 26 additions and 26 deletions.
@@ -128,11 +128,12 @@
"* Option 1: Create a new notebook, and execute the instructions in this document step by step. \n",
"* Option 2: Open the existing notebook named `1. Develop a feature set and register with managed feature store.ipynb`, and run it step by step. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
"\n",
"1. Select **AzureML Spark compute** in the top nav \"Compute\" dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **configure session**.\n",
"1. Select **Serverless Spark compute** in the top nav \"Compute\" dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **configure session**.\n",
"\n",
"1. Select \"configure session\" from the top nav (this could take one to two minutes to display):\n",
"\n",
" 1. Select **configure session** in the bottom nav\n",
" 1. Select **configure session** in the top status bar\n",
" 1. Select **Python packages**\n",
" 1. Select **Upload conda file**\n",
" 1. Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` located on your local device\n",
" 1. (Optional) Increase the session time-out (idle time) to reduce the serverless spark cluster startup time."
@@ -91,13 +91,11 @@
}
},
"source": [
"#### (updated for sdk+cli track) Configure Azure ML spark notebook\n",
"#### Configure Azure ML spark notebook\n",
"\n",
"1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
"\n",
"\n"
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
]
},
{
@@ -88,8 +88,8 @@
"#### (updated) Configure Azure ML spark notebook\n",
"\n",
"1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `3. Experiment and train models using features`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
]
},
{
@@ -91,8 +91,8 @@
"#### (updated) Configure Azure ML spark notebook\n",
"\n",
"1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `4. Enable recurrent materialization and run batch inference`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently"
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
]
},
{
@@ -110,16 +110,11 @@
"\n",
"2. Upload the feature store samples directory to project workspace: Open Azure ML studio UI of your Azure ML workspace -> click on \"Notebooks\" in left nav -> right click on your user name in the directory listing -> click \"upload folder\" -> select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`\n",
"\n",
"3. You can either create a new notebook and paste the instructions in this document step by step and execute OR open the existing notebook titled `1.hello_world.ipynb`. You can execute step by step. Keep this document open and refer to it for detailed explanation of the steps. The notebooks are available in the folder: `featurestore_sample/notebooks`. Select either `sdk_only` folder or the `sdk_and_cli` folder. The latter has CLI commands mixed with python sdk useful in ci/cd scenarios.\n",
"3. You can either create a new notebook and paste the instructions in this document step by step and execute OR open the existing notebook titled `1.Develop a feature set and register with managed feature store.ipynb`. You can execute step by step. Keep this document open and refer to it for detailed explanation of the steps. The notebooks are available in the folder: `featurestore_sample/notebooks`. Select either `sdk_only` folder or the `sdk_and_cli` folder. The latter has CLI commands mixed with python sdk useful in ci/cd scenarios.\n",
"\n",
"4. Enable preview access of managed spark: (To be removed after 23-May-2023) <BR> \n",
" (a) click on the \"manage preview features\" icon (looks like an announcment icon) in the top right nav of this screen <BR>\n",
" (b) enable access by selecting: \"Run notebooks and jobs on managed spark\"<BR>\n",
" If you have any issues, details steps are [here](https://learn.microsoft.com/en-us/azure/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml#prerequisites) - just enable the feature is enough for now\n",
"4. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". It may take 1-2 minutes for this activity to complete. Wait for a status bar in the top to display `configure session`\n",
"\n",
"5. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". It may take 1-2 minutes for this activity to complete. Wait for a status bar in the top to display `configure session`\n",
"\n",
"6. Click on \"configure session\" -> click on \"upload conda file\" -> select the file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` from your local machine; Also increase the session time out (idle time) if you want to reduce serverless spark cluster startup time.\n",
"5. Click on \"configure session\" -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` from your local machine; Also increase the session time out (idle time) if you want to reduce serverless spark cluster startup time.\n",
"\n",
"__Important:__ Except for this step, you need to run all the other steps every time you have a new spark session/session time out\n"
]
@@ -93,8 +93,8 @@
"source": [
"#### Configure Azure ML spark notebook\n",
"\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
"\n",
"\n"
]
@@ -92,8 +92,11 @@
},
"source": [
"#### Configure Azure ML spark notebook\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". Not that it may say it is \"Ready\", but the spark session is created only when you execute below step\n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prereqisites frequently\n"
"\n",
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
"\n",
"\n"
]
},
{
@@ -89,8 +89,11 @@
},
"source": [
"#### Configure Azure ML spark notebook\n",
"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". Not that it may say it is \"Ready\", but the spark session is created only when you execute below step\n",
"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prereqisites frequently\n"
"\n",
"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
"\n",
"\n"
]
},
{
2 changes: 1 addition & 1 deletion sdk/python/featurestore_sample/project/env/conda.yml
@@ -5,7 +5,7 @@ dependencies:
# Protobuf is needed to avoid conflict with managed spark
- protobuf==3.19.6
# Feature store core SDK
- azureml-featurestore==0.1.0b1
- azureml-featurestore==0.1.0b2
# This is needed if you want to execute the Part 2 of the "SDK" track or execute "SDK+CLI" track in the docs tutorial
- azure-cli
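
For reference, the `conda.yml` above is the environment file that the notebook instructions ask you to upload via "configure session" -> "Python packages" -> "Upload conda file". A minimal sketch of what the file might look like after this version bump follows; only the three dependency lines and their comments are taken from the diff, while the name, channels, Python version, and pip nesting are assumptions added so the sketch is self-contained.

```yaml
# Hypothetical sketch of sdk/python/featurestore_sample/project/env/conda.yml after this commit.
# Only the three pinned packages and their comments come from the diff above;
# everything else (name, channels, python version, pip nesting) is assumed.
name: featurestore-env            # assumed
channels:
  - conda-forge                   # assumed
dependencies:
  - python=3.8                    # assumed
  - pip                           # assumed
  - pip:
      # Protobuf is needed to avoid conflict with managed spark
      - protobuf==3.19.6
      # Feature store core SDK (bumped from 0.1.0b1 to 0.1.0b2 by this commit)
      - azureml-featurestore==0.1.0b2
      # Needed for Part 2 of the "SDK" track or the "SDK+CLI" track
      - azure-cli
```

When the serverless Spark session is configured with this file, restarting the session picks up the bumped `azureml-featurestore==0.1.0b2` package that the sample notebooks rely on.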

