Unable to edit notebooks created from ADS in VS Code #2820

Closed
uc-msft opened this Issue Oct 11, 2018 · 2 comments

uc-msft commented Oct 11, 2018

Issue Type: Bug

Create a notebook in ADS, then open it in VS Code with the AI Explorer extension. All sorts of errors ensue in AI Explorer when you try to view the notebook, and the kernel there eventually dies.

Example error:

```
Notebook validation failed: Additional properties are not allowed ('transient' was unexpected):
{
  "metadata": {},
  "data": {
    "text/plain": "<IPython.core.display.HTML object>",
    "text/html": "..."
  },
  "transient": {},
  "output_type": "display_data"
}
```

The `text/html` payload in that output is sparkmagic's help table of available magics:

| Magic | Example | Explanation |
| --- | --- | --- |
| info | `%%info` | Outputs session information for the current Livy endpoint. |
| cleanup | `%%cleanup -f` | Deletes all sessions for the current Livy endpoint, including this notebook's session. The force flag is mandatory. |
| delete | `%%delete -f -s 0` | Deletes a session by number for the current Livy endpoint. Cannot delete this kernel's session. |
| logs | `%%logs` | Outputs the current session's Livy logs. |
| configure | `%%configure -f`<br>`{"executorMemory": "1000M", "executorCores": 4}` | Configure the session creation parameters. The force flag is mandatory if a session has already been created; the session will be dropped and recreated. Look at [Livy's POST /sessions Request Body](https://github.com/cloudera/livy#request-body) for a list of valid parameters. Parameters must be passed in as a JSON string. |
| spark | `%%spark -o df`<br>`df = spark.read.parquet('...` | Executes spark commands.<br>Parameters:<br>• `-o VAR_NAME`: The Spark dataframe of name VAR_NAME will be available in the `%%local` Python context as a [Pandas](http://pandas.pydata.org/) dataframe with the same name.<br>• `-m METHOD`: Sample method, either `take` or `sample`.<br>• `-n MAXROWS`: The maximum number of rows of a dataframe that will be pulled from Livy to Jupyter. If this number is negative, then the number of rows will be unlimited.<br>• `-r FRACTION`: Fraction used for sampling. |
| sql | `%%sql -o tables -q`<br>`SHOW TABLES` | Executes a SQL query against the variable `sqlContext` (Spark v1.x) or `spark` (Spark v2.x).<br>Parameters:<br>• `-o VAR_NAME`: The result of the SQL query will be available in the `%%local` Python context as a [Pandas](http://pandas.pydata.org/) dataframe.<br>• `-q`: The magic will return None instead of the dataframe (no visualization).<br>• `-m`, `-n`, `-r` are the same as the `%%spark` parameters above. |
| local | `%%local`<br>`a = 1` | All the code in subsequent lines will be executed locally. Code must be valid Python code. |

Azure Data Studio version: azuredatastudio 1.0.0 (cab8f3e, 2018-09-20T22:04:35.897Z)
OS version: Windows_NT x64 10.0.18252
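
Until this is addressed, one possible workaround is to strip the unexpected key before opening the file elsewhere. The sketch below assumes the top-level `transient` property on outputs is the only thing failing validation; the file name is illustrative, not from this issue.

```python
# Workaround sketch (assumption: only the 'transient' key on cell outputs is invalid).
import nbformat

nb = nbformat.read("ads_notebook.ipynb", as_version=4)  # illustrative path

for cell in nb.cells:
    for output in cell.get("outputs", []):
        output.pop("transient", None)  # drop the disallowed property if present

nbformat.validate(nb)  # raises if anything else is still invalid
nbformat.write(nb, "ads_notebook.ipynb")
```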

kevcunnane (Member) commented Oct 11, 2018

Hi UC, can you share the notebook with me? Thanks for raising this issue.

rajmusuku (Contributor) commented Oct 30, 2018

The issue is fixed. We no longer see the additional transient tag when opening a notebook in Jupyter.
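
For notebooks saved before the fix, a quick sanity check (a sketch only; the path is illustrative) is to run the file through nbformat validation:

```python
# Sketch: confirm a notebook now passes the nbformat schema.
import nbformat

nb = nbformat.read("ads_notebook.ipynb", as_version=4)  # illustrative path
nbformat.validate(nb)  # raises ValidationError if e.g. 'transient' is still present
print("notebook validates cleanly")
```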

@rajmusuku rajmusuku closed this Oct 30, 2018
