As a user, I want to enable the job monitor on Spark 2.1
Expected behavior
The Spark job monitor is enabled and runs without errors.
Actual behavior
Results in the following stack trace:
```
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/opt/conda/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/opt/conda/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/pixiedust/pixiedust/utils/sparkJobProgressMonitor.py", line 42, in startSparkJobProgressMonitor
    progressMonitor = SparkJobProgressMonitor()
  File "/pixiedust/pixiedust/utils/sparkJobProgressMonitor.py", line 166, in __init__
    self.addSparkListener()
  File "/pixiedust/pixiedust/utils/sparkJobProgressMonitor.py", line 195, in addSparkListener
    _env.getTemplate("sparkJobProgressMonitor/addSparkListener.scala").render()
  File "/opt/conda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2115, in run_cell_magic
    result = fn(magic_arg_s, cell)
  File "<decorator-gen-123>", line 2, in scala
  File "/opt/conda/lib/python2.7/site-packages/IPython/core/magic.py", line 188, in <lambda>
    call = lambda f, *a, **k: f(*a, **k)
  File "/pixiedust/pixiedust/utils/scalaBridge.py", line 179, in scala
    runnerObject.callMethod("init", pd_getJavaSparkContext(), None if self.hasLineOption(line, "noSqlContext") else self.interactiveVariables.getVar("sqlContext")._ssql_ctx )
  File "/pixiedust/pixiedust/utils/javaBridge.py", line 135, in callMethod
    jMethodParams[i] = None if arg is None else (arg if arg.__class__.__name__ == "JavaClass" else arg.getClass())
  File "/root/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_collections.py", line 228, in __setitem__
    return self.__set_item(key, value)
  File "/root/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_collections.py", line 211, in __set_item
    return get_return_value(answer, self._gateway_client)
  File "/root/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 323, in get_return_value
    format(target_id, ".", name, value))
Py4JError: An error occurred while calling None.None. Trace:
java.lang.NullPointerException
	at py4j.commands.ArrayCommand.convertArgument(ArrayCommand.java:154)
	at py4j.commands.ArrayCommand.setArray(ArrayCommand.java:144)
	at py4j.commands.ArrayCommand.execute(ArrayCommand.java:97)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:745)

ERROR:root:Exception while sending command.
Traceback (most recent call last):
  File "/root/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 883, in send_command
    response = connection.send_command(command)
  File "/root/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1040, in send_command
    "Error while receiving", e, proto.ERROR_ON_RECEIVE)
Py4JNetworkError: Error while receiving
```
Steps to reproduce the behavior
Run pixiedust.enableJobMonitor() in a notebook running Spark 2.1 (see the minimal cell below).
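For reference, a minimal reproduction looks like the cell below. It assumes the notebook kernel already provides the Spark 2.1 SparkContext and sqlContext that PixieDust picks up; nothing else is configured.

```python
# Run in a single Jupyter notebook cell (Spark 2.1, Python 2.7).
# The SparkContext / sqlContext are assumed to come from the notebook kernel.
import pixiedust

# Enabling the job monitor is what triggers the Py4JError shown above.
pixiedust.enableJobMonitor()
```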
I also have the exact same issue running locally, so reproducibility isn't a problem. The upshot is that a major feature advertised on the IBM website simply isn't working at all?