
ArrayIndexOutOfBoundsException when converting Spark DataFrame to H2OFrame #29

Closed
aswinjoseroy opened this issue Apr 11, 2016 · 3 comments


@aswinjoseroy

After initializing the H2OContext with val h2o = H2OContext.getOrCreate(sc), I try to convert my Spark DataFrame to an H2OFrame:

val h2oDF: H2OFrame = dataFrame

This throws a java.lang.ArrayIndexOutOfBoundsException. I am running Spark 1.6.0 and Sparkling Water 1.6.1. What might the reason be?
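For reference, the full path I'm going through looks roughly like this (a simplified sketch; the data below is just a placeholder, the real DataFrame comes out of my own pipeline, and the exact import for the implicit conversion may differ slightly between Sparkling Water versions):

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.h2o._

// sc is the SparkContext provided by the shell / application
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Placeholder data; the real DataFrame comes from my own pipeline
val dataFrame = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")

// Start (or reuse) the H2O cloud on top of Spark
val h2o = H2OContext.getOrCreate(sc)
import h2o._  // brings the implicit DataFrame -> H2OFrame conversion into scope

// The implicit conversion below is where the exception is thrown
val h2oDF: H2OFrame = dataFrame
```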

Thanks!

@jakubhava
Contributor

Hi Aswin-roy,
Could you please provide us with:

  • content of your MASTER environment variable
  • complete code/script you are trying to execute

Without this information I'm not able to reproduce the issue. I suppose you are using the Sparkling shell?

Thanks, Kuba

@aswinjoseroy
Author

I am trying to run a Spark Streaming job that performs some operations and then moves on to predictions (using a trained model loaded from disk). I am running the job on a Spark 1.6.0 standalone setup (single node). The number of executors specified in my conf is 49. However, when I log my h2oContext it shows something like:

Sparkling Water Context:

  • H2O name: sparkling-water-root_612514996

  • number of executors: 48

  • list of used executors:
    (executorId, host, port)
    ....

    Open H2O Flow in browser: http://null:0 (CMD + click in Mac OSX)

even though in the Spark UI I can see that 49 executors are up. Why is this happening? After this, I get ERROR JobScheduler: Error running job streaming job 1460453735000 ms.0 followed by java.lang.ArrayIndexOutOfBoundsException: 65535 on the H2OFrame creation line.

I often get these exceptions as well:
Exception in thread "main" java.lang.RuntimeException: Cloud size under 48

ERROR LiveListenerBus: Listener anon1 threw an exception
java.lang.IllegalArgumentException: Executor without H2O instance discovered, killing the cloud!

I run the job using spark-submit with the --packages ai.h2o:sparkling-water-core_2.10:1.6.1 parameter. What might be wrong with my setup?
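
In case it helps, the overall shape of the job is roughly the following (heavily simplified sketch; the input source, batch interval, parsing and model scoring are placeholders for what the real job does, and the executor configuration is elided):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.h2o._

val conf = new SparkConf().setAppName("streaming-predictions")  // placeholder app name
// executor configuration elided -- my conf asks for 49 executors
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// This is where the log above shows "number of executors: 48"
val h2oContext = H2OContext.getOrCreate(sc)
import h2oContext._

val ssc = new StreamingContext(sc, Seconds(10))           // placeholder batch interval
val stream = ssc.socketTextStream("localhost", 9999)      // placeholder input source

stream.foreachRDD { rdd =>
  val df = rdd.map(Tuple1(_)).toDF("raw")                 // placeholder parsing
  val h2oFrame: H2OFrame = df                             // AIOOBE: 65535 is thrown here
  // ... score the frame with the model loaded from disk ...
}

ssc.start()
ssc.awaitTermination()
```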

Thanks!

@jakubhava
Contributor

Hi aswin-roy,
thanks for the detailed explanation!

I saw that you already commented on issue #4. The problem explained there is exactly why we are sometimes unable to initialize the H2OContext: we could not discover all Spark executors during the creation of the H2OContext.

In the Sparkling Water version you are using, we added a listener that checks for changes in the cluster topology and kills the cloud if a new executor without an H2O instance appears. It's not great, but at least we get notified about what is happening.

We are having an architectural discussion with the rest of the Sparkling Water team and the community about the best approach for dealing with this.

The java.lang.ArrayIndexOutOfBoundsException is probably just a consequence of the failed H2OContext initialisation.
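
Until we have a better solution, one thing you can try is to make the set of executors stable before the H2OContext is created. The snippet below is just a sketch of a possible mitigation, not a guaranteed fix; please double-check the property names (in particular spark.ext.h2o.cluster.size) against the configuration docs for your Sparkling Water version:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.h2o.H2OContext

val conf = new SparkConf()
  .setAppName("streaming-predictions")                      // placeholder app name
  .set("spark.dynamicAllocation.enabled", "false")          // keep the executor set fixed
  .set("spark.scheduler.minRegisteredResourcesRatio", "1")  // wait for all executors before scheduling
  .set("spark.ext.h2o.cluster.size", "49")                  // expected H2O cloud size (verify for 1.6.1)

val sc = new SparkContext(conf)

// Create the H2OContext only once all executors have registered; if the topology
// still changes afterwards, the listener described above will kill the cloud.
val h2oContext = H2OContext.getOrCreate(sc)
println(h2oContext)  // check that "number of executors" matches what you requested
```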

I will close this one and redirect you to #4; please continue the discussion by commenting there.

Thanks, Kuba
