Can't initialize H2OContext on IPv6-only machine #24

Closed
tomdz opened this issue Dec 9, 2015 · 10 comments
tomdz commented Dec 9, 2015

On a machine without IPv4, H2OContext initialization simply crashes, even in local-only mode (master = local[*]):

scala> val h2oContext = new org.apache.spark.h2o.H2OContext(sc).start()
15/12/09 11:53:33 WARN H2OContext: Increasing 'spark.locality.wait' to value 30000
[Exit 255]
tomdz commented Dec 9, 2015

This can also be reproduced on a dual-stack machine by running Spark with -Djava.net.preferIPv6Addresses=true.

mmalohlava (Member) commented:
Thanks @tomdz! We have to improve handling of the IPv6 stack in H2O itself.

tomdz commented Dec 9, 2015

Any workaround I can use in the meantime?

mmalohlava (Member) commented:
So far, I was not able to reproduce it. I have a dual-stack machine and am passing:

--conf spark.driver.extraJavaOptions="-Djava.net.preferIPv6Addresses=true" \
--conf spark.executor.extraJavaOptions="-Djava.net.preferIPv6Addresses=true"

I tried passing different values of master={local | local[*] | local-cluster[....]}, but creation of the H2O context was still successful (though I can see that even Spark is using IPv4).

However, I know that inside H2O there are a few places expecting IPv4.
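For illustration, here is a minimal sketch of the kind of IPv4 assumption that tends to break on an IPv6 stack: code that relies on a 4-byte address or casts to Inet4Address. The `toInt` helper below is hypothetical, not H2O's actual code; it is only meant to show why a 16-byte IPv6 address trips such code up.

```java
import java.net.Inet4Address;
import java.net.InetAddress;

public class Ipv4Assumption {
    // Hypothetical helper in the style of code that assumes IPv4:
    // packing an address into a 32-bit int only works for 4-byte addresses.
    static int toInt(InetAddress addr) {
        byte[] b = addr.getAddress();   // 4 bytes for IPv4, 16 for IPv6
        if (b.length != 4) {
            throw new IllegalArgumentException("not an IPv4 address: " + addr);
        }
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8)  |  (b[3] & 0xFF);
    }

    public static void main(String[] args) throws Exception {
        // Literal addresses: no DNS lookup, works without a network.
        InetAddress v4 = InetAddress.getByName("127.0.0.1");
        InetAddress v6 = InetAddress.getByName("::1");
        System.out.println(v4.getAddress().length);        // 4
        System.out.println(v6.getAddress().length);        // 16
        System.out.println(toInt(v4) == 0x7F000001);       // true
        System.out.println(v6 instanceof Inet4Address);    // false: toInt(v6) would throw
    }
}
```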

tomkraljevic (Contributor) commented:
Yes, IPv4 is a requirement, and will likely be for the foreseeable future.
Interestingly, I have yet to see an enterprise customer without IPv4.

Tom


tomdz commented Dec 9, 2015

We (Facebook) are heavily invested in IPv6 (see e.g. http://www.internetsociety.org/deploy360/wp-content/uploads/2014/04/WorldIPv6Congress-IPv6_LH-v2.pdf), and software that we run internally is generally required to play nice with IPv6-only hosts these days (which in Java is not all that difficult, FWIW).
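As a rough sketch of what "playing nice" means in Java: work with `InetAddress` rather than `Inet4Address`, and only branch on the family where the wire format genuinely differs (e.g. IPv6 literals must be bracketed in host:port strings). The `endpoint` helper and the port number are illustrative, not from any project in this thread.

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class FamilyAgnostic {
    // Format a host:port endpoint correctly for either address family:
    // IPv6 literals must be bracketed so the port separator is unambiguous.
    static String endpoint(InetAddress addr, int port) {
        String host = addr.getHostAddress();
        return (addr instanceof Inet6Address)
                ? "[" + host + "]:" + port
                : host + ":" + port;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(endpoint(InetAddress.getByName("127.0.0.1"), 54321));
        // Java renders ::1 in its expanded form.
        System.out.println(endpoint(InetAddress.getByName("::1"), 54321));
    }
}
```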

tomdz commented Dec 9, 2015

@mmalohlava I tried this with Spark 1.5.1 running on JDK 1.8.0_60-b27 on CentOS, FWIW. Maybe adding -Djava.net.preferIPv4Stack=false also helps?
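One caveat worth noting about these two properties: the JDK networking layer reads them once, early, so setting them with System.setProperty after networking classes have loaded is unreliable; they belong on the command line (which is why the extraJavaOptions route above is the right one). A tiny sketch that just reads the effective values back, for checking what a given JVM was actually launched with:

```java
public class StackPrefs {
    public static void main(String[] args) {
        // Both properties default to "false" when not passed as -D flags,
        // e.g. via spark.driver.extraJavaOptions / spark.executor.extraJavaOptions.
        String v4Only  = System.getProperty("java.net.preferIPv4Stack", "false");
        String v6First = System.getProperty("java.net.preferIPv6Addresses", "false");
        System.out.println("preferIPv4Stack=" + v4Only);
        System.out.println("preferIPv6Addresses=" + v6First);
    }
}
```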

mmalohlava (Member) commented:
@tomdz No luck so far: Java 7/Java 8 with preferIPv4Stack=false still uses IPv4. I need to try it on a dedicated Linux box.

mmalohlava (Member) commented:
Should be fixed now...
