The XGBoost chapter includes an FAQ that describes setting the extramempercent option to a high value (120 recommended). Update this FAQ with the following text:
This is why the extramempercent option exists, and we recommend setting it to a high value, such as 120. What happens internally is that when you specify node_memory=10G and extramempercent=120, the h2o driver will ask Hadoop for 10G * (1 + 1.2) = 22G of memory. At the same time, the h2o driver will limit the memory used by the container JVM (the h2o node) to 10G, leaving the 10G * 120% = 12G of memory "unused." This memory can then be safely used by XGBoost outside of the JVM. Keep in mind that H2O algorithms will only have access to the JVM memory (10G), while XGBoost will use the native memory for model training.
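The memory arithmetic above can be sketched as follows. This is only an illustration of the calculation; the function names are hypothetical and do not correspond to actual H2O option or API names.

```python
def container_request_gb(node_memory_gb: float, extramempercent: float) -> float:
    """Total memory the h2o driver requests from Hadoop per container:
    the JVM heap plus the extra native-memory headroom."""
    return node_memory_gb * (1 + extramempercent / 100)

def native_memory_gb(node_memory_gb: float, extramempercent: float) -> float:
    """Memory left outside the JVM, usable by XGBoost for native training."""
    return node_memory_gb * extramempercent / 100

# With node_memory=10G and extramempercent=120:
print(container_request_gb(10, 120))  # 22.0 G requested from Hadoop
print(native_memory_gb(10, 120))      # 12.0 G available to XGBoost off-heap
```

Note that H2O algorithms are still capped at the 10G JVM heap; only XGBoost benefits from the extra 12G of native memory.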