Right now, Dataproc deploys Hadoop, Spark, PySpark, Spark SQL, Hive, and Pig, among other things like HDFS. It would be great to have an init script that controls the state of each component (on/off).
Some of us only need Spark working with Google Cloud Storage; having the other components installed and running just wastes worker resources.
Maybe we should have a script with a small "settings" section that controls which components we need to be ON and turns OFF the ones that aren't necessary; a rough sketch of the idea follows below.
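For illustration only, here is a minimal sketch of what such an init script could look like. The settings variables and the daemon names (hive-server2, hive-metastore) are assumptions based on common Hadoop Debian packaging, not confirmed Dataproc unit names:

```bash
#!/usr/bin/env bash
set -euo pipefail

# --- hypothetical settings section: flip unneeded components to "off" ---
HIVE=off
PIG=off

# Service names are assumptions; check what actually runs on a Dataproc
# node (e.g. with `systemctl list-units`) before relying on them.
if [[ "${HIVE}" == "off" ]]; then
  systemctl stop hive-server2 hive-metastore || true
  systemctl disable hive-server2 hive-metastore || true
fi

# Pig is a client-side tool with no daemon, so "off" just removes the package.
if [[ "${PIG}" == "off" ]]; then
  apt-get remove -y pig || true
fi
```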
spark2ignite changed the title from "Controlling components state" to "Control of components' state" on Dec 2, 2015
This suggestion makes sense. We'd probably want to control the individual daemons with the scripts. I am assuming you'd want the ability to turn specific packages/daemons on or off for the lifetime of the cluster, correct?
Hey @spark2ignite - We released an update today that I thought you might be interested in based on this issue. We now allow you to set cluster properties that are written into the generated XML and conf files.
While it's not a direct answer to your issue, I thought you might find it interesting!
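For reference, the cluster properties mechanism mentioned above is exposed via the gcloud CLI's --properties flag, where a prefix selects which config file a property lands in (the cluster name and property values below are placeholders):

```bash
# Bake custom Spark and YARN settings into the generated conf/XML files.
gcloud dataproc clusters create my-cluster \
  --properties 'spark:spark.executor.memory=4g,yarn:yarn.nodemanager.resource.memory-mb=8192'
```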