
BIGTOP-3015: hadoop-spark bundle correction #348

wants to merge 2 commits into base: master

commented Mar 21, 2018

This change removes the "hadoop-plugin" charm from the "hadoop-spark" bundles and tests, as it is not needed there.
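A minimal sketch of the kind of bundle change this implies. The service names, charm URLs, and relations below are illustrative assumptions about a typical hadoop-spark bundle layout, not the actual diff in this PR:

```yaml
# Hypothetical excerpt of a hadoop-spark bundle.yaml after the change.
# The hadoop-plugin subordinate is gone: spark already carries the
# hadoop client bits through its own charm relations.
services:
  spark:
    charm: cs:spark
    num_units: 1
  namenode:
    charm: cs:hadoop-namenode
    num_units: 1
  resourcemanager:
    charm: cs:hadoop-resourcemanager
    num_units: 1
relations:
  - [spark, namenode]
  - [spark, resourcemanager]
```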




commented Mar 25, 2018

@jamesbeedy, the reason hadoop-client was in this bundle was to act as an endpoint for people who were accustomed to running jobs and managing HDFS from there. In other words, muscle memory and scripts that do this will work on any of the hadoop-x bundles:

juju run --unit hadoop-client/0 'hdfs dfs -cmd'

It was also there in case someone started with hadoop-spark, switched the spark runtime from YARN to HA, then scaled spark to x units. In that scenario, the single hadoop-client would be a recognizable endpoint to facilitate hdfs administration, versus making users do that on a random spark unit.

That said, I'm coming around to your proposal to remove hadoop-client. Hulk-smashing apps onto a single unit has never been recommended and is only done to balance app density against resource cost. As you already know, spark is a hadoop client from the charms' perspective, so an additional explicit client isn't needed.

I'm +1 to this removal if you'll also pull out the client-related statements from the bundle.

remove client-related statements from README
* remove references to the now-removed hadoop-client