Adding 'cluster' property to resource 'wls_server'. #318

Open · wants to merge 1 commit into biemond:master from diegoliber:master
Conversation

diegoliber

On issue #317, I mentioned that it is impossible to configure a WebLogic Cluster without knowing, a priori, all the servers and machines in the infrastructure.

This update adds a "cluster" property to the "wls_server" resource that allows a server to declare which cluster it will belong to. This partially solves the problem: a Puppet manifest can declare on the admin server which clusters are available and their main properties, and each managed server can declare which cluster it belongs to.

There is a remaining issue with updating the cluster and machine information of a wls_server, since that requires a managed server restart before activation. This contribution does not solve those edge cases yet.
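A minimal sketch of how the proposed property could be used, assuming the resource and attribute names described in this PR (attribute names other than `cluster` are illustrative, not taken from this patch):

```puppet
# On the admin server: declare the cluster and its main properties up front.
wls_cluster { 'WebCluster':
  ensure => 'present',
}

# On each managed server node: the new 'cluster' property from this PR
# lets the server itself declare which cluster it joins.
wls_server { 'wlsServer1':
  ensure  => 'present',
  cluster => 'WebCluster', # property added by this change
}
```

With this split, the admin server manifest does not need to enumerate every managed server in advance.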

…rver can be added on demand to an existing weblogic cluster.
@coveralls

Coverage Status

Coverage remained the same at 38.714% when pulling 1cb5166 on diegoliber:master into 59e04a8 on biemond:master.

@biemond
Owner

biemond commented Mar 29, 2016

Hi,

Can you check #200? It should do the same, only it delays the work to the cluster creation. With it you can create the clusters first and then the servers, or the servers first and then the clusters, and run it again. It works in every situation.

You can do this on a wls_server:

server_parameters: 'WebCluster'

and on a wls_cluster, use this:

servers: 'inherited'
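Put together as a manifest, the #200 approach described above might look like the following sketch (only the two quoted values come from this thread; the surrounding attribute names and semantics are assumptions):

```puppet
# On each managed server node: record the intended cluster on the server.
wls_server { 'wlsServer1':
  ensure            => 'present',
  server_parameters => 'WebCluster',
}

# On the cluster: 'inherited' tells the resource to pick up the servers
# that named this cluster, instead of listing them explicitly.
wls_cluster { 'WebCluster':
  ensure  => 'present',
  servers => 'inherited',
}
```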

@diegoliber
Author

Hi,

I understand the features provided by #200. I just believe it's a fragile implementation, since it depends on a specific format in the Notes attribute of the managed server (which is just a string of comma-separated values), and it requires a Puppet agent run on the managed server followed by a Puppet run on the admin server, in that order. I still have to check whether this solution avoids the need to restart the managed server.

@biemond
Owner

biemond commented Mar 30, 2016

Ok,

but you can also add the cluster config to all nodes; there is no need to run it again on the admin server.

So basically you want to do one-time provisioning on all nodes: on the admin server, just a domain with some empty clusters. After that, each server node creates a machine and a managed server, and adds itself to the cluster. This can also work for persistence and a JMS server.

How do you want to handle datasources and JMS modules? You also need something to add the server to a datasource target, or a JMS server to a subdeployment. If you don't like this approach and don't use the Notes attribute, it will remove the other targets.

Also, I have managed server control, which can subscribe to managed server changes and do an automatic restart when the config changes. This could solve your restart problem.

Maybe we should make a shared Vagrant box where we can build this use case.

Thanks
