* changes:
  - bump test submodule ptr
  - bump java submodule ptr
- Fixed varz stats keeping frameworks/runtimes around that are no longer in the DB
- Added started apps and instances
- Aggregated queries to reduce number of requests

Change-Id: I03ee8368e1e2f06819f40cb1cc74ced949f8a79f
To apply this fix, you can use MySQL (with the em_mysql2 adapter) as the cloud controller database by configuring cloud_controller.yml as follows:

    development:
      database: cloudcontroller
      host: localhost
      port: 3306
      username: root
      password: password
      adapter: em_mysql2
      encoding: utf8
      timeout: 2000

Change-Id: Ie881f617f77d1d4aacc0772b1594547b9beb1af5
Setting a default value for REVISION was incorrect. nil is the right default for this property.

Change-Id: Iaf2d636cb508c4e3469f479663f9ade8ecf9f895
All deployments (multi host and single host) are driven through a templated config file. Look at dev_setup/deployments/sample* for an example of what this config file looks like. There is sufficient documentation inside the configs, but at a high level the user specifies "jobs" to "install" and jobs that are already "installed". The "installed" jobs have properties associated with them which are used by the jobs in the "install" list, e.g. nats could be an installed job, and its properties like host/port will be used by the jobs in the install list.

The deployment code now goes through this config file and does sanity checking to verify valid job names, valid properties, etc. It leverages "rake" to manage dependencies between jobs (look at dev_setup/lib/job_*), so we enforce that all dependent jobs for a given job are either in the "installed" list or in the "install" list. Once all job specs are verified, we generate the chef runlist on the fly and the deployment proceeds to install the required components using chef.

NOTE:
1. For now, multi host service deployment is limited to selecting a service, say "redis", and the scripts will install the service node, the gateway and the redis software itself on the same host. We can add more flexibility to this later.
2. For now, we use roles to enforce chef recipe dependencies. It seems to be working well for what we have right now, and I like the fact that there is one location that maintains the chef dependencies for a given component.
3. For now, not all multi host configurations are tested; in fact, I have only verified NATS. The config file template changes for all the other components will be added in later changes, e.g. with this change you cannot run ccdb on a separate host and expect cloud controller to work. The changes to make it work are mostly about adding the right templated fields and testing them.
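The "install"/"installed" sanity check described above can be sketched roughly as follows. This is only an illustration of the rule, not the actual dev_setup code: the job names, the dependency map, and the config shape here are all hypothetical, and the real logic lives under dev_setup/lib/job_*.

```ruby
# Hypothetical dependency map: each job lists the jobs it needs.
DEPENDENCIES = {
  'nats'             => [],
  'ccdb'             => [],
  'cloud_controller' => ['nats', 'ccdb'],
  'redis_node'       => ['nats'],
}

# Verify a deployment spec: every job name must be known, and every
# dependency of a job in the "install" list must itself appear in
# either the "install" list or the "installed" list.
def verify_spec(spec)
  install   = spec['jobs']['install'] || []
  installed = (spec['jobs']['installed'] || {}).keys
  errors    = []

  (install + installed).each do |job|
    errors << "unknown job: #{job}" unless DEPENDENCIES.key?(job)
  end

  install.each do |job|
    (DEPENDENCIES[job] || []).each do |dep|
      unless install.include?(dep) || installed.include?(dep)
        errors << "#{job} depends on #{dep}, which is neither installed nor being installed"
      end
    end
  end
  errors
end

# Example: nats is already installed on another host (with its
# host/port recorded as properties), so installing cloud_controller
# and ccdb on this host passes the check.
spec = {
  'jobs' => {
    'install'   => ['cloud_controller', 'ccdb'],
    'installed' => { 'nats' => { 'host' => '10.0.0.5', 'port' => 4222 } },
  },
}
puts verify_spec(spec).empty? ? 'spec ok' : verify_spec(spec).join("\n")
```

Dropping nats from the "installed" map in the example would produce an error for cloud_controller, which is exactly the constraint the deployment code enforces before building the chef runlist.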
Testing Done: I have verified that I can install "nats" on one box and the rest of the components on a different box, and things work fine.

Change-Id: I165b01fd65e4283748cf2cf9b2438369ae6332ce
…n to yml. Added template files for all cf components, including services. Added comments to the deployment yaml config file. Cleaned up some of the scripts.

Change-Id: I9209749ab9ca50a2bd894189c571e1b4c33bc77b
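A json-to-yml config conversion like the one in this change can be done mechanically. The helper below is only an illustration (the filenames and the example content are made up, and the actual dev_setup templates were converted by hand in this change):

```ruby
require 'json'
require 'yaml'

# Convert a JSON config file to a YAML config file.
def json_to_yml(json_path, yml_path)
  config = JSON.parse(File.read(json_path))
  File.write(yml_path, YAML.dump(config))
end

# Example round trip with a throwaway file:
File.write('deployment.json', JSON.dump('jobs' => { 'install' => ['nats'] }))
json_to_yml('deployment.json', 'deployment.yml')
```

Since YAML is a superset of the data model JSON uses here, the conversion is lossless for plain hashes, arrays, strings and numbers.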