This repository has been archived by the owner on Nov 29, 2017. It is now read-only.

core/0 doesn't start correctly #43

Closed
drnic opened this issue Jan 14, 2013 · 31 comments


drnic commented Jan 14, 2013

Does it start correctly on the VM (and do we just need to increase the timeout)?
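One rough way to answer that, assuming a standard BOSH stemcell layout (the monit path below is an assumption, not something quoted in this thread): ssh onto the VM and watch whether the jobs eventually report as running. If they do, raising canary_watch_time in the manifest's update block may be all that's needed.

$ bosh ssh core/0
core/0$ sudo /var/vcap/bosh/bin/monit summary              # assumption: monit's location on BOSH stemcells
core/0$ sleep 120; sudo /var/vcap/bosh/bin/monit summary   # all "running" a couple of minutes later suggests a canary timeout, not a broken job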


drnic commented Jan 14, 2013

$ tail -f -n 200 /var/vcap/monit/monit.log
[UTC Jan 14 03:41:56] error    : 'cloud_controller' process is not running
[UTC Jan 14 03:41:56] info     : 'cloud_controller' trying to restart
[UTC Jan 14 03:41:56] info     : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Jan 14 03:41:57] error    : 'acm' process is not running
[UTC Jan 14 03:41:57] info     : 'acm' trying to restart
[UTC Jan 14 03:41:57] info     : 'acm' start: /var/vcap/jobs/acm/bin/acm_ctl
[UTC Jan 14 03:41:58] error    : 'postgresql_gateway' process is not running
[UTC Jan 14 03:41:58] info     : 'postgresql_gateway' trying to restart
[UTC Jan 14 03:41:58] info     : 'postgresql_gateway' start: /var/vcap/jobs/postgresql_gateway/bin/postgresql_gateway_ctl
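monit only reports here that the processes are not running; the reason is usually in each job's own logs. A minimal sketch, using the /var/vcap/sys/log/<job>/ layout that shows up later in this thread:

$ tail -n 50 /var/vcap/sys/log/acm/acm.stderr.log
$ tail -n 50 /var/vcap/sys/log/cloud_controller/*.log
$ tail -n 50 /var/vcap/sys/log/postgresql_gateway/*.log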


drnic commented Jan 14, 2013

Processes that were running on core/0:

 2635 ?        S      0:00 /bin/bash /var/vcap/jobs/serialization_data_server/bin/serialization_data_server_ctl start
 2645 ?        S      0:00 mount -t nfs 54.235.200.165:/cfsnapshot /var/vcap/service_lifecycle
 2646 ?        D      0:00 /sbin/mount.nfs 54.235.200.165:/cfsnapshot /var/vcap/service_lifecycle -o rw
 2772 ?        Sl     0:03 ruby /var/vcap/packages/cloud_controller/health_manager/bin/health_manager -c /var/vcap/jobs/health_manager/config/health_manager.yml
 3227 ?        Ss     0:00 sshd: bosh_w2g01eieh [priv]
 3287 ?        S      0:00 sshd: bosh_w2g01eieh@pts/0
 3288 pts/0    Ss     0:00 -bash
 3345 pts/0    S      0:00 su -
 3346 pts/0    S      0:00 -su
 3361 ?        S      0:00 /bin/bash /var/vcap/jobs/postgres/bin/postgres_ctl start
 3364 ?        S      0:00 su - vcap -c LD_LIBRARY_PATH=/var/vcap/packages/postgres/lib: /var/vcap/packages/postgres/bin/pg_ctl -o "-h 10.4.70.116 -p 2544" -w start -D /var/vcap/
 3365 ?        S      0:00 /var/vcap/packages/postgres/bin/pg_ctl -o -h 10.4.70.116 -p 2544 -w start -D /var/vcap/store/postgres -l /var/vcap/store/postgres/pg_log/startup.log
 3371 ?        S      0:00 /bin/bash /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl start
 3389 ?        Rl     0:05 ruby /var/vcap/packages/cloud_controller/cloud_controller/vendor/bundle/ruby/1.9.1/bin/rake db:migrate
 3409 ?        Rl     0:03 ruby /var/vcap/packages/postgresql_gateway/services/ng/postgresql/bin/postgresql_gateway -c /var/vcap/jobs/postgresql_gateway/config/postgresql_gateway
 3419 pts/0    R+     0:00 ps ax
12048 ?        Sl     0:13 ruby /var/vcap/packages/nats/vendor/bundle/ruby/1.9.1/bin/nats-server -c /var/vcap/jobs/nats/config/nats.yml
12066 ?        Ss     0:00 nginx: master process /var/vcap/packages/nginx/sbin/nginx -c /var/vcap/jobs/router/config/nginx.conf
12067 ?        S      0:00 nginx: worker process                                                         
12069 ?        Sl     0:13 ruby /var/vcap/packages/router/bin/router -c /var/vcap/jobs/router/config/router.yml
12082 ?        Sl     0:00 rsyslogd -c4
12158 ?        Sl     0:12 /var/vcap/packages/ruby/bin/ruby /var/vcap/packages/stager/bin/stager -c /var/vcap/jobs/stager/config/stager.yml
12214 ?        Sl     0:54 /var/vcap/packages/uaa/jre/bin/java -Djava.util.logging.config.file=/var/vcap/data/uaa/tomcat/conf/logging.properties -DPID=12214 -Djava.util.logging.m
12301 ?        Sl     0:12 /var/vcap/packages/ruby/bin/ruby /var/vcap/packages/uaa/vcap_registrar/bin/vcap_registrar --register_with_router --register_varz_credentials
12360 ?        Ss     0:01 /var/vcap/packages/redis22/bin/redis-server /var/vcap/jobs/vcap_redis/config/vcap_redis.conf


drnic commented Jan 14, 2013

acm is not mentioned in the ps ax listing above, but cloud_controller and postgresql_gateway are.
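A quick way to double-check which monit-managed jobs actually have a live process (just a sketch; the job names are taken from the monit log above):

$ ps ax | egrep "acm|cloud_controller|postgresql_gateway" | grep -v egrep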


drnic commented Jan 14, 2013

On an m1.small without postgresql_gateway, core/0 started successfully.

Updating job core
  core/0 (canary) (00:03:23)                                                                        
Done                    1/1 00:03:23                                                                


drnic commented Jan 14, 2013

acm is still not running, as confirmed by monit.log.


drnic commented Jan 14, 2013

acm startup error:

==> /var/vcap/sys/log/acm/acm.stderr.log <==
/var/vcap/packages/acm/acm/vendor/bundle/ruby/1.9.1/gems/sequel-3.33.0/lib/sequel/adapters/postgres.rb:249:in `initialize': PG::Error: could not connect to server: Connection refused (Sequel::DatabaseConnectionError)
    Is the server running on host "54.235.200.165" and accepting
    TCP/IP connections on port 2544?
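So acm is configured to reach postgres at 54.235.200.165:2544 and the connection is refused. A hedged check from the VM itself (these particular commands are an assumption, not part of the original report):

core/0$ netstat -lnt | grep 2544       # which address, if any, is postgres actually listening on?
core/0$ nc -zv 54.235.200.165 2544     # is anything reachable at the address acm was given?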


drnic commented Jan 14, 2013

Why is postgres binding to the public IP? From its startup.log:

LOG:  could not bind IPv4 socket: Cannot assign requested address
HINT:  Is another postmaster already running on port 2544? If not, wait a few seconds and retry.
WARNING:  could not create listen socket for "54.235.200.165"
FATAL:  could not create any TCP/IP sockets
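The ps listing earlier shows pg_ctl being launched with an explicit -h <address>. If the job template is rendered with the elastic/public IP (54.235.200.165) rather than the VM's private address, postgres cannot bind it: on EC2 the public IP is NATed and never appears on the instance's own interfaces. A sketch of how to confirm (grepping the ctl script for the rendered address is an assumption about where it lives):

core/0$ sudo grep -n -- "-h " /var/vcap/jobs/postgres/bin/postgres_ctl   # which listen address was rendered into the job?
core/0$ ip addr show | grep "inet "                                      # only the 10.x private address should appear here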


drnic commented Jan 14, 2013

Bah, at the very least it looks like I built off appcloud instead of my patched appcloud-dev.


drnic commented Jan 14, 2013

It starts up correctly when using #42; created #46 for the specific issue.

drnic closed this as completed Jan 14, 2013
@tpradeep

I am having this issue. What's going wrong? How do I debug the problem?

Preparing DNS
binding DNS (00:00:00)
Done 1/1 00:00:00

Creating bound missing VMs
core/0 (00:00:30)
Done 1/1 00:00:30

Binding instance VMs
core/0 (00:00:01)
Done 1/1 00:00:01

Preparing configuration
binding configuration (00:00:01)
Done 1/1 00:00:01

Updating job core
core/0 (canary) (00:03:46)
Done 1/1 00:03:46

Error 400007: `core/0' is not running after update

Task 3 error
/usr/local/lib/ruby/1.9.1/rake/file_utils.rb:53:in `block in create_shell_runner': Command failed with status (1): [bosh -n --color deploy...] (RuntimeError)
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `call'
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `sh'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:632:in `bosh_cmd'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:156:in `block in deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `each'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/command_handler.rb:57:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:61:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:18:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/bin/bosh:16:in `<top (required)>'
    from /usr/local/bin/bosh:23:in `load'
    from /usr/local/bin/bosh:23:in `<main>'


drnic commented Mar 13, 2013

If you re-run "bosh deploy" does it work?

Next run: bosh ssh core/0

And see if you can find out what's not starting (in time).


@tpradeep

Nothing is running on core/0. I had decreased the number of workers to 4. Is that a problem?

[UTC Mar 14 05:44:04] info : 'system_ce923067-79b4-4dbb-b707-ea798617b322' Monit reloaded
[UTC Mar 14 05:44:09] info : start service 'postgres' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : Awakened by User defined signal 1
[UTC Mar 14 05:44:09] info : start service 'nats' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'nginx_router' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'router' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'health_manager_next' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : 'postgres' start: /var/vcap/jobs/postgres/bin/postgres_ctl
[UTC Mar 14 05:44:09] info : start service 'cloud_controller' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'stager' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'uaa' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'vcap_registrar' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'vcap_redis' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:09] info : start service 'dea' on user request
[UTC Mar 14 05:44:09] info : monit daemon at 1001 awakened
[UTC Mar 14 05:44:10] info : 'postgres' start action done
[UTC Mar 14 05:44:10] info : 'nats' start: /var/vcap/jobs/nats/bin/nats_ctl
[UTC Mar 14 05:44:11] info : 'nats' start action done
[UTC Mar 14 05:44:11] info : 'nginx_router' start: /var/vcap/jobs/router/bin/nginx_ctl
[UTC Mar 14 05:44:12] info : 'nginx_router' start action done
[UTC Mar 14 05:44:12] info : 'router' start: /var/vcap/jobs/router/bin/router_ctl
[UTC Mar 14 05:44:13] info : 'router' start action done
[UTC Mar 14 05:44:13] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:44:14] info : 'health_manager_next' start action done
[UTC Mar 14 05:44:14] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:44:15] info : 'cloud_controller' start action done
[UTC Mar 14 05:44:15] info : 'stager' start: /var/vcap/jobs/stager/bin/stager_ctl
[UTC Mar 14 05:44:16] info : 'stager' start action done
[UTC Mar 14 05:44:17] info : 'uaa' start: /var/vcap/jobs/uaa/bin/uaa_ctl
[UTC Mar 14 05:44:18] info : 'uaa' start action done
[UTC Mar 14 05:44:18] info : 'vcap_registrar' start: /var/vcap/jobs/uaa/bin/vcap_registrar_ctl
[UTC Mar 14 05:44:19] info : 'vcap_registrar' start action done
[UTC Mar 14 05:44:19] info : 'vcap_redis' start: /var/vcap/jobs/vcap_redis/bin/vcap_redis_ctl
[UTC Mar 14 05:44:20] info : 'vcap_redis' start action done
[UTC Mar 14 05:44:20] info : 'dea' start: /var/vcap/jobs/dea/bin/dea_ctl
[UTC Mar 14 05:44:21] info : 'dea' start action done
[UTC Mar 14 05:44:21] info : Awakened by User defined signal 1
[UTC Mar 14 05:44:21] error : 'health_manager_next' process is not running
[UTC Mar 14 05:44:21] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:44:21] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:44:22] error : 'cloud_controller' process is not running
[UTC Mar 14 05:44:22] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:44:22] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:44:33] error : 'health_manager_next' process is not running
[UTC Mar 14 05:44:33] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:44:33] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:44:34] error : 'cloud_controller' process is not running
[UTC Mar 14 05:44:34] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:44:34] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:44:45] error : 'health_manager_next' process is not running
[UTC Mar 14 05:44:45] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:44:45] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:44:46] error : 'cloud_controller' process is not running
[UTC Mar 14 05:44:46] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:44:46] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:44:57] error : 'health_manager_next' process is not running
[UTC Mar 14 05:44:57] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:44:57] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:44:58] error : 'cloud_controller' process is not running
[UTC Mar 14 05:44:58] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:44:58] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:45:09] error : 'health_manager_next' process is not running
[UTC Mar 14 05:45:09] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:45:09] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:45:10] error : 'cloud_controller' process is not running
[UTC Mar 14 05:45:10] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:45:10] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:45:40] error : 'cloud_controller' failed to start
[UTC Mar 14 05:45:50] error : 'health_manager_next' process is not running
[UTC Mar 14 05:45:50] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:45:50] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:45:51] error : 'cloud_controller' process is not running
[UTC Mar 14 05:45:51] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:45:51] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:46:21] error : 'cloud_controller' failed to start
[UTC Mar 14 05:46:31] error : 'health_manager_next' process is not running
[UTC Mar 14 05:46:31] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:46:31] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:46:32] error : 'cloud_controller' process is not running
[UTC Mar 14 05:46:32] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:46:32] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:47:02] error : 'cloud_controller' failed to start
[UTC Mar 14 05:47:12] error : 'health_manager_next' process is not running
[UTC Mar 14 05:47:12] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:47:12] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:47:13] error : 'cloud_controller' process is not running
[UTC Mar 14 05:47:13] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:47:13] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:47:14] info : 'cloud_controller' started
[UTC Mar 14 05:47:24] error : 'health_manager_next' process is not running
[UTC Mar 14 05:47:24] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:47:24] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:47:25] error : 'cloud_controller' process is not running
[UTC Mar 14 05:47:25] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:47:25] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:47:36] error : 'health_manager_next' process is not running
[UTC Mar 14 05:47:36] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:47:36] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:47:37] error : 'cloud_controller' process is not running
[UTC Mar 14 05:47:37] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:47:37] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:47:48] error : 'health_manager_next' process is not running
[UTC Mar 14 05:47:48] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:47:48] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:47:49] error : 'cloud_controller' process is not running
[UTC Mar 14 05:47:49] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:47:49] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:48:00] error : 'health_manager_next' process is not running
[UTC Mar 14 05:48:00] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:48:00] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:48:01] error : 'cloud_controller' process is not running
[UTC Mar 14 05:48:01] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:48:01] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:48:31] error : 'cloud_controller' failed to start
[UTC Mar 14 05:48:41] error : 'health_manager_next' process is not running
[UTC Mar 14 05:48:41] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:48:41] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:48:42] error : 'cloud_controller' process is not running
[UTC Mar 14 05:48:42] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:48:42] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:48:43] info : 'cloud_controller' started
[UTC Mar 14 05:48:53] error : 'health_manager_next' process is not running
[UTC Mar 14 05:48:53] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:48:53] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:48:54] error : 'cloud_controller' process is not running
[UTC Mar 14 05:48:54] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:48:54] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:49:05] error : 'health_manager_next' process is not running
[UTC Mar 14 05:49:05] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:49:05] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:49:06] error : 'cloud_controller' process is not running
[UTC Mar 14 05:49:06] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:49:06] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:49:17] error : 'health_manager_next' process is not running
[UTC Mar 14 05:49:17] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:49:17] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:49:18] error : 'cloud_controller' process is not running
[UTC Mar 14 05:49:18] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:49:18] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl
[UTC Mar 14 05:49:48] error : 'cloud_controller' failed to start
[UTC Mar 14 05:49:58] error : 'health_manager_next' process is not running
[UTC Mar 14 05:49:58] info : 'health_manager_next' trying to restart
[UTC Mar 14 05:49:58] info : 'health_manager_next' start: /var/vcap/jobs/health_manager_next/bin/health_manager_next_ctl
[UTC Mar 14 05:49:59] error : 'cloud_controller' process is not running
[UTC Mar 14 05:49:59] info : 'cloud_controller' trying to restart
[UTC Mar 14 05:49:59] info : 'cloud_controller' start: /var/vcap/jobs/cloud_controller/bin/cloud_controller_ctl

@tpradeep

I forgot to mention that I am executing "bosh cf deploy".

@tpradeep

Deleted the deployment and started again. Still the same problem.


drnic commented Mar 14, 2013

If apps aren't starting, then your monit logs are the place to go first:

tail -f -n 200 /var/vcap/monit/monit.log


@tpradeep

I had already posted the monit logs in my previous comment; it is the same log as shown above.



drnic commented Mar 15, 2013

Sorry, that wasn't very enlightening, was it? I forgot that cf-release's monit setup doesn't have the monit_wrapper from bosh-gen.

All the job logs are in /var/vcap/sys/log/<job name>/



@tpradeep

Tuning OpenStack to meet Cloud Foundry's demands. I will run the whole process once again and let you know.

@tpradeep

One doubt: are the worker VMs supposed to get deleted before the core VM starts?

All 10 VMs spawn perfectly; then, after a few steps, the core VM spawns and the workers vanish.

Also, is it OK if I reduce the number of workers to, say, 4 or 6?

@tpradeep

Inside /var/vcap/sys/log/monit, the log contents of the various jobs are:

cloud_controller

/var/vcap/jobs/cloud_controller/config/sudoers: parsed OK
Migrations failed
Removing stale pidfile...

router looks fine - rsyslog start/running, process 14878

nats

net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024

Postgresql

kernel.shmmax = 284934144
The files belonging to this database system will be owned by user "vcap".
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default text search configuration will be set to "english".

fixing permissions on existing directory /var/vcap/store/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
creating template1 database in /var/vcap/store/postgres/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

Success. You can now start the database server using:

/var/vcap/packages/postgres/bin/postgres -D /var/vcap/store/postgres

or
/var/vcap/packages/postgres/bin/pg_ctl -D /var/vcap/store/postgres -l logfile start

Starting PostgreSQL:
waiting for server to start.... done
server started
PostgreSQL started successfully
Creating roles...
Trying to create role root...
CREATE ROLE
Setting password for role root...
ALTER ROLE
Trying to create role uaa...
CREATE ROLE
Setting password for role uaa...
ALTER ROLE
Creating databases...
Trying to create database appcloud...
Trying to create database uaa...
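The most telling line above is the cloud_controller's "Migrations failed": the ctl script runs a rake db:migrate against postgres before the controller boots (that step is visible in the ps listing earlier in this thread), and if it fails the job never reports as running to monit. A hedged next check, using the database host/port that the uaa config in the next comment points at (the psql location and the config glob are assumptions):

core/0$ sudo grep -rn "2544" /var/vcap/jobs/cloud_controller/config/      # which DB host/port was rendered for the cloud_controller?
core/0$ /var/vcap/packages/postgres/bin/psql -h 192.168.1.132 -p 2544 -U root appcloud -c "select 1;"   # assumes psql ships in the postgres package
core/0$ sudo tail -n 100 /var/vcap/sys/log/cloud_controller/*             # the actual migration error should show up here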

@tpradeep

uaa

[2013-03-15 13:16:30.060] batch/batch - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Initializing filter 'springSecurityFilterChain'
[2013-03-15 13:16:30.157] batch/batch - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Filter 'springSecurityFilterChain' configured successfully
[2013-03-15 13:16:30.186] batch/batch - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Initializing servlet 'spring'
[2013-03-15 13:16:30.267] batch/batch - 15046 [localhost-startStop-1] .... INFO --- DispatcherServlet: FrameworkServlet 'spring': initialization started
[2013-03-15 13:16:30.268] batch/batch - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Servlet with name 'spring' will try to create custom WebApplicationContext context of class 'org.springframework.web.context.support.XmlWebApplicationContext', using parent context [null]
[2013-03-15 13:16:30.432] batch/batch - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loading from YAML: URL [file:/var/vcap/jobs/uaa/config/batch.yml]
[2013-03-15 13:16:30.498] batch/batch - 15046 [localhost-startStop-1] .... DEBUG --- YamlProcessor: Merging document (no matchers set){name=batch, database={driverClassName=org.postgresql.Driver, url=jdbc:postgresql://192.168.1.132:2544/uaa, username=uaa, password=#}, spring_profiles=postgresql, cloud_controller={database={url=jdbc:postgresql://192.168.1.132:2544/appcloud, username=root, password=#}}, logging={config=/var/vcap/jobs/uaa/config/batch.log4j.properties}, batch={username=uaa, password=#}}
[2013-03-15 13:16:30.508] batch/batch - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loaded 1 document from YAML resource: URL [file:/var/vcap/jobs/uaa/config/batch.yml]
[2013-03-15 13:16:42.451] uaa - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Initializing filter 'springSecurityFilterChain'
[2013-03-15 13:16:42.496] uaa - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Filter 'springSecurityFilterChain' configured successfully
[2013-03-15 13:16:42.510] uaa - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Initializing servlet 'spring'
[2013-03-15 13:16:42.531] uaa - 15046 [localhost-startStop-1] .... INFO --- DispatcherServlet: FrameworkServlet 'spring': initialization started
[2013-03-15 13:16:42.532] uaa - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Servlet with name 'spring' will try to create custom WebApplicationContext context of class 'org.springframework.web.context.support.XmlWebApplicationContext', using parent context [null]
[2013-03-15 13:16:42.610] uaa - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loading from YAML: URL [file:/var/vcap/jobs/uaa/config/uaa.yml]
[2013-03-15 13:16:42.654] uaa - 15046 [localhost-startStop-1] .... DEBUG --- YamlProcessor: Merging document (no matchers set){name=uaa, database={driverClassName=org.postgresql.Driver, url=jdbc:postgresql://192.168.1.132:2544/uaa, username=uaa, password=#}, spring_profiles=postgresql, cloud_controller={database={url=jdbc:postgresql://192.168.1.132:2544/appcloud, username=root, password=#}}, logging={config=/var/vcap/jobs/uaa/config/log4j.properties}, jwt={token={signing-key=#, verification-key=TOKEN_SECRET}}, issuer.uri=https://uaa.test123.com, oauth={client={autoapprove=null}, clients={admin={authorized-grant-types=client_credentials, authorities=clients.read,clients.write,clients.secret,uaa.admin, id=admin, secret=#}, cloud_controller={authorized-grant-types=client_credentials, authorities=scim.read,scim.write,password.write, id=cloud_controller, secret=#, access-token-validity=604800}, vmc={authorized-grant-types=implicit, scope=cloud_controller.read,cloud_controller.write,openid,password.write, authorities=uaa.none, id=vmc, redirect-uri=http://uaa.cloudfoundry.com/redirect/vmc,https://uaa.cloudfoundry.com/redirect/vmc, access-token-validity=604800}, login={id=login, secret=#, authorized-grant-types=client_credentials, authorities=oauth.login}}}, scim={userids_enabled=false, user.override=true}}
[2013-03-15 13:16:42.654] uaa - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loaded 1 document from YAML resource: URL [file:/var/vcap/jobs/uaa/config/uaa.yml]
[2013-03-15 13:16:52.472] varz/varz - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Initializing filter 'varzSecurityFilterChain'
[2013-03-15 13:16:52.557] varz/varz - 15046 [localhost-startStop-1] .... DEBUG --- DelegatingFilterProxy: Filter 'varzSecurityFilterChain' configured successfully
[2013-03-15 13:16:52.578] varz/varz - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Initializing servlet 'varz'
[2013-03-15 13:16:52.633] varz/varz - 15046 [localhost-startStop-1] .... INFO --- DispatcherServlet: FrameworkServlet 'varz': initialization started
[2013-03-15 13:16:52.634] varz/varz - 15046 [localhost-startStop-1] .... DEBUG --- DispatcherServlet: Servlet with name 'varz' will try to create custom WebApplicationContext context of class 'org.springframework.web.context.support.XmlWebApplicationContext', using parent context [null]
[2013-03-15 13:16:52.763] varz/varz - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loading from YAML: URL [file:/var/vcap/jobs/uaa/config/varz.yml]
[2013-03-15 13:16:52.798] varz/varz - 15046 [localhost-startStop-1] .... DEBUG --- YamlProcessor: Merging document (no matchers set){pid=/var/vcap/sys/run/uaa.pid, varz={username=503-11212-29271-22616, password=#, type=uaa}, logging={config=/var/vcap/jobs/uaa/config/varz.log4j.properties}}
[2013-03-15 13:16:52.799] varz/varz - 15046 [localhost-startStop-1] .... INFO --- YamlProcessor: Loaded 1 document from YAML resource: URL [file:/var/vcap/jobs/uaa/config/varz.yml]

@tpradeep

Please check the above-mentioned logs. I can't figure out the root cause of the problem, especially for the cloud controller.
Also, I have asked a number of other questions above. Please answer those too.

Thanks

@mrdavidlaing
Contributor

As I understand things, this is correct behaviour.

The worker VMs are spawned to compile the source into packages stored in bosh's blobstore. The compilation VMs can then be deleted.

The core VM is then started, and those compiled packages are installed from said blobstore.

If you bosh delete deployment and then bosh deploy again, the compilation VMs won't get started, since the packages have already been compiled.
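
For reference, here is a minimal sketch of the compilation section of a BOSH deployment manifest that drives this behaviour (the worker count and instance type below are illustrative assumptions, not values from this deployment):

    compilation:
      workers: 6                    # number of short-lived compilation VMs
      reuse_compilation_vms: true   # reuse VMs across packages instead of one VM per package
      network: default              # network the compilation VMs attach to
      cloud_properties:
        instance_type: m1.medium    # IaaS-specific VM size

Once every package has been compiled and uploaded to the blobstore, those VMs are deleted, which is why they appear to vanish just before core/0 comes up.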

On 15 March 2013 10:29, Pradeep Tummala notifications@github.com wrote:

1 doubt - Are the worker VMs supposed to get deleted before core VM starts?

All the 10 VMs are spawning perfectly then after few steps core VM spawns
and workers vanish.



David Laing
Open source @ City Index - github.com/cityindex
http://davidlaing.com
Twitter: @davidlaing

@tpradeep
Contributor

Thanks mate. That clears up a few things.

It's a bit difficult for me to reverse-engineer the Cloud Foundry code. The architecture is quite complex.

@drnic
Contributor Author

drnic commented Mar 17, 2013

Pradeep, he's technically describing bosh behaviour. You've still got CF architecture to learn. :)


@tpradeep
Contributor

Correct me if I'm wrong: packages like the health manager, cloud controller and the others belong to CF, so the compilation of those packages that happens here is part of bosh, not CF? That would essentially mean CF's role does not start until an application is pushed to it. Also, does CF play any role (small or big) in its own deployment?

Yes, you are right. There's a lot to learn, including OpenStack (going deeper) in my case. I have made a few tweaks to devstack to accommodate bosh-cloudfoundry requirements, such as spawning more than 10 instances, handling more POST/GET requests, and others. I will include them in the docs once I complete bosh-cloudfoundry myself.


@tpradeep
Contributor

@drnic I am not able to get past this core/0 error. Can you please look at the logs and see what's going wrong? Thanks

==> /var/vcap/sys/log/uaa/varz.log <==
[2013-03-18 06:43:07.084] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- XmlWebApplicationContext: Refreshing WebApplicationContext for namespace 'varz-servlet': startup date [Mon Mar 18 06:43:07 UTC 2013]; root of context hierarchy
[2013-03-18 06:43:07.129] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- XmlBeanDefinitionReader: Loading XML bean definitions from ServletContext resource [/WEB-INF/varz-servlet.xml]
[2013-03-18 06:43:07.185] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- XmlBeanDefinitionReader: Loading XML bean definitions from ServletContext resource [/WEB-INF/env-context.xml]
[2013-03-18 06:43:07.255] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- SpringSecurityCoreVersion: You are running with Spring Security Core 3.1.2.RELEASE
[2013-03-18 06:43:07.255] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- SecurityNamespaceHandler: Spring Security 'config' module version is 3.1.2.RELEASE
[2013-03-18 06:43:07.297] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- HttpSecurityBeanDefinitionParser: Checking sorted filter chain: [Root bean: class [org.springframework.security.web.context.SecurityContextPersistenceFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 300, Root bean: class [org.springframework.security.web.authentication.www.BasicAuthenticationFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1200, Root bean: class [org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1400, Root bean: class [org.springframework.security.web.access.ExceptionTranslationFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1900, <org.springframework.security.web.access.intercept.FilterSecurityInterceptor#0>, order = 2000]
[2013-03-18 06:43:07.448] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- YamlProcessor: Loading from YAML: class path resource [varz.yml]
[2013-03-18 06:43:07.448] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- YamlProcessor: Loaded 0 document from YAML resource: class path resource [varz.yml]
[2013-03-18 06:43:07.504] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- DefaultListableBeanFactory: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@70dc3491: defining beans [applicationProperties,org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,mbeanServer,org.springframework.security.filterChains,org.springframework.security.filterChainProxy,org.springframework.security.web.DefaultSecurityFilterChain#0,org.springframework.security.web.PortMapperImpl#0,org.springframework.security.web.PortResolverImpl#0,org.springframework.security.config.authentication.AuthenticationManagerFactoryBean#0,org.springframework.security.authentication.ProviderManager#0,org.springframework.security.web.savedrequest.NullRequestCache#0,org.springframework.security.access.vote.AffirmativeBased#0,org.springframework.security.web.access.intercept.FilterSecurityInterceptor#0,org.springframework.security.web.access.DefaultWebInvocationPrivilegeEvaluator#0,org.springframework.security.userDetailsServiceFactory,org.springframework.security.web.DefaultSecurityFilterChain#1,securityContextRepository,org.springframework.security.provisioning.InMemoryUserDetailsManager#0,org.springframework.security.authentication.dao.DaoAuthenticationProvider#0,org.springframework.security.authentication.DefaultAuthenticationEventPublisher#0,org.springframework.security.authenticationManager,authenticationDetailsSource,accessDeniedHandler,authenticationEntryPoint,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#0,org.springframework.format.support.FormattingConversionServiceFactoryBean#0,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter#0,org.springframework.web.servlet.handler.MappedInterceptor#0,org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver#0,org.springframework.web.servlet.mvc.annotation.ResponseStatusExceptionResolver#0,org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver#0,org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping,org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter,org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#1,varzEndpoint]; root of factory hierarchy
[2013-03-18 06:43:07.536] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- DefaultSecurityFilterChain: Creating filter chain: Ant [pattern='/info/'], []
[2013-03-18 06:43:07.725] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- DefaultSecurityFilterChain: Creating filter chain: org.springframework.security.web.util.AnyRequestMatcher@1, [org.springframework.security.web.context.SecurityContextPersistenceFilter@7cff7c63, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@693f87b1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@49e5ab72, org.springframework.security.web.access.ExceptionTranslationFilter@55aef45, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@15129dc2]
[2013-03-18 06:43:07.794] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/{domain}],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getDomain(java.lang.String,java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:07.795] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/info],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getInfo(java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:07.795] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getVarz(java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:07.795] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/env],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getEnv() throws java.lang.Exception
[2013-03-18 06:43:07.796] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/domains],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Set<java.lang.String> org.cloudfoundry.identity.varz.VarzEndpoint.getMBeanDomains() throws java.io.IOException
[2013-03-18 06:43:08.054] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/{domain}],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getDomain(java.lang.String,java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:08.054] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/info],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getInfo(java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:08.054] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getVarz(java.lang.String) throws java.lang.Exception
[2013-03-18 06:43:08.055] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/env],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getEnv() throws java.lang.Exception
[2013-03-18 06:43:08.055] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/domains],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Set<java.lang.String> org.cloudfoundry.identity.varz.VarzEndpoint.getMBeanDomains() throws java.io.IOException
[2013-03-18 06:43:08.071] uaa/varz - 15015 [localhost-startStop-1] .... INFO --- DispatcherServlet: FrameworkServlet 'varz': initialization completed in 1152 ms
[2013-03-18 06:47:59.431] uaa - 15015 [localhost-startStop-2] .... INFO --- XmlWebApplicationContext: Closing WebApplicationContext for namespace 'varz-servlet': startup date [Mon Mar 18 06:43:07 UTC 2013]; root of context hierarchy
[2013-03-18 06:47:59.433] uaa - 15015 [localhost-startStop-2] .... INFO --- DefaultListableBeanFactory: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@70dc3491: defining beans [applicationProperties,org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,mbeanServer,org.springframework.security.filterChains,org.springframework.security.filterChainProxy,org.springframework.security.web.DefaultSecurityFilterChain#0,org.springframework.security.web.PortMapperImpl#0,org.springframework.security.web.PortResolverImpl#0,org.springframework.security.config.authentication.AuthenticationManagerFactoryBean#0,org.springframework.security.authentication.ProviderManager#0,org.springframework.security.web.savedrequest.NullRequestCache#0,org.springframework.security.access.vote.AffirmativeBased#0,org.springframework.security.web.access.intercept.FilterSecurityInterceptor#0,org.springframework.security.web.access.DefaultWebInvocationPrivilegeEvaluator#0,org.springframework.security.userDetailsServiceFactory,org.springframework.security.web.DefaultSecurityFilterChain#1,securityContextRepository,org.springframework.security.provisioning.InMemoryUserDetailsManager#0,org.springframework.security.authentication.dao.DaoAuthenticationProvider#0,org.springframework.security.authentication.DefaultAuthenticationEventPublisher#0,org.springframework.security.authenticationManager,authenticationDetailsSource,accessDeniedHandler,authenticationEntryPoint,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#0,org.springframework.format.support.FormattingConversionServiceFactoryBean#0,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter#0,org.springframework.web.servlet.handler.MappedInterceptor#0,org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver#0,org.springframework.web.servlet.mvc.annotation.ResponseStatusExceptionResolver#0,org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver#0,org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping,org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter,org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#1,varzEndpoint]; root of factory hierarchy
[2013-03-18 06:49:25.402] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- XmlWebApplicationContext: Refreshing WebApplicationContext for namespace 'varz-servlet': startup date [Mon Mar 18 06:49:25 UTC 2013]; root of context hierarchy
[2013-03-18 06:49:25.530] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- XmlBeanDefinitionReader: Loading XML bean definitions from ServletContext resource [/WEB-INF/varz-servlet.xml]
[2013-03-18 06:49:25.656] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- XmlBeanDefinitionReader: Loading XML bean definitions from ServletContext resource [/WEB-INF/env-context.xml]
[2013-03-18 06:49:25.760] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- SpringSecurityCoreVersion: You are running with Spring Security Core 3.1.2.RELEASE
[2013-03-18 06:49:25.761] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- SecurityNamespaceHandler: Spring Security 'config' module version is 3.1.2.RELEASE
[2013-03-18 06:49:25.853] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- HttpSecurityBeanDefinitionParser: Checking sorted filter chain: [Root bean: class [org.springframework.security.web.context.SecurityContextPersistenceFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 300, Root bean: class [org.springframework.security.web.authentication.www.BasicAuthenticationFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1200, Root bean: class [org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1400, Root bean: class [org.springframework.security.web.access.ExceptionTranslationFilter]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 1900, <org.springframework.security.web.access.intercept.FilterSecurityInterceptor#0>, order = 2000]
[2013-03-18 06:49:26.022] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- YamlProcessor: Loading from YAML: class path resource [varz.yml]
[2013-03-18 06:49:26.022] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- YamlProcessor: Loaded 0 document from YAML resource: class path resource [varz.yml]
[2013-03-18 06:49:26.102] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- DefaultListableBeanFactory: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@53adedc2: defining beans [applicationProperties,org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,mbeanServer,org.springframework.security.filterChains,org.springframework.security.filterChainProxy,org.springframework.security.web.DefaultSecurityFilterChain#0,org.springframework.security.web.PortMapperImpl#0,org.springframework.security.web.PortResolverImpl#0,org.springframework.security.config.authentication.AuthenticationManagerFactoryBean#0,org.springframework.security.authentication.ProviderManager#0,org.springframework.security.web.savedrequest.NullRequestCache#0,org.springframework.security.access.vote.AffirmativeBased#0,org.springframework.security.web.access.intercept.FilterSecurityInterceptor#0,org.springframework.security.web.access.DefaultWebInvocationPrivilegeEvaluator#0,org.springframework.security.userDetailsServiceFactory,org.springframework.security.web.DefaultSecurityFilterChain#1,securityContextRepository,org.springframework.security.provisioning.InMemoryUserDetailsManager#0,org.springframework.security.authentication.dao.DaoAuthenticationProvider#0,org.springframework.security.authentication.DefaultAuthenticationEventPublisher#0,org.springframework.security.authenticationManager,authenticationDetailsSource,accessDeniedHandler,authenticationEntryPoint,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#0,org.springframework.format.support.FormattingConversionServiceFactoryBean#0,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter#0,org.springframework.web.servlet.handler.MappedInterceptor#0,org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver#0,org.springframework.web.servlet.mvc.annotation.ResponseStatusExceptionResolver#0,org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver#0,org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping,org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter,org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter,org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#1,varzEndpoint]; root of factory hierarchy
[2013-03-18 06:49:26.161] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- DefaultSecurityFilterChain: Creating filter chain: Ant [pattern='/info/'], []
[2013-03-18 06:49:26.336] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- DefaultSecurityFilterChain: Creating filter chain: org.springframework.security.web.util.AnyRequestMatcher@1, [org.springframework.security.web.context.SecurityContextPersistenceFilter@68d88d5, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@36125b4f, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6f28bb44, org.springframework.security.web.access.ExceptionTranslationFilter@5b1ea65f, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@393cd5b5]
[2013-03-18 06:49:26.486] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/{domain}],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getDomain(java.lang.String,java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.486] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/info],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getInfo(java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.486] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getVarz(java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.486] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/env],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getEnv() throws java.lang.Exception
[2013-03-18 06:49:26.487] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/domains],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Set<java.lang.String> org.cloudfoundry.identity.varz.VarzEndpoint.getMBeanDomains() throws java.io.IOException
[2013-03-18 06:49:26.753] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/{domain}],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getDomain(java.lang.String,java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.753] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/info],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getInfo(java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.753] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getVarz(java.lang.String) throws java.lang.Exception
[2013-03-18 06:49:26.754] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/env],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Map<java.lang.String, ?> org.cloudfoundry.identity.varz.VarzEndpoint.getEnv() throws java.lang.Exception
[2013-03-18 06:49:26.754] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- RequestMappingHandlerMapping: Mapped "{[/domains],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.util.Set<java.lang.String> org.cloudfoundry.identity.varz.VarzEndpoint.getMBeanDomains() throws java.io.IOException
[2013-03-18 06:49:26.770] uaa/varz - 30941 [localhost-startStop-1] .... INFO --- DispatcherServlet: FrameworkServlet 'varz': initialization completed in 1689 ms

==> /var/vcap/sys/log/vcap_redis/vcap_redis.log <==
[15416] 18 Mar 06:42:36 * Server started, Redis version 2.2.15
[15416] 18 Mar 06:42:36 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[15416] 18 Mar 06:42:36 * The server is now ready to accept connections on port 3456
[15416] 18 Mar 06:48:01 # Received SIGTERM, scheduling shutdown...
[15416] 18 Mar 06:48:01 # User requested shutdown...
[15416] 18 Mar 06:48:01 * Removing the pid file.
[15416] 18 Mar 06:48:01 # Redis is now ready to exit, bye bye...
[31177] 18 Mar 06:48:57 * Server started, Redis version 2.2.15
[31177] 18 Mar 06:48:57 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[31177] 18 Mar 06:48:57 * The server is now ready to accept connections on port 3456

==> /var/vcap/sys/log/vcap_registrar/vcap_registrar.log <==
[2013-03-18 06:42:35] vcap_registrar - 15225 8540 73b1 INFO -- Reading varz config from /var/vcap/jobs/uaa/config/varz.yml
[2013-03-18 06:42:35] vcap_registrar - 15225 8540 73b1 INFO -- Connected to NATS - router registration
[2013-03-18 06:42:35] vcap_registrar - 15225 8540 73b1 INFO -- Sending registration: {"host":"10.0.0.4","port":8100,"uris":["uaa.test123.com"],"tags":{"component":"uaa"}}
[2013-03-18 06:42:35] vcap_registrar - 15225 8540 73b1 INFO -- Connected to NATS - varz registration
[2013-03-18 06:42:35] vcap_registrar - 15225 8540 73b1 INFO -- Announcing start up vcap.component.announce
[2013-03-18 06:48:00] vcap_registrar - 15225 8540 73b1 INFO -- Shutting down
[2013-03-18 06:48:00] vcap_registrar - 15225 8540 73b1 INFO -- Sending unregistration: {"host":"10.0.0.4","port":8100,"uris":["uaa.test123.com"],"tags":{"component":"uaa"}}
[2013-03-18 06:48:57] vcap_registrar - 31126 daaa 954e INFO -- Reading varz config from /var/vcap/jobs/uaa/config/varz.yml
[2013-03-18 06:48:57] vcap_registrar - 31126 daaa 954e INFO -- Connected to NATS - router registration
[2013-03-18 06:48:57] vcap_registrar - 31126 daaa 954e INFO -- Sending registration: {"host":"10.0.0.4","port":8100,"uris":["uaa.test123.com"],"tags":{"component":"uaa"}}
[2013-03-18 06:48:57] vcap_registrar - 31126 daaa 954e INFO -- Connected to NATS - varz registration
[2013-03-18 06:48:57] vcap_registrar - 31126 daaa 954e INFO -- Announcing start up vcap.component.announce

==> /var/vcap/sys/log/vcap_registrar/vcap_registrar.stderr.log <==

==> /var/vcap/sys/log/vcap_registrar/vcap_registrar.stdout.log <==

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589549.1903656,"message":"HealthManager: initializing","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":39,"method":"initialize"}
{"timestamp":1363589549.190842,"message":"starting...","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":74,"method":"start"}
{"timestamp":1363589549.1930196,"message":"subscribing to heartbeats","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":17,"method":"start"}
{"timestamp":1363589549.1931229,"message":"subscribing to droplet.exited","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":22,"method":"start"}
{"timestamp":1363589549.1932065,"message":"subscribing to droplet.updated","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":27,"method":"start"}
{"timestamp":1363589549.1933007,"message":"registering VCAP component","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":59,"method":"register_as_vcap_component"}
{"timestamp":1363589549.2651827,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}
{"timestamp":1363589549.2654836,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}

==> /var/vcap/sys/log/monit/health_manager_next_ctl.log <==
Removing stale pidfile...

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Removing stale pidfile...
/var/vcap/jobs/cloud_controller/config/sudoers: parsed OK

==> /var/vcap/sys/log/cloud_controller/db_migrate.stderr.log <==
rake aborted!
couldn't parse YAML at line 105 column 37
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/appconfig.rb:12:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `<top (required)>'
(See full trace by running task with --trace)

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Migrations failed

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589554.2667458,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}
{"timestamp":1363589554.266918,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":13754500,"fiber_id":19095240,"process_id":32139,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}

==> /var/vcap/sys/log/health_manager_next/health_manager_next.stderr.log <==
/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz_common.rb:33:in `release': Path [:total] is not held (ArgumentError)
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `block in release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `each'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb:159:in `block in with_credentials'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `block in timeout'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:129:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb:77:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/bin/health_manager:61:in `<main>'

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589590.2939203,"message":"HealthManager: initializing","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":39,"method":"initialize"}
{"timestamp":1363589590.2944503,"message":"starting...","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":74,"method":"start"}
{"timestamp":1363589590.2967958,"message":"subscribing to heartbeats","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":17,"method":"start"}
{"timestamp":1363589590.296934,"message":"subscribing to droplet.exited","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":22,"method":"start"}
{"timestamp":1363589590.2970212,"message":"subscribing to droplet.updated","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":27,"method":"start"}
{"timestamp":1363589590.297149,"message":"registering VCAP component","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":59,"method":"register_as_vcap_component"}

==> /var/vcap/sys/log/monit/health_manager_next_ctl.log <==
Removing stale pidfile...

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589590.3785353,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}
{"timestamp":1363589590.378865,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Removing stale pidfile...
/var/vcap/jobs/cloud_controller/config/sudoers: parsed OK

==> /var/vcap/sys/log/cloud_controller/db_migrate.stderr.log <==
rake aborted!
couldn't parse YAML at line 105 column 37
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/appconfig.rb:12:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `<top (required)>'
(See full trace by running task with --trace)

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Migrations failed

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589595.37999,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}
{"timestamp":1363589595.3801672,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":17385600,"fiber_id":22726160,"process_id":32204,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}

==> /var/vcap/sys/log/health_manager_next/health_manager_next.stderr.log <==
/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz_common.rb:33:in `release': Path [:total] is not held (ArgumentError)
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `block in release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `each'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb:159:in `block in with_credentials'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `block in timeout'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:129:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb:77:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/bin/health_manager:61:in `<main>'

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589602.2915912,"message":"HealthManager: initializing","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":39,"method":"initialize"}
{"timestamp":1363589602.291983,"message":"starting...","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":74,"method":"start"}
{"timestamp":1363589602.2942615,"message":"subscribing to heartbeats","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":17,"method":"start"}
{"timestamp":1363589602.294364,"message":"subscribing to droplet.exited","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":22,"method":"start"}
{"timestamp":1363589602.2945118,"message":"subscribing to droplet.updated","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":27,"method":"start"}
{"timestamp":1363589602.294613,"message":"registering VCAP component","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":59,"method":"register_as_vcap_component"}

==> /var/vcap/sys/log/monit/health_manager_next_ctl.log <==
Removing stale pidfile...

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589602.3662772,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}
{"timestamp":1363589602.3665795,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Removing stale pidfile...
/var/vcap/jobs/cloud_controller/config/sudoers: parsed OK

==> /var/vcap/sys/log/cloud_controller/db_migrate.stderr.log <==
rake aborted!
couldn't parse YAML at line 105 column 37
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/appconfig.rb:12:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `<top (required)>'
(See full trace by running task with --trace)

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Migrations failed

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589607.3678083,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}
{"timestamp":1363589607.3679955,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":15523980,"fiber_id":20864480,"process_id":32249,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}

==> /var/vcap/sys/log/health_manager_next/health_manager_next.stderr.log <==
/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz_common.rb:33:in `release': Path [:total] is not held (ArgumentError)
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `block in release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `each'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb:159:in `block in with_credentials'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `block in timeout'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:129:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb:77:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/bin/health_manager:61:in `<main>'

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589614.3061788,"message":"HealthManager: initializing","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":39,"method":"initialize"}
{"timestamp":1363589614.3065712,"message":"starting...","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":74,"method":"start"}
{"timestamp":1363589614.3091369,"message":"subscribing to heartbeats","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":17,"method":"start"}
{"timestamp":1363589614.3092403,"message":"subscribing to droplet.exited","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":22,"method":"start"}
{"timestamp":1363589614.3093235,"message":"subscribing to droplet.updated","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/nats_based_known_state_provider.rb","lineno":27,"method":"start"}
{"timestamp":1363589614.3094146,"message":"registering VCAP component","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb","lineno":59,"method":"register_as_vcap_component"}

==> /var/vcap/sys/log/monit/health_manager_next_ctl.log <==
Removing stale pidfile...

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589614.3831644,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}
{"timestamp":1363589614.383469,"message":"bulk: requesting API credentials over NATS...","log_level":"info","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":147,"method":"with_credentials"}

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Removing stale pidfile...
/var/vcap/jobs/cloud_controller/config/sudoers: parsed OK

==> /var/vcap/sys/log/cloud_controller/db_migrate.stderr.log <==
rake aborted!
couldn't parse YAML at line 105 column 37
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/appconfig.rb:12:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/boot.rb:121:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/config/application.rb:1:in `<top (required)>'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `require'
/var/vcap/data/packages/cloud_controller/76.1-dev.1/cloud_controller/Rakefile:1:in `<top (required)>'
(See full trace by running task with --trace)

==> /var/vcap/sys/log/monit/cloud_controller_ctl.log <==
Migrations failed

==> /var/vcap/sys/log/health_manager_next/health_manager_next.log <==
{"timestamp":1363589619.384754,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}
{"timestamp":1363589619.3849397,"message":"bulk: NATS timeout getting bulk api credentials. Request ignored.","log_level":"error","source":"hm","data":{},"thread_id":9570440,"fiber_id":14911740,"process_id":32309,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb","lineno":158,"method":"block in with_credentials"}

==> /var/vcap/sys/log/health_manager_next/health_manager_next.stderr.log <==
/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz_common.rb:33:in `release': Path [:total] is not held (ArgumentError)
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `block in release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `each'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/varz.rb:70:in `release_expected_stats'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/bulk_based_expected_state_provider.rb:159:in `block in with_credentials'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:369:in `block in timeout'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `call'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/health_manager_next/health_manager_next/vendor/bundle/ruby/1.9.1/gems/nats-0.4.26/lib/nats/client.rb:129:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager.rb:77:in `start'
from /var/vcap/packages/health_manager_next/health_manager_next/bin/health_manager:61:in `<main>'
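
The piece to focus on above is the db_migrate failure that repeats on every monit restart: "couldn't parse YAML at line 105 column 37" means the rendered Cloud Controller config is not valid YAML, so the migrations (and therefore cloud_controller) can never start; the health_manager NATS timeouts are just a downstream symptom. As a minimal sketch (the config path below is an assumption; use whatever file cloud_controller_ctl passes with -c), you can reproduce the parser's complaint directly:

    # check_yaml.rb -- report where a rendered job config stops being valid YAML
    require 'yaml'

    path = ARGV[0] || '/var/vcap/jobs/cloud_controller/config/cloud_controller.yml'  # assumed location
    begin
      YAML.load_file(path)
      puts "#{path}: parsed OK"
    rescue => e
      # the parser reports the same line/column that rake db:migrate complains about
      puts "#{path}: #{e.message}"
    end

Line 105 of that rendered file is often a property value that needed quoting (a password or token containing ':' or '#', for example) coming out of the deployment manifest.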

@tpradeep
Contributor

A new error

Updating job core
core/0 (canary) (00:00:46)
Done 1/1 00:00:46

Error 450001: Expected sha1: b6c18cb2c5b7baf55f7d0b4cc655d24f52535821, Downloaded sha1: da39a3ee5e6b4b0d3255bfef95601890afd80709: ["/var/vcap/bosh/agent/lib/agent/util.rb:42:in `unpack_blob'", "/var/vcap/bosh/agent/lib/agent/apply_plan/package.rb:51:in `fetch_package'", "/var/vcap/bosh/agent/lib/agent/apply_plan/package.rb:37:in `install_for_job'", "/var/vcap/bosh/agent/lib/agent/apply_plan/plan.rb:80:in `block (2 levels) in install_packages'", "/var/vcap/bosh/agent/lib/agent/apply_plan/plan.rb:79:in `each'", "/var/vcap/bosh/agent/lib/agent/apply_plan/plan.rb:79:in `block in install_packages'", "/var/vcap/bosh/agent/lib/agent/apply_plan/plan.rb:78:in `each'", "/var/vcap/bosh/agent/lib/agent/apply_plan/plan.rb:78:in `install_packages'", "/var/vcap/bosh/agent/lib/agent/message/apply.rb:120:in `apply_packages'", "/var/vcap/bosh/agent/lib/agent/message/apply.rb:73:in `apply'", "/var/vcap/bosh/agent/lib/agent/message/apply.rb:10:in `process'", "/var/vcap/bosh/agent/lib/agent/handler.rb:274:in `process'", "/var/vcap/bosh/agent/lib/agent/handler.rb:259:in `process_long_running'", "/var/vcap/bosh/agent/lib/agent/handler.rb:181:in `block in process_in_thread'", "<internal:prelude>:10:in `synchronize'", "/var/vcap/bosh/agent/lib/agent/handler.rb:179:in `process_in_thread'", "/var/vcap/bosh/agent/lib/agent/handler.rb:160:in `block in handle_message'"]
/var/vcap/bosh/agent/lib/agent/message/apply.rb:78:in `rescue in apply'
/var/vcap/bosh/agent/lib/agent/message/apply.rb:70:in `apply'
/var/vcap/bosh/agent/lib/agent/message/apply.rb:10:in `process'
/var/vcap/bosh/agent/lib/agent/handler.rb:274:in `process'
/var/vcap/bosh/agent/lib/agent/handler.rb:259:in `process_long_running'
/var/vcap/bosh/agent/lib/agent/handler.rb:181:in `block in process_in_thread'
<internal:prelude>:10:in `synchronize'
/var/vcap/bosh/agent/lib/agent/handler.rb:179:in `process_in_thread'
/var/vcap/bosh/agent/lib/agent/handler.rb:160:in `block in handle_message'

Task 66 error
/usr/local/lib/ruby/1.9.1/rake/file_utils.rb:53:in `block in create_shell_runner': Command failed with status (1): [bosh -n --color deploy...] (RuntimeError)
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `call'
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `sh'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:632:in `bosh_cmd'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:156:in `block in deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `each'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/command_handler.rb:57:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:61:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:18:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/bin/bosh:16:in `<top (required)>'
    from /usr/local/bin/bosh:23:in `load'
    from /usr/local/bin/bosh:23:in `<main>'
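
One clue in that error: da39a3ee5e6b4b0d3255bfef95601890afd80709 is the SHA1 of zero bytes, so the agent downloaded an empty blob rather than a corrupted one, which is consistent with a blobstore sync problem. A quick Ruby check of that well-known digest:

    require 'digest/sha1'

    # SHA1 of the empty string matches the "Downloaded sha1" in the error above,
    # i.e. the package blob arrived with no content at all
    puts Digest::SHA1.hexdigest('')   # => "da39a3ee5e6b4b0d3255bfef95601890afd80709"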

@tpradeep
Contributor

Resyncing blobs and checking again

I'm back to the original error.

@bonzofenix

I'm stuck on this same issue.

Error 400007: `core/0' is not running after update

Task 12 error
/usr/local/lib/ruby/1.9.1/rake/file_utils.rb:53:in `block in create_shell_runner': Command failed with status (1): [bosh -n --color deploy...] (RuntimeError)
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `call'
    from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in `sh'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:632:in `bosh_cmd'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:156:in `block in deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `each'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-cloudfoundry-0.6.1/lib/bosh/cli/commands/cf.rb:154:in `deploy'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/command_handler.rb:57:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:61:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/lib/cli/runner.rb:18:in `run'
    from /usr/local/lib/ruby/gems/1.9.1/gems/bosh_cli-1.0.3/bin/bosh:16:in `<top (required)>'
    from /usr/local/bin/bosh:23:in `load'
    from /usr/local/bin/bosh:23:in `<main>'

@flochaz

flochaz commented Apr 3, 2013

I am a new bosh user, but as far as Cloud Foundry is concerned this cannot work: with release 130 (I don't know if it is related), the config files on the core/0 VM make some services listen on localhost or a local address (nats, for instance), while most of the services use the public IP.

I checked cf-release, and it is clear that the references are not the same:

From cf-release/jobs/health_manager_next/templates/health_manager_next.yml.erb:

mbus: nats://<%= properties.nats.user %>:<%= properties.nats.password %>@<%= properties.nats.address %>:<%= properties.nats.port %>

and from cf-release/jobs/nats/templates/nats.yml.erb (updated 4 months ago):

net: <%= spec.networks.send(properties.networks.apps).ip %>
