
Adding a new backend API and accessing it returns 404 via nginx #168

Closed
as33ms opened this issue Sep 8, 2015 · 19 comments · Fixed by NREL/api-umbrella-router#9

@as33ms

as33ms commented Sep 8, 2015

I added a new API backend at our API Umbrella server (umbrella.apinf.io) using the guidelines from http://apiumbrella.io/docs/getting-started/

After publishing the changes, trying to access the Google Maps API returns a 404. Any pointers for troubleshooting this problem are highly appreciated.

@as33ms
Author

as33ms commented Sep 8, 2015

@GUI any pointers for the above issue? We are using api-umbrella v0.8.0

@GUI
Member

GUI commented Sep 9, 2015

@ashakunt: Hm, I just ran through the setup directions on a fresh installation of v0.8.0 under Ubuntu 14.04, and it seemed to work for me. But sorry you've run into troubles. And you're sure you went to the Publish page after saving the API Backend and published your changes? If the backend wasn't published, that would explain a 404 response, but if the backend is in fact published, then this is a little more puzzling.

If the backend is published, here are a couple of things to try:

  • Could you share the contents of /opt/api-umbrella/etc/nginx/backends.conf?
  • If you haven't already, could you also try a full restart of API Umbrella (sudo /etc/init.d/api-umbrella restart) and see if that makes any difference?
  • Is the 404 page you get the very generic nginx one (it just says "404 Not Found / nginx")?
  • What value did you enter for the "Frontend Host" when adding the API backend? Was it "localhost" or does it match some other domain name/hostname your server is going by?

We did change a fair bit of the logic behind how hostnames get matched for routing purposes in v0.8.0, so I'm wondering if you're maybe hitting something related to that. I think we also fixed a few things revolving around hostname matching after the v0.8.0 release in master, but I didn't think those fixes were particularly critical. But hopefully we can get to the bottom of this without too much trouble.
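For the first bullet, a quick sanity check on backends.conf is to list the upstream blocks it defines and confirm the newly published backend actually appears. A minimal sketch (not an official API Umbrella tool; the heredoc below stands in for the real /opt/api-umbrella/etc/nginx/backends.conf):

```shell
# Write a stand-in for /opt/api-umbrella/etc/nginx/backends.conf so the
# sketch is self-contained; on a real server you would read the file itself.
cat <<'EOF' > /tmp/backends.conf.sample
# API Umbrella - Default
upstream api_umbrella_api-umbrella-web-backend_backend {
    least_conn;
    keepalive 10;
      server 127.0.0.1:14012;
}
EOF

# List the upstream names nginx knows about; your new backend's ID should
# show up here after publishing.
awk '/^upstream/ { print $2 }' /tmp/backends.conf.sample
# → api_umbrella_api-umbrella-web-backend_backend
```

If the new backend's upstream block is missing, the publish step never reached nginx; if it is present, the 404 more likely comes from hostname matching.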

@as33ms
Author

as33ms commented Sep 9, 2015

Thanks a lot @GUI for the pointers. Some of my answers below:

  • The 404 page is the generic "404 Not Found / nginx" page.
  • Frontend host values: we tried both localhost (first) and then the server's FQDN (umbrella.apinf.io). In both cases, a 404 was encountered.

I will provide updates and contents of backends.conf as soon as I can.

@bajiat

bajiat commented Sep 11, 2015

What is the status of this issue?

@as33ms
Author

as33ms commented Sep 11, 2015

@GUI

Here are the details /opt/api-umbrella/etc/nginx/backends.conf

  # API Umbrella - Default
  upstream api_umbrella_api-umbrella-web-backend_backend {
      least_conn;
      keepalive 10;
        server 127.0.0.1:14012;
  }

  # Qtipme Staging API (Example API)
  upstream api_umbrella_2174c11a-0fe6-495f-a85a-ff1f36d3fee3_backend {
      least_conn;
      keepalive 10;
        server 82.196.12.109:443;
  }

  # Google Search
  upstream api_umbrella_9ee1c1e6-a6ea-47a8-a7db-546a8d9bbf59_backend {
      least_conn;
      keepalive 10;
        server 185.38.0.15:80;
        server 185.38.0.19:80;
        server 185.38.0.23:80;
        server 185.38.0.27:80;
        server 185.38.0.29:80;
        server 185.38.0.30:80;
        server 185.38.0.34:80;
        server 185.38.0.38:80;
        server 185.38.0.42:80;
        server 185.38.0.44:80;
        server 185.38.0.45:80;
        server 185.38.0.49:80;
        server 185.38.0.53:80;
        server 185.38.0.57:80;
        server 185.38.0.59:80;
  }

  # Data.gov
  upstream api_umbrella_3ff10e77-4864-4d72-91f3-c9a266413acd_backend {
      least_conn;
      keepalive 10;
        server 52.0.227.177:80;
        server 52.6.174.103:80;
  }

  # yandex
  upstream api_umbrella_4a9b306e-2031-402c-bd5e-be029f4fe181_backend {
      least_conn;
      keepalive 10;
        server 5.255.255.5:80;
        server 5.255.255.55:80;
        server 77.88.55.55:80;
        server 77.88.55.66:80;
  }

  # Google Geocoding APIs (Testing)
  upstream api_umbrella_29f73169-0c12-40b0-876f-401e822d5aa0_backend {
      least_conn;
      keepalive 10;
        server 216.58.209.106:80;
  }

  # Google Geocoding APIs
  upstream api_umbrella_59429f2f-40bd-4d9c-8a84-c49ba7af8cf2_backend {
      least_conn;
      keepalive 10;
        server 216.58.209.106:80;
  }



  # 
  upstream api_umbrella_website_api-umbrella-website-backend_backend {
      least_conn;
      keepalive 10;
        server 127.0.0.1:14013;
  }

@as33ms
Author

as33ms commented Sep 11, 2015

@GUI turns out, restarting did not help.


user@umbrella:/opt/api-umbrella/etc/nginx$ vi backends.conf 
user@umbrella:/opt/api-umbrella/etc/nginx$ readlink backends.conf 
user@umbrella:/opt/api-umbrella/etc/nginx$ sudo /etc/init.d/api-umbrella restart 
user@umbrella:/opt/api-umbrella/etc/nginx$ sudo /etc/init.d/api-umbrella restart 
[sudo] password for user: 
Stopping api-umbrella... [  OK  ]
Starting api-umbrella................................. [FAIL]

Failed to start processes:
  router-nginx (FATAL - /opt/api-umbrella/var/log/router-nginx.log)

  See /opt/api-umbrella/var/log/supervisord-forever.log for more details

Stopping api-umbrella...... [  OK  ]
user@umbrella:/opt/api-umbrella/etc/nginx$ cat /opt/api-umbrella/var/log/router-nginx.log
2015/09/11 16:34:14 [emerg] 946#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:38:32 [emerg] 972#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:38:33 [emerg] 975#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:38:35 [emerg] 990#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:30 [emerg] 1756#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:32 [emerg] 1806#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:34 [emerg] 2031#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:37 [emerg] 2099#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:41 [emerg] 2101#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:47 [emerg] 2137#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:47:53 [emerg] 2150#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:48:00 [emerg] 2151#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:48:08 [emerg] 2152#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:48:17 [emerg] 2153#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2015/09/11 16:48:27 [emerg] 2158#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
user@umbrella:/opt/api-umbrella/etc/nginx$ 

Can you please suggest, as a priority, what should be done to fix this issue and get API Umbrella up and running ASAP?

cc: @ccsr @bajiat @brylie

@as33ms
Author

as33ms commented Sep 11, 2015

2015-09-11 16:47:17,853 WARN received SIGTERM indicating exit request
2015-09-11 16:47:17,854 INFO waiting for router-log-listener, gatekeeper2, gatekeeper3, gatekeeper1, gatekeeper4, config-reloader, varnishd, web-delayed-job, log-processor, mongod, redis, distributed-rate-limits-sync, web-puma, web-nginx, router-nginx, varnishncsa, elasticsearch, dnsmasq, beanstalkd to die
2015-09-11 16:47:17,856 INFO stopped: beanstalkd (terminated by SIGTERM)
2015-09-11 16:47:17,879 INFO exited: varnishncsa (exit status 0; expected)
2015-09-11 16:47:17,880 INFO stopped: dnsmasq (exit status 0)
2015-09-11 16:47:17,929 INFO exited: gatekeeper3 (exit status 143; not expected)
2015-09-11 16:47:17,929 INFO exited: web-nginx (exit status 0; expected)
2015-09-11 16:47:17,929 INFO exited: router-nginx (exit status 0; expected)
2015-09-11 16:47:17,934 INFO exited: router-log-listener (exit status 143; not expected)
2015-09-11 16:47:17,934 INFO exited: gatekeeper2 (exit status 143; not expected)
2015-09-11 16:47:17,934 INFO exited: gatekeeper4 (exit status 143; not expected)
2015-09-11 16:47:17,934 INFO exited: redis (exit status 0; expected)
2015-09-11 16:47:17,940 INFO exited: gatekeeper1 (exit status 143; not expected)
2015-09-11 16:47:17,940 INFO exited: distributed-rate-limits-sync (exit status 143; not expected)
2015-09-11 16:47:17,945 INFO exited: mongod (exit status 0; expected)
2015-09-11 16:47:17,949 INFO exited: config-reloader (exit status 143; not expected)
2015-09-11 16:47:17,949 INFO exited: log-processor (exit status 143; not expected)
2015-09-11 16:47:18,003 WARN received SIGTERM indicating exit request
2015-09-11 16:47:18,315 INFO exited: varnishd (exit status 0; expected)
2015-09-11 16:47:18,376 INFO exited: web-puma (exit status 0; expected)
2015-09-11 16:47:18,471 INFO stopped: elasticsearch (exit status 143)
2015-09-11 16:47:21,476 INFO waiting for web-delayed-job to die
2015-09-11 16:47:24,529 INFO waiting for web-delayed-job to die
2015-09-11 16:47:27,400 INFO stopped: web-delayed-job (exit status 1)
2015-09-11 16:47:29,422 CRIT Supervisor running as root (no user in config file)
2015-09-11 16:47:29,443 INFO RPC interface 'supervisor' initialized
2015-09-11 16:47:29,444 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2015-09-11 16:47:29,445 INFO RPC interface 'supervisor' initialized
2015-09-11 16:47:29,445 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2015-09-11 16:47:29,445 INFO supervisord started with pid 1724
2015-09-11 16:47:29,462 INFO spawned: 'router-log-listener' with pid 1727
2015-09-11 16:47:29,469 INFO spawned: 'gatekeeper2' with pid 1728
2015-09-11 16:47:29,476 INFO spawned: 'gatekeeper3' with pid 1729
2015-09-11 16:47:29,500 INFO spawned: 'gatekeeper1' with pid 1730
2015-09-11 16:47:29,517 INFO spawned: 'gatekeeper4' with pid 1733
2015-09-11 16:47:29,532 INFO spawned: 'config-reloader' with pid 1734
2015-09-11 16:47:29,573 INFO spawned: 'varnishd' with pid 1736
2015-09-11 16:47:29,614 INFO spawned: 'web-delayed-job' with pid 1738
2015-09-11 16:47:29,678 INFO spawned: 'log-processor' with pid 1741
2015-09-11 16:47:29,723 INFO spawned: 'mongod' with pid 1743
2015-09-11 16:47:29,783 INFO spawned: 'redis' with pid 1746
2015-09-11 16:47:29,834 INFO spawned: 'distributed-rate-limits-sync' with pid 1748
2015-09-11 16:47:29,901 INFO spawned: 'web-puma' with pid 1751
2015-09-11 16:47:29,973 INFO spawned: 'web-nginx' with pid 1755
2015-09-11 16:47:30,047 INFO spawned: 'router-nginx' with pid 1756
2015-09-11 16:47:30,136 INFO spawned: 'varnishncsa' with pid 1758
2015-09-11 16:47:30,228 INFO spawned: 'elasticsearch' with pid 1764
2015-09-11 16:47:30,335 INFO spawned: 'dnsmasq' with pid 1770
2015-09-11 16:47:30,386 INFO spawned: 'beanstalkd' with pid 1777
2015-09-11 16:47:30,437 INFO exited: varnishncsa (exit status 1; not expected)
2015-09-11 16:47:30,583 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:31,848 INFO spawned: 'router-nginx' with pid 1806
2015-09-11 16:47:31,919 INFO spawned: 'varnishncsa' with pid 1807
2015-09-11 16:47:32,098 INFO exited: varnishncsa (exit status 1; not expected)
2015-09-11 16:47:32,247 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:33,394 INFO success: beanstalkd entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2015-09-11 16:47:34,118 INFO spawned: 'varnishncsa' with pid 2026
2015-09-11 16:47:34,300 INFO spawned: 'router-nginx' with pid 2031
2015-09-11 16:47:34,500 INFO success: gatekeeper2 entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,500 INFO success: gatekeeper3 entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,500 INFO success: gatekeeper1 entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,600 INFO success: gatekeeper4 entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,600 INFO success: varnishd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,648 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:34,794 INFO success: redis entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,794 INFO success: distributed-rate-limits-sync entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:34,920 INFO success: web-nginx entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:35,288 INFO success: dnsmasq entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:37,690 INFO spawned: 'router-nginx' with pid 2099
2015-09-11 16:47:37,851 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:39,147 INFO success: varnishncsa entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2015-09-11 16:47:39,502 INFO success: router-log-listener entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:39,602 INFO success: config-reloader entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:39,603 INFO success: web-delayed-job entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:39,693 INFO success: log-processor entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:39,693 INFO success: mongod entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:39,886 INFO success: web-puma entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:40,177 INFO success: elasticsearch entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-09-11 16:47:41,892 INFO spawned: 'router-nginx' with pid 2101
2015-09-11 16:47:42,040 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:47,058 INFO spawned: 'router-nginx' with pid 2137
2015-09-11 16:47:47,100 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:47:53,174 INFO spawned: 'router-nginx' with pid 2150
2015-09-11 16:47:53,229 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:48:00,304 INFO spawned: 'router-nginx' with pid 2151
2015-09-11 16:48:00,358 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:48:08,410 INFO spawned: 'router-nginx' with pid 2152
2015-09-11 16:48:08,463 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:48:17,482 INFO spawned: 'router-nginx' with pid 2153
2015-09-11 16:48:17,523 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:48:27,532 INFO spawned: 'router-nginx' with pid 2158
2015-09-11 16:48:27,598 INFO exited: router-nginx (exit status 1; not expected)
2015-09-11 16:48:27,682 INFO gave up: router-nginx entered FATAL state, too many start retries too quickly
2015-09-11 16:48:29,269 WARN received SIGTERM indicating exit request
2015-09-11 16:48:29,270 INFO waiting for router-log-listener, gatekeeper2, gatekeeper3, gatekeeper1, gatekeeper4, config-reloader, varnishd, web-delayed-job, log-processor, mongod, redis, distributed-rate-limits-sync, web-puma, web-nginx, varnishncsa, elasticsearch, dnsmasq, beanstalkd to die
2015-09-11 16:48:29,278 INFO stopped: beanstalkd (terminated by SIGTERM)
2015-09-11 16:48:29,280 INFO exited: gatekeeper3 (exit status 143; not expected)
2015-09-11 16:48:29,281 INFO stopped: dnsmasq (exit status 0)
2015-09-11 16:48:29,320 INFO exited: distributed-rate-limits-sync (exit status 143; not expected)
2015-09-11 16:48:29,320 INFO exited: web-nginx (exit status 0; expected)
2015-09-11 16:48:29,351 INFO exited: router-log-listener (exit status 143; not expected)
2015-09-11 16:48:29,351 INFO exited: gatekeeper1 (exit status 143; not expected)
2015-09-11 16:48:29,351 INFO exited: gatekeeper2 (exit status 143; not expected)
2015-09-11 16:48:29,352 INFO exited: gatekeeper4 (exit status 143; not expected)
2015-09-11 16:48:29,352 INFO exited: config-reloader (exit status 143; not expected)
2015-09-11 16:48:29,352 INFO exited: varnishncsa (exit status 0; expected)
2015-09-11 16:48:29,363 INFO exited: log-processor (exit status 143; not expected)
2015-09-11 16:48:29,363 INFO exited: mongod (exit status 0; expected)
2015-09-11 16:48:29,384 INFO exited: redis (exit status 0; expected)
2015-09-11 16:48:29,560 INFO stopped: elasticsearch (exit status 143)
2015-09-11 16:48:29,656 INFO stopped: web-puma (exit status 0)
2015-09-11 16:48:30,456 INFO exited: varnishd (exit status 0; expected)
2015-09-11 16:48:32,363 INFO waiting for web-delayed-job to die
2015-09-11 16:48:35,278 INFO stopped: web-delayed-job (exit status 1)
user@umbrella:/opt/api-umbrella/etc/nginx$ 

@as33ms
Author

as33ms commented Sep 11, 2015

@GUI for your information, I just added:

  http {
      server_names_hash_bucket_size 64;
      ...
  }

to /opt/api-umbrella/embedded/etc/nginx/nginx.conf and it does not seem to have any effect.

@as33ms
Author

as33ms commented Sep 11, 2015

@GUI, sorry for spamming, but editing router.conf / web.conf at /opt/api-umbrella/etc/nginx does not help either, since they are overwritten on every start!

@as33ms
Author

as33ms commented Sep 11, 2015

So, apparently, I had to make changes in:


embedded/apps/router/releases/20150420012143/templates/etc/nginx/router.conf.hbs:  server_names_hash_bucket_size 64;
embedded/apps/router/releases/20150420012143/templates/etc/nginx/web.conf.hbs:  server_names_hash_bucket_size 64;
embedded/etc/nginx/nginx.conf:    server_names_hash_bucket_size 64;

With this change, the server is up and running again. I will check next the api requests!

@as33ms
Author

as33ms commented Sep 11, 2015

And this also removes the 404 when accessing the API. That said, the instructions at http://apiumbrella.io/images/docs/add_api_backend_example-af3ba028.png should be updated so that the frontend host is set to your-api-umbrella-host instead of localhost.

I am now waiting for you to review this bug (server_names_hash_bucket_size 64;); in case it's already fixed upstream, please close it.

Best, Aseem
cc @ccsr @bajiat @brylie

@GUI
Member

GUI commented Sep 11, 2015

@ashakunt: Ah, thanks for sleuthing this; these details really help. If you have a chance and could do one more thing, could you report the results of cat /proc/cpuinfo on this server? The default value of server_names_hash_bucket_size varies depending on CPU cache sizes, so I'm curious which CPU triggers this, since I'm pretty sure we have hostnames exceeding 32 characters on our own systems. But if it's a hassle to pull those extra CPU details, don't worry about it.

So basically, I think triggering this issue depends on the length of the hostnames you're using and also on what type of CPU your server has (since that dictates the default server_names_hash_bucket_size).
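To make the length dependence concrete: nginx's default server_names_hash_bucket_size tracks the processor's cache line size (commonly 32, 64, or 128), and each bucket also carries some per-entry hash overhead, so a server_name somewhat shorter than the bucket size can still fail to fit. Measuring the frontend hosts from this thread against the 32-byte default reported in the error log:

```shell
# Print the length of each candidate frontend host; compare against the
# 32-byte bucket size from the "could not build the server_names_hash" error.
for name in localhost umbrella.apinf.io; do
  printf '%2d  %s\n' "${#name}" "$name"
done
# →  9  localhost
# → 17  umbrella.apinf.io
```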

The workaround you've implemented by adding values to the templates/etc/nginx/router.conf.hbs and templates/etc/nginx/web.conf.hbs is probably the best bet for now (and sorry for the template overwriting confusion, but all of the config files do get re-written from these templates on reloads, since a large portion of our config has to be dynamic to account for adding API backends, etc). The only pitfall of this approach is that those template files will actually get overwritten the next time you upgrade API Umbrella, so you'll just want to be careful to remember this if you ever modify any of those template config files.

But for the next package release, I think we can aim to eliminate this specific issue altogether. Since we know the lengths of the hostnames we're adding to the templates, it should be pretty easy to detect when a hostname is longer than the default allows and adjust server_names_hash_bucket_size automatically, so this shouldn't happen regardless of the CPU-based defaults.
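That auto-adjustment could look roughly like the following. This is a hypothetical sketch only (pick_bucket_size and the 16-byte overhead headroom are illustrative inventions, not the actual api-umbrella-router code): find the longest configured hostname and round the bucket size up to the next power of two, starting from nginx's common floor of 32.

```shell
# Hypothetical sketch: derive server_names_hash_bucket_size from the longest
# frontend hostname. The +16 headroom for nginx's per-entry hash overhead is
# an assumption, not a value taken from nginx source.
pick_bucket_size() {
  longest=0
  for name in "$@"; do
    if [ "${#name}" -gt "$longest" ]; then
      longest=${#name}
    fi
  done
  size=32
  while [ "$size" -lt "$((longest + 16))" ]; do
    size=$((size * 2))
  done
  echo "$size"
}

pick_bucket_size localhost umbrella.apinf.io
# → 64
```

For what it's worth, the eventual fix in NREL/api-umbrella-router#9 advertises support for hostnames up to 110 characters, which is consistent with this kind of power-of-two rounding (110 plus overhead fits a 128-byte bucket).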

More generally, we may also want to add the ability to customize these kinds of nginx settings without having to modify the templates. We typically do that by exposing specific settings in our YAML config file, but it's also been on my radar to come up with a more generic solution for arbitrary nginx config customizations without us having to explicitly support each setting (maybe allowing external nginx config files to be included at various places).

@as33ms
Author

as33ms commented Sep 11, 2015

@GUI here you go:

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 6
model name  : QEMU Virtual CPU version 2.0.0
stepping    : 3
microcode   : 0x1
cpu MHz     : 2493.774
cache size  : 4096 KB
physical id : 0
siblings    : 1
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 4
wp      : yes
flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm vnmi
bogomips    : 4987.54
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model       : 6
model name  : QEMU Virtual CPU version 2.0.0
stepping    : 3
microcode   : 0x1
cpu MHz     : 2493.774
cache size  : 4096 KB
physical id : 1
siblings    : 1
core id     : 0
cpu cores   : 1
apicid      : 1
initial apicid  : 1
fpu     : yes
fpu_exception   : yes
cpuid level : 4
wp      : yes
flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm vnmi
bogomips    : 4987.54
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

Please don't be sorry about the overwriting confusion. I am sure you have your own reasons to re-write them. :)

GUI added a commit to NREL/api-umbrella-router that referenced this issue Sep 22, 2015
It was possible that nginx would bomb if an API backend was added with a
long hostname. The exact length that would trigger this error varied,
since nginx's default lengths actually varied depending on the CPU type.
But this should fix things so that we should properly support hostnames
up to 110 characters long regardless of CPU defaults.

Fixes NREL/api-umbrella#168
@GUI
Member

GUI commented Sep 22, 2015

Fixed in master by NREL/api-umbrella-router#9. This will be part of the v0.9 release.

We now adjust this nginx setting based on the length of the longest hostname in the system. This should fix things so that the default nginx value (based on the CPU architecture) shouldn't matter.

Thanks for the report!

@GUI GUI added this to the v0.9 milestone Sep 22, 2015
@yoanisgil

@GUI I am having the same issue as @ashakunt. I made the suggested changes but I still get a 404. Here is my output from cat /proc/cpuinfo: https://gist.github.com/yoanisgil/278ec85f536ddad9bd8e

Any ideas?

@yoanisgil

By the way, my path to those files is embedded/apps/router/releases/20150420012117/, which is not the same as @ashakunt's.

@brylie
Contributor

brylie commented Sep 30, 2015

@mauriciovieira is this issue related to the difficulties you encountered with API Umbrella installation?

@brylie
Contributor

brylie commented Jan 11, 2016

We are still encountering this issue after upgrading a server from 0.8 to 0.10.

@GUI
Member

GUI commented Jan 14, 2016

@brylie: Sorry for the trouble. Following up in #208.
