
Enable arm again #3852

Merged
merged 1 commit into kubernetes:master from aledbf:arm on Jun 27, 2019

Conversation

@aledbf (Member) commented Mar 5, 2019

What this PR does / why we need it:

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

Special notes for your reviewer:

@aledbf (Member Author) commented Mar 5, 2019

Issue building the new image:

Scanning dependencies of target Boost
make[5]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[5]: Entering directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
[ 12%] Creating directories for 'Boost'
[ 25%] Performing download step (download, verify and extract) for 'Boost'
-- Downloading...
   dst='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
   timeout='none'
-- Using src='https://github.com/hunter-packages/boost/archive/v1.69.0-p0.tar.gz'
-- verifying file...
       file='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
-- Downloading... done
-- extracting...
     src='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
     dst='/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Source'
-- extracting... [tar xfz]
-- extracting... [analysis]
-- extracting... [rename]
-- extracting... [clean up]
-- extracting... done
[ 37%] No patch step for 'Boost'
[ 50%] Performing update step for 'Boost'
[ 62%] Performing configure step for 'Boost'
Dummy patch command
Building Boost.Build engine with toolset gcc... tools/build/src/engine/bin.linuxarm/b2
Detecting Python version... 2.7
Detecting Python root... /usr
Unicode/ICU support for Boost.Regex?... /usr
Generating Boost.Build configuration in project-config.jam...

Bootstrapping is done. To build, run:

    ./b2
    
To adjust configuration, edit 'project-config.jam'.
Further information:

   - Command line help:
     ./b2 --help
     
   - Getting started guide: 
     http://www.boost.org/more/getting_started/unix-variants.html
     
   - Boost.Build documentation:
     http://www.boost.org/build/doc/html/index.html

[ 75%] No build step for 'Boost'
[ 87%] Performing install step for 'Boost'
Unable to load Boost.Build: could not find "boost-build.jam"
---------------------------------------------------------------
BOOST_ROOT must be set, either in the environment, or 
on the command-line with -sBOOST_ROOT=..., to the root
of the boost installation.

Attempted search from /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Source up to the root
at /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/share/boost-build
and in these directories from BOOST_BUILD_PATH and BOOST_ROOT: /usr/share/boost-build, /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Install.
Please consult the documentation at 'http://www.boost.org'.
make[5]: *** [CMakeFiles/Boost.dir/build.make:74: Boost-prefix/src/Boost-stamp/Boost-install] Error 1
make[5]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[4]: *** [CMakeFiles/Makefile2:73: CMakeFiles/Boost.dir/all] Error 2
make[4]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[3]: *** [Makefile:84: all] Error 2
make[3]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'

[hunter ** FATAL ERROR **] Build step failed (dir: /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost
[hunter ** FATAL ERROR **] [Directory:/root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/Boost]

------------------------------ ERROR -----------------------------
    https://docs.hunter.sh/en/latest/reference/errors/error.external.build.failed.html
------------------------------------------------------------------

CMake Error at /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_error_page.cmake:12 (message):
Call Stack (most recent call first):
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_fatal_error.cmake:20 (hunter_error_page)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_download.cmake:614 (hunter_fatal_error)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/Boost/hunter.cmake:381 (hunter_download)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_add_package.cmake:62 (include)
  build/cmake/DefineOptions.cmake:110 (hunter_add_package)
  CMakeLists.txt:58 (include)


-- Configuring incomplete, errors occurred!
See also "/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/thrift/Build/thrift-Release-prefix/src/thrift-Release-build/CMakeFiles/CMakeOutput.log".
make[2]: *** [CMakeFiles/thrift-Release.dir/build.make:110: thrift-Release-prefix/src/thrift-Release-stamp/thrift-Release-configure] Error 1
make[1]: *** [CMakeFiles/Makefile2:73: CMakeFiles/thrift-Release.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

[hunter ** FATAL ERROR **] Build step failed (dir: /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/thrift
[hunter ** FATAL ERROR **] [Directory:/root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/thrift]

------------------------------ ERROR -----------------------------
    https://docs.hunter.sh/en/latest/reference/errors/error.external.build.failed.html
------------------------------------------------------------------

CMake Error at /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_error_page.cmake:12 (message):
Call Stack (most recent call first):
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_fatal_error.cmake:20 (hunter_error_page)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_download.cmake:614 (hunter_fatal_error)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/thrift/hunter.cmake:70 (hunter_download)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_add_package.cmake:62 (include)
  CMakeLists.txt:50 (hunter_add_package)


-- Configuring incomplete, errors occurred!
See also "/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build/CMakeFiles/CMakeOutput.log".
root@343a9f2f19b8:/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build# 
root@343a9f2f19b8:/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build# cd /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build
@aledbf (Member Author) commented Apr 19, 2019

Waiting for feedback in jaegertracing/jaeger-client-cpp#151

@alexellis commented Jun 26, 2019

Looking forward to seeing this 👍

Note: you have some rebase conflicts showing up?

@aledbf aledbf force-pushed the aledbf:arm branch from 6da28d6 to 7962529 Jun 26, 2019
@codecov-io commented Jun 26, 2019

Codecov Report

❗️ No coverage uploaded for pull request base (master@ecce3fd).
The diff coverage is 16.66%.


@@           Coverage Diff            @@
##             master   #3852   +/-   ##
========================================
  Coverage          ?   57.9%           
========================================
  Files             ?      87           
  Lines             ?    6544           
  Branches          ?       0           
========================================
  Hits              ?    3789           
  Misses            ?    2324           
  Partials          ?     431
Impacted Files Coverage Δ
internal/ingress/controller/template/template.go 83.93% <16.66%> (ø)

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ecce3fd...ddffa2a.

@aledbf (Member Author) commented Jun 26, 2019

@alexellis thank you for asking about the state of this PR. The error is located in the Jaeger tracing plugin. Right now I am testing a conditional build that omits this plugin for arm. If this works, I can re-enable arm with the caveat that the opentracing feature (for Jaeger) will not be available.
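
(For reference, a conditional build of that kind could look roughly like the sketch below. This is only an illustration of the idea, not the actual change in this PR; the ARCH variable and the build_jaeger_client_cpp step are placeholders.)

# Hypothetical excerpt from an image build script: skip the Jaeger plugin on arm,
# where jaeger-client-cpp currently fails to build (see the Boost/Hunter error above).
if [ "$ARCH" = "arm" ]; then
    echo "Skipping jaeger-client-cpp on $ARCH; opentracing for Jaeger will be unavailable"
else
    build_jaeger_client_cpp   # placeholder for the real download/cmake/make/install steps
fi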

@aledbf aledbf force-pushed the aledbf:arm branch 2 times, most recently from 053d1bf to 4a64d00 Jun 26, 2019
@aledbf aledbf changed the title WIP: Enable arm again Enable arm again Jun 27, 2019
@aledbf (Member Author) commented Jun 27, 2019

I added a new script to run a container with the locally built image against an existing k8s cluster. By specifying the ARCH environment variable, we can test any of the three images (amd64, arm or arm64) using Docker.

aledbf@me:~/go/src/k8s.io/ingress-nginx$ ARCH=arm make run-ingress-controller
Register /usr/bin/qemu-ARCH-static as the handler for binaries in multiple platforms
make[1]: Entering directory '/home/aledbf/go/src/k8s.io/ingress-nginx'
# Register /usr/bin/qemu-ARCH-static as the handler for binaries in multiple platforms
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k

.......
.......
Successfully built 79fa1ad38471
Successfully tagged quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:local
make[2]: Leaving directory '/home/aledbf/go/src/k8s.io/ingress-nginx'
make[1]: Leaving directory '/home/aledbf/go/src/k8s.io/ingress-nginx'
Running against kubectl cluster azure-uswest-01
kubectl proxy process PID: 6423
waiting for kubectl proxy
Starting to serve on [::]:8001
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    local
  Build:      git-5ee82bd08
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------

W0627 02:04:54.305041       7 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
built by gcc 8.3.0 (Debian 8.3.0-6) 
built with OpenSSL 1.1.1c  28 May 2019
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2 -g -Og -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wno-deprecated-declarations -fno-strict-aliasing -D_FORTIFY_SOURCE=2 --param=ssp-buffer-size=4 -DTCP_FASTOPEN=23 -fPIC -Wno-cast-function-type' --add-module=../ngx_devel_kit-0.3.1rc1 --add-module=../echo-nginx-module-0.61 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.15 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.7 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -fPIE -fPIC -pie -Wl,-z,relro -Wl,-z,now' --with-compat --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_sub_module --with-http_v2_module --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-http_secure_link_module --with-http_gunzip_module --with-md5-asm --with-sha1-asm --with-file-aio --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --user=www-data --group=www-data --add-module=/tmp/build/nginx-http-auth-digest-cd8641886c873cf543255aeda20d23e4cd603d05 --add-module=/tmp/build/ngx_http_substitutions_filter_module-bc58cb11844bc42735bbaef7085ea86ace46d05b --add-module=/tmp/build/nginx-influxdb-module-5b09391cb7b9a889687c0aa67964c06a2d933e8b --add-dynamic-module=/tmp/build/nginx-opentracing-0.8.0/opentracing --add-dynamic-module=/tmp/build/ModSecurity-nginx-d7101e13685efd7e7c9f808871b202656a969f4b --add-dynamic-module=/tmp/build/ngx_http_geoip2_module-3.2 --add-module=/tmp/build/nginx_ajp_module-bf6cd93f2098b59260de8d494f0f4b1f11a84627 --add-module=/tmp/build/ngx_brotli --with-stream --with-stream_ssl_preread_module
I0627 02:04:54.398540       7 main.go:196] Creating API client for http://0.0.0.0:8001
I0627 02:04:54.409491       7 main.go:216] Trying to discover Kubernetes version
I0627 02:04:54.673342       7 main.go:240] Running in Kubernetes cluster version v1.13 (v1.13.5) - git (clean) commit 2166946f41b36dea2c4626f90a77706f426cdea2 - platform linux/amd64
I0627 02:05:00.198583       7 main.go:111] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0627 02:05:00.404801       7 main.go:131] v1.13.5
W0627 02:05:00.408989       7 main.go:115] Using deprecated "k8s.io/api/extensions/v1beta1" package because Kubernetes version is < v1.14.0
Unknown QEMU_IFLA_BR type 46 (many errors like this)
W0627 02:05:01.006627       7 store.go:624] Unexpected error reading configuration configmap: resource name may not be empty
W0627 02:05:01.085157       7 nginx.go:159] Update of Ingress status is disabled (flag --update-status)
I0627 02:05:01.377632       7 nginx.go:280] Starting NGINX Ingress controller
I0627 02:05:08.455154       7 nginx.go:712] NGINX configuration diff:
--- /etc/nginx/nginx.conf	2019-06-27 02:04:01.000000000 +0000
+++ /tmp/new-nginx-cfg757082901	2019-06-27 02:05:08.000000000 +0000
@@ -1,6 +1,1381 @@
-# A very simple nginx configuration file that forces nginx to start.
+
+# Configuration checksum: 10327262340018821232
+
+# setup custom paths that do not require root access
 pid /tmp/nginx.pid;
.....
.....

I0627 02:05:12.124364       7 controller.go:156] Backend successfully reloaded.
2019/06/27 02:05:12 [emerg] 380#380: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 382#382: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 384#384: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 386#386: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 388#388: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 391#391: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 389#389: io_setup() failed (38: Function not implemented)
2019/06/27 02:05:12 [emerg] 392#392: io_setup() failed (38: Function not implemented)
[27/Jun/2019:02:05:12 +0000]TCP200000.006
I0627 02:05:12.623370       7 controller.go:179] Dynamic reconfiguration succeeded.
I0627 02:05:12.675804       7 socket.go:344] removing ingresses [] from metrics
E0627 02:05:23.278782       7 leaderelection.go:328] error initially creating leader election record: namespaces "invalid-namespace" not found
^CI0626 22:07:02.508920    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
I0626 22:07:02.509001    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
I0626 22:07:02.509074    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
I0626 22:07:02.509142    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
I0626 22:07:02.509155    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
I0626 22:07:02.532761    6423 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled
Stoping kubectl proxy
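
(For reference, the cross-architecture run above relies on qemu user-mode emulation registered via binfmt_misc, as the log shows. A rough sketch of that mechanism, not the exact Makefile recipe; the binary path inside the image is an assumption for illustration:)

# Register qemu static binaries as binfmt handlers, then run the arm image on an amd64 host.
docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker run --rm \
    quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:local \
    /nginx-ingress-controller --version   # assumed entrypoint path, for illustration
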
@aledbf aledbf added this to In Progress in 0.25.0 Jun 27, 2019
@aledbf aledbf force-pushed the aledbf:arm branch from 79dd972 to 0649e0a Jun 27, 2019
@aledbf aledbf force-pushed the aledbf:arm branch from 0649e0a to ddffa2a Jun 27, 2019
@ElvinEfendi (Member) commented Jun 27, 2019

/lgtm

I wonder if users use lua-resty-waf at all. My gut says we can drop support for it.

@k8s-ci-robot k8s-ci-robot added the lgtm label Jun 27, 2019
@k8s-ci-robot (Contributor) commented Jun 27, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aledbf, ElvinEfendi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@aledbf (Member Author) commented Jun 27, 2019

I wonder if users use lua-resty-waf at all. My gut says we can drop support for it.

I want to add a new route to the ingress controller (Go) that dumps a JSON object with stats about the controller:

  • ingress controller version
  • arch
  • k8s API server version
  • informers stats
    • number of ingresses and paths
    • number of configmaps
    • number of secrets
    • name of annotations being used and count
    • number of configmaps used in annotations (the informer could have 1000 configmaps with only 2 of them referenced)
  • enabled features
  • number of workers
  • ram utilization

Then the user can upload it to a new Google form (not sure if it supports uploads), and we can then decide whether to do some cleanup, using two or three releases for deprecation and removal.
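
(A rough sketch of how such a route might be queried; the /debug/stats path, the port and the field names below are hypothetical, since this endpoint does not exist yet:)

# Hypothetical: fetch the stats JSON from the controller's status port.
curl -s http://127.0.0.1:10254/debug/stats
# Illustrative response shape:
# {
#   "version": "0.25.0",
#   "arch": "arm",
#   "k8sApiServerVersion": "v1.14.0",
#   "informers": {"ingresses": 12, "paths": 30, "configmaps": 2, "secrets": 5},
#   "annotations": {"nginx.ingress.kubernetes.io/rewrite-target": 4},
#   "enabledFeatures": ["opentracing"],
#   "workers": 4,
#   "ramBytes": 104857600
# }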

@k8s-ci-robot k8s-ci-robot merged commit 2586542 into kubernetes:master Jun 27, 2019
12 checks passed
cla/linuxfoundation: aledbf authorized
pull-ingress-nginx-boilerplate: Job succeeded.
pull-ingress-nginx-codegen: Job succeeded.
pull-ingress-nginx-e2e-1-12: Job succeeded.
pull-ingress-nginx-e2e-1-13: Job succeeded.
pull-ingress-nginx-e2e-1-14: Job succeeded.
pull-ingress-nginx-e2e-1-15: Job succeeded.
pull-ingress-nginx-gofmt: Job succeeded.
pull-ingress-nginx-golint: Job succeeded.
pull-ingress-nginx-test: Job succeeded.
pull-ingress-nginx-test-lua: Job succeeded.
tide: In merge pool.
@aledbf aledbf moved this from In Progress to done in 0.25.0 Jun 27, 2019
@aledbf aledbf deleted the aledbf:arm branch Jun 27, 2019
@alexellis commented Jun 27, 2019

I would love to test this. I'm currently installing and operating Nginx through the helm chart.

Can you provide a helm command that I can use?

@aledbf (Member Author) commented Jun 27, 2019

@alexellis something like

helm install \
    --name nginx-ingress stable/nginx-ingress \
    --namespace arm \
    --set rbac.create=true \
    --set controller.service.type=NodePort

kubectl --namespace arm set image deployment/nginx-ingress-controller \
    nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:dev
@MattJeanes commented Jun 27, 2019

@aledbf apologies if this error is unrelated, but I gave this a go (your exact commands) and the pod output was this (Raspberry Pi 3B):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-5ee82bd08
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
 I0627 22:33:23.953859       6 flags.go:194] Watching for Ingress class: nginx
W0627 22:33:23.954592       6 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0627 22:33:23.974962       6 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0627 22:33:23.976149       6 main.go:196] Creating API client for https://10.96.0.1:443
I0627 22:33:24.154351       6 main.go:240] Running in Kubernetes cluster version v1.14 (v1.14.0) - git (clean) commit 641856db18352033a0d96dbc99153fa3b27298e5 - platform linux/arm
I0627 22:33:24.175474       6 main.go:100] Validated nginx/nginx-ingress-default-backend as the default backend.
I0627 22:33:28.713949       6 main.go:111] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0627 22:33:28.726154       6 main.go:131] v1.14.0
W0627 22:33:28.814667       6 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0627 22:33:28.871539       6 nginx.go:280] Starting NGINX Ingress controller
E0627 22:33:29.987585       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:31.001687       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:32.015138       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:33.027626       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:33.325338       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:34.038846       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:35.071120       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:36.085533       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:37.097822       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:38.109134       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:39.123038       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:39.662185       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:40.136173       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:41.153777       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:42.166882       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:43.176737       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:43.281129       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:44.188361       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:45.205556       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:46.216230       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
@aledbf (Member Author) commented Jun 27, 2019

@MattJeanes no, this is an issue with the helm chart if you run a k8s cluster on v1.14.0 or newer, related to #4127.

To fix this, please run the following patch commands to add the new rules for the networking.k8s.io API.

kubectl patch --namespace ingress-nginx role nginx-ingress-role --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch --namespace ingress-nginx role nginx-ingress-role --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

After running those commands, delete the pod. The new one will have the new permissions.
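
(As a quick verification, not from the original thread, the following should report "yes" once the rules are in place; adjust the namespace and service account names to match your install:)

kubectl auth can-i list ingresses.networking.k8s.io \
    --namespace ingress-nginx \
    --as system:serviceaccount:ingress-nginx:nginx-ingress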

@MattJeanes commented Jun 27, 2019

I had to modify those commands slightly, but still no dice. Not sure if your link was correct?

Here are the exact commands I used:

helm install \
    --name nginx-ingress stable/nginx-ingress \
    --namespace nginx-ingress \
    --set rbac.create=true \
    --set controller.service.type=NodePort

kubectl --namespace nginx-ingress set image deployment/nginx-ingress-controller \
    nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:dev

kubectl patch --namespace nginx-ingress role nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'

kubectl patch --namespace nginx-ingress role nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

Might be worth noting that the arm:0.20.0 build starts OK. Happy to take this to another issue if you want; I had a search around and couldn't find any other documentation or issues about this.

Some more hopefully useful output:

matt@Matt-PC:~$ kubectl get role --namespace nginx-ingress -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "Role",
            "metadata": {
                "creationTimestamp": "2019-06-27T23:21:54Z",
                "labels": {
                    "app": "nginx-ingress",
                    "chart": "nginx-ingress-1.7.0",
                    "heritage": "Tiller",
                    "release": "nginx-ingress"
                },
                "name": "nginx-ingress",
                "namespace": "nginx-ingress",
                "resourceVersion": "12102653",
                "selfLink": "/apis/rbac.authorization.k8s.io/v1/namespaces/nginx-ingress/roles/nginx-ingress",
                "uid": "59872e9f-9932-11e9-9d41-b827eb498b75"
            },
            "rules": [
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "namespaces"
                    ],
                    "verbs": [
                        "get"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "configmaps",
                        "pods",
                        "secrets",
                        "endpoints"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "services"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "update",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "extensions"
                    ],
                    "resources": [
                        "ingresses"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "extensions"
                    ],
                    "resources": [
                        "ingresses/status"
                    ],
                    "verbs": [
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resourceNames": [
                        "ingress-controller-leader-nginx"
                    ],
                    "resources": [
                        "configmaps"
                    ],
                    "verbs": [
                        "get",
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "configmaps"
                    ],
                    "verbs": [
                        "create"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "endpoints"
                    ],
                    "verbs": [
                        "create",
                        "get",
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "events"
                    ],
                    "verbs": [
                        "create",
                        "patch"
                    ]
                },
                {
                    "apiGroups": [
                        "networking.k8s.io"
                    ],
                    "resources": [
                        "ingresses"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "networking.k8s.io"
                    ],
                    "resources": [
                        "ingresses/status"
                    ],
                    "verbs": [
                        "update"
                    ]
                }
            ]
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
matt@Matt-PC:~$ kubectl get rolebinding --namespace nginx-ingress -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {
                "creationTimestamp": "2019-06-27T23:21:54Z",
                "labels": {
                    "app": "nginx-ingress",
                    "chart": "nginx-ingress-1.7.0",
                    "heritage": "Tiller",
                    "release": "nginx-ingress"
                },
                "name": "nginx-ingress",
                "namespace": "nginx-ingress",
                "resourceVersion": "12102140",
                "selfLink": "/apis/rbac.authorization.k8s.io/v1/namespaces/nginx-ingress/rolebindings/nginx-ingress",
                "uid": "598d49ac-9932-11e9-9d41-b827eb498b75"
            },
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role",
                "name": "nginx-ingress"
            },
            "subjects": [
                {
                    "kind": "ServiceAccount",
                    "name": "nginx-ingress",
                    "namespace": "nginx-ingress"
                }
            ]
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
matt@Matt-PC:~$ kubectl get clusterrolebinding nginx-ingress -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {
        "creationTimestamp": "2019-06-27T23:21:54Z",
        "labels": {
            "app": "nginx-ingress",
            "chart": "nginx-ingress-1.7.0",
            "heritage": "Tiller",
            "release": "nginx-ingress"
        },
        "name": "nginx-ingress",
        "resourceVersion": "12102135",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/nginx-ingress",
        "uid": "597a8e30-9932-11e9-9d41-b827eb498b75"
    },
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "nginx-ingress"
    },
    "subjects": [
        {
            "kind": "ServiceAccount",
            "name": "nginx-ingress",
            "namespace": "nginx-ingress"
        }
    ]
}
matt@Matt-PC:~$ kubectl get clusterrole nginx-ingress -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
    "metadata": {
        "creationTimestamp": "2019-06-27T23:21:53Z",
        "labels": {
            "app": "nginx-ingress",
            "chart": "nginx-ingress-1.7.0",
            "heritage": "Tiller",
            "release": "nginx-ingress"
        },
        "name": "nginx-ingress",
        "resourceVersion": "12102134",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1/clusterroles/nginx-ingress",
        "uid": "596f32c9-9932-11e9-9d41-b827eb498b75"
    },
    "rules": [
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "configmaps",
                "endpoints",
                "nodes",
                "pods",
                "secrets"
            ],
            "verbs": [
                "list",
                "watch"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "nodes"
            ],
            "verbs": [
                "get"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "services"
            ],
            "verbs": [
                "get",
                "list",
                "update",
                "watch"
            ]
        },
        {
            "apiGroups": [
                "extensions"
            ],
            "resources": [
                "ingresses"
            ],
            "verbs": [
                "get",
                "list",
                "watch"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "events"
            ],
            "verbs": [
                "create",
                "patch"
            ]
        },
        {
            "apiGroups": [
                "extensions"
            ],
            "resources": [
                "ingresses/status"
            ],
            "verbs": [
                "update"
            ]
        }
    ]
}

Latest pod logs:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-5ee82bd08
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
 I0627 23:36:30.715149       7 flags.go:194] Watching for Ingress class: nginx
W0627 23:36:30.715998       7 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0627 23:36:30.730456       7 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0627 23:36:30.731551       7 main.go:196] Creating API client for https://10.96.0.1:443
I0627 23:36:30.920810       7 main.go:240] Running in Kubernetes cluster version v1.14 (v1.14.0) - git (clean) commit 641856db18352033a0d96dbc99153fa3b27298e5 - platform linux/arm
I0627 23:36:30.957370       7 main.go:100] Validated nginx-ingress/nginx-ingress-default-backend as the default backend.
I0627 23:36:37.475757       7 main.go:111] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0627 23:36:37.504099       7 main.go:131] v1.14.0
W0627 23:36:37.570802       7 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0627 23:36:37.626477       7 nginx.go:280] Starting NGINX Ingress controller
E0627 23:36:38.746353       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:39.755765       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:40.765440       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:41.783611       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:42.794894       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:43.803262       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:44.814760       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:44.995472       7 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 23:36:45.828702       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:46.841740       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:47.856288       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:48.082283       7 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 23:36:48.869435       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:49.880386       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:50.889823       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:51.904185       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:52.916480       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
@aledbf (Member Author) commented Jun 27, 2019

@MattJeanes please also execute

kubectl patch clusterrole nginx-ingress-clusterrole --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch clusterrole nginx-ingress-clusterrole --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

The previous commands only patch the Role in the ingress-nginx namespace.

@MattJeanes commented Jun 28, 2019

Thank you very much, that's fixed it. Just had to make a tiny tweak to your commands as below. Hopefully this can help others if they run into this problem too.

kubectl patch clusterrole nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch clusterrole nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'
@alexellis commented Jun 28, 2019

This is exciting 🎉

What is the final set of commands to run?

Quick question for @MattJeanes: last time I checked, tiller wasn't available for armhf - are you templating the YAML then applying it, or do you have a working tiller Docker image on your RPi too?

Alex

@MattJeanes commented Jun 28, 2019

@alexellis if you use the commands from my previous two comments you should get it working. Note the patch commands are only needed for Kubernetes v1.14 and above; otherwise you don't need to run them. I used https://github.com/jessestuart/tiller-multiarch to install tiller on arm.

@mylesagray commented Jul 24, 2019

I'll note that it was also necessary to change the default-backend image to an arm build:

kubectl --namespace nginx-ingress set image deployment/nginx-ingress-default-backend \
    nginx-ingress-default-backend=gcr.io/google_containers/defaultbackend-arm:1.5
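
(Optionally, to confirm the backend rolls out with the arm image; a suggestion, not part of the original comment:)

kubectl --namespace nginx-ingress rollout status deployment/nginx-ingress-default-backend
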
@alokhom commented Jul 29, 2019

@MattJeanes I am facing this "does not have any active Endpoint" warning. How do I resolve this?

ubuntu@master-node:~/charts$ kubectl logs pod/nginx-ingress-controller-6dc598747-kkwss -n arm
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-c01effb07
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------

I0729 00:27:23.185541       6 flags.go:194] Watching for Ingress class: nginx
W0729 00:27:23.185920       6 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0729 00:27:23.192582       6 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0729 00:27:23.192957       6 main.go:183] Creating API client for https://10.96.0.1:443
I0729 00:27:23.208651       6 main.go:227] Running in Kubernetes cluster version v1.15 (v1.15.1) - git (clean) commit 4485c6f18cee9a5d3c3b4e523bd27972b1b53892 - platform linux/arm64
I0729 00:27:23.213841       6 main.go:91] Validated arm/nginx-ingress-default-backend as the default backend.
I0729 00:27:25.009018       6 main.go:102] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0729 00:27:25.010201       6 main.go:131] v1.15.1
W0729 00:27:25.039315       6 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0729 00:27:25.059015       6 nginx.go:277] Starting NGINX Ingress controller
I0729 00:27:26.259713       6 leaderelection.go:235] attempting to acquire leader lease  arm/ingress-controller-leader-nginx...
I0729 00:27:26.259697       6 nginx.go:321] Starting NGINX process
W0729 00:27:26.261827       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
I0729 00:27:26.262112       6 controller.go:137] Configuration changes detected, backend reload required.
I0729 00:27:26.271849       6 leaderelection.go:245] successfully acquired lease arm/ingress-controller-leader-nginx
I0729 00:27:26.272342       6 status.go:86] new leader elected: nginx-ingress-controller-6dc598747-kkwss
I0729 00:27:26.373564       6 controller.go:153] Backend successfully reloaded.
I0729 00:27:26.373659       6 controller.go:162] Initial sync, sleeping for 1 second.
[29/Jul/2019:00:27:27 +0000]TCP200000.000
W0729 00:27:30.068659       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:27:38.653588       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:29:49.068996       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:29:52.402469       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
[29/Jul/2019:00:29:55 +0000]TCP200000.000
@MattJeanes commented Jul 29, 2019

@alokhom did you also follow the command above from @mylesagray about changing the default backend? The warning says there are no ready pods behind the default-backend service, most likely because they are not arm images and fail to start.
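
(A quick way to check, suggested here rather than taken from the thread: verify the default-backend pods are Ready and that the service has endpoints.)

kubectl --namespace arm get pods
kubectl --namespace arm describe deployment nginx-ingress-default-backend   # deployment name assumed to match the service name
kubectl --namespace arm get endpoints nginx-ingress-default-backend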
