
Allow ambassador from other namespace to access SeldonDeployment #280

Merged Nov 5, 2018 (3 commits)

Conversation

ChenyuanZ (Contributor)

This PR allows an Ambassador instance running in a different namespace to access a SeldonDeployment. Fixes #279.

@ChenyuanZ (Contributor, Author)

Tested that the cluster-manager now adds the service's namespace (here, seldon) to the generated Ambassador annotation:

2018-11-04 21:02:29.967 DEBUG 7 --- [pool-1-thread-1] i.s.c.k.SeldonDeploymentControllerImpl   : Created service:{
  "metadata": {
    "name": "test-output-transformer",
    "generateName": "",
    "namespace": "seldon",
    "selfLink": "/api/v1/namespaces/seldon/services/test-output-transformer",
    "uid": "f10a4f18-e074-11e8-b574-000c293741a7",
    "resourceVersion": "830921",
    "generation": 0,
    "creationTimestamp": "2018-11-04T21:02:29Z",
    "labels": {
      "seldon-app": "test-output-transformer",
      "seldon-deployment-id": "test-output-transformer"
    },
    "annotations": {
      "getambassador.io/config": "---\napiVersion: ambassador/v0\nkind:  Mapping\nname:  seldon_seldon-deployment-output-transformer_rest_mapping\nprefix: /seldon/seldon-deployment-output-transformer/\nservice: test-output-transformer.seldon:8000\ntimeout_ms: 3000\n---\napiVersion: ambassador/v0\nkind:  Mapping\nname:  seldon-deployment-output-transformer_grpc_mapping\ngrpc: true\nprefix: /seldon.protos.Seldon/\nrewrite: /seldon.protos.Seldon/\nheaders:\n  seldon: seldon-deployment-output-transformer\nservice: test-output-transformer.seldon:5001\ntimeout_ms: 3000\n"
    },
    "ownerReferences": [{
      "kind": "SeldonDeployment",
      "name": "seldon-deployment-output-transformer",
      "uid": "f07c05f5-e074-11e8-b574-000c293741a7",
      "apiVersion": "machinelearning.seldon.io/v1alpha2",
      "controller": true
    }],
    "clusterName": ""
  },
  "spec": {
    "ports": [{
      "name": "http",
      "protocol": "TCP",
      "port": 8000,
      "targetPort": "",
      "nodePort": 0
    }, {
      "name": "grpc",
      "protocol": "TCP",
      "port": 5001,
      "targetPort": "",
      "nodePort": 0
    }],
    "selector": {
      "seldon-app": "test-output-transformer"
    },
    "clusterIP": "10.110.134.100",
    "type": "ClusterIP",
    "sessionAffinity": "None",
    "loadBalancerIP": "",
    "externalName": "",
    "externalTrafficPolicy": "",
    "healthCheckNodePort": 0
  },
  "status": {
    "loadBalancer": {
    }
  }
}
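The essential change is visible in the getambassador.io/config annotation above: the Mapping's service is now namespace-qualified (test-output-transformer.seldon:8000 instead of test-output-transformer:8000), so an Ambassador in any namespace can resolve it via cluster DNS. A minimal Python sketch of that annotation logic (illustrative only; the real cluster-manager implementation is Java, and the function name here is made up):

```python
def ambassador_rest_mapping(deployment_name, service_name, namespace,
                            port=8000, timeout_ms=3000):
    """Build the REST Mapping block for the getambassador.io/config annotation.

    Qualifying the service as '<name>.<namespace>' lets an Ambassador running
    in a different namespace route to it through cluster DNS.
    """
    return (
        "---\n"
        "apiVersion: ambassador/v0\n"
        "kind: Mapping\n"
        f"name: seldon_{deployment_name}_rest_mapping\n"
        f"prefix: /seldon/{deployment_name}/\n"
        f"service: {service_name}.{namespace}:{port}\n"
        f"timeout_ms: {timeout_ms}\n"
    )

print(ambassador_rest_mapping(
    "seldon-deployment-output-transformer", "test-output-transformer", "seldon"))
```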

Verified that an Ambassador running in the default namespace can route to a SeldonDeployment in the seldon namespace:

$ kubectl port-forward $(kubectl get pods -n default -l service=ambassador -o jsonpath='{.items[0].metadata.name}') -n default 8003:80
Forwarding from 127.0.0.1:8003 -> 80
Handling connection for 8003
>>> response = requests.post("http://localhost:8003/seldon/seldon-deployment-output-transformer/api/v0.1/predictions", json=payload)
>>> response.text
'{\n  "meta": {\n    "puid": "rpik1bn250m5h5qa36teqs6fc3",\n    "tags": {\n    },\n    "routing": {\n      "output-transformer": -1\n    },\n    "requestPath": {\n      "classifier": "seldonio/mock_classifier:1.0",\n      "output-transformer": "seldonio/output_transformer:0.1"\n    }\n  },\n  "data": {\n    "names": ["proba"],\n    "tensor": {\n      "shape": [2, 1],\n      "values": [0.07577603016695865, 1.0]\n    }\n  }\n}'
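The escaped response string above can be unpacked with the standard json module to check what the request actually traversed (the body below is copied from the session output; only the formatting differs):

```python
import json

# Prediction response body returned through the default-namespace Ambassador.
response_text = """{
  "meta": {
    "puid": "rpik1bn250m5h5qa36teqs6fc3",
    "tags": {},
    "routing": {"output-transformer": -1},
    "requestPath": {
      "classifier": "seldonio/mock_classifier:1.0",
      "output-transformer": "seldonio/output_transformer:0.1"
    }
  },
  "data": {
    "names": ["proba"],
    "tensor": {"shape": [2, 1], "values": [0.07577603016695865, 1.0]}
  }
}"""

body = json.loads(response_text)
# requestPath shows the call passed through both graph components.
print(body["meta"]["requestPath"])
# The tensor is a 2x1 column of probabilities.
print(body["data"]["tensor"]["shape"])  # [2, 1]
```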

Ambassador log (showing the hot restart after the config change was picked up):

$ k logs -f ambassador-c9686b5b-4w5km ambassador -n default
2018-11-04 20:57:16 kubewatch 0.40.0 INFO: generating config with gencount 1 (1 change)
/usr/lib/python3.6/site-packages/pkg_resources/__init__.py:1235: UserWarning: /ambassador is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
  warnings.warn(msg, UserWarning)
2018-11-04 20:57:17 kubewatch 0.40.0 INFO: Scout reports {"latest_version": "0.40.1", "application": "ambassador", "notices": [], "cached": false, "timestamp": 1541365036.901549}
[2018-11-04 20:57:17.436][17][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 20:57:17.438][17][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 20:57:17.438][17][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 20:57:17.445][17][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 20:57:17.445][17][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
AMBASSADOR: starting diagd
AMBASSADOR: starting Envoy
STATSD_ENABLED is not set to true, no stats will be exposed
AMBASSADOR: waiting
PIDS: 21:diagd 22:envoy 23:kubewatch
starting hot-restarter with target: /ambassador/start-envoy.sh
forking and execing new child process at epoch 0
forked new child process with PID=27
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:183] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:185] statically linked extensions:
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:187]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:190]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,extauth
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:193]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:196]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-11-04 20:57:17.626][27][info][main] source/server/server.cc:198]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.statsd
[2018-11-04 20:57:17.627][27][info][main] source/server/server.cc:200]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-11-04 20:57:17.627][27][info][main] source/server/server.cc:203]   transport_sockets.downstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 20:57:17.627][27][info][main] source/server/server.cc:206]   transport_sockets.upstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 20:57:17.655][27][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 20:57:17.674][27][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 20:57:17.686][27][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 20:57:17.686][27][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-11-04 20:57:17.687][27][info][main] source/server/server.cc:398] starting main dispatch loop
[2018-11-04 20:57:17.689][27][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 20:57:17.689][27][info][main] source/server/server.cc:378] all clusters initialized. initializing init manager
[2018-11-04 20:57:17.689][27][info][config] source/server/listener_manager_impl.cc:781] all dependencies initialized. starting workers
/usr/lib/python3.6/site-packages/pkg_resources/__init__.py:1235: UserWarning: /ambassador is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
  warnings.warn(msg, UserWarning)
2018-11-04 20:57:18 diagd 0.40.0 [P21TMainThread] INFO: thread count 9, listening on 0.0.0.0:8877
[2018-11-04 20:57:19 +0000] [21] [INFO] Starting gunicorn 19.8.1
[2018-11-04 20:57:19 +0000] [21] [INFO] Listening at: http://0.0.0.0:8877 (21)
[2018-11-04 20:57:19 +0000] [21] [INFO] Using worker: threads
[2018-11-04 20:57:19 +0000] [60] [INFO] Booting worker with pid: 60
2018-11-04 20:57:19 diagd 0.40.0 [P60TMainThread] INFO: Starting periodic updates
[2018-11-04 20:57:27.689][27][info][main] source/server/drain_manager_impl.cc:63] shutting down parent after drain
2018-11-04 20:57:33 kubewatch 0.40.0 INFO: generating config with gencount 2 (1 change)
/usr/lib/python3.6/site-packages/pkg_resources/__init__.py:1235: UserWarning: /ambassador is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
  warnings.warn(msg, UserWarning)
2018-11-04 20:57:34 kubewatch 0.40.0 INFO: Scout reports {"latest_version": "0.40.1", "application": "ambassador", "notices": [], "cached": false, "timestamp": 1541365053.683807}
[2018-11-04 20:57:34.129][63][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 20:57:34.130][63][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 20:57:34.130][63][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 20:57:34.137][63][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 20:57:34.137][63][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
got SIGHUP
forking and execing new child process at epoch 1
forked new child process with PID=67
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:183] initializing epoch 1 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:185] statically linked extensions:
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:187]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:190]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,extauth
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:193]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:196]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:198]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.statsd
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:200]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:203]   transport_sockets.downstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 20:57:34.157][67][info][main] source/server/server.cc:206]   transport_sockets.upstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 20:57:34.165][27][warning][main] source/server/server.cc:449] shutting down admin due to child startup
[2018-11-04 20:57:34.165][27][warning][main] source/server/server.cc:457] terminating parent process
[2018-11-04 20:57:34.167][67][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 20:57:34.171][67][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 20:57:34.182][67][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 20:57:34.182][67][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-11-04 20:57:34.185][67][info][main] source/server/server.cc:398] starting main dispatch loop
[2018-11-04 20:57:34.188][67][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 20:57:34.188][67][info][main] source/server/server.cc:378] all clusters initialized. initializing init manager
[2018-11-04 20:57:34.188][67][info][config] source/server/listener_manager_impl.cc:781] all dependencies initialized. starting workers
[2018-11-04 20:57:34.191][27][info][main] source/server/server.cc:98] closing and draining listeners
[2018-11-04 20:57:44.189][67][info][main] source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2018-11-04 20:57:44.190][27][info][main] source/server/hot_restart_impl.cc:435] shutting down due to child request
[2018-11-04 20:57:44.190][27][warning][main] source/server/server.cc:348] caught SIGTERM
[2018-11-04 20:57:44.190][27][info][main] source/server/server.cc:402] main dispatch loop exited
[2018-11-04 20:57:44.193][27][info][main] source/server/server.cc:437] exiting
got SIGCHLD
PID=27 exited with code=0
Child process exited gracefully, everything looks fine.
2018-11-04 21:02:34 kubewatch 0.40.0 INFO: generating config with gencount 3 (1 change)
2018-11-04 21:02:34 kubewatch 0.40.0 INFO: Scout reports {"latest_version": "0.40.1", "application": "ambassador", "notices": [], "cached": true, "timestamp": 1541365053.683807}
[2018-11-04 21:02:34.482][93][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 21:02:34.485][93][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 21:02:34.485][93][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 21:02:34.498][93][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 21:02:34.498][93][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
got SIGHUP
forking and execing new child process at epoch 2
forked new child process with PID=97
[2018-11-04 21:02:34.558][97][info][main] source/server/server.cc:183] initializing epoch 2 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-11-04 21:02:34.558][97][info][main] source/server/server.cc:185] statically linked extensions:
[2018-11-04 21:02:34.558][97][info][main] source/server/server.cc:187]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:190]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,extauth
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:193]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:196]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:198]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.statsd
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:200]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:203]   transport_sockets.downstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 21:02:34.559][97][info][main] source/server/server.cc:206]   transport_sockets.upstream: envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-04 21:02:34.574][67][warning][main] source/server/server.cc:449] shutting down admin due to child startup
[2018-11-04 21:02:34.574][67][warning][main] source/server/server.cc:457] terminating parent process
[2018-11-04 21:02:34.577][97][info][config] source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-04 21:02:34.590][97][info][config] source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-11-04 21:02:34.604][97][info][config] source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-04 21:02:34.604][97][info][config] source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-11-04 21:02:34.605][97][info][main] source/server/server.cc:398] starting main dispatch loop
[2018-11-04 21:02:34.608][97][info][upstream] source/common/upstream/cluster_manager_impl.cc:132] cm init: all clusters initialized
[2018-11-04 21:02:34.608][97][info][main] source/server/server.cc:378] all clusters initialized. initializing init manager
[2018-11-04 21:02:34.608][97][info][config] source/server/listener_manager_impl.cc:781] all dependencies initialized. starting workers
[2018-11-04 21:02:34.608][67][info][main] source/server/server.cc:98] closing and draining listeners
[2018-11-04 21:02:44.609][97][info][main] source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2018-11-04 21:02:44.609][67][info][main] source/server/hot_restart_impl.cc:435] shutting down due to child request
[2018-11-04 21:02:44.609][67][warning][main] source/server/server.cc:348] caught SIGTERM
[2018-11-04 21:02:44.609][67][info][main] source/server/server.cc:402] main dispatch loop exited
[2018-11-04 21:02:44.612][67][info][main] source/server/server.cc:437] exiting
got SIGCHLD
PID=67 exited with code=0
Child process exited gracefully, everything looks fine.
ACCESS [2018-11-04T21:07:28.176Z] "POST /seldon/seldon-deployment-output-transformer/api/v0.1/predictions HTTP/1.1" 200 - 153 407 165 164 "-" "python-requests/2.19.1" "b493efe0-da65-49f3-b55e-383d79e8ef84" "localhost:8003" "10.110.134.100:8000"

@ChenyuanZ ChenyuanZ changed the title WIP: allow ambassador from other namespace to access SeldonDeployment Allow ambassador from other namespace to access SeldonDeployment Nov 4, 2018
@ukclivecox (Contributor) left a comment

At present we keep all versions at x-SNAPSHOT until release x, so under the current scheme this should stay at 0.2.4-SNAPSHOT. Happy to discuss alternatives, but we have a script that updates all poms to match the release.

@ChenyuanZ (Contributor, Author)

Hi @cliveseldon,

I've reverted the manual pom version change. Please let me know if there's anything else I should do.

Thanks,
Chenyuan

@ukclivecox (Contributor)

Thanks for this!

@ukclivecox ukclivecox merged commit faaea04 into SeldonIO:master Nov 5, 2018
@ChenyuanZ ChenyuanZ deleted the enable_global_ambassador branch November 5, 2018 15:03
agrski pushed a commit that referenced this pull request Dec 2, 2022