
grpc-json transcoder: returns 415 instead of 404 #5010

Closed
toefel18 opened this issue Nov 11, 2018 · 6 comments
Labels
question Questions that are neither investigations, bugs, nor enhancements

Comments

@toefel18

Bug Template

Title: grpc-json transcoder returns HTTP status 415 instead of 404 when a route is not configured in the .proto file.

Description:
When I run Envoy as a grpc-json transcoder and GET a path that is not mapped by any http option, Envoy returns 415 with the body "Content-Type is missing from the request". If the route is mapped and the gRPC service returns NOT_FOUND, then Envoy correctly returns 404.

The expected behaviour is a 404 (or 400?) for unmapped paths.

Repro steps:
I came across this issue while writing a blog post about Envoy and gRPC transcoding. Please just focus on the CreateReservation rpc for reproduction.

Complete working example: transcoding-grpc-to-http-json

reservation_service.proto file:

syntax = "proto3";

package reservations.v1;

// Creates separate .java files for message and service
// instead of creating them inside the class defined by
// java_outer_classname
option java_multiple_files = true;

// Class that will contain descriptor
option java_outer_classname = "ReservationServiceProto";

// The package where the generated classes will reside
option java_package = "nl.toefel.reservations.v1";

// required to add annotations to the rpc calls
import "google/api/annotations.proto";
import "google/protobuf/empty.proto";

service ReservationService {

    rpc CreateReservation(CreateReservationRequest) returns (Reservation) {
        option (google.api.http) = {
            post: "/v1/reservations"
            body: "reservation"
        };
    }

    rpc GetReservation(GetReservationRequest) returns (Reservation) {
        // {id} is mapped into the GetReservationRequest.id field!
        option (google.api.http) = {
            get: "/v1/reservations/{id}"
        };
    }

    // lists all the reservations, use query params on venue or timestamp to filter the resultset.
    rpc ListReservations(ListReservationsRequest) returns (stream Reservation) {
        // use query parameter to specify filters, example: ?venue=UtrechtHomeoffice
        // these query parameters will be automatically mapped to the ListReservationsRequest object
        option (google.api.http) = {
            get: "/v1/reservations"
        };
    }

    rpc DeleteReservation(DeleteReservationRequest) returns (google.protobuf.Empty) {
        // {id} is mapped into the DeleteReservationRequest.id field!
        option (google.api.http) = {
            delete: "/v1/reservations/{id}"
        };
    }

}

message Reservation {
    string id = 1;
    string title = 2;
    string venue = 3;
    string room = 4;
    string timestamp = 5;
    repeated Person attendees = 6;
}

message Person {
    string ssn = 1;
    string firstName = 2;
    string lastName = 3;
}

message CreateReservationRequest {
    Reservation reservation = 2;
}

message CreateReservationResponse {
    Reservation reservation = 1;
}

message GetReservationRequest {
    string id = 1;
}

message ListReservationsRequest {
    string venue = 1;
    string timestamp = 2;
    string room = 3;

    Attendees attendees = 4;

    message Attendees {
        repeated string lastName = 1;
    }
}

message DeleteReservationRequest {
    string id = 1;
}

Envoy config

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }         

static_resources:
  listeners:
  - name: main-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 51051 }      
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: grpc_json
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }   
          http_filters:
          - name: envoy.grpc_json_transcoder
            # configuration docs: https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/http/transcoder/v2/transcoder.proto
            config:
              proto_descriptor: "/data/reservation_service_definition.pb"             
              services: ["reservations.v1.ReservationService"]                        
              print_options:
                add_whitespace: true
                always_print_primitive_fields: true
                always_print_enums_as_ints: false
                preserve_proto_field_names: false                                     
          - name: envoy.router

  clusters:
  - name: grpc-backend-services                  
    connect_timeout: 1.25s
    type: logical_dns
    lb_policy: round_robin
    dns_lookup_family: V4_ONLY
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: 127.0.0.1                       
        port_value: 53000

Generating the service descriptor

protoc -I. -Ibuild/extracted-include-protos/main --include_imports \
                --include_source_info \
                --descriptor_set_out=reservation_service_definition.pb \
                src/main/proto/reservation_service.proto

Running Envoy using Docker

docker run -it --rm --name envoy --network="host" \
       -v "$(pwd)/reservation_service_definition.pb:/data/reservation_service_definition.pb:ro" \
       -v "$(pwd)/envoy-config.yml:/etc/envoy/envoy.yaml:ro" \
       envoyproxy/envoy

Successful call

curl http://localhost:51051/v1/reservations/00b5561d-29d9-458a-818b-f40ac6ff0ca1 -v
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations/00b5561d-29d9-458a-818b-f40ac6ff0ca1 HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< content-type: application/grpc
< grpc-status: 5
< grpc-message: no reservation with id 00b5561d-29d9-458a-818b-f40ac6ff0ca1
< x-envoy-upstream-service-time: 7
< content-length: 0
< date: Sun, 11 Nov 2018 13:13:30 GMT
< server: envoy
< 

Call producing the wrong status code

curl http://localhost:51051/v1/reservations/00b5561d-29d9-458a-818b-f40ac6ff0ca1/2 -v
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations/00b5561d-29d9-458a-818b-f40ac6ff0ca1/2 HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 415 Unsupported Media Type
< content-type: text/plain; encoding=utf-8
< grpc-status: 13
< grpc-message: Content-Type is missing from the request
< x-envoy-upstream-service-time: 0
< date: Sun, 11 Nov 2018 13:28:18 GMT
< server: envoy
< transfer-encoding: chunked
< 
* Connection #0 to host localhost left intact
Content-Type is missing from the request

Admin and Stats Output:

/clusters

grpc-backend-services::default_priority::max_connections::1024
grpc-backend-services::default_priority::max_pending_requests::1024
grpc-backend-services::default_priority::max_requests::1024
grpc-backend-services::default_priority::max_retries::3
grpc-backend-services::high_priority::max_connections::1024
grpc-backend-services::high_priority::max_pending_requests::1024
grpc-backend-services::high_priority::max_requests::1024
grpc-backend-services::high_priority::max_retries::3
grpc-backend-services::added_via_api::false
grpc-backend-services::0.0.0.0:0::cx_active::5
grpc-backend-services::0.0.0.0:0::cx_connect_fail::0
grpc-backend-services::0.0.0.0:0::cx_total::5
grpc-backend-services::0.0.0.0:0::rq_active::0
grpc-backend-services::0.0.0.0:0::rq_error::0
grpc-backend-services::0.0.0.0:0::rq_success::6
grpc-backend-services::0.0.0.0:0::rq_timeout::0
grpc-backend-services::0.0.0.0:0::rq_total::6
grpc-backend-services::0.0.0.0:0::health_flags::healthy
grpc-backend-services::0.0.0.0:0::weight::1
grpc-backend-services::0.0.0.0:0::region::
grpc-backend-services::0.0.0.0:0::zone::
grpc-backend-services::0.0.0.0:0::sub_zone::
grpc-backend-services::0.0.0.0:0::canary::false
grpc-backend-services::0.0.0.0:0::success_rate::-1

/server_info

{
 "version": "7f1bbfaceb44c51e0c7734c7c0abe0afa00f39f6/1.9.0-dev/Clean/RELEASE",
 "state": "LIVE",
 "epoch": 0,
 "uptime_current_epoch": "4419s",
 "uptime_all_epochs": "4419s"
}

/stats

cluster.grpc-backend-services.bind_errors: 0
cluster.grpc-backend-services.circuit_breakers.default.cx_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_pending_open: 0
cluster.grpc-backend-services.circuit_breakers.default.rq_retry_open: 0
cluster.grpc-backend-services.circuit_breakers.high.cx_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_pending_open: 0
cluster.grpc-backend-services.circuit_breakers.high.rq_retry_open: 0
cluster.grpc-backend-services.external.upstream_rq_200: 3
cluster.grpc-backend-services.external.upstream_rq_2xx: 3
cluster.grpc-backend-services.external.upstream_rq_415: 4
cluster.grpc-backend-services.external.upstream_rq_4xx: 4
cluster.grpc-backend-services.external.upstream_rq_completed: 7
cluster.grpc-backend-services.http2.header_overflow: 0
cluster.grpc-backend-services.http2.headers_cb_no_stream: 0
cluster.grpc-backend-services.http2.rx_messaging_error: 0
cluster.grpc-backend-services.http2.rx_reset: 0
cluster.grpc-backend-services.http2.too_many_header_frames: 0
cluster.grpc-backend-services.http2.trailers: 0
cluster.grpc-backend-services.http2.tx_reset: 0
cluster.grpc-backend-services.lb_healthy_panic: 0
cluster.grpc-backend-services.lb_local_cluster_not_ok: 0
cluster.grpc-backend-services.lb_recalculate_zone_structures: 0
cluster.grpc-backend-services.lb_subsets_active: 0
cluster.grpc-backend-services.lb_subsets_created: 0
cluster.grpc-backend-services.lb_subsets_fallback: 0
cluster.grpc-backend-services.lb_subsets_removed: 0
cluster.grpc-backend-services.lb_subsets_selected: 0
cluster.grpc-backend-services.lb_zone_cluster_too_small: 0
cluster.grpc-backend-services.lb_zone_no_capacity_left: 0
cluster.grpc-backend-services.lb_zone_number_differs: 0
cluster.grpc-backend-services.lb_zone_routing_all_directly: 0
cluster.grpc-backend-services.lb_zone_routing_cross_zone: 0
cluster.grpc-backend-services.lb_zone_routing_sampled: 0
cluster.grpc-backend-services.max_host_weight: 0
cluster.grpc-backend-services.membership_change: 1
cluster.grpc-backend-services.membership_healthy: 1
cluster.grpc-backend-services.membership_total: 1
cluster.grpc-backend-services.original_dst_host_invalid: 0
cluster.grpc-backend-services.retry_or_shadow_abandoned: 0
cluster.grpc-backend-services.update_attempt: 892
cluster.grpc-backend-services.update_empty: 0
cluster.grpc-backend-services.update_failure: 0
cluster.grpc-backend-services.update_no_rebuild: 0
cluster.grpc-backend-services.update_success: 892
cluster.grpc-backend-services.upstream_cx_active: 5
cluster.grpc-backend-services.upstream_cx_close_notify: 0
cluster.grpc-backend-services.upstream_cx_connect_attempts_exceeded: 0
cluster.grpc-backend-services.upstream_cx_connect_fail: 0
cluster.grpc-backend-services.upstream_cx_connect_timeout: 0
cluster.grpc-backend-services.upstream_cx_destroy: 0
cluster.grpc-backend-services.upstream_cx_destroy_local: 0
cluster.grpc-backend-services.upstream_cx_destroy_local_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_destroy_remote: 0
cluster.grpc-backend-services.upstream_cx_destroy_remote_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_destroy_with_active_rq: 0
cluster.grpc-backend-services.upstream_cx_http1_total: 0
cluster.grpc-backend-services.upstream_cx_http2_total: 5
cluster.grpc-backend-services.upstream_cx_idle_timeout: 0
cluster.grpc-backend-services.upstream_cx_max_requests: 0
cluster.grpc-backend-services.upstream_cx_none_healthy: 0
cluster.grpc-backend-services.upstream_cx_overflow: 0
cluster.grpc-backend-services.upstream_cx_protocol_error: 0
cluster.grpc-backend-services.upstream_cx_rx_bytes_buffered: 638
cluster.grpc-backend-services.upstream_cx_rx_bytes_total: 1178
cluster.grpc-backend-services.upstream_cx_total: 5
cluster.grpc-backend-services.upstream_cx_tx_bytes_buffered: 0
cluster.grpc-backend-services.upstream_cx_tx_bytes_total: 1793
cluster.grpc-backend-services.upstream_flow_control_backed_up_total: 0
cluster.grpc-backend-services.upstream_flow_control_drained_total: 0
cluster.grpc-backend-services.upstream_flow_control_paused_reading_total: 0
cluster.grpc-backend-services.upstream_flow_control_resumed_reading_total: 0
cluster.grpc-backend-services.upstream_rq_200: 3
cluster.grpc-backend-services.upstream_rq_2xx: 3
cluster.grpc-backend-services.upstream_rq_415: 4
cluster.grpc-backend-services.upstream_rq_4xx: 4
cluster.grpc-backend-services.upstream_rq_active: 0
cluster.grpc-backend-services.upstream_rq_cancelled: 0
cluster.grpc-backend-services.upstream_rq_completed: 7
cluster.grpc-backend-services.upstream_rq_maintenance_mode: 0
cluster.grpc-backend-services.upstream_rq_pending_active: 0
cluster.grpc-backend-services.upstream_rq_pending_failure_eject: 0
cluster.grpc-backend-services.upstream_rq_pending_overflow: 0
cluster.grpc-backend-services.upstream_rq_pending_total: 0
cluster.grpc-backend-services.upstream_rq_per_try_timeout: 0
cluster.grpc-backend-services.upstream_rq_retry: 0
cluster.grpc-backend-services.upstream_rq_retry_overflow: 0
cluster.grpc-backend-services.upstream_rq_retry_success: 0
cluster.grpc-backend-services.upstream_rq_rx_reset: 0
cluster.grpc-backend-services.upstream_rq_timeout: 0
cluster.grpc-backend-services.upstream_rq_total: 7
cluster.grpc-backend-services.upstream_rq_tx_reset: 0
cluster.grpc-backend-services.version: 0
cluster_manager.active_clusters: 1
cluster_manager.cluster_added: 1
cluster_manager.cluster_modified: 0
cluster_manager.cluster_removed: 0
cluster_manager.cluster_updated: 0
cluster_manager.cluster_updated_via_merge: 0
cluster_manager.update_merge_cancelled: 0
cluster_manager.update_out_of_merge_window: 0
cluster_manager.warming_clusters: 0
filesystem.flushed_by_timer: 49
filesystem.reopen_failed: 0
filesystem.write_buffered: 11
filesystem.write_completed: 6
filesystem.write_total_buffered: 192
http.admin.downstream_cx_active: 1
http.admin.downstream_cx_delayed_close_timeout: 0
http.admin.downstream_cx_destroy: 1
http.admin.downstream_cx_destroy_active_rq: 0
http.admin.downstream_cx_destroy_local: 0
http.admin.downstream_cx_destroy_local_active_rq: 0
http.admin.downstream_cx_destroy_remote: 1
http.admin.downstream_cx_destroy_remote_active_rq: 0
http.admin.downstream_cx_drain_close: 0
http.admin.downstream_cx_http1_active: 1
http.admin.downstream_cx_http1_total: 2
http.admin.downstream_cx_http2_active: 0
http.admin.downstream_cx_http2_total: 0
http.admin.downstream_cx_idle_timeout: 0
http.admin.downstream_cx_overload_disable_keepalive: 0
http.admin.downstream_cx_protocol_error: 0
http.admin.downstream_cx_rx_bytes_buffered: 360
http.admin.downstream_cx_rx_bytes_total: 4004
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 2
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 28759
http.admin.downstream_cx_upgrades_active: 0
http.admin.downstream_cx_upgrades_total: 0
http.admin.downstream_flow_control_paused_reading_total: 0
http.admin.downstream_flow_control_resumed_reading_total: 0
http.admin.downstream_rq_1xx: 0
http.admin.downstream_rq_2xx: 5
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 6
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_completed: 11
http.admin.downstream_rq_http1_total: 12
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_idle_timeout: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_overload_close: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_too_large: 0
http.admin.downstream_rq_total: 12
http.admin.downstream_rq_tx_reset: 0
http.admin.downstream_rq_ws_on_non_ws_route: 0
http.admin.rs_too_large: 0
http.async-client.no_cluster: 0
http.async-client.no_route: 0
http.async-client.rq_direct_response: 0
http.async-client.rq_redirect: 0
http.async-client.rq_total: 0
http.grpc_json.downstream_cx_active: 0
http.grpc_json.downstream_cx_delayed_close_timeout: 0
http.grpc_json.downstream_cx_destroy: 7
http.grpc_json.downstream_cx_destroy_active_rq: 0
http.grpc_json.downstream_cx_destroy_local: 0
http.grpc_json.downstream_cx_destroy_local_active_rq: 0
http.grpc_json.downstream_cx_destroy_remote: 7
http.grpc_json.downstream_cx_destroy_remote_active_rq: 0
http.grpc_json.downstream_cx_drain_close: 0
http.grpc_json.downstream_cx_http1_active: 0
http.grpc_json.downstream_cx_http1_total: 7
http.grpc_json.downstream_cx_http2_active: 0
http.grpc_json.downstream_cx_http2_total: 0
http.grpc_json.downstream_cx_idle_timeout: 0
http.grpc_json.downstream_cx_overload_disable_keepalive: 0
http.grpc_json.downstream_cx_protocol_error: 0
http.grpc_json.downstream_cx_rx_bytes_buffered: 0
http.grpc_json.downstream_cx_rx_bytes_total: 1327
http.grpc_json.downstream_cx_ssl_active: 0
http.grpc_json.downstream_cx_ssl_total: 0
http.grpc_json.downstream_cx_total: 7
http.grpc_json.downstream_cx_tx_bytes_buffered: 0
http.grpc_json.downstream_cx_tx_bytes_total: 2358
http.grpc_json.downstream_cx_upgrades_active: 0
http.grpc_json.downstream_cx_upgrades_total: 0
http.grpc_json.downstream_flow_control_paused_reading_total: 0
http.grpc_json.downstream_flow_control_resumed_reading_total: 0
http.grpc_json.downstream_rq_1xx: 0
http.grpc_json.downstream_rq_2xx: 1
http.grpc_json.downstream_rq_3xx: 0
http.grpc_json.downstream_rq_4xx: 6
http.grpc_json.downstream_rq_5xx: 0
http.grpc_json.downstream_rq_active: 0
http.grpc_json.downstream_rq_completed: 7
http.grpc_json.downstream_rq_http1_total: 7
http.grpc_json.downstream_rq_http2_total: 0
http.grpc_json.downstream_rq_idle_timeout: 0
http.grpc_json.downstream_rq_non_relative_path: 0
http.grpc_json.downstream_rq_overload_close: 0
http.grpc_json.downstream_rq_response_before_rq_complete: 0
http.grpc_json.downstream_rq_rx_reset: 0
http.grpc_json.downstream_rq_too_large: 0
http.grpc_json.downstream_rq_total: 7
http.grpc_json.downstream_rq_tx_reset: 0
http.grpc_json.downstream_rq_ws_on_non_ws_route: 0
http.grpc_json.no_cluster: 0
http.grpc_json.no_route: 0
http.grpc_json.rq_direct_response: 0
http.grpc_json.rq_redirect: 0
http.grpc_json.rq_total: 7
http.grpc_json.rs_too_large: 0
http.grpc_json.tracing.client_enabled: 0
http.grpc_json.tracing.health_check: 0
http.grpc_json.tracing.not_traceable: 0
http.grpc_json.tracing.random_sampling: 0
http.grpc_json.tracing.service_forced: 0
listener.0.0.0.0_51051.downstream_cx_active: 0
listener.0.0.0.0_51051.downstream_cx_destroy: 7
listener.0.0.0.0_51051.downstream_cx_total: 7
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_1xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_2xx: 1
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_3xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_4xx: 6
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_5xx: 0
listener.0.0.0.0_51051.http.grpc_json.downstream_rq_completed: 7
listener.0.0.0.0_51051.no_filter_chain_match: 0
listener.admin.downstream_cx_active: 1
listener.admin.downstream_cx_destroy: 1
listener.admin.downstream_cx_total: 2
listener.admin.http.admin.downstream_rq_1xx: 0
listener.admin.http.admin.downstream_rq_2xx: 5
listener.admin.http.admin.downstream_rq_3xx: 0
listener.admin.http.admin.downstream_rq_4xx: 6
listener.admin.http.admin.downstream_rq_5xx: 0
listener.admin.http.admin.downstream_rq_completed: 11
listener.admin.no_filter_chain_match: 0
listener_manager.listener_added: 1
listener_manager.listener_create_failure: 0
listener_manager.listener_create_success: 8
listener_manager.listener_modified: 0
listener_manager.listener_removed: 0
listener_manager.total_listeners_active: 1
listener_manager.total_listeners_draining: 0
listener_manager.total_listeners_warming: 0
runtime.admin_overrides_active: 0
runtime.load_error: 0
runtime.load_success: 0
runtime.num_keys: 0
runtime.override_dir_exists: 0
runtime.override_dir_not_exists: 0
server.concurrency: 8
server.days_until_first_cert_expiring: 2147483647
server.hot_restart_epoch: 0
server.live: 1
server.memory_allocated: 4123264
server.memory_heap_size: 6291456
server.parent_connections: 0
server.total_connections: 0
server.uptime: 4459
server.version: 8330175
server.watchdog_mega_miss: 0
server.watchdog_miss: 0
stats.overflow: 0
cluster.grpc-backend-services.external.upstream_rq_time: P0(nan,0) P25(nan,2.075) P50(nan,8.025) P75(nan,9.025) P90(nan,10.3) P95(nan,10.65) P99(nan,10.93) P99.5(nan,10.965) P99.9(nan,10.993) P100(nan,11)
cluster.grpc-backend-services.upstream_cx_connect_ms: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,3.05) P95(nan,3.075) P99(nan,3.095) P99.5(nan,3.0975) P99.9(nan,3.0995) P100(nan,3.1)
cluster.grpc-backend-services.upstream_cx_length_ms: No recorded values
cluster.grpc-backend-services.upstream_rq_time: P0(nan,0) P25(nan,2.075) P50(nan,8.025) P75(nan,9.025) P90(nan,10.3) P95(nan,10.65) P99(nan,10.93) P99.5(nan,10.965) P99.9(nan,10.993) P100(nan,11)
http.admin.downstream_cx_length_ms: P0(nan,220000) P25(nan,222500) P50(nan,225000) P75(nan,227500) P90(nan,229000) P95(nan,229500) P99(nan,229900) P99.5(nan,229950) P99.9(nan,229990) P100(nan,230000)
http.admin.downstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
http.grpc_json.downstream_cx_length_ms: P0(nan,1) P25(nan,2.075) P50(nan,9.01667) P75(nan,9.075) P90(nan,14.3) P95(nan,14.65) P99(nan,14.93) P99.5(nan,14.965) P99.9(nan,14.993) P100(nan,15)
http.grpc_json.downstream_rq_time: P0(nan,0) P25(nan,2.075) P50(nan,8.05) P75(nan,9.0625) P90(nan,14.3) P95(nan,14.65) P99(nan,14.93) P99.5(nan,14.965) P99.9(nan,14.993) P100(nan,15)
listener.0.0.0.0_51051.downstream_cx_length_ms: P0(nan,1) P25(nan,2.075) P50(nan,9.01667) P75(nan,9.075) P90(nan,14.3) P95(nan,14.65) P99(nan,14.93) P99.5(nan,14.965) P99.9(nan,14.993) P100(nan,15)
listener.admin.downstream_cx_length_ms: P0(nan,220000) P25(nan,222500) P50(nan,225000) P75(nan,227500) P90(nan,229000) P95(nan,229500) P99(nan,229900) P99.5(nan,229950) P99.9(nan,229990) P100(nan,230000)


Config:

(Same as the Envoy config shown above.)

Logs:

./start-envoy.sh 
Envoy will run at port 51051 (see envoy-config.yml)
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:203] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:205] statically linked extensions:
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:207]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:210]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:213]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:216]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:218]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:220]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:223]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-11 13:12:42.746][000008][info][main] [source/server/server.cc:226]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-11-11 13:12:42.748][000008][info][main] [source/server/server.cc:268] admin address: 0.0.0.0:9901
[2018-11-11 13:12:42.749][000008][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-11-11 13:12:42.749][000008][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2018-11-11 13:12:42.753][000008][info][upstream] [source/common/upstream/cluster_manager_impl.cc:136] cm init: all clusters initialized
[2018-11-11 13:12:42.753][000008][info][config] [source/server/configuration_impl.cc:61] loading 1 listener(s)
[2018-11-11 13:12:42.756][000008][info][config] [source/server/configuration_impl.cc:94] loading tracing configuration
[2018-11-11 13:12:42.756][000008][info][config] [source/server/configuration_impl.cc:112] loading stats sink configuration
[2018-11-11 13:12:42.756][000008][info][main] [source/server/server.cc:426] all clusters initialized. initializing init manager
[2018-11-11 13:12:42.756][000008][info][config] [source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2018-11-11 13:12:42.756][000008][info][main] [source/server/server.cc:454] starting main dispatch loop
[2018-11-11 13:27:42.756][000008][info][main] [source/server/drain_manager_impl.cc:63] shutting down parent after drain


@mattklein123 mattklein123 added the question Questions that are neither investigations, bugs, nor enhancements label Nov 11, 2018
@mattklein123
Member

@lizan

@lizan
Member

lizan commented Nov 11, 2018

cluster.grpc-backend-services.upstream_rq_415: 4

The 415 is returned from the upstream: the gRPC-JSON transcoder doesn't modify unmapped requests, so they go directly to the upstream, and it is up to the upstream to decide which error code to return.
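For illustration only (a standalone sketch, not grpc-java's actual code — the class and method names here are made up): the upstream's rejection comes down to a gRPC content-type check along these lines, which a plain-HTTP GET passed through by Envoy fails:

```java
public class GrpcContentTypeCheck {
    // Hypothetical sketch of the content-type validation a gRPC server
    // applies to incoming requests; grpc-java answers with HTTP 415 and
    // "Content-Type is missing from the request" when it fails.
    static boolean isGrpcContentType(String contentType) {
        return contentType != null
            && (contentType.equals("application/grpc")
                || contentType.startsWith("application/grpc+"));
    }

    public static void main(String[] args) {
        // A request transcoded by Envoy carries the gRPC content type.
        if (!isGrpcContentType("application/grpc")) throw new AssertionError();
        if (!isGrpcContentType("application/grpc+proto")) throw new AssertionError();
        // An unmapped curl GET passed through unmodified carries no such
        // header, so the backend rejects it with 415 Unsupported Media Type.
        if (isGrpcContentType(null)) throw new AssertionError();
        if (isGrpcContentType("text/plain")) throw new AssertionError();
        System.out.println("ok");
    }
}
```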

@lizan
Copy link
Member

lizan commented Nov 11, 2018

If you don't want unmodified requests to be sent to the upstream, add grpc: {} to your route match; unmapped requests will then result in a 404 (no route found).
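Applied to the route_config from the report above, this would look roughly like the following (an illustrative fragment, not a complete config):

```yaml
routes:
- match:
    prefix: "/"
    # Only match requests that are gRPC (content-type: application/grpc).
    # Plain HTTP requests the transcoder did not map then fall through
    # and get a 404 from the router instead of reaching the backend.
    grpc: {}
  route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }
```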

@toefel18
Author

toefel18 commented Nov 11, 2018

I tested it and it works, thanks :). I added it here:

            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" , grpc: {} }
                route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }   

I found docs here:
https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/route/route.proto#envoy-api-msg-route-routematch-grpcroutematchoptions

@toefel18
Author

I have another question that might be related, but it might belong in a separate issue (I can create one):

If the rpc returns a stream and I raise an error, the response is always 200 with body [] regardless of the error or exception thrown:

.proto

    // lists all the reservations, use query params on venue or timestamp to filter the resultset.
    rpc ListReservations(ListReservationsRequest) returns (stream Reservation) {
        // use query parameter to specify filters, example: ?venue=UtrechtHomeoffice
        // these query parameters will be automatically mapped to the ListReservationsRequest object
        option (google.api.http) = {
            get: "/v1/reservations"
        };
    }

Java implementation:

    @Override
    public void listReservations(ListReservationsRequest request, StreamObserver<Reservation> responseObserver) {
            responseObserver.onError(Status.UNAUTHENTICATED.asRuntimeException());
    }

Curl:

curl http://localhost:51051/v1/reservations -v
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 51051 (#0)
> GET /v1/reservations HTTP/1.1
> Host: localhost:51051
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/grpc
< grpc-status: 16
< x-envoy-upstream-service-time: 108
< date: Sun, 11 Nov 2018 20:21:28 GMT
< server: envoy
< transfer-encoding: chunked
< 
* Connection #0 to host localhost left intact
[]

@toefel18
Copy link
Author

Moved the last question to a separate issue: #5011
