
Can't create chaining of responses #194

Closed
sergiofilhowz opened this issue Feb 4, 2019 · 10 comments

@sergiofilhowz

According to the docs:
https://www.krakend.io/docs/endpoints/sequential-proxy/

I have the following config:

{
  "version": 2,
  "extra_config": {
    "github_com/devopsfaith/krakend-cors": {
      "allow_methods": [
        "GET",
        "HEAD",
        "POST",
        "PUT",
        "DELETE",
        "OPTIONS",
        "PATCH"
      ],
      "allow_credentials": false,
      "allow_headers": [
        "Accept",
        "Content-Type",
        "Content-Length",
        "Accept-Encoding",
        "Authorization"
      ],
      "expose_headers": [
        "Location"
      ],
      "max_age": "3600s"
    },
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "60s",
      "proxy_disabled": false,
      "router_disabled": false,
      "backend_disabled": false,
      "endpoint_disabled": false,
      "listen_address": ":8090"
    }
  },
  "timeout": "3000ms",
  "cache_ttl": "300s",
  "output_encoding": "json",
  "name": "Payments",
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/provider/{uuid}",
      "method": "GET",
      "extra_config": {
        "github.com/devopsfaith/krakend/proxy": {
          "sequential": true
        }
      },
      "output_encoding": "json",
      "concurrent_calls": 1,
      "backend": [
        {
          "url_pattern": "/provider/{uuid}",
          "encoding": "json",
          "sd": "static",
          "host": [
            "http://docker.for.mac.localhost:8081"
          ],
          "disable_host_sanitize": false,
          "blacklist": [],
          "whitelist": [
            "name",
            "phone",
            "wallet_id",
            "type",
            "uuid"
          ],
          "is_collection": false,
          "target": ""
        },
        {
          "url_pattern": "/recipient/{resp0_wallet_id}",
          "encoding": "json",
          "sd": "static",
          "host": [
            "http://docker.for.mac.localhost:8082"
          ],
          "disable_host_sanitize": false,
          "target": ""
        }
      ]
    }
  ]
}

When I start the service I get the following message:

ERROR parsing the configuration file: Undefined output param [resp0_wallet_id]! input: map[uuid:<nil>], output: [resp0_wallet_id]
@kpacha kpacha self-assigned this Feb 4, 2019
@kpacha kpacha added the bug label Feb 4, 2019
@kpacha kpacha added this to the 0.8 milestone Feb 4, 2019
@kpacha
Member

kpacha commented Feb 4, 2019

hi @sergiofilhowz

You're right, it looks like we missed something at the integration step... thanks for the heads up!

@sergiofilhowz
Author

Hey @kpacha

Thank you for the quick answer!

BTW, I'm really enjoying this project and would like to contribute some features:

  1. Implement cache management (to plug in Redis)
  2. Implement fallbacks (return a preconfigured fallback when a step in the response chain fails)
  3. Implement request chaining to create objects across multiple microservices (with rollback policies)
  4. Implement response chaining on arrays (some endpoints must consume data from multiple microservices; this has to be paired with cache management to avoid hurting performance)
  5. Response chaining should have an option to put a response from a service inside a property (for example, with the config I showed above, I want the /recipient response to be inside a "recipient" property in the body)
  6. Healthcheck

@kpacha
Member

kpacha commented Feb 4, 2019

The fix for the bug has been merged into master and will be included in the next release.

Regarding your last comment:

  1. Implement cache management (to plug in Redis)

Please, take a look at krakend/krakend-httpcache#1
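As a sketch of what that linked issue discusses, a per-backend in-memory cache could be enabled by declaring the krakend-httpcache namespace in the backend's extra_config. The namespace string and the empty-object configuration shown here are assumptions based on that repository, not syntax confirmed in this thread:

```json
{
  "url_pattern": "/provider/{uuid}",
  "encoding": "json",
  "host": ["http://docker.for.mac.localhost:8081"],
  "extra_config": {
    "github.com/devopsfaith/krakend-httpcache": {}
  }
}
```

Note that this would cache responses in the gateway's own memory; a Redis-backed store would still need the pluggable backend discussed in the linked issue.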

  2. Implement fallbacks (return a preconfigured fallback when a step in the response chain fails)

I think you already have that feature with the static backends. Here you have a simple example: https://github.com/devopsfaith/krakend/blob/master/test/krakend.json#L9-L30
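To illustrate that approach, the endpoint-level static proxy config can inject predefined data into the response when backends fail. The "errored" strategy and the data payload below are illustrative values for the config from this thread, not an excerpt from the linked example:

```json
"extra_config": {
  "github.com/devopsfaith/krakend/proxy": {
    "static": {
      "strategy": "errored",
      "data": {
        "recipient": { "status": "unavailable" }
      }
    }
  }
}
```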

  3. Implement request chaining to create objects across multiple microservices (with rollback policies)

I'd say that's the main feature of the KrakenD framework. Can you point out what you think is missing (https://www.krakend.io/docs/endpoints/response-manipulation/)?

  4. Implement response chaining on arrays (some endpoints must consume data from multiple microservices; this has to be paired with cache management to avoid hurting performance)

We'd love to get new ideas regarding array manipulation at the proxy level. It has been a known limitation and there are several related issues. You can open a new issue with your thoughts and/or proposal, or start a conversation in our Slack channel.

  5. Response chaining should have an option to put a response from a service inside a property (for example, with the config I showed above, I want the /recipient response to be inside a "recipient" property in the body)

It is already possible with the grouping feature (https://www.krakend.io/docs/endpoints/response-manipulation/#grouping). Sadly, this feature doesn't play nicely with sequential backends, because of the constraints the latter places on response-param extraction.
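For illustration, grouping is declared per backend with a "group" key, so the second backend from the config above could (if it were not part of a sequential pipeline) nest its response under a "recipient" property. This snippet is a hedged sketch based on the grouping docs, not a verified working config:

```json
{
  "url_pattern": "/recipient/{resp0_wallet_id}",
  "encoding": "json",
  "host": ["http://docker.for.mac.localhost:8082"],
  "group": "recipient"
}
```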

  6. Healthcheck

Please, check #181.

Again, thanks for the heads up

@kpacha kpacha closed this as completed Feb 4, 2019
@sergiofilhowz
Author

Hey @kpacha , thanks for all the support

I'd say that's the main feature of the KrakenD framework. Can you point out what you think is missing (https://www.krakend.io/docs/endpoints/response-manipulation/)?

I couldn't create a chain on POST endpoints; is that possible? In my current solution,
I have two services, and creating a provider chains first to the recipient service and then to the provider service with the result of the first request. If the creation of the provider fails, I have to roll back the creation of the recipient.

We'd love to get new ideas regarding array manipulation at the proxy level. It has been a known limitation and there are several related issues. You can open a new issue with your thoughts and/or proposal, or start a conversation in our Slack channel.

Of course, I'll start a discussion about that soon.

@kpacha
Member

kpacha commented Feb 5, 2019

TL;DR: Using multiple backends for methods other than GET is not supported because of all the possible transaction-related implications. Orchestrating distributed transactions in a single endpoint requires deep knowledge of the business rules involved in every use case represented by the given endpoint.

RFC 7231 describes the GET method as safe and defines the POST method as non-idempotent.

You already pointed out the need for some kind of transaction management for the POST method but, to make things worse, even with PUT, PATCH or DELETE operations there is a transaction involved, and it is a hard problem to solve because all these methods represent state changes: they have side effects that should be reversed.

Imagine an endpoint accepting a single PUT request and sending it to 3 different backends. One could argue that we should just return an OK if we got 3 OKs and a KO otherwise, but that approach would ignore the actual complexities of the backend services. For example, they could be sending messages or event notifications to some unknown interested collector(s) that will react to them in unknown ways. With that in mind, we should solve some questions in a very general way, suitable for inclusion in a framework, before we start digging into the code. Off the top of my head, these could be the top 5:

  • What should the gateway do after a single backend failure? In other words: how do we roll back the successful backend request and be sure that future requests won't collide with the (hopefully) removed one?
  • How should the gateway handle a rollback error?
  • How far can we take a cascading rollback (with and without errors) without adding tons of complexity to the merger?
  • How can we keep the error ratio of the endpoint lower than or equal to the error ratio of the worst backend?
  • How can we keep the response time distribution of the endpoint lower than or equal to that of the slowest backend?

Maybe, if we enforce some restrictions on the feature (like allowing it only on endpoints using the sequential backend feature), some of these problems go away, so I am eager to hear any proposals.

If it's ok with you, and in order to avoid polluting this issue, I'd suggest moving this conversation to the KrakenD Slack channel or to a new issue.

cheers!

@sergiofilhowz
Author

@kpacha I don't have access to the Slack and couldn't create an account; it's asking me to contact an administrator. Can you please create an account for me? sergiofilhow@gmail.com

@kpacha
Member

kpacha commented Feb 5, 2019

Here you have the link to the invite form: https://invite.slack.golangbridge.org/

@nlappe

nlappe commented Aug 24, 2020

@kpacha
It took me about 4.5 hours of googling to find this issue, and you answered my hours-long journey with:

Using multiple backends for methods other than GET is not supported because all the possible transaction-related implications.

I tried to implement a POST endpoint with 3 backends (my client sends a single request with a JSON body that should be split and distributed to 3 independent backends), but I got nothing but strange 404s as soon as I added more than one backend.

This info needs to be in the docs (I didn't find it in the backend or endpoint category, but I could be blind; it's the middle of the night right now), as the generated error (404) doesn't fit the situation and is impossible to trace. Also, this GitHub issue doesn't turn up easily on Google. I have GET routes with multiple backends that work fine, so it's a natural assumption that the same thing works just as well with a POST.

Anyway, if this info already exists and I'm just blind, sorry for bothering. Otherwise, it would be great to have this (important) piece of information easily available. :)

Thanks for this wonderful project ❤️

@jose-lpa

jose-lpa commented Jun 9, 2021

Almost a year after the last comment, I faced the same situation. It took me hours as well to find the response from @kpacha, which makes a lot of sense and clarifies everything.

Just wanted to say that, as @nlappe said before, this information should be in the docs. At least a warning in the Response Manipulation docs, where the "merging" operation is described, would be great.

And also, many thanks for this project and for making it OSS. Cheers 🙂

@github-actions

github-actions bot commented Apr 6, 2022

This issue was marked as resolved a long time ago and now has been automatically locked as there has not been any recent activity after it. You can still open a new issue and reference this link.

@github-actions github-actions bot added the locked label Apr 6, 2022
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 6, 2022