Question/Request - lambda BatchInvoke limit #51

Closed
stefangomez opened this issue Sep 17, 2019 · 35 comments
Labels
feature-request New feature or request

Comments

@stefangomez

I can't seem to find any documentation on the limit that AppSync imposes on how many events get passed to BatchInvoke. My questions:

  1. Is the limit still 5?
  2. Is there a plan to increase the limit either by default or by configuration/billing?
  3. Can you please document this?

I understand that for most use cases it'll bring N+1 queries down substantially, but it still has an (N/5)+1 problem... which can become problematic at some point.

It'd be nice to have this documented somewhere, because I'm sure there are plenty of people who want to use BatchInvoke but can't or won't without knowing this first.

I'm fine if it's 5, and will stay 5 forever, but some transparency on the actual limit and plan would be appreciated. For some, and specifically for me, it's not a blocker, but knowing helps us determine the best way to write the resolver. For example, if it's a nested relationship that queries a DB, one might decide to write that query for nested elements in the top-level resolver depending on document size/memory reqs, etc. Depending on the situation a more expensive/complex 2-query call in a resolver could be better off than a cheaper/simpler (N/5)+1 call...but we need to know what we're working with! Thanks :)
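For readers who haven't written one yet, here is a minimal sketch of what a BatchInvoke Lambda resolver looks like: AppSync delivers the batched field invocations to the function as a list, and the function returns one result per event, in the same order. The field name `authorId` and the in-memory lookup below are illustrative placeholders, not details from this thread; a real handler would do one database round trip for the whole batch.

```python
def handler(event, context):
    # With BatchInvoke, `event` is a LIST of resolver invocations
    # (at the time of this thread, capped at 5 per batch by AppSync).
    # Each entry carries the source object whose field we are resolving.
    author_ids = [e["source"]["authorId"] for e in event]

    # Stand-in for a single batched DB query keyed by author id.
    authors = {aid: {"id": aid, "name": f"author-{aid}"} for aid in author_ids}

    # Results must come back in the same order as the incoming events.
    return [authors.get(e["source"]["authorId"]) for e in event]
```

The key point for the cost discussion above: whatever the batch size cap is, the handler is invoked once per batch, so a list of N children costs ceil(N / batch_size) invocations rather than N.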

@katzeforest

Thanks for using AppSync! To answer your questions:

  1. The limit is 5.
  2. There are many optimizations done internally in the service, including parallel optimizations at appropriate times depending on different conditions. We are actively looking into features like this to benefit customers. I will +1 this feature request to the team on your behalf.
  3. Thank you for the feedback and we will document it.

@fgiarritiello

+1

1 similar comment
@adamjq

adamjq commented Apr 6, 2020

+1

@3nvi

3nvi commented Apr 8, 2020

Out of curiosity, is there a public roadmap available somewhere? It would be great to know if this is something that will be coming in the near future or far down the line.

Thanks!

@markoff-s

+1

@gjimenezv

A complete year has passed and we don't have any response; the entire internet knows nothing about this topic. Is there any hope?

@Ricardo1980

You have to be more annoying (that is my experience). Use Discord and Twitter and tag AWS engineers.

@LMulvey

LMulvey commented Nov 25, 2020

I've received a response on AWS Support Forums before indicating that they do not have any public roadmap, sadly. That was last year but I doubt it has changed.

@collinglass

+1

2 similar comments
@arpitchaudhary

+1

@photomz

photomz commented Feb 15, 2021

+1

@milindkhurd

+1

This impacts my design and query performance, and explodes cost. AppSync is only good for POCs, not real-world applications.

@MatteoGranziera

+1

@drwxmrrs

drwxmrrs commented Apr 19, 2021

+1 - would love to see this implemented. Tossing up rolling our own in Apollo until this is resolved :(

@dranes

dranes commented Apr 30, 2021

This is a very big issue for us too. We are hitting too many connections in our database too fast, and the limit of 5 is ridiculous.

@metalc0der

We are having the same problem here. We were so excited about AppSync, but this makes it impossible for us to use. Besides, the limit of 5 goes against the main purpose of BatchInvoke requests in the first place.

@jpignata jpignata added the feature-request New feature or request label Jun 7, 2021
@sebastiansanio

Any news about this? It's a very big issue for us too.

@nacho8

nacho8 commented Jun 24, 2021

+1

@leandrosalo

This is so frustrating

@mutasemhidmi

mutasemhidmi commented Aug 1, 2021

When will this be fixed? It's nearly two years, and this still hasn't changed. This is so frustrating.

@kylevillegas93

+1 - if there is no plan to increase the limit, we are planning to migrate over to REST APIs and spin up our own Apollo GraphQL server, which is a substantial undertaking. A simple yes or no on this issue would help us make better decisions when implementing our software.

AWS AppSync Team - your lack of follow-up on this issue and lack of transparency of your roadmap clearly goes against Amazon's leadership principle of customer obsession and rather shows a disinterest in customer feature requests. Please consider prioritizing your customers first. Thanks!

@gastonsilva

+1

@onlybakam
Contributor

Hey all, thank you for the feedback on batching. It would be great to get your feedback on this:

  • do you have use cases where you need to dynamically configure the batch size for your lambda resolver (i.e.: in VTL)?
  • are you using Direct Lambda resolvers (DLR)? would it be helpful to configure your DLR to batch (i.e.: no VTL)?

@sebastiansanio

Hi Brice, thanks for the response.
I'll answer for our case:

  • it would be good to have, but not necessary. We paginate at the query level, so a fixed limit much higher than the current one (for example, 100) would greatly improve our solution.
  • we are using Direct lambda resolvers, so yes, it would be helpful to be able to configure the DLR to batch without VTL.

@johansteffner

johansteffner commented Sep 23, 2021

Hi @onlybakam

We are using lambda resolvers with VTL and it would be really nice to be able to configure the batch size.

@kylevillegas93

@onlybakam we are using DLR and it would be very useful to configure the DLR to batch more than the current limit of 5.

@DrWorkhard

@onlybakam A dynamically configured batch size would be helpful. E.g., right now we fetch new elements for our feed in batches of seven, so if we could set the batch resolver's batch size to 7, it would perfectly match our internal batch size.

@mutasemhidmi

Any update for this?

@thdxr

thdxr commented Dec 16, 2021

@onlybakam I'll take whichever is easier; we really, really need this to be a higher number.

@rraat

rraat commented Dec 16, 2021

I don't think a configurable limit would be a requirement; a batch size of 100 would suffice in many cases.
If one needed a smaller batch size, the parent could simply return less data.

@mutasemhidmi

Need this feature

@ndejaco2

ndejaco2 commented Jan 6, 2022

Hi all, this enhancement has finally been released:

https://aws.amazon.com/blogs/mobile/introducing-configurable-batching-size-for-aws-appsync-lambda-resolvers/

@yvele

yvele commented Jan 12, 2022

@ndejaco2 cool thank you!

Is this already supported in CloudFormation?

There is nothing in AWS::AppSync::Resolver such as maxBatchSize 🤔

I tried in the request mapping template but that doesn't work:

```
{
  "version": "2018-05-29",
  "operation": "BatchInvoke",
  "maxBatchSize": 50, ## <-- does NOT work 🤷🏻‍♂️
  "payload": {
    "context": $utils.toJson($context)
  }
}
```


@ndejaco2

Yes, CloudFormation is supported. The property name is MaxBatchSize and it can be set on AWS::AppSync::Resolver and AWS::AppSync::FunctionConfiguration. The official CloudFormation doc update will be out this week.
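To make that concrete, a minimal CloudFormation fragment showing where MaxBatchSize sits on a resolver might look like the following (the logical IDs, type/field names, and data source name are placeholders, not values from this thread):

```yaml
PostAuthorResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !GetAtt GraphQLApi.ApiId   # placeholder API resource
    TypeName: Post                    # placeholder type
    FieldName: author                 # placeholder field
    DataSourceName: LambdaDataSource  # placeholder Lambda data source
    MaxBatchSize: 50                  # batch up to 50 events per BatchInvoke
```

Note that MaxBatchSize is a resolver property, not a key inside the request mapping template, which is why setting maxBatchSize in the VTL template above has no effect.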

@yvele

yvele commented Jan 12, 2022

Thank you!
