
Upgrade all NuGet bits, and fix issues. And address #287 #294

Closed
wants to merge 5 commits

Conversation

@IEvangelist (Owner) commented Aug 2, 2022

@mumby0168 (Collaborator) left a comment

Just a few questions for this one. It looks great, and I'm looking forward to having the async enumerable API.

It might be me, but did this PR mention fixing the in-memory repository bug? I couldn't spot the code for that.


_logger.LogQueryConstructed(query);

IEnumerable<TItem>? items = await GetByQueryAsync(
Collaborator:

Would this implementation, using a non-async-enumerable approach, load all the records (let's say 10 pages) into memory before starting to stream them back to the client? Would we want to stream the first page back to the caller as soon as it's available to us?
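
For context, a minimal sketch of the streaming behavior this comment is asking about, assuming the Microsoft.Azure.Cosmos v3 FeedIterator API (the method name, the pageSize default, and the surrounding class are illustrative, not the PR's actual code):

using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using Microsoft.Azure.Cosmos;

public static class QueryStreamingSketch
{
    // Yield each item as soon as its page arrives, instead of buffering
    // every page into a single IEnumerable first.
    public static async IAsyncEnumerable<TItem> StreamByQueryAsync<TItem>(
        Container container,
        QueryDefinition query,
        int pageSize = 25,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        using FeedIterator<TItem> iterator = container.GetItemQueryIterator<TItem>(
            query,
            requestOptions: new QueryRequestOptions { MaxItemCount = pageSize });

        while (iterator.HasMoreResults)
        {
            // One service round trip per page of up to pageSize items.
            FeedResponse<TItem> page = await iterator.ReadNextAsync(cancellationToken);
            foreach (TItem item in page)
            {
                yield return item;
            }
        }
    }
}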

Owner Author:

Yeah, that is a really good callout. I was trying to think of a more appropriate approach. Perhaps I should refactor this to:

  • Get the count first.
  • Using the count determine a reasonable page size.
  • Page over the data, then for each page yield it back.

That's probably better, right? Or do you have another idea?
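
A hedged sketch of those three steps, assuming this runs inside an async iterator with a Microsoft.Azure.Cosmos Container named container in scope (the clamp bounds are arbitrary placeholders, not a recommendation):

using System;
using System.Linq;
using Microsoft.Azure.Cosmos;

// 1. Get the count first (note the RU cost discussed below).
QueryDefinition countQuery = new("SELECT VALUE COUNT(1) FROM c");
using FeedIterator<int> countIterator =
    container.GetItemQueryIterator<int>(countQuery);
int total = (await countIterator.ReadNextAsync()).Single();

// 2. Use the count to pick a reasonable page size.
int pageSize = Math.Clamp(total / 10, 25, 1_000);

// 3. Page over the data with MaxItemCount = pageSize, yielding each page
//    back as it is read (as in the streaming sketch above).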

Collaborator:

Yeah, that could work, but I'm pretty sure running a count query across a large data set can rack up a hefty RU bill. @robertbennett1998, did you have the docs or article we read around this?

I'm wondering whether we should allow the caller to specify their chunk size; we could assume a reasonable default, maybe?
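
One hedged way to expose that is sketched below; the interface and member names are hypothetical, not the library's actual API surface:

using System.Collections.Generic;
using System.Threading;
using Microsoft.Azure.Cosmos;

public interface IStreamingRepository<TItem>
{
    // chunkSize maps to QueryRequestOptions.MaxItemCount; the default is a
    // guess at a "reasonable" value that callers can override.
    IAsyncEnumerable<TItem> PageByQueryAsync(
        QueryDefinition query,
        int chunkSize = 25,
        CancellationToken cancellationToken = default);
}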

Collaborator:

Hey, it has been a while, but I found some threads discussing count before, and it seemed to be quite an expensive operation; we probably shouldn't run it without the user explicitly knowing about it. This is the closest thread I can find at the moment:
https://stackoverflow.com/questions/71958612/how-to-use-where-count-in-cosmos-db
But there was one that dug into it a bit deeper.

Owner Author:

I'm asking one of the PMs on the team; I'm not sure if there is any way to avoid this. Perhaps we just omit the count? /CC @jcocchi

Collaborator:

Could the continuation token being null let you know there are no more pages?
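
A hedged sketch of that idea, assuming the v3 SDK (the method shape is illustrative): read one page, hand the continuation token back to the caller, and treat a null token as the end of the result set, so no count query is ever needed.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ContinuationPagingSketch
{
    public static async Task<(IReadOnlyList<TItem> Items, string? Continuation)>
        ReadPageAsync<TItem>(
            Container container,
            QueryDefinition query,
            string? continuationToken,
            int pageSize = 25)
    {
        // Resume from the caller's token (null means start from the beginning).
        using FeedIterator<TItem> iterator = container.GetItemQueryIterator<TItem>(
            query,
            continuationToken,
            new QueryRequestOptions { MaxItemCount = pageSize });

        FeedResponse<TItem> page = await iterator.ReadNextAsync();

        // A null ContinuationToken means this was the last page.
        return (page.ToList(), page.ContinuationToken);
    }
}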

Collaborator:

Any news on this?

Owner Author:

I was trying to ask this question of @jcocchi. Let's ask her here and see if this additional context helps make the question clearer.

@IEvangelist IEvangelist marked this pull request as draft October 21, 2022 13:07
@IEvangelist IEvangelist closed this Jan 6, 2023
3 participants