
FEAT-#5394: Reduce amount of remote calls for TreeReduce and GroupByReduce operators #7245

Merged (3 commits) on May 14, 2024

Conversation

@Retribution98 (Collaborator) commented May 8, 2024:

Apply the approaches from PR #7136 to the TreeReduce and GroupByReduce operators.

What do these changes do?

  • first commit message and PR title follow format outlined here

    NOTE: If you edit the PR title to match this format, you need to add another commit (even if it's empty) or amend your last commit for the CI job that checks the PR title to pick up the new PR title.

  • passes flake8 modin/ asv_bench/benchmarks scripts/doc_checker.py
  • passes black --check modin/ asv_bench/benchmarks scripts/doc_checker.py
  • signed commit with git commit -s
  • Resolves Reduce amount of remote calls for square-like dataframes #5394
  • tests added and passing
  • module layout described at docs/development/architecture.rst is up-to-date

Commit: FEAT-#5394: Reduce amount of remote calls for TreeReduce and GroupByReduce operators

Signed-off-by: Kirill Suvorov <kirill.suvorov@intel.com>
@anmyachev (Collaborator) left a comment:

@Retribution98 do you have any performance numbers?

It's also a good idea to add tests for the new operators, which now work a little differently.

@@ -2205,46 +2205,12 @@ def map(
        PandasDataframe
            A new dataframe.
        """
        if self.num_parts <= 1.5 * CpuCount.get():
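In other words, the diff gates the per-partition path on the size of the partition grid relative to the CPU count. A minimal, self-contained sketch of that heuristic (the function name and returned labels here are illustrative, not Modin's actual API):

```python
import os


def choose_map_strategy(num_row_parts: int, num_col_parts: int) -> str:
    """Restate the `num_parts <= 1.5 * CpuCount` check from the diff."""
    num_parts = num_row_parts * num_col_parts
    cpu_count = os.cpu_count() or 1
    if num_parts <= 1.5 * cpu_count:
        # Few partitions relative to cores: one remote call per partition.
        return "per-partition map"
    # Many partitions (e.g. a square 112 x 112 grid): combine them into
    # full-axis partitions so the number of remote calls stays bounded.
    return "full-axis map"


# On a 112-CPU machine (as in the benchmark later in the thread), a
# (112, 1) grid takes the per-partition path, while (112, 112) does not.
print(choose_map_strategy(112, 1))
print(choose_map_strategy(112, 112))
```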
Collaborator:
Why was the implementation moved to a lower level?

Retribution98 (author):
It was moved here to avoid duplicating logic; map_partitions in the partition manager is only used in these cases.

Collaborator:
Previously, under some condition, your implementation was also used in place of the self._partition_mgr_cls.lazy_map_partitions function, but now it is not. Is that intended?

Collaborator:
tree_reduce and groupby_reduce call map_partitions at the partition manager level; that's why @Retribution98 moved the logic there, I guess.

Retribution98 (author):
Yes, I see that. I think the lazy map is better in a lazy pipeline because some partitions may never be computed further, so this issue is not relevant in that case.

Retribution98 (author):

> @Retribution98 do you have any performance numbers?

@anmyachev
This case is similar to the previous PR, so we can expect the same performance.

df.count timings on 112 CPUs (seconds):

partition shape    main        this PR
(112, 1)           0.202289    0.19788
(12544, 1)         13.67759    10.99517
(112, 112)         4.544378    1.760422
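For reference, a rough way to reproduce this kind of measurement; the frame shape below is an assumption (the thread only reports partition grids), while NPartitions is Modin's real config option for the target partition count along an axis:

```python
import time

import numpy as np

import modin.pandas as pd
from modin.config import NPartitions

NPartitions.put(112)  # target a 112-wide partition grid

df = pd.DataFrame(np.random.rand(100_000, 5_000))
df.count()  # warm-up: triggers worker startup and any deferred work

t0 = time.perf_counter()
df.count()
print(f"df.count: {time.perf_counter() - t0:.3f} s")
```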

Retribution98 (author):

> It's also a good idea to add tests for the new operators, which now work a little differently.

Since the logic now lives at a lower level, I modified the existing test to cover it; it exercises all cases where map_partitions is used.
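A sketch of what such a parametrized test could look like; the shapes mirror the benchmark table above, and this is not the actual test from the PR:

```python
import numpy as np
import pandas
import pytest

import modin.pandas as pd


@pytest.mark.parametrize("shape", [(112, 1), (12544, 1), (112, 112)])
def test_count_matches_pandas(shape):
    # Shapes chosen to hit both map_partitions paths: a small partition
    # grid (per-partition calls) and a large one (full-axis fallback).
    data = np.random.rand(*shape)
    md_result = pd.DataFrame(data).count()._to_pandas()
    pd_result = pandas.DataFrame(data).count()
    pandas.testing.assert_series_equal(md_result, pd_result)
```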

            )
        else:
            # splitting by full axis partitions
            new_partitions = cls.map_axis_partitions(
Collaborator:

Using the map_axis_partitions function inside map_partitions does not seem obvious and defeats the purpose of map_partitions, which declares that it applies the function to every partition.

The dataframe level seems like a more appropriate place to choose a suitable strategy.

Collaborator:

Since the partition manager, rather than the core dataframe, is designed to play around with partitions, maybe we should just update the docstring?

Retribution98 (author):
OK, I added base_map_partitions to keep the simplest implementation available, but by default we will use the new approach. Do you agree?
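A rough sketch of how that split could look at the partition manager level; the method names follow this thread, but the bodies and signatures are assumptions, not the merged code:

```python
import numpy as np

from modin.config import CpuCount


class PartitionManagerSketch:
    @classmethod
    def base_map_partitions(cls, partitions, map_func):
        # The simplest implementation: one remote call per partition.
        # ``partitions`` is assumed to be a 2D numpy array of partition
        # objects, each exposing an ``apply`` method, as in Modin.
        return np.array(
            [[part.apply(map_func) for part in row] for row in partitions]
        )

    @classmethod
    def map_axis_partitions(cls, axis, partitions, map_func):
        # Stand-in for the full-axis path: combine each axis split into
        # one virtual partition and apply ``map_func`` once per split.
        raise NotImplementedError("illustrative stub")

    @classmethod
    def map_partitions(cls, partitions, map_func):
        # Default entry point: keep the simple path for small grids and
        # fall back to full-axis mapping for large ones.
        if partitions.size <= 1.5 * CpuCount.get():
            return cls.base_map_partitions(partitions, map_func)
        return cls.map_axis_partitions(0, partitions, map_func)
```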

@YarShev (Collaborator) commented May 13, 2024:

@Retribution98, could you also check performance for dtypes, which is part of #2751?
