
Improve image/audio read throughput by 50% for image/audio features using Daft #3249

Merged
merged 47 commits from daft_reads into master on Jun 23, 2023

Conversation

Contributor

@arnavgarg1 arnavgarg1 commented Mar 15, 2023

There are many advantages to doing this from a code maintainability perspective:

  1. We can get rid of a lot of custom and unnecessary Dask transformations.
  2. We still get to use Ray, while Daft is much more optimized for data querying and aggregations.
  3. We only use Daft for image/audio reads (see the sketch below).
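
As a rough illustration, here is a minimal sketch of what the Daft-based read path could look like on Ray. The column names, example paths, and Ray address are hypothetical placeholders rather than the actual Ludwig code; `set_runner_ray`, `from_pydict`, `with_column`, `.url.download()`, and `to_pandas()` are standard Daft APIs.

```python
import daft
from daft import col

# Run Daft on the existing Ray cluster ("auto" is a placeholder address).
daft.context.set_runner_ray(address="auto")

# Hypothetical input: a column of image URLs/paths produced by earlier preprocessing.
df = daft.from_pydict({"image_path": ["s3://bucket/img1.png", "s3://bucket/img2.png"]})

# Download the raw bytes for every row; Daft parallelizes this across the cluster.
df = df.with_column("image_bytes", col("image_path").url.download())

# Hand the result back to the rest of preprocessing, e.g. as a Pandas DataFrame.
result = df.to_pandas()
```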

In both the local single-partition and the distributed multi-partition Ray cluster setups, Daft gives a 50% speedup over the existing Ludwig code for URL downloading, and an overall 40% speedup for all of preprocessing.

See benchmarking doc for more details.


Co-authored-by: @jaychia


github-actions bot commented Mar 15, 2023

Unit Test Results

  6 files  ±0    6 suites  ±0   1h 14m 59s ⏱️ - 3m 35s
33 tests ±0  29 ✔️ ±0    4 💤 ±0  0 ±0 
99 runs  ±0  87 ✔️ ±0  12 💤 ±0  0 ±0 

Results for commit 1883bff. ± Comparison against base commit d2f71c5.

♻️ This comment has been updated with latest results.

@arnavgarg1 arnavgarg1 changed the title Parallelize image/audio reads using Daft on Ray instead of Ray + Dask Test parallelize image/audio reads using Daft on Ray instead of Ray + Dask Mar 15, 2023
@arnavgarg1 arnavgarg1 changed the title Test parallelize image/audio reads using Daft on Ray instead of Ray + Dask Parallelize image/audio reads using Daft on Ray instead of Ray + Dask May 3, 2023
@arnavgarg1 arnavgarg1 marked this pull request as ready for review May 13, 2023 00:49
@arnavgarg1 arnavgarg1 requested a review from jppgks May 16, 2023 16:01
ludwig/backend/ray.py (outdated review thread, resolved)
@arnavgarg1 arnavgarg1 changed the title Parallelize image/audio reads using Daft on Ray instead of Ray + Dask Parallelize image/audio reads using Daft on Ray instead of Dask on Ray Jun 1, 2023
            # Dask engine: convert the Daft DataFrame back to Dask and persist it
            # on the Ray cluster for the remaining preprocessing stages.
            df = df.to_dask_dataframe()
            df = self.df_engine.persist(df)
        else:
            # Non-Dask engine: collect the Daft DataFrame into a local Pandas DataFrame.
            df = df.to_pandas()
Collaborator

Is this only needed for the Modin df engine? I think Pandas shouldn't use this codepath at all, right?

Contributor Author

Actually, it's needed to convert from Daft back to either Dask or Pandas, so even Pandas would hit this code path.
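
To make that conversion flow concrete, here is a hedged sketch of the branch under discussion. The helper name `_daft_to_backend_df` and the `use_dask` flag are illustrative, not the actual Ludwig code; only `to_dask_dataframe()`, `persist()`, and `to_pandas()` come from the diff above.

```python
def _daft_to_backend_df(df, df_engine, use_dask: bool):
    """Convert a Daft DataFrame back to whichever frame type the backend expects.

    Illustrative helper: every backend, Dask or Pandas, passes through this
    conversion after Daft finishes the image/audio reads.
    """
    if use_dask:
        # Distributed backend: materialize as a Dask DataFrame and persist it
        # on the Ray cluster so later preprocessing stages reuse the partitions.
        out = df.to_dask_dataframe()
        return df_engine.persist(out)
    # Local backend: collect everything into a single in-memory Pandas DataFrame.
    return df.to_pandas()
```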

ludwig/backend/ray.py (outdated review thread, resolved)
@arnavgarg1 arnavgarg1 changed the title Parallelize image/audio reads using Daft on Ray instead of Dask on Ray Improve read throughput by 50% for image/audio reads using Daft Jun 20, 2023
Collaborator

@justinxzhao justinxzhao left a comment

Excited for the speedups!!

@arnavgarg1 arnavgarg1 changed the title Improve read throughput by 50% for image/audio reads using Daft Improve image/audio read throughput by 50% for image/audio features using Daft Jun 23, 2023
@arnavgarg1 arnavgarg1 merged commit 91c28f8 into master Jun 23, 2023
16 checks passed
@arnavgarg1 arnavgarg1 deleted the daft_reads branch June 23, 2023 14:42
6 participants