
Conversation

sayakpaul (Member)

What does this PR do?

This PR fixes how the LCM benchmark numbers are reported.

@@ -162,6 +162,25 @@ def run_inference(self, pipe, args):
             guidance_scale=1.0,
         )

+    def benchmark(self, args):
@sayakpaul (Member, Author) commented on Dec 17, 2023:

This ensures that the overridden get_result_filepath() in the LCMLoRATextToImageBenchmark class gets called properly.
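For context, here is a minimal sketch of the dispatch behavior this comment describes; it is not the actual diffusers code, and the attribute and file names below are illustrative. Once `benchmark()` is defined on the base class, `self.get_result_filepath()` resolves to the subclass override, so an LCM-LoRA run is reported under its own results file rather than the base pipeline's.

```python
import os
from argparse import Namespace


class TextToImageBenchmark:
    # Hypothetical default; the real class derives this differently.
    pipeline_class_name = "StableDiffusionPipeline"

    def get_result_filepath(self, args):
        # Default report location, named after the pipeline class.
        return os.path.join(args.output_dir, f"{self.pipeline_class_name}.csv")

    def benchmark(self, args):
        # ... run inference and collect timing/memory numbers ...
        # Because this method lives on the base class, `self` still
        # dispatches to the subclass, so an overridden
        # get_result_filepath() is the one that actually runs.
        filepath = self.get_result_filepath(args)
        print(f"writing results to {filepath}")


class LCMLoRATextToImageBenchmark(TextToImageBenchmark):
    def get_result_filepath(self, args):
        # Override so LCM-LoRA numbers land in their own file instead of
        # overwriting the base pipeline's report.
        return os.path.join(args.output_dir, "lcm_lora_text2image.csv")


if __name__ == "__main__":
    args = Namespace(output_dir="benchmark_outputs")
    LCMLoRATextToImageBenchmark().benchmark(args)  # -> lcm_lora_text2image.csv
```

Before the fix, the reporting path was computed outside the class, so the subclass override never took effect and LCM numbers were filed under the wrong name.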

sayakpaul merged commit 9cef07d into main on Dec 17, 2023
sayakpaul deleted the fix/lcm-benchmark-reporting branch on December 17, 2023 at 10:02
donhardman pushed a commit to donhardman/diffusers that referenced this pull request Dec 18, 2023
* fix: lcm benchmarking reporting

* fix generate_csv_dict call.
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
* fix: lcm benchmarking reporting

* fix generate_csv_dict call.