Conversation

@AbdBarho (Contributor) commented Apr 9, 2023

When running this app for the first time in a WSL2 environment, which is notoriously slow when it comes to IO, computing the SHAs of the models takes an eternity.

Computing shas for sd2.1

| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 510.87s)

I increased the chunk size to 16 MB to reduce the number of round trips when loading the data. New results:

| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 59.89s)

Higher values don't seem to make any further impact.
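The change boils down to reading each model file in larger chunks before feeding it to the hash object. Below is a minimal sketch of that pattern; the function name and the 16 MB constant are illustrative, not the exact invoke-ai code:

```python
import hashlib
from pathlib import Path

# 16 MB chunks: fewer read() round trips, which matters a lot on
# slow filesystems such as WSL2's mounted Windows drives.
CHUNK_SIZE = 16 * 1024 * 1024


def sha256_of_file(path: Path, chunk_size: int = CHUNK_SIZE) -> str:
    """Compute the SHA256 of a file by streaming it in large chunks."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Read until EOF; each chunk is hashed incrementally, so the
        # whole file never needs to fit in memory at once.
        while chunk := f.read(chunk_size):
            sha.update(chunk)
    return sha.hexdigest()
```

On Python 3.11+, `hashlib.file_digest(f, "sha256")` offers a built-in equivalent of this loop.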

@AbdBarho AbdBarho changed the title Increase chunk size when computing SHAs Increase chunk size when computing diffusers SHAs Apr 9, 2023
@damian0815 (Contributor) left a comment


seems fine

@lstein (Collaborator) left a comment


OMG! I hadn't realized how long the hashing takes on your platform. On mine, it completes in ~10s.

@lstein lstein enabled auto-merge April 9, 2023 20:37
@lstein lstein merged commit f050957 into invoke-ai:main Apr 10, 2023