
Use of cached adjacency matrix in all algorithms #356

Closed
sbaldu opened this issue Sep 28, 2023 · 1 comment · Fixed by #358
Assignees: sbaldu
Labels: core, enhancement, good first issue, hacktoberfest, performance, Priority:High

Comments

sbaldu (Collaborator) commented Sep 28, 2023

Since we now have a cachedAdjacencyMatrix member, wouldn't it be better to use it in all the algorithms, instead of calling the getAdjMatrix method? I think that in some applications this would save a lot of time, especially for very large graphs.
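
A minimal sketch of the difference, assuming `getAdjMatrix` rebuilds the matrix from the edge list on every call while `cachedAdjacencyMatrix` is kept up to date as edges are added. The surrounding types and helpers (`AdjacencyMatrix`, `addEdge`, `outDegree`) are hypothetical placeholders, not the library's actual API:

```cpp
#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical adjacency-matrix type for illustration only.
using AdjacencyMatrix = std::unordered_map<int, std::vector<int>>;

class Graph {
 public:
  // Rebuilds the matrix from the edge list on every call: O(V + E) work each time.
  AdjacencyMatrix getAdjMatrix() const {
    AdjacencyMatrix adj;
    for (const auto& [from, to] : edges) adj[from].push_back(to);
    return adj;
  }

  void addEdge(int from, int to) {
    edges.emplace_back(from, to);
    cachedAdjacencyMatrix[from].push_back(to);  // keep the cache in sync
  }

  // Example algorithm: reading the cached member avoids the per-call rebuild.
  std::size_t outDegree(int node) const {
    const auto it = cachedAdjacencyMatrix.find(node);
    return it == cachedAdjacencyMatrix.end() ? 0 : it->second.size();
  }

 private:
  std::vector<std::pair<int, int>> edges;
  AdjacencyMatrix cachedAdjacencyMatrix;  // built incrementally, reused by algorithms
};

int main() {
  Graph g;
  g.addEdge(0, 1);
  g.addEdge(0, 2);
  return g.outDegree(0) == 2 ? 0 : 1;
}
```

With the cached member, algorithms that are called repeatedly on the same graph pay the construction cost once instead of on every invocation, which is where the savings on very large graphs would come from.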

@ZigRazor added the enhancement, good first issue, core, Priority:High, performance, and hacktoberfest labels on Sep 28, 2023
@sbaldu self-assigned this on Sep 28, 2023
@sbaldu linked a pull request on Sep 28, 2023 that will close this issue
nrkramer (Collaborator) commented

It absolutely would. I encourage replacing all such usages with the cached adjacency matrix.
