Add resources section to README. #1085

Merged: 4 commits merged into rapidsai:branch-22.10 on Aug 13, 2022

Conversation

bdice (Contributor) commented Aug 12, 2022:

Description

This PR adds a "Resources" section to the README, similar to what I did in cuDF some time ago. My primary reason for adding this is to have a quick link to the RMM documentation.

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@bdice bdice requested a review from harrism August 12, 2022 00:48
@bdice bdice self-assigned this Aug 12, 2022
@bdice bdice added the doc (Documentation) and non-breaking (Non-breaking change) labels Aug 12, 2022
README.md (review thread):
- [Getting Started](https://rapids.ai/start.html): Instructions for installing RMM.
- [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate.
- [GitHub repository](https://github.com/rapidsai/rmm): Download the RMM source code.
- [Issue tracker](https://github.com/rapidsai/rmm/issues): Report issues or request features.
jrhemstad (Contributor) commented:
Suggested change (add a new entry after the issue tracker line):
- [Issue tracker](https://github.com/rapidsai/rmm/issues): Report issues or request features.
- [Stream-ordered Allocation](https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/): More information about the semantics of stream-ordered allocation.

bdice (Contributor, Author) commented Aug 12, 2022:

@jrhemstad This blog post isn't about RMM as a library; it's background information that developers should know. It can definitely be linked in the README, but it doesn't feel appropriate for the top-level Resources section. I'd link it in the section titled "Stream-ordered Memory Allocation" below. If you're okay with that, let me know and I'll make the change. Edit: went ahead and did it.

bdice (Contributor, Author) commented:

I tackled this in 4d675f2. Feel free to suggest changes / approve if you're happy with it.

jrhemstad (Contributor) replied:

If that's what you have in mind for "Resources" then sure.

I had thought this section was more of a "Further Reading" as just a collection of links to things anyone interested in RMM would also be interested in.

bdice (Contributor, Author) commented Aug 13, 2022:
I try to keep the "Resources" section brief since it's a top-level section: just a set of links to crucial project information, like how to acquire, use, report issues with, and contribute to the code. "Further Reading" about the theory and concepts underlying the software design definitely fits in the README, just maybe not among the core project info.

harrism (Member) commented Aug 12, 2022:

That reminds me there is also a blog post about RMM to link.

https://developer.nvidia.com/blog/fast-flexible-allocation-for-cuda-with-rapids-memory-manager/

jakirkham (Member) commented:
If there are any videos of RMM talks, those might be nice to link as well

README.md (outdated review thread)
Co-authored-by: Jake Hemstad <jhemstad@nvidia.com>
README.md (review thread)
bdice (Contributor, Author) commented Aug 12, 2022:

> That reminds me there is also a blog post about RMM to link.
>
> https://developer.nvidia.com/blog/fast-flexible-allocation-for-cuda-with-rapids-memory-manager/

Resolved in 21d4644.


README.md (review thread):

Achieving optimal performance in GPU-centric workflows frequently requires customizing how host and device memory are allocated. For example, using "pinned" host memory for asynchronous host <-> device memory transfers, or using a device memory pool sub-allocator to reduce the cost of dynamic device memory allocation.
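The pool sub-allocator mentioned in the quoted paragraph can be illustrated with a short example. Below is a minimal sketch using RMM's Python API, assuming `rmm.mr.PoolMemoryResource`, `rmm.mr.CudaMemoryResource`, and `rmm.mr.set_current_device_resource` are available; the pool size is an arbitrary illustration, not something this PR specifies.

```python
# Minimal sketch: route device allocations through a pool sub-allocator so that
# repeated allocations are served from the pool rather than by individual
# cudaMalloc/cudaFree calls.
import rmm

pool = rmm.mr.PoolMemoryResource(
    rmm.mr.CudaMemoryResource(),  # upstream resource the pool draws from
    initial_pool_size=2**30,      # 1 GiB starting pool size (arbitrary choice)
)
rmm.mr.set_current_device_resource(pool)

# Subsequent RMM allocations on this device now come out of the pool.
buf = rmm.DeviceBuffer(size=1_000_000)
```

The "pinned" host memory case from the same paragraph is not shown here; this sketch covers only the device pool path.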
bdice (Contributor, Author) commented Aug 12, 2022:

My editor trimmed trailing whitespace. Sorry for the noisy diff -- can we merge this with the whitespace trimmed, or do I need to revert?

bdice (Contributor, Author) commented Aug 13, 2022:

@gpucibot merge

@rapids-bot (bot) merged commit 03013f3 into rapidsai:branch-22.10 on Aug 13, 2022