Documentation home update #2713

Merged: 4 commits merged into NVIDIA:master from dali_1_0_doc_4 on Feb 25, 2021

Conversation

@jantonguirao (Contributor) commented on Feb 23, 2021

Signed-off-by: Joaquin Anton <janton@nvidia.com>

Why do we need this PR?

  • Updates README.rst to reflect the latest features and improve readability.

What happened in this PR?

  • What solution was applied:
    Added a diagram
    Listed supported formats
    Mentioned GDS and Triton integration
    Added info from the developer page
    Added a quick installation command
  • Affected modules and functionalities:
    Readme
  • Key points relevant for the review:
    NA
  • Validation and testing:
    NA
  • Documentation (including examples):
    NA

JIRA TASK: [DALI-1870]

@jantonguirao force-pushed the dali_1_0_doc_4 branch 19 times, most recently from e9f3aee to 34c5355 on February 23, 2021 at 17:58
@jantonguirao changed the title from "[WIP] Documentation home update" to "Documentation home update" on Feb 23, 2021
@jantonguirao (Contributor, Author)

!build

@dali-automaton (Collaborator)

CI MESSAGE: [2101828]: BUILD STARTED

@dali-automaton (Collaborator)

CI MESSAGE: [2101828]: BUILD PASSED

Signed-off-by: Joaquin Anton <janton@nvidia.com>
README.rst (outdated diff):
The NVIDIA Data Loading Library (DALI) is a library for data loading and
pre-processing to accelerate deep learning applications. It provides a
collection of highly optimized building blocks for loading and processing
image, video and audio data, and it can be used as a portable drop-in replacement
Contributor:

Suggested change:
- image, video and audio data, and it can be used as a portable drop-in replacement
+ image, video and audio data. It can be used as a portable drop-in replacement

Contributor (Author):

Done

README.rst (outdated diff):
pre-processing to accelerate deep learning applications. It provides a
collection of highly optimized building blocks for loading and processing
image, video and audio data, and it can be used as a portable drop-in replacement
for built in data loaders and data iterations in popular deep learning frameworks.
Contributor:

Suggested change:
- for built in data loaders and data iterations in popular deep learning frameworks.
+ for built in data loaders and data iterators in popular deep learning frameworks.

Contributor (Author):

Done

README.rst (outdated diff):
raw formats, LMDB, RecordIO, TFRecord.
- Extensible for user-specific needs through open source license.
- Easy-to-use functional style Python API.
- Multiple data formats support - LMDB, RecordIO, TFRecord, COCO, JPEG, JPEG 2000, WAV, FLAC, OGG, H.264 and HEVC.
Contributor:

Suggested change:
- - Multiple data formats support - LMDB, RecordIO, TFRecord, COCO, JPEG, JPEG 2000, WAV, FLAC, OGG, H.264 and HEVC.
+ - Multiple data formats support - LMDB, RecordIO, TFRecord, COCO, JPEG, JPEG 2000, WAV, FLAC, OGG, H.264, VP9 and HEVC.

Contributor (Author):

Done
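
For readers of this thread who want to see what the "functional style Python API" bullet in the excerpt above refers to, a minimal sketch of such a pipeline is shown below. It is illustrative only: the directory path, batch size and resize parameters are assumptions, not values taken from the README.

```python
# Minimal DALI pipeline written with the functional-style API (nvidia.dali.fn).
# It reads encoded JPEG files, decodes them (partly on the GPU) and resizes them.
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def image_pipeline(image_dir):
    # Reader: lists files under image_dir; labels are derived from subdirectories.
    encoded, labels = fn.readers.file(file_root=image_dir, random_shuffle=True)
    # The "mixed" device decodes JPEGs with a CPU front end and a GPU (nvJPEG) back end.
    images = fn.decoders.image(encoded, device="mixed", output_type=types.RGB)
    images = fn.resize(images, resize_x=224, resize_y=224)
    return images, labels

pipe = image_pipeline("/path/to/images")  # hypothetical dataset location
pipe.build()
images, labels = pipe.run()
```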

README.rst (outdated diff):
--------------------

- GPU Technology Conference 2018; Fast data pipeline for deep learning training, T. Gale, S. Layton and P. Trędak: |slides1|_, |recording1|_.
- GPU Technology Conference 2019; Fast AI data pre-preprocessing with DALI; Janusz Lisiecki, Michał Zientkiewicz: |slides2|_, |recording2|_.
- GPU Technology Conference 2019; Integration of DALI with TensorRT on Xavier; Josh Park and Anurag Dixit: |slides3|_, |recording3|_.
- GPU Technology Conference 2020; Fast Data Pre-Processing with NVIDIA Data Loading Library (DALI); Albert Wolant, Joaquin Anton Guirao |recording4|_.
- `Developer page <https://developer.nvidia.com/DALI>`_.
- `Blog post <https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/>`_.
- `Blog post: Fast AI Data Preprocessing with NVIDIA DALI <https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/>`_.
Contributor:

Maybe we can refer to all blog posts related to DALI - https://developer.nvidia.com/blog/tag/dali/ ?

Contributor (Author):

Done

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor, Author)

!build

@dali-automaton (Collaborator)

CI MESSAGE: [2104773]: BUILD STARTED

@dali-automaton (Collaborator)

CI MESSAGE: [2104773]: BUILD PASSED

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor, Author)

!build

@dali-automaton (Collaborator)

CI MESSAGE: [2105293]: BUILD STARTED

@dali-automaton (Collaborator)

CI MESSAGE: [2105293]: BUILD PASSED

README.rst (outdated diff):
- Portable accross popular deep learning frameworks: TensorFlow, PyTorch, MXNet, PaddlePaddle.
- Supports CPU and GPU execution.
- Scalable across multiple GPUs.
- Flexible graphs lets developers create custom pipelines.
Collaborator:

Should be either "graphs let" or "graph lets".

Also, it's not clear to me what this means. What makes our graphs flexible?

Contributor (Author):

I copied it from the developer page. It means that users can write their own processing graphs.
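
To make "users can write their own processing graphs" concrete for this thread, below is a hedged sketch of a pipeline whose graph branches: one decoded input feeds two independent processing paths, and both become outputs. The operator choice and parameters are illustrative assumptions, not text from the README.

```python
# Sketch of a custom DALI graph: the decoded images feed two independent branches.
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=16, num_threads=2, device_id=0)
def two_branch_pipeline(image_dir):
    encoded, labels = fn.readers.file(file_root=image_dir)
    images = fn.decoders.image(encoded, device="mixed")

    # Branch 1: randomly rotated copy of the images.
    rotated = fn.rotate(images,
                        angle=fn.random.uniform(range=(-30.0, 30.0)),
                        fill_value=0)

    # Branch 2: crop, random horizontal flip and normalization for training.
    normalized = fn.crop_mirror_normalize(images,
                                          crop=(224, 224),
                                          mirror=fn.random.coin_flip(),
                                          dtype=types.FLOAT,
                                          mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                                          std=[0.229 * 255, 0.224 * 255, 0.225 * 255])

    # The graph has two leaves; both are returned as pipeline outputs.
    return rotated, normalized, labels
```

Returning several outputs from a user-defined graph like this is what distinguishes it from a fixed, built-in data loader.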

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor, Author)

!build

@dali-automaton (Collaborator)

CI MESSAGE: [2108619]: BUILD STARTED

@dali-automaton (Collaborator)

CI MESSAGE: [2108619]: BUILD PASSED

@jantonguirao merged commit 7132cdb into NVIDIA:master on Feb 25, 2021
@JanuszL mentioned this pull request on May 19, 2021