add ubuntu installation guide for NUC system #8

Merged: profvjreddi merged 1 commit into mlcommons:master from rai-project:feature/nuc_installation on Dec 6, 2018
Conversation
profvjreddi suggested changes on Nov 29, 2018:
Can we please rename this file so that it is not all caps?
Also, can we call it setup_nuc or something similar, since you also talk about Docker in this file?
We could use this as the general file that tells people how to configure the NUC as we find new issues in the future.
mpjlu pushed a commit to mpjlu/inference that referenced this pull request on Mar 11, 2019:
add ubuntu installation guide for NUC system
guschmue pushed a commit that referenced this pull request on May 16, 2020:
* adding test file.
* copying classification_and_detection into recommendation directory
* making initial changes for model and dataset wrapping
* Updated mlperf integration, running Single Stream with one item per batch
* data loader is complient with loadgen; fixed data loading to let loadgen create sequence if samples in batches; fixed accuracy reporting for DLRM
* updated DLRM Queue Runner
* synch with the original after previous edits
* second synchronization
* synch 2020-04-10
* updating to the original master
* updating to the original
* added script to generate fake data; added option to specify output directory in quickgen.py
* Adding support for interchangeable CPU, GPU and multiple GPUs execution.
* Change how we obtain the length of the test dataset. Plus a few cosmetic changes.
* Code refactor for simplification (part 1)
* Code refactor for simplification (part 2)
* removing spurious file.
* removing another spurious file.
* removing more spurious files.
* Enabling gpu path (work in progress).
* Adding automatic support for different dataset options.
* Enabling gpu path, with help from Dmitriy (done).
* Fixing a latent bug.
* Adding support for binary loader (work in progress).
* adding quick test generator.
* improving quick test generator.
* Adding support for binary loader (done).
* Moving quick test generator from python to tools. Removing spurious files.
* Removing non pytorch native backends.
* Fixing a latent bug.
* adjusting location of a script.
* adding profile option to fake data generation script
* Adding readme instructions.
* Update README.md (several commits)
* adjusting run_and_time.sh script
* incorporating count variable in criteo data set
* Switching default num-workers to 0. Plus a few cosmetic changes.
* Adjusting script name.
* Adding readme instructions and docker dependencies.
* Adding a few more docker dependencies.
* Removing redundant functions.
* Adding implementation and reporting of AUC metric.
* Adjusting and polishing README (many commits)
* Reorganizing README for clarity.
* Adjusting run script
* Refactoring some parameters.
* minor cleanup.
* Fixing a typo.
* Fixing latent bug when storing results for AUC. Also, resetting QUERY_CAP_LENGTH from 500 to 2048.
* Fixing remaining parts of the README.
* Split count command line argument for samples and queries.
* Added CPU dockerfiles + kickstart script
* Modified kickstart script
* Modified README to reflect
* Adding support for tested docker GPU setup
* Adding README for GPU support on Docker
* Adding support for aggregation of samples.
* Adjusting name of the parameters.
* added variable query sizes (#8)
* Refactoring and adding code to write offsets.

Co-authored-by: dkorchevgithub <63178227+dkorchevgithub@users.noreply.github.com>
Co-authored-by: Sam Naghshineh <sam@naghshineh.net>
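The commit log above mentions "Adding implementation and reporting of AUC metric" for DLRM accuracy. The PR's actual code is not shown here, but as a rough illustration, ROC AUC can be computed in pure Python via the rank-comparison (Mann-Whitney) view of the metric; the function name and signature below are assumptions for the sketch, not taken from the repository.

```python
# Illustrative sketch only: ROC AUC as the probability that a random
# positive outscores a random negative (ties count as half a win).
# This is not the repository's implementation.

def roc_auc(labels, scores):
    """labels: iterable of 0/1 ground truth; scores: predicted scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # tie counts as half
    return wins / (len(pos) * len(neg))

# Example: roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) -> 0.75
```

This O(P*N) pairwise form is fine for a sketch; a production metric would sort once and use ranks instead.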
tjablin added a commit to tjablin/inference that referenced this pull request on Feb 26, 2022:
nettrix-calibration
arjunsuresh pushed a commit to GATEOverflow/inference that referenced this pull request on Apr 29, 2024:
(Same squashed commit message as the commit pushed by guschmue above, except that the variable query sizes entry reads "mlcommons#8" and the message ends with "Former-commit-id: db0b7eb".)
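The mirrored commit log also mentions a script to generate fake data (quickgen.py) with an option to specify an output directory. As a rough illustration only, a quickgen-style generator for Criteo-like DLRM samples (13 dense and 26 categorical features) might look like the following; the function name, defaults, and field layout are assumptions, not taken from the actual script.

```python
# Hypothetical quickgen-style fake data generator. Field names and the
# 13-dense / 26-sparse layout (Criteo-like) are assumptions for illustration.
import random

def generate_fake_samples(num_samples, num_dense=13, num_sparse=26,
                          vocab_size=1000, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible fake data
    samples = []
    for _ in range(num_samples):
        samples.append({
            "dense": [rng.random() for _ in range(num_dense)],            # continuous features
            "sparse": [rng.randrange(vocab_size) for _ in range(num_sparse)],  # categorical ids
            "label": rng.randint(0, 1),                                    # click / no-click
        })
    return samples
```

A generator like this is useful for smoke-testing the data loader and loadgen plumbing without downloading the real Criteo dataset.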
No description provided.