
add ubuntu installation guide for NUC system #8

Merged

Merged 1 commit into mlcommons:master on Dec 6, 2018

Conversation

@abduld (Contributor) commented Nov 26, 2018

No description provided.

@profvjreddi (Contributor) left a comment


Can we please rename this file so that it is not all caps?

Also, can we call it setup_nuc or something since you also talk about Docker in this file?

We could use this as the general file that tells people how to configure the NUC as we find new issues in the future.
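The rename the review asks for could be done with `git mv`, which keeps the file's history intact. The filenames below are hypothetical placeholders, since the original all-caps name is not shown in this thread:

```shell
# Hypothetical sketch: rename an all-caps guide to a lowercase, more general
# name such as setup_nuc.md, preserving git history. Filenames are placeholders.
git mv UBUNTU_INSTALL_NUC.md setup_nuc.md
git commit -m "rename NUC setup guide to setup_nuc.md"
```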

@profvjreddi profvjreddi merged commit 28a19b4 into mlcommons:master Dec 6, 2018
mpjlu pushed a commit to mpjlu/inference that referenced this pull request Mar 11, 2019

add ubuntu installation guide for NUC system

guschmue pushed a commit that referenced this pull request May 16, 2020
* adding test file.

* copying classification_and_detection into recommendation directory

* making initial changes for model and dataset wrapping

* Updated mlperf integration

Updated mlperf integration, running Single Stream with one item per
batch

* - data loader is compliant with loadgen

- fixed data loading to let loadgen create a sequence of samples in
batches
- fixed accuracy reporting for DLRM

* updated DLRM Queue Runner

* synch with the original after previous edits

* second synchronization

second synchronization

* synch 2020-04-10

* updating to the original master

* updating to the original

* added script to generate fake data

- added script to generate fake data
- added option to specify output directory in quickgen.py

* Adding support for interchangeable CPU, GPU and multiple GPUs execution.

* Change how we obtain the length of the test dataset. Plus a few cosmetic changes.

* Code refactor for simplification (part 1)

* Code refactor for simplification (part 2)

* removing spurious file.

* removing another spurious file.

* removing more spurious files.

* Enabling gpu path (work in progress).

* Adding automatic support for different dataset options.

* Enabling gpu path, with help from Dmitriy (done).

* Fixing a latent bug.

* Adding support for binary loader (work in progress).

* adding quick test generator.

* improving quick test generator.

* Adding support for binary loader (done).

* Moving quick test generator from python to tools. Removing spurious files.

* Removing non-PyTorch-native backends.

* Fixing a latent bug.

* adjusting location of a script.

* adding profile option to fake data generation script

* Adding readme instructions.

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* adjusting run_and_time.sh script

* Update README.md

* Update README.md

* incorporating count variable in criteo data set

* Switching default num-workers to 0. Plus a few cosmetic changes.

* Adjusting script name.

* Adding readme instructions and docker dependencies.

* Adding a few more docker dependencies.

* Removing redundant functions.

* Adding implementation and reporting of AUC metric.

* Adjusting README

* Adjusting README.

* Adjusting README

* Adjusting README

* More README adjustments.

* Continue polishing README.

* Continue polishing README.

* Continue polishing README.

* Continue work on README.

* Updating README.

* Updating README.

* Updating README.

* Updating README.

* Reorganizing README for clarity.

* Adjusting README.

* Adjusting run script

* Adjusting readme.

* Refactoring some parameters.

* minor cleanup.

* Fixing a typo.

* Fixing latent bug when storing results for AUC. Also, resetting QUERY_CAP_LENGTH from 500 to 2048.

* Fixing remaining parts of the README.

* Split count command line argument for samples and queries.

* Added CPU dockerfiles + kickstart script

* Modified kickstart script

* Modified README to reflect

* Update README.

* Update README.

* Adding support for tested docker GPU setup

* Adding README for GPU support on Docker

* Minor adjustments to README

* Adjusting README.

* Adding support for aggregation of samples.

* Adjusting name of the parameters.

* added variable query sizes (#8)

added variable query sizes

* Refactoring and adding code to write offsets.

Co-authored-by: dkorchevgithub <63178227+dkorchevgithub@users.noreply.github.com>
Co-authored-by: Sam Naghshineh <sam@naghshineh.net>
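One of the commits above adds implementation and reporting of an AUC metric for DLRM. As a rough sketch (not the PR's actual code), ROC-AUC can be computed from accumulated scores and binary labels via the rank-statistic (Mann-Whitney U) formulation; the function name and the no-ties assumption are mine:

```python
import numpy as np

def roc_auc(labels, scores):
    """Rank-based ROC-AUC for binary labels (assumes no tied scores)."""
    labels = np.asarray(labels).ravel()
    scores = np.asarray(scores).ravel()
    # Rank every score ascending, 1..n.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic, normalized to [0, 1].
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])` returns 1.0, since every positive outscores every negative.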
tjablin added a commit to tjablin/inference that referenced this pull request Feb 26, 2022
arjunsuresh pushed a commit to GATEOverflow/inference that referenced this pull request Apr 29, 2024