openvinotoolkit/MLPerf

MLPerf Intel OpenVino Inference Code

Environment

Rules

The benchmarks follow the official MLPerf Inference Rules (published by MLCommons in the inference_policies repository).

Supported benchmarks

| Area | Task | Model | Dataset | SingleStream | MultiStream | Server | Offline |
|------|------|-------|---------|--------------|-------------|--------|---------|
| Vision | Image classification | ResNet50-v1.5 | ImageNet (224x224) | ✔️ | ✔️ | ✔️ | ✔️ |
| Vision | Object detection | Retinanet | OpenImages (800x800) | ✔️ | ✔️ | ✔️ | ✔️ |
| Vision | Medical image segmentation | 3D-UNet | KiTS 2019 (602x512x512) | ✔️ | ❌ | ❌ | ✔️ |
| Language | Language processing | BERT-large | SQuAD v1.1 (max_seq_len=384) | ✔️ | ✔️ | ✔️ | ✔️ |

✔️ - supported ❌ - not supported

Execution Modes

  • Performance
  • Accuracy
    • User first runs the benchmark in Accuracy mode to generate mlperf_log_accuracy.json
    • User then runs a dedicated accuracy tool provided by MLPerf
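The two-phase accuracy flow above can be sketched as follows. The commands are echoed rather than executed (a dry run); the accuracy tool name (accuracy-imagenet.py, which ships with the upstream MLCommons inference repository) and the file paths are illustrative assumptions, not part of this repo:

```shell
# Dry-run sketch of the two-phase accuracy flow. Tool name and paths
# are illustrative assumptions; commands are echoed, not executed.
run_accuracy_flow() {
  # Phase 1: run the benchmark in Accuracy mode to produce mlperf_log_accuracy.json
  echo "./scripts/run.sh -m resnet50 -d CPU -s Offline -e Accuracy"
  # Phase 2: score the log with the dedicated MLPerf accuracy tool
  echo "python accuracy-imagenet.py --mlperf-accuracy-file mlperf_log_accuracy.json --imagenet-val-file val_map.txt"
}
run_accuracy_flow
```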

BKC on CPX, ICX systems

Use the following best-known configurations (BKCs) to optimize performance on CPX/ICX systems. These settings are applied by performance.sh, mentioned in How to Build and Run.

  • Turbo ON
    echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
    
  • Set CPU governor to performance (Please rerun this command after reboot):
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    
    OR
    cpupower frequency-set -g performance
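
After applying the settings above, it can be useful to read them back and verify they took effect. The helper below is a minimal sketch (an assumption, not part of the repo); the sysfs paths in the usage comment are the real nodes from the steps above:

```shell
# Sketch (not part of the repo): read back a sysfs setting and compare it
# to the expected value; returns nonzero on mismatch.
check_setting() {
  # $1 = path to sysfs file, $2 = expected value
  [ "$(cat "$1" 2>/dev/null)" = "$2" ]
}

# Intended usage against the real sysfs nodes from the steps above:
#   check_setting /sys/devices/system/cpu/intel_pstate/no_turbo 0
#   check_setting /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor performance
```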
    

How to Build and Run

  1. Navigate to the repository root directory. This directory is your BUILD_DIRECTORY.

  2. Run the build script:

    ./build.sh
    

    NOTE: sudo privileges are required

  3. Modify BUILD_DIRECTORY in setup_env.sh (if necessary) and source the script:

    source scripts/setup_env.sh
    
  4. Run the performance script for CPX/ICX systems:

    ./scripts/performance.sh
    
  5. Download models

    ./scripts/download_models.sh [specific model]
    
  6. Download datasets

    ./scripts/download_datasets.sh [specific dataset]
    
  7. Modify script ./scripts/run.sh to apply desired parameters.

    The following OpenVINO parameters should be adjusted based on selected hardware target:

    • number of streams
    • number of infer requests
    • number of threads
    • inference precision
  8. Update the MLPerf parameters (user.conf and mlperf.conf) if needed.
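
    For reference, LoadGen settings in user.conf take the form `model.scenario.key = value` (`*` acts as a wildcard). The values below are illustrative placeholders, not tuned targets:

    ```
    # Hypothetical user.conf overrides -- values are placeholders, not tuned
    *.SingleStream.target_latency = 10
    resnet50.Offline.target_qps = 1000
    *.*.min_duration = 600000
    ```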

  9. Run:

    ./scripts/run.sh -m <model> -d <device> -s <scenario> -e <mode>
    

    For example (results will be stored into the ${BUILD_DIRECTORY}/results/resnet50/CPU/Performance/SingleStream folder):

    ./scripts/run.sh -m resnet50 -d CPU -s SingleStream -e Performance
    

    To run all combinations of models/devices/scenarios/modes:

    ./scripts/run_all.sh
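
    Conceptually, running every combination amounts to a nested loop over the run.sh parameters. The sketch below is a hypothetical illustration (the actual run_all.sh may differ); it echoes each command instead of executing it:

    ```shell
    # Hypothetical sketch of iterating all model/scenario/mode combinations
    # on CPU; commands are echoed (dry run), not executed.
    run_all_sketch() {
      for m in resnet50 retinanet 3d-unet bert; do
        for s in SingleStream Offline; do
          for e in Performance Accuracy; do
            echo "./scripts/run.sh -m $m -d CPU -s $s -e $e"
          done
        done
      done
    }
    run_all_sketch
    ```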
    

    NOTE: This product is not for production use; the scripts are provided as examples. For reporting official MLPerf results, dedicated scripts with suitable parameters should be prepared for each model.
