  _        __                                                     _            
 (_)      / _|                                                   (_)           
  _ _ __ | |_ ___ _ __ ___ _ __   ___ ___         ___ _ __   __ _ _ _ __   ___ 
 | | '_ \|  _/ _ \ '__/ _ \ '_ \ / __/ _ \  __   / _ \ '_ \ / _` | | '_ \ / _ \
 | | | | | ||  __/ | |  __/ | | | (_|  __/ |__| |  __/ | | | (_| | | | | |  __/
 |_|_| |_|_| \___|_|  \___|_| |_|\___\___|       \___|_| |_|\__, |_|_| |_|\___|
                                                             __/ |             
                                                            |___/              


Inference-Engine

Table of contents

  • Overview
  • Downloading, Building and Testing
  • Examples
  • Documentation

Overview

Inference-Engine is a software library for researching ways to efficiently propagate inputs through deep, feed-forward neural networks exported from Python by the companion package nexport. Inference-Engine's implementation language, Fortran 2018, makes it suitable for integration into high-performance computing (HPC) applications. The first HPC application of interest is the Intermediate Complexity Atmospheric Research (ICAR) model. The novel features of Inference-Engine include

  1. Exposing concurrency via
     • an elemental inference function
     • an elemental activation strategy
  2. Gathering network weights and biases into contiguous arrays
  3. Runtime selection of the inference algorithm

Item 1 ensures that the infer procedure can be invoked inside Fortran's do concurrent construct, which some compilers can offload automatically to graphics processing units (GPUs). We envision this being useful in applications that require large numbers of independent inferences. Item 2 exploits the special case in which the number of neurons is uniform across the network layers; storing the weights and biases in contiguous arrays facilitates spatial locality in memory access patterns. Item 3 offers the possibility of adaptively selecting an inference method based on runtime information. The current methods include ones based on the intrinsic functions dot_product and matmul. Future options will explore the use of OpenMP and OpenACC for vectorization, multithreading, and/or accelerator offloading.
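
The sketch below illustrates these three ideas with a toy network: weights and biases for layers of uniform width sit in contiguous arrays, each layer applies matmul followed by an elemental activation, and a do concurrent loop performs many independent inferences. The module, procedure, and variable names (inference_sketch_m, infer, sigmoid) and the array layout are illustrative assumptions only, not Inference-Engine's actual interfaces.

module inference_sketch_m
  implicit none
  integer, parameter :: neurons = 8, layers = 3
contains

  elemental function sigmoid(x) result(y)
    ! Elemental activation: applies element-wise to scalars or whole arrays.
    real, intent(in) :: x
    real :: y
    y = 1./(1. + exp(-x))
  end function

  pure function infer(input, weights, biases) result(output)
    ! Weights and biases for every layer live in contiguous arrays (item 2);
    ! each layer is a matmul followed by the elemental activation (items 1 and 3).
    real, intent(in) :: input(neurons)
    real, intent(in) :: weights(neurons, neurons, layers), biases(neurons, layers)
    real :: output(neurons)
    integer :: layer
    output = input
    do layer = 1, layers
      output = sigmoid(matmul(weights(:,:,layer), output) + biases(:,layer))
    end do
  end function

end module

program many_independent_inferences
  use inference_sketch_m, only : infer, neurons, layers
  implicit none
  integer, parameter :: num_inputs = 1024
  real :: weights(neurons, neurons, layers), biases(neurons, layers)
  real :: inputs(neurons, num_inputs), outputs(neurons, num_inputs)
  integer :: i

  call random_number(weights); call random_number(biases); call random_number(inputs)

  ! Each inference is independent, so some compilers can offload this loop to a GPU.
  do concurrent (i = 1:num_inputs)
    outputs(:,i) = infer(inputs(:,i), weights, biases)
  end do

  print *, "first inference result: ", outputs(:,1)
end program

Because infer is pure and sigmoid is elemental, both may legally appear inside do concurrent, which is the property that lets some compilers map the loop iterations onto a GPU.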

Downloading, Building and Testing

To download, build, and test Inference-Engine, enter the following commands in a Linux, macOS, or Windows Subsystem for Linux shell:

git clone https://github.com/berkeleylab/inference-engine
cd inference-engine
./setup.sh

whereupon the trailing output will provide instructions for running the examples in the example subdirectory.

Examples

The example subdirectory contains demonstrations of several intended use cases.

Documentation

Please see the Inference-Engine GitHub Pages site for HTML documentation generated by ford.
