Integer Neural Net

The integer neural network is identical in form and function to a standard, floating-point neural network. The primary difference is that all operations are performed on integers rather than floating-point numbers. This reduces computational complexity and makes the network easier to implement in hardware (e.g. as a dedicated co-processor of some kind).
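As a rough sketch of the idea (not the repository's actual code), an integer neuron can accumulate fixed-point products in a wide integer accumulator and rescale with a bit shift, so no floating-point operation is ever needed; the `SHIFT` constant here is an assumed 12-bit fractional scale:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical fixed-point neuron: inputs and weights are integers scaled
// by 2^SHIFT (so 1 << SHIFT represents 1.0). The product of two scaled
// values carries 2*SHIFT fractional bits, so one right shift restores
// the original scale.
constexpr int SHIFT = 12; // assumed 12-bit fractional precision

int32_t neuron_sum(const std::vector<int32_t>& inputs,
                   const std::vector<int32_t>& weights) {
    int64_t acc = 0; // wide accumulator avoids overflow of summed products
    for (std::size_t i = 0; i < inputs.size(); ++i)
        acc += static_cast<int64_t>(inputs[i]) * weights[i];
    return static_cast<int32_t>(acc >> SHIFT); // back to SHIFT-bit scale
}
```

The 64-bit accumulator is the key design choice: each 32-bit product can occupy up to 64 bits, so summing in a narrower type would silently overflow.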

Training relies on gradient-based optimization of continuous functions, so many of the underlying mathematical operations needed to train a neural network are not well defined over integers. Integer networks are therefore typically not trained directly. Instead, training is performed on a standard floating-point network and the results are converted to an integer network at some defined bit depth. Likewise, the activation function normally relies on floating-point operations, so in the integer network's case it is precomputed, saved to a file, and read into memory as an array. This gives the integer neural network much faster access to the activation values.
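A precomputed activation table might look like the following sketch (the table size, input range, and output scale are all assumptions for illustration, not values taken from this repository). The table is built once with floating point, offline; at run time the integer network only performs a clamped array lookup:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Assumed parameters: a 12-bit table (4096 entries) covering sigmoid
// inputs in [-8, 8), with outputs scaled into [0, 4095].
constexpr int TABLE_BITS = 12;
constexpr int TABLE_SIZE = 1 << TABLE_BITS;
constexpr double X_MIN = -8.0, X_MAX = 8.0;
constexpr int OUT_SCALE = (1 << TABLE_BITS) - 1;

// Offline step: build the table with floating point (this is the part
// that would be saved to a file for the integer network to read back).
std::vector<int16_t> build_sigmoid_table() {
    std::vector<int16_t> table(TABLE_SIZE);
    for (int i = 0; i < TABLE_SIZE; ++i) {
        double x = X_MIN + (X_MAX - X_MIN) * i / TABLE_SIZE;
        table[i] = static_cast<int16_t>(
            std::lround(OUT_SCALE / (1.0 + std::exp(-x))));
    }
    return table;
}

// Run-time step: pure integer lookup, clamped to the table's range.
int16_t activate(const std::vector<int16_t>& table, int index) {
    if (index < 0) index = 0;
    if (index >= TABLE_SIZE) index = TABLE_SIZE - 1;
    return table[index];
}
```

Clamping out-of-range indices mirrors the sigmoid's saturation at 0 and 1, so extreme activation sums still map to sensible outputs.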

The provided code is the core necessary to run an integer neural network. Training on new data sets must be done with a floating-point neural network; the network in sepol/bp-neural-net is well suited to this purpose. In main.cpp, the neural network runner is nearly identical to that used in a standard network; the main modification is the declaration of the integer bit depth. The max neuron value sets the resolution of the activation table, while the max weight value sets the precision of converted weights. A greater bit depth (e.g. 16) allows finer resolution and hence more accuracy, while a lower one (e.g. 8) saves space in the activation table. For the included sample, 12 bits provides a decent depth for both neuron and weight values, and the accuracy lost is only a few percentage points compared to the original floating-point network.
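The float-to-integer weight conversion can be sketched as follows; the function name and the fixed 12-bit depth are illustrative assumptions, not the repository's actual API:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical weight conversion at an assumed bit depth of 12: each
// floating-point weight is scaled by 2^BIT_DEPTH and rounded to the
// nearest integer, so 0.5 becomes 2048 and the per-weight rounding
// error is at most 2^-13.
constexpr int BIT_DEPTH = 12;

std::vector<int32_t> quantize(const std::vector<double>& weights) {
    std::vector<int32_t> q(weights.size());
    for (std::size_t i = 0; i < weights.size(); ++i)
        q[i] = static_cast<int32_t>(
            std::lround(weights[i] * (1 << BIT_DEPTH)));
    return q;
}
```

This illustrates the accuracy trade-off described above: each extra bit of depth halves the maximum rounding error, at the cost of a larger activation table.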

The main file includes more notes on using the network, and it performs all of the necessary conversion operations needed to take the floating-point values and make them compatible with the integer network. The sample data is the same used in bp-neural-net. The saved values in weights.txt are derived from running the sample program in bp-neural-net as well.
