Merge branch 'main' into HGQ-integration

calad0i committed Dec 8, 2023
2 parents 35f5cd8 + 033d438 commit 1430729
Showing 12 changed files with 180 additions and 32 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -2,7 +2,7 @@ exclude: (^hls4ml\/templates\/(vivado|quartus)\/(ap_types|ac_types)\/|^test/pyte

repos:
- repo: https://github.com/psf/black
rev: 23.10.1
rev: 23.11.0
hooks:
- id: black
language_version: python3
2 changes: 1 addition & 1 deletion CITATION.cff
@@ -4,7 +4,7 @@ type: software
authors:
- given-names: "FastML Team"
title: "hls4ml"
version: "v0.8.0rc1"
version: "v0.8.0"
doi: 10.5281/zenodo.1201549
repository-code: "https://github.com/fastmachinelearning/hls4ml"
url: "https://fastmachinelearning.org/hls4ml"
14 changes: 12 additions & 2 deletions README.md
@@ -1,4 +1,4 @@
<p float="left">
<p align="center">
<img src="https://github.com/fastmachinelearning/fastmachinelearning.github.io/raw/master/images/hls4ml_logo.svg" alt="hls4ml" width="400"/>
</p>

@@ -69,7 +69,7 @@ If you use this software in a publication, please cite the software
title = {fastmachinelearning/hls4ml},
year = 2023,
publisher = {Zenodo},
version = {v0.8.0rc1},
version = {v0.8.0},
doi = {10.5281/zenodo.1201549},
url = {https://github.com/fastmachinelearning/hls4ml}
}
@@ -140,3 +140,13 @@ binary/ternary networks:
If you benefited from participating in our community, we ask that you please acknowledge the Fast Machine Learning collaboration, and particular individuals who helped you, in any publications.
Please use the following text for this acknowledgment:
> We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community and \<names of individuals\>, in particular, were important for the development of this project.
# Funding
We gratefully acknowledge previous and current support from the U.S. National Science Foundation (NSF) Harnessing the Data Revolution (HDR) Institute for <a href="https://a3d3.ai">Accelerating AI Algorithms for Data Driven Discovery (A3D3)</a> under Cooperative Agreement No. <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=2117997">OAC-2117997</a>, U.S. Department of Energy (DOE) Office of Science, Office of Advanced Scientific Computing Research under the Real‐time Data Reduction Codesign at the Extreme Edge for Science (XDR) Project (<a href="https://science.osti.gov/-/media/grants/pdf/foas/2021/SC_FOA_0002501.pdf">DE-FOA-0002501</a>), DOE Office of Science, Office of High Energy Physics Early Career Research Program (<a href="https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=df0ae4ab-a46e-481a-9acc-3856b6b041e5&rtc=24&PRoleId=10">DE-SC0021187</a>, DE-0000247070), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant No. <a href="https://doi.org/10.3030/772369">772369</a>).

<p align="center">
<img src="https://github.com/fastmachinelearning/hls4ml/assets/29201053/bd1217d4-9930-47b7-8917-ad3fc430c75d" alt="A3D3" width="130"/>
<img src="https://github.com/fastmachinelearning/hls4ml/assets/4932543/16e77374-9829-40a8-800e-8d12018a7cb3" alt="NSF" width="130"/>
<img src="https://github.com/fastmachinelearning/hls4ml/assets/4932543/de6ca6ea-4d1c-4c56-9d93-f759914bbbf9" alt="DOE" width="130"/>
<img src="https://github.com/fastmachinelearning/hls4ml/assets/4932543/7a369971-a381-4bb8-932a-7162b173cbac" alt="ERC" width="130"/>
</p>
4 changes: 2 additions & 2 deletions docs/api/configuration.rst
@@ -70,7 +70,7 @@ It looks like this:
OutputPredictions: keras/KERAS_3layer_predictions.dat
# Backend section (Vivado backend)
Part: xcku115-flvb2104-2-i
Part: xcvu13p-flga2577-2-e
ClockPeriod: 5
IOType: io_parallel # options: io_parallel/io_stream
@@ -97,7 +97,7 @@ There are a number of configuration options that you have. Let's go through the
The backend-specific section of the configuration depends on the backend. You can get a starting point for the necessary settings using, for example `hls4ml.templates.get_backend('Vivado').create_initial_config()`.
For Vivado backend the options are:

* **Part**\ : the particular FPGA part number that you are considering, here it's a Xilinx Virtex-7 FPGA
* **Part**\ : the particular FPGA part number that you are considering, here it's a Xilinx Virtex UltraScale+ VU13P FPGA
* **ClockPeriod**\ : the clock period, in ns, at which your algorithm runs
Then you have some optimization parameters for how your algorithm runs:
* **IOType**\ : your options are ``io_parallel`` or ``io_stream`` which defines the type of data structure used for inputs, intermediate activations between layers, and outputs. For ``io_parallel``, arrays are used that, in principle, can be fully unrolled and are typically implemented in RAMs. For ``io_stream``, HLS streams are used, which are a more efficient/scalable mechanism to represent data that are produced and consumed in a sequential manner. Typically, HLS streams are implemented with FIFOs instead of RAMs. For more information see `here <https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/pragma-HLS-stream>`__.
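A minimal usage sketch of the options documented above, built around the `hls4ml.templates.get_backend('Vivado').create_initial_config()` entry point quoted in the docs. The keyword arguments mirror the backend signature changed later in this diff; the part string is the new default and any supported part number can be substituted.

```python
import hls4ml

# Sketch, assuming an hls4ml installation: generate a starting backend
# configuration programmatically instead of writing the YAML section by hand.
backend = hls4ml.templates.get_backend('Vivado')
config = backend.create_initial_config(
    part='xcvu13p-flga2577-2-e',  # the new default part introduced in this commit
    clock_period=5,               # clock period in ns
    io_type='io_parallel',        # or 'io_stream' for FIFO-backed streaming I/O
)

print(config['Part'], config['ClockPeriod'], config['IOType'])
```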
29 changes: 25 additions & 4 deletions docs/reference.rst
@@ -1,6 +1,6 @@
============================
Citation and Contributors
============================
===========================================
Citation, Acknowledgments, and Contributors
===========================================


Citation
@@ -14,7 +14,7 @@ If you use this software in a publication, please cite the software
title = {fastmachinelearning/hls4ml},
year = 2023,
publisher = {Zenodo},
version = {v0.8.0rc1},
version = {v0.8.0},
doi = {10.5281/zenodo.1201549},
url = {https://github.com/fastmachinelearning/hls4ml}
}
@@ -90,9 +90,30 @@ Acknowledgments
===============
If you benefited from participating in our community, we ask that you please acknowledge the Fast Machine Learning collaboration, and particular individuals who helped you, in any publications.
Please use the following text for this acknowledgment:

We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community and \<names of individuals\>, in particular, were important for the development of this project.


Funding
=======
We gratefully acknowledge previous and current support from the U.S. National Science Foundation (NSF) Harnessing the Data Revolution (HDR) Institute for `Accelerating AI Algorithms for Data Driven Discovery (A3D3) <https://a3d3.ai>`_ under Cooperative Agreement No. `OAC-2117997 <https://www.nsf.gov/awardsearch/showAward?AWD_ID=2117997>`_, U.S. Department of Energy (DOE) Office of Science, Office of Advanced Scientific Computing Research under the Real‐time Data Reduction Codesign at the Extreme Edge for Science (XDR) Project (`DE-FOA-0002501 <https://science.osti.gov/-/media/grants/pdf/foas/2021/SC_FOA_0002501.pdf>`_), DOE Office of Science, Office of High Energy Physics Early Career Research Program (`DE-SC0021187 <https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=df0ae4ab-a46e-481a-9acc-3856b6b041e5&rtc=24&PRoleId=10>`_, DE-0000247070), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant No. `772369 <https://doi.org/10.3030/772369>`_).

.. image:: https://github.com/fastmachinelearning/hls4ml/assets/4932543/d4b6e2a3-3537-4413-9809-8153a7d624d6
:height: 200
:align: center

.. image:: https://github.com/fastmachinelearning/hls4ml/assets/4932543/16e77374-9829-40a8-800e-8d12018a7cb3
:height: 200
:align: center

.. image:: https://github.com/fastmachinelearning/hls4ml/assets/4932543/de6ca6ea-4d1c-4c56-9d93-f759914bbbf9
:height: 200
:align: center

.. image:: https://github.com/fastmachinelearning/hls4ml/assets/4932543/7a369971-a381-4bb8-932a-7162b173cbac
:height: 200
:align: center

Contributors
============

4 changes: 2 additions & 2 deletions hls4ml/backends/vivado/vivado_backend.py
@@ -176,10 +176,10 @@ def get_default_flow(self):
def get_writer_flow(self):
return self._writer_flow

def create_initial_config(self, part='xcku115-flvb2104-2-i', clock_period=5, io_type='io_parallel'):
def create_initial_config(self, part='xcvu13p-flga2577-2-e', clock_period=5, io_type='io_parallel'):
config = {}

config['Part'] = part if part is not None else 'xcku115-flvb2104-2-i'
config['Part'] = part if part is not None else 'xcvu13p-flga2577-2-e'
config['ClockPeriod'] = clock_period
config['IOType'] = io_type
config['HLSConfig'] = {}
2 changes: 1 addition & 1 deletion hls4ml/converters/__init__.py
@@ -74,7 +74,7 @@ def parse_yaml_config(config_file):
KerasH5: my_keras_model.h5
OutputDir: my-hls-test
ProjectName: myproject
Part: xcku115-flvb2104-2-i
Part: xcvu13p-flga2577-2-e
ClockPeriod: 5
IOType: io_stream
HLSConfig:
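As a companion to the docstring above, a short sketch of loading such a YAML file with `parse_yaml_config`; the file name `config.yml` is a placeholder, and the resulting dictionary can be handed to `keras_to_hls` as done in the tests added later in this commit.

```python
import hls4ml

# Sketch: parse a project configuration laid out as in the docstring above.
# 'config.yml' is a placeholder path for a file written locally.
config = hls4ml.converters.parse_yaml_config('config.yml')

print(config['ProjectName'], config['Part'], config['ClockPeriod'])

# The same dictionary drives the conversion itself.
hls_model = hls4ml.converters.keras_to_hls(config)
```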
35 changes: 17 additions & 18 deletions hls4ml/utils/example_models.py
@@ -6,17 +6,18 @@

from .config import create_config

ORGANIZATION = 'fastmachinelearning'
BRANCH = 'master'


def _load_data_config_avai(model_name):
"""
Check data and configuration availability for each model from this file:
https://github.com/hls-fpga-machine-learning/example-models/blob/master/available_data_config.json
https://github.com/fastmachinelearning/example-models/blob/master/available_data_config.json
"""

link_to_list = (
'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/available_data_config.json'
)
link_to_list = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/available_data_config.json'

temp_file, _ = urlretrieve(link_to_list)

@@ -73,12 +74,8 @@ def _load_example_data(model_name):
input_file_name = filtered_name + "_input.dat"
output_file_name = filtered_name + "_output.dat"

link_to_input = (
'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/data/' + input_file_name
)
link_to_output = (
'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/data/' + output_file_name
)
link_to_input = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/data/' + input_file_name
link_to_output = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/data/' + output_file_name

urlretrieve(link_to_input, input_file_name)
urlretrieve(link_to_output, output_file_name)
@@ -91,9 +88,7 @@ def _load_example_config(model_name):

config_name = filtered_name + "_config.yml"

link_to_config = (
'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/config-files/' + config_name
)
link_to_config = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/config-files/' + config_name

# Load the configuration as dictionary from file
urlretrieve(link_to_config, config_name)
@@ -110,7 +105,7 @@ def fetch_example_model(model_name, backend='Vivado'):
Download an example model (and example data & configuration if available) from github repo to working directory,
and return the corresponding configuration:
https://github.com/hls-fpga-machine-learning/example-models
https://github.com/fastmachinelearning/example-models
Use fetch_example_list() to see all the available models.
@@ -122,15 +117,18 @@ dict: Dictionary that stores the configuration to the model
dict: Dictionary that stores the configuration to the model
"""

# Initilize the download link and model type
download_link = 'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/'
# Initialize the download link and model type
download_link = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/'
model_type = None
model_config = None

# Check for model's type to update link
if '.json' in model_name:
model_type = 'keras'
model_config = 'KerasJson'
elif '.h5' in model_name:
model_type = 'keras'
model_config = 'KerasH5'
elif '.pt' in model_name:
model_type = 'pytorch'
model_config = 'PytorchModel'
@@ -158,11 +156,12 @@

if _config_is_available(model_name):
config = _load_example_config(model_name)
config[model_config] = model_name # Ensure that paths are correct
else:
config = _create_default_config(model_name, model_config, backend)

# If the model is a keras model then have to download its weight file as well
if model_type == 'keras':
if model_type == 'keras' and '.json' in model_name:
model_weight_name = model_name[:-5] + "_weights.h5"

download_link_weight = download_link + model_type + '/' + model_weight_name
@@ -174,7 +173,7 @@


def fetch_example_list():
link_to_list = 'https://raw.githubusercontent.com/hls-fpga-machine-learning/example-models/master/available_models.json'
link_to_list = f'https://raw.githubusercontent.com/{ORGANIZATION}/example-models/{BRANCH}/available_models.json'

temp_file, _ = urlretrieve(link_to_list)

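For reference, a condensed sketch of the refactored helpers; it follows the new `test/pytest/test_fetch_example.py` added below, and all downloads land in the current working directory.

```python
import hls4ml

# Print the dictionary of available example models.
hls4ml.utils.fetch_example_list()

# Download an example model (plus weights, data and stored configuration when
# available) and get back a configuration dictionary for the chosen backend.
config = hls4ml.utils.fetch_example_model('qkeras_mnist_cnn.json', backend='Vivado')
config['Backend'] = 'Vivado'  # stored configurations do not set a Backend value
config['OutputDir'] = 'hls4mlprj_fetch_example'  # illustrative output directory

hls_model = hls4ml.converters.keras_to_hls(config)
hls_model.compile()
```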
2 changes: 1 addition & 1 deletion scripts/hls4ml
@@ -40,7 +40,7 @@ def main():
)
config_parser.add_argument('-p', '--project', help='Project name', default='myproject')
config_parser.add_argument('-d', '--dir', help='Project output directory', default='my-hls-test')
config_parser.add_argument('-f', '--fpga', help='FPGA part', default='xcku115-flvb2104-2-i')
config_parser.add_argument('-f', '--fpga', help='FPGA part', default='xcvu13p-flga2577-2-e')
config_parser.add_argument('-bo', '--board', help='Board used.', default='pynq-z2')
config_parser.add_argument(
'-ba', '--backend', help='Backend to use (Vivado, VivadoAccelerator, Quartus)', default='Vivado'
32 changes: 32 additions & 0 deletions test/pytest/test_fetch_example.py
@@ -0,0 +1,32 @@
import ast
import io
from contextlib import redirect_stdout
from pathlib import Path

import pytest

import hls4ml

test_root_path = Path(__file__).parent


@pytest.mark.parametrize('backend', ['Vivado', 'Vitis', 'Quartus'])
def test_fetch_example_utils(backend):
f = io.StringIO()
with redirect_stdout(f):
hls4ml.utils.fetch_example_list()
out = f.getvalue()

model_list = ast.literal_eval(out) # Check if we indeed got a dictionary back

assert 'qkeras_mnist_cnn.json' in model_list['keras']

# This model has an example config that is also downloaded. Stored configurations don't set "Backend" value.
config = hls4ml.utils.fetch_example_model('qkeras_mnist_cnn.json', backend=backend)
config['KerasJson'] = 'qkeras_mnist_cnn.json'
config['KerasH5'] = 'qkeras_mnist_cnn_weights.h5'
config['Backend'] = backend
config['OutputDir'] = str(test_root_path / f'hls4mlprj_fetch_example_{backend}')

hls_model = hls4ml.converters.keras_to_hls(config)
hls_model.compile() # For now, it is enough if it compiles, we're only testing downloading works as expected
27 changes: 27 additions & 0 deletions test/pytest/test_repack_precision.py
@@ -0,0 +1,27 @@
from tensorflow import keras

from hls4ml.converters import convert_from_keras_model


def test_repack_precision():
inp = keras.Input(shape=(3, 3), name='inp')
out = keras.layers.Reshape((3, 3), name='reshape')(inp)
out = keras.layers.Conv1D(2, 2, name='conv')(out)
model = keras.Model(inp, out)

layer_conf = {
'inp': {'Precision': 'fixed<20,10>'},
'reshape': {'Precision': 'fixed<20,10>'},
'conv': {'Precision': 'fixed<20,10>'},
}

hls_config = {'Model': {'Precision': 'fixed<2,1>', 'ReuseFactor': 1}, 'LayerName': layer_conf}

# Repack only happens in io_stream
model_hls = convert_from_keras_model(model, hls_config=hls_config, io_type='io_stream')
assert 'repack_reshape' in model_hls.graph, 'repack_reshape not found in graph'
repack_precision = model_hls.graph['repack_reshape'].attributes['result_t'].precision
assert repack_precision.integer == 10, 'Precision mismatch'
assert repack_precision.fractional == 10, 'Precision mismatch'
assert repack_precision.width == 20, 'Precision mismatch'
assert repack_precision.signed is True, 'Precision mismatch'
59 changes: 59 additions & 0 deletions test/pytest/test_stream_multi_clone.py
@@ -0,0 +1,59 @@
import os
import random
from pathlib import Path

import numpy as np
import pytest
import tensorflow as tf
from keras.layers import Add, Dense
from tensorflow import keras

from hls4ml.converters import convert_from_keras_model

test_root_path = Path(__file__).parent


@pytest.fixture(scope='module')
def model():
seed = 42
os.environ['RANDOM_SEED'] = f'{seed}'
np.random.seed(seed)
tf.random.set_seed(seed)
tf.get_logger().setLevel('ERROR')
random.seed(seed)

inp = keras.Input(shape=(10,))
x = Dense(10)(inp)
y = Dense(10)(inp)
z = Dense(10)(inp)
xy = Add()([x, y]) # 5
xy = Add()([xy, y]) # 5
out = Add()([xy, z]) # 5
model = keras.Model(inp, out)
return model


@pytest.fixture(scope='module')
def data():
rng = np.random.RandomState(42)
X = rng.normal(0, 1, (1000, 10))
X = np.clip(X, -16, 15)
return X


@pytest.mark.parametrize('backend', ['Vivado', 'Quartus', 'Vitis'])
def test_multi_clone(model, data, backend: str):
output_dir = str(test_root_path / f'hls4mlprj_stream_multi_clone_{backend}')
hls_config = {'Model': {'Precision': 'fixed<32,5>', 'ReuseFactor': 1}}
model_hls = convert_from_keras_model(
model,
backend=backend,
output_dir=output_dir,
hls_config=hls_config,
io_type='io_stream', # clone only happens with stream io.
)
model_hls.compile()
r_hls = model_hls.predict(data)
r_keras = model(data).numpy()

assert np.allclose(r_hls, r_keras, atol=1e-5, rtol=0)
