This section explains how to use Apyfal with Python to run accelerators.
All of these examples require you to first install Apyfal and to have an Accelize account (the accelize_client_id and accelize_secret_id parameters in the following examples). See installation and configuration for more information.
You also need the name of the accelerator you want to use (the accelerator parameter in the following examples). See our distribution platform <https://drmportal.accelize.com/front/customer/listpurchase> for more information.
The examples below use configuration by arguments for clarity, but you can also set them using the configuration file.
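As a sketch, such an INI-style configuration file might look like the following; the section and key names shown here ([accelize], [host]) are assumptions and should be checked against the Apyfal configuration documentation for your version:

```ini
; accelerator.conf - hypothetical sketch of an Apyfal configuration file.
; Verify section and key names against the Apyfal configuration documentation.
[accelize]
client_id = my_accelize_client_id
secret_id = my_accelize_secret_id

[host]
host_type = my_provider
region    = my_region
client_id = my_client_id
secret_id = my_secret_id
```

Values passed as arguments take precedence over values read from the configuration file.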
You can enable the Apyfal logger to see more details about each running step. This is particularly useful when running tests or working through examples:
import apyfal
apyfal.get_logger(True)
This tutorial will describe how to create a simple accelerator and process a file using a Cloud Service Provider (CSP) as a host.
The parameters required in this case may depend on the CSP used, but will always include:
- host_type: CSP name.
- region: CSP region name (a region that supports FPGA is required).
- client_id and secret_id: CSP account details.

See api_host for information about potential parameters of the targeted CSP.
See your CSP documentation for information about how to obtain these values.
# Import the accelerator module.
import apyfal

# Choose an accelerator to use and configure it.
with apyfal.Accelerator(
        # Accelerator parameters
        accelerator='my_accelerator',
        # Host parameters
        host_type='my_provider', region='my_region',
        client_id='my_client_id', secret_id='my_secret_id',
        # Accelize parameters
        accelize_client_id='my_accelize_client_id',
        accelize_secret_id='my_accelize_secret_id') as myaccel:

    # Start the accelerator:
    # A new cloud instance is created and your account details are passed
    # to the Accelerator as host.
    # Note: this step can take a few minutes, depending on the CSP.
    myaccel.start()

    # Process data:
    # Define which data to process and where the results should be stored.
    myaccel.process(src='/path/myfile1.dat', dst='/path/result1.dat')
    myaccel.process(src='/path/myfile2.dat', dst='/path/result2.dat')
    # ... Any number of files can be processed.

# The accelerator is automatically closed on "with" exit.
# In this case, the default stop_mode ('term') is used: the previously
# created host is deleted and all its content is lost.
Starting a host takes a long time, so it may be a good idea to keep it running for later use. You can do this using the stop_mode parameter.
Depending on your CSP, additional fees may apply based on the host running time. Don’t forget to terminate your cloud instance after use.
import apyfal

with apyfal.Accelerator(
        accelerator='my_accelerator',
        host_type='my_provider', region='my_region',
        client_id='my_client_id', secret_id='my_secret_id',
        accelize_client_id='my_accelize_client_id',
        accelize_secret_id='my_accelize_secret_id') as myaccel:

    # We can start the accelerator in "keep" stop mode to keep the
    # host running
    myaccel.start(stop_mode='keep')

    myaccel.process(src='/path/myfile.dat', dst='/path/result.dat')

    # We can get and store the host IP and instance ID for later use
    my_host_instance_id = myaccel.host.instance_id
    my_host_ip = myaccel.host.host_ip

# This time the host is not deleted and will stay running when the
# accelerator is closed.
With instance_id, depending on your CSP, you can reuse an already existing host without providing the client_id and secret_id.

An accelerator started with instance_id keeps control of the host and can stop it at any time.
import apyfal

# We select the host to use on Accelerator instantiation
# with its instance ID stored previously
with apyfal.Accelerator(
        accelerator='my_accelerator',
        host_type='my_provider', region='my_region',
        # Use 'instance_id' and remove 'client_id' and 'secret_id'
        instance_id='my_host_instance_id',
        accelize_client_id='my_accelize_client_id',
        accelize_secret_id='my_accelize_secret_id') as myaccel:

    myaccel.start()

    myaccel.process(src='/path/myfile.dat', dst='/path/result.dat')
With host_ip, you can reuse an already existing host without providing any other host information.

An accelerator started with host_ip has no control over the host and can't stop it.
import apyfal

# We can also select the host to use on Accelerator instantiation
# with its IP address stored previously
with apyfal.Accelerator(
        accelerator='my_accelerator',
        # Use 'host_ip' and remove any other host parameter
        host_ip='my_host_ip',
        accelize_client_id='my_accelize_client_id',
        accelize_secret_id='my_accelize_secret_id') as myaccel:

    myaccel.start()

    myaccel.process(src='/path/myfile.dat', dst='/path/result.dat')
This tutorial describes using an accelerator locally on an already-configured FPGA host.
An already-configured host is required to use this feature.
You can easily create a cloud instance using Apyfal and keep the host running using the stop_mode='keep' parameter. See above for more information.
Don’t forget to terminate the cloud instance after use to avoid additional fees.
You connect to your host using SSH:

- key_pair is the key pair name, which can be obtained with myaccel.host.key_pair. The related private key file in .pem format is generally stored in the .ssh subfolder of the user home directory.
- host_ip is the IP address of the instance, which can be obtained with myaccel.host.host_ip.
Linux:
ssh -Yt -i ~/.ssh/${key_pair}.pem centos@${host_ip}
Windows:
On Windows, you can use PuTTY to connect with SSH. The private key file needs to be in .ppk format (puttygen.exe, supplied with PuTTY, can convert .pem to .ppk).
putty.exe -ssh centos@%host_ip% 22 -i %userprofile%\.ssh\%key_pair%.ppk
Running Apyfal in this case is straightforward as the accelerator is preconfigured:
- By default, the accelize_client_id and accelize_secret_id values are those used when creating the instance. You can change them by passing other values.
- The accelerator value is the one used when creating the instance and cannot be changed.
- Host-related arguments (stop_mode, host_ip, etc.) are not required and have no effect.
import apyfal

with apyfal.Accelerator() as myaccel:

    myaccel.start()

    myaccel.process(src='/path/myfile.dat', dst='/path/result.dat')
Some accelerators require configuration before being run. An accelerator is configured using the start and process methods.

Parameters passed to start apply to every process call that follows. You can call start again to change parameters.
The start parameters are divided into two parts:

- The src argument: Some accelerators may require data to run. Read the accelerator documentation to see which data to use.
- The **parameters argument(s): Specific configuration parameters passed as keyword arguments. See the accelerator documentation for more information about possible specific configuration parameters. Any value passed this way overrides the default configuration values.
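The **parameters mechanism is plain Python keyword-argument collection. A minimal sketch with a hypothetical function (not part of Apyfal) shows where extra keywords end up:

```python
def start(src=None, **parameters):
    # Keyword arguments not named explicitly in the signature are
    # collected into the "parameters" dict; this is how
    # accelerator-specific settings reach the accelerator.
    return parameters

# "src" is consumed by the named argument; the rest land in **parameters:
config = start(src='/path/src1.dat', parameter1='a', parameter2='b')
print(config)  # {'parameter1': 'a', 'parameter2': 'b'}
```

Any keyword accepted this way is forwarded as a specific configuration parameter, which is why the set of valid names depends on the accelerator.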
import apyfal

with apyfal.Accelerator(accelerator='my_accelerator') as myaccel:

    # The parameters are passed to "start" to configure the accelerator;
    # these parameters are:
    # - src: The path to the "src1.dat" data.
    # - parameter1, parameter2: Keyword parameters passed to the
    #   "**parameters" argument.
    myaccel.start(src='/path/src1.dat',
                  parameter1='my_parameter_1', parameter2='my_parameter_2')

    # Every "process" call after "start" uses the previously specified
    # parameters to perform processing
    myaccel.process(src='/path/myfile1.dat', dst='/path/result1.dat')
    myaccel.process(src='/path/myfile2.dat', dst='/path/result2.dat')
    # ...

    # It is possible to call the "start" method again with other parameters
    myaccel.start(src='/path/src2.dat')

    # The following "process" calls will use the new parameters.
    myaccel.process(src='/path/myfile3.dat', dst='/path/result3.dat')
    # ...
Parameters passed to process apply only to that process call.

The process method accepts the following arguments:

- src: Input data. Check the accelerator documentation to see if input data is required.
- dst: Output data. Check the accelerator documentation to see if output data is required.
- The **parameters argument(s): Specific configuration parameters passed as keyword arguments. See the accelerator documentation for more information about possible specific configuration parameters. Any value passed this way overrides the default configuration values.
import apyfal

with apyfal.Accelerator(accelerator='my_accelerator') as myaccel:

    myaccel.start()

    # The parameters are passed to "process" to configure it;
    # these parameters are:
    # - parameter1, parameter2: Keyword parameters passed to the
    #   "**parameters" argument.
    myaccel.process(src='/path/myfile1.dat', dst='/path/result1.dat',
                    parameter1='my_parameter_1', parameter2='my_parameter_2')
The process method waits for the result from the accelerator before returning.

The process_submit method is a non-blocking, asynchronous equivalent of process. It returns a concurrent.futures.Future object to handle the result, and can be used to submit multiple processing tasks in parallel to reduce data transfer and network overhead.

Note that the hardware-accelerated processing itself is exclusive and does not benefit from parallel tasks.
import apyfal

data_list = ['/path/myfile1', '/path/myfile2', '/path/myfile3']

with apyfal.Accelerator(accelerator='my_accelerator') as myaccel:

    myaccel.start()

    # Submit asynchronous processing tasks for a list of data
    futures = [myaccel.process_submit(src=my_data)
               for my_data in data_list]

    # All processing tasks are performed in parallel.
    # It is now possible to wait and get results from "Future" objects.
    results = [future.result() for future in futures]
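The objects returned by process_submit are standard concurrent.futures.Future instances, so the usual Future methods (result, done, exception) apply. As a stdlib-only sketch of the same submit-then-collect pattern, with a hypothetical fake_process function standing in for accelerator processing (no accelerator involved):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_process(src):
    # Stand-in for an accelerator processing call.
    return src * src

data_list = [1, 2, 3]

with ThreadPoolExecutor() as executor:
    # submit() returns Future objects immediately (non-blocking)...
    futures = [executor.submit(fake_process, src) for src in data_list]
    # ...and result() blocks until each task has completed.
    results = [future.result() for future in futures]

print(results)  # [1, 4, 9]
```

The same collect step works on the futures returned by process_submit, since they follow this standard interface.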
A process_map method also exists to directly submit iterables of data to process.
import apyfal

data_list = ['/path/myfile1', '/path/myfile2', '/path/myfile3']

with apyfal.Accelerator(accelerator='my_accelerator') as myaccel:

    myaccel.start()

    # This performs the previous example in only one line
    results = myaccel.process_map(srcs=data_list)
Using Accelerators consumes “units” based on your pricing plan. You can access your metering information via your Accelize account.