---
title: Artificial Intelligence
description: >
  The Artificial Intelligence initiative at Linaro aims to reduce fragmentation, through collaboration, in the deep learning neural network (NN) acceleration ecosystem, where currently every IP vendor forks the existing open source models and frameworks to integrate its hardware blocks and then tunes for performance.
keywords: Linaro, Aarch64, Performance, Kernel, assembly, Arm, Linux, hardware
image: /assets/images/content/Machine col.svg
permalink: /engineering/artificial-intelligence/
css-package: landing-page
js-package: engineering-landing-page
members:
  key: mi-incubator
related_resources_tracks:
  - AI/Machine Learning
  - Machine Learning/AI
  - AI and Neural Networks on Arm Summit
related_tags:
  - Automotive
  - ML
  - AI/ML
  - Autoware
jumbotron:
  title: Artificial Intelligence
  title-class: big-title
  background-image: /assets/images/content/machine-learning-bg.jpg
layout: flow
flow:
  - row: container_row
    style: large_type introduction_row
    sections:
      - format: text
        style: text-left no-padding
        text_content:
          text: >
            The Artificial Intelligence initiative at Linaro aims to reduce fragmentation, through collaboration, in the deep learning neural network (NN) acceleration ecosystem, where currently every IP vendor forks the existing open source models and frameworks to integrate its hardware blocks and then tunes for performance. This leads to duplicated effort among all players, a perpetual cost of re-integration with every rebase, and an overall increase in total cost of ownership.
      - format: text
        style: text-left no-padding
        text_content:
          text: >
            The initial focus is on inference on Cortex-A application processors running Linux and Android, covering both edge computing and smart devices. As part of its remit, the team will collaborate on the definition of an API and a modular framework for an Arm runtime inference engine architecture based on plug-ins, supporting dynamic modules and optimized shared Arm compute libraries.
  - row: container_row
    style: youtube_embed_row light_gray_row
    sections:
      - format: custom_include
        youtube_embed:
          title: Introduction to the Linaro Artificial Intelligence Initiative
        source: components/lazy_youtube_video_embed.html
  - row: container_row
    style: large_type introduction_row
    sections:
      - format: text
        style: text-left no-padding
        text_content:
          text: >
            Below are some of the Artificial Intelligence related sessions from the previous [Linaro Connect](https://connect.linaro.org):
  - row: main_content_row
  - row: custom_include_row
    source: engineering_related_resources.html
---
| Speaker | Company | ID | Title |
|---------|---------|----|-------|
| Chris Benson | AI Strategist | YVR18-300K2 | Keynote: Artificial Intelligence Strategy: Digital Transformation Through Deep Learning |
| Jem Davies | Arm | YVR18-300K1 | Keynote: Enabling Machine Learning to Explode with Open Standards and Collaboration |
| Robert Elliott | Arm | YVR18-329 | Arm NN intro |
| Pete Warden | Google Tensorflow | YVR18-338 | Tensorflow for Arm devices |
| Mark Charlebois | Qualcomm | YVR18-330 | Qualcomm Snapdragon AI Software |
| Thom Lane | Amazon AWS AI | YVR18-331 | ONNX and Edge Deployments |
| Jammy Zhou | Linaro | YVR18-332 | TVM compiler stack and ONNX support |
| Luba Tang | Skymizer | YVR18-333 | ONNC (Open Neural Network Compiler) for ARM Cortex-M |
| Shouyong Liu | Thundersoft | YVR18-334 | AI Alive: On Device and In-App |
| Ralph Wittig | Xilinx | YVR18-335 | Xilinx: AI on FPGA and ACAP Roadmap |
| Andrea Gallo and others | Linaro, Arm, Qualcomm, Skymizer, Xilinx | YVR18-337 | BoF: JIT vs offline compilers vs deploying at the Edge |