# inference

Here are 575 public repositories matching this topic...

ncnn is a high-performance neural network inference framework optimized for the mobile platform.
Topics: android, ios, caffe, deep-learning, neural-network, mxnet, tensorflow, vulkan, keras, inference, pytorch, artificial-intelligence, simd, darknet, arm-neon, high-preformance, ncnn, onnx, mlir
Updated Mar 2, 2021 · C++

Runtime type system for IO decoding/encoding.
Updated Mar 2, 2021 · TypeScript

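A runtime type system like the one above checks untyped input against a declared schema at decode time. The idea can be sketched in Python with a toy `decode(value, schema)` helper (a conceptual sketch, not the library's actual API):

```python
def decode(value, schema):
    """Validate `value` against `schema`; return it unchanged or raise TypeError.

    A schema is a Python type (int, str, ...), a dict of field schemas,
    or a one-element list giving the element schema.
    """
    if isinstance(schema, dict):
        if not isinstance(value, dict):
            raise TypeError(f"expected object, got {type(value).__name__}")
        return {name: decode(value[name], s) for name, s in schema.items()}
    if isinstance(schema, list):
        if not isinstance(value, list):
            raise TypeError(f"expected array, got {type(value).__name__}")
        return [decode(item, schema[0]) for item in value]
    if not isinstance(value, schema):
        raise TypeError(f"expected {schema.__name__}, got {type(value).__name__}")
    return value
```

Decoding either returns the value with its structure verified or fails loudly at the boundary, which is the point of IO-level validation.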
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Topics: machine-learning, embedded, caffe, computer-vision, deep-learning, robotics, inference, nvidia, digits, image-recognition, segmentation, object-detection, jetson-tx1, jetson, tensorrt, jetson-tx2, video-analytics, jetson-xavier, jetson-nano, jetson-xavier-nx
Updated Mar 1, 2021 · C++

Grakn Core: The Knowledge Graph
Topics: database, graph, graph-algorithms, logic, inference, datalog, knowledge-graph, graph-theory, graph-database, graphdb, knowledge-base, query-language, graph-visualisation, knowledge-representation, reasoning, knowledge-engineering, enterprise-knowledge-graph, grakn, graql, hyper-relational
Updated Mar 2, 2021 · Java

An easy-to-use PyTorch to TensorRT converter.
Updated Mar 2, 2021 · Python

OpenVINO™ Toolkit repository.
Updated Mar 2, 2021 · C++

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Updated Mar 2, 2021 · C++

TensorFlow template application for deep learning.
Topics: machine-learning, csv, deep-learning, tensorflow, inference, cnn, lstm, tensorboard, mlp, libsvm, tfrecords, wide-and-deep, serving
Updated Jan 3, 2019 · Python

Acceleration package for neural networks on multi-core CPUs.
Topics: cpu, neural-network, high-performance, inference, multithreading, simd, matrix-multiplication, neural-networks, high-performance-computing, convolutional-layers, fast-fourier-transform, winograd-transform
Updated Dec 22, 2020 · C

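The fast-fourier-transform and Winograd kernels this package is tagged with rest on the same observation: convolution becomes cheap pointwise multiplication in a transformed domain. A minimal pure-Python sketch of the FFT route (toy code for illustration, not the package's implementation):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / len(x) for v in y]

def conv_direct(a, b):
    """O(n*m) direct (sliding-window) convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def conv_fft(a, b):
    """O(n log n) convolution: zero-pad, multiply pointwise in frequency space."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    out = ifft([p * q for p, q in zip(fa, fb)])
    return [round(v.real, 9) for v in out[: len(a) + len(b) - 1]]
```

Both routes produce the same result; the transform route wins once the signals are large, which is why conv layers benefit from FFT- and Winograd-based kernels.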
DELTA is a deep-learning-based natural language and speech processing platform.
Topics: nlp, front-end, ops, deep-learning, text-classification, tensorflow, nlu, speech, inference, text-generation, speech-recognition, seq2seq, sequence-to-sequence, speaker-verification, asr, tensorflow-serving, emotion-recognition, custom-ops, serving, tensorflow-lite
Updated Feb 6, 2021 · Python

Deploy an ML inference service on a budget in less than 10 lines of code.
Updated Feb 11, 2021 · Python

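A budget inference service of the kind described can be sketched with nothing but the standard library; `predict` here is a hypothetical stand-in model with made-up weights, not the project's API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in "model": a fixed linear scorer with made-up weights.
    weights = [0.5, -0.25, 1.0]
    return sum(w * x for w, x in zip(weights, features))

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like {"features": [1.0, 2.0, 3.0]}.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        score = predict(json.loads(body)["features"])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"score": score}).encode())

# To serve: HTTPServer(("", 8000), InferenceHandler).serve_forever()
```

Swapping `predict` for a real model call is the only change a minimal deployment needs.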
Pytorch-Named-Entity-Recognition-with-BERT
Topics: curl, inference, pytorch, cpp11, named-entity-recognition, postman, pretrained-models, bert, conll-2003, bert-ner
Updated Jan 24, 2020 · Python

High-efficiency floating-point neural network inference operators for mobile, server, and Web.
Topics: cpu, neural-network, inference, multithreading, simd, matrix-multiplication, neural-networks, convolutional-neural-networks, convolutional-neural-network, inference-optimization, mobile-inference
Updated Mar 2, 2021 · C

Topics: typescript, matching, pattern, pattern-matching, inference, ts, type-inference, typescript-pattern-matching
Updated Mar 1, 2021 · TypeScript

TensorFlow models accelerated with NVIDIA TensorRT.
Topics: neural-network, tensorflow, models, realtime, inference, optimize, nvidia, image-classification, object-detection, train, tx1, jetson, tensorrt, tx2
Updated Feb 14, 2021 · Python

Lua Language Server written in Lua.
Updated Mar 2, 2021 · Lua

Embedded and mobile deep learning research resources.
Topics: deep-neural-networks, deep-learning, inference, pruning, quantization, neural-network-compression, mobile-deep-learning, embedded-ai, efficient-neural-networks, mobile-ai, mobile-inference
Updated Oct 25, 2019

Shape and dimension inference (Keras-like) for PyTorch layers and neural networks.
Updated Oct 6, 2020 · Python

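What such a shape-inference helper computes for a conv layer follows standard convolution arithmetic; a toy sketch of that formula (hypothetical helper, not the library's API):

```python
def conv2d_out_shape(h, w, kernel, stride=1, padding=0, dilation=1):
    """Output (height, width) of a 2-D convolution, per standard conv arithmetic:
    floor((size + 2*padding - dilation*(kernel-1) - 1) / stride) + 1
    """
    def one(size):
        return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
    return one(h), one(w)
```

Chaining this per layer is how a Keras-like summary derives every intermediate tensor shape without running the network.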
LightSeq: A High Performance Inference Library for Sequence Processing and Generation
Updated Feb 18, 2021 · Cuda

Package for causal inference in graphs and in pairwise settings. Tools for graph structure recovery and dependency discovery are included.
Topics: python, machine-learning, algorithm, graph, inference, toolbox, causality, causal-inference, causal-models, graph-structure-recovery, causal-discovery
Updated Jan 13, 2021 · Python

TensorFlow examples in C, C++, Go, and Python without Bazel but with CMake and FindTensorFlow.cmake.
Topics: c, golang, opencv, cmake, deep-learning, cpp, tensorflow, cuda, inference, tensorflow-cmake, tensorflow-examples, tensorflow-gpu, tensorflow-cc
Updated Aug 18, 2019 · CMake

A REST API for Caffe using Docker and Go.
Updated Jul 20, 2018 · C++