Orkhon: ML Inference Framework and Server Runtime


What is it?

Orkhon is a Rust framework for machine learning that runs inference/prediction code written in Python, serves frozen models, and processes unseen data. It is mainly focused on serving models and processing unseen data in a performant manner. Instead of using Python directly and running into scalability problems on servers, Orkhon addresses them with a built-in async API.

Main features

  • Sync & async API for models.
  • Easily embeddable engine for well-known Rust web frameworks.
  • API contract for interacting with Python code.
  • High processing throughput.
  • Python module caching.

Installation

You can include Orkhon in your project with:

[dependencies]
orkhon = "*"

Dependencies

You will need:

  • Rust nightly (for now, until async support fully lands in stable).
  • Python development dependencies installed, and a proper Python runtime available to your project.
  • The PYTHONHOME environment variable pointed at your Python installation.
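For example, on a typical Linux setup this might look like the following (the path is illustrative; use the prefix of your own Python installation):

```shell
# Point PYTHONHOME at the prefix of your Python installation (illustrative path).
export PYTHONHOME=/usr

# Verify that the interpreter resolves from that prefix.
python3 -c "import sys; print(sys.prefix)"
```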

Python API contract

For the Python API contract, take a look at the Project Documentation.
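As a rough illustration, a hook module that Orkhon calls might look like the sketch below. This is a hypothetical example only; the exact hook signature and return conventions are defined by the Python API contract in the Project Documentation. Here we assume the hook receives the request's args and kwargs and returns a result:

```python
# Hypothetical sketch of a Python hook module (e.g. tests/pymodels/model_test.py).
# The real contract is specified in the Orkhon Project Documentation; this
# assumes the hook is a plain function taking the request's args and kwargs.

def model_hook(args, kwargs):
    """Receive request args/kwargs from Orkhon and return a prediction result."""
    # Toy "inference": sum the numeric argument values from the request.
    return sum(args.values())
```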

Examples

Minimal Async Model Request Example

let o = Orkhon::new()
    .config(OrkhonConfig::new())
    .pymodel("model_which_will_be_tested", // Unique identifier of the model
             "tests/pymodels",             // Python module directory
             "model_test",                 // Python module file name
             "model_hook"                  // Hook (Python method) that Orkhon will call
    )
    .build();

// Args for the request hook
let mut request_args = HashMap::new();
request_args.insert("is", 10);
request_args.insert("are", 6);
request_args.insert("you", 5);

// Kwargs for the request hook
let mut request_kwargs = HashMap::<&str, &str>::new();

// Future handle for the request
let handle =
    o.pymodel_request_async(
        "model_which_will_be_tested",
        ORequest::with_body(
            PyModelRequest::new()
                .with_args(request_args)
                .with_kwargs(request_kwargs)
        )
    );

// Await the handle (inside an async context) and return the result
handle.await.unwrap()

License

Orkhon is licensed under the MIT License.

Documentation

Official documentation is hosted on docs.rs.

Getting Help

Please head to our Gitter channel or ask on StackOverflow.

Discussion and Development

We use Gitter for development discussions. Also, please don't hesitate to open issues on GitHub to ask for features, report bugs, comment on the design, and more! More interaction and more ideas are better!

Contributing to Orkhon

All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.

A detailed overview on how to contribute can be found in the CONTRIBUTING guide on GitHub.
