Yutian Li edited this page Apr 27, 2015 · 12 revisions

Minerva Wiki


Build and install Minerva and Owl (strongly recommended) as described in Install Minerva. In this wiki, we will mainly use the Python interface for demonstration.


Run ./run_owl_shell.sh in Minerva's root directory, then enter:

>>> x = owl.ones([10, 5])
>>> y = owl.ones([10, 5])
>>> z = x + y
>>> z.to_numpy()

The result will be a 10x5 array filled with the value 2. Minerva supports many ndarray operations; please see the API document for more information.
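For reference, the same computation expressed in plain NumPy (which to_numpy interoperates with) produces an identical result. This is a NumPy-only sketch of the semantics, not Minerva code:

```python
import numpy as np

# The owl example above adds two all-ones 10x5 arrays elementwise.
# The NumPy equivalent:
x = np.ones((10, 5))
y = np.ones((10, 5))
z = x + y

print(z.shape)   # (10, 5)
print(z[0, 0])   # 2.0
```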

Writing your own app

Before using Minerva in your own applications, you will need the following API calls:

  • System Initialization: The function call must precede any owl API calls.
    • On Python: owl.initialize(sys.argv)
    • On C++: MinervaSystem::Initialize(int argc, char** argv)
  • Device Creation: At least one of CPU and GPU devices should be created before any ndarray function calls.
    • On Python: owl.create_cpu_device(), owl.create_gpu_device(gpuid)
    • On C++: MinervaSystem::Instance().CreateCpuDevice(), MinervaSystem::Instance().CreateGpuDevice(int gpuid)
    • More about devices can be found in the wiki page about multi-GPU training.

So a typical Minerva-driven application will start like the following (in Python):

import owl
import sys
owl.initialize(sys.argv)
gpu = owl.create_gpu_device(0)
# application logic

Minerva allows you to write your own machine learning code using an ndarray interface, much like Matlab or NumPy. You can use C++ or Python, whichever you prefer; the C++ and Python interfaces are quite similar. With Python, you can load data with NumPy and use it in Minerva, or convert Minerva NArrays into NumPy arrays and plot/print them with the tools NumPy provides.
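A minimal sketch of this interoperability, assuming an owl.from_numpy helper is available in your build (to_numpy is shown in the shell example above; the data and shapes are illustrative):

```python
import sys
import numpy as np
import owl

owl.initialize(sys.argv)
cpu = owl.create_cpu_device()
owl.set_device(cpu)

# Load or generate data with NumPy...
data = np.random.rand(10, 5).astype(np.float32)

# ...move it into Minerva as an NArray (owl.from_numpy is assumed here),
x = owl.from_numpy(data)

# compute with Minerva,
y = x + x

# and bring the result back into NumPy for plotting/printing.
result = y.to_numpy()
print(result.shape)
```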

The NArray interface provided by Minerva is very intuitive. If you are familiar with any matrix programming tool such as Matlab or NumPy, it should be very easy to get started with Minerva. More detailed documents will be available soon.

Minerva allows you to use multiple GPUs at the same time. By using the set_device function, you can specify which device you want the operation to run on. Once set, all the operations/statements that follow will be performed on this device. This simple primitive will give you flexibility to parallelize on multiple devices (either CPU or GPU).
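The device-scoping behavior of set_device can be sketched as follows (a sketch assuming two GPUs are present; the device ids and the computations are illustrative):

```python
import sys
import owl

owl.initialize(sys.argv)
gpu0 = owl.create_gpu_device(0)
gpu1 = owl.create_gpu_device(1)

owl.set_device(gpu0)
a = owl.ones([10, 5])   # this and the following statement run on GPU 0
b = a + a

owl.set_device(gpu1)
c = owl.ones([10, 5])   # statements after the switch run on GPU 1
d = c + c
```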

Minerva uses asynchronous evaluation, meaning that operations are carried out in the background, so writing computation logic in either C++ or Python will not block the user thread. Once you try to inspect a result, either by printing some of its elements or by calling result.WaitForEval(), Minerva will block until it has finished the requested job. In this way, you can "push" multiple operations to different devices and then trigger the evaluation on all of them at the same time. This is how multi-GPU programming is done in Minerva. Please refer to the wiki page on multi-GPU training for more details.
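The push-then-wait pattern can be sketched like this (again assuming two GPUs; here the to_numpy calls are what force evaluation, standing in for the C++ WaitForEval shown above):

```python
import sys
import owl

owl.initialize(sys.argv)
gpu0 = owl.create_gpu_device(0)
gpu1 = owl.create_gpu_device(1)

# Push work to both devices; these calls return immediately
# because evaluation is asynchronous.
owl.set_device(gpu0)
a = owl.ones([10, 5])
r0 = a + a

owl.set_device(gpu1)
b = owl.ones([10, 5])
r1 = b + b

# Inspecting the results blocks until the work is done; until then,
# the two computations proceed concurrently on their devices.
print(r0.to_numpy())
print(r1.to_numpy())
```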

To understand more about Minerva, we recommend:

We also welcome any contributions to Minerva and to the FAQ.