Mars


Mars is a tensor-based unified framework for large-scale data computation. Documentation.

Installation

Mars is easy to install with pip:

pip install pymars

To install the dependencies required by the distributed version, use the command below:

pip install 'pymars[distributed]'

For now, the distributed version is only available on Linux and macOS.

Developer Install

If you want to contribute code to Mars, follow the instructions below to install Mars for development:

git clone https://github.com/mars-project/mars.git
cd mars
pip install -e ".[dev]"

More details about installing Mars can be found in the getting started section of the Mars documentation.

Mars tensor

Mars tensor provides a familiar interface similar to NumPy's.

NumPy:

import numpy as np
a = np.random.rand(1000, 2000)
(a + 1).sum(axis=1)

Mars tensor:

import mars.tensor as mt
a = mt.random.rand(1000, 2000)
(a + 1).sum(axis=1).execute()

The following is a brief overview of the supported subset of the NumPy interface.

  • Arithmetic and mathematics: +, -, *, /, exp, log, etc.
  • Reduction along axes (sum, max, argmax, etc).
  • Most of the array creation routines (empty, ones_like, diag, etc.). In addition, Mars supports creating arrays/tensors on GPU as well as creating sparse tensors.
  • Most of the array manipulation routines (reshape, rollaxis, concatenate, etc.)
  • Basic indexing (indexing by ints, slices, newaxes, and Ellipsis)
  • Advanced indexing (except combining boolean array indexing with integer array indexing)
  • Universal functions for elementwise operations.
  • Linear algebra functions, including product (dot, matmul, etc.) and decomposition (cholesky, svd, etc.).
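To make the subset above concrete, here is a short sketch written in plain NumPy; by design, the same expressions run as Mars tensors by swapping `np` for `mars.tensor` and calling `.execute()` on the result (assuming pymars is installed).

```python
import numpy as np

# Examples drawn from the supported subset above. The Mars tensor
# equivalents differ only by the import and a final .execute().
a = np.ones((4, 5))
b = (a + 1).sum(axis=1)       # arithmetic + reduction along an axis
c = np.concatenate([a, a])    # array manipulation routine
d = a[1:3, ::2]               # basic indexing with ints and slices
e = a.dot(a.T)                # linear algebra product
```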

However, Mars has not implemented the entire NumPy interface, limited by both time and the difficulty of some features. Any contribution from the community is sincerely welcomed. The main features not yet implemented are listed below:

  • Tensors with unknown shape do not support all operations.
  • Only a small subset of np.linalg is implemented.
  • Operations like sort, which are hard to execute in parallel, are not implemented.
  • Mars tensor doesn't implement interfaces like tolist and nditer, because iterating over a large tensor is very inefficient.

Eager Mode

Mars supports eager mode, which makes it friendly for developing and easy to debug.

Users can enable eager mode through options, set at the beginning of the program or console session.

>>> from mars.config import options
>>> options.eager_mode = True

Or use a context.

>>> from mars.config import option_context
>>> with option_context() as options:
...     options.eager_mode = True
...     # eager mode is on only within the with block
...

When eager mode is on, a tensor is executed immediately in the default session once it is created.

>>> import mars.tensor as mt
>>> from mars.config import options
>>> options.eager_mode = True
>>> t = mt.arange(6).reshape((2, 3))
>>> print(t)
Tensor(op=TensorRand, shape=(2, 3), data=
[[0 1 2]
 [3 4 5]])

Easy to scale in and scale out

Mars can scale in to a single machine, and scale out to a cluster with thousands of machines. Both the local and distributed versions share the same piece of code, so it is fairly simple to migrate from a single machine to a cluster as data grows.

Running on a single machine supports both thread-based scheduling and local cluster scheduling, which bundles the whole set of distributed components. Mars is also easy to scale out to a cluster, by starting different components of the Mars distributed runtime on different machines in the cluster.

Threaded

The execute method will by default run on the thread-based scheduler on a single machine.

>>> import mars.tensor as mt
>>> a = mt.ones((10, 10))
>>> a.execute()

Users can create a session explicitly.

>>> from mars.session import new_session
>>> session = new_session()
>>> session.run(a + 1)
>>> (a * 2).execute(session=session)
>>> # the session will be released when the with block exits
>>> with new_session() as session2:
...     session2.run(a / 3)

Local cluster

Users can start a local cluster bundled with the distributed runtime on a single machine. Local cluster mode requires the Mars distributed version.

>>> from mars.deploy.local import new_cluster
>>> from mars.session import new_session

>>> # cluster will create a session and set it as default
>>> cluster = new_cluster()

>>> # run on the local cluster
>>> (a + 1).execute()

>>> # create a session explicitly by specifying the cluster's endpoint
>>> session = new_session(cluster.endpoint)
>>> session.run(a * 3)

Distributed

After installing the distributed version on every node in the cluster, one node can be selected as the scheduler and another as the web service, leaving the other nodes as workers. The scheduler can be started with the following command:

mars-scheduler -a <scheduler_ip> -p <scheduler_port>

The web service can be started with the following command:

mars-web -a <web_ip> -s <scheduler_endpoint> --ui-port <ui_port_exposed_to_user>

Workers can be started with the following command:

mars-worker -a <worker_ip> -p <worker_port> -s <scheduler_endpoint>
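Putting the three commands together, a minimal deployment might look like the sketch below, where 10.0.0.1 through 10.0.0.3 and the port numbers are placeholder values, and the scheduler endpoint takes the form <scheduler_ip>:<scheduler_port>:

```shell
# On the scheduler node (placeholder address 10.0.0.1):
mars-scheduler -a 10.0.0.1 -p 7103

# On the web node, pointing at the scheduler endpoint:
mars-web -a 10.0.0.2 -s 10.0.0.1:7103 --ui-port 7104

# On each worker node:
mars-worker -a 10.0.0.3 -p 7105 -s 10.0.0.1:7103
```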

After all Mars processes are started, users can run:

>>> import mars.tensor as mt
>>> from mars.session import new_session
>>> sess = new_session('http://<web_ip>:<ui_port>')
>>> a = mt.ones((2000, 2000), chunk_size=200)
>>> b = mt.inner(a, a)
>>> sess.run(b)
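The chunk_size argument above controls how Mars partitions a tensor into chunks that the workers can execute in parallel. The arithmetic behind the partitioning is a simple ceiling division, as the following sketch (not Mars internals) illustrates:

```python
import math

# A (2000, 2000) tensor with chunk_size=200 splits into a grid of
# 200x200 chunks, 10 along each axis.
shape, chunk_size = (2000, 2000), 200
grid = tuple(math.ceil(s / chunk_size) for s in shape)
n_chunks = grid[0] * grid[1]
```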

Getting involved

Thank you in advance for your contributions!
