Dagger.jl

A framework for out-of-core and parallel computing

At the core of Dagger.jl is a scheduler heavily inspired by Dask. It can run computations represented as directed acyclic graphs (DAGs) efficiently on many Julia worker processes and threads, as well as on GPUs via DaggerGPU.jl.

The DTable has been moved out of this repository and is now available as a standalone package.

Installation

Dagger.jl can be installed using the Julia package manager. Enter the Pkg REPL mode by typing "]" in the Julia REPL and then run:

pkg> add Dagger

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("Dagger")

Usage

Once installed, the Dagger package can be used like so:

using Distributed; addprocs() # get us some workers
using Dagger

# do some stuff in parallel!
a = Dagger.@spawn 1+3        # runs on any available worker or thread
b = Dagger.@spawn rand(a, 4) # depends on a; a is replaced by its result (4)
c = Dagger.@spawn sum(b)     # depends on b
fetch(c) # wait for c to finish; some number!
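
Dagger.@spawn returns a task handle immediately rather than a result; fetch blocks until the task has finished and returns its value. As a minimal sketch of how independent tasks form parallel branches of the DAG (assuming only the @spawn/fetch API shown above), a task that takes other task handles as arguments becomes their common downstream node:

# two independent branches, free to run in parallel on different workers
x = Dagger.@spawn sum(rand(1000))
y = Dagger.@spawn sum(rand(1000))

# a join node: x and y are resolved to their results before + is called
z = Dagger.@spawn x + y
fetch(z) # the sum of both branches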

Contributing Guide

Contributions are encouraged.

There are several ways to contribute to our project:

Reporting Bugs: If you find a bug, please open an issue describing the problem. Make sure to include steps to reproduce the issue and any error messages you receive.

Fixing Bugs: If you'd like to fix a bug, please create a pull request with your changes. Make sure to include a description of the problem and how your changes address it.

Additional examples and documentation improvements are also very welcome.

Resources

See the Dagger.jl documentation for a list of recommended resources.

Help and Discussion

For help and discussion, we suggest asking on the Julia Discourse or in the #distributed channel on the Julia Slack.

Acknowledgements

We thank DARPA, Intel, and the NIH for supporting this work at MIT.