   _       _ _(_)_     |
  (_)     | (_) (_)    |   A fresh approach to technical computing
   _ _   _| |_  __ _   |
  | | | | | | |/ _` |  | 
  | | |_| | | | (_| |  |
 _/ |\__'_|_|_|\__'_|  |
|__/                   |
## The Julia Language

Julia is a high-level, high-performance dynamic language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The library, mostly written in Julia itself, also integrates mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, FFTs, and string processing. More libraries continue to be added over time. Julia programs are organized around defining functions, and overloading them for different combinations of argument types (which can also be user-defined). For a more in-depth discussion of the rationale and advantages of Julia over other systems, see the following highlights or read the introduction in the manual.
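To make the dispatch point concrete, here is a minimal sketch (the `area` functions are invented for illustration, not taken from the Julia sources) of defining one function and overloading it for different combinations of argument types:

```julia
# Hypothetical example: one generic function, two methods,
# selected by the number and types of the arguments.
area(r::Real) = pi*r^2            # one number: area of a circle of radius r
area(w::Real, h::Real) = w*h      # two numbers: area of a w-by-h rectangle
```

Calling `area(2.0)` selects the one-argument method and `area(3.0, 4.0)` the other; Julia picks the method from the types (and count) of all arguments, not just the first.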

## High-Performance JIT Compiler

Julia's LLVM-based JIT compiler combined with the language's design allow it to approach and often match the performance of C/C++. To get a sense of relative performance of Julia compared to other languages that can or could be used for numerical and scientific computing, we've written a small set of micro-benchmarks in a variety of languages. The source code for the various implementations can be found here: C++, Julia, Python, Matlab/Octave, JavaScript. We encourage you to skim the code to get a sense for how easy or difficult numerical programming in each language is. The following micro-benchmark results are from a MacBook Pro with a 2.53GHz Intel Core 2 Duo CPU and 8GB of 1066MHz DDR3 RAM:

| benchmark     | C++ (GCC 4.2.1 -O3) | Julia bd7c16a2 | Python/NumPy 2.7.1/1.5.1 | Matlab R2011a | Octave 3.4 | JavaScript V8 |
|---------------|--------------------:|---------------:|-------------------------:|--------------:|-----------:|--------------:|
| fib           |               0.205 |           2.14 |                     27.5 |         1351. |      2531. |          1.50 |
| parse_int     |              0.0901 |           1.60 |                     6.55 |          897. |      5580. |               |
| quicksort     |               0.429 |           1.15 |                     61.8 |          145. |      3356. |          24.0 |
| mandel        |               0.269 |           5.87 |                     30.3 |          61.6 |       844. |          6.09 |
| pi_sum        |                53.8 |          0.743 |                     18.9 |          1.13 |       351. |         0.793 |
| rand_mat_stat |                9.11 |           3.32 |                     34.1 |          10.1 |       48.1 |          8.78 |
| rand_mat_mul  |                240. |          0.972 |                     1.19 |         0.715 |       1.68 |          311. |

C++ numbers are benchmark times in milliseconds; other timings are relative to C++ (smaller is better).

Julia beats the other high-level systems on all micro-benchmarks, except for JavaScript on the Fibonacci benchmark (33% faster) and Matlab on the random matrix multiplication benchmark (26% faster). Julia's LLVM JIT code even manages to beat C++ by 25% on the pi summation benchmark and by a small margin on random matrix multiplication. Relative performance between languages on other systems is similar. Matlab's ability to beat both C and Julia by such a large margin on random matrix multiplication comes from its use of the proprietary Intel Math Kernel Library, which has extremely optimized code for matrix multiplication. Users who have a licensed copy of MKL can use it with Julia, but the default BLAS is a high-quality open source implementation (see below for more details).

These benchmarks, while not comprehensive, do test compiler performance on a range of common code patterns, such as function calls, string parsing, sorting, numerical loops, random number generation, and array operations. Julia is strong in an area that high-level languages have traditionally been weak: scalar arithmetic loops, such as that found in the pi summation benchmark. Matlab's JIT for floating-point arithmetic does very well here too. However, Julia has a comprehensive approach to eliminating overhead that allows it to optimize not only code involving floating-point scalars, but also code for arbitrary user-defined data types.
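To give a flavor of the scalar arithmetic loops this refers to, here is an illustrative Julia function (a sketch in the spirit of the pi summation benchmark, not its exact code) that sums the series 1/k^2:

```julia
# Illustrative scalar arithmetic loop: partial sum of sum(1/k^2),
# which converges to pi^2/6. The loop compiles to tight machine code
# with no per-iteration interpretation or boxing overhead.
function pisum_sketch(n)
    s = 0.0
    for k = 1:n
        s += 1.0/(k*k)
    end
    return s
end
```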

To give a quick taste of what Julia looks like, here is the code used in the Mandelbrot and random matrix statistics benchmarks:

function mandel(z)
    c = z
    maxiter = 80
    for n = 1:maxiter
        if abs(z) > 2
            return n-1
        end
        z = z^2 + c
    end
    return maxiter
end

function randmatstat(t)
    n = 5
    v = zeros(t)
    w = zeros(t)
    for i = 1:t
        a = randn(n,n)
        b = randn(n,n)
        c = randn(n,n)
        d = randn(n,n)
        P = [a b c d]
        Q = [a b; c d]
        v[i] = trace((P.'*P)^4)
        w[i] = trace((Q.'*Q)^4)
    end
    return (std(v)/mean(v), std(w)/mean(w))
end

As you can see, the code is quite clear, and should feel familiar to anyone who has programmed in other mathematical languages. Although C++ beats Julia in the random matrix statistics benchmark by a factor of three, consider how much simpler this code is than the C++ implementation. There are more compiler optimizations planned that we hope will close this performance gap in the future. By design, Julia allows you to range from low-level loop and vector code, up to a high-level programming style, sacrificing some performance, but gaining the ability to express complex algorithms easily. This continuous spectrum of programming levels is a hallmark of the Julia approach to programming and is very much an intentional feature of the language.

## Designed for Parallelism & Cloud Computing

Julia does not impose any particular style of parallelism on the user. Instead, it provides a number of key building blocks for distributed computation, making it flexible enough to support a number of styles of parallelism, and allowing users to add more. The following simple example demonstrates how to count the number of heads in a large number of coin tosses in parallel.

nheads = @parallel (+) for i = 1:100000000
    int(randbool())
end

This computation is automatically distributed across all available compute nodes, and the result, reduced by summation (+), is returned at the calling node.

Although it is in the early stages, Julia already supports a fully remote cloud computing mode. Here is a screenshot of a web-based interactive Julia session, plotting an oscillating function and a Gaussian random walk:

There will eventually be full support for cloud-based operation, including data management, code editing, execution, debugging, collaboration, analysis, data exploration, and visualization. The goal is to allow people who work with big data to stop worrying about administering machines and data and get straight to the real problem: exploring their data and creating the algorithms that can solve the problems it presents.

## Free, Open Source & Library-Friendly

The core of the Julia implementation is licensed under the MIT license. Various libraries used by the Julia environment include their own licenses such as the GPL, LGPL, and BSD (therefore the environment, which consists of the language, user interfaces, and libraries, is under the GPL). Core functionality is included in a shared library, so users can easily and legally combine Julia with their own C/Fortran code or proprietary third-party libraries. Furthermore, Julia makes it simple to call external functions in C and Fortran shared libraries, without writing any wrapper code or even recompiling existing code. You can try calling external library functions directly from Julia's interactive prompt, playing with the interface and getting immediate feedback until you get it right. See LICENSE for the full terms of Julia's licensing.
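As an illustration of the wrapper-free external calls this describes, the following line (adapted from the Julia manual's classic example) calls `clock` from the standard C library directly at the prompt:

```julia
# Call the C library function clock() directly; no wrapper code needed.
# The arguments give the symbol, the return type, and the (empty) tuple
# of argument types.
t = ccall(:clock, Int32, ())
```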

## Resources

## Required Build Tools & External Libraries
  • GNU make — building dependencies.
  • gcc, g++, gfortran — compiling and linking C, C++ and Fortran code.
  • curl — to automatically download external libraries:
    • LLVM — compiler infrastructure.
    • fdlibm — a portable implementation of much of the system-dependent libm math library's functionality.
    • MT — a fast Mersenne Twister pseudorandom number generator library.
    • OpenBLAS — a fast, open, and maintained basic linear algebra subprograms (BLAS) library, based on Kazushige Goto's famous GotoBLAS.
    • LAPACK — library of linear algebra routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems.
    • ARPACK — a collection of subroutines designed to solve large, sparse eigenvalue problems.
    • FFTW — library for computing fast Fourier transforms very quickly and efficiently.
    • PCRE — Perl-compatible regular expressions library.
    • GNU readline — library allowing shell-like line editing in the terminal, with history and familiar key bindings.
    • D3 — JavaScript visualization library.
## Supported Platforms
  • GNU/Linux: x86/64 (64-bit); x86 (32-bit).
  • Darwin/OS X: x86/64 (64-bit); x86 (32-bit) is untested but should work.
## Binary Installation

Julia's binary installs ship as platform-specific tarballs:

Download the appropriate tarball and untar it somewhere; for example, if you are on an OS X (Darwin) x86/64 system, do the following:

curl -OLk
tar zxvf julia-08b1e294ed-Darwin-x86_64.tar.gz

You can either run the julia executable using its full path in the directory created above, or add that directory to your executable path so that you can run the julia program from anywhere:

export PATH="$(pwd)/julia:$PATH"

Now you should be able to run julia like this:

julia
If everything works correctly, you will see a Julia banner and an interactive prompt into which you can enter expressions for evaluation. You can read about getting started in the manual.

## Source Download & Compilation

First, acquire the source code either by cloning the git repository (requires git to be installed):

git clone git://

or, if you don't have git installed, by using curl and tar to fetch and unpack the source:

mkdir julia && curl -Lk | tar -zxf- -C julia --strip-components 1

Next, enter the julia/ directory and run make to build the julia executable. When compiled the first time, it will automatically download and build its external dependencies. This takes a while, but only has to be done once. Note: the build process will not work if any of the build directory's parent directories have spaces in their names (this is due to a limitation in GNU make).

Once it is built, you can either run the julia executable using its full path in the directory created above, or add that directory to your executable path so that you can run the julia program from anywhere:

export PATH="$(pwd)/julia:$PATH"

Now you should be able to run julia like this:

julia
If everything works correctly, you will see a Julia banner and an interactive prompt into which you can enter expressions for evaluation. You can read about getting started in the manual.

### Platform-Specific Notes

On some Linux distributions (for instance Ubuntu 11.10) you may need to change how the readline library is linked. If you get a build error involving readline, try changing the value of USE_SYSTEM_READLINE in to 1.

On Ubuntu, you may also need to install the package libncurses5-dev.

If OpenBLAS fails to build in getarch_2nd.c, you need to specify the architecture of your processor in

## Directories
attic/         old, now-unused code
contrib/       emacs and textmate support for julia
examples/      example julia programs
external/      external dependencies
install/       used for creating binary installs
j/             source code for julia's standard library
lib/           shared libraries loaded by julia's standard libraries
src/           source for julia language core
test/          unit and function tests for julia itself
ui/            source for various front ends
## Editor & Terminal Setup

Julia support is currently available for Emacs, Vim, and TextMate. Support files and instructions for configuring these editors can be found in contrib/.

Adjusting your terminal bindings is optional; everything will work fine without these key bindings. For the best interactive session experience, however, make sure that your terminal emulator (Terminal, iTerm, xterm, etc.) sends the ^H sequence for Backspace (delete key) and that the Shift-Enter key combination sends a \n newline character to distinguish it from just pressing Enter, which sends a \r carriage return character. These bindings allow custom readline handlers to trap and correctly deal with these key sequences; other programs will continue to behave normally with these bindings. The first binding makes backspacing through text in the interactive session behave more intuitively. The second binding allows Shift-Enter to insert a newline without evaluating the current expression, even when the current expression is complete. (Pressing an unmodified Enter inserts a newline if the current expression is incomplete, evaluates the expression if it is complete, or shows an error if the syntax is irrecoverably invalid.)

On Linux systems, the Shift-Enter binding can be set by placing the following line in the file .xmodmaprc in your home directory:

keysym Return = Return Linefeed