trim trailing whitespace
joehuchette committed Jan 28, 2015
1 parent 19cd2d8 commit e1e96d8
Showing 60 changed files with 494 additions and 494 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -7,7 +7,7 @@ os:
notifications:
email: false
env:
matrix:
- PKGADD="FactCheck;Cbc;Clp;GLPKMathProgInterface;Ipopt;ECOS" JULIAVERSION="juliareleases"
- PKGADD="FactCheck;GLPKMathProgInterface;ECOS" JULIAVERSION="julianightlies"
before_install:
4 changes: 2 additions & 2 deletions NEWS.md
@@ -8,9 +8,9 @@ Version 0.7.3 (January 14, 2015)

Version 0.7.2 (January 9, 2015)
-------------------------------

* Fix a bug in ``sum(::JuMPDict)``
* Added the ``setCategory`` function to change a variable's category (e.g. continuous or binary)
after construction, and ``getCategory`` to retrieve the variable category.

Version 0.7.1 (January 2, 2015)
12 changes: 6 additions & 6 deletions README.md
@@ -28,17 +28,17 @@ commercial solvers ([CPLEX], [COIN Clp], [COIN Cbc], [ECOS], [GLPK],

JuMP makes it easy to specify and **solve optimization problems without expert knowledge**, yet at the same time allows experts to implement advanced algorithmic techniques such as exploiting efficient hot-starts in linear programming or using callbacks to interact with branch-and-bound solvers. JuMP is also **fast** - benchmarking has shown that it can create problems at similar speeds to special-purpose commercial tools such as AMPL while maintaining the expressiveness of a generic high-level programming language. JuMP can be easily embedded in complex workflows including simulations and web servers.

Our documentation includes an installation guide, quick-start guide, and reference manual.

**Latest Release**: 0.7.3 (via ``Pkg.add``)
* [documentation](https://jump.readthedocs.org/en/release-0.7)
* [examples](https://github.com/JuliaOpt/JuMP.jl/tree/release-0.7/examples)
* Testing status: [![Build Status](https://travis-ci.org/JuliaOpt/JuMP.jl.png?branch=release-0.7)](https://travis-ci.org/JuliaOpt/JuMP.jl) [![JuMP](http://pkg.julialang.org/badges/JuMP_release.svg)](http://pkg.julialang.org/?pkg=JuMP&ver=release)


**Development version**:
* [documentation](https://jump.readthedocs.org/en/latest)
* [examples](https://github.com/JuliaOpt/JuMP.jl/tree/master/examples)
* Testing status: [![Build Status](https://travis-ci.org/JuliaOpt/JuMP.jl.png?branch=master)](https://travis-ci.org/JuliaOpt/JuMP.jl) [![Coverage Status](https://coveralls.io/repos/JuliaOpt/JuMP.jl/badge.png)](https://coveralls.io/r/JuliaOpt/JuMP.jl)
* Changes: see [NEWS](https://github.com/JuliaOpt/JuMP.jl/tree/master/NEWS.md)

@@ -50,14 +50,14 @@ JuMP can be installed through the Julia package manager (version 0.3 required)
julia> Pkg.add("JuMP")
```

For full installation instructions, including how to install solvers, see the documentation linked above.



## Supported problem classes

Mathematical programming encompasses a large variety of problem classes.
We list below what is currently supported. See the documentation for more information.

**Objective types**

26 changes: 13 additions & 13 deletions doc/callbacks.rst
@@ -4,11 +4,11 @@
Solver Callbacks
----------------

Many mixed-integer programming solvers offer the ability to modify the solve process.
Examples include changing branching decisions in branch-and-bound, adding custom cutting planes, providing custom heuristics to find feasible solutions, or implementing on-demand separators to add new constraints only when they are violated by the current solution (also known as lazy constraints).

While historically this functionality has been limited to solver-specific interfaces,
JuMP provides *solver-independent* support for a number of commonly used solver callbacks. Currently, we support lazy constraints, user-provided cuts, and user-provided
heuristics for the Gurobi, CPLEX, and GLPK solvers. We do not yet support any
other class of callbacks, but they may be accessible by using the solver's
low-level interface.
@@ -24,15 +24,15 @@ that would make the current solution infeasible. For some more information about
lazy constraints, see this blog post by `Paul Rubin <http://orinanobworld.blogspot.com/2012/08/user-cuts-versus-lazy-constraints.html>`_.

There are three important steps to providing a lazy constraint callback. First we
must write a function that analyzes the current solution and takes a
single argument, e.g. ``function myLazyConGenerator(cb)``, where ``cb`` is a reference
to the callback management code inside JuMP. Next you will do whatever
analysis of the solution you need to inside your function to generate the new
constraint before adding it to the model with the JuMP function
``addLazyConstraint(cb, myconstraint)`` or the macro version
``@addLazyConstraint(cb, myconstraint)`` (same limitations as addConstraint).
Finally, we notify JuMP that this function should be used for lazy constraint
generation using the ``setLazyCallback(m, myLazyConGenerator)`` function
before we call ``solve(m)``.
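
A minimal sketch of these three steps, assuming Gurobi as the solver and a toy
model chosen purely for illustration (the full worked example follows below)::

using JuMP
using Gurobi

m = Model(solver=GurobiSolver())
@defVar(m, 0 <= x <= 2, Int)
@defVar(m, 0 <= y <= 2, Int)
@setObjective(m, Max, x + 2y)

# Step 1: a function taking a single argument, the callback handle
function myLazyConGenerator(cb)
    x_val = getValue(x)
    y_val = getValue(y)
    # Step 2: analyze the candidate solution and add any violated lazy constraint
    if x_val + y_val > 3
        @addLazyConstraint(cb, x + y <= 3)
    end
end

# Step 3: register the callback, then solve
setLazyCallback(m, myLazyConGenerator)
solve(m)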

The following is a simple example to make this more clear. In this two-dimensional
@@ -100,7 +100,7 @@ will be either (0,2) or (2,2), and the final solution will be (1,2)::
println("Final solution: [ $(getValue(x)), $(getValue(y)) ]")

The code should print something like (amongst the output from Gurobi)::

In callback function, x=2.0, y=2.0
Solution was in top right, cut it off
In callback function, x=0.0, y=2.0
@@ -125,15 +125,15 @@ User cuts, or simply cuts, provide a way for the user to tighten the LP relaxati
Your user cuts should not change the set of integer feasible solutions. Equivalently, your cuts can only remove fractional solutions - that is, "tighten" the LP relaxation of the MILP. If you add a cut that removes an integer solution, the solver may return an incorrect solution.

Adding a user cut callback is similar to adding a lazy constraint callback. First we
must write a function that analyzes the current solution and takes a
single argument, e.g. ``function myUserCutGenerator(cb)``, where ``cb`` is a reference
to the callback management code inside JuMP. Next you will do whatever
analysis of the solution you need to inside your function to generate the new
constraint before adding it to the model with the JuMP function
``addUserCut(cb, myconstraint)`` or the macro version
``@addUserCut(cb, myconstraint)`` (same limitations as addConstraint).
Finally, we notify JuMP that this function should be used for user cut
generation using the ``setCutCallback(m, myUserCutGenerator)`` function
before we call ``solve(m)``.
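
As with lazy constraints, the wiring itself is small. A minimal sketch, assuming
the 2-by-2 toy model used in the example below (integer ``x`` and ``y`` with the
single constraint ``y + x <= 3.5``)::

# Since x and y are integer, y + x <= 3.5 already implies y + x <= 3,
# so this cut only removes fractional points, never an integer solution
function myUserCutGenerator(cb)
    x_val = getValue(x)
    y_val = getValue(y)
    if y_val + x_val > 3 + 1e-6
        @addUserCut(cb, y + x <= 3)
    end
end

setCutCallback(m, myUserCutGenerator)
solve(m)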

Consider the following example, which is related to the lazy constraint example. The problem is two-dimensional, and the objective sense prefers solutions in the top-right of a 2-by-2 square. There is a single constraint that cuts off the top-right corner to make the LP relaxation solution fractional. We will exploit our knowledge of the problem structure to add a user cut that will make the LP relaxation integer, and thus solve the problem at the root node::
@@ -169,7 +169,7 @@ Consider the following example which is related to the lazy constraint example.

# Allow for some impreciseness in the solution
TOL = 1e-6

# Check top right
if y_val + x_val > 3 + TOL
# Cut off this solution
@@ -189,7 +189,7 @@ Consider the following example which is related to the lazy constraint example.
println("Final solution: [ $(getValue(x)), $(getValue(y)) ]")

The code should print something like (amongst the output from Gurobi)::

In callback function, x=1.5, y=2.0
Fractional solution was in top right, cut it off
In callback function, x=1.0, y=2.0
@@ -212,7 +212,7 @@ Consider the following example, which is the same problem as seen in the user cu
using JuMP
using Gurobi

# We will use Gurobi and disable PreSolve, Cuts, and (in-built) Heuristics so
# only our heuristic will be used
m = Model(solver=GurobiSolver(Cuts=0, Presolve=0, Heuristics=0.0))

@@ -227,7 +227,7 @@ Consider the following example, which is the same problem as seen in the user cu
@addConstraint(m, y + x <= 3.5)

# Optimal solution of relaxed problem will be (1.5, 2.0)

# We now define our callback function that takes one argument,
# the callback handle. Note that we can access m, x, and y because
# this function is defined inside the same scope
@@ -256,7 +256,7 @@ Consider the following example, which is the same problem as seen in the user cu
println("Final solution: [ $(getValue(x)), $(getValue(y)) ]")

The code should print something like::

In callback function, x=1.5, y=2.0
0 0 5.50000 0 1 - 5.50000 - - 0s
H 1 0 5.0000000 5.50000 10.0% 0.0 0s
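
The body of the heuristic callback follows the same pattern. A minimal sketch for
the same toy model, assuming the heuristic-callback entry points of this release
(``setHeuristicCallback``, ``setSolutionValue!``, and ``addSolution``)::

function myHeuristic(cb)
    x_val = getValue(x)
    y_val = getValue(y)
    # Propose the rounded-down point, which satisfies y + x <= 3.5,
    # as a feasible incumbent
    setSolutionValue!(cb, x, floor(x_val))
    setSolutionValue!(cb, y, floor(y_val))
    addSolution(cb)
end

setHeuristicCallback(m, myHeuristic)
solve(m)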
@@ -342,7 +342,7 @@ In the above examples the callback function is defined in the same scope as the
x_val = getValue(x)
y_val = getValue(y)
println("In callback function, x=$x_val, y=$y_val")

newcut, x_coeff, y_coeff, rhs = cornerChecker(x_val, y_val)

if newcut
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -11,7 +11,7 @@
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os
#import juliadoc

on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
22 changes: 11 additions & 11 deletions doc/example.rst
@@ -14,11 +14,11 @@ There are more complex examples in the ``JuMP/examples/`` `folder <https://github.

@setObjective(m, Max, 5x + 3*y )
@addConstraint(m, 1x + 5y <= 3.0 )

print(m)

status = solve(m)

println("Objective value: ", getObjectiveValue(m))
println("x = ", getValue(x))
println("y = ", getValue(y))
@@ -33,12 +33,12 @@ Models are created with the ``Model()`` function::
m = Model()

.. note::
Your model doesn't have to be called m - it's just a name.

There are a few options for defining a variable, depending on whether you want
to have lower bounds, upper bounds, both bounds, or even no bounds. The following
commands will create two variables, ``x`` and ``y``, with both lower and upper bounds.
Note the first argument is our model variable ``m``. These variables are associated
with this model and cannot be used in another model.::

@defVar(m, 0 <= x <= 2 )
@@ -67,8 +67,8 @@ the ``print`` function is defined for models.
print(m)

Models are solved with the ``solve()`` function. This function will not raise
an error if your model is infeasible - instead it will return a flag. In this
case, the model is feasible so the value of ``status`` will be ``:Optimal``,
where ``:`` again denotes a symbol. The possible values of ``status``
are described :ref:`here <solvestatus>`.
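
As an illustrative sketch, the returned flag can be checked before reading any results::

status = solve(m)
if status == :Optimal
    println("Objective value: ", getObjectiveValue(m))
else
    println("No optimal solution found; status = $status")
end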

@@ -78,14 +78,14 @@ are described :ref:`here <solvestatus>`.

Finally, we can access the results of our optimization. Getting the objective
value is simple::

println("Objective value: ", getObjectiveValue(m))

To get the value from a variable, we call the ``getValue()`` function. If ``x``
is not a single variable, but instead a range of variables, ``getValue()`` will
return a list. In this case, however, it will just return a single value.

::

println("x = ", getValue(x))
println("y = ", getValue(y))
8 changes: 4 additions & 4 deletions doc/index.rst
@@ -5,8 +5,8 @@ JuMP --- Julia for Mathematical Programming
.. module:: JuMP
:synopsis: Julia for Mathematical Programming

`JuMP <https://github.com/JuliaOpt/JuMP.jl>`_ is a domain-specific modeling language for
`mathematical programming <http://en.wikipedia.org/wiki/Mathematical_optimization>`_
embedded in `Julia <http://julialang.org/>`_.
It currently supports a number of open-source and commercial solvers (see below)
for a variety of problem classes, including **linear programming**, **mixed-integer programming**, **second-order conic programming**, and **nonlinear programming**.
@@ -27,7 +27,7 @@ JuMP's features include:
* Solver independence

* JuMP uses a generic solver-independent interface provided by the
`MathProgBase <https://github.com/mlubin/MathProgBase.jl>`_ package, making it easy
to change between a number of open-source and commercial optimization software packages ("solvers").
* Currently supported solvers include `Cbc <https://projects.coin-or.org/Cbc>`_,
`Clp <https://projects.coin-or.org/Clp>`_,
@@ -53,7 +53,7 @@ JuMP's features include:

* JuMP is LGPL licensed, meaning that it can be embedded in commercial software that complies with the terms of the license.

While neither Julia nor JuMP has reached version 1.0 yet, the releases are stable enough for everyday use and are being used in a number of research projects and neat applications by a growing community of users who are early adopters. JuMP remains under active development, and we welcome your feedback, suggestions, and bug reports.

Installing JuMP
---------------
12 changes: 6 additions & 6 deletions doc/installation.rst
@@ -29,7 +29,7 @@ To start using JuMP (after installing a solver), it should be imported into the
Getting Solvers
^^^^^^^^^^^^^^^

Solver support in Julia is currently provided by solver-specific packages, each a thin wrapper around the solver's C interface that exposes a standard interface JuMP can call. If you are interested in providing an interface to your solver, please get in touch. The table below lists the currently supported solvers and their capabilities.



@@ -48,15 +48,15 @@ Solver support in Julia is currently provided by writing a solver-specific packa
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `GLPK <http://www.gnu.org/software/glpk/>`_ | `GLPKMath... <https://github.com/JuliaOpt/GLPKMathProgInterface.jl>`_ | ``GLPKSolver[LP|MIP]()`` | GPL | X | | X | |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `Gurobi <http://gurobi.com>`_ | `Gurobi.jl <https://github.com/JuliaOpt/Gurobi.jl>`_ | ``GurobiSolver()`` | Comm. | X | X | X | |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `Ipopt <https://projects.coin-or.org/Ipopt>`_ | `Ipopt.jl <https://github.com/JuliaOpt/Ipopt.jl>`_ | ``IpoptSolver()`` | EPL | X | | | X |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `KNITRO <http://www.ziena.com/knitro.htm>`_ | `KNITRO.jl <https://github.com/JuliaOpt/KNITRO.jl>`_ | ``KnitroSolver()`` | Comm. | | | | X |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `MOSEK <http://www.mosek.com/>`_ | `Mosek.jl <https://github.com/JuliaOpt/Mosek.jl>`_ | ``MosekSolver()`` | Comm. | X | X | X | X |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+
| `NLopt <http://ab-initio.mit.edu/wiki/index.php/NLopt>`_ | `NLopt.jl <https://github.com/JuliaOpt/NLopt.jl>`_ | ``NLoptSolver()`` | LGPL | | | | X |
+----------------------------------------------------------------------------------+---------------------------------------------------------------------------------+-----------------------------+-------------+----+------+-----+-----+

Where:
@@ -67,7 +67,7 @@ Where:
- NLP = Nonlinear programming

To install Gurobi, for example, and use it with a JuMP model ``m``, run::

Pkg.add("Gurobi")
using JuMP
using Gurobi
@@ -123,7 +123,7 @@ KNITRO
++++++

Requires a licence. The KNITRO.jl interface currently supports only nonlinear problems.

MOSEK
+++++

