# Optimization package

This package contains several optimization routines for Torch. Every optimization algorithm follows the same interface (a minimal usage sketch follows the parameter list below):

```lua
x*, {f}, ... = optim.method(func, x, state)
```

where:

- `func`: a user-defined closure that respects the API `f, df/dx = func(x)`
- `x`: the current parameter vector (a 1D `torch.Tensor`)
- `state`: a table of algorithm-dependent parameters and state variables
- `x*`: the new parameter vector that minimizes `f`, i.e. `x* = argmin_x f(x)`
- `{f}`: a table of all `f` values, in the order they were evaluated (for some simple algorithms, like SGD, `#f == 1`)
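To make the interface concrete, here is a minimal sketch (not part of the original README) that minimizes a simple quadratic with `optim.lbfgs`. The target vector `b`, the starting point, and the iteration cap are arbitrary choices for illustration:

```lua
require 'optim'

-- illustrative quadratic: f(x) = ||x - b||^2, minimized at x = b
local b = torch.Tensor{1, 2, 3}

local func = function(x)
   local diff = x - b
   local f = diff:dot(diff)      -- f(x)
   local dfdx = diff:mul(2)      -- df/dx = 2 * (x - b)
   return f, dfdx
end

local x = torch.zeros(3)          -- starting point
local state = {maxIter = 100}

local xstar, fs = optim.lbfgs(func, x, state)
print(xstar)                      -- converges towards b
print(fs[#fs])                    -- last evaluated value of f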

## Important Note

The `state` table is used to hold the state of the algorithm. It is usually initialized once by the user, and then passed to the optim function as a black box. Example:

```lua
state = {
   learningRate = 1e-3,
   momentum = 0.5
}

for i, sample in ipairs(training_samples) do
   local func = function(x)
      -- evaluate the objective and its gradient at x,
      -- typically using the current training sample
      return f, df_dx
   end
   optim.sgd(func, x, state)
end
```
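For completeness, here is a runnable variant of the loop above, with a made-up one-parameter least-squares problem standing in for the eval function. The data, learning rate, and epoch count are illustrative assumptions, not part of the README:

```lua
require 'optim'

-- hypothetical task: fit scalar w so that w * input ≈ target
local x = torch.Tensor{0}                 -- parameter vector (here 1D)
local state = {learningRate = 1e-2, momentum = 0.5}

local training_samples = {
   {input = 1, target = 2},
   {input = 2, target = 4},
   {input = 3, target = 6},
}

for epoch = 1, 100 do
   for i, sample in ipairs(training_samples) do
      local func = function(x)
         local err = x[1] * sample.input - sample.target
         local f = 0.5 * err * err                       -- squared error
         local df_dx = torch.Tensor{err * sample.input}  -- gradient w.r.t. w
         return f, df_dx
      end
      optim.sgd(func, x, state)                          -- updates x in place
   end
end

print(x[1])  -- approaches 2
```

Because `state` is passed by reference, `optim.sgd` caches its momentum buffer inside it between calls; reusing the same table across iterations is what makes momentum work.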