
Fixed typos
Olivier Delalleau authored and nouiz committed Feb 9, 2012
1 parent 1652b64 commit d544d6a
Showing 5 changed files with 13 additions and 12 deletions.
2 changes: 1 addition & 1 deletion NEWS.txt
@@ -3,7 +3,7 @@
 Since 0.5rc2
 
 * Fixed a memory leak with shared variable (we kept a pointer to the original value)
-* Alloc, GpuAlloc are not always pre-computed(constant_folding optimization) at compile time if all its inputs are constants
+* Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
 * The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
 * 'theano-cache list' lists key files bigger than 1M
 * 'theano-cache list' prints an histogram of the number of keys per compiled module
1 change: 1 addition & 0 deletions doc/NEWS.txt
@@ -3,6 +3,7 @@
 Since 0.5rc2
 
 * Fixed a memory leak with shared variable (we kept a pointer to the original value)
+* Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
 * The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
 * 'theano-cache list' lists key files bigger than 1M
 * 'theano-cache list' prints an histogram of the number of keys per compiled module
10 changes: 5 additions & 5 deletions doc/extending/op.txt
@@ -222,14 +222,14 @@ following methods:
 *Default:* Return True
 
 By default when optimizations are enabled, we remove during
-function compilation apply node that have all their input
-constants. We replace the Apply node with a Theano constant
-variable. This way, the apply node is not executed at each function
+function compilation Apply nodes whose inputs are all constants.
+We replace the Apply node with a Theano constant variable.
+This way, the Apply node is not executed at each function
 call. If you want to force the execution of an op during the
 function call, make do_constant_folding return False.
 
-As done in the Alloc op, you can return False only in some case by
-analysing the graph from the node parameter.
+As done in the Alloc op, you can return False only in some cases by
+analyzing the graph from the node parameter.
 
 At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
 have no defaults.
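To make the behaviour described in doc/extending/op.txt concrete, here is a minimal sketch of an Op that opts out of constant folding. The Op itself (NoFoldDouble) is hypothetical and not part of this commit; only the ``do_constant_folding`` hook it overrides is:

    import theano.tensor as T
    from theano.gof import Op, Apply

    class NoFoldDouble(Op):
        """Hypothetical Op: doubles its input and is never constant-folded."""

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            output_storage[0][0] = 2 * inputs[0]

        def do_constant_folding(self, node):
            # Returning False keeps this Apply node in the compiled graph
            # even when all its inputs are constants, so it runs at each
            # function call instead of being pre-computed.
            return False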
10 changes: 5 additions & 5 deletions theano/gof/op.py
@@ -511,11 +511,11 @@ def perform(self, node, inputs, output_storage):
 
     def do_constant_folding(self, node):
         """
-        This allow each op to dertermine if they want to be constant
-        folded when all there in put are constant. This allow them to
-        choose where they put their memory/speed trade off. Also, it
-        could make thing faster as Constant can't be used for inplace
-        operation(see *IncSubtensor)
+        This allows each op to determine if it wants to be constant
+        folded when all its inputs are constant. This allows it to
+        choose where it puts its memory/speed trade-off. Also, it
+        could make things faster as constants can't be used for inplace
+        operations (see *IncSubtensor).
         """
         return True
 
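For the "return False only in some cases" behaviour that the Alloc op uses, a hedged sketch of what analyzing the graph from the node parameter can look like. The client inspection below is illustrative, not the actual Alloc implementation:

    def do_constant_folding(self, node):
        # Hypothetical graph analysis: fold only when every client of
        # the output is the function output itself; otherwise keep the
        # node, trading pre-computation speed for memory.
        clients = getattr(node.outputs[0], 'clients', [])
        if not clients:
            return False
        return all(client == 'output' for client, index in clients)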
2 changes: 1 addition & 1 deletion theano/tensor/opt.py
@@ -3768,7 +3768,7 @@ def constant_folding(node):
         return False
     #condition: all inputs are constant
     if not node.op.do_constant_folding(node):
-        # The op ask to don't be constant folded.
+        # The op asks not to be constant folded.
         return False
 
     storage_map = dict([(i, [i.data]) for i in node.inputs])
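From the user side, the effect of this optimization can be observed with a small usage sketch (assuming a standard Theano install; not part of this commit). Because every input below is constant, the whole expression is folded into a single pre-computed constant at compile time:

    import numpy
    import theano
    import theano.tensor as T

    x = T.constant(numpy.arange(4.0))
    y = x * 2 + 1  # all inputs are constants
    f = theano.function([], y)
    # debugprint shows the folded graph: a constant output with no
    # Apply node left to execute at call time.
    theano.printing.debugprint(f)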
