
Fixed typos

1 parent 1652b64 commit d544d6a92e064c97c5633e1c3e9b9d7b5d1eec74 Olivier Delalleau committed with nouiz Feb 9, 2012
Showing with 13 additions and 12 deletions.
  1. +1 −1 NEWS.txt
  2. +1 −0 doc/NEWS.txt
  3. +5 −5 doc/extending/op.txt
  4. +5 −5 theano/gof/op.py
  5. +1 −1 theano/tensor/opt.py
@@ -3,7 +3,7 @@
Since 0.5rc2
* Fixed a memory leak with shared variable (we kept a pointer to the original value)
- * Alloc, GpuAlloc are not always pre-computed(constant_folding optimization) at compile time if all its inputs are constants
+ * Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
* The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
* 'theano-cache list' lists key files bigger than 1M
* 'theano-cache list' prints an histogram of the number of keys per compiled module
@@ -3,6 +3,7 @@
Since 0.5rc2
* Fixed a memory leak with shared variable (we kept a pointer to the original value)
+ * Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
* The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
* 'theano-cache list' lists key files bigger than 1M
* 'theano-cache list' prints an histogram of the number of keys per compiled module
@@ -222,14 +222,14 @@ following methods:
*Default:* Return True
By default when optimizations are enabled, we remove during
- function compilation apply node that have all their input
- constants. We replace the Apply node with a Theano constant
- variable. This way, the apply node is not executed at each function
+ function compilation Apply nodes whose inputs are all constants.
+ We replace the Apply node with a Theano constant variable.
+ This way, the Apply node is not executed at each function
call. If you want to force the execution of an op during the
function call, make do_constant_folding return False.
- As done in the Alloc op, you can return False only in some case by
- analysing the graph from the node parameter.
+ As done in the Alloc op, you can return False only in some cases by
+ analyzing the graph from the node parameter.
At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
have no defaults.
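The hunk above describes how an op can opt out of constant folding so it is executed at every function call. A minimal standalone sketch of that hook follows; `Op` and `AlwaysExecuted` here are simplified stand-ins for illustration, not Theano's actual `gof.Op` classes:

```python
# Standalone sketch of the do_constant_folding hook described above.
# "Op" is a stand-in base class, not Theano's real gof.Op.

class Op:
    def do_constant_folding(self, node):
        # Default: allow the Apply node to be replaced by a Theano
        # constant at compile time when all its inputs are constant.
        return True

class AlwaysExecuted(Op):
    # Hypothetical op that is never pre-computed at compile time:
    # returning False keeps the Apply node in the compiled function,
    # so it runs at each function call.
    def do_constant_folding(self, node):
        return False

print(Op().do_constant_folding(None))              # True
print(AlwaysExecuted().do_constant_folding(None))  # False
```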
@@ -511,11 +511,11 @@ def perform(self, node, inputs, output_storage):
def do_constant_folding(self, node):
"""
- This allow each op to dertermine if they want to be constant
- folded when all there in put are constant. This allow them to
- choose where they put their memory/speed trade off. Also, it
- could make thing faster as Constant can't be used for inplace
- operation(see *IncSubtensor)
+ This allows each op to determine if it wants to be constant
+ folded when all its inputs are constant. This allows it to
+ choose where it puts its memory/speed trade-off. Also, it
+ could make things faster as constants can't be used for inplace
+ operations (see *IncSubtensor).
"""
return True
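The docstring above mentions that, as in the Alloc op, an op may return False "only in some cases by analyzing the graph from the node parameter". A hedged standalone sketch of that conditional pattern, together with the caller-side check from `theano/tensor/opt.py`, might look like this (the `node` dict, `SelectiveAlloc`, and the size threshold are invented stand-ins, not Theano's real data structures):

```python
# Standalone sketch: an op that folds only in some cases, plus the
# optimizer-side check. All names here are illustrative stand-ins.

class SelectiveAlloc:
    # Hypothetical Alloc-like op: allow folding only for small
    # outputs, trading compile-time pre-computation against memory.
    def do_constant_folding(self, node):
        return node["output_size"] < 1024

def constant_folding_applies(node):
    # Mirrors the check shown in the opt.py hunk below: if the op
    # asks not to be constant folded, the optimization bails out.
    if not node["op"].do_constant_folding(node):
        return False
    return True

small = {"op": SelectiveAlloc(), "output_size": 16}
big = {"op": SelectiveAlloc(), "output_size": 4096}
print(constant_folding_applies(small))  # True
print(constant_folding_applies(big))    # False
```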
@@ -3768,7 +3768,7 @@ def constant_folding(node):
return False
#condition: all inputs are constant
if not node.op.do_constant_folding(node):
- # The op ask to don't be constant folded.
+ # The op asks not to be constant folded.
return False
storage_map = dict([(i, [i.data]) for i in node.inputs])
