Fix some spelling typos #4909

Merged: 1 commit, Dec 4, 2019
4 changes: 2 additions & 2 deletions .binstar.yml
@@ -56,7 +56,7 @@ script:
## BINSTAR_BUILD_RESULT=[succcess|failure]
# after_script:
# - echo "The build was a $BINSTAR_BUILD_RESULT" | tee artifact1.txt
- ## This will be run only after a successfull build
+ ## This will be run only after a successful build
# after_success:
# - echo "after_success!"
## This will be run only after a build failure
@@ -66,7 +66,7 @@ script:
#===============================================================================
# Build Results
# Build results are split into two categories: artifacts and targets
- # You may omit either key and stiff have a successfull build
+ # You may omit either key and stiff have a successful build
# They may be a string, list and contain any bash glob
#===============================================================================

2 changes: 1 addition & 1 deletion CHANGE_LOG
@@ -2831,7 +2831,7 @@ make it cleaner and more rational:
* The numba.vectorize namespace is gone. The vectorize decorator will
be in the main numba namespace.
* Added a guvectorize decorator in the main numba namespace. It is
- similiar to numba.vectorize, but takes a dimension signature. It
+ similar to numba.vectorize, but takes a dimension signature. It
generates gufuncs. This is a replacement for the GUVectorize gufunc
factory which has been deprecated.

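As an aside for reviewers unfamiliar with the decorator named in this changelog entry: `guvectorize` takes a list of type signatures plus a dimension (layout) signature, along the lines of this small sketch.

```python
import numpy as np
from numba import guvectorize

# Dimension signature '(n),()->(n)': for each length-n row and a scalar,
# produce a length-n row; extra leading dimensions are broadcast over.
@guvectorize(['void(int64[:], int64, int64[:])'], '(n),()->(n)')
def add_scalar(x, y, res):
    for i in range(x.shape[0]):
        res[i] = x[i] + y

a = np.arange(6, dtype=np.int64).reshape(2, 3)
print(add_scalar(a, 10))   # [[10 11 12], [13 14 15]]
```
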
2 changes: 1 addition & 1 deletion docs/dagmap/jquery.graphviz.svg.js
@@ -271,7 +271,7 @@
// scroll so pointer is still in same place
$element.scrollLeft((rx * $svg.width()) + 0.5 - px)
$element.scrollTop((ry * $svg.height()) + 0.5 - py)
- return false // stop propogation
+ return false // stop propagation
}
})
}

2 changes: 1 addition & 1 deletion docs/source/cuda/faq.rst
@@ -16,5 +16,5 @@ This is quite likely due to the profiling data not being flushed on program
exit, see the `NVIDIA CUDA documentation
<http://docs.nvidia.com/cuda/profiler-users-guide/#flush-profile-data>`_ for
details. To fix this simply add a call to ``numba.cuda.profile_stop()`` prior
- to the exit point in your program (or whereever you want to stop profiling).
+ to the exit point in your program (or wherever you want to stop profiling).
For more on CUDA profiling support in Numba, see :ref:`cuda-profiling`.
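For readers skimming the diff, the fix this FAQ entry describes looks roughly like the following sketch; the kernel and launch configuration are placeholders, the only point is the final `cuda.profile_stop()` call.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

arr = cuda.to_device(np.arange(1024, dtype=np.float64))
scale[8, 128](arr, 2.0)

# Flush the profiling data before the process exits so the profiler sees it.
cuda.profile_stop()
```
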
2 changes: 1 addition & 1 deletion docs/source/cuda/intrinsics.rst
@@ -36,7 +36,7 @@ of finding a maximum in this case, but that it serves as an example::

max_example[256,64](result, arr)
print(result[0]) # Found using cuda.atomic.max
- print(max(arr)) # Print max(arr) for comparision (should be equal!)
+ print(max(arr)) # Print max(arr) for comparison (should be equal!)


Multiple dimension arrays are supported by using a tuple of ints for the index::

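The `max_example[256,64](result, arr)` launch in the snippet above belongs to the atomic-max example on this documentation page; here is a reconstruction of that example (close to, but not necessarily identical with, the version in the repo):

```python
import numpy as np
from numba import cuda

@cuda.jit
def max_example(result, values):
    """Find the maximum value in values and store it in result[0]."""
    i = cuda.grid(1)
    # Atomically fold each element into the running maximum.
    cuda.atomic.max(result, 0, values[i])

arr = np.random.rand(16384)            # 256 blocks * 64 threads = 16384 elements
result = np.zeros(1, dtype=np.float64)

max_example[256, 64](result, arr)
print(result[0])   # Found using cuda.atomic.max
print(max(arr))    # Print max(arr) for comparison (should be equal!)
```
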
2 changes: 1 addition & 1 deletion docs/source/proposals/integer-typing.rst
@@ -125,7 +125,7 @@ easily predictable types.

When using built-in Python ``int``, the user gets acceptable magnitude
(32 or 64 bits depending on the system's bitness), and the type remains
- the same accross all computations.
+ the same across all computations.

When explicitly using smaller bitwidths, intermediate results don't
suffer from magnitude loss, since their bitwidth is promoted to ``intp``.

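A minimal illustration of the behaviour this proposal describes, assuming the promotion rules quoted above are the ones in effect:

```python
import numpy as np
from numba import njit

@njit
def add(a, b):
    return a + b

# With explicitly small bitwidths, the intermediate result is promoted to
# intp, so the sum does not wrap around at 8 bits.
print(add(np.int8(100), np.int8(100)))   # 200, not an overflowed 8-bit value
```
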
2 changes: 1 addition & 1 deletion numba/_numba_common.h
@@ -6,7 +6,7 @@
# define __has_attribute(x) 0
#endif

- /* This attribute marks symbols that can be shared accross C objects
+ /* This attribute marks symbols that can be shared across C objects
* but are not exposed outside of a shared library or executable.
* Note this is default behaviour for global symbols under Windows.
*/

2 changes: 1 addition & 1 deletion numba/_runtests.py
@@ -29,7 +29,7 @@ def _main(argv, **kwds):


def main(*argv, **kwds):
- """keyword arguments are accepted for backward compatiblity only.
+ """keyword arguments are accepted for backward compatibility only.
See `numba.testing.run_tests()` documentation for details."""
return _main(['<main>'] + list(argv), **kwds)

2 changes: 1 addition & 1 deletion numba/analysis.py
@@ -216,7 +216,7 @@ def fix_point_progress():
for offset in blocks:
# vars available + variable defined
avail = block_entry_vars[offset] | var_def_map[offset]
- # substract variables deleted
+ # subtract variables deleted
avail -= var_dead_map[offset]
# add ``avail`` to each successors
for succ, _data in cfg.successors(offset):

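The snippet from numba/analysis.py is one step of a fix-point data-flow pass; in isolation the per-block update is plain set arithmetic, as in this self-contained sketch with made-up inputs:

```python
# Hypothetical per-block data, for illustration only.
block_entry_vars = {0: {'a'}}      # variables available on entry to block 0
var_def_map     = {0: {'b', 'c'}}  # variables defined inside block 0
var_dead_map    = {0: {'a'}}       # variables deleted inside block 0

offset = 0
# vars available + variables defined ...
avail = block_entry_vars[offset] | var_def_map[offset]
# ... minus variables deleted.
avail -= var_dead_map[offset]
print(avail)   # {'b', 'c'} is what gets propagated to the block's successors
```
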
2 changes: 1 addition & 1 deletion numba/cext/dictobject.c
@@ -696,7 +696,7 @@ numba_dict_insert(

Py_ssize_t ix = numba_dict_lookup(d, key_bytes, hash, oldval_bytes);
if (ix == DKIX_ERROR) {
- // exception in key comparision in lookup.
+ // exception in key comparison in lookup.
return ERR_CMP_FAILED;
}

2 changes: 1 addition & 1 deletion numba/cuda/tests/cudapy/test_cuda_array_interface.py
@@ -141,7 +141,7 @@ def test_array_views(self):
self.assertEqual(arr[::2].nbytes,
arr_strided.size * arr_strided.dtype.itemsize)

- # __setitem__ interface propogates into external array
+ # __setitem__ interface propagates into external array

# Writes to a slice
arr[:5] = np.pi

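The comment being fixed here accompanies a check that writes through a view reach the external array; the propagation itself is easiest to see with a plain NumPy analogue (CPU-only, not the CUDA array interface test itself):

```python
import numpy as np

arr = np.zeros(10)
view = arr[:5]       # a slice is a view sharing memory with its parent
view[:] = np.pi      # __setitem__ on the view propagates into the parent array
print(arr[:6])       # first five entries are pi, the sixth is still 0.0
```
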
2 changes: 1 addition & 1 deletion numba/dummyarray.py
@@ -197,7 +197,7 @@ def _compute_extent(self):
lastidx = [s - 1 for s in self.shape]
start = compute_index(firstidx, self.dims)
stop = compute_index(lastidx, self.dims) + self.itemsize
- stop = max(stop, start) # ensure postive extent
+ stop = max(stop, start) # ensure positive extent
return Extent(start, stop)

def __repr__(self):

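For context, `_compute_extent` boils down to the byte arithmetic below; this is a simplified sketch assuming `compute_index` returns the byte offset of an index tuple, not the actual dummyarray implementation.

```python
def compute_extent(shape, strides, itemsize):
    """Byte extent [start, stop) covered by a strided view (sketch)."""
    firstidx = [0] * len(shape)
    lastidx = [s - 1 for s in shape]
    start = sum(i * st for i, st in zip(firstidx, strides))
    stop = sum(i * st for i, st in zip(lastidx, strides)) + itemsize
    stop = max(stop, start)   # ensure positive extent
    return start, stop

print(compute_extent((3, 4), (32, 8), 8))   # (0, 96) for a C-contiguous float64 block
```
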
2 changes: 1 addition & 1 deletion numba/inline_closurecall.py
@@ -1032,7 +1032,7 @@ def _inline_const_arraycall(block, func_ir, context, typemap, calltypes):
"""Look for array(list) call where list is a constant list created by build_list,
and turn them into direct array creation and initialization, if the following
conditions are met:
- 1. The build_list call immediate preceeds the array call;
+ 1. The build_list call immediate precedes the array call;
2. The list variable is no longer live after array call;
If any condition check fails, no modification will be made.
"""

2 changes: 1 addition & 1 deletion numba/ir_utils.py
@@ -1320,7 +1320,7 @@ def merge_adjacent_blocks(blocks):
break
next_block = blocks[next_label]
# XXX: commented out since scope objects are not consistent
- # thoughout the compiler. for example, pieces of code are compiled
+ # throughout the compiler. for example, pieces of code are compiled
# and inlined on the fly without proper scope merge.
# if block.scope != next_block.scope:
# break

2 changes: 1 addition & 1 deletion numba/npyufunc/_internal.c
@@ -6,7 +6,7 @@ typedef struct {
PyObject_HEAD
/* Borrowed reference */
PyUFuncObject *ufunc;
- /* Owned reference to ancilliary object */
+ /* Owned reference to ancillary object */
PyObject *object;
} PyUFuncCleaner;

2 changes: 1 addition & 1 deletion numba/npyufunc/gufunc_scheduler.cpp
@@ -182,7 +182,7 @@ void divide_work(const RangeActual &full_iteration_space,
if(build.size() == dims.size()) {
assignments[start_thread] = isfRangeToActual(build);
} else {
- // There are still more dimenions to add.
+ // There are still more dimensions to add.
// Create a copy of the incoming build.
std::vector<isf_range> new_build(build.begin()+0, build.begin()+index);
// Add an entry to new_build for this thread to handle the entire current dimension.

2 changes: 1 addition & 1 deletion numba/stencil.py
@@ -436,7 +436,7 @@ def _stencil_wrapper(self, result, sigret, return_type, typemap, calltypes, *arg
# 1) Construct a string containing a function definition for the stencil function
# that will execute the stencil kernel. This function definition includes a
# unique stencil function name, the parameters to the stencil kernel, loop
- # nests across the dimenions of the input array. Those loop nests use the
+ # nests across the dimensions of the input array. Those loop nests use the
# computed stencil kernel size so as not to try to compute elements where
# elements outside the bounds of the input array would be needed.
# 2) The but of the loop nest in this new function is a special sentinel

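This comment block describes the code generated behind the `@stencil` decorator; from the user's side the decorator looks like the example below (standard relative-indexing usage, included only to make the comment above concrete):

```python
import numpy as np
from numba import stencil

@stencil
def kernel(a):
    # Average of the four nearest neighbours, using relative indexing.
    return 0.25 * (a[0, 1] + a[1, 0] + a[0, -1] + a[-1, 0])

inp = np.arange(25, dtype=np.float64).reshape(5, 5)
out = kernel(inp)   # Numba builds the loop nests over the input's dimensions
print(out)          # border elements keep the default value of 0.0
```
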
2 changes: 1 addition & 1 deletion numba/targets/arrayobj.py
@@ -2225,7 +2225,7 @@ def array_complex_attr(context, builder, typ, value, attr):
^ ^ ^

(`R` indicates a float for the real part;
- `C` indicates a float for the imaginery part;
+ `C` indicates a float for the imaginary part;
the `^` indicates the start of each element)

To get the real part, we can simply change the dtype and itemsize to that

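The `R C R C ...` layout this docstring describes can be checked directly with a NumPy view, which is essentially the dtype/itemsize trick the docstring goes on to describe:

```python
import numpy as np

arr = np.array([1 + 2j, 3 + 4j, 5 + 6j])   # complex128 buffer: R C R C R C
as_floats = arr.view(np.float64)           # same memory, float dtype and itemsize

print(as_floats[0::2])   # real parts:      [1. 3. 5.]
print(as_floats[1::2])   # imaginary parts: [2. 4. 6.]
```
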
2 changes: 1 addition & 1 deletion numba/targets/codegen.py
@@ -690,7 +690,7 @@ def _check_llvm_bugs(self):
"""
# Check the locale bug at https://github.com/numba/numba/issues/1569
# Note we can't cache the result as locale settings can change
- # accross a process's lifetime. Also, for this same reason,
+ # across a process's lifetime. Also, for this same reason,
# the check here is a mere heuristic (there may be a race condition
# between now and actually compiling IR).
ir = """

2 changes: 1 addition & 1 deletion numba/targets/linalg.py
@@ -34,7 +34,7 @@


# fortran int type, this needs to match the F_INT C declaration in
- # _lapack.c and is present to accomodate potential future 64bit int
+ # _lapack.c and is present to accommodate potential future 64bit int
# based LAPACK use.
F_INT_nptype = np.int32
F_INT_nbtype = types.int32

2 changes: 1 addition & 1 deletion numba/targets/listobj.py
@@ -1035,7 +1035,7 @@ def list_reverse_impl(lst):

def load_sorts():
"""
- Load quicksort lazily, to avoid circular imports accross the jit() global.
+ Load quicksort lazily, to avoid circular imports across the jit() global.
"""
g = globals()
if g['_sorting_init']:

2 changes: 1 addition & 1 deletion numba/tests/test_array_methods.py
@@ -652,7 +652,7 @@ def check_arr(arr, layout=False):
cres = compile_isolated(pyfunc, (typeof(arr), typeof(x), typeof(y)))
expected = pyfunc(arr, x, y)
got = cres.entry_point(arr, x, y)
- # Contiguity of result varies accross Numpy versions, only
+ # Contiguity of result varies across Numpy versions, only
# check contents. NumPy 1.11+ seems to stabilize.
if numpy_version < (1, 11):
self.assertEqual(got.dtype, expected.dtype)

2 changes: 1 addition & 1 deletion numba/tests/test_dispatcher.py
@@ -1340,7 +1340,7 @@ def _test_pycache_fallback(self):
mod = self.import_module()
f = mod.add_usecase
# Remove this function's cache files at the end, to avoid accumulation
- # accross test calls.
+ # across test calls.
self.addCleanup(shutil.rmtree, f.stats.cache_path, ignore_errors=True)

self.assertPreciseEqual(f(2, 3), 6)

2 changes: 1 addition & 1 deletion numba/tests/test_listobject.py
@@ -3,7 +3,7 @@
The tests here should exercise everything within an `@njit` context.
Importantly, the tests should not return a typed list from within such a
context as this would require code from numba/typed/typedlist.py (this is
- tested seperately). Tests in this file build on each other in the order of
+ tested separately). Tests in this file build on each other in the order of
writing. For example, the first test, tests the creation, append and len of the
list. These are the barebones to do anything useful with a list. The subsequent
test for getitem assumes makes use of these three operations and therefore

2 changes: 1 addition & 1 deletion numba/tests/test_parfors.py
@@ -2959,7 +2959,7 @@ def assert_fusion_equivalence(self, got, expected):
self.assertEqual(a, b)

def _fusion_equivalent(self, thing):
- # parfors indexes the Parfors class instance id's from whereever the
+ # parfors indexes the Parfors class instance id's from wherever the
# internal state happens to be. To assert fusion equivalence we just
# check that the relative difference between fusion adjacency lists
# is the same. For example:

9 changes: 6 additions & 3 deletions numba/tests/test_typedlist.py
@@ -187,7 +187,8 @@ def test_getitem_slice(self):
""" Test getitem using a slice.

This tests suffers from combinatorial explosion, so we parametrize it
- and compare results agains the regular list in a quasi fuzzing approach.
+ and compare results against the regular list in a quasi fuzzing
+ approach.

"""
# initialize regular list
@@ -234,7 +235,8 @@ def test_setitem_slice(self):
""" Test setitem using a slice.

This tests suffers from combinatorial explosion, so we parametrize it
- and compare results agains the regular list in a quasi fuzzing approach.
+ and compare results against the regular list in a quasi fuzzing
+ approach.

"""

@@ -370,7 +372,8 @@ def test_delitem_slice(self):
""" Test delitem using a slice.

This tests suffers from combinatorial explosion, so we parametrize it
- and compare results agains the regular list in a quasi fuzzing approach.
+ and compare results against the regular list in a quasi fuzzing
+ approach.

"""

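The "quasi fuzzing" wording in these docstrings just means mirroring every slice operation on a plain Python list and checking the results agree; a condensed sketch of the idea (not the actual parametrized tests, and assuming typed.List slice support from the interpreter):

```python
import itertools
from numba.typed import List

rl = list(range(10))     # reference regular list
tl = List()              # typed list under test
for i in rl:
    tl.append(i)

# Compare getitem-with-slice against the regular list over many parameter combinations.
for start, stop in itertools.product(range(-12, 12), repeat=2):
    for step in (1, 2, 3):
        assert list(tl[start:stop:step]) == rl[start:stop:step]
print("all slice results match")
```
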
2 changes: 1 addition & 1 deletion numba/tests/test_ufuncs.py
@@ -1212,7 +1212,7 @@ def test_divide_array_op(self):
@tag('important')
def test_floor_divide_array_op(self):
# Avoid floating-point zeros as x // 0.0 can have varying results
- # depending on the algorithm (which changed accross Numpy versions)
+ # depending on the algorithm (which changed across Numpy versions)
self.inputs = [
(np.uint32(1), types.uint32),
(np.int32(-2), types.int32),

2 changes: 1 addition & 1 deletion numba/types/functions.py
@@ -146,7 +146,7 @@ def get_call_type(self, context, args, kws):
if len(failures) == 0:
raise AssertionError("Internal Error. "
"Function resolution ended with no failures "
- "or successfull signature")
+ "or successful signature")
failures.raise_error()

def get_call_signatures(self):

2 changes: 1 addition & 1 deletion numba/types/npytypes.py
@@ -519,7 +519,7 @@ class NestedArray(Array):
"""
A NestedArray is an array nested within a structured type (which are "void"
type in NumPy parlance). Unlike an Array, the shape, and not just the number
- of dimenions is part of the type of a NestedArray.
+ of dimensions is part of the type of a NestedArray.
"""

def __init__(self, dtype, shape):

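A NestedArray corresponds to a fixed-shape array field inside a NumPy structured ("void") dtype, which is why the shape has to be part of the type; a small NumPy-only illustration:

```python
import numpy as np

# A structured dtype whose field 'vec' is an array with the fixed shape (3,).
rec_dtype = np.dtype([('x', np.float64), ('vec', np.float64, (3,))])
arr = np.zeros(4, dtype=rec_dtype)

print(arr['vec'].shape)   # (4, 3): the (3,) shape is baked into the field's type
```
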
2 changes: 1 addition & 1 deletion numba/unsafe/__init__.py
@@ -1,5 +1,5 @@
"""
- This subpackage is intented for low-level extension developers and compiler
+ This subpackage is intended for low-level extension developers and compiler
developers. Regular user SHOULD NOT use code in this module.

This contains compilable utility functions that can interact directly with

2 changes: 1 addition & 1 deletion numba/withcontexts.py
@@ -41,7 +41,7 @@ def typeof_contextmanager(val, c):
def _get_var_parent(name):
"""Get parent of the variable given its name
"""
- # If not a temprary variable
+ # If not a temporary variable
if not name.startswith('$'):
# Return the base component of the name
return name.split('.', )[0]

2 changes: 1 addition & 1 deletion tutorials/Numpy and numba.ipynb
@@ -1094,7 +1094,7 @@
"\n",
"- elements in a row of the first operand *must equal* the elements in a column of the second operand. Both are 'n'.\n",
"\n",
- "As you can see, the arity of the dimensions of the result can be infered from the source operands:\n",
+ "As you can see, the arity of the dimensions of the result can be inferred from the source operands:\n",
"\n",
"- Result will have as many rows as rows has the first operand. Both are 'm'.\n",
"\n",

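In plain NumPy terms the notebook's bookkeeping is just: an (m, n) operand times an (n, p) operand gives an (m, p) result, e.g.:

```python
import numpy as np

a = np.ones((4, 3))   # m = 4 rows, n = 3 columns
b = np.ones((3, 5))   # n = 3 rows, p = 5 columns

c = a @ b             # the inner dimensions must match (both are 'n')
print(c.shape)        # (4, 5): m rows from a, p columns from b
```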