[cleanup] Fix typos in source code (#1693)
- Clarify some unclear comments
- Fix typos with `typos -w`
- Fix remaining typos manually

While the `typos` spell checker could report these typos, it couldn't
automatically fix them, since there are multiple possible corrections,
e.g. `agains` could be `again` or `against`.
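
A rough sketch of that workflow, assuming the crate-ci `typos` checker is
installed and run from the repository root (commands shown for illustration;
exact output will vary):

    # Let the checker rewrite the corrections it considers unambiguous.
    typos -w

    # A plain run only reports the remaining ambiguous typos (e.g. `agains`),
    # which were then fixed by hand.
    typos
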
not-my-profile committed Aug 12, 2023
1 parent b22dccf commit ca86826
Showing 172 changed files with 225 additions and 225 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -86,7 +86,7 @@ BYTECODE_ZIP := bytecode-opy.zip

 HAVE_OBJCOPY := $(shell command -v objcopy 2>/dev/null)

-# For faster tesing of builds
+# For faster testing of builds
 #default: _bin/oil.ovm-dbg

 # What the end user should build when they type 'make'.
2 changes: 1 addition & 1 deletion README.md
@@ -274,7 +274,7 @@ These files make the slow "Oils Python" build, which is very different than the
 configure
 install

-These files are for teh C++ `oils-for-unix` tarball (in progress):
+These files are for the C++ `oils-for-unix` tarball (in progress):

 _build/
 oils.sh
2 changes: 1 addition & 1 deletion asdl/gen_cpp.py
@@ -342,7 +342,7 @@ def Emit(s, depth=depth):
 # This is the base class.
 Emit('class %(sum_name)s_t {')
 # Can't be constructed directly. Note: this shows up in uftrace in debug
-# mode, e.g. when we intantiate Token. Do we need it?
+# mode, e.g. when we instantiate Token. Do we need it?
 Emit(' protected:')
 Emit(' %s_t() {' % sum_name)
 Emit(' }')
2 changes: 1 addition & 1 deletion asdl/gen_python.py
@@ -522,7 +522,7 @@ def VisitCompoundSum(self, sum, sum_name, depth):
 if variant.shared_type:
 continue # Don't generate a class for shared types.
 if len(variant.fields) == 0:
-# We must use the old-style nameing here, ie. command__NoOp, in order
+# We must use the old-style naming here, ie. command__NoOp, in order
 # to support zero field variants as constants.
 class_name = '%s__%s' % (sum_name, variant.name)
 self._GenClass(variant, class_name, (sum_name + '_t',), i + 1)
2 changes: 1 addition & 1 deletion benchmarks/osh-parser.sh
@@ -427,7 +427,7 @@ EOF
 fi

 if test -f $in_dir/elapsed.csv; then
-cmark <<< '#### Elasped Time (milliseconds)'
+cmark <<< '#### Elapsed Time (milliseconds)'
 echo
 csv2html $in_dir/elapsed.csv
 fi
2 changes: 1 addition & 1 deletion benchmarks/pypy.sh
@@ -33,7 +33,7 @@ parse-with-cpython() {

 # ~4.8 seconds
 # NOTE: We could run it in a loop to see if the JIT warms up, but that would
-# only be for curiousity. Most shell processes are short-lived, so it's the
+# only be for curiosity. Most shell processes are short-lived, so it's the
 # wrong thing to optimize for.
 parse-with-pypy() {
 parse-abuild $PYPY
4 changes: 2 additions & 2 deletions benchmarks/report.R
@@ -5,7 +5,7 @@
 # Usage:
 # benchmarks/report.R OUT_DIR [TIMES_CSV...]

-# Supress warnings about functions masked from 'package:stats' and 'package:base'
+# Suppress warnings about functions masked from 'package:stats' and 'package:base'
 # filter, lag
 # intersect, setdiff, setequal, union
 library(dplyr, warn.conflicts = FALSE)
@@ -1198,7 +1198,7 @@ Percent = function(n, total) {
 }

 PrettyPrintLong = function(d) {
-tr = t(d) # tranpose
+tr = t(d) # transpose

 row_names = rownames(tr)

2 changes: 1 addition & 1 deletion benchmarks/vm-baseline.sh
@@ -28,7 +28,7 @@ measure() {
 # TODO:
 # print-tasks should:
 # - use the whole shell path like _bin/osh
-# - the host name shoudl be a column
+# - the host name should be a column
 # - the join ID can be a file, and construct the task name from that
 # - Then maybe use tsv_columns_from_files.py like we do with cachegrind

4 changes: 2 additions & 2 deletions build/common.sh
@@ -37,10 +37,10 @@ readonly CLANGXX=$CLANG_DIR/bin/clang++
 # I'm not sure if there's a GCC version of this?
 export ASAN_SYMBOLIZER_PATH=$CLANG_DIR_RELATIVE/bin/llvm-symbolizer

-# ThreadSanitizer doesn't always give us all locaitons, but this doesn't help
+# ThreadSanitizer doesn't always give us all locations, but this doesn't help
 # export TSAN_SYMBOLIZER_PATH=$ASAN_SYMBOLIZER_PATH

-# equivalent of 'cc' for C++ langauge
+# equivalent of 'cc' for C++ language
 # https://stackoverflow.com/questions/172587/what-is-the-difference-between-g-and-gcc
 CXX=${CXX:-'c++'}

4 changes: 2 additions & 2 deletions build/cpython_defs.py
@@ -388,8 +388,8 @@ def __call__(self, rel_path, def_name, method_name):

 #log('= %s %s', def_name, method_name)

-# If it doesn't appear in the .py source, it can't be used. (Execption: it
-# coudl be used in C source with dynamic lookup? But I don't think CPython
+# If it doesn't appear in the .py source, it can't be used. (Exception: it
+# could be used in C source with dynamic lookup? But I don't think CPython
 # does that.)
 #if method_name not in self.py_names:
 if 0:
2 changes: 1 addition & 1 deletion build/deps.sh
@@ -96,7 +96,7 @@ install-ubuntu-packages() {
 ### Packages for build/py.sh all, building wedges, etc.

 # python2-dev is no longer available on Debian 12
-# python-dev als seems gone
+# python-dev also seems gone
 #
 # g++: essential
 # libreadline-dev: needed for the build/prepare.sh Python build.
2 changes: 1 addition & 1 deletion build/dev-shell-test.sh
@@ -50,7 +50,7 @@ test-cli() {
 test-python2() {
 banner "Testing python2"

-# Can't do this beacuse of vendor/typing.py issue.
+# Can't do this because of vendor/typing.py issue.
 # log "Testing oils_for_unix.py"
 # bin/oils_for_unix.py --help | head -n 2

2 changes: 1 addition & 1 deletion build/ovm-actions.sh
@@ -182,7 +182,7 @@ join-modules() {

 # Filter out comments, print the first line.
 #
-# TODO: I don't want to depend on egrep and GNU flags on the target sytems?
+# TODO: I don't want to depend on egrep and GNU flags on the target systems?
 # Ship this file I guess.
 egrep --no-filename --only-matching '^[a-zA-Z0-9_\.]+' $static $discovered \
 | sort | uniq
2 changes: 1 addition & 1 deletion core/alloc.py
@@ -205,7 +205,7 @@ def SnipCodeString(self, left, right):
 Used for ALIAS expansion, which happens in the PARSER.
-The argument to aliases can span multiple lines, like htis:
+The argument to aliases can span multiple lines, like this:
 $ myalias '1 2 3'
 """
2 changes: 1 addition & 1 deletion core/comp_ui.py
@@ -572,7 +572,7 @@ def InitReadline(
 # This determines the boundaries you get back from get_begidx() and
 # get_endidx() at completion time!
 # We could be more conservative and set it to ' ', but then cases like
-# 'ls|w<TAB>' would try to complete the whole thing, intead of just 'w'.
+# 'ls|w<TAB>' would try to complete the whole thing, instead of just 'w'.
 #
 # Note that this should not affect the OSH completion algorithm. It only
 # affects what we pass back to readline and what readline displays to the
2 changes: 1 addition & 1 deletion core/dev.py
@@ -57,7 +57,7 @@ class CrashDumper(object):
 debug info for the source? Or does that come elsewhere?
-Yeah I think you sould have two separate files.
+Yeah I think you should have two separate files.
 - debug info for a given piece of code (needs hash)
 - this could just be the raw source files? Does it need anything else?
 - I think it needs a hash so the VM dump can refer to it.
4 changes: 2 additions & 2 deletions core/main_loop.py
@@ -297,7 +297,7 @@ def Interactive(flag, cmd_ev, c_parser, display, prompt_plugin, waiter, errfmt):

 # TODO: Replace this with a shell hook? with 'trap', or it could be just
 # like command_not_found. The hook can be 'echo $?' or something more
-# complicated, i.e. with timetamps.
+# complicated, i.e. with timestamps.
 if flag.print_status:
 print('STATUS\t%r' % status)

@@ -311,7 +311,7 @@ def Batch(cmd_ev, c_parser, errfmt, cmd_flags=0):
 Returns:
 int status, e.g. 2 on parse error
-Can this be combined with interative loop? Differences:
+Can this be combined with interactive loop? Differences:
 - Handling of parse errors.
 - Have to detect here docs at the end?
8 changes: 4 additions & 4 deletions core/process.py
@@ -290,7 +290,7 @@ def _PushDup(self, fd1, blame_loc):
 fd2 = cast(redir_loc.Fd, UP_loc).fd

 if fd1 == fd2:
-# The user could have asked for it to be open on descrptor 3, but open()
+# The user could have asked for it to be open on descriptor 3, but open()
 # already returned 3, e.g. echo 3>out.txt
 return NO_FD

@@ -1260,7 +1260,7 @@ def StartPipeline(self, waiter):

 pid = proc.StartProcess(trace.PipelinePart)
 if i == 0 and pgid != INVALID_PGID:
-# Mimick bash and use the PID of the first process as the group for the
+# Mimic bash and use the PID of the first process as the group for the
 # whole pipeline.
 pgid = pid

@@ -1528,7 +1528,7 @@ def __init__(self):
 # Counter used to assign IDs to jobs. It is incremented every time a job
 # is created. Once all active jobs are done it is reset to 1. I'm not
 # sure if this reset behavior is mandated by POSIX, but other shells do
-# it, so we mimick for the sake of compatability.
+# it, so we mimic for the sake of compatibility.
 self.job_id = 1

 def AddJob(self, job):
@@ -1738,7 +1738,7 @@ def NumRunning(self):
 Used by 'wait' and 'wait -n'.
 """
 count = 0
-for _, job in iteritems(self.jobs): # mycpp rewite: from itervalues()
+for _, job in iteritems(self.jobs): # mycpp rewrite: from itervalues()
 if job.State() == job_state_e.Running:
 count += 1
 return count
8 changes: 4 additions & 4 deletions core/pyos.py
@@ -46,7 +46,7 @@ def WaitPid(waitpid_options):
 # - The arg -1 makes it like wait(), which waits for any process.
 # - WUNTRACED is necessary to get stopped jobs. What about WCONTINUED?
 # - We don't retry on EINTR, because the 'wait' builtin should be
-# interruptable.
+# interruptible.
 # - waitpid_options can be WNOHANG
 pid, status = posix.waitpid(-1, WUNTRACED | waitpid_options)
 except OSError as e:
@@ -113,7 +113,7 @@ def ReadLine():
 # type: () -> str
 """Read a line from stdin.
-This is a SLOW PYTHON implementation taht calls read(0, 1) too many times. I
+This is a SLOW PYTHON implementation that calls read(0, 1) too many times. I
 tried to write libc.stdin_readline() which uses the getline() function, but
 somehow that makes spec/oil-builtins.test.sh fail. We use Python's
 f.readline() in frontend/reader.py FileLineReader with f == stdin.
@@ -362,8 +362,8 @@ def TakePendingSignals(self):
 # `self.pending_signals`. In the worst case the signal handler might write to
 # `new_queue` and the corresponding trap handler won't get executed
 # until the main loop calls this function again.
-# NOTE: It's important to distinguish between signal-saftey an
-# thread-saftey here. Signals run in the same process context as the main
+# NOTE: It's important to distinguish between signal-safety an
+# thread-safety here. Signals run in the same process context as the main
 # loop, while concurrent threads do not and would have to worry about
 # cache-coherence and instruction reordering.
 new_queue = [] # type: List[int]
2 changes: 1 addition & 1 deletion core/shell.py
@@ -885,7 +885,7 @@ def Main(lang, arg_r, environ, login_shell, loader, readline):
 process.InitInteractiveShell() # Set signal handlers

 # The interactive shell leads a process group which controls the terminal.
-# It MUST give up the termianl afterward, otherwise we get SIGTTIN /
+# It MUST give up the terminal afterward, otherwise we get SIGTTIN /
 # SIGTTOU bugs.
 with process.ctx_TerminalControl(job_control, errfmt):

6 changes: 3 additions & 3 deletions core/state.py
@@ -402,7 +402,7 @@ def PushEval(self):
 node = self._MakeOutputNode()
 self.result_stack = [node]

-self.output = None # remove last reuslt
+self.output = None # remove last result

 def PopEval(self):
 # type: () -> None
@@ -826,7 +826,7 @@ def ShowShoptOptions(self, opt_names):
 # type: (List[str]) -> None
 """For 'shopt -p'."""

-# Respect option gropus.
+# Respect option groups.
 opt_nums = [] # type: List[int]
 for opt_name in opt_names:
 opt_group = consts.OptionGroupNum(opt_name)
@@ -1981,7 +1981,7 @@ def InternalSetGlobal(self, name, new_val):

 def GetValue(self, name, which_scopes=scope_e.Shopt):
 # type: (str, scope_t) -> value_t
-"""Used by the WordEvaluator, ArithEvalutor, ysh/expr_eval.py, etc.
+"""Used by the WordEvaluator, ArithEvaluator, ysh/expr_eval.py, etc.
 TODO:
 - Many of these should be value.Int, not value.Str
2 changes: 1 addition & 1 deletion cpp/fanos_shared.h
@@ -5,7 +5,7 @@
 //
 // This library is shared between cpp/ and pyext/.

-// Callers should initalize
+// Callers should initialize
 // FanosError to { 0, NULL }, and
 // FanosResult to { NULL, FANOS_INVALID_LEN }

4 changes: 2 additions & 2 deletions cpp/frontend_pyreadline.h
@@ -5,13 +5,13 @@

 #include "mycpp/runtime.h"

-// hacky foward decl
+// hacky forward decl
 namespace completion {
 class ReadlineCallback;
 Str* ExecuteReadlineCallback(ReadlineCallback*, Str*, int);
 } // namespace completion

-// hacky foward decl
+// hacky forward decl
 namespace comp_ui {
 class _IDisplay;
 void ExecutePrintCandidates(_IDisplay*, Str*, List<Str*>*, int);
4 changes: 2 additions & 2 deletions cpp/stdlib.cc
@@ -122,7 +122,7 @@ void execve(Str* argv0, List<Str*>* argv, Dict<Str*, Str*>* environ) {
 throw Alloc<OSError>(errno);
 }

-// ::execve() never returns on succcess
+// ::execve() never returns on success
 FAIL(kShouldNotGetHere);
 }

@@ -188,7 +188,7 @@ time_t time() {

 // NOTE(Jesse): time_t is specified to be an arithmetic type by C++. On most
 // systems it's a 64-bit integer. 64 bits is used because 32 will overflow in
-// 2038. Someone on a comittee somewhere thought of that when moving to
+// 2038. Someone on a committee somewhere thought of that when moving to
 // 64-bit architectures to prevent breaking ABI again; on 32-bit systems it's
 // usually 32 bits. Point being, using anything but the time_t typedef here
 // could (unlikely, but possible) produce weird behavior.
4 changes: 2 additions & 2 deletions data_lang/demo.sh
@@ -84,7 +84,7 @@ test-programs() {
 ls --escape
 echo
 # Test out error message
-# It's basicallly correct, but ugly. There are too many segments, and
+# It's basically correct, but ugly. There are too many segments, and
 # there's an unnecessary leading ''.
 # QSN is shorter and more consistent.

@@ -141,7 +141,7 @@ test-errors() {
 grep z "$byte_then_char" || true
 grep z "$char_then_byte" || true

-# python doens't print it smehow?
+# python doesn't print it somehow?
 banner 'python'
 # BUG: Python prints terminal sequences
 #python "$BOLD" || true
2 changes: 1 addition & 1 deletion demo/complete.sh
@@ -39,7 +39,7 @@ audit() {

 echo
 echo --
-# Search for special complation var usage
+# Search for special completion var usage
 grep -E --color 'COMP_[A-Z]+' $file

 echo
2 changes: 1 addition & 1 deletion demo/coproc.sh
@@ -55,7 +55,7 @@ simple-demo() {
 # In ksh or zsh, the pipes to and from the co-process are accessed with >&p
 # and <&p.
 # But in bash, the file descriptors of the pipe from the co-process and the
-# other pipe to the co-proccess are returned in the $COPROC array
+# other pipe to the co-process are returned in the $COPROC array
 # (respectively ${COPROC[0]} and ${COPROC[1]}.

 argv ${COPROC[@]}
2 changes: 1 addition & 1 deletion demo/old/benchmarks-oheap.sh
@@ -37,7 +37,7 @@ encode-all() {
 $0 encode-one
 }

-# Out of curiousity, compress oheap and originals.
+# Out of curiosity, compress oheap and originals.

 compress-oheap() {
 local c_dir=$BASE_DIR/oheap-compressed
2 changes: 1 addition & 1 deletion demo/old/gen_oheap_cpp.py
@@ -21,7 +21,7 @@
 address space. If 1, then we have 16MiB of code. If 4, then we have 64 MiB.
 Everything is decoded on the fly, or is a char*, which I don't think has to be
-aligned (because the natural alignment woudl be 1 byte anyway.)
+aligned (because the natural alignment would be 1 byte anyway.)
 """
 from __future__ import print_function

4 changes: 2 additions & 2 deletions demo/old/old_code.py
@@ -68,14 +68,14 @@ def _Led(self, token):
 #


-# Possible optmization for later:
+# Possible optimization for later:
 def _TreeCount(tree_word):
 """Count output size for allocation purposes.
 We can count the number of words expanded into, and the max number of parts
 in a word.
-Every word can have a differnt number of parts, e.g. -{'a'b,c}- expands into
+Every word can have a different number of parts, e.g. -{'a'b,c}- expands into
 words of 4 parts, then 3 parts.
 """
 # TODO: Copy the structure of _BraceExpand and _BraceExpandOne.
