Forbid trailing whitespace (#53406)
Summary:
Context: #53299 (comment)

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```
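
For reference, `gsed` here is GNU sed as installed by Homebrew; macOS ships BSD sed, whose `-i` flag needs a (possibly empty) backup-suffix argument. On Linux, where GNU sed is the default, a sketch of the equivalent cleanup (same pattern and exclusions, not part of this diff) would be:
```
# Hypothetical Linux equivalent: GNU sed is the system default, so plain
# `sed -i` replaces `gsed -i`. Strips trailing spaces from every tracked
# text file outside the excluded directories.
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs sed -i 's/ *$//'
```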

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: #53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
samestep authored and facebook-github-bot committed Mar 6, 2021
1 parent cab2689 commit 8c798e0
Showing 238 changed files with 799 additions and 798 deletions.
2 changes: 1 addition & 1 deletion .circleci/scripts/binary_ios_test.sh
@@ -24,6 +24,6 @@ rm cert.txt
if ! [ -x "$(command -v xcodebuild)" ]; then
echo 'Error: xcodebuild is not installed.'
exit 1
fi
fi
PROFILE=PyTorch_CI_2021
ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
3 changes: 3 additions & 0 deletions .github/workflows/lint.yml
@@ -40,6 +40,9 @@ jobs:
rm -r "shellcheck-${scversion}"
shellcheck --version
.jenkins/run-shellcheck.sh
- name: Ensure no trailing spaces
run: |
(! git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' || (echo "The above files have trailing spaces; please remove them"; false))
- name: Ensure no tabs
run: |
(! git grep -I -l $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above files have tabs; please convert them to spaces"; false))
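
For context, the trailing-spaces step above works because `git grep -l` exits with status 0 when it finds at least one match: the leading `!` inverts that, so the step fails (after printing the offending files and the hint) exactly when some tracked file still contains a trailing space, and passes when the grep comes up empty. A minimal local sketch of the same guard, assuming it is run from the repository root, is:
```
#!/usr/bin/env bash
# Local approximation of the CI check: list tracked files (excluding
# **/contrib/** and third_party) that contain a trailing space; if any
# are found, print the hint and fail.
if git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party'; then
  echo "The above files have trailing spaces; please remove them"
  exit 1
fi
```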
2 changes: 1 addition & 1 deletion .jenkins/caffe2/bench.sh
@@ -21,7 +21,7 @@ if (( $num_gpus == 0 )); then
fi
if (( $num_gpus >= 1 )); then
"$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 1
# Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run.
# Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run.
# We can resume when we (1) bindmount the miopen cache folder in jenkins; (2) install the pre-compiled miopen kernel library in the docker
# "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 256 --epoch_size 25600 --num_epochs 2 --num_gpus 1 --float16_compute --dtype float16
fi
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -159,7 +159,7 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
check whether your Git local or global config file contains any `submodule.*` settings. If yes, remove them and try again.
(please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-submoduleltnamegturl) for more info).

- If you encountered error such as
- If you encountered error such as
```
fatal: unable to access 'https://github.com/pybind11/pybind11.git': could not load PEM client certificate ...
```
@@ -169,11 +169,11 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
openssl x509 -noout -in <cert_file> -dates
```

- If you encountered error that some third_party modules are not checkout correctly, such as
- If you encountered error that some third_party modules are not checkout correctly, such as
```
Could not find .../pytorch/third_party/pybind11/CMakeLists.txt
```
remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again.
remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again.

## Nightly Checkout & Pull

8 changes: 4 additions & 4 deletions GLOSSARY.md
@@ -1,4 +1,4 @@
# PyTorch Glossary
# PyTorch Glossary

- [PyTorch Glossary](#pytorch-glossary)
- [Operation and Kernel](#operation-and-kernel)
@@ -39,7 +39,7 @@ For example, this
to create Custom Operations.

## Kernel
Implementation of a PyTorch operation, specifying what should be done when an
Implementation of a PyTorch operation, specifying what should be done when an
operation executes.

## Compound Operation
@@ -57,7 +57,7 @@ Same as Compound Operation.
## Leaf Operation
An operation that's considered a basic operation, as opposed to a Compound
Operation. Leaf Operation always has dispatch functions defined, usually has a
derivative function defined as well.
derivative function defined as well.

## Device Kernel
Device-specific kernel of a leaf operation.
@@ -79,4 +79,4 @@ using just-in-time compilation.

## Scripting
Using `torch.jit.script` on a function to inspect source code and compile it as
TorchScript code.
TorchScript code.
2 changes: 1 addition & 1 deletion aten/src/ATen/BatchingRegistrations.cpp
@@ -300,7 +300,7 @@ Tensor trace_backward_batching_rule(const Tensor& grad, IntArrayRef input_sizes)
auto grad_input = at::zeros(grad_physical.getPhysicalShape(input_sizes), grad.options());
// Batched Diagonal View
auto grad_input_diag = at::diagonal(grad_input, /*offset*/0, /*dim1*/-2, /*dim2*/-1);
// Append a dimension of size one to the grad output
// Append a dimension of size one to the grad output
auto grad_physical_tensor = grad_physical.tensor().unsqueeze(-1);
grad_input_diag.copy_(grad_physical_tensor);
return grad_physical.getPhysicalToLogicalMap().apply(grad_input);
4 changes: 2 additions & 2 deletions aten/src/ATen/CPUGeneratorImpl.cpp
@@ -38,7 +38,7 @@ struct CPUGeneratorImplStateLegacy {
* new data introduced in at::CPUGeneratorImpl and the legacy state. It is used
* as a helper for torch.get_rng_state() and torch.set_rng_state()
* functions.
*/
*/
struct CPUGeneratorImplState {
CPUGeneratorImplStateLegacy legacy_pod;
float next_float_normal_sample;
@@ -119,7 +119,7 @@ uint64_t CPUGeneratorImpl::seed() {
* must be a strided CPU byte tensor and of the same size as either
* CPUGeneratorImplStateLegacy (for legacy CPU generator state) or
* CPUGeneratorImplState (for new state).
*
*
* FIXME: Remove support of the legacy state in the future?
*/
void CPUGeneratorImpl::set_state(const c10::TensorImpl& new_state) {
2 changes: 1 addition & 1 deletion aten/src/ATen/SparseTensorUtils.h
@@ -94,7 +94,7 @@ TORCH_API Tensor flatten_indices(const Tensor& indices, IntArrayRef full_size, b
// new_indices = [ 3, 1, 3 ] # uncoalesced
TORCH_API Tensor flatten_indices_by_dims(const Tensor& indices, const IntArrayRef& sizes, const IntArrayRef& dims_to_flatten);

// Find the CSR representation for a row `indices` from the COO format
// Find the CSR representation for a row `indices` from the COO format
TORCH_API Tensor coo_to_csr(const int64_t* indices, int64_t dim, int64_t nnz);

}} // namespace at::sparse
2 changes: 1 addition & 1 deletion aten/src/ATen/Version.cpp
@@ -114,7 +114,7 @@ std::string used_cpu_capability() {
case native::CPUCapability::AVX2:
ss << "AVX2";
break;
#endif
#endif
default:
break;
}
2 changes: 1 addition & 1 deletion aten/src/ATen/VmapTransforms.h
@@ -47,7 +47,7 @@ using VmapDimVector = SmallVector<int64_t, kVmapStaticDimVecSize>;
// argument.

// VmapTransform for operators that take tensors with multiple batch dims.
// Given one or more logical views on Tensors, `logicalToPhysical`
// Given one or more logical views on Tensors, `logicalToPhysical`
// permutes all of the batch dims to the front of the tensor, aligns
// and expands the batch dims to match each other (according to their `level`),
// and returns a VmapPhysicalView on the tensor(s).
2 changes: 1 addition & 1 deletion aten/src/ATen/core/Generator.h
@@ -143,7 +143,7 @@ namespace detail {
/**
* Helper function for checking the validity of new random generator
* state. Right now following conditions are checked:
*
*
* - The new state tensor must be a torch.ByteTensor
* - Data of the new state tensor must be contiguous
*/
14 changes: 7 additions & 7 deletions aten/src/ATen/core/PhiloxRNGEngine.h
@@ -40,13 +40,13 @@ typedef at::detail::Array<float, 2> FLOAT2;
* Note that currently this implementation of the philox engine is not used
* anywhere except for tests in cpu_generator_test.cpp. However, this engine
* will replace curandStatePhilox4_32_10_t in the future.
*
*
* The philox engine takes a seed value, a subsequeunce
* for starting the generation and an offset for the subsequence.
* Think of this engine as an algorithm producing a huge array. We are
* parallelizing this array by partitioning the huge array and assigning
* a thread index to each partition. In other words, each seed value
* (there are 2^64 possible seed values) gives a sub array of size
* Think of this engine as an algorithm producing a huge array. We are
* parallelizing this array by partitioning the huge array and assigning
* a thread index to each partition. In other words, each seed value
* (there are 2^64 possible seed values) gives a sub array of size
* 2^128 (each element in that array is a 128 bit number). Reasoning
* behind the array being of size 2^128 is, there are 2^64 possible
* thread index value and there is an array of size 2^64 for each of
@@ -59,9 +59,9 @@ typedef at::detail::Array<float, 2> FLOAT2;
* seed: Seed values could be any number from 0 to 2^64-1.
* subsequence: Subsequence is just the cuda thread indexing with:
* - blockIdx.x * blockDim.x + threadIdx.x
* offset: The offset variable in PhiloxEngine decides how many 128-bit
* offset: The offset variable in PhiloxEngine decides how many 128-bit
* random numbers to skip (i.e. how many groups of 4, 32-bit numbers to skip)
* and hence really decides the total number of randoms that can be achieved
* and hence really decides the total number of randoms that can be achieved
* for the given subsequence.
*/

2 changes: 0 additions & 2 deletions aten/src/ATen/core/op_registration/README.md
@@ -254,5 +254,3 @@ Also, there's some requirements on the operator schema for it to be callable fro
* Except for `Tensor` or `Tensor[]`, only arguments of type `int`, `double` and `bool` are supported. These can be in any position in the argument list and will be read from the caffe2 operator arguments, based on the argument name in the operator schema.
* We do not support lists (`int[]`, `double[]` or `bool[]`) or optionals (`int?`, `double?`, `bool?`) yet.
* The operator must return a single `Tensor` or multiple tensors as in `(Tensor, Tensor, Tensor)`. It cannot return a list `Tensor[]`, optional `Tensor?` or any primitive types.


48 changes: 24 additions & 24 deletions aten/src/ATen/core/type.cpp
@@ -1124,12 +1124,12 @@ std::string ClassType::getForwardPreHookErrorMessage(int pre_hook_idx) const {
const FunctionSchema& forward_schema = getMethod("forward").getSchema();
std::string input_types = getSchemaInputTypesString(forward_schema);
const std::vector<Argument>& forward_args = forward_schema.arguments();

std::string single_output = "";
if (forward_args.size() == 2 &&
forward_args[1].type()->cast<TupleType>() == nullptr) {
// if the output type is a single tuple, it needs to be wrapped in an outer tuple
// to match eager's behavior
// to match eager's behavior
single_output = ", '" + forward_args[1].type()->annotation_str() + "',";
}
std::string pre_hook_schema =
@@ -1138,17 +1138,17 @@ std::string ClassType::getForwardPreHookErrorMessage(int pre_hook_idx) const {
"This error occured while scripting the forward pre-hook '" +
pre_hook_name + "' on module '" + name()->name() +
"'. If you did not want to script this pre-hook remove it from the "
"original NN module before scripting. Pre-hooks for module '" +
name()->name() + "' are expected to have the following signature: "
+ pre_hook_schema + " with a return type of either 'None'" +
"original NN module before scripting. Pre-hooks for module '" +
name()->name() + "' are expected to have the following signature: "
+ pre_hook_schema + " with a return type of either 'None'" +
single_output + " or 'Tuple[" + input_types + "]'.";
return return_string;
}

std::string ClassType::getForwardHookErrorMessage(int hook_idx) const {
const std::string& hook_name = forward_hooks_[hook_idx]->name();
const FunctionSchema& forward_schema = getMethod("forward").getSchema();
std::string input_types = getSchemaInputTypesString(forward_schema);
std::string input_types = getSchemaInputTypesString(forward_schema);

// create expected output types string
const Argument& pre_output =
Expand All @@ -1160,33 +1160,33 @@ std::string ClassType::getForwardHookErrorMessage(int hook_idx) const {
std::string hook_schema = hook_name + "(self, input: Tuple[" +
input_types + "], output: " + output_types + ")";
std::string return_string =
"This error occured while scripting the forward hook '"
"This error occured while scripting the forward hook '"
+ hook_name + "' on module " + name()->name() +
". If you did not want to script this hook remove it from" +
" the original NN module before scripting. This hook was" +
" expected to have the following signature: " + hook_schema +
". The type of the output arg is the returned type from" +
" either the forward method or the previous hook if it exists. " +
"Note that hooks can return anything, but if the hook is " +
". The type of the output arg is the returned type from" +
" either the forward method or the previous hook if it exists. " +
"Note that hooks can return anything, but if the hook is " +
"on a submodule the outer module is expecting" +
" the same return type as the submodule's forward.";
return return_string;
}

void checkForwardHookInputArguments(
const FunctionSchema& forward_schema,
const FunctionSchema& hook_schema,
const std::string& hook_id,
const FunctionSchema& forward_schema,
const FunctionSchema& hook_schema,
const std::string& hook_id,
const std::string& hook_err_msg) {
// check for proper tuple input types
const std::vector<Argument>& forward_args = forward_schema.arguments();
const Argument input_arg = hook_schema.arguments()[1];
TORCH_CHECK(
input_arg.type()->cast<TupleType>() != nullptr,
input_arg.type()->cast<TupleType>() != nullptr,
hook_id,
"expected the input argument to be typed as a Tuple but found type: '",
input_arg.type()->annotation_str(),
"' instead.\n",
input_arg.type()->annotation_str(),
"' instead.\n",
hook_err_msg
);

@@ -1229,7 +1229,7 @@ void checkForwardHookInputArguments(
}

void ClassType::checkForwardPreHookSchema(
int pre_hook_idx,
int pre_hook_idx,
const FunctionSchema& pre_hook_schema) const {
const torch::jit::Function* pre_hook = forward_pre_hooks_[pre_hook_idx];
std::string hook_id =
@@ -1261,17 +1261,17 @@ void ClassType::checkForwardPreHookSchema(
pre_hook_err_msg
);
const Argument return_arg = pre_hook_schema.returns()[0];
std::string wrong_type_returned_err_msg = hook_id +
std::string wrong_type_returned_err_msg = hook_id +
"returned the wrong type of: '" +
return_arg.type()->annotation_str() + "'.";

if (return_arg.type()->kind() == NoneType::get()->kind()) {
return;
}
if (forward_args.size() == 2 && *forward_args[1].type() == *return_arg.type()) {
// TORCH_CHECK below is for the edge case where forward's input is a tuple and the
// TORCH_CHECK below is for the edge case where forward's input is a tuple and the
// pre-hook returns a matching tuple. Eager doesn't support this- the working eager return
// for a tuple type is the forward's input tuple wrapped inside of another tuple.
// for a tuple type is the forward's input tuple wrapped inside of another tuple.
TORCH_CHECK(
return_arg.type()->cast<TupleType>() == nullptr,
wrong_type_returned_err_msg,
@@ -1316,7 +1316,7 @@ void ClassType::checkForwardPreHookSchema(
for (int i = 1; i < forward_args.size(); ++i) {
if (*forward_args[i].type() != *return_tuple_types[i - 1]) {
TORCH_CHECK(
false,
false,
wrong_type_returned_err_msg,
" The returned tuple contains the wrong inner types.\n",
pre_hook_err_msg);
Expand All @@ -1325,7 +1325,7 @@ void ClassType::checkForwardPreHookSchema(
}

void ClassType::checkForwardHookSchema(
int hook_idx,
int hook_idx,
const FunctionSchema& hook_schema) const {
const torch::jit::Function* hook = forward_hooks_[hook_idx];
std::string hook_id =
@@ -1388,8 +1388,8 @@ torch::jit::Function& ClassType::getMethod(const std::string& name) const {
torch::jit::Function* ClassType::findHook(const std::string& name) const {
auto hook = findForwardHook(name);
if (hook == nullptr) {
hook = findForwardPreHook(name);
}
hook = findForwardPreHook(name);
}
return hook;
}

2 changes: 1 addition & 1 deletion aten/src/ATen/cpu/vec256/vec256_double.h
@@ -113,7 +113,7 @@ template <> class Vec256<double> {
const auto not_nan_mask = _mm256_cmp_pd(values, values, _CMP_EQ_OQ);
const auto nan_mask = _mm256_cmp_pd(not_nan_mask, zero_vec, _CMP_EQ_OQ);
const auto pi = _mm256_set1_pd(c10::pi<double>);

const auto neg_mask = _mm256_cmp_pd(values, zero_vec, _CMP_LT_OQ);
auto angle = _mm256_blendv_pd(zero_vec, pi, neg_mask);
angle = _mm256_blendv_pd(angle, nan_vec, nan_mask);
2 changes: 1 addition & 1 deletion aten/src/ATen/cpu/vec256/vec256_float.h
@@ -120,7 +120,7 @@ template <> class Vec256<float> {
const auto not_nan_mask = _mm256_cmp_ps(values, values, _CMP_EQ_OQ);
const auto nan_mask = _mm256_cmp_ps(not_nan_mask, zero_vec, _CMP_EQ_OQ);
const auto pi = _mm256_set1_ps(c10::pi<float>);

const auto neg_mask = _mm256_cmp_ps(values, zero_vec, _CMP_LT_OQ);
auto angle = _mm256_blendv_ps(zero_vec, pi, neg_mask);
angle = _mm256_blendv_ps(angle, nan_vec, nan_mask);
2 changes: 1 addition & 1 deletion aten/src/ATen/cpu/vec256/vsx/vec256_complex_double_vsx.h
@@ -364,7 +364,7 @@ class Vec256<ComplexDbl> {
}

Vec256<ComplexDbl> sqrt() const {
return map(std::sqrt);
return map(std::sqrt);
}

Vec256<ComplexDbl> reciprocal() const {
2 changes: 1 addition & 1 deletion aten/src/ATen/cpu/vec256/vsx/vec256_complex_float_vsx.h
@@ -417,7 +417,7 @@ class Vec256<ComplexFlt> {
}

Vec256<ComplexFlt> sqrt() const {
return map(std::sqrt);
return map(std::sqrt);
}

Vec256<ComplexFlt> reciprocal() const {
6 changes: 3 additions & 3 deletions aten/src/ATen/cpu/vec256/vsx/vec256_double_vsx.h
@@ -82,7 +82,7 @@ class Vec256<double> {
blend(const Vec256<double>& a, const Vec256<double>& b) {
return { a._vec0, b._vec1 };
}


template <int64_t mask>
static std::enable_if_t<blendChoiceDbl(mask) == 4, Vec256<double>> C10_ALWAYS_INLINE
@@ -206,7 +206,7 @@ class Vec256<double> {
for (int i = 0; i < size()/2; i++) {
ret._vec0[i] = f(_vec0[i], other._vec0[i]);
}
for (int i = 0; i < size()/2; i++) {
for (int i = 0; i < size()/2; i++) {
ret._vec1[i] = f(_vec1[i], other._vec1[i]);
}
return ret;
@@ -314,7 +314,7 @@ class Vec256<double> {
Vec256<double> C10_ALWAYS_INLINE sqrt() const {
return {vec_sqrt(_vec0), vec_sqrt(_vec1)};
}
Vec256<double> C10_ALWAYS_INLINE reciprocal() const {
Vec256<double> C10_ALWAYS_INLINE reciprocal() const {
return {
vec_div(vd_one, _vec0), // vec_re(_vec0) is estimated one.
vec_div(vd_one, _vec1)};