Update on "unbreak mypy torch/quantization"
Summary:

Somehow `mypy torch/quantization` got broken in the past couple of days:
https://gist.github.com/vkuzo/07af454246f0a68e6fa8929beeec7e0d.
I didn't see any relevant PRs other than
#47725, which doesn't seem
related. The error doesn't look real, as the arguments to
`_cudnn_rnn_flatten_weight` appear correct. For now, we
ignore the failure so that we have a clean `mypy` run on
`torch/quantization`.
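
The fix is mypy's standard per-line suppression. As a standalone sketch of
the mechanism (hypothetical `Loader` class and `[attr-defined]` error code,
not the exact code touched in this diff):

```
class Loader:
    pass

def get_data(loader: Loader) -> None:
    # `fetch` is not declared on Loader, so mypy reports an [attr-defined]
    # error here; the targeted comment silences only this line while the
    # rest of the file stays type-checked.
    loader.fetch()  # type: ignore[attr-defined]
```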

Test Plan:

```
mypy torch/quantization
```

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D25616972](https://our.internmc.facebook.com/intern/diff/D25616972)

[ghstack-poisoned]
vkuzo committed Dec 21, 2020
2 parents cdbe662 + 7ed140a commit 371a8d0
Showing 347 changed files with 3,913 additions and 1,686 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -639,7 +639,7 @@ jobs:
 export CIRCLE_JOB="$CIRCLE_JOB"
 export CIRCLE_WORKFLOW_ID="$CIRCLE_WORKFLOW_ID"
 cd workspace
-python test/print_test_stats.py test
+python test/print_test_stats.py --upload-to-s3 test
 EOL
 echo "(cat docker_commands.sh | docker exec -u jenkins -i "$id" bash) 2>&1" > command.sh
 unbuffer bash command.sh | ts
1 change: 1 addition & 0 deletions .circleci/docker/common/install_conda.sh
@@ -109,6 +109,7 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
 numba \
 llvmlite \
 unittest-xml-reporting \
+boto3==1.16.34 \
 coverage \
 hypothesis==4.53.2 \
 mypy==0.770 \
2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
@@ -201,7 +201,7 @@ jobs:
 export CIRCLE_JOB="$CIRCLE_JOB"
 export CIRCLE_WORKFLOW_ID="$CIRCLE_WORKFLOW_ID"
 cd workspace
-python test/print_test_stats.py test
+python test/print_test_stats.py --upload-to-s3 test
 EOL
 echo "(cat docker_commands.sh | docker exec -u jenkins -i "$id" bash) 2>&1" > command.sh
 unbuffer bash command.sh | ts
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -754,7 +754,7 @@ than Linux, which are worth keeping in mind when fixing these problems.
 1. Symbols are NOT exported by default on Windows; instead, you have to explicitly
 mark a symbol as exported/imported in a header file with `__declspec(dllexport)` /
 `__declspec(dllimport)`. We have codified this pattern into a set of macros
-which follow the convention `*_API`, e.g., `CAFFE2_API` inside Caffe2 and ATen.
+which follow the convention `*_API`, e.g., `TORCH_API` inside Caffe2, Aten and Torch.
 (Every separate shared library needs a unique macro name, because symbol visibility
 is on a per shared library basis. See c10/macros/Macros.h for more details.)
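
(Context for the `CAFFE2_API` -> `TORCH_API` renames in the headers below: a
minimal sketch of the export-macro pattern described above, using a
hypothetical `MYLIB_API` macro rather than PyTorch's actual definitions in
c10/macros/Macros.h.)

```
// mylib.h -- hypothetical library header illustrating the *_API pattern.
// MYLIB_BUILD_MAIN_LIB is defined only while compiling mylib itself, so the
// same header exports symbols when building and imports them when consuming.
#if defined(_WIN32)
#if defined(MYLIB_BUILD_MAIN_LIB)
#define MYLIB_API __declspec(dllexport)
#else
#define MYLIB_API __declspec(dllimport)
#endif
#else
// On Linux/macOS, symbols are visible by default; this keeps them exported
// even when building with -fvisibility=hidden.
#define MYLIB_API __attribute__((visibility("default")))
#endif

// Every symbol the library wants to expose is annotated with its own macro:
MYLIB_API int mylib_add(int a, int b);
```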
6 changes: 3 additions & 3 deletions aten/src/ATen/CPUGeneratorImpl.h
@@ -7,7 +7,7 @@

 namespace at {

-struct CAFFE2_API CPUGeneratorImpl : public c10::GeneratorImpl {
+struct TORCH_API CPUGeneratorImpl : public c10::GeneratorImpl {
 // Constructors
 CPUGeneratorImpl(uint64_t seed_in = default_rng_seed_val);
 ~CPUGeneratorImpl() = default;
@@ -36,8 +36,8 @@ struct CAFFE2_API CPUGeneratorImpl : public c10::GeneratorImpl {

 namespace detail {

-CAFFE2_API const Generator& getDefaultCPUGenerator();
-CAFFE2_API Generator createCPUGenerator(uint64_t seed_val = default_rng_seed_val);
+TORCH_API const Generator& getDefaultCPUGenerator();
+TORCH_API Generator createCPUGenerator(uint64_t seed_val = default_rng_seed_val);

 } // namespace detail
6 changes: 3 additions & 3 deletions aten/src/ATen/Context.h
@@ -21,7 +21,7 @@ namespace at {

 class Tensor;

-class CAFFE2_API Context {
+class TORCH_API Context {
 public:
 Context();

@@ -225,13 +225,13 @@ class CAFFE2_API Context {
 std::unique_ptr<THHState, void(*)(THHState*)> thh_state;
 };

-CAFFE2_API Context& globalContext();
+TORCH_API Context& globalContext();

 static inline void init() {
 globalContext();
 }

-CAFFE2_API Allocator* getCPUAllocator();
+TORCH_API Allocator* getCPUAllocator();

 static inline DeprecatedTypeProperties& getDeprecatedTypeProperties(Backend p, ScalarType s) {
 return globalDeprecatedTypePropertiesRegistry().getDeprecatedTypeProperties(
10 changes: 5 additions & 5 deletions aten/src/ATen/DLConvertor.h
@@ -10,10 +10,10 @@

 namespace at {

-CAFFE2_API ScalarType toScalarType(const DLDataType& dtype);
-CAFFE2_API DLManagedTensor* toDLPack(const Tensor& src);
-CAFFE2_API Tensor fromDLPack(const DLManagedTensor* src);
-CAFFE2_API DLDataType getDLDataType(const Tensor& t);
-CAFFE2_API DLContext getDLContext(const Tensor& tensor, const int64_t& device_id);
+TORCH_API ScalarType toScalarType(const DLDataType& dtype);
+TORCH_API DLManagedTensor* toDLPack(const Tensor& src);
+TORCH_API Tensor fromDLPack(const DLManagedTensor* src);
+TORCH_API DLDataType getDLDataType(const Tensor& t);
+TORCH_API DLContext getDLContext(const Tensor& tensor, const int64_t& device_id);

 } //namespace at
6 changes: 3 additions & 3 deletions aten/src/ATen/DynamicLibrary.h
@@ -8,11 +8,11 @@ namespace at {
 struct DynamicLibrary {
 AT_DISALLOW_COPY_AND_ASSIGN(DynamicLibrary);

-CAFFE2_API DynamicLibrary(const char* name);
+TORCH_API DynamicLibrary(const char* name);

-CAFFE2_API void* sym(const char* name);
+TORCH_API void* sym(const char* name);

-CAFFE2_API ~DynamicLibrary();
+TORCH_API ~DynamicLibrary();

 private:
 void* handle = nullptr;
6 changes: 3 additions & 3 deletions aten/src/ATen/ExpandUtils.h
@@ -9,14 +9,14 @@

 namespace at {

-CAFFE2_API std::vector<int64_t> infer_size(IntArrayRef a, IntArrayRef b);
-CAFFE2_API std::tuple<std::vector<int64_t>, std::vector<int64_t>>
+TORCH_API std::vector<int64_t> infer_size(IntArrayRef a, IntArrayRef b);
+TORCH_API std::tuple<std::vector<int64_t>, std::vector<int64_t>>
 inferExpandGeometry(
 IntArrayRef tensor_sizes,
 IntArrayRef tensor_strides,
 IntArrayRef sizes);

-CAFFE2_API std::vector<int64_t> infer_dense_strides(
+TORCH_API std::vector<int64_t> infer_dense_strides(
 IntArrayRef tensor_sizes,
 IntArrayRef tensor_strides);
18 changes: 9 additions & 9 deletions aten/src/ATen/MemoryOverlap.h
@@ -15,19 +15,19 @@ enum class MemOverlap { NO, YES, TOO_HARD };

 enum class MemOverlapStatus { FULL, PARTIAL, NO, TOO_HARD };

-CAFFE2_API MemOverlap has_internal_overlap(const Tensor& t);
-CAFFE2_API MemOverlap has_internal_overlap(TensorImpl* t);
+TORCH_API MemOverlap has_internal_overlap(const Tensor& t);
+TORCH_API MemOverlap has_internal_overlap(TensorImpl* t);

-CAFFE2_API void assert_no_internal_overlap(const Tensor& t);
-CAFFE2_API void assert_no_internal_overlap(TensorImpl* t);
+TORCH_API void assert_no_internal_overlap(const Tensor& t);
+TORCH_API void assert_no_internal_overlap(TensorImpl* t);

-CAFFE2_API MemOverlapStatus get_overlap_status(const Tensor& a, const Tensor& b);
-CAFFE2_API MemOverlapStatus get_overlap_status(TensorImpl* a, TensorImpl* b);
+TORCH_API MemOverlapStatus get_overlap_status(const Tensor& a, const Tensor& b);
+TORCH_API MemOverlapStatus get_overlap_status(TensorImpl* a, TensorImpl* b);

-CAFFE2_API void assert_no_partial_overlap(const Tensor& a, const Tensor& b);
+TORCH_API void assert_no_partial_overlap(const Tensor& a, const Tensor& b);
 void assert_no_partial_overlap(TensorImpl* a, TensorImpl* b);

-CAFFE2_API void assert_no_overlap(const Tensor& a, const Tensor& b);
-CAFFE2_API void assert_no_overlap(TensorImpl* a, TensorImpl* b);
+TORCH_API void assert_no_overlap(const Tensor& a, const Tensor& b);
+TORCH_API void assert_no_overlap(TensorImpl* a, TensorImpl* b);

 }
48 changes: 24 additions & 24 deletions aten/src/ATen/NamedTensorUtils.h
@@ -17,8 +17,8 @@ inline bool has_names(TensorList tensors) {

 // Converts dim to an positional index. Errors if `dim` cannot be used to
 // refer to any dimension of tensor.
-CAFFE2_API int64_t dimname_to_position(const Tensor& tensor, Dimname dim);
-CAFFE2_API std::vector<int64_t> dimnames_to_positions(const Tensor& tensor, DimnameList dims);
+TORCH_API int64_t dimname_to_position(const Tensor& tensor, Dimname dim);
+TORCH_API std::vector<int64_t> dimnames_to_positions(const Tensor& tensor, DimnameList dims);

 // Unifies two DimnameList to produce a third. This is useful for implementing
 // the named inference rule for binary broadcasting operations like add.
@@ -28,7 +28,7 @@ CAFFE2_API std::vector<int64_t> dimnames_to_positions(const Tensor& tensor, Dimn
 // 2) Check misaligned: If a name `n` is in `names`, then it must appear at
 // the same index from the right in other.
 // 3) The output names are obtained by unifying the names individually from the right.
-CAFFE2_API std::vector<Dimname>
+TORCH_API std::vector<Dimname>
 unify_from_right(DimnameList names, DimnameList other, const char* action = "broadcast");

 [[noreturn]] inline void reportNYIDimnameOverload(const char* op_name) {
@@ -75,50 +75,50 @@ namespace namedinference {
 // `names` can be empty; see [NOTE] Writing name inference rules
 // If `names` is not empty, `names.size()` should equal `result.dim()`.
 // When in doubt, use this overload instead of the others.
-CAFFE2_API Tensor& propagate_names_if_nonempty(
+TORCH_API Tensor& propagate_names_if_nonempty(
 Tensor& result,
 DimnameList maybe_names,
 bool validate_names = false);

 // Propagates `names` to `result`. Only use this if we are certain that there are
 // names to propagate (that names is not empty).
-CAFFE2_API Tensor& propagate_names(
+TORCH_API Tensor& propagate_names(
 Tensor& result,
 DimnameList names,
 bool validate_names = false);

 // Propagates all names from src to result.
-CAFFE2_API void propagate_names(Tensor& result, const Tensor& src);
+TORCH_API void propagate_names(Tensor& result, const Tensor& src);

 // Propagates all names except for those at the excluded_idxs.
-CAFFE2_API void propagate_names_except(Tensor& result, const Tensor& src, IntArrayRef excluded_idxs);
+TORCH_API void propagate_names_except(Tensor& result, const Tensor& src, IntArrayRef excluded_idxs);

 // Used for reduction ops that have a `keepdim` arg.
-CAFFE2_API void propagate_names_for_reduction(Tensor& result, const Tensor& src, IntArrayRef excluded_idxs, bool keepdim);
+TORCH_API void propagate_names_for_reduction(Tensor& result, const Tensor& src, IntArrayRef excluded_idxs, bool keepdim);

-CAFFE2_API void propagate_names_for_expand(Tensor& result, const Tensor& self);
+TORCH_API void propagate_names_for_expand(Tensor& result, const Tensor& self);

-CAFFE2_API std::vector<Dimname> compute_cat_outnames(TensorList tensors);
+TORCH_API std::vector<Dimname> compute_cat_outnames(TensorList tensors);

-CAFFE2_API std::vector<Dimname> compute_broadcast_outnames(
+TORCH_API std::vector<Dimname> compute_broadcast_outnames(
 const Tensor& self,
 const Tensor& other);

-CAFFE2_API std::vector<Dimname> broadcast_to_outnames(
+TORCH_API std::vector<Dimname> broadcast_to_outnames(
 const Tensor& tensor,
 const Tensor& reference_tensor,
 const char* op_name);

-CAFFE2_API std::vector<Dimname> compute_matmul_outnames(const Tensor& self, const Tensor& other);
+TORCH_API std::vector<Dimname> compute_matmul_outnames(const Tensor& self, const Tensor& other);

-CAFFE2_API std::vector<Dimname> compute_cdist_outnames(const Tensor& self, const Tensor& other);
+TORCH_API std::vector<Dimname> compute_cdist_outnames(const Tensor& self, const Tensor& other);

-CAFFE2_API std::vector<Dimname> compute_bmm_outnames(
+TORCH_API std::vector<Dimname> compute_bmm_outnames(
 Tensor& result,
 const Tensor& self,
 const Tensor& other);

-CAFFE2_API std::vector<Dimname> compute_squeeze_outnames(const Tensor& tensor);
+TORCH_API std::vector<Dimname> compute_squeeze_outnames(const Tensor& tensor);

 std::vector<Dimname> compute_diagonal_outnames(
 const Tensor& tensor,
@@ -127,40 +127,40 @@ std::vector<Dimname> compute_diagonal_outnames(

 // TensorImpl* overloads for Legacy TH/THC code. Use these sparingly.

-CAFFE2_API TensorImpl* propagate_names_if_nonempty(
+TORCH_API TensorImpl* propagate_names_if_nonempty(
 TensorImpl* result,
 DimnameList maybe_names,
 bool validate_names = false);

-CAFFE2_API TensorImpl* propagate_names(
+TORCH_API TensorImpl* propagate_names(
 TensorImpl* result,
 DimnameList names,
 bool validate_names = false);

-CAFFE2_API void propagate_names(TensorImpl* result, /*const */TensorImpl* src);
+TORCH_API void propagate_names(TensorImpl* result, /*const */TensorImpl* src);

 // result = m1 @ m2 + bias
-CAFFE2_API void propagate_names_for_addmm(
+TORCH_API void propagate_names_for_addmm(
 Tensor& result,
 const Tensor& m1,
 const Tensor& m2,
 const Tensor& bias);

-CAFFE2_API void propagate_names_for_addmv(
+TORCH_API void propagate_names_for_addmv(
 Tensor& result,
 const Tensor& mat,
 const Tensor& vec,
 const Tensor& bias);

-CAFFE2_API void check_names_for_dot(TensorImpl* vec1, TensorImpl* vec2);
+TORCH_API void check_names_for_dot(TensorImpl* vec1, TensorImpl* vec2);

-CAFFE2_API std::vector<Dimname> compute_baddbmm_outnames(
+TORCH_API std::vector<Dimname> compute_baddbmm_outnames(
 Tensor& result,
 const Tensor& self,
 const Tensor& other,
 const Tensor& bias);

-CAFFE2_API bool are_names_equal(TensorImpl* self, TensorImpl* other);
+TORCH_API bool are_names_equal(TensorImpl* self, TensorImpl* other);

 } // namespace namedinference
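
(The right-to-left unification rule described in the `unify_from_right`
comment above can be sketched standalone; plain strings with "*" as the
wildcard stand in for `Dimname`, and the step-2 misalignment check is
omitted. This is an illustration, not ATen's implementation.)

```
#include <algorithm>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<std::string> unify_from_right(
    const std::vector<std::string>& names,
    const std::vector<std::string>& other) {
  // The result is as long as the longer input; missing positions on the
  // shorter side behave like the wildcard.
  std::vector<std::string> result(std::max(names.size(), other.size()), "*");
  for (std::size_t i = 1; i <= result.size(); ++i) {
    const std::string a = i <= names.size() ? names[names.size() - i] : "*";
    const std::string b = i <= other.size() ? other[other.size() - i] : "*";
    // Step 1: two names match if they are equal or either is the wildcard.
    if (a != "*" && b != "*" && a != b) {
      throw std::runtime_error("names " + a + " and " + b + " do not match");
    }
    // Step 3: unify position by position from the right.
    result[result.size() - i] = (a == "*") ? b : a;
  }
  return result;
}

int main() {
  // Broadcasting ("N", "C") with ("C") unifies to ("N", "C").
  for (const auto& name : unify_from_right({"N", "C"}, {"C"})) {
    std::cout << name << ' ';
  }
  std::cout << '\n';
}
```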
2 changes: 1 addition & 1 deletion aten/src/ATen/OpaqueTensorImpl.h
@@ -17,7 +17,7 @@ namespace at {
 // "shallow copy" in order to add support.

 template <typename OpaqueHandle>
-struct CAFFE2_API OpaqueTensorImpl : public TensorImpl {
+struct TORCH_API OpaqueTensorImpl : public TensorImpl {
 // public constructor for now...
 OpaqueTensorImpl(
 at::DispatchKeySet key_set,
2 changes: 1 addition & 1 deletion aten/src/ATen/PTThreadPool.h
@@ -5,7 +5,7 @@

 namespace at {

-class CAFFE2_API PTThreadPool : public c10::ThreadPool {
+class TORCH_API PTThreadPool : public c10::ThreadPool {
 public:
 explicit PTThreadPool(
 int pool_size,
26 changes: 13 additions & 13 deletions aten/src/ATen/Parallel.h
@@ -10,25 +10,25 @@ inline int64_t divup(int64_t x, int64_t y) {
 }

 // Called during new thread initialization
-CAFFE2_API void init_num_threads();
+TORCH_API void init_num_threads();

 // Sets the number of threads to be used in parallel region
-CAFFE2_API void set_num_threads(int);
+TORCH_API void set_num_threads(int);

 // Returns the maximum number of threads that may be used in a parallel region
-CAFFE2_API int get_num_threads();
+TORCH_API int get_num_threads();

 // Returns the current thread number (starting from 0)
 // in the current parallel region, or 0 in the sequential region
-CAFFE2_API int get_thread_num();
+TORCH_API int get_thread_num();

 // Checks whether the code runs in parallel region
-CAFFE2_API bool in_parallel_region();
+TORCH_API bool in_parallel_region();

 namespace internal {

 // Initialise num_threads lazily at first parallel call
-inline CAFFE2_API void lazy_init_num_threads() {
+inline TORCH_API void lazy_init_num_threads() {
 thread_local bool init = false;
 if (C10_UNLIKELY(!init)) {
 at::init_num_threads();
@@ -110,29 +110,29 @@ inline scalar_t parallel_reduce(
 const SF& sf);

 // Returns a detailed string describing parallelization settings
-CAFFE2_API std::string get_parallel_info();
+TORCH_API std::string get_parallel_info();

 // Sets number of threads used for inter-op parallelism
-CAFFE2_API void set_num_interop_threads(int);
+TORCH_API void set_num_interop_threads(int);

 // Returns the number of threads used for inter-op parallelism
-CAFFE2_API int get_num_interop_threads();
+TORCH_API int get_num_interop_threads();

 // Launches inter-op parallel task
-CAFFE2_API void launch(std::function<void()> func);
+TORCH_API void launch(std::function<void()> func);
 namespace internal {
 void launch_no_thread_state(std::function<void()> fn);
 } // namespace internal

 // Launches intra-op parallel task
-CAFFE2_API void intraop_launch(std::function<void()> func);
+TORCH_API void intraop_launch(std::function<void()> func);

 // Launches intra-op parallel task, returns a future
-CAFFE2_API std::shared_ptr<c10::ivalue::Future> intraop_launch_future(
+TORCH_API std::shared_ptr<c10::ivalue::Future> intraop_launch_future(
 std::function<void()> func);

 // Returns number of intra-op threads used by default
-CAFFE2_API int intraop_default_num_threads();
+TORCH_API int intraop_default_num_threads();

 } // namespace at
2 changes: 1 addition & 1 deletion aten/src/ATen/ParallelNative.h
@@ -22,7 +22,7 @@ inline std::tuple<size_t, size_t> calc_num_tasks_and_chunk_size(
 return std::make_tuple(num_tasks, chunk_size);
 }

-CAFFE2_API void _parallel_run(
+TORCH_API void _parallel_run(
 const int64_t begin,
 const int64_t end,
 const int64_t grain_size,
2 changes: 1 addition & 1 deletion aten/src/ATen/SparseTensorImpl.h
@@ -5,7 +5,7 @@
 #include <c10/util/Exception.h>

 namespace at {
-struct CAFFE2_API SparseTensorImpl : public TensorImpl {
+struct TORCH_API SparseTensorImpl : public TensorImpl {
 // Stored in COO format, indices + values.

 // INVARIANTS:
