
Pull in master #4

Merged 2,739 commits on Jan 19, 2021
Changes from all commits
Commits
f83d57f
[Don't review] Clean up type annotations in caffe2/torch/nn (#50079)
r-barnes Jan 7, 2021
09eefec
Clean up some type annotations in android (#49944)
r-barnes Jan 7, 2021
ce37039
[Gradient Compression] Remove the extra comma after "bucket" in Power…
Jan 7, 2021
870ab04
add type annotations to torch._utils (#49705)
guilhermeleobas Jan 8, 2021
bf4fcab
Fix SyncBatchNorm usage without stats tracking (#50126)
malfet Jan 8, 2021
2e7c6cc
[PyTorch] Devirtualize TensorImpl::numel() with macro (#49766)
swolchok Jan 8, 2021
1a1b665
[PyTorch] validate that SparseTensorImpl::dim needn't be overridden (…
swolchok Jan 8, 2021
4de6b27
[PyTorch] Devirtualize TensorImpl::dim() with macro (#49770)
swolchok Jan 8, 2021
84e3237
Let RpcAgent::send() return JitFuture (#49906)
mrshenli Jan 8, 2021
25ef605
Replace FutureMessage with ivalue::Future in distributed/autograd/uti…
mrshenli Jan 8, 2021
008206d
Replace FutureMessage with ivalue::Future in RRefContext (#49960)
mrshenli Jan 8, 2021
d730c7e
Replace FutureMessage with ivalue::Future in RpcAgent retry logic (#4…
mrshenli Jan 8, 2021
2d5f57c
Completely remove FutureMessage from RRef Implementations (#50004)
mrshenli Jan 8, 2021
b2da0b5
Completely remove FutureMessage from RPC TorchScript implementations …
mrshenli Jan 8, 2021
0c94393
Completely remove FutureMessage from distributed autograd (#50020)
mrshenli Jan 8, 2021
1deb895
Remove FutureMessage from sender ProcessGroupAgent (#50023)
mrshenli Jan 8, 2021
0684d07
Remove FutureMessage from sender TensorPipeAgent (#50024)
mrshenli Jan 8, 2021
2831af9
Completely remove FutureMessage from FaultyProcessGroupAgent (#50025)
mrshenli Jan 8, 2021
1f795e1
Remove FutureMessage from RPC request callback logic (#50026)
mrshenli Jan 8, 2021
0987510
Completely Remove FutureMessage from RPC cpp tests (#50027)
mrshenli Jan 8, 2021
171648e
Completely Remove FutureMessage from RPC agents (#50028)
mrshenli Jan 8, 2021
c480eeb
Completely remove FutureMessage type (#50029)
mrshenli Jan 8, 2021
882ddb2
[PyTorch] Introduce packed SizesAndStrides abstraction (#47507)
swolchok Jan 8, 2021
b73c018
[PyTorch] Change representation of SizesAndStrides (#47508)
swolchok Jan 8, 2021
5a63c45
Disable cuDNN persistent RNN on sm_86 devices (#49534)
xwang233 Jan 8, 2021
294b786
Address clang-tidy warnings in ProcessGroupNCCL (#50131)
rohan-varma Jan 8, 2021
c215ffb
Revert D25687465: [PyTorch] Devirtualize TensorImpl::dim() with macro
luciang Jan 8, 2021
fc2ead0
Autograd engine, only enqueue task when it is fully initialized (#50164)
albanD Jan 8, 2021
9f832c8
[numpy] torch.exp: promote integer inputs to float (#50093)
kshitij12345 Jan 8, 2021
006cfeb
Update autograd related comments (#50166)
albanD Jan 8, 2021
5c5abd5
Implement torch.linalg.svd (#45562)
antocuni Jan 8, 2021
d00aceb
Add tensor.view(dtype) (#47951)
zasdfgbnm Jan 8, 2021
54ce171
Fix persistent_workers + pin_memory (#48543)
ssnl Jan 8, 2021
55919a4
add type annotations to torch.nn.quantized.modules.conv (#49702)
guilhermeleobas Jan 8, 2021
88bd69b
Stop using c10::scalar_to_tensor in float_power. (#50105)
gchanan Jan 8, 2021
b5ab0a7
Improve torch.linalg.qr (#50046)
antocuni Jan 8, 2021
81778e2
[onnx] Do not deref nullptr in scalar type analysis (#50237)
malfet Jan 8, 2021
a4f30d4
Clean up some type annotations in test/jit (#50158)
r-barnes Jan 8, 2021
5d45140
[numpy] torch.{all/any} : output dtype is always bool (#47878)
kshitij12345 Jan 8, 2021
d78b638
Convert string => raw strings so char classes can be represented in P…
Jan 8, 2021
0bb341d
Dump state when hitting ambiguous_autogradother_kernel. (#50246)
Jan 8, 2021
f9f758e
Apply clang-format to rpc cpp files (#50236)
mrshenli Jan 8, 2021
1bb7d8f
Revert D25717504: Clean up some type annotations in test/jit
heitorschueroff Jan 8, 2021
8f31621
Fix MKL builds on Ubuntu (#50212)
antocuni Jan 8, 2021
2c4b6ec
Unused exception variables (#50181)
alexhenrie Jan 8, 2021
aa18d17
add type annotations to torch.nn.modules.fold (#49479)
guilhermeleobas Jan 8, 2021
1c12cbe
Optimize Vulkan command buffer submission rate. (#49112)
Jan 9, 2021
49bb0a3
Support scripting classmethod called with object instances (#49967)
ppwwyyxx Jan 9, 2021
c2d37cd
Change CMake config to enable universal binary for Mac (#50243)
janeyx99 Jan 9, 2021
36ddb00
[fix] torch.cat: Don't resize out if it is already of the correct siz…
kshitij12345 Jan 9, 2021
ea087e2
JIT: guard DifferentiableGraph node (#49433)
t-vi Jan 9, 2021
ba1ce71
Document single op replacement (#50116)
Jan 9, 2021
d4c1684
reuse consant from jit (#49916)
cccclai Jan 9, 2021
8530c65
[codemod][fbcode/caffe2] Apply clang-format update fixes
zertosh Jan 9, 2021
375c30a
Avg pool 0 dim acceptance. (#50008)
v0dro Jan 10, 2021
4774c68
Added linalg.inv (#48261)
IvanYashchuk Jan 10, 2021
26cc630
Allow arbitrary docstrings to be inside torchscript interface methods…
tugsbayasgalan Jan 10, 2021
92fcb59
Automated submodule update: tensorpipe (#50267)
facebook-github-bot Jan 10, 2021
fd92bcf
Use FileStore in TorchScript for store registry (#50248)
wanchaol Jan 11, 2021
839c2f2
treat Parameter the same way as Tensor (#48963)
Jan 11, 2021
632a440
clean up imports for tensor.py (#48964)
Jan 11, 2021
d31a760
move has_torch_function to C++, and make a special case object_has_to…
Jan 11, 2021
6a3fc0c
Treat has_torch_function and object_has_torch_function as static Fals…
Jan 11, 2021
9d8bd21
Use Unicode friendly API in fused kernel related code (#49781)
skyline75489 Jan 11, 2021
eb87686
svd_backward: more memory and computationally efficient. (#50109)
nikitaved Jan 11, 2021
e29082b
Run mypy over test/test_utils.py (#50278)
rgommers Jan 11, 2021
acaf091
Vulkan convolution touchups. (#50329)
Jan 11, 2021
186fe48
Format RPC files with clang-format (#50367)
lw Jan 11, 2021
0f412aa
Move scalar_to_tensor_default_dtype out of ScalarOps.h because it's o…
gchanan Jan 11, 2021
6eb8e83
[aten] embedding_bag_byte_rowwise_offsets_out (#49561)
ajyu Jan 11, 2021
f10e7aa
[quant][graphmode][fx] Scope support for call_method in QuantizationT…
jerryzh168 Jan 11, 2021
a7e92f1
[FX} Implement wrap() by patching module globals during symtrace (#50…
Jan 11, 2021
d390e3d
[FX] Make graph target printouts more user-friendly (#50296)
Jan 11, 2021
271240a
[JIT] Ensure offset is a multiple of 4 to fix "Philox" RNG in jitted …
mcarilli Jan 11, 2021
55ac7e5
[quant][graphmode][fx] Support preserved_attributes in prepare_fx (#5…
jerryzh168 Jan 11, 2021
559e2d8
Implement optimization bisect (#49031)
tugsbayasgalan Jan 11, 2021
ec51b67
Fix elu backward operation for negative alpha (#49272)
H-Huang Jan 11, 2021
3d263d1
Update op replacement tutorial (#50377)
Jan 11, 2021
080a097
Add docstring for Proxy (#50145)
Jan 11, 2021
4d3c12d
[JIT] Print better error when class attribute IValue conversion fails…
Jan 11, 2021
a48640a
[JIT] Update clang-format hashes (#50399)
Jan 11, 2021
fd09270
.circleci: Remove CUDA 9.2 binary build jobs (#50388)
seemethere Jan 11, 2021
7efc212
Add link to tutorial in Timer doc (#50374)
albanD Jan 11, 2021
e160362
Add range assert in autograd engine queue lookup (#50372)
albanD Jan 11, 2021
d76176c
Raise warning during validation when arg_constraints not defined (#50…
Jan 11, 2021
bb97503
[fix] Indexing.cu: Move call to C10_CUDA_KERNEL_LAUNCH_CHECK to make …
kshitij12345 Jan 11, 2021
9a3305f
Automated submodule update: tensorpipe (#50369)
facebook-github-bot Jan 11, 2021
ba83aea
[GPU] Calculate strides for metal tensors (#50309)
xta0 Jan 12, 2021
b001c4c
Stop using an unnecessary scalar_to_tensor(..., device) call. (#50114)
gchanan Jan 12, 2021
f39f258
Ensure DDP + Pipe works with find_unused_parameters. (#49908)
pritamdamania Jan 12, 2021
5f8e1a1
add type annotations to torch.nn.modules.module (#49045)
guilhermeleobas Jan 12, 2021
a72c6fd
[GPU] Fix the broken strides value for 2d transpose (#50310)
xta0 Jan 12, 2021
2193544
[GPU] Clean up the operator tests (#50311)
xta0 Jan 12, 2021
09f4844
Pytorch Distributed RPC Reinforcement Learning Benchmark (Throughput …
osandoval-fb Jan 12, 2021
72c1d9d
Minor Fix: Double ";" typo in transformerlayer.h (#50300)
hebo-yang Jan 12, 2021
bee6b0b
Fix warning when running scripts/build_ios.sh (#49457)
skyline75489 Jan 12, 2021
4fed585
[MacOS] Add unit tests for Metal ops (#50312)
xta0 Jan 12, 2021
c3b4b20
[PyTorch] List::operator[] can return const ref for Tensor & string (…
swolchok Jan 12, 2021
8c5b024
Fix PyTorch NEON compilation with gcc-7 (#50389)
malfet Jan 12, 2021
78e71ce
warn user once for possible unnecessary find_unused_params (#50133)
rohan-varma Jan 12, 2021
4da9ceb
[doc] fix doc formatting for `torch.randperm` and `torch.repeat_inter…
kshitij12345 Jan 12, 2021
fb73cc4
Migrate some torch.fft tests to use OpInfos (#48428)
peterbell10 Jan 12, 2021
d25c673
Cleanup unnecessary SpectralFuncInfo logic (#48712)
peterbell10 Jan 12, 2021
5347398
test_ops: Only run complex gradcheck when complex is supported (#49018)
peterbell10 Jan 12, 2021
5546a12
remove redundant tests from tensor_op_tests (#50096)
kshitij12345 Jan 12, 2021
314351d
Fix Error with torch.flip() for cuda tensors when dims=() (#50325)
dheerajgattupalli Jan 12, 2021
9384d31
Added linalg.pinv (#48399)
IvanYashchuk Jan 12, 2021
4411b5a
add type annotations to torch.nn.modules.normalization (#49035)
guilhermeleobas Jan 12, 2021
6420071
Disable complex dispatch on min/max functions (#50347)
imaginary-person Jan 12, 2021
5834438
Enable fast pass tensor_fill for single element complex tensors (#50383)
anjali411 Jan 12, 2021
158c98a
Add new patterns for ConcatAddMulReplaceNaNClip (#50249)
ShijunK Jan 12, 2021
b5d3826
[PyTorch] Devirtualize TensorImpl::sizes() with macro (#50176)
swolchok Jan 12, 2021
035229c
[JIT] Frozen Graph Conv-BN fusion (#50074)
Jan 12, 2021
6971149
[JIT] Add Frozen Conv-> Add/Sub/Mul/Div fusion (#50075)
Jan 12, 2021
a69f008
[JIT] Factor out peephole to own test file (#50220)
Jan 12, 2021
30aeed7
Peephole Optimize out conv(x).dim(), which prevents BN fusion (#50221)
Jan 12, 2021
a389b30
Add Post Freezing Optimizations, turn on by default in torch.jit.free…
Jan 12, 2021
b2f7ff7
Fix MultiheadAttention docstring latex (#50430)
jankrepl Jan 12, 2021
5cdc32b
[vmap] Add batching rules for comparisons ops (#50364)
RockingJavaBean Jan 12, 2021
725640e
Check CUDA kernel launches in caffe2/caffe2/utils/math (#50238)
jessijzhao Jan 12, 2021
cf45d65
Clean up some type annotations in test/jit/...../test_class_type.py (…
r-barnes Jan 12, 2021
c198e6c
Stop moving scalars to GPU for one computation in leaky_rrelu_backwar…
gchanan Jan 12, 2021
6d94706
fixing autodiff to support Optional[Tensor] on inputs (#49430)
jjsjann123 Jan 12, 2021
50744cd
[package] better error message when unpickling a mocked obj (#50159)
suo Jan 12, 2021
412e3f4
Automated submodule update: tensorpipe (#50441)
facebook-github-bot Jan 12, 2021
39aac65
[quant][bug] Fixing the mapping getter to return a copy (#50297)
z-a-f Jan 12, 2021
7d28f1c
[quant][refactor] Minor refactor of some typos (#50304)
z-a-f Jan 12, 2021
cb37709
[te] Create TargetMachine only once with correct options to fix perf …
bertmaher Jan 12, 2021
374951d
Add type annotations to torch.nn.modules.padding (#49494)
guilhermeleobas Jan 12, 2021
4c97ef8
Create subgraph rewriter (#49540)
Jan 13, 2021
8c25b97
Type annotations in test/jit (#50293)
r-barnes Jan 13, 2021
af968cd
[Pytorch Mobile] Remove caching (in code) of interned strings (#50390)
dhruvbird Jan 13, 2021
49896c4
Caffe2 Concat operator benchmark (#50449)
Jan 13, 2021
4e76616
[StaticRuntime][ATen] Add out variant for narrow_copy (#49502)
Jan 13, 2021
4e248eb
Change watchdog timeout logging from INFO to ERROR. (#50455)
pritamdamania Jan 13, 2021
dea529a
Add torch.cuda.can_device_access_peer (#50446)
malfet Jan 13, 2021
a0f7b18
Fix `fmod` type promotion (#48278)
ejguan Jan 13, 2021
ca5d961
Fix remainder type promotion (#48668)
ejguan Jan 13, 2021
b54240d
[PyTorch] Gate tls_local_dispatch_key_set inlining off for Android (#…
swolchok Jan 13, 2021
057be23
[doc] Add note about `torch.flip` returning new tensor and not view. …
kshitij12345 Jan 13, 2021
4a3a378
Fix fft slow tests (#50435)
zasdfgbnm Jan 13, 2021
2a60314
[AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --ta…
Jan 13, 2021
deba3bd
Fix TORCH_LIBRARIES variables when do static build (#49458)
gemfield Jan 13, 2021
664126b
Enables build with oneDNN (MKL-DNN) on AArch64 (#50400)
nSircombe Jan 13, 2021
4a2d3d1
MAINT: char class regex simplify (#50294)
tylerjereddy Jan 13, 2021
05542f6
EMA op (#50393)
Jan 13, 2021
7d0eecc
Clean up some type annotations in benchmarks/fastrnns (#49946)
r-barnes Jan 13, 2021
a4383a6
Clean up some type annotations in caffe2/test (#49943)
r-barnes Jan 13, 2021
fc5db42
[BE] replace unittest.main with run_tests (#50451)
Jan 13, 2021
d2e96fc
Update loss module doc (#48596)
ssnl Jan 13, 2021
48318eb
Fix TestOpInfoCUDA.test_unsupported_dtypes_addmm_cuda_bfloat16 on amp…
zasdfgbnm Jan 13, 2021
36ae3fe
[te] Benchmark comparing fused overhead to unfused (#50305)
bertmaher Jan 13, 2021
62f676f
[te] Optimize allocation of kernel outputs (#50318)
bertmaher Jan 13, 2021
b89827b
Drop unused imports (#49972)
r-barnes Jan 13, 2021
7426878
Exclude test/generated_type_hints_smoketest.py from flake8 (#50497)
samestep Jan 13, 2021
30a8ba9
Remove a blacklist reference (#50477)
r-barnes Jan 13, 2021
aeefe2c
[ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Jan 13, 2021
08b6b78
[FX] Make FX stability warning reference beta (#50394)
Jan 13, 2021
21542b4
[FX] Update docstring code/graph printout (#50396)
Jan 13, 2021
9ebea77
[PyTorch] Reapply D25687465: Devirtualize TensorImpl::dim() with macr…
swolchok Jan 13, 2021
5025671
[PyTorch] Make TensorImpl::empty_tensor_restride non-virtual (#50301)
swolchok Jan 13, 2021
c6cb632
[PyTorch] Make SROpFunctor a raw function pointer (#50395)
swolchok Jan 13, 2021
4a0d17b
[PyTorch][codemod] Replace immediately-dereferenced expect calls w/ex…
swolchok Jan 14, 2021
0b49778
[package] mangle imported module names (#50049)
suo Jan 14, 2021
a3f9cf9
Fix fastrnn benchmark regression introduced by 49946 (#50517)
malfet Jan 14, 2021
5ea9584
Assemble technical overview of FX (#50291)
Jan 14, 2021
52ea372
[tools] Update clang-format linux hash (#50520)
Jan 14, 2021
fc9f013
HalfCauchy should ValueError if _validate_args (#50403)
feynmanliang Jan 14, 2021
19a8e68
Structured kernel definition for upsample_nearest2d (#50189)
soulitzer Jan 14, 2021
269193f
Revert D25859132: [te] Optimize allocation of kernel outputs
Jan 14, 2021
4ee631c
Revert D25856891: [te] Benchmark comparing fused overhead to unfused
Jan 14, 2021
934805b
cleaned up ModuleAttributeError (#50298)
jonykarki Jan 14, 2021
2639f1d
Revert D25717510: Clean up some type annotations in benchmarks/fastrnns
Jan 14, 2021
d2c3733
Reorder torch.distributed.rpc.init_rpc docstring arguments (#50419)
pbelevich Jan 14, 2021
0abe7f5
[BE] fix subprocess wrapped test cases reported as failure (#50515)
Jan 14, 2021
443412e
Add batched grad testing to gradcheck, turn it on in test_autograd (#…
zou3519 Jan 14, 2021
ef6be0e
Revert D25903846: [pytorch][PR] Structured kernel definition for upsa…
soulitzer Jan 14, 2021
0be1a24
Drop unused imports from caffe2/quantization (#50493)
r-barnes Jan 14, 2021
e05882d
Back out "reuse consant from jit" (#50521)
cccclai Jan 14, 2021
1ea3909
Link to mypy wiki page from CONTRIBUTING.md (#50540)
samestep Jan 14, 2021
7fb9358
enable CPU tests back (#50490)
zhaojuanmao Jan 14, 2021
3dcf126
Validate args in HalfCauchy and HalfNormal (#50492)
fritzo Jan 14, 2021
554a1a7
[quant] update embedding module to not store qweight (#50418)
supriyar Jan 14, 2021
30e45bb
Enable GPU-to-GPU comm in TensorPipeAgent (#44418)
mrshenli Jan 14, 2021
468c99f
Reapply D25856891: [te] Benchmark comparing fused overhead to unfused…
bertmaher Jan 14, 2021
51157e8
Use separate mypy caches for TestTypeHints cases (#50539)
samestep Jan 14, 2021
171f265
Back out "Revert D25717510: Clean up some type annotations in benchma…
malfet Jan 14, 2021
1908f56
Fix warnings in "ForeachOpsKernels" (#50482)
r-barnes Jan 14, 2021
2ceaec7
Fix warnings in TensorShape (#50486)
r-barnes Jan 14, 2021
08baffa
Drop blacklist from glow (#50480)
r-barnes Jan 15, 2021
4de9d04
[TensorExpr] Hook Fuser Pass to JIT opt-limit utility. (#50518)
Jan 15, 2021
be51de4
Minor doc improvement(?) on ArrayRef::slice (#50541)
r-barnes Jan 15, 2021
9efe153
Revert D25563542: Add batched grad testing to gradcheck, turn it on i…
malfet Jan 15, 2021
e9dc8fc
[TensorExpr] Add python bindings. (#49698)
Jan 15, 2021
adc65e7
[ONNX] Handle sequence output shape and type inference (#46542)
neginraoof Jan 15, 2021
6882f9c
[FX] Add wrap() docstring to docs and add decorator example (#50555)
Jan 15, 2021
d9f71b5
[WIP][FX] new sections in docs (#50562)
Jan 15, 2021
ffefa44
Automated submodule update: tensorpipe (#50572)
facebook-github-bot Jan 15, 2021
366b00a
[AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --ta…
Jan 15, 2021
a9db2f8
Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape a…
nairbv Jan 15, 2021
070a30b
[BE] add warning message to cmake against env var "-std=c++xx" (#50491)
walterddr Jan 15, 2021
00d432a
Remove optional for veiw_fn during View Tracking (#50067)
ejguan Jan 15, 2021
0d981ee
add type annotations to torch.nn.modules.conv (#49564)
guilhermeleobas Jan 15, 2021
296e4a0
.circleci: Set +u for all conda install commands (#50505)
seemethere Jan 15, 2021
8e74024
Move irange to c10 (#46414)
r-barnes Jan 15, 2021
0ae0fac
Clarify, make consistent, and test the behavior of logspace when dtyp…
xuhdev Jan 15, 2021
687f6a5
[PyTorch] Remove unnecessary dispatcher.h include in builtin_function…
swolchok Jan 15, 2021
60a1831
[PyTorch] Remove unnecessary dispatcher.h include in op_registration.…
swolchok Jan 15, 2021
c78e7db
[PyTorch] Remove unnecessary dispatcher.h include in mobile/interpret…
swolchok Jan 15, 2021
ab1ba8f
[RPC] Support timeout in rref._get_type() (#50498)
rohan-varma Jan 15, 2021
d64184e
[RPC] Support timeout for RRef proxy functions (#50499)
rohan-varma Jan 15, 2021
6e3e570
Add complex support for torch.nn.L1Loss (#49912)
soulitzer Jan 15, 2021
8e60bf9
add RequiresGradCheck (#50392)
Krovatkin Jan 16, 2021
2569dc7
Reapply D25859132: [te] Optimize allocation of kernel outputs (#50546)
bertmaher Jan 16, 2021
b832604
Fix caffee2 for llvm trunk
WenleiHe Jan 16, 2021
585ee11
Updated codecov config settings (#50601)
malfet Jan 16, 2021
0291f35
[FX] Make len traceable and scriptable with wrap (#50184)
Jan 16, 2021
3df5f9c
Revert D25843351: [pytorch][PR] Clarify, make consistent, and test th…
Jan 16, 2021
c99f356
Stable sort for CPU (#50052)
nikitaved Jan 16, 2021
0ea1abe
[PyTorch] Add missing Dispatcher.h include in quantized_ops.cpp (#50646)
swolchok Jan 16, 2021
da5d439
remove dulicate newlines (#50648)
Jan 16, 2021
a469336
Fix pytorch-doc build (#50651)
malfet Jan 16, 2021
2001f3a
Finished fleshing out the tensor expr bindings in expr.cpp (#50643)
Chillee Jan 16, 2021
7e05d07
[distributed_test_c10d]Enable disabled ROCm tests. (#50629)
jaglinux Jan 17, 2021
534c821
fix bn channels_last contiguity check (#50659)
ngimel Jan 18, 2021
1fdc35d
[BE] Fix the broken test -- caffe2/caffe2/python:hypothesis_test - te…
houseroad Jan 18, 2021
3f052ba
Remove unnecessary dtype checks for complex types & disable complex d…
imaginary-person Jan 18, 2021
eae1b40
Introduced operator variant to OpInfo (#50370)
Jan 18, 2021
7f3a407
Multi label margin loss (#50007)
v0dro Jan 18, 2021
227acc2
Complex autograd support for torch.{baddbmm, addbmm, addmm, addmv} (#…
anjali411 Jan 18, 2021
d140ca8
Optimize implementation of torch.pow (#46830)
Kiyosora Jan 18, 2021
f32b10e
[BE] Fix the broken test caffe2/caffe2/python:lazy_dyndep_test - test…
houseroad Jan 19, 2021
8b501df
Fix memory leak in TensorPipeAgent. (#50564)
pritamdamania Jan 19, 2021
94d9a7e
Enable TensorPipe CUDA sending to self (#50674)
mrshenli Jan 19, 2021
ce30dba
Enable TensorPipe CUDA fallback channel (#50675)
mrshenli Jan 19, 2021
e9b369c
Add SELU Activation to calculate_gain (#50664)
ajsanjoaquin Jan 19, 2021
d5e5c54
[ROCm] re-enable test_sparse.py tests (#50557)
KyleCZH Jan 19, 2021
b75cdce
[package] Properly demangle all accesses of `__name__` in importer.py…
suo Jan 19, 2021
5252e98
[pytorch] clean up unused util srcs under tools/autograd (#50611)
ljk53 Jan 19, 2021
c458558
kill `multinomial_alias_setup/draw` (#50489)
nikitaved Jan 19, 2021
5f13cc8
Automated submodule update: tensorpipe (#50684)
facebook-github-bot Jan 19, 2021
316f0b8
[testing] Port `torch.{repeat, tile}` tests to use OpInfo machinery (…
kshitij12345 Jan 19, 2021
f7a8bfd
Add batched grad testing to gradcheck, turn it on in test_autograd (#…
zou3519 Jan 19, 2021
f9a5ba7
Added linalg.slogdet (#49194)
IvanYashchuk Jan 19, 2021
1000403
Adding missing decorator for test_device_map_gpu_mixed_self_4 (#50732)
mrshenli Jan 19, 2021
5d64658
Add complex support for `torch.{acosh, asinh, atanh}` (#50387)
anjali411 Jan 19, 2021
1154a85
Add instructional error message for cudnn RNN double backward workaro…
zou3519 Jan 19, 2021
1a38fa9
Striding for lists Part 1 (#48719)
tugsbayasgalan Jan 19, 2021
937eff5
Consolidate mypy tests and args (#50631)
samestep Jan 19, 2021
4511f2c
Clean up complex autograd test list (#50615)
anjali411 Jan 19, 2021
Note: this diff is too large to display in full; GitHub loads only the first 3,000 changed files, so the file list below is truncated.
2 changes: 1 addition & 1 deletion .circleci/cimodel/data/binary_build_data.py
@@ -30,12 +30,12 @@ def get_processor_arch_name(gpu_version):
"cu" + gpu_version.strip("cuda") if gpu_version.startswith("cuda") else gpu_version
)


LINUX_PACKAGE_VARIANTS = OrderedDict(
manywheel=[
"3.6m",
"3.7m",
"3.8m",
"3.9m"
],
conda=dimensions.STANDARD_PYTHON_VERSIONS,
libtorch=[
6 changes: 3 additions & 3 deletions .circleci/cimodel/data/dimensions.py
@@ -1,15 +1,14 @@
PHASES = ["build", "test"]

CUDA_VERSIONS = [
"92",
"101",
"102",
"110",
]

ROCM_VERSIONS = [
"3.7",
"3.8",
"3.10",
"4.0",
]

ROCM_VERSION_LABELS = ["rocm" + v for v in ROCM_VERSIONS]
@@ -20,4 +19,5 @@
"3.6",
"3.7",
"3.8",
"3.9"
]
48 changes: 35 additions & 13 deletions .circleci/cimodel/data/pytorch_build_data.py
@@ -18,7 +18,11 @@
("clang", [
("5", [
("3.6", [
("asan", [XImportant(True)]),
("asan", [
(True, [
("shard_test", [XImportant(True)]),
]),
]),
]),
]),
("7", [
@@ -45,14 +49,22 @@
]),
("10.2", [
("3.6", [
("important", [X(True)]),
("libtorch", [X(True)]),
("shard_test", [XImportant(True)]),
("libtorch", [
(True, [
('build_only', [X(True)]),
]),
]),
]),
]),
("11.0", [
("11.1", [
("3.8", [
X(True),
("libtorch", [XImportant(True)])
("libtorch", [
(True, [
('build_only', [XImportant(True)]),
]),
]),
]),
]),
]),
@@ -72,12 +84,16 @@
("gcc", [
("9", [
("3.8", [
("coverage", [XImportant(True)]),
("coverage", [
(True, [
("shard_test", [XImportant(True)]),
]),
]),
]),
]),
]),
("rocm", [
("3.7", [
("3.9", [
("3.6", [
('build_only', [XImportant(True)]),
]),
@@ -158,6 +174,7 @@ def child_constructor(self):
"libtorch": LibTorchConfigNode,
"important": ImportantConfigNode,
"build_only": BuildOnlyConfigNode,
"shard_test": ShardTestConfigNode,
"cuda_gcc_override": CudaGccOverrideConfigNode,
"coverage": CoverageConfigNode,
"pure_torch": PureTorchConfigNode,
@@ -195,7 +212,7 @@ def init2(self, node_name):
self.props["is_asan"] = node_name

def child_constructor(self):
return ImportantConfigNode
return ExperimentalFeatureConfigNode


class ONNXConfigNode(TreeConfigNode):
@@ -250,7 +267,7 @@ def init2(self, node_name):
self.props["is_libtorch"] = node_name

def child_constructor(self):
return ImportantConfigNode
return ExperimentalFeatureConfigNode


class CudaGccOverrideConfigNode(TreeConfigNode):
@@ -260,17 +277,24 @@ def init2(self, node_name):
def child_constructor(self):
return ExperimentalFeatureConfigNode

class BuildOnlyConfigNode(TreeConfigNode):

class BuildOnlyConfigNode(TreeConfigNode):
def init2(self, node_name):
self.props["build_only"] = node_name

def child_constructor(self):
return ExperimentalFeatureConfigNode


class CoverageConfigNode(TreeConfigNode):
class ShardTestConfigNode(TreeConfigNode):
def init2(self, node_name):
self.props["shard_test"] = node_name

def child_constructor(self):
return ImportantConfigNode


class CoverageConfigNode(TreeConfigNode):
def init2(self, node_name):
self.props["is_coverage"] = node_name

@@ -290,7 +314,6 @@ def get_children(self):


class XenialCompilerConfigNode(TreeConfigNode):

def modify_label(self, label):
return label or "<unspecified>"

@@ -304,7 +327,6 @@ def child_constructor(self):


class BionicCompilerConfigNode(TreeConfigNode):

def modify_label(self, label):
return label or "<unspecified>"

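For context on the diff above: the cimodel config data is a tree of nested tuples, and each `TreeConfigNode` subclass's `child_constructor` decides how the next nesting level is interpreted — which is why rewiring `child_constructor` (e.g. `ImportantConfigNode` → `ExperimentalFeatureConfigNode`) changes what keys like `shard_test` or `build_only` are legal under `asan`, `libtorch`, and `coverage`. A minimal sketch of that pattern (simplified, hypothetical class names; not the actual cimodel implementation):

```python
# Sketch of the nested-tuple config-tree pattern: each node records a prop
# and delegates parsing of its subtree to whatever class child_constructor()
# returns, so the tree shape is driven by the node classes themselves.
class Node:
    def __init__(self, name, subtree):
        self.props = {}
        self.init(name)
        self.children = [
            self.child_constructor()(child_name, child_subtree)
            for child_name, child_subtree in subtree
        ]

    def init(self, name):
        pass

    def child_constructor(self):
        return Node


class BuildOnlyNode(Node):
    def init(self, name):
        self.props["build_only"] = name


class LibTorchNode(Node):
    def init(self, name):
        self.props["is_libtorch"] = name

    def child_constructor(self):
        # Mirrors the change in the diff: libtorch configs now chain into
        # experimental-feature children such as build_only.
        return BuildOnlyNode


tree = LibTorchNode(True, [("build_only", [])])
print(tree.children[0].props)  # the prop recorded on the child node
```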
39 changes: 28 additions & 11 deletions .circleci/cimodel/data/pytorch_build_definitions.py
@@ -6,7 +6,7 @@
import cimodel.lib.conf_tree as conf_tree
import cimodel.lib.miniutils as miniutils
from cimodel.data.pytorch_build_data import CONFIG_TREE_DATA, TopLevelNode
from cimodel.data.simple.util.branch_filters import gen_filter_dict
from cimodel.data.simple.util.branch_filters import gen_filter_dict, RC_PATTERN
from cimodel.data.simple.util.docker_constants import gen_docker_image


@@ -110,6 +110,8 @@ def gen_workflow_params(self, phase):
parameters["resource_class"] = resource_class
if phase == "build" and self.rocm_version is not None:
parameters["resource_class"] = "xlarge"
if hasattr(self, 'filters'):
parameters['filters'] = self.filters
return parameters

def gen_workflow_job(self, phase):
@@ -139,14 +141,16 @@ def gen_workflow_job(self, phase):

# TODO This is a hack to special case some configs just for the workflow list
class HiddenConf(object):
def __init__(self, name, parent_build=None):
def __init__(self, name, parent_build=None, filters=None):
self.name = name
self.parent_build = parent_build
self.filters = filters

def gen_workflow_job(self, phase):
return {
self.gen_build_name(phase): {
"requires": [self.parent_build.gen_build_name("build")]
"requires": [self.parent_build.gen_build_name("build")],
"filters": self.filters,
}
}

@@ -166,7 +170,8 @@ def gen_workflow_job(self, phase):
"branch": self.branch,
"requires": [self.parent_build],
"context": "org-member",
"filters": gen_filter_dict(branches_list=["nightly"])
"filters": gen_filter_dict(branches_list=["nightly"],
tags_list=RC_PATTERN)
}
}

@@ -205,7 +210,9 @@ def gen_docs_configs(xenial_parent_config):
configs.append(
HiddenConf(
"pytorch_python_doc_build",
parent_build=xenial_parent_config
parent_build=xenial_parent_config,
filters=gen_filter_dict(branches_list=r"/.*/",
tags_list=RC_PATTERN),
)
)
configs.append(
@@ -219,7 +226,9 @@
configs.append(
HiddenConf(
"pytorch_cpp_doc_build",
parent_build=xenial_parent_config
parent_build=xenial_parent_config,
filters=gen_filter_dict(branches_list=r"/.*/",
tags_list=RC_PATTERN),
)
)
configs.append(
@@ -263,6 +272,7 @@ def instantiate_configs():
compiler_version = fc.find_prop("compiler_version")
is_xla = fc.find_prop("is_xla") or False
is_asan = fc.find_prop("is_asan") or False
is_coverage = fc.find_prop("is_coverage") or False
is_onnx = fc.find_prop("is_onnx") or False
is_pure_torch = fc.find_prop("is_pure_torch") or False
is_vulkan = fc.find_prop("is_vulkan") or False
@@ -301,7 +311,10 @@ def instantiate_configs():
parms_list.append("asan")
python_version = fc.find_prop("pyver")
parms_list[0] = fc.find_prop("abbreviated_pyver")
restrict_phases = ["build", "test1", "test2"]

if is_coverage:
parms_list_ignored_for_docker_image.append("coverage")
python_version = fc.find_prop("pyver")

if is_onnx:
parms_list.append("onnx")
@@ -317,13 +330,13 @@
is_important = fc.find_prop("is_important") or False
parallel_backend = fc.find_prop("parallel_backend") or None
build_only = fc.find_prop("build_only") or False
is_coverage = fc.find_prop("is_coverage") or False
shard_test = fc.find_prop("shard_test") or False
# TODO: fix pure_torch python test packaging issue.
if shard_test:
restrict_phases = ["build"] if restrict_phases is None else restrict_phases
restrict_phases.extend(["test1", "test2"])
if build_only or is_pure_torch:
restrict_phases = ["build"]
if is_coverage and restrict_phases is None:
restrict_phases = ["build", "coverage_test"]


gpu_resource = None
if cuda_version and cuda_version != "10":
@@ -348,6 +361,8 @@

# run docs builds on "pytorch-linux-xenial-py3.6-gcc5.4". Docs builds
# should run on a CPU-only build that runs on all PRs.
# XXX should this be updated to a more modern build? Projects are
# beginning to drop python3.6
if (
distro_name == "xenial"
and fc.find_prop("pyver") == "3.6"
@@ -358,6 +373,8 @@
and compiler_name == "gcc"
and fc.find_prop("compiler_version") == "5.4"
):
c.filters = gen_filter_dict(branches_list=r"/.*/",
tags_list=RC_PATTERN)
c.dependent_tests = gen_docs_configs(c)

if cuda_version == "10.2" and python_version == "3.6" and not is_libtorch:
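The recurring `gen_filter_dict(branches_list=..., tags_list=RC_PATTERN)` calls in this file build CircleCI workflow filters so that doc jobs also run on release-candidate tags. The real helper lives in `cimodel.data.simple.util.branch_filters` and is not shown in this diff; the following is only a plausible sketch of such a helper, based on CircleCI's filter schema (the `RC_PATTERN` value here is an illustrative stand-in):

```python
# Hedged sketch: build a CircleCI-style workflow filter dict that allows the
# given branches and, optionally, tags. Not the actual cimodel helper.
RC_PATTERN = r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"  # stand-in RC tag regex

def gen_filter_dict(branches_list, tags_list=None):
    filter_dict = {"branches": {"only": branches_list}}
    if tags_list is not None:
        # Without a tags filter, CircleCI jobs do not run on tag pushes at all.
        filter_dict["tags"] = {"only": tags_list}
    return filter_dict

filters = gen_filter_dict(branches_list=r"/.*/", tags_list=RC_PATTERN)
```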
38 changes: 22 additions & 16 deletions .circleci/cimodel/data/simple/docker_definitions.py
@@ -1,49 +1,55 @@
from collections import OrderedDict

from cimodel.lib.miniutils import quote
from cimodel.data.simple.util.branch_filters import gen_filter_dict, RC_PATTERN


# TODO: make this generated from a matrix rather than just a static list
IMAGE_NAMES = [
"pytorch-linux-bionic-cuda11.1-cudnn8-py3.6-gcc9",
"pytorch-linux-bionic-cuda11.1-cudnn8-py3.8-gcc9",
"pytorch-linux-bionic-cuda11.0-cudnn8-py3.6-gcc9",
"pytorch-linux-bionic-cuda11.0-cudnn8-py3.8-gcc9",
"pytorch-linux-bionic-cuda10.2-cudnn7-py3.8-gcc9",
"pytorch-linux-bionic-py3.6-clang9",
"pytorch-linux-bionic-cuda10.2-cudnn7-py3.6-clang9",
"pytorch-linux-bionic-py3.8-gcc9",
"pytorch-linux-bionic-rocm3.5.1-py3.6",
"pytorch-linux-xenial-cuda10-cudnn7-py3-gcc7",
"pytorch-linux-xenial-cuda10.1-cudnn7-py3-gcc7",
"pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7",
"pytorch-linux-xenial-cuda11.0-cudnn8-py3-gcc7",
"pytorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7",
"pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc5.4",
"pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7",
"pytorch-linux-xenial-py3-clang5-android-ndk-r19c",
"pytorch-linux-xenial-py3-clang5-asan",
"pytorch-linux-xenial-py3-clang7-onnx",
"pytorch-linux-xenial-py3.8",
"pytorch-linux-xenial-py3.6-clang7",
"pytorch-linux-xenial-py3.6-gcc4.8",
"pytorch-linux-xenial-py3.6-gcc5.4",
"pytorch-linux-xenial-py3.6-gcc5.4", # this one is used in doc builds
"pytorch-linux-xenial-py3.6-gcc7.2",
"pytorch-linux-xenial-py3.6-gcc7",
"pytorch-linux-bionic-rocm3.7-py3.6",
"pytorch-linux-bionic-rocm3.8-py3.6",
"pytorch-linux-bionic-rocm3.9-py3.6",
"pytorch-linux-bionic-rocm3.10-py3.6",
]


def get_workflow_jobs():
"""Generates a list of docker image build definitions"""
return [
OrderedDict(
ret = []
for image_name in IMAGE_NAMES:
parameters = OrderedDict({
"name": quote(f"docker-{image_name}"),
"image_name": quote(image_name),
})
if image_name == "pytorch-linux-xenial-py3.6-gcc5.4":
# pushing documentation on tags requires CircleCI to also
# build all the dependencies on tags, including this docker image
parameters['filters'] = gen_filter_dict(branches_list=r"/.*/",
tags_list=RC_PATTERN)
ret.append(OrderedDict(
{
"docker_build_job": OrderedDict(
{
"name": quote(f"docker-{image_name}"),
"image_name": quote(image_name),
}
)
"docker_build_job": parameters
}
)
for image_name in IMAGE_NAMES
]
))
return ret