@@ -1228,17 +1228,35 @@ def opcheck(
 ``opcheck`` tests these metadata and properties.

 Concretely, we test the following:
- - test_schema: if the operator's schema is correct.
- - test_autograd_registration: if autograd was registered correctly.
+
+ - test_schema: If the schema matches the implementation of
+ the operator. For example: if the schema specifies a Tensor is mutated,
+ then we check the implementation mutates the Tensor. If the schema
+ specifies that we return a new Tensor, then we check that the
+ implementation returns a new Tensor (instead of an existing one or
+ a view of an existing one).
+ - test_autograd_registration: If the operator supports training
+ (autograd): we check that its autograd formula is registered via
+ torch.library.register_autograd or a manual registration to one
+ or more DispatchKey::Autograd keys. Any other DispatchKey-based
+ registrations may lead to undefined behavior.
 - test_faketensor: If the operator has a FakeTensor kernel
- (and if it is correct). The FakeTensor kernel is necessary (
- but not sufficient) for the operator to work with PyTorch compilation
- APIs (torch.compile/export/FX).
+ (and if it is correct). The FakeTensor kernel is necessary (
+ but not sufficient) for the operator to work with PyTorch compilation
+ APIs (torch.compile/export/FX). We check that a FakeTensor kernel
+ (also sometimes known as a meta kernel) was registered for the
+ operator and that it is correct. This test takes the result of
+ running the operator on real tensors and the result of running
+ the operator on FakeTensors and checks that they have the same
+ Tensor metadata (sizes/strides/dtype/device/etc).
 - test_aot_dispatch_dynamic: If the operator has correct behavior
- with PyTorch compilation APIs (torch.compile/export/FX).
- This checks that the outputs (and gradients, if applicable) are the
- same under eager-mode PyTorch and torch.compile.
- This test is a superset of ``test_faketensor``.
+ with PyTorch compilation APIs (torch.compile/export/FX).
+ This checks that the outputs (and gradients, if applicable) are the
+ same under eager-mode PyTorch and torch.compile.
+ This test is a superset of ``test_faketensor`` and is an e2e test;
+ other things it tests are that the operator supports
+ functionalization and that the backward pass (if it exists) also
+ supports FakeTensor and functionalization.

 For best results, please call ``opcheck`` multiple times with a
 representative set of inputs. If your operator supports
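
For context, here is a minimal sketch of the kind of operator setup the updated docstring describes being checked. It is purely illustrative and not part of this change: the `mylib::mysin` operator, its FakeTensor kernel, and its autograd registration are hypothetical names, while `torch.library.custom_op`, `register_fake`, `register_autograd`, and `opcheck` are the real APIs referenced by the docstring.

```python
import torch
from torch import Tensor

# Illustrative custom operator; the "mylib::mysin" name is hypothetical.
@torch.library.custom_op("mylib::mysin", mutates_args=())
def mysin(x: Tensor) -> Tensor:
    # Matches its schema: no mutation, returns a new Tensor (not a view/alias).
    return torch.sin(x)

# FakeTensor (meta) kernel: produces a Tensor with the right metadata, no data.
@mysin.register_fake
def _(x: Tensor) -> Tensor:
    return torch.empty_like(x)

# Autograd registration so the operator supports training.
def setup_context(ctx, inputs, output):
    (x,) = inputs
    ctx.save_for_backward(x)

def backward(ctx, grad):
    (x,) = ctx.saved_tensors
    return grad * torch.cos(x)

torch.library.register_autograd(
    "mylib::mysin", backward, setup_context=setup_context
)

# Call opcheck once per representative sample; each call runs test_schema,
# test_autograd_registration, test_faketensor, and test_aot_dispatch_dynamic.
for x in (torch.randn(3), torch.randn(2, 2, requires_grad=True)):
    torch.library.opcheck(mysin, (x,))
```

Looping over a handful of representative inputs, as the docstring recommends, is what exercises the metadata checks across the shapes, dtypes, and requires_grad settings the operator is expected to support.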