Rewrite existing links to custom ops gdocs with the landing page (#127423)

NB: these links will go live after the next docs build; builds happen once a day.

Test Plan:
- existing tests

Pull Request resolved: #127423
Approved by: https://github.com/jansel, https://github.com/williamwen42
ghstack dependencies: #127291, #127292, #127400
zou3519 authored and pytorchmergebot committed May 30, 2024
1 parent 18a3f78 commit c9beea1
Showing 5 changed files with 6 additions and 6 deletions.
aten/src/ATen/core/MetaFallbackKernel.cpp (2 additions & 2 deletions)
@@ -16,8 +16,8 @@ static void metaFallback(
       "fake impl or Meta kernel registered. You may have run into this message "
       "while using an operator with PT2 compilation APIs (torch.compile/torch.export); "
       "in order to use this operator with those APIs you'll need to add a fake impl. "
-      "Please see the following doc for next steps: "
-      "https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU/edit");
+      "Please see the following for next steps: "
+      "https://pytorch.org/docs/main/notes/custom_operators.html");
 }
 
 TORCH_LIBRARY_IMPL(_, Meta, m) {
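For context on the fix this message points to: a fake impl can be registered from Python with torch.library.register_fake. A minimal sketch, assuming a hypothetical operator mylib::my_op (not part of this diff):

    import torch

    # Hypothetical op, defined here only so the sketch is self-contained.
    torch.library.define("mylib::my_op", "(Tensor x) -> Tensor")

    @torch.library.impl("mylib::my_op", "cpu")
    def _(x):
        return x.sin()

    # The fake impl computes output metadata (shape/dtype/device) without
    # running the real kernel, which is what the meta fallback above requires.
    @torch.library.register_fake("mylib::my_op")
    def _(x):
        return torch.empty_like(x)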
c10/core/StorageImpl.cpp (1 addition & 1 deletion)
@@ -18,7 +18,7 @@ void throwNullDataPtrError() {
       "If you're using torch.compile/export/fx, it is likely that we are erroneously "
       "tracing into a custom kernel. To fix this, please wrap the custom kernel into "
       "an opaque custom op. Please see the following for details: "
-      "https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ");
+      "https://pytorch.org/docs/main/notes/custom_operators.html");
 }
 
 // NOTE: [FakeTensor.data_ptr deprecation]
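The "opaque custom op" wrapping that this message recommends can be sketched with torch.library.custom_op, which keeps the compiler from tracing into the kernel body. The numpy_sin op below is illustrative, not from the diff:

    import numpy as np
    import torch

    # NumPy stands in for an arbitrary black-box kernel; custom_op makes the
    # call opaque so torch.compile does not trace into its body.
    @torch.library.custom_op("mylib::numpy_sin", mutates_args=())
    def numpy_sin(x: torch.Tensor) -> torch.Tensor:
        return torch.from_numpy(np.sin(x.numpy(force=True)))

    # Pair it with a fake impl so the op also traces under FakeTensor.
    @numpy_sin.register_fake
    def _(x):
        return torch.empty_like(x)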
c10/core/TensorImpl.h (1 addition & 1 deletion)
@@ -1580,7 +1580,7 @@ struct C10_API TensorImpl : public c10::intrusive_ptr_target {
           "If you're using torch.compile/export/fx, it is likely that we are erroneously "
           "tracing into a custom kernel. To fix this, please wrap the custom kernel into "
           "an opaque custom op. Please see the following for details: "
-          "https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ\n"
+          "https://pytorch.org/docs/main/notes/custom_operators.html\n"
           "If you're using Caffe2, Caffe2 uses a lazy allocation, so you will need to call "
           "mutable_data() or raw_mutable_data() to actually allocate memory.");
     // Caller does the type check.
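This path fires when something asks for the data pointer of a tensor that has no real storage, which is what fake tensors look like during tracing. A small illustrative repro, using the internal FakeTensorMode (an implementation detail, so treat this as a sketch):

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    with FakeTensorMode():
        t = torch.empty(4)
        print(t.shape, t.dtype)  # metadata is available
        # t.data_ptr()  # would raise: fake tensors have no real storage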
torch/_dynamo/output_graph.py (1 addition & 1 deletion)
@@ -1676,7 +1676,7 @@ def example_value_from_input_node(self, node: torch.fx.Node):
                 "(and fall back to eager-mode PyTorch) on all ops "
                 "that do not have the 'pt2_compliant_tag'. "
                 "Please see the following doc for how to mark this op as PT2 compliant "
-                "https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ"
+                "https://pytorch.org/docs/main/notes/custom_operators.html"
             )
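Marking an op as PT2 compliant, as the rewritten message suggests, means attaching torch.Tag.pt2_compliant_tag when the op is defined. A minimal sketch with a hypothetical op name:

    import torch

    # The tag tells Dynamo the op is safe to trace through instead of being
    # a reason to graph-break when only compliant ops are allowed.
    torch.library.define(
        "mylib::tagged_op",
        "(Tensor x) -> Tensor",
        tags=(torch.Tag.pt2_compliant_tag,),
    )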
torch/library.py (1 addition & 1 deletion)
@@ -556,7 +556,7 @@ def register_fake(
     This API may be used as a decorator (see examples).
 
     For a detailed guide on custom ops, please see
-    https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ/edit
+    https://pytorch.org/docs/main/notes/custom_operators.html
 
     Examples:
         >>> import torch
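The Examples block is truncated above; a plausible register_fake usage in the same doctest style (the mylib::custom_linear name is an assumption, not recovered from the file):

    >>> import torch
    >>> @torch.library.custom_op("mylib::custom_linear", mutates_args=())
    ... def custom_linear(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    ...     return (x @ weight.t()) + bias
    >>> # register_fake as a decorator, as described above.
    >>> @torch.library.register_fake("mylib::custom_linear")
    ... def _(x, weight, bias):
    ...     return x.new_empty(x.shape[0], weight.shape[0])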
