Switch all code blocks to use valid Code Hike languages #815

Merged
merged 2 commits on Dec 1, 2023
16 changes: 10 additions & 6 deletions apps/consulting/posts/review-functorch-contribution.md
@@ -26,6 +26,7 @@ calculating per-sample gradients becomes the straightforward application of
Here are a few of the tasks we had the opportunity to tackle:

## Adding Batching Rules for `vmap`

`vmap` is a transformation that accepts a function that operates on non-batched
tensors and returns a new function that operates on batched tensors.
When processing a batched input, an additional
@@ -36,6 +37,7 @@ operation efficiently by pushing the `for` loop into the PyTorch operations,
allowing the batches to run in parallel.

Consider the following example:

```python
import torch

@@ -81,9 +83,10 @@ and other simpler composite operators. If we implement batching rules for every
primitive operator, we automatically get the batching rules for composite operators.

There are two ways to add batching support for an operator:
-* Manually write the batching rule. See for example the [batching rule for torch.dot](https://github.com/pytorch/pytorch/blob/b30ee35a6f141d3247a24fd09f96ea50a7e2b3c7/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp#L25-L34)
-* Decompose operators using other operators for which we already have a
-  batching rule. See for example the [batching rule for torch.vdot](https://github.com/pytorch/pytorch/blob/b30ee35a6f141d3247a24fd09f96ea50a7e2b3c7/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp#L35-L37)
+
+- Manually write the batching rule. See for example the [batching rule for torch.dot](https://github.com/pytorch/pytorch/blob/b30ee35a6f141d3247a24fd09f96ea50a7e2b3c7/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp#L25-L34)
+- Decompose operators using other operators for which we already have a
+  batching rule. See for example the [batching rule for torch.vdot](https://github.com/pytorch/pytorch/blob/b30ee35a6f141d3247a24fd09f96ea50a7e2b3c7/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp#L35-L37)
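[Editor's sketch, not part of this diff.] The two strategies above can be illustrated with a hypothetical pure-Python analogue; every name here is invented for illustration and none of it is PyTorch's actual implementation:

```python
def dot(a, b):
    # unbatched operator: dot product of two equal-length vectors
    return sum(x * y for x, y in zip(a, b))

# Strategy 1: a hand-written batching rule for `dot`. A real rule would
# push the loop into a single vectorized kernel; the one pass over the
# batch below stands in for that.
def dot_batch_rule(batch_a, batch_b):
    return [dot(a, b) for a, b in zip(batch_a, batch_b)]

# Strategy 2: decompose `vdot` into operators that already have batching
# rules (for real inputs, vdot reduces to dot), so it inherits batching
# support without a hand-written rule.
def vdot(a, b):
    return dot(a, b)

def vdot_batch_rule(batch_a, batch_b):
    return dot_batch_rule(batch_a, batch_b)

print(dot_batch_rule([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [17, 53]
```

The decomposed operator gets batching "for free": its rule is just the rule of the operators it is built from.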

## Composite Compliance

@@ -208,7 +211,7 @@ generate specialized code. In this case, it has fused `sin` and `square` to run
within the same `for`-loop. This allows the generated program to do more compute
per read/write, effectively improving memory bandwidth utilization.

-```c++
+```cpp
extern "C" void kernel(const float* in_ptr0, float* out_ptr0) {
for (long i0 = 0L; i0 < 16L; i0 += 8L) {
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + i0);
@@ -264,15 +267,15 @@ successfully trace through this program in one graph.
class GraphModule(torch.nn.Module):
def forward(self, L_x_ : torch.Tensor):
l_x_ = L_x_

# File: torch/_functorch/apis.py:363, code:
# return eager_transforms.grad_impl(func, argnums, has_aux, args, kwargs)
grad_body_0 = self.grad_body_0
grad_proxy = torch.func.grad(grad_body_0, 0, False); grad_body_0 = None
call = grad_proxy.__call__(l_x_); grad_proxy = l_x_ = None
contiguous = call.contiguous(); call = None
return (contiguous,)

class GraphModule(torch.nn.Module):
def forward(self, l_x_):
# No stacktrace found for following nodes
@@ -317,6 +320,7 @@ minimal limitations, providing a more comprehensive compilation support for `torch.func`
transforms

## Closing Remarks

This project was yet another instance of the tight collaboration between Quansight
and Meta within PyTorch. In particular, we would like to thank Richard Zou and
Horace He, the `torch.func` creators, for all the design discussions and
4 changes: 2 additions & 2 deletions apps/labs/posts/cxx-numba-interoperability.md
@@ -147,7 +147,7 @@ optional arguments:
Next, let us consider the following C++ header and source file that we
will use as a model of a C++ library:

-```c++
+```cpp
/* File: foo.hpp */
#include <iostream>
int foo(int a);
@@ -202,7 +202,7 @@ Notice that the generated [cxx2py_libfoo.cpp](/posts/cxx-numba-interoperability/
contains light-weight C functions for returning the addresses of C++
functions:

-```c++
+```cpp
#include <memory>
#include <cstdint>
#include "foo.hpp"
@@ -388,7 +388,7 @@ don't match (e.g., `concat` in the standard instead of `np.concatenate`).

Here is an example of a naming inconsistency between NumPy and the Array API standard:

-```ipython
+```py

In [1]: import numpy as np
In [2]: import numpy.array_api as npx
@@ -404,7 +404,7 @@ Out[4]: <function numpy.array_api._manipulation_functions.concat...>

And here is an example of a behavioral inconsistency for an indexing operation:

-```ipython
+```py
In [1]: import numpy as np
In [2]: import numpy.array_api as npx

@@ -110,7 +110,7 @@ you will be able to enable it by passing `boundscheck=True` to `@njit`, or by
setting the `NUMBA_BOUNDSCHECK=1` environment variable. This will make it
easier to detect out-of-bounds issues like the one above. It will work like

-```pycon
+```py
>>> @njit(boundscheck=True)
... def outtabounds(x):
...     A = 0
@@ -166,8 +166,7 @@ Some reasons why `import *` is bad:
level, `import *` will import every public (doesn't start with an
underscore) name defined in the module file. This can often include things
like standard library imports or loop variables defined at the top-level of
-the file. For imports from modules (from `__init__.py`), `from module import
-*` will include every submodule defined in that module. Using `__all__` in
+the file. For imports from modules (from `__init__.py`), `from module import *` will include every submodule defined in that module. Using `__all__` in
modules and `__init__.py` files is also good practice, as these things are
also often confusing even for interactive use where `import *` is
acceptable.
@@ -309,7 +308,6 @@ With the new [sphinx-math-dollar](https://www.sympy.org/sphinx-math-dollar/)
Sphinx extension, this is now possible. Writing `$\nu$` produces $\nu$, and
the above docstring can now be written as


```py
class besselj(BesselBase):
"""
@@ -334,8 +332,7 @@ class besselj(BesselBase):
J_{-n}(z) = (-1)^n J_n(z).
```

-We also plan to add support for `$$double dollars$$` for display math so that `..
-math ::` is no longer needed either .
+We also plan to add support for `$$double dollars$$` for display math so that `.. math ::` is no longer needed either.
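[Editor's note, not part of this diff.] For reference, enabling dollar-sign math is a small `conf.py` change per the sphinx-math-dollar documentation; this is a sketch of the documented setup, not code from the posts:

```python
# conf.py (Sphinx configuration) — enable dollar-sign math delimiters
extensions = ["sphinx_math_dollar", "sphinx.ext.mathjax"]
```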

For end users, the documentation on [docs.sympy.org](https://docs.sympy.org)
will continue to render exactly the same, but for developers, it is much
8 changes: 4 additions & 4 deletions apps/labs/posts/sympy-documentation.md
@@ -228,7 +228,7 @@ before being removed.
Two - a new `SymPyDeprecationWarning` class for deprecation warnings, which
gives much more user friendly error messages. For example

-```python-console
+```py
>>> import sympy.core.compatibility
<stdin>:1: SymPyDeprecationWarning:

@@ -282,7 +282,7 @@ class log(Function):
return 1/self.args[0]
```

-```python-console
+```py
>>> x = sympy.Symbol('x')
>>> log(1)
0
@@ -307,7 +307,7 @@ goes over some ways to avoid these pitfalls.
For example, one pitfall that many new SymPy users run into is using strings
as inputs to SymPy functions, like

-```python-console
+```py
>>> from sympy import expand
>>> expand("(x**2 + x)/x")
x + 1
@@ -316,7 +316,7 @@ x + 1
It's much better to define symbolic variables and create expressions directly,
like

-```python-console
+```py
>>> from sympy import symbols
>>> x = symbols('x')
>>> expand((x**2 + x)/x)
12 changes: 6 additions & 6 deletions apps/labs/posts/whats-new-in-sympy-14.md
@@ -82,7 +82,7 @@ If you want the string form of an expression for copy-pasting, you can use

Simplification of relational and piecewise expressions has been improved:

-```pycon
+```py
>>> x, y, z, w = symbols('x y z w')
>>> init_printing()
>>> expr = And(Eq(x,y), x >= y, w < y, y >= z, z < y)
@@ -92,7 +92,7 @@ x = y ∧ x ≥ y ∧ y ≥ z ∧ w < y ∧ z < y
x = y ∧ y > Max(w, z)
```

-```pycon
+```py
>>> expr = Piecewise((x*y, And(x >= y, Eq(y, 0))), (x - 1, Eq(x, 1)), (0, True))
>>> expr
⎧ x⋅y for y = 0 ∧ x ≥ y
@@ -109,7 +109,7 @@ x = y ∧ y > Max(w, z)
The MathML presentation printer has been greatly improved, putting it on par
with the existing Unicode and LaTeX pretty printers.

-```pycon
+```py
>>> mathml(Integral(exp(-x**2), (x, -oo, oo)), 'presentation')
<mrow><msubsup><mo>&#x222B;</mo><mrow><mo>-</mo><mi>&#x221E;</mi></mrow><mi>&#x221E;</mi></msubsup><msup><mi>&ExponentialE;</mi><mrow><mo>-</mo><msup><mi>x</mi><mn>2</mn></msup></mrow></msup><mo>&dd;</mo><mi>x</mi></mrow>
```
@@ -128,7 +128,7 @@ presentation form for `Integral(exp(-x**2), (x, -oo, oo))` below:

Several improvements have been made to the solvers.

-```pycon
+```py
>>> eq = Eq((x**2 - 7*x + 11)**(x**2 - 13*x + 42), 1)
>>> eq
2
@@ -145,7 +145,7 @@ been added.
`'nth_algebraic'` solves ODEs using `solve` by inverting the derivatives
algebraically:

-```pycon
+```py
>>> f = Function('f')
>>> eq = Eq(f(x) * (f(x).diff(x)**2 - 1), 0)
>>> eq
@@ -160,7 +160,7 @@ algebraically:
`'nth_order_reducible'` solves ODEs that only involve derivatives of `f(x)`,
via the substitution $g(x)=f^\{(n)\}(x)$.

-```pycon
+```py
>>> eq = Eq(Derivative(f(x), (x, 2)) + x*Derivative(f(x), x), x)
>>> eq
2