The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficient serialization of Tensors and arbitrary types, and other useful utilities.
It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
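As a minimal sketch (assuming PyTorch is installed), device-agnostic code typically picks a device at runtime and falls back to the CPU when no CUDA-capable GPU is present:

```python
import torch

# Pick a device: use the GPU when available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.ones(2, 3, device=device)  # allocate directly on the chosen device
y = (x * 2).cpu()                    # move the result back to host memory
```

The same code runs unchanged on machines with and without a GPU, which is the usual way to write portable tensor computations.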
torch
Tensors
is_tensor is_storage is_complex is_floating_point is_nonzero set_default_dtype get_default_dtype set_default_tensor_type numel set_printoptions set_flush_denormal
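For example, the tensor-introspection utilities above can be exercised like this (a minimal sketch, assuming PyTorch is installed):

```python
import torch

x = torch.zeros(2, 3)

t = torch.is_tensor(x)           # True: x is a torch.Tensor
f = torch.is_floating_point(x)   # True: zeros() uses the default float dtype
n = torch.numel(x)               # 6: total number of elements
d = torch.get_default_dtype()    # torch.float32 unless changed via set_default_dtype
```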
Note
Random sampling creation ops are listed under Random sampling and include: torch.rand, torch.rand_like, torch.randn, torch.randn_like, torch.randint, torch.randint_like, and torch.randperm. You may also use torch.empty with the in-place random sampling methods to create torch.Tensor objects with values sampled from a broader range of distributions.
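The random sampling creation ops mentioned in the note can be sketched as follows (seeding first so the draws are reproducible):

```python
import torch

torch.manual_seed(0)  # make the draws below reproducible

u = torch.rand(2, 3)              # uniform on [0, 1)
n = torch.randn(2, 3)             # standard normal
i = torch.randint(0, 10, (2, 3))  # integers in [0, 10)
p = torch.randperm(5)             # random permutation of 0..4
```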
Creation Ops
tensor sparse_coo_tensor as_tensor as_strided from_numpy zeros zeros_like ones ones_like arange range linspace logspace eye empty empty_like empty_strided full full_like quantize_per_tensor quantize_per_channel dequantize complex polar heaviside
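A few of the creation ops above in action (a minimal sketch; the from_numpy interop assumes NumPy is installed):

```python
import numpy as np
import torch

a = torch.tensor([[1, 2], [3, 4]])   # copy data into a new tensor
b = torch.zeros_like(a)              # same shape and dtype, filled with 0
c = torch.arange(0, 10, 2)           # [0, 2, 4, 6, 8]
d = torch.linspace(0, 1, steps=5)    # 5 evenly spaced points in [0, 1]
e = torch.from_numpy(np.eye(2))      # shares memory with the NumPy array
```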
Indexing, Slicing, Joining, Mutating Ops
cat chunk column_stack dstack gather hstack index_select masked_select movedim narrow nonzero reshape row_stack split squeeze stack swapaxes swapdims t take tensor_split tile transpose unbind unsqueeze vstack where
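Some of the joining and mutating ops above, sketched on a small tensor:

```python
import torch

a = torch.arange(6).reshape(2, 3)    # [[0, 1, 2], [3, 4, 5]]

stacked = torch.stack([a, a])        # new leading dim -> shape (2, 2, 3)
joined = torch.cat([a, a], dim=0)    # concatenate along dim 0 -> (4, 3)
col = torch.unsqueeze(a[:, 0], 1)    # shape (2,) -> (2, 1)
masked = torch.where(a > 2, a, torch.zeros_like(a))  # keep values > 2, else 0
```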
Generator
Random sampling
seed manual_seed initial_seed get_rng_state set_rng_state
torch.default_generator - Returns the default CPU torch.Generator
bernoulli multinomial normal poisson rand rand_like randint randint_like randn randn_like randperm
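A short sketch of seeded sampling: reseeding the global generator reproduces draws, and a separate torch.Generator keeps an independent stream:

```python
import torch

# Seeding the global generator makes random draws reproducible.
torch.manual_seed(42)
a = torch.rand(3)
torch.manual_seed(42)
b = torch.rand(3)   # identical to a

# A separate torch.Generator keeps an independent RNG stream.
g = torch.Generator().manual_seed(0)
c = torch.rand(3, generator=g)
```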
There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation:
torch.Tensor.bernoulli_ - in-place version of torch.bernoulli
torch.Tensor.cauchy_ - numbers drawn from the Cauchy distribution
torch.Tensor.exponential_ - numbers drawn from the exponential distribution
torch.Tensor.geometric_ - elements drawn from the geometric distribution
torch.Tensor.log_normal_ - samples from the log-normal distribution
torch.Tensor.normal_ - in-place version of torch.normal
torch.Tensor.random_ - numbers sampled from the discrete uniform distribution
torch.Tensor.uniform_ - numbers sampled from the continuous uniform distribution
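The pattern described earlier, torch.empty followed by an in-place sampler (note the trailing underscore), looks like this (a minimal sketch):

```python
import torch

torch.manual_seed(0)

# torch.empty allocates uninitialized memory; the in-place samplers
# then fill it with draws from the chosen distribution.
t = torch.empty(2, 3).uniform_(0, 1)              # continuous uniform on [0, 1)
u = torch.empty(2, 3).normal_(mean=0.0, std=1.0)  # Gaussian samples
v = torch.empty(2, 3).exponential_(lambd=1.0)     # non-negative exponential draws
```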
Quasi-random sampling
quasirandom.SobolEngine
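A minimal SobolEngine sketch, drawing a few points of a 2-dimensional low-discrepancy sequence:

```python
import torch

# Sobol sequences cover the unit hypercube more evenly than i.i.d.
# uniform samples, which helps quasi-Monte-Carlo integration.
engine = torch.quasirandom.SobolEngine(dimension=2, scramble=False)
points = engine.draw(4)   # shape (4, 2), values in the unit square
```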
Serialization
save load
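A save/load round trip; an in-memory buffer is used here, but a file path works the same way (a minimal sketch):

```python
import io
import torch

t = torch.arange(4)

# Serialize to an in-memory buffer instead of a file on disk.
buffer = io.BytesIO()
torch.save(t, buffer)

buffer.seek(0)
restored = torch.load(buffer)   # round-trips the tensor unchanged
```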
Parallelism
get_num_threads set_num_threads get_num_interop_threads set_num_interop_threads
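A sketch of the intra-op thread controls. (set_num_interop_threads is left out because it can only be called once, before any inter-op parallel work starts.)

```python
import torch

# Query and set the number of threads used for intra-op parallelism.
old = torch.get_num_threads()
torch.set_num_threads(2)
assert torch.get_num_threads() == 2
torch.set_num_threads(old)  # restore the previous setting
```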
Locally disabling gradient computation
The context managers torch.no_grad, torch.enable_grad, and torch.set_grad_enabled are helpful for locally disabling and enabling gradient computation. See Locally disabling gradient computation for more details on their usage. These context managers are thread local, so they won't work if you send work to another thread using the threading module, etc.
Examples:
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
... y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True) # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
no_grad enable_grad set_grad_enabled
Math operations
Pointwise Ops
abs absolute acos arccos acosh arccosh add addcdiv addcmul angle asin arcsin asinh arcsinh atan arctan atanh arctanh atan2 bitwise_not bitwise_and bitwise_or bitwise_xor ceil clamp clip conj copysign cos cosh deg2rad div divide digamma erf erfc erfinv exp exp2 expm1 fix floor floor_divide fmod frac imag ldexp lerp lgamma log log10 log1p log2 logaddexp logaddexp2 logical_and logical_not logical_or logical_xor logit hypot i0 igamma igammac mul multiply mvlgamma nan_to_num neg negative nextafter polygamma pow rad2deg real reciprocal remainder round rsqrt sigmoid sign signbit sin sinh sqrt square sub subtract tan tanh true_divide trunc
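The pointwise ops operate elementwise and follow broadcasting rules where shapes allow; a minimal sketch:

```python
import torch

x = torch.tensor([-1.5, 0.0, 2.0])

a = torch.abs(x)          # elementwise absolute value
c = torch.clamp(x, 0, 1)  # values limited to the range [0, 1]
y = torch.add(x, 10)      # elementwise add; the scalar broadcasts
```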
Reduction Ops
argmax argmin amax amin max min dist logsumexp mean median nanmedian mode norm nansum prod quantile nanquantile std std_mean sum unique unique_consecutive var var_mean count_nonzero
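The reduction ops collapse a whole tensor or a single dimension selected with dim; a minimal sketch:

```python
import torch

m = torch.tensor([[1.0, 5.0], [3.0, 2.0]])

total = torch.sum(m)            # reduce over all elements -> 11.0
per_col = torch.mean(m, dim=0)  # mean down each column -> shape (2,)
idx = torch.argmax(m, dim=1)    # index of the max within each row
```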
Comparison Ops
allclose argsort eq equal ge greater_equal gt greater isclose isfinite isinf isposinf isneginf isnan isreal kthvalue le less_equal lt less maximum minimum ne not_equal sort topk
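The comparison ops return boolean tensors elementwise, or a single bool for whole-tensor checks like allclose; a minimal sketch:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.5, 3.0])

eq = torch.eq(a, b)                      # elementwise equality -> BoolTensor
close = torch.allclose(a, b, atol=0.6)   # True: all entries within tolerance
vals, idxs = torch.topk(a, 2)            # the 2 largest values and their indices
```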
Spectral Ops
fft ifft rfft irfft stft istft bartlett_window blackman_window hamming_window hann_window kaiser_window
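Several of the spectral ops build window tensors, typically passed as the window argument of stft; a minimal sketch:

```python
import torch

# Tapering windows for spectral analysis; both are 1-D tensors of the
# requested length, with values in [0, 1].
w_hann = torch.hann_window(8)
w_hamming = torch.hamming_window(8)
```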
Other Operations
atleast_1d atleast_2d atleast_3d bincount block_diag broadcast_tensors bucketize cartesian_prod cdist clone combinations cross cummax cummin cumprod cumsum diag diag_embed diagflat diagonal einsum flatten flip fliplr flipud kron rot90 gcd histc meshgrid lcm logcumsumexp ravel renorm repeat_interleave roll searchsorted tensordot trace tril tril_indices triu triu_indices vander view_as_real view_as_complex
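A few of these in a minimal sketch; einsum in particular expresses many contractions in one subscript string:

```python
import torch

a = torch.arange(6.0).reshape(2, 3)
b = torch.arange(12.0).reshape(3, 4)

mm = torch.einsum("ij,jk->ik", a, b)  # matrix product via einsum -> (2, 4)
tr = torch.trace(torch.eye(3))        # sum of diagonal elements -> 3.0
flat = torch.flatten(a)               # collapse to 1-D -> shape (6,)
```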
BLAS and LAPACK Operations
addbmm addmm addmv addr baddbmm bmm chain_matmul cholesky cholesky_inverse cholesky_solve dot eig geqrf ger inverse det logdet slogdet lstsq lu lu_solve lu_unpack matmul matrix_power matrix_rank matrix_exp mm mv orgqr ormqr outer pinverse qr solve svd svd_lowrank pca_lowrank symeig lobpcg trapz triangular_solve vdot
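The core matrix-multiply entry points, sketched minimally; mm is strictly 2-D, while matmul also broadcasts batch dimensions:

```python
import torch

torch.manual_seed(0)
a = torch.randn(2, 3)
b = torch.randn(3, 4)

c = torch.mm(a, b)               # 2-D matrix multiply -> (2, 4)
d = torch.matmul(a, b)           # same result here; matmul also handles batches
v = torch.mv(a, torch.randn(3))  # matrix-vector product -> (2,)
```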
Utilities
compiled_with_cxx11_abi result_type can_cast promote_types set_deterministic is_deterministic vmap _assert
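The type-promotion helpers among these answer what dtype results from mixing operands; a minimal sketch:

```python
import torch

# Which dtype do mixed operands promote to?
dt = torch.promote_types(torch.int32, torch.float32)  # floats win -> torch.float32
ok = torch.can_cast(torch.float64, torch.int32)       # False: float-to-int is lossy
rt = torch.result_type(torch.tensor([1, 2]), 1.0)     # int tensor + Python float
```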