zhouzq-thu/PyTorch-API-Summary

  • torch.nn

    Parameters

    • class torch.nn.Parameter[source]

    Containers

    • class torch.nn.Module[source]

    • class torch.nn.Sequential(*args)[source]

    • class torch.nn.ModuleList(modules=None)[source]

    • class torch.nn.ParameterList(parameters=None)[source]
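The container classes above compose parameterized layers into models. A minimal sketch (the layer sizes here are illustrative choices, not from the summary):

```python
import torch
import torch.nn as nn

# nn.Module is the base class for all networks; nn.Sequential
# chains submodules and calls them in order.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(4, 8),   # illustrative sizes
            nn.ReLU(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.body(x)

net = TinyNet()
out = net(torch.randn(3, 4))  # a batch of 3 samples with 4 features each
```

Submodules assigned as attributes are registered automatically, so `net.parameters()` yields the weights and biases of both Linear layers.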

    Convolution Layers

    • class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

    • class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

    • class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

    • class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]

    • class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]

    • class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]
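A quick sketch of the convolution classes (channel counts chosen arbitrarily for illustration): with `kernel_size=3` and `padding=1`, `Conv2d` preserves the spatial size.

```python
import torch
import torch.nn as nn

# Conv2d consumes input of shape (N, C_in, H, W).
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(2, 3, 32, 32)
y = conv(x)  # kernel_size=3 with padding=1 keeps H and W unchanged
```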

    Pooling Layers

    • class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

    • class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

    • class torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

    • class torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)[source]

    • class torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)[source]

    • class torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)[source]

    • class torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]

    • class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]

    • class torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]

    • class torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

    • class torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)[source]

    • class torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False)[source]

    • class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False)[source]

    • class torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False)[source]

    • class torch.nn.AdaptiveAvgPool1d(output_size)[source]

    • class torch.nn.AdaptiveAvgPool2d(output_size)[source]

    • class torch.nn.AdaptiveAvgPool3d(output_size)[source]
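A small sketch contrasting fixed-window and adaptive pooling (shapes are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 16, 16)
pooled = nn.MaxPool2d(kernel_size=2)(x)     # stride defaults to kernel_size, halving H and W
squeezed = nn.AdaptiveAvgPool2d((1, 1))(x)  # adaptive pooling targets an output size directly
```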

    Padding Layers

    • class torch.nn.ReflectionPad2d(padding)[source]

    • class torch.nn.ReplicationPad2d(padding)[source]

    • class torch.nn.ReplicationPad3d(padding)[source]

    • class torch.nn.ZeroPad2d(padding)[source]

    • class torch.nn.ConstantPad2d(padding, value)[source]
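The padding layers differ only in how they fill the new border. A minimal sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 4, 4)
zp = nn.ZeroPad2d(2)(x)        # 2 zero rows/columns added on every side
rp = nn.ReflectionPad2d(1)(x)  # border filled by mirroring the input edge
```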

    Non-linear Activations

    • class torch.nn.ReLU(inplace=False)[source]

    • class torch.nn.ReLU6(inplace=False)[source]

    • class torch.nn.ELU(alpha=1.0, inplace=False)[source]

    • class torch.nn.SELU(inplace=False)[source]

    • class torch.nn.PReLU(num_parameters=1, init=0.25)[source]

    • class torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)[source]

    • class torch.nn.Threshold(threshold, value, inplace=False)[source]

    • class torch.nn.Hardtanh(min_val=-1, max_val=1, inplace=False, min_value=None, max_value=None)[source]

    • class torch.nn.Sigmoid[source]

    • class torch.nn.Tanh[source]

    • class torch.nn.LogSigmoid[source]

    • class torch.nn.Softplus(beta=1, threshold=20)[source]

    • class torch.nn.Softshrink(lambd=0.5)[source]

    • class torch.nn.Softsign[source]

    • class torch.nn.Tanhshrink[source]

    • class torch.nn.Softmin(dim=None)[source]

    • class torch.nn.Softmax(dim=None)[source]

    • class torch.nn.Softmax2d[source]

    • class torch.nn.LogSoftmax(dim=None)[source]
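Most of these activations are element-wise; the Softmax family instead normalizes over a dimension. A quick sketch:

```python
import torch
import torch.nn as nn

x = torch.tensor([-1.0, 0.0, 2.0])
r = nn.ReLU()(x)          # clamps negatives to zero, element-wise
s = nn.Softmax(dim=0)(x)  # non-negative weights summing to 1 along dim
```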

    Normalization Layers

    • class torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True)[source]

    • class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)[source]

    • class torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True)[source]

    • class torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False)[source]

    • class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False)[source]

    • class torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False)[source]
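In training mode, batch normalization standardizes each feature over the batch using the arguments listed above (`eps`, `momentum`, `affine`). A minimal sketch:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=4)  # expects input of shape (N, num_features)
x = torch.randn(8, 4)
y = bn(x)  # per-feature batch mean removed, variance scaled to ~1
```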

    Recurrent Layers

    • class torch.nn.RNN(*args, **kwargs)[source]

    • class torch.nn.LSTM(*args, **kwargs)[source]

    • class torch.nn.GRU(*args, **kwargs)[source]

    • class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')[source]

    • class torch.nn.LSTMCell(input_size, hidden_size, bias=True)[source]

    • class torch.nn.GRUCell(input_size, hidden_size, bias=True)[source]
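The recurrent classes take `(input_size, hidden_size, ...)` via `*args`/`**kwargs`; LSTM additionally returns a cell state. A sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)     # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x)  # output holds the top layer's hidden state at every step
```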

    Linear Layers

    • class torch.nn.Linear(in_features, out_features, bias=True)[source]

    • class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True)[source]
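`Linear` applies an affine map to one input; `Bilinear` applies a bilinear form to two. A sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

lin = nn.Linear(in_features=6, out_features=3)  # y = x A^T + b
y = lin(torch.randn(4, 6))

bil = nn.Bilinear(in1_features=5, in2_features=6, out_features=3)
z = bil(torch.randn(4, 5), torch.randn(4, 6))   # one output per pair of rows
```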

    Dropout Layers

    • class torch.nn.Dropout(p=0.5, inplace=False)[source]

    • class torch.nn.Dropout2d(p=0.5, inplace=False)[source]

    • class torch.nn.Dropout3d(p=0.5, inplace=False)[source]

    • class torch.nn.AlphaDropout(p=0.5)[source]
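Dropout is only active in training mode; after `.eval()` it is the identity, which is worth checking when debugging. A sketch:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
drop.eval()      # switch the module to evaluation mode
x = torch.randn(3, 3)
y = drop(x)      # identity in eval mode; no units are zeroed
```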

    Sparse Layers

    • class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, sparse=False)[source]

    • class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean')[source]
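`Embedding` is a learnable lookup table mapping integer indices to dense vectors. A sketch:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)
idx = torch.tensor([[1, 2, 5]])  # a batch of index sequences
vecs = emb(idx)                  # each index becomes a 4-dim vector
```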

    Distance functions

    • class torch.nn.CosineSimilarity(dim=1, eps=1e-08)[source]

    • class torch.nn.PairwiseDistance(p=2, eps=1e-06)[source]
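Both distance modules operate row-wise over a batch. A sketch:

```python
import torch
import torch.nn as nn

a = torch.tensor([[1.0, 0.0]])
b = torch.tensor([[0.0, 1.0]])
sim = nn.CosineSimilarity(dim=1)(a, a)  # identical directions give ~1
dist = nn.PairwiseDistance(p=2)(a, b)   # Euclidean distance per row
```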

    Loss functions

    • class torch.nn.L1Loss(size_average=True, reduce=True)[source]

    • class torch.nn.MSELoss(size_average=True, reduce=True)[source]

    • class torch.nn.CrossEntropyLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]

    • class torch.nn.NLLLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]

    • class torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=True, eps=1e-08)[source]

    • class torch.nn.NLLLoss2d(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]

    • class torch.nn.KLDivLoss(size_average=True, reduce=True)[source]

    • class torch.nn.BCELoss(weight=None, size_average=True)[source]

    • class torch.nn.BCEWithLogitsLoss(weight=None, size_average=True)[source]

    • class torch.nn.MarginRankingLoss(margin=0, size_average=True)[source]

    • class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=True)[source]

    • class torch.nn.MultiLabelMarginLoss(size_average=True)[source]

    • class torch.nn.SmoothL1Loss(size_average=True, reduce=True)[source]

    • class torch.nn.SoftMarginLoss(size_average=True)[source]

    • class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=True)[source]

    • class torch.nn.CosineEmbeddingLoss(margin=0, size_average=True)[source]

    • class torch.nn.MultiMarginLoss(p=1, margin=1, weight=None, size_average=True)[source]

    • class torch.nn.TripletMarginLoss(margin=1.0, p=2, eps=1e-06, swap=False)[source]
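The `size_average`/`reduce` flags above come from this PyTorch generation; current releases fold them into a single `reduction` argument, so the sketch below relies on defaults. `CrossEntropyLoss` combines log-softmax and NLL, so it expects raw logits plus integer class targets:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()      # applies log-softmax internally
logits = torch.randn(4, 3)           # (batch, num_classes), unnormalized scores
target = torch.tensor([0, 2, 1, 0])  # class indices, not one-hot vectors
loss = loss_fn(logits, target)       # scalar, averaged over the batch
```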

    Vision Layers

    • class torch.nn.PixelShuffle(upscale_factor)[source]

    • class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest')[source]

    • class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)[source]

    • class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)[source]
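`PixelShuffle` rearranges a `(N, C·r², H, W)` tensor into `(N, C, H·r, W·r)` (the sub-pixel upsampling trick), while `Upsample` interpolates directly. A sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 4, 4)  # 8 channels = 2 * r^2 with r=2
y = nn.PixelShuffle(upscale_factor=2)(x)
z = nn.Upsample(scale_factor=2, mode='nearest')(torch.randn(1, 3, 4, 4))
```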

    DataParallel layers (multi-GPU, distributed)

    • class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)[source]

    • class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0)[source]

    Utilities

    • torch.nn.utils.clip_grad_norm(parameters, max_norm, norm_type=2)[source]

    • torch.nn.utils.weight_norm(module, name='weight', dim=0)[source]

    • torch.nn.utils.remove_weight_norm(module, name='weight')[source]

    • torch.nn.utils.rnn.PackedSequence(data, batch_sizes)[source]

    • torch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False)[source]

    • torch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0)[source]
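`pack_padded_sequence` lets an RNN skip padding steps; `pad_packed_sequence` inverts it. A round-trip sketch (note that newer releases add an `enforce_sorted` flag; the lengths here are already descending):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

x = torch.randn(3, 2, 5)        # (seq_len, batch, features), zero-padded
lengths = torch.tensor([3, 2])  # true length of each sequence, descending
packed = pack_padded_sequence(x, lengths)
unpacked, out_lengths = pad_packed_sequence(packed)
```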

    torch.nn.functional

    Convolution functions

    • torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]

    • torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]

    • torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]

    • torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]

    • torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]

    • torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]

    Pooling functions

    • torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]

    • torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Variable

    • torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Variable

    • torch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

    • torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

    • torch.nn.functional.max_pool3d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

    • torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

    • torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

    • torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

    • torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)[source]

    • torch.nn.functional.adaptive_max_pool1d(input, output_size, return_indices=False)[source]

    • torch.nn.functional.adaptive_max_pool2d(input, output_size, return_indices=False)[source]

    • torch.nn.functional.adaptive_max_pool3d(input, output_size, return_indices=False)[source]

    • torch.nn.functional.adaptive_avg_pool1d(input, output_size)[source]

    • torch.nn.functional.adaptive_avg_pool2d(input, output_size)[source]

    • torch.nn.functional.adaptive_avg_pool3d(input, output_size)[source]

    Non-linear activation functions

    • torch.nn.functional.threshold(input, threshold, value, inplace=False)[source]

    • torch.nn.functional.threshold_(input, threshold, value) → Variable

    • torch.nn.functional.relu(input, inplace=False) → Variable[source]

    • torch.nn.functional.relu_(input)[source]

    • torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Variable[source]

    • torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Variable

    • torch.nn.functional.relu6(input, inplace=False) → Variable[source]

    • torch.nn.functional.elu(input, alpha=1.0, inplace=False)[source]

    • torch.nn.functional.elu_(input, alpha=1.) → Variable

    • torch.nn.functional.selu(input, inplace=False) → Variable[source]

    • torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Variable[source]

    • torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Variable

    • torch.nn.functional.prelu(input, weight) → Variable

    • torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Variable[source]

    • torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Variable

    • torch.nn.functional.glu(input, dim=-1) → Variable

    • torch.nn.functional.logsigmoid(input) → Variable

    • torch.nn.functional.hardshrink(input, lambd=0.5) → Variable

    • torch.nn.functional.tanhshrink(input) → Variable[source]

    • torch.nn.functional.softsign(input) → Variable[source]

    • torch.nn.functional.softplus(input, beta=1, threshold=20) → Variable

    • torch.nn.functional.softmin(input, dim=None, _stacklevel=3)[source]

    • torch.nn.functional.softmax(input, dim=None, _stacklevel=3)[source]

    • torch.nn.functional.softshrink(input, lambd=0.5) → Variable[source]

    • torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3)[source]

    • torch.nn.functional.tanh(input) → Variable[source]

    • torch.nn.functional.sigmoid(input) → Variable[source]
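The functional forms mirror the module classes without holding state, which suits activations that have no parameters. A sketch:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 3.0])
y = F.relu(x)            # same math as nn.ReLU, no module object needed
p = F.softmax(x, dim=0)  # passing dim explicitly avoids the implicit-dim warning
```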

    Normalization functions

    • torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)[source]

    • torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12)[source]

    Linear functions

    • torch.nn.functional.linear(input, weight, bias=None)[source]

    Dropout functions

    • torch.nn.functional.dropout(input, p=0.5, training=False, inplace=False)[source]

    • torch.nn.functional.alpha_dropout(input, p=0.5, training=False)[source]

    • torch.nn.functional.dropout2d(input, p=0.5, training=False, inplace=False)[source]

    • torch.nn.functional.dropout3d(input, p=0.5, training=False, inplace=False)[source]

    Distance functions

    • torch.nn.functional.pairwise_distance(x1, x2, p=2, eps=1e-06)[source]

    • torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-08)[source]

    Loss functions

    • torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=True)[source]

    • torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=True, eps=1e-08)[source]

    • torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=True) → Variable[source]

    • torch.nn.functional.cross_entropy(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]

    • torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=True) → Variable[source]

    • torch.nn.functional.kl_div(input, target, size_average=True) → Variable

    • torch.nn.functional.l1_loss(input, target, size_average=True, reduce=True) → Variable[source]

    • torch.nn.functional.mse_loss(input, target, size_average=True, reduce=True) → Variable[source]

    • torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=True) → Variable[source]

    • torch.nn.functional.multilabel_margin_loss(input, target, size_average=True) → Variable

    • torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=True) → Variable[source]

    • torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=True) → Variable[source]

    • torch.nn.functional.nll_loss(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]

    • torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=True)[source]

    • torch.nn.functional.smooth_l1_loss(input, target, size_average=True) → Variable

    • torch.nn.functional.soft_margin_loss(input, target, size_average=True) → Variable

    • torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False)[source]

    Vision functions

    • torch.nn.functional.pixel_shuffle(input, upscale_factor)[source]

    • torch.nn.functional.pad(input, pad, mode='constant', value=0)[source]

    • torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest')[source]

    • torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)[source]

    • torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)[source]

    • torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')[source]

    • torch.nn.functional.affine_grid(theta, size)[source]

    torch.nn.init

    • torch.nn.init.calculate_gain(nonlinearity, param=None)[source]

    • torch.nn.init.uniform(tensor, a=0, b=1)[source]

    • torch.nn.init.normal(tensor, mean=0, std=1)[source]

    • torch.nn.init.constant(tensor, val)[source]

    • torch.nn.init.eye(tensor)[source]

    • torch.nn.init.dirac(tensor)[source]

    • torch.nn.init.xavier_uniform(tensor, gain=1)[source]

    • torch.nn.init.xavier_normal(tensor, gain=1)[source]

    • torch.nn.init.kaiming_uniform(tensor, a=0, mode='fan_in')[source]

    • torch.nn.init.kaiming_normal(tensor, a=0, mode='fan_in')[source]

    • torch.nn.init.orthogonal(tensor, gain=1)[source]

    • torch.nn.init.sparse(tensor, sparsity, std=0.01)[source]
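The init functions fill a tensor in place. The names listed above are from this PyTorch generation; current releases rename the in-place variants with a trailing underscore (e.g. `xavier_uniform_`), which the sketch below uses:

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.xavier_uniform_(w)             # uniform values scaled by fan-in/fan-out
gain = nn.init.calculate_gain('relu')  # recommended gain, sqrt(2) for ReLU
```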
