Export 38 missing torch_* namespace functions #9
Merged
TroyHernandez merged 7 commits into main on Apr 11, 2026
Conversation
Adds top-level exported wrappers for 34 ops that had C++ wrappers and tensor methods but no namespace-level torch_*() functions: argmax, argmin, bmm, ceil, clone, conv2d, conv_transpose1d, eq, flatten, floor_divide, gather, ge, gt, index_select, le, log10, log2, logical_not, lt, max, min, narrow, ne, neg, remainder, repeat_interleave, reshape, round, sign, squeeze, t, transpose, trunc, unsqueeze.

Adds torch_* aliases for 4 nnf_* ops: softmax, log_softmax, layer_norm, batch_norm.

Adds reduction constants: torch_reduction_none (0L), torch_reduction_mean (1L), torch_reduction_sum (2L).

torch API compatibility: 79.9% -> 89.7% (339/378 torch_* functions). The remaining 39 are dtype aliases, utility functions (save/load/device), and format constants, not tensor ops.
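As an illustration (not the actual source), here is the rough shape such a namespace wrapper takes; the C_torch_argmax entry point is an assumption based on the C_torch_* naming mentioned in later commits, and the constants are the ones listed above:

```r
# Hedged sketch of one namespace-level wrapper; C_torch_argmax is an
# assumed entry-point name, and argument names mirror torch's API.
torch_argmax <- function(self, dim = NULL, keepdim = FALSE) {
  .Call(C_torch_argmax, self, dim, keepdim)
}

# The reduction constants are plain integers:
torch_reduction_none <- 0L
torch_reduction_mean <- 1L
torch_reduction_sum  <- 2L
```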
Adds remaining 39 torch_* functions:
- Dtype aliases: half, short, cfloat, cdouble, cfloat32/64/128, qint8, qint32, quint8
- Format/layout/quantization constants: contiguous_format, channels_last_format, preserve_format, strided, sparse_coo, per_tensor_affine/symmetric, per_channel_affine/symmetric
- Ops: conj, index, index_put, lu (delegates to linalg_lu)
- Dtype queries: is_complex, is_floating_point
- RNG: manual_seed (new C wrapper via at::manual_seed)
- Dtype info: finfo, iinfo (pure R)
- Utility: device, generator, is_installed, install_path, get/set_default_dtype, get/set_rng_state (stubs where needed)
- Serialization: save, load, serialize (exported from serialize.R)

Fixes 5 argument-name mismatches (self vs input) in zeros_like, randn_like, randint_like, allclose, result_type.

torch_* compatibility: 100% (378/378), 0 test failures.
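For the "pure R" finfo note, a minimal sketch of what the double-precision branch might look like, built on .Machine (the field names mirror torch's finfo; the real function must dispatch on dtype):

```r
# Hypothetical sketch: torch_finfo() for double precision only, using
# R's .Machine constants. Other dtypes would need their own limits.
torch_finfo <- function(dtype = NULL) {
  list(
    bits = 64L,
    eps  = .Machine$double.eps,
    max  = .Machine$double.xmax,
    min  = -.Machine$double.xmax,
    tiny = .Machine$double.xmin
  )
}
```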
Adds 192 new exports:
- 72 nnf_* aliases delegating to existing C_torch_* wrappers
- 20 linalg_* aliases (without the torch_ prefix, matching the torch R package)
- 21 nn_* activation modules (celu, hardsigmoid, hardswish, leaky_relu, log_softmax, prelu, relu6, selu, softmax, softsign, tanh, etc.)
- 18 nn_* loss modules (bce, cross_entropy, mse, nll, huber, etc.)
- 15 nn_* pooling modules (avg/max/adaptive pool 1d/2d/3d)
- 8 nn_init_* weight initialization functions
- 11 type check functions (is_nn_module, is_torch_dtype, etc.)
- 11 context managers (local_no_grad, with_enable_grad, etc.)
- 8 backend query stubs
- nn_group_norm, nn_batch_norm3d, nn_flatten, nn_dropout2d/3d
- torch_manual_seed (new C wrapper)
- Misc: clone_module, slc, install_torch alias, finfo, iinfo

Coverage: 56.4% -> 81.9% (617/753).
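Two of these export shapes, sketched with assumed names (C_torch_relu6 follows the commit's C_torch_* convention; C_set_grad_enabled is hypothetical):

```r
# Delegating nnf_* alias: a thin pass-through to the C wrapper.
nnf_relu6 <- function(input) {
  .Call(C_torch_relu6, input)
}

# with_*-style context manager: flip grad mode, restore on exit.
# C_set_grad_enabled (returning the previous mode) is hypothetical.
with_enable_grad <- function(code) {
  old <- .Call(C_set_grad_enabled, TRUE)
  on.exit(.Call(C_set_grad_enabled, old))
  force(code)
}
```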
Adds nn_conv3d, nn_conv_transpose2d/3d, nn_threshold, nn_unflatten, nn_pairwise_distance, nn_softmax2d.

Adds nnf_cross_entropy, nnf_logsigmoid, nnf_softmin, nnf_softsign, nnf_tanhshrink, nnf_dropout2d, nnf_dropout3d.

Adds nn_init_zeros_, nn_init_xavier_uniform_, nn_init_xavier_normal_.

Adds lr_step, lr_lambda, lr_multiplicative, lr_reduce_on_plateau, lr_cosine_annealing, lr_one_cycle, lr_scheduler.

Adds cuda_synchronize, cuda_current_device, cuda_get_device_capability, cuda_runtime_version, cuda_get/set_rng_state.

Adds autograd_set_grad_mode, autograd_backward, load_state_dict.

Coverage: 81.9% -> 86.3% (650/753). The remaining 103 are JIT (12, removed), distributions (7), data-loading infrastructure (7), complex nn modules needing new C++ (multihead_attention, GRU, RNN, transformer, embedding_bag), contrib extensions, ignite optimizers, and torch-specific internal plumbing.
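As a sketch of what one of the pure-R schedulers can look like (illustrative only; it assumes the environment-based optimizer shape introduced in a later commit, where param_groups lives in an environment):

```r
# Hypothetical lr_step-style scheduler: every step_size epochs,
# multiply each param group's learning rate by gamma.
lr_step_sketch <- function(optimizer, step_size, gamma = 0.1) {
  epoch <- 0L
  list(step = function() {
    epoch <<- epoch + 1L
    if (epoch %% step_size == 0L) {
      for (i in seq_along(optimizer$param_groups)) {
        optimizer$param_groups[[i]]$lr <-
          optimizer$param_groups[[i]]$lr * gamma
      }
    }
  })
}
```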
New codegen: tools/codegen_compat.R reads torch R package source
(~/torch/R/nn-*.R, nnf-*.R) via R's parse() + AST extraction, and
generates compatible tinytorch wrappers in R/zzz-compat-ops.R.
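The extraction step might look roughly like this (a sketch, assuming the goal is the names of top-level `foo <- nn_module(...)` assignments; the file path is illustrative):

```r
# Parse a torch source file and keep top-level nn_module() assignments.
exprs <- as.list(parse("~/torch/R/nn-activation.R"))
is_module_def <- function(e) {
  is.call(e) && identical(e[[1L]], as.name("<-")) &&
    is.call(e[[3L]]) && identical(e[[3L]][[1L]], as.name("nn_module"))
}
defs <- Filter(is_module_def, exprs)
mod_names <- vapply(defs, function(e) as.character(e[[2L]]), character(1))
```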
Handles the generator->instance API difference: torch's nn_module()
returns generators, tinytorch returns instances. The codegen wraps
each module in a function that defines nn_module() and immediately
calls the constructor.
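In other words, each generated wrapper has roughly this shape (a sketch with an invented module; the real generated code lives in R/zzz-compat-ops.R and the exact mechanics depend on tinytorch's nn_module() semantics):

```r
# Sketch: the torch-style nn_module() definition sits inside a plain
# function that immediately calls the constructor, so callers get an
# instance rather than a generator. nn_example is invented.
nn_example <- function(alpha = 1) {
  mod <- nn_module(
    "nn_example",
    initialize = function(alpha = 1) self$alpha <- alpha,
    forward = function(input) input * self$alpha
  )
  mod(alpha = alpha)
}
```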
Generates 38 new wrappers; skips 171 already-implemented ops.
Skips 3 modules with complex inheritance (nn_conv_transpose_nd,
nn_gru, nn_rnn) that tinytorch hand-writes.
Also removes hand-written wrappers from zzz-gen-ops.R that the
codegen now generates (avoids duplicates).
Re-runnable on torch version bumps:
r -e 'source("tools/codegen_compat.R"); codegen_compat()'
torch:: coverage: 86.3% -> 90.2% (679/753).
…9.2% relevant)

Optimizers: RMSprop, Adagrad, Adadelta, ASGD, Rprop implemented in pure R with tensor ops (no C++ needed). L-BFGS stubbed. Pure-R optimizer base (make_optimizer) with step/zero_grad dispatch.

Distributions: Normal, Bernoulli, Categorical, Poisson, Gamma, Chi2, Multivariate Normal, MixtureSameFamily. Environment-based (no R6), each with sample(), log_prob(), and entropy() methods; see the sketch after this commit message.

Data loading: dataset(), tensor_dataset(), dataset_subset(), dataloader(), dataloader_make_iter(), dataloader_next(), sampler(), enumerate(), as_iterator(), loop(), yield().

nn modules: nn_gru, nn_rnn (with C-backed forward via C_torch_gru / C_torch_rnn_tanh/relu), nn_module_dict, nn_utils_clip_grad_norm_, nn_utils_clip_grad_value_, nn_utils_weight_norm (stub), RNN packing stubs.

All adapted from the torch R package (MIT, Daniel Falbel). DESCRIPTION updated to credit Daniel for the adapted designs.

Comprehensive compatibility test: 141 tests, 0 failures. Coverage: 718/724 relevant exports (99.2%). 6 unintentional gaps: nn_init_orthogonal_, nn_init_dirac_, nn_init_sparse_, autograd_function, autograd_grad, nn_adaptive_log_softmax_with_loss.
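The distribution shape, sketched on plain R doubles (the real versions operate on tensors; distr_normal_sketch is an invented name):

```r
# Environment-based (no-R6) Normal distribution, methods as closures.
distr_normal_sketch <- function(loc, scale) {
  self <- new.env()
  self$loc   <- loc
  self$scale <- scale
  self$sample   <- function(n = 1L) rnorm(n, mean = loc, sd = scale)
  self$log_prob <- function(value) dnorm(value, loc, scale, log = TRUE)
  self$entropy  <- function() 0.5 + 0.5 * log(2 * pi) + log(scale)
  self
}

d <- distr_normal_sketch(0, 1)
d$log_prob(0)  # -0.9189385, i.e. -log(sqrt(2 * pi))
```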
…n/grad, adaptive_log_softmax

- nn_init_orthogonal_: QR-based orthogonal initialization (see the sketch below)
- nn_init_dirac_: identity-like initialization for conv tensors
- nn_init_sparse_: column-sparse initialization
- autograd_function: struct for custom forward/backward (stub-level)
- autograd_grad: stub (needs a C++ autograd engine)
- nn_adaptive_log_softmax_with_loss: stub

torch:: relevant coverage: 724/724 (100.0%). 141 compatibility tests, 0 failures.
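For the QR-based initializer, a sketch on a plain matrix (rows >= cols case; the in-place tensor version differs):

```r
# Draw a Gaussian matrix, take its QR factorization, and sign-correct
# Q by the diagonal of R so the factorization is unique.
orthogonal_init_sketch <- function(rows, cols, gain = 1) {
  a  <- matrix(rnorm(rows * cols), nrow = rows, ncol = cols)
  qd <- qr(a)
  q  <- qr.Q(qd)
  d  <- sign(diag(qr.R(qd)))
  gain * sweep(q, 2L, d, `*`)
}

w <- orthogonal_init_sketch(4, 3)
round(crossprod(w), 10)  # ~ identity: columns are orthonormal
```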
Summary
- torch_*() namespace functions (bmm, max, min, reshape, softmax, flatten, conv2d, etc.)
- torch_* aliases for 4 nnf_* ops (softmax, log_softmax, layer_norm, batch_norm)

Test plan