An issue to track some baseline functionality:
Activations
celu
elu
exp
gelu
hard_tanh
hard_sigmoid
hard_silu / hard_swish
leaky_relu
log_sigmoid
relu
relu6
selu
sigmoid
silu
softmax
softplus
softsign
tanh
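Most of the activations above are thin wrappers over Nx. As a rough sketch of the intended shape (not a final API, just an illustration of how little is needed), two of the entries could look like:

```elixir
defmodule ActivationsSketch do
  import Nx.Defn

  # relu(x) = max(x, 0), elementwise.
  defn relu(x), do: Nx.max(x, 0)

  # sigmoid(x) = 1 / (1 + exp(-x)); a real implementation would care
  # more about numerical stability for large |x|.
  defn sigmoid(x), do: 1 / (1 + Nx.exp(-x))
end
```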
Initializers
glorot_uniform
glorot_normal
he_normal
he_uniform
lecun_uniform
lecun_normal
normal
ones
orthogonal - requires Support vmap (nx#174)
uniform
zeros
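For reference, a hedged sketch of glorot_uniform: the fan-in/fan-out limit is the standard Glorot formulation, but the use of Nx.Random and the option names here are assumptions, not a settled API.

```elixir
defmodule InitializersSketch do
  # Glorot/Xavier uniform: U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).
  def glorot_uniform(key, {fan_in, fan_out} = shape) do
    limit = :math.sqrt(6 / (fan_in + fan_out))
    # Nx.Random.uniform returns {sample, new_key}
    Nx.Random.uniform(key, -limit, limit, shape: shape)
  end
end

# {w, _key} = InitializersSketch.glorot_uniform(Nx.Random.key(42), {128, 64})
```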
Loss Functions
binary_crossentropy
categorical_crossentropy
categorical_hinge
cosine_similarity - requires Support vmap (nx#174)
ctc
hinge
kl_divergence
log_cosh
margin_ranking
mean_absolute_error
mean_squared_error
poisson
soft_margin
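As with the activations, most of these reduce to a few Nx calls. A minimal sketch of two of them (the epsilon value for clipping is arbitrary here, not a proposed default):

```elixir
defmodule LossesSketch do
  import Nx.Defn

  # Mean of elementwise squared differences.
  defn mean_squared_error(y_true, y_pred) do
    diff = y_true - y_pred
    Nx.mean(diff * diff)
  end

  # Clip predictions away from 0/1 so the logs stay finite.
  defn binary_crossentropy(y_true, y_pred) do
    eps = 1.0e-7
    p = Nx.clip(y_pred, eps, 1 - eps)
    -Nx.mean(y_true * Nx.log(p) + (1 - y_true) * Nx.log(1 - p))
  end
end
```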
Metrics
accuracy
mean_squared_error - requires defndelegate
mean_absolute_error - requires defndelegate
precision
recall
sensitivity
specificity
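For illustration, accuracy for one-hot targets might look like the sketch below; the rank-2 {batch, num_classes} shape assumption and the lack of axis options are simplifications, not the intended interface.

```elixir
defmodule MetricsSketch do
  import Nx.Defn

  # Fraction of rows where the predicted class matches the target class.
  # Assumes one-hot y_true and {batch, num_classes} inputs.
  defn accuracy(y_true, y_pred) do
    pred_classes = Nx.argmax(y_pred, axis: 1)
    true_classes = Nx.argmax(y_true, axis: 1)
    Nx.mean(Nx.equal(pred_classes, true_classes))
  end
end
```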
Optimizers
Optax style transformations:
scale
scale_by_adam
scale_by_rss
scale_by_belief
scale_by_rms
trace
clip
clip_by_global_norm
centralize
scale_by_trust_ratio
scale_by_schedule
scale_by_radam
scale_by_stddev
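The Optax pattern is just a pair of functions, init/1 and update/3, that can be chained. A minimal sketch under the assumption that gradients and parameters are maps of tensors; the names below are placeholders, not a committed API:

```elixir
defmodule UpdatesSketch do
  # An Optax-style transformation is {init, update}:
  #   init.(params)                 -> state
  #   update.(grads, state, params) -> {new_grads, new_state}

  # scale: multiply every gradient leaf by a constant, e.g. -learning_rate.
  def scale(factor) do
    init = fn _params -> %{} end

    update = fn grads, state, _params ->
      {Map.new(grads, fn {k, g} -> {k, Nx.multiply(g, factor)} end), state}
    end

    {init, update}
  end

  # chain: run transformations in order, threading gradients through each one
  # and keeping a list of per-transformation states.
  def chain(transforms) do
    init = fn params ->
      Enum.map(transforms, fn {init, _update} -> init.(params) end)
    end

    update = fn grads, states, params ->
      {new_states, new_grads} =
        transforms
        |> Enum.zip(states)
        |> Enum.map_reduce(grads, fn {{_init, update}, state}, g ->
          {new_g, new_state} = update.(g, state, params)
          {new_state, new_g}
        end)

      {new_grads, new_states}
    end

    {init, update}
  end
end

# Plain SGD with learning rate 0.01 is then UpdatesSketch.chain([UpdatesSketch.scale(-0.01)]).
```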
Schedules
polynomial_schedule
exponential_decay_schedule
cosine_decay_schedule
constant_schedule
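Schedules can simply be closures from step to value. A sketch of exponential_decay_schedule; the argument names mirror Optax but are assumptions here:

```elixir
defmodule SchedulesSketch do
  # lr(step) = init_value * decay_rate^(step / transition_steps)
  def exponential_decay_schedule(init_value, decay_rate, transition_steps) do
    fn step -> init_value * :math.pow(decay_rate, step / transition_steps) end
  end
end

# lr = SchedulesSketch.exponential_decay_schedule(1.0e-2, 0.9, 1000)
# lr.(0)    #=> 0.01
# lr.(1000) #=> ~0.009
```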
Layers
For now, just functional implementations resembling torch.nn.functional or tf.nn:
Linear Layers
dense
bilinear - requires Make Nx.dot/4 aware of batch dimensions (nx#182)
Convolutional Layers
conv
conv_transpose
depthwise_conv
separable_conv2d
separable_conv3d
Pooling Layers
avg_pool
max_pool
lp_pool
adaptive_avg_pool
adaptive_max_pool
adaptive_lp_pool
global_avg_pool
global_max_pool
global_lp_pool
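Several of the pooling functions fall out of Nx's window aggregates. A sketch of avg_pool; the {batch, channels, height, width} layout and the tuple arguments are assumptions for illustration:

```elixir
defmodule PoolingSketch do
  # 2-D average pooling over {batch, channels, height, width} input,
  # leaving the batch and channel dimensions untouched.
  def avg_pool(x, {kh, kw} = _kernel_size, {sh, sw} = _strides) do
    Nx.window_mean(x, {1, 1, kh, kw}, strides: [1, 1, sh, sw], padding: :valid)
  end
end
```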
Normalization Layers
batch_norm
group_norm
instance_norm
layer_norm
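layer_norm is a good example of how little is needed on top of Nx. A sketch normalizing over the last axis; the epsilon default and option handling are placeholders:

```elixir
defmodule NormalizationSketch do
  import Nx.Defn

  # Normalize over the last axis, then apply learned scale (gamma) and shift (beta).
  defn layer_norm(x, gamma, beta, opts \\ []) do
    opts = keyword!(opts, eps: 1.0e-6)
    mean = Nx.mean(x, axes: [-1], keep_axes: true)
    centered = x - mean
    var = Nx.mean(centered * centered, axes: [-1], keep_axes: true)
    centered * Nx.rsqrt(var + opts[:eps]) * gamma + beta
  end
end
```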
Dropout Layers
dropout
alpha_dropout
feature_alpha_dropout
spatial_dropout
Attention Layers
dot_product_attention - requires Make Nx.dot/4 aware of batch dimensions (nx#182)
additive_attention - requires repeat / gather on Nx
Visual Layers
resize
We can also drop the dimensional suffixes in favor of generic implementations.
Closing in favor of smaller issues