Functionality Roadmap #1

Closed
90 of 93 tasks
seanmor5 opened this issue Jan 27, 2021 · 1 comment
Labels: kind:feature (New feature or request), note:upstream (The issue must be tackled upstream)

seanmor5 (Contributor) commented Jan 27, 2021

An issue to track some baseline functionality:

Activations

  • celu
  • elu
  • exp
  • gelu
  • hard_tanh
  • hard_sigmoid
  • hard_silu/hard_swish
  • leaky_relu
  • log_sigmoid
  • relu
  • relu6
  • selu
  • sigmoid
  • silu
  • softmax
  • softplus
  • softsign
  • tanh
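
Most of these are thin element-wise wrappers over Nx ops. A minimal sketch of a few of them with Nx.Defn (the `Sketch.Activations` module name is just for illustration, not the final API):

```elixir
defmodule Sketch.Activations do
  import Nx.Defn

  # relu: element-wise max(x, 0).
  defn relu(x), do: Nx.max(x, 0)

  # sigmoid: 1 / (1 + exp(-x)).
  defn sigmoid(x), do: 1 / (1 + Nx.exp(-x))

  # silu / swish: x * sigmoid(x).
  defn silu(x), do: x * sigmoid(x)

  # softmax over the last axis, shifted by the row max for numerical stability.
  defn softmax(x) do
    shifted = x - Nx.reduce_max(x, axes: [-1], keep_axes: true)
    Nx.exp(shifted) / Nx.sum(Nx.exp(shifted), axes: [-1], keep_axes: true)
  end
end
```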

Initializers

  • glorot_uniform
  • glorot_normal
  • he_normal
  • he_uniform
  • lecun_uniform
  • lecun_normal
  • normal
  • ones
  • orthogonal - requires Support vmap nx#174
  • uniform
  • zeros
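
Roughly, each initializer is a function from a shape (plus fan-in/fan-out where relevant) to a tensor. A sketch of glorot_uniform, assuming the early `Nx.random_uniform/3` API (newer Nx moved randomness to `Nx.Random`):

```elixir
defmodule Sketch.Initializers do
  # glorot_uniform: sample from U(-limit, limit) with
  # limit = sqrt(6 / (fan_in + fan_out)).
  def glorot_uniform(shape, fan_in, fan_out) do
    limit = :math.sqrt(6.0 / (fan_in + fan_out))
    # Assumes the Nx.random_uniform/3 function from early Nx releases.
    Nx.random_uniform(shape, -limit, limit)
  end
end
```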

Loss Functions

  • binary_crossentropy
  • categorical_crossentropy
  • categorical_hinge
  • cosine_similarity - requires Support vmap nx#174
  • ctc
  • hinge
  • kl_divergence
  • log_cosh
  • margin_ranking
  • mean_absolute_error
  • mean_squared_error
  • poisson
  • soft_margin
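
Losses reduce a pair of (targets, predictions) tensors to a scalar. A sketch of two of the simpler ones:

```elixir
defmodule Sketch.Losses do
  import Nx.Defn

  # mean_squared_error: mean of the squared element-wise differences.
  defn mean_squared_error(y_true, y_pred) do
    diff = y_pred - y_true
    Nx.mean(diff * diff)
  end

  # binary_crossentropy: -mean(y * log(p) + (1 - y) * log(1 - p)),
  # with predictions clipped away from 0 and 1 for numerical safety.
  defn binary_crossentropy(y_true, y_pred) do
    p = Nx.clip(y_pred, 1.0e-7, 1.0 - 1.0e-7)
    -Nx.mean(y_true * Nx.log(p) + (1 - y_true) * Nx.log(1 - p))
  end
end
```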

Metrics

  • accuracy
  • mean_squared_error - requires defndelegate
  • mean_absolute_error - requires defndelegate
  • precision
  • recall
  • sensitivity
  • specificity
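
accuracy as a sketch, assuming one-hot targets with the class dimension on the last axis:

```elixir
defmodule Sketch.Metrics do
  import Nx.Defn

  # accuracy: fraction of rows where the predicted class (argmax of the
  # prediction) matches the true class (argmax of the one-hot target).
  defn accuracy(y_true, y_pred) do
    predicted = Nx.argmax(y_pred, axis: -1)
    target = Nx.argmax(y_true, axis: -1)
    Nx.mean(Nx.equal(predicted, target))
  end
end
```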

Optimizers

Optax style transformations:

  • scale
  • scale_by_adam
  • scale_by_rss
  • scale_by_belief
  • scale_by_rms
  • trace
  • clip
  • clip_by_global_norm
  • centralize
  • scale_by_trust_ratio
  • scale_by_schedule
  • scale_by_radam
  • scale_by_stddev
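
In the Optax formulation each transformation is a pair of functions: init/1 builds transformation state from the params, and update/3 maps raw gradients (plus state and params) to transformed updates and new state, so transformations compose by chaining. A sketch of scale under that contract, assuming updates arrive as a flat map of tensors:

```elixir
defmodule Sketch.Updates do
  # scale: multiply every incoming update by a constant step size.
  # Returns the Optax-style pair {init_fn, update_fn}.
  def scale(step_size) do
    init_fn = fn _params -> {} end

    update_fn = fn updates, state, _params ->
      scaled = Map.new(updates, fn {name, g} -> {name, Nx.multiply(g, step_size)} end)
      {scaled, state}
    end

    {init_fn, update_fn}
  end
end
```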

Schedules

  • polynomial_schedule
  • exponential_decay_schedule
  • cosine_decay_schedule
  • constant_schedule
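
Schedules are just functions from the step count to a learning rate. exponential_decay as a plain-function sketch:

```elixir
defmodule Sketch.Schedules do
  # exponential_decay_schedule:
  # lr(step) = init_value * decay_rate ^ (step / transition_steps)
  def exponential_decay(step, init_value \\ 1.0e-2, decay_rate \\ 0.95, transition_steps \\ 1000) do
    init_value * :math.pow(decay_rate, step / transition_steps)
  end
end
```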

Layers

For now, just functional implementations resembling torch.nn.functional or tf.nn:

Linear Layers

Convolutional Layers

  • conv
  • conv_transpose
  • depthwise_conv
  • separable_conv2d
  • separable_conv3d

Pooling Layers

  • avg_pool
  • max_pool
  • lp_pool
  • adaptive_avg_pool
  • adaptive_max_pool
  • adaptive_lp_pool
  • global_avg_pool
  • global_max_pool
  • global_lp_pool
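
The global_* variants need no window configuration at all; they just reduce over the spatial axes. A sketch for NCHW input:

```elixir
defmodule Sketch.Pooling do
  import Nx.Defn

  # global_avg_pool: average over the spatial axes of an NCHW tensor,
  # collapsing {batch, channels, h, w} to {batch, channels}.
  defn global_avg_pool(x), do: Nx.mean(x, axes: [2, 3])

  # global_max_pool: the same reduction with max instead of mean.
  defn global_max_pool(x), do: Nx.reduce_max(x, axes: [2, 3])
end
```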

Normalization Layers

  • batch_norm
  • group_norm
  • instance_norm
  • layer_norm
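
layer_norm as a functional sketch: normalize over the last axis, then apply a learned scale and shift:

```elixir
defmodule Sketch.Normalization do
  import Nx.Defn

  # layer_norm: zero-mean / unit-variance over the last axis,
  # followed by an element-wise affine transform (gamma, beta).
  defn layer_norm(x, gamma, beta) do
    mean = Nx.mean(x, axes: [-1], keep_axes: true)
    centered = x - mean
    variance = Nx.mean(centered * centered, axes: [-1], keep_axes: true)
    centered / Nx.sqrt(variance + 1.0e-5) * gamma + beta
  end
end
```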

Dropout Layers

  • dropout
  • alpha_dropout
  • feature_alpha_dropout
  • spatial_dropout
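
dropout as a plain-function sketch, with the randomness kept outside defn for simplicity (assumes the early `Nx.random_uniform` API):

```elixir
defmodule Sketch.Dropout do
  # dropout: zero each activation with probability `rate` and rescale the
  # survivors by 1 / (1 - rate) so the expected value is unchanged.
  def dropout(x, rate \\ 0.5) do
    mask = Nx.greater(Nx.random_uniform(Nx.shape(x)), rate)
    Nx.multiply(Nx.multiply(x, mask), 1.0 / (1.0 - rate))
  end
end
```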

Attention Layers

Visual Layers

  • resize

We can drop the dimensional suffixes in favor of generic implementations, too.

seanmor5 mentioned this issue Apr 1, 2021 (8 tasks)
seanmor5 added the kind:feature and note:upstream labels Apr 1, 2021
seanmor5 added this to the v0.1.0 milestone Sep 23, 2021
seanmor5 removed this from the v0.1.0 milestone Jun 7, 2022

seanmor5 (Contributor, Author) commented Jun 7, 2022

Closing in favor of smaller issues

seanmor5 closed this as completed Jun 7, 2022