In TF-v2.12 there are no methods called _set_hyper and _get_hyper #62021

Closed
maifeeulasad opened this issue Sep 30, 2023 · 4 comments
Labels
comp:keras · stale · stat:awaiting response · TF 2.12 · type:support

Comments

@maifeeulasad

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

No

Source

source

TensorFlow version

v2.12.0-rc1-12-g0db597d0d75 2.12.0

Custom code

Yes

OS platform and distribution

Kaggle kernel

Mobile device

No response

Python version

3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

In TF 2.12 the tf.keras.optimizers.Optimizer base class no longer has methods called _set_hyper and _get_hyper, so custom optimizers that call them fail with an AttributeError at construction.

Standalone code to reproduce the issue

import tensorflow as tf

class MyAdamOptimizer(tf.keras.optimizers.Optimizer):
    def __init__(self, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7, name="MyAdamOptimizer", **kwargs):
        super(MyAdamOptimizer, self).__init__(name, **kwargs)
        
        self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
        self._set_hyper("beta_1", beta_1)
        self._set_hyper("beta_2", beta_2)
        self._set_hyper("epsilon", epsilon)
        
    def _create_slots(self, var_list):
        for var in var_list:
            self.add_slot(var, "m")
            self.add_slot(var, "v")
            
    def _resource_apply_dense(self, grad, var):
        lr = self._get_hyper("learning_rate", var_dtype=var.dtype.base_dtype)
        beta_1 = self._get_hyper("beta_1", var_dtype=var.dtype.base_dtype)
        beta_2 = self._get_hyper("beta_2", var_dtype=var.dtype.base_dtype)
        epsilon = self._get_hyper("epsilon", var_dtype=var.dtype.base_dtype)
        
        m = self.get_slot(var, "m")
        v = self.get_slot(var, "v")
        
        m.assign_add((1 - beta_1) * (grad - m))
        v.assign_add((1 - beta_2) * (tf.square(grad) - v))
        
        m_hat = m / (1 - tf.math.pow(beta_1, tf.cast(self.iterations + 1, tf.float32)))
        v_hat = v / (1 - tf.math.pow(beta_2, tf.cast(self.iterations + 1, tf.float32)))
        
        var_update = lr * m_hat / (tf.sqrt(v_hat) + epsilon)
        
        var.assign_sub(var_update)
        
        return var_update
        
    def _resource_apply_sparse(self, grad, var):
        raise NotImplementedError("Sparse gradient updates are not supported.")

    
optimizer = MyAdamOptimizer(learning_rate=0.001)

Relevant log output

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[5], line 42
     38     def _resource_apply_sparse(self, grad, var):
     39         raise NotImplementedError("Sparse gradient updates are not supported.")
---> 42 optimizer = MyAdamOptimizer(learning_rate=0.001)

Cell In[5], line 7, in MyAdamOptimizer.__init__(self, learning_rate, beta_1, beta_2, epsilon, name, **kwargs)
      4 def __init__(self, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7, name="MyAdamOptimizer", **kwargs):
      5     super(MyAdamOptimizer, self).__init__(name, **kwargs)
----> 7     self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
      8     self._set_hyper("beta_1", beta_1)
      9     self._set_hyper("beta_2", beta_2)

AttributeError: 'MyAdamOptimizer' object has no attribute '_set_hyper'
@SuryanarayanaY
Collaborator

Hi @maifeeulasad,

You can use the legacy optimizers to keep using the mentioned hyperparameters; please refer to the attached gist. It seems this option has been dropped in the latest version of the optimizers.
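
For reference, a minimal sketch of that suggestion, assuming TF 2.12 and tf.keras.optimizers.legacy.Optimizer as the base class (the pre-2.11 optimizer base, which still defines _set_hyper and _get_hyper). It adapts the reproduction above rather than the referenced gist; only the base class and constructor change, the remaining methods can stay as written.

import tensorflow as tf

# Illustrative sketch: same custom optimizer, but built on the legacy base
# class, which still provides the _set_hyper/_get_hyper helpers in TF 2.12.
class MyAdamOptimizer(tf.keras.optimizers.legacy.Optimizer):
    def __init__(self, learning_rate=0.001, beta_1=0.9, beta_2=0.999,
                 epsilon=1e-7, name="MyAdamOptimizer", **kwargs):
        super().__init__(name, **kwargs)
        # These calls no longer raise AttributeError, because the legacy
        # (OptimizerV2) base class defines _set_hyper/_get_hyper.
        self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
        self._set_hyper("beta_1", beta_1)
        self._set_hyper("beta_2", beta_2)
        self._set_hyper("epsilon", epsilon)

    # _create_slots, _resource_apply_dense and _resource_apply_sparse from the
    # reproduction above work unchanged against the legacy base class.
    def _create_slots(self, var_list):
        for var in var_list:
            self.add_slot(var, "m")
            self.add_slot(var, "v")

optimizer = MyAdamOptimizer(learning_rate=0.001)  # constructs without error

The new-style tf.keras.optimizers.Optimizer in 2.12 takes a different route: hyperparameters are stored as plain attributes, slot variables are created in a build() override, and the update logic lives in update_step(), so there is no direct replacement for _set_hyper/_get_hyper.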

@SuryanarayanaY added the TF 2.12, comp:keras, type:support, and stat:awaiting response labels on Oct 6, 2023
@github-actions

This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.

The github-actions bot added the stale label on Oct 14, 2023
@github-actions

This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.

@google-ml-butler

Are you satisfied with the resolution of your issue?
