
Migrating Actor Critic Method example to Keras 3 (TF-Only) #1759

Merged (2 commits) on Feb 22, 2024

Conversation

@sitamgithub-MSIT (Contributor) commented on Feb 12, 2024

This PR migrates the Actor Critic Method example to Keras 3.0 [TF Only Backend].

For reference, here is the Colab notebook with the migrated example:
https://colab.research.google.com/drive/15pFWOBmb0bVPv7qmAFC2zjMzUiQeAbuT?usp=sharing

cc: @fchollet @divyashreepathihalli

The Git diff for the changed file follows:
diff --git a/examples/rl/actor_critic_cartpole.py b/examples/rl/actor_critic_cartpole.py
index 14787341..e34000d3 100644
--- a/examples/rl/actor_critic_cartpole.py
+++ b/examples/rl/actor_critic_cartpole.py
@@ -39,11 +39,15 @@ remains upright. The agent, therefore, must learn to keep the pole from falling
 ## Setup
 """
 
+import os
+
+os.environ["KERAS_BACKEND"] = "tensorflow"
 import gym
 import numpy as np
+import keras
+from keras import ops
+from keras import layers
 import tensorflow as tf
-from tensorflow import keras
-from tensorflow.keras import layers
 
 # Configuration parameters for the whole setup
 seed = 42
@@ -97,8 +101,8 @@ while True:  # Run until solved
             # env.render(); Adding this line would show the attempts
             # of the agent in a pop up window.
 
-            state = tf.convert_to_tensor(state)
-            state = tf.expand_dims(state, 0)
+            state = ops.convert_to_tensor(state)
+            state = ops.expand_dims(state, 0)
 
             # Predict action probabilities and estimated future rewards
             # from environment state
@@ -107,7 +111,7 @@ while True:  # Run until solved
 
             # Sample action from action probability distribution
             action = np.random.choice(num_actions, p=np.squeeze(action_probs))
-            action_probs_history.append(tf.math.log(action_probs[0, action]))
+            action_probs_history.append(ops.log(action_probs[0, action]))
 
             # Apply the sampled action in our environment
             state, reward, done, _ = env.step(action)
@@ -151,7 +155,7 @@ while True:  # Run until solved
             # The critic must be updated so that it predicts a better estimate of
             # the future rewards.
             critic_losses.append(
-                huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0))
+                huber_loss(ops.expand_dims(value, 0), ops.expand_dims(ret, 0))
             )
 
         # Backpropagation

@divyashreepathihalli (Contributor) left a comment

LGTM! please add the ipynb and .md file

@sitamgithub-MSIT (Contributor, Author) replied to the comment above:

All files have been added; it should be ready to merge. @fchollet

@fchollet (Member) left a comment

Nice, thank you!

@fchollet fchollet merged commit 77f512b into keras-team:master Feb 22, 2024
3 checks passed
sitamgithub-MSIT added a commit to sitamgithub-MSIT/keras-io that referenced this pull request May 30, 2024
…m#1759)

* migrated the example to tf only backend

* .md and .ipynb file added
@sitamgithub-MSIT sitamgithub-MSIT deleted the rl-actor-critic branch May 30, 2024 15:05