Enhancement Proposal: Prioritized Experience Replay and Dueling DQN Implementation in Reinforcement Learning for Market Hedging #155
Comments
Hi there! Thanks for opening this issue. We appreciate your contribution to this open-source project. We aim to respond or assign your issue as soon as possible.
Thank you for providing such a detailed description of your proposed approach. You can start working on it.
Thank you for considering it. I will surely work on it and enhance it along the lines of the following proposal.
Can you please give me access? I am unable to contribute. I have updated the code with the enhancements described, but I cannot clone the repo, push, or open a pull request. Please tell me what I can do, and please check from your side, because I cannot clone the repo onto my local machine. I have made the changes; please reply.
I have checked it; no other people are facing this issue, so it must be on your side. You can try reinstalling GitHub Desktop.
Ok, thanks. I will do that and open a pull request as soon as possible.
This is really strange. Let me find a solution.
Can you please check? Since cloning wasn't working, I made the changes directly on GitHub without cloning the repo locally, and I have opened the pull request from there. Please review my changes.
@Shubhanshu-356 You can clone the repo via VS Code.
Ok, thanks for the help, but I have already done that and opened the pull request. Please review my changes @Akshat111111
Have you checked my pull request? It is still open. Please check and review the changes.
@Shubhanshu-356 He might be busy due to his workload. I'm mentioning this because the mentor is also busy with his examinations. You can work on another issue in the meantime.
Ok, no problem. Thanks for the info.
Hey, thanks for that. You are absolutely correct. Apart from this, I have GSoC tasks as well.
I understand. Good luck with GSoC!
Thanks! |
Dear Akshat,
I hope this message finds you well. I would like to propose two enhancements to our trading agent project that I believe will significantly improve its performance and learning efficiency.
1. Prioritized Experience Replay:
Prioritizes informative transitions in the replay buffer based on their significance for learning.
Enhances training efficiency by sampling transitions with higher priorities more frequently, leading to faster convergence.
2. Dueling DQN:
Separates the estimation of state value and action advantages in the neural network architecture.
Improves learning efficiency by enabling the agent to learn the value of states and actions separately, resulting in more effective decision-making.
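To make point 1 concrete, here is a minimal sketch of a proportional prioritized replay buffer. This is a hypothetical illustration, not code from the project: the class name, `alpha`/`beta` defaults, and the flat-array priority storage (rather than a sum-tree) are my own assumptions. Transitions are sampled with probability proportional to priority^alpha, and importance-sampling weights correct the resulting bias.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch).

    New transitions receive the current maximum priority so each is
    replayed at least once; priorities are later updated from TD errors.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.eps = eps              # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition  # overwrite oldest slot
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights, normalized by the max for stability.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority = |TD error| + eps, so high-error transitions recur.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In training, the agent would call `sample`, scale each transition's loss by its importance weight, then feed the new TD errors back through `update_priorities`.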
Both enhancements involve modifications to the codebase, including adjustments to the memory structure, replay methods, and model architecture. I believe that implementing them will significantly elevate the performance and robustness of our trading agent. Please assign this issue to me so I can work on it.
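The dueling aggregation in point 2 can be sketched in a few lines. This is an illustrative NumPy version under my own naming (`dueling_q_values`, `DuelingHead`), not the project's actual model: a scalar state-value head and a per-action advantage head are combined as Q(s,a) = V(s) + (A(s,a) - mean_a' A(s,a')). Subtracting the mean advantage makes the V/A decomposition identifiable without changing which action is greedy.

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Combine V(s) and A(s, .) via the mean-subtracted dueling aggregation."""
    advantages = np.asarray(advantages, dtype=np.float64)
    return value + (advantages - advantages.mean(axis=-1, keepdims=True))

class DuelingHead:
    """Toy linear two-head output layer illustrating the dueling split."""

    def __init__(self, n_features, n_actions, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.w_value = rng.normal(scale=0.1, size=(n_features, 1))
        self.w_adv = rng.normal(scale=0.1, size=(n_features, n_actions))

    def q_values(self, features):
        v = features @ self.w_value   # state value, shape (batch, 1)
        a = features @ self.w_adv     # advantages, shape (batch, n_actions)
        return dueling_q_values(v, a)
```

For example, `dueling_q_values(1.0, [2.0, 4.0])` gives `[0.0, 2.0]`: the mean advantage (3.0) is subtracted before adding the state value, and the argmax action is unchanged.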
Thank You