Question on the project! #1
Thank you so much for your feedback!
I'm very interested in your project! It's well done! But, if it's okay with you, could you send me the repo with multiple dogs in the environment? I want to study how to implement this feature in my project and then apply the imitation learning technique!
Thank you very much for your help and support!
Best regards,
Antonio Lomuscio
… On 23 Jul 2021, at 04:54, Tung Duy Nguyen ***@***.***> wrote:
Dear Antonio,
Very sorry that I have not had time to write a proper README file for the project yet.
In this project, we trained a sheepdog to drive a flock of sheep toward a target position. Multiple environments are under consideration.
In the shepherding task there are multiple models, each with different types of behaviours. We used two basic behaviours, called collecting and driving, from the Strömbom model [1]. However, the exact implementation follows El-Fiqi et al. [2], one of whom is my colleague. I suggest you look at those two papers.
In this case, we assume the collecting behaviour is the default, while the driving behaviour is learned with deep reinforcement learning.
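(For orientation, below is a minimal sketch of the Strömbom-style switch between the two behaviours described in [1]. It only illustrates the heuristic from the paper, not the code in this repository; the function name `steering_point` and the value of `r_a` are assumptions.)

```python
import numpy as np

def steering_point(sheep_pos, target, r_a=2.0):
    """Choose a Strombom-style sub-goal for the sheepdog (sketch of [1]).

    sheep_pos : (N, 2) array of sheep positions
    target    : (2,) goal position for the flock
    r_a       : sheep-sheep interaction distance (illustrative value)
    """
    n = len(sheep_pos)
    gcm = sheep_pos.mean(axis=0)                      # global centre of mass of the flock
    dists = np.linalg.norm(sheep_pos - gcm, axis=1)

    if dists.max() <= r_a * n ** (2.0 / 3.0):
        # Flock is cohesive -> driving: stand behind the GCM on the target-GCM line
        direction = (gcm - target) / np.linalg.norm(gcm - target)
        return gcm + r_a * np.sqrt(n) * direction, "driving"
    else:
        # Flock is dispersed -> collecting: go behind the sheep furthest from the GCM
        far = sheep_pos[dists.argmax()]
        direction = (far - gcm) / np.linalg.norm(far - gcm)
        return far + r_a * direction, "collecting"
```

In the setup described in this project, the collecting branch stays rule-based, while the driving point is what the deep reinforcement learning agent learns to produce.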
The main files of the deep reinforcement learning code are:
'main_train.py' for initial training of agents in specific environments.
'main_test.py' for testing the performance.
'main_retrain.py' for retraining the network (the baseline method) when the pretrained networks are transferred to other environments.
'main_retrain_with_rules.py': similar to 'main_retrain.py' but with our proposed algorithm (the algorithm has not been uploaded yet, so this code file does not work at the moment).
The obstacle format is an N×M matrix. The rows are the obstacles and the columns are the features of the obstacles.
The 3 columns are: Col1 = x-coordinate of the centre of the obstacle; Col2 = y-coordinate of the centre of the obstacle; and Col3 = the side length of the square obstacle.
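For example, a two-obstacle layout could be written as below (a sketch only; the variable name `Obstacles` and the numbers are made up, not taken from the repository):

```python
import numpy as np

# Each row is one obstacle: [x_centre, y_centre, side_length]
Obstacles = np.array([
    [30.0, 45.0, 10.0],   # square obstacle centred at (30, 45) with side 10
    [70.0, 20.0,  5.0],   # square obstacle centred at (70, 20) with side 5
])
```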
The original code was written for generalization, so it can be used for multiple dogs if needed. Unfortunately, in this version I modified some parts to fit the 1-dog problem and reduce the computational complexity. However, if you can provide me with some screenshots of the errors, I will give you some hints about what is happening and how to make the code executable for multiple dogs.
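(As a generic illustration of the shape problem mentioned in the question, not the repository's actual observation code: if the network input is built by concatenating one feature block per dog, the input length grows with the number of dogs, so a network sized for one dog will reject the longer vector. `FEATURES_PER_DOG` and `build_observation` below are hypothetical.)

```python
import numpy as np

FEATURES_PER_DOG = 6  # hypothetical per-dog feature count (e.g. dog position, flock GCM, target)

def build_observation(dog_states):
    """Concatenate per-dog feature vectors into one flat network input."""
    return np.concatenate([np.asarray(s, dtype=np.float32) for s in dog_states])

one_dog  = build_observation([np.zeros(FEATURES_PER_DOG)])      # shape (6,)
two_dogs = build_observation([np.zeros(FEATURES_PER_DOG)] * 2)  # shape (12,)
# A network whose input layer was sized for shape (6,) will raise a shape error on (12,).
```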
As the paper is still under review, I have not uploaded our proposed algorithm yet. The code there is for the simulation and a basic deep reinforcement learning algorithm.
[1] Strömbom, D., Mann, R.P., Wilson, A.M., Hailes, S., Morton, A.J., Sumpter, D.J. and King, A.J., 2014. Solving the shepherding problem: heuristics for herding autonomous, interacting agents. Journal of the Royal Society Interface, 11(100), p.20140719.
[2] El-Fiqi, H., Campbell, B., Elsayed, S., Perry, A., Singh, H.K., Hunjet, R. and Abbass, H.A., 2020. The Limits of Reactive Shepherding Approaches for Swarm Guidance. IEEE Access, 8, pp.214658-214671.
Hope this helps!
Cheers
Dear Tung,
Thanks for your help! I'm looking at your code now!
Yes, of course I'll cite you, the paper from the README file, and the copyright!
Thank you very much for your time and your help!
Best regards,
Antonio Lomuscio
… On 24 Jul 2021, at 01:10, Tung Duy Nguyen ***@***.***> wrote:
Dear Antonio,
Please kindly find the link to the multiple shepherding model repo below:
https://github.com/tudngn/multi-shepherd
Please note that:
This is the original code for the shepherding model in our project; thus, it is a reactive model and no learning is included.
Sky shepherds are considered, so no collisions for the sheepdogs are modelled. However, you can add more interactions if you like.
If you use my code asset in any of your projects, please cite the original paper that provides the model (El-Fiqi et al.; the source is given in the README file) and give credit to my copyrighted code asset. Thank you.
Cheers.
Hi tudngn!
I've seen your project on GitHub and I'm very interested in understanding what you have done, because I'm working on a shepherding environment with an imitation learning technique, with obstacles and more than one dog! But I'm having a lot of problems, and today I looked at your code and noticed that you have implemented obstacles and more than one shepherd (NumberOfShepherds).
Can you help me understand the core of your code and how you implemented all of this? How is the obstacle format defined? And why, when I change NumberOfShepherds, do I get a shape error?
Thanks a lot for your help!
See you!