Question about velocity adjustment #89
Hi, As far as I can see, there is no issue in simply multiplying the output velocities by a constant factor. Did you try changing seed values for your training? It might just be that with the reduced top speed the training samples are not varied enough, and the robot does not arrive at the goal point often or quickly enough. You can try increasing the Bellman discount factor so that it stays meaningful over more samples. Alternatively, you can try increasing the time delta, i.e., how long each step is propagated. This should have a similar effect.
I did a quick test run with the actions limited to the 0.34 range and the reward increased, and the model did not work for me either. Then I additionally increased the time delta to 0.3 and the model seems to work better. Keep in mind that I only did some short training runs and did not fully train the network, but that is something you could try as well. Training at low speeds will always make the training slow, though.
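For concreteness, a minimal sketch of the kind of setup being tested here: capping the executed velocities to the 0.34 range and propagating each action for 0.3 s. Only the 0.34 cap and the 0.3 time delta come from the discussion above; the names `TIME_DELTA`, `MAX_VEL`, and `scale_action` are illustrative, not necessarily the repository's.

```python
import numpy as np

# Illustrative constants; the repo keeps its time delta in velodyne_env.py,
# possibly with a different default value.
TIME_DELTA = 0.3   # propagate each action longer so slow motions still change the state noticeably
MAX_VEL = 0.34     # reduced top speed discussed in this thread

def scale_action(action):
    """Map raw actor output in [-1, 1] to the reduced velocity range."""
    lin = (action[0] + 1) / 2 * MAX_VEL                      # linear velocity in [0, 0.34]
    ang = float(np.clip(action[1], -1.0, 1.0)) * MAX_VEL     # angular velocity in [-0.34, 0.34]
    return [lin, ang]

print(scale_action([0.5, -0.8]))  # -> [0.255, -0.272]
```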
Genuinely appreciate your reply and help (really happy :D)! I tried different seed values and set TIME_DELTA to 0.3 the moment I finished reading your reply. I am currently training two models with different seed values on two PCs, and I'm so excited to see what results they produce tomorrow; wish me good results hahaha. You're so cool! It's my pleasure to have the chance to talk with you.
WOW moment! It works just fine haha, many thanks for your kind help. I've trained the model for nearly 3 days so far, and the metrics currently look like this: I was wondering why the loss keeps steadily climbing even though I still get many successful path plannings that reach the goal? (It's just so different from what I know about DL.) I think the simulation on the PC is about okay now, so I'm planning to deploy it on a real robot to see the physical effect; looking forward to seeing how it performs next.
Hi, Good to hear that it is working for you as well. Regarding the "loss" function, it is not the same type of loss as you would encounter in BC or IL tasks, where you have a specific dataset that you try to fit your model to and the difference is the loss. Here, calling it "loss" might actually be a bit misleading: the critic is fitted against a bootstrapped target that itself keeps moving as the policy and Q-values grow, so the reported value can climb even while the policy keeps improving.
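To make the distinction concrete, here is a minimal PyTorch sketch of what such a TD3-style critic "loss" computes. The network objects and the discount value are stand-ins for illustration, not the repository's exact code.

```python
import torch
import torch.nn.functional as F

def td3_critic_loss(critic, critic_target, actor_target,
                    state, action, reward, next_state, done, discount=0.99):
    """The value logged as "loss" in a TD3-style trainer: a TD error against a
    bootstrapped, moving target rather than a fit to a fixed dataset."""
    with torch.no_grad():
        next_action = actor_target(next_state)
        target_q1, target_q2 = critic_target(next_state, next_action)
        target_q = reward + (1.0 - done) * discount * torch.min(target_q1, target_q2)
    current_q1, current_q2 = critic(state, action)
    # As the policy improves, Q-values (and hence the targets) grow, so this
    # quantity can climb even while navigation performance gets better.
    return F.mse_loss(current_q1, target_q) + F.mse_loss(current_q2, target_q)
```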
Many thanks :P I've sent an email to your address reinis.cimurs@de.bosch.com, did you receive it? Please do check it; sincerely looking forward to your reply.
Hi, I sent a reply yesterday.
Have you ever tried using 360-degree laser data (currently it's 180-degree laser data with env_dim=20, i.e., one gap every 9 degrees)? I changed it to a 360-degree range and set env_dim = 40, keeping 9 degrees per gap as before. However, the model keeps going around in circles or barely moves each step. I thought it might be the seed value at first, so I also tried about a dozen seed values (from 0 to 10), but it seems to be no help at all. 1. I was wondering if the doubled env_dim makes the network hard to learn from (in other words, the network input becomes larger, but the network itself has such shallow, basic layers that it cannot deal with the larger input)? 2. Or should I just try more seed values (but how many would I have to try, and how can you tell whether this problem is seed-related or not)? Looking forward to your reply, thank you for your time and consideration :)
Hi, Did you also update the gap creation function here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py#L91C18-L91C18 You can see that it is later used in the velodyne callback for creating gaps: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py#L135 There might be some mismatch here. A doubled env_dim should not have too much effect on training in general. I have trained models with 40 laser values as well as a lower number of parameters in the MLP layers, and it was successful. There might be some other parameters, like the learning rate, that need adjustment, but it should still work. It does blow up the size of the saved state representations, though.
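A hedged, standalone sketch of what the gap construction could look like for a full 360-degree field of view, mirroring the structure of the linked velodyne_env.py code but with an illustrative helper name (`make_gaps`) and parameters:

```python
import numpy as np

def make_gaps(environment_dim, fov=2 * np.pi):
    """Split a lidar field of view of `fov` radians into `environment_dim` angular bins."""
    step = fov / environment_dim
    gaps = [[-fov / 2 - 0.03, -fov / 2 + step]]
    for m in range(environment_dim - 1):
        gaps.append([gaps[m][1], gaps[m][1] + step])
    gaps[-1][-1] += 0.03  # small padding so boundary readings are not dropped
    return gaps

# 360-degree lidar with env_dim = 40 -> each gap spans 9 degrees
gaps_360 = make_gaps(40)
# The velodyne callback would then bin each point's angle into one of these gaps.
```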
Copy that hhhhhh, I will try different learning rates to see the difference. As for the RNN, I'll do some research on how to add it properly (I guess NN design is not that easy; a DNN always seems like a black box to me, and whether adding an RNN here or there helps is hard to foresee and lacks good interpretability). Bootstrapping methods and angle representation are somewhat new to me haha, live and learn. Many thanks for sharing those good ideas (an idea is a precious thing, it's just so hard to come up with one, so heartfelt thanks :p); can't wait to discover new possibilities and see how they work.
There is a GRU implementation based on this work that uses state history in navigation, which might be a good starting point in that direction: https://github.com/Barry2333/DRL_Navigation
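For anyone exploring that direction, a minimal sketch (not the linked repository's actual code) of how a short history of states could be run through a GRU before the actor head; dimensions and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    def __init__(self, state_dim=24, action_dim=2, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state_seq):
        # state_seq: (batch, history_len, state_dim)
        _, h = self.gru(state_seq)   # h: (num_layers, batch, hidden_dim)
        return self.head(h[-1])      # actions in [-1, 1]

actor = RecurrentActor()
actions = actor(torch.zeros(1, 8, 24))  # one 8-step state history -> shape (1, 2)
```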
Cool haha! Thanks a lot, it will be good inspiration :)
Dear Reinis Cimurs, During recent training trials I encountered some strange phenomena (see the videos below: the first shows the model swinging left and right with little forward motion each step until it exceeds maxsteps_per_episode, so the episode ends without reaching the goal; the second shows the model tending to take a detour to the goal even though the goal is clearly in front of it with no obstacles). swingleftandright.mp4 contours.mp4 I just cannot tell what causes them, because the model generally reaches the goal most of the time (about 7 episodes out of 10), and the goal points in the videos are apparently not that hard to reach either. Also, the paths it plans are not as good as the source-code model's. And here is what I've changed: What's your opinion about it? I sincerely need your guidance. Thank you for your time and consideration 👍
How many epochs have elapsed in this training? For the second case, it is again a local optimum. A better explanation would be that the evaluation of the state is such that the robot "fears" collision more than it values reaching the goal. At this stage in training the robot evaluates that there is more danger of colliding with the obstacle, as that is most likely something that happened in previous runs where it collided with an obstacle on the right side. So closeness to an obstacle on the right has a negative value. It is not the same on the left side, as the state is different and its evaluation will be different as well. As you can see, the effect appears around the 1 m mark from the obstacle, which would be consistent with the r3 calculation (unless you have changed it). Since the robot is somewhat slower in your case, you can consider lowering the reward for r3. In any case, finding the right reward function and the right coefficients may solve these issues. Also, theoretically, the right value of a state is only reached after seeing each state many times, so all you might need is to train the model longer. You also should not judge performance too much from training episodes, but rather from the evaluation episodes, as no noise or bootstrapping is applied there.
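For reference, a minimal sketch of an immediate-reward function in the shape the r3 discussion assumes, roughly following the repository's get_reward. The 1 m threshold matches the effect described above; the r3_weight parameter is an added, illustrative knob one might lower for a slower robot.

```python
def get_reward(target, collision, action, min_laser, r3_weight=0.5):
    """Immediate reward: terminal bonuses/penalties plus shaping terms."""
    if target:
        return 100.0
    if collision:
        return -100.0
    # r3 only activates when the closest obstacle is within 1 m of the robot.
    r3 = (1.0 - min_laser) if min_laser < 1.0 else 0.0
    # Forward speed is encouraged; turning and obstacle proximity are penalised.
    return action[0] / 2 - abs(action[1]) / 2 - r3_weight * r3
```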
Thanks for your kind help :P In this training the model has been trained for ~2500 episodes, so I think there might be little improvement from just training longer without other adjustments. I heartily appreciate your explaining the reasons behind these behaviours so patiently; I'll set about trying those possible solutions to see how they work. BTW, does the random noise here refer to expl_noise? (There are expl_noise and policy_noise; I think expl_noise is used for the actions the robot actually takes, while policy_noise is only used in the loss calculation.)
Hi, Yes, I mean the expl_noise.
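A small runnable sketch of where the two noise terms act in a TD3-style loop; the actor networks are replaced with dummies so the snippet stands alone, and only the names expl_noise and policy_noise are taken from the training script:

```python
import numpy as np
import torch

action_dim, expl_noise, policy_noise, noise_clip = 2, 0.1, 0.2, 0.5

# Stand-ins for the actor networks, just to keep the sketch executable.
actor = lambda s: torch.zeros(action_dim)
actor_target = lambda s: torch.zeros(1, action_dim)
state, next_state = torch.zeros(24), torch.zeros(1, 24)

# 1) expl_noise: added to the action the robot actually executes during training rollouts.
action = actor(state).numpy()
action = (action + np.random.normal(0, expl_noise, size=action_dim)).clip(-1, 1)

# 2) policy_noise: target-policy smoothing used only inside the critic update;
#    it never reaches the simulator, it only regularises the Q-target.
noise = (torch.randn(1, action_dim) * policy_noise).clamp(-noise_clip, noise_clip)
next_action = (actor_target(next_state) + noise).clamp(-1, 1)
```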
Dear Reinis Cimurs, I heartily appreciate your help; your suggestions have inspired me to try some possible approaches. These days I have tried several different methods; below are my experiments:
I don't know whether I'm missing something or whether these methods just aren't suitable. What's your opinion on them? I sincerely need your help 👍
Hi
Dear Reinis Cimurs, So glad to have your kind help. 1. Yep, for the "swing detection code", the random action is executed only once each time it detects the "swing left and right" phenomenon (a rough sketch of such a detector is given after this comment).
Thanks for your suggestions, I have changed that part of the code like this: So far the training has reached ~100 episodes, but I cannot yet tell whether the swing phenomenon is eliminated... I will keep training it and hope this works. 2. I didn't expect that to happen. The reason I tried to change the reward function like that is that my robot seems blind to navigating to the goal (not just detours, it sometimes completely ignores the goal), as the picture below shows. Now I'm trying to change the reward function like this: I'm gradually finding that DRL is quite unique, because one has to speculate, based on the different phenomena, what on earth leads the robot to act this way or that, and how to adjust the code so it learns to act the way one expects. So magical, but also challenging hhh. Looking forward to your reply, sincerely appreciate your guidance <3
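Since the actual "swing detection code" is only described in this thread (the snippet itself is not shown), the following is a purely hypothetical sketch of such a heuristic: count recent sign flips of the angular command and, past a threshold, execute one random action. All names and thresholds are illustrative.

```python
import numpy as np

class SwingDetector:
    """Detects rapid left/right oscillation from recent angular velocity commands."""

    def __init__(self, window=6, flip_threshold=4):
        self.history = []
        self.window = window
        self.flip_threshold = flip_threshold

    def update(self, angular_vel):
        self.history.append(np.sign(angular_vel))
        self.history = self.history[-self.window:]
        flips = sum(
            1 for a, b in zip(self.history, self.history[1:]) if a != b and a * b != 0
        )
        return flips >= self.flip_threshold

detector = SwingDetector()
# Inside the training loop, one might do:
# if detector.update(action[1]):
#     action = np.random.uniform(-1, 1, size=2)  # single random "kick" action
```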
Dear Reinis Cimurs, Thanks for your timely reply. 1. The dist and angle calculations are from the source code, and I haven't changed them yet. I also print dist and angle at each step, and the values do reflect the corresponding actions (e.g., once the robot moves away from the goal, dist does become larger). 2. The changes c, d, e, f I made are entirely in service of a and b. With a+b the model always goes in circles without learning anything. Hard to believe such tiny differences matter so much. Frankly speaking, I'm kind of sad now, but I don't want to give up. I appreciate your giving me so much support.
Dear Reinis Cimurs, The model has now been trained for ~2000 episodes. Although this version of the "swing detection code" (#89 (comment)) greatly reduces the swing phenomenon during training, the model still swings left and right a lot in the test stage (not in every episode, but it occasionally occurs). It feels like the methods are exhausted but the result is not getting better... swing1.mp4 swing2.mp4 swing3.mp4
Hi, Do you by any chance have a repo or similar where I could look at the full code? Sorry, I am also out of ideas and I don't think I can help much more without it. I will try to get around to training a model with a smaller speed on the weekend, if I have time, to see if I can get a working model.
Dear Reinis Cimurs, During recent training I suddenly found a mistake in my reward function, and the swing phenomenon was just gone after I corrected it (how careless of me; I'm also sorry for misleading you so much). However, the model still cannot "see" the goal after the correction, and I haven't found the reason behind it... I've sent an email to your mailbox with the full code attached. Very grateful for your kind help, looking forward to your reply <3
Hi, I have received the files. I will try to take a look at the program when I have the time.
Dear Reinis Cimurs, I tried training the model much longer, and its behavior gets a little better, but it is still far inferior to the source-code model: #89 (comment) (this model has been trained for ~400 episodes). (Update: the swing phenomenon isn't gone, but it only occurs a few times during later training, so maybe it has something to do with the reward function, though not entirely. Moreover, with longer training the detour phenomenon disappears, which means it can finally "see" the goal.) BTW, do you encounter the same situation with the smaller speed? Looking forward to your reply, sincerely appreciate your guidance :)
Hi, I have trained a couple of times with the code you provided. It is a bit difficult to validate the issues here, since training at such slow speeds takes quite some time. Is there a specific reason why you want to use such slow robot speeds? On the code side, I was able to train to the point where it also encounters the swinging issue. It also did not initially want to go to the goal directly, but with longer training it did improve. I simplified the velocity capping and reward function to more closely resemble the original implementation in the original repo. I noticed that you mostly cap velocities to 0.34, but in some instances (when vel_flag==1) the velocity shoots up quite high, with a large, non-gradual jump in velocity and, as such, a large change in the reward. I will train a couple more times to see what possible issues there might be and let you know if I find something.
Dear Reinis Cimurs,
I highly appreciate your kind help; your support gives me much more confidence to crack these barriers. Looking forward to your reply :p
Hi, Before going on vacation I also ran some trainings with slower speeds and a larger lidar FOV, but also could not get consistently good performance. The model would also fall into local minima and "lock up"; however, it did seem to go toward the goal.
Dear Reinis Cimurs, Cordial thanks for your help. Recently I have also tried training some models, and my findings are as below.
Thus, I tried to smooth the velocity adjustment, using a tanh-like activation function like this to guarantee consistency (the green curve):
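Since the plot with the green curve is not visible in the thread, the following is only a hypothetical sketch of what such a tanh-based smooth cap might look like; the 0.34 constant comes from the discussion, everything else is illustrative.

```python
import numpy as np

MAX_LIN_VEL = 0.34  # reduced top speed

def smooth_cap(a_lin):
    """Map a raw actor output (roughly in [-1, 1]) to [0, MAX_LIN_VEL] without
    the hard jump that a piecewise/flag-based cap introduces."""
    return MAX_LIN_VEL * (np.tanh(a_lin) + 1.0) / 2.0

print(smooth_cap(-1.0), smooth_cap(0.0), smooth_cap(1.0))
# ~0.04, 0.17, ~0.30 -- a smooth, monotone ramp instead of a sudden velocity jump
```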
a => all models showed smoother motion.
a+b => I trained several models with different seed values and finally got one that hardly swings (maybe 2 or 3 swings per 1000 episodes in the test stage). So I created a simple test map to further examine its performance; sadly, it swings a lot there...
a+c => I trained several models with different seed values and found that some of them jump out of the "swing local minimum" (no swing at all). However, while they don't swing, they also don't reach the goals (blind again, even after training to ~8000 episodes).
5. I once trained a model with 180° FOV + 2D laser + lr 1e-3 + [a_in[0]*0.34, a_in[1]*0.34] + time_delta 0.3. It works okay, except it moves too slowly...
6.
Hope there's a surprise ahead 👍
Hi,
Best of luck! Sorry I can't provide more insights.
Dear Reinis Cimurs,
I recently read your paper "Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning". I really appreciate your work on robot path planning using DRL, and I believe it is a valuable resource and guide for many others' DRL research.
I've reproduced your work using your GitHub code; everything works just fine until I change the output velocity range to [-0.34, 0.34] (in your work the linear velocity ranges over [0, +1] and the angular velocity over [-1, +1]), which leads to a divergence in the loss, as shown below.
(Images: training-loss curves diverging after the velocity-range change.)
To solve the loss problem, I also tried adjusting the reward function as below; thankfully, it finally converges, like this:
(Images: loss curves converging after the reward-function adjustment.)
However, while the loss may look okay, the actual simulation result is not as good as before: the robot collides in 4 or 5 episodes out of 10, and when the goal is behind the robot, it does not seem to know to turn around to navigate to the goal; it just goes straight ahead and hits the obstacle in front of it...
Besides, I also tried crudely multiplying by a coefficient and a tanh() to adjust the Actor's output velocity, and it fails like this:
(Images: training curves and output plots showing the failed coefficient + tanh() adjustment.)
Thank you for your time and consideration. I really do need your help; I've been stuck here for a week, and it's driving me crazy (sad...)