Run data through trained network #4
That is very odd 😨 Thanks for the input!
It seems like there's something broken overall... :( I'll make sure to let you know after it's fixed!
Thanks for the quick response! Happy to help :) I briefly looked over the code and it seems pretty well structured. Since I didn't find any other NEAT library for Go that comes anywhere near this design, I'd be happy to contribute to this project in the form of feature branches as soon as it's stable again. I hope you'll find the bug :)
Hi again! After looking closely at the code, I might have found the error: the fitness sharing function within the species. While it penalizes the members when the fitness function is to be maximized, it actually improves their fitness when the value is to be minimized. I might have overlooked some other part of the code where this makes sense, but if not, this might be the troublemaker :)
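For reference, explicit fitness sharing divides each member's raw fitness by the size of its species, which only acts as a penalty when fitness is maximized. A minimal Go sketch of a direction-aware version (all names here are illustrative, not the library's actual API):

```go
// shareFitness applies explicit fitness sharing over one species.
// Dividing by the species size penalizes members only when fitness is
// maximized; when it is minimized, a smaller value is better, so dividing
// would reward them instead. Multiplying handles that direction.
func shareFitness(fitnesses []float64, minimize bool) {
	n := float64(len(fitnesses))
	for i := range fitnesses {
		if minimize {
			fitnesses[i] *= n // larger value = worse when minimizing
		} else {
			fitnesses[i] /= n // smaller value = worse when maximizing
		}
	}
}
```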
There I am again. I tried to fix the issues, but every time I found something that could fix one, new bugs came up. There seem to be several things that need fixing. Since you know your code better, though, I decided to let you handle it and just wait. Here are some things I found besides the one I posted above (a sketch of the signal reset follows below):

in NEAT.Run() - you update the best genome after you have reproduced, deleted, and probably mutated many of them, without updating their fitness values. You should either assign the best individual right after the evaluation, or evaluate at the end of this function and then assign it.

Neuron signals - those should be set to 0 after every evaluation. Otherwise you keep the signals for the next one, even though those evaluations are supposed to be separate and run on a "fresh" network.

EvaluationFunction - it should be possible to reset the signals here too at certain points when testing independent data. Take the XORTest() for example: if you do not set the signals to 0 after each combination of inputs, your output will depend on the order of the inputs you ran through the network. While this may be what you want when passing multiple inputs that are somehow connected, it is definitely not the case when running all XOR combinations.

My point about the neuron signals can be observed when you retrieve the best genome and run it through the evaluation function again: it produces different outputs than creating a NewNeuralNetwork using that genome (where signals are set to 0) and running the test data through it.

I'd appreciate it if you told me how interested you are in improving this project on a regular basis. If you have no time or interest to work on it further, I'll probably fork it and do it on my own, because I don't want to bother you all the time :)
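To illustrate the neuron-signal point, here is a rough sketch of the kind of reset that would make each evaluation start from a fresh network. The Neuron and NeuralNetwork types below are illustrative assumptions, not the library's real definitions:

```go
// Illustrative types; the library's actual definitions will differ.
type Neuron struct {
	Signal float64 // last activation value this neuron produced
}

type NeuralNetwork struct {
	Neurons []*Neuron
}

// Flush zeroes every stored signal so the next input is processed by a
// "fresh" network rather than one still carrying previous activations.
func (n *NeuralNetwork) Flush() {
	for _, neuron := range n.Neurons {
		neuron.Signal = 0.0
	}
}
```

Calling such a Flush after each evaluation (and between the independent XOR cases) would make every output depend only on the current inputs.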
Hello,
Thank you so much for the comment. I recently went through the code myself and realized there are more bugs than I expected; I think I should have designed it more thoroughly. I planned out some changes I want to make, but I just haven't had time for them recently.
I'm kinda busy with another project right now, but I should be able to start working on it soon. To be honest, this is the open source project I'm focusing on the most (outside of my research), so I'll be working on it for a while. My current plan is to extend it as far as HyperNEAT.
Thank you again for the help!
Hi! But hey: I'm not really a certified expert on that matter, so I might be wrong :) In that case I'd appreciate the clarification. About HyperNEAT: I hope you are planning to do this as an additional option and not replace standard NEAT completely, since both have their pros and cons. Keep up the good work!
Don't worry about it :) I actually decided to flush the signals on feedforward so that I can get rid of recurrence. I think I made the problem a little more complicated by allowing it, and I wanted to see if I can fix things without recurrence and then get back to it. In terms of evaluation, I think it should be fine, since a new neural network is created each time the evaluation function is called (unless I remember it wrong..). Sorry if I'm not making much sense XD I'm a bit delusional from not sleeping lol. I'll keep fixing things, and I'll definitely let you know when it's more thoroughly tested! And yes! I will be making a sub-package that extends NEAT, not replace it :) I figured that would be useful. P.S. It's really nice to see that there's someone out there who's interested in neuroevolution AND Go!
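Continuing the illustrative types from the sketch above (again assumed names, not the actual code), flushing on feedforward would amount to something like:

```go
// ForwardFlush runs one feedforward pass via an assumed forward function,
// then zeroes all stored signals so no activation leaks into the next
// call. This gives up recurrence but makes evaluations order-independent.
func (n *NeuralNetwork) ForwardFlush(inputs []float64, forward func([]float64) []float64) []float64 {
	outputs := forward(inputs)
	n.Flush() // reset signals (see the Flush sketch above)
	return outputs
}
```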
Ah ok, didn't mean to lecture you on the subject. I wanted to report it just in case you did this accidentally because you were delusional from not sleeping ;) I've been a little hyped about genetic/evolutionary algorithms and neural networks for some time now; I just finished my master's course on the subject yesterday. I was mostly working with Matlab (without any framework) and actually LOVE most of its features, but then I started to wonder if it wouldn't be faster to implement things in Go (which I'm using for another project, and also love). So I tried doing that for a genetic algorithm solving the knapsack problem, and it WAY outperformed Matlab. Then I tried to do the same for NEAT, but figured I could use a library for it, and that's how I got here :) I'm also wondering if it makes any sense to implement NSGA-II as a Go package. Maybe I'll do so :) P.S. Don't feel pressured by me. Take your time to fix stuff. I'm going on vacation in a few days anyway, so I won't be spamming here for a while xD
That's nice to hear! I'm hoping to continue as a graduate student once I graduate as well! It's great to have at least a little bit of pressure to motivate myself, so I actually really appreciate your advice :) Have fun on your vacation!
Hey! I just wanted to check on the progress (if there is any). Can we expect a fix any time soon? I might need this library soon, and if you're dropping the project, I'd have to fork it and fix the issues myself. Thanks in advance!
Hi!
First, I wanted to thank you for the effort you put into this project. I'm having issues evaluating even the example you posted (XOR). I trained the network until the average fitness was around 1.5 and the best fitness was 0 (which should solve all possible XOR cases). But when I create a network using the best genome and run some data through it, it doesn't seem to work. Am I doing something wrong, or is there some kind of a bug regarding retrieving the best individual? Here's the code I use for the test:
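(The snippet itself didn't survive formatting; below is a rough reconstruction matching the log output, where NewNeuralNetwork is the only name taken from the thread and everything else is assumed, not the library's confirmed API.)

```go
// Rough reconstruction of the missing test fragment; names are assumed.
best := pop.BestGenome() // assumed accessor for the best genome after NEAT.Run()
log.Printf("Fitness: %f", best.Fitness)

net := neat.NewNeuralNetwork(best) // fresh network, signals start at 0
for _, in := range [][]float64{{1, 1}, {1, 0}, {0, 1}, {0, 0}} {
	out := net.ForwardPropagate(in) // assumed forward-pass method
	log.Printf("%t, %t = %f", in[0] == 1, in[1] == 1, out[0])
}
```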
In most cases the output is:
2017/07/01 03:09:31 Fitness: 0.000000
2017/07/01 03:09:31 true, true = 0.000000
2017/07/01 03:09:31 true, false = 0.000000
2017/07/01 03:09:31 false, true = 0.000000
2017/07/01 03:09:31 false, false = 0.000000
But sometimes it outputs 1 for everything:
2017/07/01 03:22:18 Fitness: 0.000000
2017/07/01 03:22:18 true, true = 1.000000
2017/07/01 03:22:18 true, false = 1.000000
2017/07/01 03:22:18 false, true = 1.000000
2017/07/01 03:22:18 false, false = 1.000000
I'd appreciate your feedback on this issue. Chances are I'm simply too dumb/tired to use it :)