GA, TSP, and MaxK Color #34
Comments
Also, for Knapsack, GA did the best by far rather than MIMIC. The same repo has example code. I made up the problems with increasing complexity based on the documentation example.
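For illustration, here is a minimal sketch of the kind of GA-vs-MIMIC Knapsack comparison being described, assuming the standard mlrose API (`Knapsack`, `DiscreteOpt`, `genetic_alg`, `mimic`); the weights and values below are made-up toy data, not the problems from the linked repo:

```python
import mlrose

# Made-up toy instance; the problems in the linked repo are larger and scale in complexity.
weights = [10, 5, 2, 8, 15, 7, 1, 12]
values = [1, 2, 3, 4, 5, 6, 7, 8]
fitness = mlrose.Knapsack(weights, values, max_weight_pct=0.6)

# Each element of the state vector is the count of that item (0 or 1 with max_val=2).
problem = mlrose.DiscreteOpt(length=len(weights), fitness_fn=fitness,
                             maximize=True, max_val=2)

# Run GA and MIMIC on the same problem with a fixed seed for repeatability.
ga_state, ga_fitness = mlrose.genetic_alg(problem, random_state=1)
mimic_state, mimic_fitness = mlrose.mimic(problem, random_state=1)

print('GA:   ', ga_fitness)
print('MIMIC:', mimic_fitness)
```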
Hi @tmsuidan, I'm really sorry, but I just don't have the time to review the code that you posted the link to. If you can identify any specific issues with the mlrose code itself, then I can look at those. However, given my time constraints, that's the best I can do. Regards, Genevieve.
The specific issue is that the fitness functions don't seem to be defined correctly, so the algorithms that should perform best (as in the papers from Isbell and others) don't.
Hi @tmsuidan,
For MaxK, I haven't found any fitness definition that I could directly convert to code. TravellingSales appears to be just a sum of the distances between each node, as it should be, but the issue might lie in the GA algorithm, where it seems to discard the parent; I know you addressed that in another issue. On Knapsack, where GA does best and shouldn't, the fitness function is returning the total values as it should. So I believe there is a problem with GA and its performance, and possibly with MaxKColor. I apologize that I'm just pointing out a problem without a suggested solution, but when I have time to look deeper into the code and see what's happening that shouldn't (or vice versa), I will. I definitely thought you should know about it.
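As context for the TravellingSales point above, the tour length being described (the sum of the distances between consecutive nodes, including the return to the start) can be written as the following stand-alone sketch; this is only an illustration of that quantity for Euclidean coordinates, not the mlrose source:

```python
import numpy as np

def tour_length(state, coords):
    # Sum of Euclidean distances along the tour defined by `state`,
    # visiting the coordinates in order and returning to the start.
    total = 0.0
    for i in range(len(state)):
        a = np.asarray(coords[state[i]], dtype=float)
        b = np.asarray(coords[state[(i + 1) % len(state)]], dtype=float)  # wrap around
        total += np.linalg.norm(a - b)
    return total

# Example: visiting the corners of a unit square in order gives a tour length of 4.
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(tour_length([0, 1, 2, 3], coords))  # 4.0
```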
Hi @tmsuidan, Firstly, I just want to say that I really appreciate your feedback. I'm not really sure what to do with it right now, but it's something that I will keep in the back of my mind and will hopefully be able to address at some time in the future.

I had a look at the MaxKColor function you sent through and I don't think that is the cause of the problem. Although the function differs from the one encoded in mlrose, it doesn't differ in any way that should matter: the function in the paper you sent through divides everything by the number of edges in the graph, but that is just multiplying the function by a positive constant, so it shouldn't change what the optimal state is. Also, the function in the paper counts the number of edges where the colors of the nodes that the edge connects are different, whereas mlrose counts the number of edges where the colors are the same. However, that just means that in the former case you need to maximize the fitness function, and in the latter you need to minimize it, so, again, it shouldn't matter.

If you do have time to look deeper into the code and figure out what the issue is (or whether one exists at all), please let me know. Until then, I will just keep this in mind in case something similar pops up. Regards, Genevieve.
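To make the equivalence in the previous comment concrete, here is an illustrative stand-alone sketch (not the mlrose or paper code) showing that the two edge counts differ only by a constant, so minimizing one is the same as maximizing the other:

```python
def same_color_edges(state, edges):
    # mlrose-style count: edges whose endpoints share a color (to be minimized).
    return sum(1 for u, v in edges if state[u] == state[v])

def different_color_edges(state, edges):
    # Paper-style count: edges whose endpoints have different colors (to be maximized).
    return sum(1 for u, v in edges if state[u] != state[v])

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
state = [0, 1, 1, 0]  # color assigned to each node

# The two counts always sum to len(edges), so they share the same optimal states.
assert same_color_edges(state, edges) + different_color_edges(state, edges) == len(edges)
```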
Hi Genevieve, Please let me say how much I appreciate that you created this library. I've done all the toy problems that you provide, and the three I bring up are the ones that have unexpected results, which is what made me question the fitness determination. While it might be something else, all the other toy problems produce expected results. (I am running MaxKColor with maximize=False, as well as Queens and TSP.) I will try over the next few months; I have to move on to other projects but should have time to revisit afterwards.

I did rerun Knapsack with the release from last night, with identical results. The release notes said it fixed some bugs in some problems, so I wanted to see if it affected anything I had seen.

For NN weight optimization, gradient descent didn't give the same results as the more frequently used ABAGAIL package, but I've compared the rest of the NN weight optimization results and they're similar. I've run it both with 5-fold CV and with just train, validation, and test splits. Since I used 5-fold CV for RHC, GA, and SA, I also ran sklearn's NN with 5-fold CV, since it uses backpropagation by default.
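For reference, the maximize=False MaxKColor setup mentioned above looks roughly like the following, assuming the standard mlrose API (`MaxKColor`, `DiscreteOpt`, `simulated_annealing`); the graph is a made-up four-node cycle, not one of the actual problem instances:

```python
import mlrose

# Made-up example graph: the edges of a 4-node cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
fitness = mlrose.MaxKColor(edges)

# mlrose's MaxKColor counts edges whose endpoints share a color, so we minimize it.
# max_val=2 allows two colors per node.
problem = mlrose.DiscreteOpt(length=4, fitness_fn=fitness,
                             maximize=False, max_val=2)

best_state, best_fitness = mlrose.simulated_annealing(problem, random_state=1)
print(best_state, best_fitness)  # a fitness of 0 means a proper 2-coloring was found
```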
Hi @tmsuidan, If you do have time to revisit this over the next few months, I would be very interested to hear what you come up with. I will also try to have a look at this when I get the time, although at the moment another project that I am working on is taking up a lot of my time, so it might take a while to get to it. Thanks once again for your feedback on this, and good luck with whatever project you're working on at the moment. Regards, Genevieve.
I've run several experiments with the optimization problems. Unexpectedly, GA doesn't perform best on TSP (in fact, RHC was best, followed closely by SA). Also, for MaxKColor, MIMIC should perform best, but it's only slightly better than GA, while RHC is far better, with SA closely behind again. Is it possible there's a bug in the fitness determination, or am I setting up the problems incorrectly?
https://github.com/tmsuidan/cs7641-assignment2/tree/master/MaxkColor
https://github.com/tmsuidan/cs7641-assignment2/tree/master/tsp
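For illustration, here is a minimal sketch of the kind of algorithm comparison described above, assuming the standard mlrose API (`TravellingSales`, `TSPOpt`, `random_hill_climb`, `simulated_annealing`, `genetic_alg`); the coordinates are toy data, not the instances from the linked repos:

```python
import mlrose

# Toy coordinates; the linked repos use larger instances of increasing complexity.
coords = [(1, 1), (4, 2), (5, 2), (6, 4), (4, 4), (3, 6), (1, 5), (2, 3)]
fitness = mlrose.TravellingSales(coords=coords)
problem = mlrose.TSPOpt(length=len(coords), fitness_fn=fitness, maximize=False)

# Run each algorithm on the same problem with a fixed seed for repeatability.
algorithms = {
    'RHC': lambda p: mlrose.random_hill_climb(p, restarts=10, random_state=1),
    'SA': lambda p: mlrose.simulated_annealing(p, random_state=1),
    'GA': lambda p: mlrose.genetic_alg(p, random_state=1),
}

for name, run in algorithms.items():
    best_state, best_fitness = run(problem)
    print(f'{name:4s} tour length = {best_fitness:.3f}')
```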