This repository has been archived by the owner on Dec 11, 2018. It is now read-only.

Run data through trained network #4

Open

azaryc2m opened this issue Jul 1, 2017 · 11 comments

Comments

@azaryc2m

azaryc2m commented Jul 1, 2017

Hi!
First, I wanted to thank you for the effort you put into this project. I'm having trouble evaluating even the example you posted (XOR). I trained the network until the average fitness was around 1.5 and the best fitness was 0 (which should solve all possible XOR cases). But when I create a network from the best genome and run some data through it, it doesn't seem to work. Am I doing something wrong, or is there some kind of bug in retrieving the best individual? Here's the code I use for the test:

package main

import (
	"log"

	"github.com/jinyeom/neat"
)

func main() {
	// Create a new Config from the JSON file; crash on a file or parse error.
	config, err := neat.NewConfigJSON("config.json")
	if err != nil {
		log.Fatal(err)
	}

	// Train, then build a network from the best genome found.
	neatInst := neat.New(config, neat.XORTest())
	neatInst.Run()
	log.Printf("Fitness: %f\n", neatInst.Best.Fitness)
	nn := neat.NewNeuralNetwork(neatInst.Best)

	// Feed all four XOR input combinations through the network.
	inputs := make([]float64, 3)
	inputs[0] = 1.0 // bias
	cases := []bool{true, false}
	for _, a := range cases {
		for _, b := range cases {
			if a {
				inputs[1] = 1.0
			} else {
				inputs[1] = 0.0
			}
			if b {
				inputs[2] = 1.0
			} else {
				inputs[2] = 0.0
			}
			output, _ := nn.FeedForward(inputs)
			log.Printf("%t, %t = %f\n", a, b, output[0])
		}
	}
}

In most cases the output is:

2017/07/01 03:09:31 Fitness: 0.000000
2017/07/01 03:09:31 true, true = 0.000000
2017/07/01 03:09:31 true, false = 0.000000
2017/07/01 03:09:31 false, true = 0.000000
2017/07/01 03:09:31 false, false = 0.000000

But sometimes it outputs 1.0 for every case instead:
2017/07/01 03:22:18 Fitness: 0.000000
2017/07/01 03:22:18 true, true = 1.000000
2017/07/01 03:22:18 true, false = 1.000000
2017/07/01 03:22:18 false, true = 1.000000
2017/07/01 03:22:18 false, false = 1.000000

I'd appreciate your feedback on this issue. Chances are I'm simply too dumb/tired to use it :)

@jinyeom
Owner

jinyeom commented Jul 1, 2017

That is very odd 😨
I'll look into it! I'm actually working on optimization, so I'll let you know if I find anything wrong with the code.

Thanks for the input!

@jinyeom
Owner

jinyeom commented Jul 1, 2017

It seems like there's something broken overall... :( I'll make sure to let you know after it's fixed!

@azaryc2m
Author

azaryc2m commented Jul 1, 2017

Thanks for the quick response! Happy to help :) I briefly looked over the code, and it seems pretty well structured. Since I didn't find any other NEAT library for Go that comes anywhere near this design, I'd be happy to contribute to this project in the form of feature branches as soon as it's stable again. I hope you'll find the bug :)

@azaryc2m
Author

azaryc2m commented Jul 1, 2017

Hi again! After looking at the code more closely, I might have found the error. Look at the fitness sharing function within the species:

// ExplicitFitnessSharing adjusts this species' members' fitness via explicit
// fitness sharing.
func (s *Species) ExplicitFitnessSharing() {
	for _, genome := range s.Members {
		// do not let its fitness be negative
		if genome.Fitness < 0.0 {
			genome.Fitness = 0.0001
		}
		genome.Fitness /= float64(len(s.Members))
	}
}

While this penalizes the members when the fitness function is to be maximized, it actually improves their fitness when the value is to be minimized: dividing a small positive error by the species size makes it smaller, i.e. better. I might have overlooked some other part of the code where this makes sense, but if not, this might be the troublemaker :)
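
For illustration, here's roughly how I'd make the sharing aware of the optimization direction. The types here are minimal stand-ins, and the minimize flag is my own invention, not part of the current API:

package main

import "fmt"

// Minimal stand-ins for the library's types, just for this sketch.
type Genome struct {
	Fitness float64
}

type Species struct {
	Members []*Genome
}

// explicitFitnessSharing adjusts each member's fitness by the species size.
// When fitness is maximized, dividing penalizes members of large species as
// intended; when fitness is minimized (lower is better, like the XOR error),
// dividing would reward them instead, so multiply in that case.
func (s *Species) explicitFitnessSharing(minimize bool) {
	n := float64(len(s.Members))
	for _, g := range s.Members {
		// do not let the fitness be negative
		if g.Fitness < 0.0 {
			g.Fitness = 0.0001
		}
		if minimize {
			g.Fitness *= n // a larger error is worse
		} else {
			g.Fitness /= n // a smaller score is worse
		}
	}
}

func main() {
	s := &Species{Members: []*Genome{{Fitness: 4.0}, {Fitness: 2.0}}}
	s.explicitFitnessSharing(true)
	fmt.Println(s.Members[0].Fitness, s.Members[1].Fitness) // 8 4
}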

@azaryc2m
Author

azaryc2m commented Jul 3, 2017

Here I am again. I tried to fix the issues, but every time I found something that might fix one, new bugs came up. There seem to be several things that need fixing, and since you know your code best, I decided to let you handle it and just wait. Here are some things I found besides the one I posted above:

In NEAT.Run(), you update the best genome after you have reproduced, deleted, and probably mutated many genomes without updating their fitness values. You should either assign the best individual right after the evaluation, or evaluate again at the end of the function and then assign it. There's a sketch of the ordering I mean below.
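
A rough sketch with placeholder names (this is not your actual API, just the shape of the loop; I'm also assuming lower fitness is better, as in XORTest):

package main

import "fmt"

// Genome is a placeholder for the library's genome type.
type Genome struct {
	Fitness float64
}

// runGeneration sketches the order I'm suggesting: evaluate first, record
// the best genome while the fitness values are still valid, and only then
// reproduce/mutate for the next generation.
func runGeneration(pop []*Genome, evaluate func(*Genome)) *Genome {
	for _, g := range pop {
		evaluate(g) // 1. make every Fitness up to date
	}
	best := pop[0]
	for _, g := range pop[1:] {
		if g.Fitness < best.Fitness { // 2. pick the best NOW, before reproduction
			best = g
		}
	}
	// 3. reproduce/mutate only AFTER the best genome has been recorded.
	return best
}

func main() {
	pop := []*Genome{{}, {}, {}}
	i := 0.0
	best := runGeneration(pop, func(g *Genome) { g.Fitness = 3.0 - i; i++ })
	fmt.Println(best.Fitness) // 1
}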

Neuron signals - these should be set to 0 after every evaluation. Otherwise you carry the signals over into the next evaluation, even though the evaluations are supposed to be separate and run on a "fresh" network.

EvaluationFunction - it should also be possible to reset the signals here at certain points when testing independent data. Take XORTest() for example: if you do not set the signals to 0 after each input combination, your output will depend on the order of the inputs you ran through the network. While that may be what you want when passing multiple inputs that are somehow connected, it is definitely not the case when running all XOR combinations. A sketch of what I mean follows.
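
Here's a minimal sketch of the reset placement I mean, with stand-in types (the real network and FeedForward are obviously more involved; only where the reset happens matters here):

package main

import "fmt"

// Minimal stand-in for the library's network: what matters is that each
// neuron keeps a persistent Signal between calls.
type Neuron struct {
	Signal float64
}

type Network struct {
	Neurons []*Neuron
}

// Reset clears all neuron signals so the next input starts from scratch.
func (n *Network) Reset() {
	for _, neuron := range n.Neurons {
		neuron.Signal = 0.0
	}
}

// FeedForward is a dummy stand-in: it just accumulates the inputs into the
// signals, which is enough to show how state would leak without a reset.
func (n *Network) FeedForward(inputs []float64) []float64 {
	for i, v := range inputs {
		n.Neurons[i].Signal += v
	}
	return []float64{n.Neurons[len(n.Neurons)-1].Signal}
}

// xorFitness sketches the evaluation I mean: each XOR case is independent,
// so the network must be reset before each one.
func xorFitness(n *Network) float64 {
	cases := [][]float64{{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}} // bias, a, b
	targets := []float64{0, 1, 1, 0}
	sum := 0.0
	for i, in := range cases {
		n.Reset() // without this, each case sees the previous case's signals
		out := n.FeedForward(in)
		diff := out[0] - targets[i]
		sum += diff * diff
	}
	return sum // lower is better; 0 solves XOR
}

func main() {
	n := &Network{Neurons: []*Neuron{{}, {}, {}}}
	fmt.Println(xorFitness(n))
}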

My point about the neuron signals can be observed when you retrieve the best genome and run it through the evaluation function again: it produces different outputs than creating a NewNeuralNetwork from that genome (where the signals start at 0) and running the test data through it.
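
You can also check the persistence directly with the same API as my snippet above: build a fresh network from the best genome and feed it the same input twice. With the code as it currently stands, the second call sees the first call's leftover signals:

package main

import (
	"log"

	"github.com/jinyeom/neat"
)

func main() {
	config, err := neat.NewConfigJSON("config.json")
	if err != nil {
		log.Fatal(err)
	}
	neatInst := neat.New(config, neat.XORTest())
	neatInst.Run()

	// Fresh network from the best genome: signals start at 0.
	nn := neat.NewNeuralNetwork(neatInst.Best)
	in := []float64{1.0, 1.0, 0.0} // bias, a, b

	out1, _ := nn.FeedForward(in)
	out2, _ := nn.FeedForward(in) // same input again, now with stale signals
	// If the two outputs differ, the signals persisted between the calls.
	log.Printf("first: %f, second: %f\n", out1[0], out2[0])
}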

I'd appreciate it if you told me how interested you are in improving this project on a regular basis. If you have no time or interest in working on it further, I'll probably fork it and do it on my own, because I don't want to bother you all the time :)

@jinyeom
Owner

jinyeom commented Jul 3, 2017 via email

@jinyeom jinyeom closed this as completed Jul 3, 2017
@jinyeom jinyeom reopened this Jul 3, 2017
@azaryc2m
Author

azaryc2m commented Jul 3, 2017

Hi!
Too bad I can't contact you any other way here on GitHub, so I have to keep spamming this thread :) I saw that you started fixing bugs, including some of my proposed improvements, and that you are now resetting the neuron signals in the FeedForward function. In case you misunderstood me: that's not what I meant. What happens now is that the network can only process completely independent data and draws no connections between past inputs. What I meant was that resetting the signals should be possible in the EvaluationFunction (optionally, in case that's needed to train the network, for example when testing multiple data sets), and that they should be set back to 0 after every generation loop in NEAT.Run(). Resetting the signals after each call of FeedForward is not that great; a toy contrast is below.
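
To make the difference concrete, here's a toy contrast with made-up minimal types: if FeedForward flushes the signals itself, two consecutive calls can never influence each other, so recurrent connections become useless. If the caller decides when to Reset, both modes work:

package main

import "fmt"

type Network struct {
	signals []float64 // persistent state: this is what recurrence feeds on
}

// FeedForward deliberately keeps the signals, so consecutive calls can
// depend on earlier ones (needed for recurrent connections). The body is
// a trivial stand-in for the real propagation.
func (n *Network) FeedForward(in float64) float64 {
	n.signals[0] += in
	return n.signals[0]
}

// Reset is the caller's tool: call it between independent samples and once
// per generation loop in Run(), instead of flushing inside FeedForward.
func (n *Network) Reset() {
	for i := range n.signals {
		n.signals[i] = 0.0
	}
}

func main() {
	n := &Network{signals: make([]float64, 1)}
	fmt.Println(n.FeedForward(1.0)) // 1 -- fresh state
	fmt.Println(n.FeedForward(1.0)) // 2 -- past input still echoes (recurrence)
	n.Reset()
	fmt.Println(n.FeedForward(1.0)) // 1 -- independent sample, by choice
}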

But hey, I'm not really a certified expert on this matter, so I might be wrong :) In that case I'd appreciate a clarification.

About HyperNEAT: I hope you are planning to add it as an additional option and not replace standard NEAT completely, since both have their pros and cons. Keep up the good work!

@jinyeom
Owner

jinyeom commented Jul 3, 2017

Don't worry about it :)

I actually decided to flush the signals on feedforward so that I could get rid of recurrence for now. I think I made the problem a little more complicated by allowing it, so I wanted to see if I could fix things without recurrence and then get back to it. In terms of evaluation, I think it should be fine, since a new neural network is created each time the evaluation function is called (unless I remember it wrong...). Sorry if I'm not making much sense XD I'm a bit delusional from not sleeping lol

I'll try to keep fixing things, and I'll definitely let you know when it's more thoroughly tested!

And yes! I will be making a sub-package that extends neat rather than replacing it :) I figured that would be useful.

P.S. It's really nice to see that there's someone out there who's interested in neuroevolution AND Go!

@azaryc2m
Author

azaryc2m commented Jul 4, 2017

Ah, OK. I didn't mean to lecture you on the subject. I just wanted to report it in case you had done it accidentally while delusional from not sleeping ;)

I've been a little hyped about genetic/evolutionary algorithms and neural networks for some time now; I just finished my master's course on the subject yesterday. I was mostly working with Matlab (without any framework) and actually LOVE most of its features, but then I started to wonder whether it wouldn't be faster to implement things in Go (which I'm using for another project, and also love). I tried that with a genetic algorithm for the knapsack problem, and it WAY outperformed Matlab. So then I tried to do the same for NEAT, figured I could use a library for it, and that's how I got here :)

I'm also wondering whether it would make sense to implement NSGA-II as a Go package. Maybe I'll do that :)

P.S. Don't feel pressured by me; take your time to fix things. I'm going on vacation in a few days anyway, so I won't be spamming here for a while xD

@jinyeom
Owner

jinyeom commented Jul 5, 2017

That's nice to hear! I'm hoping to continue as a graduate student once I graduate as well! It's great to have at least a little bit of pressure to motivate myself, so I actually really appreciate your advice :)

Have fun on your vacation!

@azaryc2m
Author

Hey! I just wanted to check on the progress (if there is any). Can we expect a fix any time soon? I might need this library shortly, and if you're dropping the project, I'd have to fork it and fix the issues myself. Thanks in advance!
