Neural evolution algorithms implementation (CNE, NEAT, HyperNEAT) #686

Status: Closed. BangLiu wants to merge 72 commits. The diff below shows changes from 1 commit.

Commits (72):
dc27518
change .gitignore to ignore .DS_Store
BangLiu May 17, 2016
88788d6
create gene.hpp, add ne_test.cpp, and created or modified multiple CM…
BangLiu May 20, 2016
6ab9add
Created gene.happ, add tests and changed or created CMakeLists.txt
BangLiu May 20, 2016
ca73f95
Add aDepth to NeuronGene.
BangLiu May 20, 2016
30ec0ba
implement CNE, haven't finish.
BangLiu May 28, 2016
7fdd7ca
delete blank files
BangLiu May 28, 2016
e4fbabd
finished genome.hpp, revised gene.hpp
BangLiu Jun 7, 2016
3c8aa62
modified gene.hpp
BangLiu Jun 7, 2016
cbb79a2
implemented CNE skeleton; more genome and population functions.
BangLiu Jun 9, 2016
65f093d
implemented XOR test case; need debugging; one heart function -- CNE:…
BangLiu Jun 10, 2016
eb16ce2
solved most bugs for testing XOR
BangLiu Jun 11, 2016
c6976ac
further solved some bugs in code.
BangLiu Jun 11, 2016
8919bb9
Solved bugs. Need to fulfill Reproduce() to finish CNE.
BangLiu Jun 12, 2016
ee548b6
CNE algorithm finished. XOR test passes. Need to revise coding style …
BangLiu Jun 12, 2016
faf47a2
Create Species class.
BangLiu Jun 19, 2016
adc45b2
Revised LinkGene and related file for NEAT
BangLiu Jun 20, 2016
95c2e53
revised sortSpecies()
BangLiu Jun 21, 2016
1ee07a0
add aAdjustedFitness to genome
BangLiu Jun 21, 2016
39ef07b
fix bug in CNE Reproduce()
BangLiu Jun 21, 2016
74fb4a0
Implemented neat mutations, replace size_t by ssize_t
BangLiu Jun 24, 2016
d0883f4
Implemented crossover, some bug exist
BangLiu Jun 26, 2016
a605eef
Revise Crossover, solve bugs, more functions implementations.
BangLiu Jun 26, 2016
e059af0
revised neat functions. In paper,disabled links also consider for cro…
BangLiu Jun 27, 2016
6c11f32
neat almost finished. Need debugging,need revise coding style, need t…
BangLiu Jun 28, 2016
3c10995
NEAT finished, but has bugs. XOR not pass.
BangLiu Jun 29, 2016
57d4165
In progress of debugging. Add lots of printf ... Will clear them afte…
BangLiu Jun 30, 2016
6a770ec
seems bugs solved. keep printf. Seems information such as species siz…
BangLiu Jun 30, 2016
cce115b
solve more bugs
BangLiu Jun 30, 2016
2a9899c
solved more. But still have problem.
BangLiu Jun 30, 2016
1a2cfd8
solved CalcAverageRank bug. Still have more.
BangLiu Jun 30, 2016
dacc2e6
currently seems no bug. Previously we removed stale species even when…
BangLiu Jun 30, 2016
2a4c715
revised WeightDiff
BangLiu Jun 30, 2016
6f901a1
Removed redundant members in species, population. haven't remove in g…
BangLiu Jul 1, 2016
a80f5f7
Further cleaned some code.
BangLiu Jul 1, 2016
3db85fd
changed the place where we should set the childGenome's NumInput, Num…
BangLiu Jul 1, 2016
8168779
changed neuron_gene, genome's activation. Still need to revise neat.
BangLiu Jul 2, 2016
554a279
Revised MutateAddLink.
BangLiu Jul 2, 2016
1fe7d73
solved some bugs about activation.
BangLiu Jul 7, 2016
ff45785
Add Cart Pole test. Result not good. Fitness doesn't improve.
BangLiu Jul 10, 2016
48e15c4
Solved more bugs.
BangLiu Jul 13, 2016
fc982b9
Fixed a bug in task. Removed adjustFitness. Problems maybe the parame…
BangLiu Jul 16, 2016
66b0f41
further revised something.
BangLiu Jul 17, 2016
109f05b
Passed Cart Pole problem. Revised neat, task.
BangLiu Jul 20, 2016
8054f00
Implemented test MountainCar, not pass yet.
BangLiu Jul 21, 2016
5ef61fe
Seems passed Mountain Car.. in 1 iteration
BangLiu Jul 21, 2016
f213dd8
Revised a bug in TaskMountainCar Action()
BangLiu Jul 21, 2016
f234ebd
added Success() for tasks.
BangLiu Jul 21, 2016
96ccef5
Finished double pole. Non-Markov have bug.
BangLiu Aug 3, 2016
24f9baf
revised non-markov, still have bug
BangLiu Aug 3, 2016
4decb0e
revised neat MutateAddLink. Non-Markov still have bug.
BangLiu Aug 3, 2016
c46436e
revised neat, clean genome before evaluate.
BangLiu Aug 4, 2016
000d6aa
Merge pull request #1 from mlpack/master
BangLiu Aug 5, 2016
2599077
revised a bug in Activate()
BangLiu Aug 5, 2016
408b40f
fix conflict
BangLiu Aug 5, 2016
0fc792c
changed header file in ne_test
BangLiu Aug 5, 2016
847db95
solved compile bug
BangLiu Aug 5, 2016
10b99ab
change ssize_t, size_t to int. Windows doesn't support ssize_t
BangLiu Aug 6, 2016
da55507
revised comment style, bracket style etc. Have strange compile error.
BangLiu Aug 8, 2016
beb8ba8
rebuild, solved compile problem. Add default constructors back. Witho…
BangLiu Aug 8, 2016
3b66431
Revised the style of all current code
BangLiu Aug 10, 2016
9480379
revised some tiny styles
BangLiu Aug 11, 2016
4483fc8
Revised NeuronGene to add aCoordinate. Tests all previous testings an…
BangLiu Aug 13, 2016
190e639
Revised neat.hpp. NeuronInnovation considers activation function type…
BangLiu Aug 13, 2016
4cd0054
Implemented first version of Hyperneat.
BangLiu Aug 15, 2016
668fa6e
revised HyperNEAT. No inheritance.
BangLiu Aug 16, 2016
62d5885
Solved bugs. Implemented XOR test. Not pass. Need debug.
BangLiu Aug 16, 2016
e176571
in the progress of debugging XOR test
BangLiu Aug 18, 2016
dec32ac
Find important bug in Activate()! revised. Still not pass hyperneat xor.
BangLiu Aug 19, 2016
bc178c3
small change.
BangLiu Aug 19, 2016
432331b
Solved the duplicated link bug. Changed MutateAddNeuron. Still not pa…
BangLiu Aug 21, 2016
0944516
Make the new neuron type random when add new neuron. Still not pass xor.
BangLiu Aug 26, 2016
d66bb14
Revised naming style
BangLiu Aug 29, 2016
188 changes: 59 additions & 129 deletions src/mlpack/methods/ne/genome.hpp
@@ -12,9 +12,6 @@
 #include <map>

 #include <mlpack/core.hpp>
-#include <mlpack/methods/ann/activation_functions/logistic_function.hpp>
-#include <mlpack/methods/ann/activation_functions/rectifier_function.hpp>
-#include <mlpack/methods/ann/activation_functions/tanh_function.hpp>

 #include "link_gene.hpp"
 #include "neuron_gene.hpp"
@@ -44,15 +41,13 @@ class Genome {
          const std::vector<LinkGene>& linkGenes,
          ssize_t numInput,
          ssize_t numOutput,
-         ssize_t depth,
          double fitness,
          double adjustedFitness):
    aId(id),
    aNeuronGenes(neuronGenes),
    aLinkGenes(linkGenes),
    aNumInput(numInput),
    aNumOutput(numOutput),
-   aDepth(depth),
    aFitness(fitness),
    aAdjustedFitness(adjustedFitness)
  {}
@@ -64,7 +59,6 @@
     aLinkGenes = genome.aLinkGenes;
     aNumInput = genome.aNumInput;
     aNumOutput = genome.aNumOutput;
-    aDepth = genome.aDepth;
     aFitness = genome.aFitness;
     aAdjustedFitness = genome.aAdjustedFitness;
   }
@@ -80,7 +74,6 @@
     aLinkGenes = genome.aLinkGenes;
     aNumInput = genome.aNumInput;
     aNumOutput = genome.aNumOutput;
-    aDepth = genome.aDepth;
     aFitness = genome.aFitness;
     aAdjustedFitness = genome.aAdjustedFitness;
   }
@@ -120,12 +113,6 @@
   // Set output length.
   void NumOutput(ssize_t numOutput) { aNumOutput = numOutput; }

-  // Get depth.
-  ssize_t Depth() const { return aDepth; }
-
-  // Set depth.
-  void Depth(ssize_t depth) { aDepth = depth; }
-
   // Set fitness.
   void Fitness(double fitness) { aFitness = fitness; }

@@ -150,9 +137,6 @@

   // Whether specified neuron id exist in this genome.
   bool HasNeuronId(ssize_t id) const {
-    assert(id > 0);
-    assert(NumNeuron() > 0);
-
     for (ssize_t i=0; i<NumNeuron(); ++i) {
       if (aNeuronGenes[i].Id() == id) {
         return true;
@@ -216,137 +200,86 @@
     return false;
   }

-  // Calculate Neuron depth.
-  ssize_t NeuronDepth(ssize_t id, ssize_t depth) {
-    // Network contains loop.
-    ssize_t loopDepth = NumNeuron() - NumInput() - NumOutput() + 1;  // If contains loop in network.
-    if (depth > loopDepth) {
-      return loopDepth;
-    }
-
-    // Find all links that output to this neuron id.
-    std::vector<int> inputLinksIndex;
-    for (ssize_t i=0; i<NumLink(); ++i) {
-      if (aLinkGenes[i].ToNeuronId() == id) {
-        inputLinksIndex.push_back(i);
-      }
-    }
-
-    // INPUT or BIAS or isolated neurons.
-    if (inputLinksIndex.size() == 0) {
-      return 0;
-    }
-
-    // Recursively get neuron depth.
-    ssize_t currentDepth;
-    ssize_t maxDepth = depth;
-
-    for (ssize_t i=0; i<inputLinksIndex.size(); ++i) {
-      currentDepth = NeuronDepth(aLinkGenes[inputLinksIndex[i]].FromNeuronId(), depth + 1);
-      if (currentDepth > maxDepth) {
-        maxDepth = currentDepth;
-      }
-    }
-
-    return maxDepth;
-  }
-
-  // Calculate Genome depth.
-  // It is the max depth of all output neuron genes.
-  ssize_t GenomeDepth() {
-    ssize_t numNeuron = NumNeuron();
-
-    // If empty genome.
-    if (numNeuron == 0) {
-      aDepth = 0;
-      return aDepth;
-    }
-
-    // If no hidden neuron, depth is 1.
-    if (aNumInput + aNumOutput == numNeuron) {
-      aDepth = 1;
-      return aDepth;
-    }
-
-    // Find all OUTPUT neuron id.
-    std::vector<ssize_t> outputNeuronsId;
-    for (ssize_t i=0; i<NumNeuron(); ++i) {
-      if (aNeuronGenes[i].Type() == OUTPUT) {
-        outputNeuronsId.push_back(aNeuronGenes[i].Id());
-      }
-    }
-
-    // Get max depth of all output neurons.
-    ssize_t genomeDepth = 0;
-    for (ssize_t i=0; i<outputNeuronsId.size(); ++i) {
-      ssize_t outputNeuronDepth = NeuronDepth(outputNeuronsId[i], 0);
-      if (outputNeuronDepth > genomeDepth) {
-        genomeDepth = outputNeuronDepth;
-      }
-    }
-    aDepth = genomeDepth;
-
-    return aDepth;
-  }
-
   // Set neurons' input and output to zero.
   void Flush() {
     for (ssize_t i=0; i<aNeuronGenes.size(); ++i) {
-      aNeuronGenes[i].aActivation = 0;
-      aNeuronGenes[i].aInput = 0;
+      aNeuronGenes[i].Activation(0);
+      aNeuronGenes[i].Input(0);
     }
   }

+  // Sort neuron genes by depth.
+  static bool CompareNeuronGene(NeuronGene ln, NeuronGene rn) {
+    return (ln.Depth() < rn.Depth());
+  }
+  void SortHiddenNeuronGenes() {
+    std::sort(aNeuronGenes.begin() + NumInput() + NumOutput(), aNeuronGenes.end(), CompareNeuronGene);
+  }
+
+  // Sort link genes by toNeuron's depth.
+  void SortLinkGenes() {
+    struct DepthAndLink
+    {
+      double depth;
+      LinkGene link;
+
+      DepthAndLink(double d, LinkGene& l) : depth(d), link(l) {}
+
+      bool operator < (const DepthAndLink& dL) const
+      {
+        return (depth < dL.depth);
+      }
+    };
+
+    std::vector<double> toNeuronDepths;
+    for (ssize_t i=0; i<aLinkGenes.size(); ++i) {
+      NeuronGene toNeuron = GetNeuronById(aLinkGenes[i].ToNeuronId());
+      toNeuronDepths.push_back(toNeuron.Depth());
+    }

zoq (Member) commented: Maybe it's a good idea to add a GetDepthById function; that way we could avoid copying the object.

+    std::vector<DepthAndLink> depthAndLinks;
+    ssize_t linkGenesSize = aLinkGenes.size();
+    for (ssize_t i=0; i<linkGenesSize; ++i) {
+      depthAndLinks.push_back(DepthAndLink(toNeuronDepths[i], aLinkGenes[i]));
+    }
+
+    std::sort(depthAndLinks.begin(), depthAndLinks.end());
+
+    for (ssize_t i=0; i<linkGenesSize; ++i) {
+      aLinkGenes[i] = depthAndLinks[i].link;
+    }
+  }

zoq (Member) commented: I think we could speed this up if we just stored the indices of all linkGenes and used a second list that holds the linkGenes as references.
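A minimal sketch of the GetDepthById suggestion above (the helper name mirrors the existing GetNeuronById; the depth-0 fallback for unknown ids is an assumption, not PR code):

  // Return a neuron's depth without copying the whole NeuronGene.
  double GetDepthById(ssize_t id) const {
    for (ssize_t i = 0; i < NumNeuron(); ++i) {
      if (aNeuronGenes[i].Id() == id)
        return aNeuronGenes[i].Depth();
    }
    return 0;  // Assumption: treat unknown ids as input-level depth.
  }

SortLinkGenes() could then call GetDepthById(aLinkGenes[i].ToNeuronId()) directly when filling toNeuronDepths.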
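And a sketch of the index-based speedup zoq describes: sort link indices by target-neuron depth and apply the permutation once, instead of copying every LinkGene into a DepthAndLink helper. This is a hypothetical replacement body, assuming the Genome class from this diff plus <algorithm> and <utility>:

  void SortLinkGenes() {
    const ssize_t numLink = aLinkGenes.size();

    // Precompute each link's target-neuron depth once.
    std::vector<double> depths(numLink);
    for (ssize_t i = 0; i < numLink; ++i)
      depths[i] = GetNeuronById(aLinkGenes[i].ToNeuronId()).Depth();

    // Sort indices, not LinkGene objects.
    std::vector<ssize_t> indices(numLink);
    for (ssize_t i = 0; i < numLink; ++i) indices[i] = i;
    std::sort(indices.begin(), indices.end(),
        [&depths](ssize_t a, ssize_t b) { return depths[a] < depths[b]; });

    // Apply the permutation with one pass of moves.
    std::vector<LinkGene> sorted;
    sorted.reserve(numLink);
    for (ssize_t i = 0; i < numLink; ++i)
      sorted.push_back(std::move(aLinkGenes[indices[i]]));
    aLinkGenes = std::move(sorted);
  }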

-  // Activate genome. The last dimension of input is always 1 for bias. 0 means no bias.
-  // NOTICE: make sure depth is set before activate.
-  void Activate(std::vector<double>& input) {
-    assert(input.size() == aNumInput);
-    //Flush();
-
-    // Set inputs.
-    for (ssize_t i=0; i<aNumInput; ++i) {
-      aNeuronGenes[i].aActivation = input[i];  // assume INPUT, BIAS, OUTPUT, HIDDEN sequence
-    }
-
-    // Construct neuron id: index dictionary.
-    std::map<ssize_t, ssize_t> neuronIdToIndex;
-    for (ssize_t i=0; i<NumNeuron(); ++i) {
-      neuronIdToIndex.insert(std::pair<ssize_t, ssize_t>(aNeuronGenes[i].Id(), i));
-    }
-
-    // Activate layer by layer.
-    for (ssize_t i=0; i<aDepth; ++i) {
-      // Loop links to calculate neurons' input sum.
-      for (ssize_t j=0; j<aLinkGenes.size(); ++j) {
-        aNeuronGenes[neuronIdToIndex.at(aLinkGenes[j].ToNeuronId())].aInput +=
-            aLinkGenes[j].Weight() *
-            aNeuronGenes[neuronIdToIndex.at(aLinkGenes[j].FromNeuronId())].aActivation *
-            ((int) aLinkGenes[j].Enabled());
-      }
-
-      // Loop neurons to calculate neurons' activation.
-      for (ssize_t j=aNumInput; j<aNeuronGenes.size(); ++j) {
-        double x = aNeuronGenes[j].aInput;  // TODO: consider bias. Difference?
-        aNeuronGenes[j].aInput = 0;
-
-        double y = 0;
-        switch (aNeuronGenes[j].Type()) {  // TODO: more cases.
-          case SIGMOID:
-            y = ann::LogisticFunction::fn(x);
-            break;
-          case TANH:
-            y = ann::TanhFunction::fn(x);
-            break;
-          case RELU:
-            y = ann::RectifierFunction::fn(x);
-            break;
-          case LINEAR:
-            y = x;
-          default:
-            y = ann::LogisticFunction::fn(x);
-            break;
-        }
-        aNeuronGenes[j].aActivation = y;
-      }
-    }
-  }

zoq (Member) commented: If we reset the activation at the beginning, we can't use the activation at time (t - 1) from a unit that has a recurrent connection, right? I'm not sure we have to reset the activation at all; we could just overwrite the existing one.

BangLiu (Author) replied: @zoq Yeah, seems reasonable. I added the Flush() step to make sure the activation would be correct, but I also think it isn't a required step.

+  // Activate genome. The last dimension of input is always 1 for bias. 0 means no bias.
+  void Activate(std::vector<double>& input) {
+    assert(input.size() == aNumInput);
+
+    SortLinkGenes();
+
+    // Set all neurons' input to be 0.
+    for (ssize_t i=0; i<NumNeuron(); ++i) {
+      aNeuronGenes[i].Input(0);
+    }

zoq (Member) commented: If we reset the input, we also lose the input of a recurrent connection for the next iteration.

BangLiu (Author) replied: A recurrent connection's input is the neuron's activation, so I think it actually won't be lost.

+    // Set input neurons.
+    for (ssize_t i=0; i<aNumInput; ++i) {
+      aNeuronGenes[i].Activation(input[i]);  // assume INPUT, BIAS, OUTPUT, HIDDEN sequence
+    }
+
+    // Activate hidden and output neurons.
+    for (ssize_t i = 0; i < NumLink(); ++i) {
+      if (aLinkGenes[i].Enabled()) {
+        ssize_t toNeuronIdx = GetNeuronIndex(aLinkGenes[i].ToNeuronId());
+        ssize_t fromNeuronIdx = GetNeuronIndex(aLinkGenes[i].FromNeuronId());
+        double input = aNeuronGenes[toNeuronIdx].Input() +
+            aNeuronGenes[fromNeuronIdx].Activation() * aLinkGenes[i].Weight();
+        aNeuronGenes[toNeuronIdx].Input(input);
+
+        if (i == NumLink() - 1) {
+          aNeuronGenes[toNeuronIdx].CalcActivation();
+        } else if (GetNeuronIndex(aLinkGenes[i + 1].ToNeuronId()) != toNeuronIdx) {
+          aNeuronGenes[toNeuronIdx].CalcActivation();
+        }
+      }
+    }
+  }
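To make the recurrence point concrete, a standalone toy (not PR code; the two-neuron wiring and identity activation are illustrative assumptions) showing that a recurrent link reads the previous pass's activation even though accumulated inputs are zeroed at the start of each pass:

  #include <cstdio>

  int main() {
    double activationA = 0.0, activationB = 1.0;
    const double weight = 0.5;  // recurrent link B -> A

    for (int t = 0; t < 3; ++t) {
      double inputA = 0.0;             // reset input, as Activate() does
      inputA += weight * activationB;  // recurrent term uses B's activation from pass t-1
      activationA = inputA;            // identity activation, for brevity
      activationB = activationA;       // forward link A -> B
      std::printf("t=%d: A=%.3f B=%.3f\n", t, activationA, activationB);
    }
    return 0;
  }

Resetting inputA each pass loses nothing, because the recurrent contribution is recomputed from activationB, which survives between passes.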
@@ -355,7 +288,7 @@

   std::vector<double> Output() {

zoq (Member) commented: Let's use a reference here; it might also be a good idea to use an arma::vec:

Output(std::vector<double>& output)

BangLiu (Author) replied: Revised.

zoq (Member) replied: Yeah, right.

I did some testing and encountered unexpected behaviour. These are the modifications I made: https://gist.github.com/zoq/181c036cb4b903f8e7cb2977639ee6b8

I've also printed the activation function type in CalcActivation(), which is the RELU function, but I expected it to be SIGMOID. Do we randomly choose the activation function somewhere? The seed genome uses SIGMOID for all neurons except the BIAS neuron.

     std::vector<double> output;
     for (ssize_t i=0; i<aNumOutput; ++i) {
-      output.push_back(aNeuronGenes[aNumInput + i].aActivation);
+      output.push_back(aNeuronGenes[aNumInput + i].Activation());
     }
     return output;
   }
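A sketch of the reference-based variant zoq suggests, using arma::vec as proposed (the signature actually pushed in the revision is not shown in this diff, so treat this as an assumption):

  void Output(arma::vec& output) {
    output.set_size(aNumOutput);
    for (ssize_t i = 0; i < aNumOutput; ++i)
      output[i] = aNeuronGenes[aNumInput + i].Activation();
  }

This avoids constructing and returning a temporary std::vector on every activation.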
@@ -390,9 +323,6 @@
   // Output length.
   ssize_t aNumOutput;

-  // Network maximum depth.
-  ssize_t aDepth;
-
   // Genome fitness.
   double aFitness;

5 changes: 4 additions & 1 deletion src/mlpack/methods/ne/neat.hpp
@@ -191,6 +191,8 @@ class NEAT {
     if (!genome.aLinkGenes[linkIdx].Enabled()) return;

     genome.aLinkGenes[linkIdx].Enabled(false);
+    NeuronGene fromNeuron = genome.GetNeuronById(genome.aLinkGenes[linkIdx].FromNeuronId());
+    NeuronGene toNeuron = genome.GetNeuronById(genome.aLinkGenes[linkIdx].ToNeuronId());

     // Check innovation already exist or not.
     ssize_t splitLinkInnovId = genome.aLinkGenes[linkIdx].InnovationId();
@@ -199,6 +201,7 @@
     NeuronGene neuronGene(aNeuronInnovations[innovIdx].newNeuronId,
                           HIDDEN,
                           SIGMOID,  // TODO: make it random??

zoq (Member) commented: NEAT doesn't change the activation function, but it could be interesting to see whether that would increase performance. However, we would have to find a good way to abstract that functionality from the rest of the code. I think it's fine for now to go with a static activation function.

BangLiu (Author) replied: Yeah, I agree.

+                          (fromNeuron.Depth() + toNeuron.Depth()) / 2,
                           0,
                           0);
     genome.AddHiddenNeuron(neuronGene);
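For reference, a sketch of what the "make it random??" TODO could look like. RandomActivationFuncType is a hypothetical helper, the enum's name (ActivationFuncType) is a guess, mlpack::math::RandInt(lo, hi) draws uniformly from [lo, hi), and the values are the ones this PR uses elsewhere (SIGMOID, TANH, RELU, LINEAR):

  ActivationFuncType RandomActivationFuncType() {
    switch (mlpack::math::RandInt(0, 4)) {
      case 0: return SIGMOID;
      case 1: return TANH;
      case 2: return RELU;
      default: return LINEAR;
    }
  }

As discussed above, the PR keeps a static SIGMOID for newly added hidden neurons.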
@@ -224,6 +227,7 @@
     NeuronGene neuronGene(neuronInnov.newNeuronId,
                           HIDDEN,
                           SIGMOID,  // TODO: make it random??
+                          (fromNeuron.Depth() + toNeuron.Depth()) / 2,
                           0,
                           0);
     genome.AddHiddenNeuron(neuronGene);
@@ -696,7 +700,6 @@
     ////printf("breed 8\n");
     childGenome.NumInput(childGenome.NumInput());
     childGenome.NumOutput(childGenome.NumOutput());
-    childGenome.GenomeDepth();
     return true;
   }