
output is always 1 #2

Closed
log69 opened this issue Feb 14, 2018 · 12 comments


log69 commented Feb 14, 2018

Hi Alexey, could you kindly help me?

I'm training a network and it always gives an output of 1, no matter how I change the number of inputs (10, 100, or 500) or hidden neurons (2 or 20). There is one hidden layer and I'm using the default values for the learning rate etc.

I've got training data consisting of groups of 100 numbers, so I'd like to use an input layer with 100 neurons. I train the network with a positive set only, setting the output to 1 in every case. When I finish training, I test the network with random input. I would expect the network to predict a very low value, since the random input differs entirely from my training data, but I always get 1.

Training it with random data also always gives 1.

I also tried training it with just a single sample, leaving it effectively untrained, and when feeding random data into it I still get an output of 1. I would expect a random output, since the network is not trained yet and I'd assume the hidden weights should still have random values.

What can be the problem? Thank you in advance.


DEgITx commented Feb 14, 2018

It looks like it is simply untrained. An untrained network can give a constant result even on different input data with random weights. Make sure the training process actually ran; for example, check the iterations() output after training to get the number of learning cycles.
By the way: the train() method without the error option iterates only once over the data. To keep learning in a cycle, set the error option to something small (that may help).
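The loop-until-error idea can be sketched in plain JavaScript. This toy single-neuron trainer is only an illustration of the concept, not nen's implementation; `trainUntil`, `errorTarget`, and the learning rate are all made up for the sketch:

```javascript
// Toy illustration (not nen's internals): keep training in a loop
// until the mean squared error drops below an error target,
// counting learning cycles the way iterations() would report them.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function trainUntil(inputs, targets, errorTarget, maxEpochs) {
  let w = 0.1, b = 0.0, epochs = 0, mse = Infinity;
  const rate = 0.5;
  while (mse > errorTarget && epochs < maxEpochs) {
    mse = 0;
    for (let i = 0; i < inputs.length; i++) {
      const out = sigmoid(w * inputs[i] + b);
      const err = targets[i] - out;
      // gradient step for a single sigmoid neuron
      const grad = err * out * (1 - out);
      w += rate * grad * inputs[i];
      b += rate * grad;
      mse += err * err;
    }
    mse /= inputs.length;
    epochs++;
  }
  return { epochs, mse };
}

// Learn a crude threshold: small inputs -> 0, large inputs -> 1.
// Without the error/iterations loop this would run only one epoch
// and the output would stay near its initial constant value.
const result = trainUntil([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1], 0.05, 10000);
console.log(result);
```

The point of the sketch is the while-loop condition: training stops only when the error target is met, so many passes over the data happen, unlike a single call that touches each sample once.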


log69 commented Feb 14, 2018

Actually I have real data with 3 million numbers, each between -250 and +500, in groups of 100, and I tried to train the network with 1000 samples of 100 inputs (squashing the input to the 0..1 interval), but I'm still getting 1. I also provided the error option.

Why can't I train the network? This seems odd, because in my real data the groups of 100 are very similar, so I'd expect the network to converge fast. I also checked iterations() and it gives me the number of training samples (1000).
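As an aside, the "squashing to 0..1" step could look like this min-max scaling sketch (the -250/+500 bounds are taken from the description above; `squash` is a made-up helper, not part of nen):

```javascript
// Min-max scaling: map raw values in [-250, 500] into the 0..1 interval.
function squash(values, min = -250, max = 500) {
  return values.map(v => (v - min) / (max - min));
}

// Maps -250 -> 0, 125 -> 0.5, 500 -> 1.
console.log(squash([-250, 125, 500]));
```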

Thanks for your quick answer.


DEgITx commented Feb 15, 2018

This means the training cycle was called only once and probably didn't learn anything; iterations() should be greater than 1000.
Please show how you call the train() method; with the error option it should be called more than once. Which value do you set for the error option?


log69 commented Feb 15, 2018

Should train() be called more than once? That may be my problem then, because I don't do that. What I was doing was putting all the training data into a 2D array and feeding it to train() only once. Like this:

nen = require("nen");
nn = nen(100, 1, 1, 20);
errors = nn.train(  [ [n001, n002, n003..n100], [n101, n102, n103..n200], ...], [ [1], [1], [1] ... ], { error: 0.1, sync: true } );

So you're saying that instead of the above I'm supposed to do the following?

nn.train( [n001, n002, n003..n100], [1], { error: 0.1, sync: true } );
nn.train( [n101, n102, n103..n200], [1], { error: 0.1, sync: true } );
nn.train( [n201, n202, n203..n300], [1], { error: 0.1, sync: true } );
nn.train( [n301, n302, n303..n400], [1], { error: 0.1, sync: true } );
...

Putting it in a loop, of course. So could that be the problem? Thanks.


DEgITx commented Feb 15, 2018

No, the first one is OK. Try setting error to something lower; maybe that will take effect. As you said, your network outputs 1 for every sample even without learning, so it probably didn't learn because the network's error was already low enough ("ok") from the start.
You can try to force learning by changing error to iterations:
errors = nn.train( [ [n001, n002, n003..n100], [n101, n102, n103..n200], ...], [ [1], [1], [1] ... ], { iterations: 800, sync: true } );
(install the latest version, where this is possible)

You can also test by changing every output to something other than 1:
[ [0], [0], [0] ... ] or even [ [0], [1], [0] ... ]


log69 commented Feb 15, 2018

I must be missing something, because setting error to 0.001 doesn't help, and setting only iterations up to 80 000 doesn't help either. In the end nn.iterations() gives only 10 or 100, whatever the size of the input batch is. The result is still 1 in every case. Why doesn't the value of nn.iterations() increase after running nn.train()?


log69 commented Feb 15, 2018

In the meantime I've updated nen with npm, and I still get the same results. Thanks a lot.


log69 commented Feb 15, 2018

Changing the output values from 1 to something else here and there greatly increases iterations(), so I think it starts learning. But this way the learning will not be accurate, because I have a good sample set that I want to train the network on; later I want to compare other data to it, and I expect the output to show me, as a percentage, how much the new data differs from the original training data. Am I right that I can do this: train the net with the output always set to 1 and then check how much new data differs?


DEgITx commented Feb 15, 2018

Maybe, if I understand it right. A proper training set must have actual output data relevant to each input; otherwise, if for example every output value is 1, the net will conclude that any input must produce 1 and will learn nothing from any data. That's normal.
It is also possible to compare data sets by comparing the error the network reports for them, using error().


log69 commented Feb 15, 2018

Then what if I train it with random input, marking the output as zero? Do you think that would help it converge?


DEgITx commented Feb 15, 2018

As I said before, you can force learning by using the iterations option; in that case the network doesn't depend on the output error and learns for an exact number of steps. But the result will probably be the same: the network will just learn the "always output zero" task and will output 0 for every input, and I'm not sure that's what you want.

For the task of comparing two datasets, you can try adding some random data marked with output 0 to your good dataset and training on that combination. That way you show the network that random data is bad and your good dataset is what you want.
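Building such a mixed training set could be sketched like this; `buildTrainingSet` and its parameters are illustrative names, not nen API, and the inputs/targets arrays match the 2D shape used in the train() call earlier in the thread:

```javascript
// Sketch: mix the good samples (target 1) with random noise samples
// (target 0) so the network has a negative class to contrast against.
function buildTrainingSet(goodSamples, noiseCount, inputSize) {
  const inputs = [], targets = [];
  for (const sample of goodSamples) {
    inputs.push(sample);
    targets.push([1]);   // good data -> output 1
  }
  for (let i = 0; i < noiseCount; i++) {
    // random values in 0..1, same range as the squashed real data
    inputs.push(Array.from({ length: inputSize }, () => Math.random()));
    targets.push([0]);   // noise -> output 0
  }
  return { inputs, targets };
}

const good = [[0.2, 0.3], [0.25, 0.35]];
const { inputs, targets } = buildTrainingSet(good, 2, 2);
console.log(inputs.length, targets.length);
```

With both classes present, the error no longer starts out trivially low, so the training loop has something to minimize.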


log69 commented Feb 15, 2018

Thank you.

log69 closed this as completed Feb 15, 2018