output is always 1 #2
Hi Alexey, could you kindly help me?

I'm training a network and it always gives an output of 1, no matter how I change the number of inputs (10, 100, or 500) or hidden neurons (2 or 20). There is 1 hidden layer and I'm using default values for the learning rate etc.

My training data consists of groups of 100 numbers, so I'd like to use an input layer with 100 neurons. I train the network on a positive set only, setting the output to 1 in every case. So when I finish training, I test the network with random input. I would expect the network to predict a very low value, since random input differs entirely from my training data. But I always get 1.

Training it with random data always gives 1 as well.

I also tried training it on a single sample, to leave it essentially untrained, and when feeding random data into it, I still get an output of 1. I would expect random output, since the network is not trained yet and I'd believe the hidden weights should still hold random values.

What can be the problem? Thank you in advance.

Comments
It looks like the network is simply untrained. An untrained network can give a constant result even on different input data and random weights. Make sure that the training process actually ran; check the iterations() output after training, for example, to get the number of learning cycles.
Actually I have real data with 3 million numbers, each number between -250 and +500, in groups of 100, and I tried to train the network with 1000 samples of 100 inputs (squashing the input to the 0..1 interval), but I still get 1. I also provided the error option. Why can't I train the network? This seems odd because my real data's groups of 100 are very similar, so I'd expect the network to converge fast. I also checked iterations() and it gives me the number of training samples (1000). Thanks for your quick answer.
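For reference, the squashing described above could be a simple linear rescale of the stated -250..+500 range to 0..1; this helper is illustrative, not code from the thread:

```js
// Illustrative linear rescale from [-250, 500] to [0, 1]
const squash = v => (v + 250) / 750;

squash(-250); // 0
squash(500);  // 1
```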
This means the train cycle was called only once and probably didn't learn anything; iterations() should be greater than 1000.
Should train() be called more than once? That may be my problem then, because I don't do that. What I was doing is putting all the training data into a 2D array and feeding it to train(), but only once. Like this:
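The original snippet wasn't preserved in this thread. A minimal sketch of the single-call approach, assuming a hypothetical nen setup (the constructor shape below is an assumption; only train() and iterations() are confirmed by the conversation):

```js
// Hypothetical sketch -- the constructor signature is assumed,
// not taken from nen's documented API.
const nen = require('nen');

const nn = nen(100, 20, 1); // 100 inputs, 20 hidden, 1 output (assumed)

// 1000 samples of 100 numbers, already squashed to 0..1
// (random stand-ins here for the real data)
const inputs = Array.from({ length: 1000 }, () =>
  Array.from({ length: 100 }, () => Math.random())
);
const outputs = inputs.map(() => [1]); // output is always 1

// The whole 2D array fed to train() in a single call
nn.train(inputs, outputs);
```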
So you're saying that instead of the above I'm supposed to do the following?
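Presumably meaning a per-sample loop, continuing the hypothetical sketch above (same caveats about the signature):

```js
// One sample per train() call, in a loop
for (let i = 0; i < inputs.length; i++) {
  nn.train([inputs[i]], [outputs[i]]);
}
```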
Putting it in a loop, of course. So could that be the problem? Thanks.
No, the first one is OK; try setting error to something lower, maybe that will take effect. As you said, your network outputs "1" for every sample even without learning, so it probably didn't learn because your network's error is pretty low and "OK" from the start. You can also test this by changing every output to something different than 1:
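The snippet that followed here wasn't preserved either; the suggestion amounts to something like this diagnostic, again continuing the hypothetical sketch (the target values are arbitrary):

```js
// Diagnostic only: give samples varied targets instead of a constant 1,
// so the initial error can't already look "OK"
const variedOutputs = inputs.map(() => [Math.random()]);
nn.train(inputs, variedOutputs);
console.log(nn.iterations()); // should now grow well past the sample count
```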
I must be missing something, because setting error to 0.001 doesn't help, and setting only iterations up to 80,000 doesn't help either. At the end, nn.iterations() gives only 10 or 100, whatever the size of the input batch is. The result is still 1 in every case. Why doesn't the value of nn.iterations() increase after running nn.train()?
In the meantime I've updated nen via npm, with the same results. Thanks a lot.
Changing some of the output values from 1 to something else greatly increases iterations(), so I think it starts learning. But this way the learning won't be accurate, because I have a good sample set that I want to train the network on, so that later I can compare other data against it; I expect the output would then show me, as a percentage, how much the new data differs from the original training data. Am I thinking about this right? Can I train the net with data whose output is always 1 and then check how much new data differs?
Maybe, if I understand it right. A proper training set must have actual output data relevant to each input; otherwise, if for example every output value is 1, the net will think that any input must output "1" and will learn nothing. That's normal.
Then what if I train it with random input, marking the output as zero? Do you think that would help it converge?
As I said before, you can force learning by using the "iterations" option; in that case the network doesn't depend on the output error and learns for an exact number of steps. But the result will probably be the same: the network will just retrain on the "how to output zero" task and will output 0 on all data, which I'm not sure is what you want. For the task of comparing two datasets, you can try adding some random data marked with output 0 to your good dataset, and train on that combination. That way you can show the network that random data is bad and your good dataset is what you want.
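A sketch of that combination, still under the same hypothetical setup: real samples keep the label 1, random noise gets the label 0:

```js
// Random noise samples with the same shape as the real data, labelled 0
const noise = Array.from({ length: inputs.length }, () =>
  Array.from({ length: 100 }, () => Math.random())
);

const mixedInputs  = inputs.concat(noise);
const mixedOutputs = inputs.map(() => [1]).concat(noise.map(() => [0]));

nn.train(mixedInputs, mixedOutputs);
```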
Thank you.