Are datasets other than object recognition in images supported? #223
Hi All!
I have a neural network in TensorFlow for classical channel estimation that I want to compress. It is a simple fully connected 'denoising' network with ten layers, 48x1 inputs, and 48x1 outputs. The network solves the classical problem x=H*y under a mean-square-error criterion, where x is known and fed into the NN (it can be seen as image data) and y is the desired result (it can be seen as the label that belongs to a particular x). So, given input x, the NN should estimate y with as small an MSE as possible. (I hope this is clear; if not, let me know.)
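For concreteness, the network described above could be sketched in Keras roughly like this (the hidden width of 64 and the ReLU activations are my assumptions; the post only specifies ten layers and the 48x1 input/output size):

```python
import tensorflow as tf

def build_denoiser(dim=48, hidden=64, n_layers=10):
    """Fully connected 'denoising' regressor: dim inputs -> dim outputs.

    Hidden width and activations are illustrative assumptions; only the
    layer count (10) and the 48x1 input/output size come from the post.
    """
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(hidden, activation="relu")
         for _ in range(n_layers - 1)]
        + [tf.keras.layers.Dense(dim)]  # linear output layer for MSE regression
    )
    model.build(input_shape=(None, dim))
    model.compile(optimizer="rmsprop", loss="mse")
    return model
```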
I trained this network in TF with optimizer='rmsprop' and loss='mse'.
My training schedule looks like this, with 2000 training samples:
epochs=100, batch_size=32*1
epochs=100, batch_size=32*2
epochs=100, batch_size=32*4
epochs=100, batch_size=32*8
epochs=100, batch_size=32*16
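If the batch sizes were meant to be 32*1 up to 32*16 (the asterisks may have been eaten by markdown formatting), the schedule amounts to doubling the batch size every 100 epochs, which can be written as a simple loop (the `model.fit` call is commented out because the model object is not defined here):

```python
# Training schedule: 100 epochs per stage, batch size doubling each stage.
# Assumes the original batch sizes were 32*1, 32*2, ..., 32*16.
schedule = [(100, 32 * mult) for mult in (1, 2, 4, 8, 16)]

for epochs, batch_size in schedule:
    # model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)
    print(epochs, batch_size)
```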
Now I want to know: is it possible to compress this network, trained on my dataset, with this framework?
I have read the 'self-defined models' section. From there it is easy to write a 'network definition' file, since my model is built in TF. But I think I need to change the loss function, as the provided one is tailored to the softmax output of CNNs? (I only use MSE to minimize the difference between x and y.) And what about the accuracy metric? I have never used such a measurement. Also, how should I change the setup_lrn_rate function to get training similar to the schedule above?
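A regression loss and a matching metric to replace the classification ones could look like this minimal numpy sketch (using normalized MSE as a stand-in for 'accuracy' is my own assumption, not something the framework prescribes):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Plain mean-square error, replacing softmax cross-entropy
    # for this regression/denoising task.
    return np.mean((y_true - y_pred) ** 2)

def nmse(y_true, y_pred):
    # One possible "accuracy"-style metric for regression:
    # normalized MSE (lower is better).
    return np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)
```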
Furthermore, the 'network definition' doesn't look hard to change either. I have loaded my data and parsed it into numpy arrays (x and y have the same shape). But now comes the tricky part. First of all, I don't have any classes. Also, how should I split my data into training, validation, and evaluation samples? I only trained with 2000 samples before. And how should I choose batch_size and batch_size_eval? I have always used 1000 samples for a prediction. Do I need to change anything else?
At the moment I have done all of the above, and it runs without any errors. But the loss looks totally different from what I had in TF, so I really don't know whether the performance is measured correctly, or whether the framework is internally updating all the weights and biases correctly. Or maybe it is simply not possible to use this framework with datasets other than object recognition in images?
Hope to hear from anyone.