
mnist demo not working #18

Closed
dano1234 opened this issue Nov 10, 2017 · 5 comments

Comments

@dano1234

No description provided.

shiffman added a commit that referenced this issue Nov 11, 2017
@shiffman
Member

I pushed a "quick fix" by just adding deeplearn.js back into the project locally. @cvalenzuela, what do you think is the best pathway for end-users to access native elements of deeplearn from code that imports p5ml.js? It could be a static reference like p5ML.deeplearn.Thing, but maybe it's best to keep deeplearn in the global scope?

@cvalenzuela
Member

Since we can load the complete deeplearn library with p5ML I think making it global makes more sense. I'll push an update to reflect that.

@cvalenzuela
Member

Also, should we leave the deeplearn graph API as it is, or build something around it? The MNIST demo works by building the whole thing in deeplearn:

function buildModelGraph(checkpoints) {
  var g = new deeplearn.Graph();
  // 784-dimensional input: one flattened 28x28 MNIST image.
  var input = g.placeholder('input', [784]);
  // First fully-connected layer with ReLU activation.
  var hidden1W = g.constant(checkpoints['hidden1/weights']);
  var hidden1B = g.constant(checkpoints['hidden1/biases']);
  var hidden1 = g.relu(g.add(g.matmul(input, hidden1W), hidden1B));
  // Second fully-connected layer with ReLU activation.
  var hidden2W = g.constant(checkpoints['hidden2/weights']);
  var hidden2B = g.constant(checkpoints['hidden2/biases']);
  var hidden2 = g.relu(g.add(g.matmul(hidden1, hidden2W), hidden2B));
  // Output layer: raw logits, one per digit class.
  var softmaxW = g.constant(checkpoints['softmax_linear/weights']);
  var softmaxB = g.constant(checkpoints['softmax_linear/biases']);
  var logits = g.add(g.matmul(hidden2, softmaxW), softmaxB);
  // Return the input placeholder and the predicted-class node.
  return [input, g.argmax(logits)];
}

Do you think we should modify/simplify this?
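For comparison, here is a minimal sketch of the same forward pass in plain JavaScript, with no deeplearn dependency. The checkpoint keys and shapes here are hypothetical stand-ins (tiny matrices, not the real MNIST weights); it only illustrates the math the graph encodes: two ReLU layers followed by a linear layer and argmax.

```javascript
// Multiply a row vector x by matrix W (W is [inDim][outDim]).
function matVec(W, x) {
  const out = new Array(W[0].length).fill(0);
  for (let i = 0; i < W.length; i++) {
    for (let j = 0; j < W[0].length; j++) {
      out[j] += x[i] * W[i][j];
    }
  }
  return out;
}

const add = (a, b) => a.map((v, i) => v + b[i]);
const relu = (a) => a.map((v) => Math.max(0, v));
const argmax = (a) => a.indexOf(Math.max(...a));

// Same structure as buildModelGraph, evaluated eagerly.
// `checkpoints` uses illustrative keys (hidden1W, hidden1B, ...).
function forward(checkpoints, input) {
  const h1 = relu(add(matVec(checkpoints.hidden1W, input), checkpoints.hidden1B));
  const h2 = relu(add(matVec(checkpoints.hidden2W, h1), checkpoints.hidden2B));
  const logits = add(matVec(checkpoints.softmaxW, h2), checkpoints.softmaxB);
  return argmax(logits); // predicted class index
}
```

This is roughly the surface a simplified wrapper would need to cover: load weights, chain matmul/add/relu, take an argmax at the end.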

@shiffman
Member

shiffman commented Nov 12, 2017

I'm not sure, but probably? I think we should hold off for now, as working through other examples and scenarios will inform our thinking. I could imagine a keras-like layer above deeplearn for architecting a model? We could also do a simple MNIST demo with the NeuralNetwork class, perhaps?

I also like the idea of avoiding MNIST and using something more artist-friendly for a classification demo as discussed in #8.

@cvalenzuela
Member

This relates to the old examples and API.
Closing this since we don't have an MNIST example and https://github.com/CodingTrain/Toy-Neural-Network-JS addresses this demo better!
