Discussion: Synaptic 2.x #140

Open
Jabher opened this Issue Sep 12, 2016 · 80 comments

@Jabher
Collaborator

Jabher commented Sep 12, 2016

So, we want to make Synaptic more mature.
I've created a design draft to think it over:
https://github.com/cazala/synaptic/wiki/Design-draft:-2.x-API

Let's discuss.

The most significant changes in the design:

  • Pre-built architectures are dropped. Examples will be provided and should be used instead.
    They are really just "how-to" examples, and so they are usually useless for real projects.
  • The biggest API parts (activate, train, etc.) are now async.
  • Every "moving part" used by the API lives separately, so if a NN is simply created and loaded, there is no need for a trainer, optimization objectives, optimizers, or anything else required for training. This is very useful for client-side runs, where only activation is needed and no training happens.
  • Multiple back-ends are supported. Each of them has its own strengths and weaknesses.
  • The API surface is as small as possible now.
  • Lots of things are configurable, and lots of things are pre-built.
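To make the async bullet concrete, here is a usage sketch of what such a surface could look like. All names (Network, activate, train) follow the draft's wording but are illustrative, not the final 2.x API:

```javascript
// Hypothetical sketch of an async 2.x-style API; names are illustrative only.
class Network {
  constructor(weights = [0.5, -0.25]) {
    this.weights = weights;
  }

  // Async so a backend could run activation on a worker, WebGL, etc.
  async activate(input) {
    return input.reduce((sum, x, i) => sum + x * this.weights[i], 0);
  }

  // A toy delta-rule update, just to show the async training surface.
  async train(samples, { learningRate = 0.1 } = {}) {
    for (const { input, output } of samples) {
      const error = output - (await this.activate(input));
      this.weights = this.weights.map((w, i) => w + learningRate * error * input[i]);
    }
    return this;
  }
}
```

Usage would then be `await net.train(data)` followed by `const out = await net.activate(input)`, leaving the trainer and optimizers out of the picture entirely when only activation is needed.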
Collaborator

Jabher commented Sep 12, 2016

Important question btw. Do we actually need Neuron as an exposed entity?

Owner

cazala commented Sep 13, 2016

I guess that depends on the underlying implementation. Neuron and Layers can be replaced by just Networks that handle everything. The fact that Synaptic has Neurons is because when I first read Derek Monner's paper he described a generalized unit that could use the same algorithm and behave differently according only to its position in the topology of the network (i.e. a self-connected neuron acts as a memory cell, a neuron gating that self-connection acts as a forget gate, but the same neuron gating the input of the memory cell would act as an input gate filtering the noise - and all those neurons essentially follow the same algorithm internally). That's what I found really cool about that paper, and that's why I coded the Neuron first. The Layer is then just an array of them, and a Network is an array of Layers.

The advantage of having the individual units is that you can easily connect them in any way and try new topologies (like LSTM with/without forget gates, peepholes, or connections among the memory cells). But I know that the approach other NN libraries take is more like matrix math at the network level, instead of individual units. This is probably way better for optimization/parallelization, so I'm up for it, as long as we can keep an easy and intuitive API that allows the user to create flexible/complex topologies.

jocooler commented Sep 16, 2016

I'm not totally clear on what it means to expose a neuron, but it was extremely important for my application to be able to clearly see the neurons (using toJSON). I trained the network using synaptic and implemented the results in another program (Excel).

olehf commented Sep 16, 2016

From my understanding, it would be more important to expose the ability to override/specify neurons' activation functions than to give direct access to the neurons themselves. That way a developer can concentrate on implementing higher-level functionality, e.g. by stacking layers or networks, while still having access to neurons' activation functions to implement custom networks.
That assumes the basic network types are already implemented (convolutional, recurrent, Bayesian, etc.).

I would love to contribute to the new version if any additional help is still required.

Regards,
Oleh Filipchuk


Collaborator

Jabher commented Sep 18, 2016

@jocooler That's a great note. From my point of view, removing Neuron will significantly reduce memory consumption. However, this is a good reminder that we do need a human-readable export.

schang933 commented Sep 19, 2016

Thanks for spearheading this thread @Jabher. From a user's perspective, I might add improved documentation and examples. Especially since the site is down now (#141), it's harder to refer back to examples I've found in the past.

Specifically, I think for my use case an LSTM may be a more natural approach, but I am hesitant to test it because I have not trained one before and the mechanism for RNNs seems quite different. Having more than one example (preferably 3+) would help, as users can pick the one that best matches their use case. It might help to encourage users to contribute their own examples as well (maybe something I can do if I figure this out myself).

Another point should be optimizations. I think a big reason people use other libraries is the limitations here, especially around memory. It could help to have a short guide on the wiki discussing how to let Node allocate more memory before hitting OOM, or how to use a mini-batch approach for strategies that support it. Also, regarding exposing the Neuron, I'd suggest something similar to compilers, where the toJSON method can be human-friendly in debug mode and machine-friendly otherwise. I'm seeing my memory filled with Neurons when conducting a heap analysis.
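The dual-mode export idea could be sketched like this. This is hypothetical - synaptic has no such debug option today, and the connection shape shown here is illustrative:

```javascript
// Hypothetical sketch of a dual-mode export: human-friendly in debug mode,
// compact and machine-friendly otherwise. Not an existing synaptic option.
function exportNetwork(network, { debug = false } = {}) {
  if (debug) {
    // Labeled objects: easy to eyeball, diff, or paste into a spreadsheet.
    return network.connections.map(c => ({ from: c.from, to: c.to, weight: c.weight }));
  }
  // Flat typed array of weights: cheap on memory, fast to serialize.
  return Float64Array.from(network.connections, c => c.weight);
}
```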

Collaborator

Jabher commented Sep 19, 2016

@olehf Convolutional layers (1D and 2D, plus max/avg pooling for them) are mentioned in the design document, and the same goes for RNNs - I think GRU and LSTM would be enough for the average user.

Activation functions should definitely be passable to any layer. The only issue we may encounter is custom implementations of them (as we will need multiple implementations of one function), so we should keep as much as possible in the design.

Help is (as usual) always appreciated. There is a lot of work to do, and any contribution will be significant. As soon as we decide that we're good with this design, a GH project board will be created with tasks to do.

Collaborator

Jabher commented Sep 20, 2016

@schang933 there's a new temporary website - http://caza.la/synaptic/. The old one will be back soon.

Speaking about examples - this is a valid concern, but first we should implement the API itself.
A good model for an example set is https://github.com/fchollet/keras/tree/master/examples (did I mention that Keras is documented in a great way?).

Another point should be optimizations. I think a big reason people use other libraries is the limitations here, especially around memory. It could help to have a short guide on the wiki discussing how to let Node allocate more memory before hitting OOM, or how to use a mini-batch approach for strategies that support it.

That's totally correct, but we should keep in mind that Node.js is not the only runtime target. Multiple browsers (latest Edge, Firefox and Chrome) with WebGL/WebCL and workers are also runtime targets, and nodejs-chakra is also a preferable option, as it looks like it does not have the same memory limitations (this should be verified).

About exposing Neuron while keeping a human-readable output - I totally agree.

arqex commented Sep 20, 2016

Hey, thanks for this great library! It is really nice that its development is still going.

I am not an ML expert, and synaptic has helped me so much to understand how neural networks work, thanks especially to the prebuilt architectures. If you are dropping them from the core repo, please move them to a separate one, maybe with the examples, because they make it really simple for ML non-experts to get up and running.

Cheers

Contributor

Hongbo-Miao commented Sep 24, 2016

Most related JavaScript libraries are no longer actively maintained. I hope this one keeps going!
I hope one day I'll have enough knowledge to contribute to this library.
Cheers!

Collaborator

menduz commented Sep 25, 2016

import {
  WebCL,
  AsmJS,
  WorkerAsmJS,
  CUDA,
  OpenCL,
  CPU
} from 'synaptic/optimizers';

I think those should be separate packages, in order to keep the original package as small and simple as possible.

  • That will help us create browser bundles.
  • It will reduce the scope of the main project, helping us create specific tests for each optimizer.
yogevizhak commented Sep 25, 2016

I agree, but both options should be available, for easy installation.

Collaborator

Jabher commented Sep 25, 2016

@menduz in that case I'd suggest multiple modules plus one meta-module ("synaptic" itself) that exposes them all?
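A sketch of how that split could look (all package and class names here are hypothetical): each optimizer ships as its own package, and the "synaptic" meta-module just re-exports them.

```javascript
// Hypothetical sketch: per-backend packages plus a "synaptic" meta-module
// that re-exports them. Package/class names are illustrative.

// --- e.g. a standalone "synaptic-optimizer-cpu" package ---
const cpuPackage = {
  CPU: class CPU {
    // Plain-JS fallback: apply a function element-wise on the main thread.
    run(fn, data) { return data.map(fn); }
  },
};

// --- e.g. a standalone "synaptic-optimizer-worker" package ---
const workerPackage = {
  WorkerAsmJS: class WorkerAsmJS {
    // Same interface; a real version would offload work to a Web Worker.
    run(fn, data) { return data.map(fn); }
  },
};

// --- the "synaptic" meta-package simply spreads them back together ---
const synaptic = { ...cpuPackage, ...workerPackage };
```

Users who only need one backend would install just that package, while the meta-package keeps a single import point working for everyone else.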

Collaborator

Jabher commented Oct 3, 2016

There's a nice line in the TensorFlow docs:
https://www.tensorflow.org/versions/r0.11/api_docs/index.html

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others. With SWIG, it's relatively easy to develop a TensorFlow interface for your favorite language.

For a start we actually need only a few functions, so even an incomplete port of TF would not be hard to use, and it would deal with most of the server-side performance issues - so I think we would actually be able to train at a good speed.

yonatanmn commented Oct 8, 2016

  1. I'd love to see reinforcement learning implemented.
  2. More utils would be cool - I couldn't even find an npm module that normalizes numbers. JS is missing many important tools for working with data.
    This is an area where I could happily contribute.
    Some options:
normalizeNum(min, max, num) => 0->1 // (curried?)
deNormalizeNum(min, max, 0->1) => num // (curried?)
normalizeNumericalArray([num]) => [0->1] // min and max taken from the array
normalizeCategoricalArr([string]) => [0|1] // based on uniqueness
etc...
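The helpers sketched above could look like this. The names follow the comment's pseudo-signatures and are not an existing synaptic API; the curried variants are left out:

```javascript
// Sketch of the proposed data-normalization helpers (hypothetical names).

// Map num from [min, max] into [0, 1].
function normalizeNum(min, max, num) {
  return (num - min) / (max - min);
}

// Inverse: map a [0, 1] value back into [min, max].
function deNormalizeNum(min, max, normalized) {
  return min + normalized * (max - min);
}

// Normalize a whole array, taking min and max from the array itself.
function normalizeNumericalArray(nums) {
  const min = Math.min(...nums);
  const max = Math.max(...nums);
  return nums.map(n => normalizeNum(min, max, n));
}

// One-hot encode each value based on the list of unique categories.
function normalizeCategoricalArr(values) {
  const categories = [...new Set(values)];
  return values.map(v => categories.map(c => (c === v ? 1 : 0)));
}
```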
Collaborator

Jabher commented Oct 9, 2016

Reinforcement learning sounds cool, but it can be a module on top of synaptic instead of a thing inside the core - possibly an additional package. See https://github.com/karpathy/convnetjs/blob/master/build/deepqlearn.js - it's not related to the network itself, it works on top of it.

Utils - we're planning to use Vectorious for matrix operations (https://github.com/mateogianolio/vectorious/wiki), which has a lot of nice functions (including vector normalization). For curried functions, the Ramda (http://ramdajs.com/) functional programming lib can be used.

cusspvz commented Nov 8, 2016

Just discovered the turbo.js project. Has anyone considered it as an optimizer for synaptic in the browser?

Collaborator

Jabher commented Nov 8, 2016

@cusspvz I discovered it today too.

Cazala is playing with https://github.com/MaiaVictor/WebMonkeys now as a back-end for v2. It is similar, and he says it's better - I agree, as it supports server-side computation too.

rafis commented Nov 10, 2016

Please take a look at the APIs of Torch and adnn. Also, since adnn is a competitor, check whether it has parts worth adopting.

Collaborator

Jabher commented Nov 10, 2016

@rafis speaking of popular libs - probably the best reference is Keras (as it provides the most consistent API); I've been digging through most of the popular libs for that.

But thanks a lot for ADNN - that lib looks very interesting and deserves a deeper look.

corpr8 commented Nov 11, 2016

GPU acceleration (gpu.js?)
Composite networks distributed across multiple hosts (mongo + socket.io?)

yonatanmn commented Nov 12, 2016

Some method to modify the weights of connections - I need it for a neuro-evolution algorithm. The current workaround is: toJSON, manual change, fromJSON.
Maybe Network should have a connections property pointing to the real connections, similar to the result of toJSON, and each Connection should have a method to update its weight manually.

As a general approach: expose all the internal logic through public methods. NNs have so many different use cases, and everyone needs their own configuration.
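The toJSON -> mutate -> fromJSON workaround can be sketched on a plain snapshot. The shape assumed here (a `connections` array whose entries carry a numeric `weight`) mirrors what synaptic 1.x's toJSON() produces; `mutateWeights` itself is a hypothetical helper, and the result would be fed back through Network.fromJSON():

```javascript
// Sketch of the export -> mutate -> re-import workaround for neuro-evolution.
// Operates on a plain JSON snapshot (assumed synaptic-1.x-like shape).
function mutateWeights(json, amount = 0.1) {
  return {
    ...json,
    connections: json.connections.map(c => ({
      ...c,
      // Nudge each weight by a uniform random value in [-amount, amount].
      weight: c.weight + (Math.random() * 2 - 1) * amount,
    })),
  };
}
```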

Owner

cazala commented Nov 13, 2016

hey guys, just to let you know, I'm playing in this repo, trying to build something similar to what we have in the design draft, taking all the comments here into consideration. Feedback, comments, and critiques are more than welcome. This is a WIP, so don't expect it to work yet :P

cusspvz commented Nov 14, 2016

Nice work @cazala!! Are you expecting to keep everything in the Engine so you can have better control over the import/export side?

I've built something similar from scratch and opted to make the neurons external, so I could have various types of them. Do you think you could support more than the "bias" processor neuron here?

EDIT: Also, I think it is important to define an API, or at least leave some implementation room, for plugins.
It would allow a community to grow underneath the synaptic one.

Owner

cazala commented Nov 14, 2016

Thank you @cusspvz :)

  1. What do you mean by more than the bias neuron? The algorithm I'm using for synaptic (lstm-g) defines all the units equally; what changes the role of the unit/neuron is the way it connects to the others. I.e. a neuron can be used as a gate - in this case its activation is used as the gain of the connection that it is gating. Or a neuron can connect to itself to act as a memory cell. Or neurons can be connected in layers to form a feedforward net. Or a neuron can have a fixed activation of 1, be connected to all the other units, and act as a bias unit. Each unit/neuron can have its own activation function, independently of the others. If you look at the examples in the layers directory you will see there's no math, only topology. Each layer is just a different way to connect neurons; there's no definition of how to do the math. The advantage of this is that we have a single algorithm for all the different topologies, and this can be isolated into the backend and then heavily optimized, or ported to different platforms, instead of having to do that for each layer individually. The role of the engine is just to hold the values of the weights, the states, the traces, and the relationships between units (how units are connected or gated, which units are inputs or projections of other units). It serves two purposes: first, to be storable/clonable/transportable, since it's just plain objects and arrays, and it's the minimum information required to reproduce the network elsewhere; second, to give the Backend everything it needs already served, so the Backend can focus only on doing the calculations and updating the values in the Engine.
  2. I would love to make synaptic as extensible as possible. Do you have examples of these kinds of plugins or APIs?
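The "same unit, different role" idea can be illustrated with a toy snippet (this is not the actual lstm-g engine): a connection's gain is simply the activation of whichever unit gates it, so an identical Unit class acts as a gate, a memory cell, or a bias unit purely through wiring.

```javascript
// Toy illustration (not the real lstm-g engine): every unit runs identical
// code; its role comes only from how it is wired.
class Unit {
  constructor(activation = 0) { this.activation = activation; }
}

class Connection {
  constructor(from, to, weight, gater = null) {
    Object.assign(this, { from, to, weight, gater });
  }
  // The gain of a gated connection is its gater's activation; 1 otherwise.
  get gain() { return this.gater ? this.gater.activation : 1; }
}

// Weighted, gated sum of everything feeding one unit.
function netInput(unit, connections) {
  return connections
    .filter(c => c.to === unit)
    .reduce((sum, c) => sum + c.weight * c.gain * c.from.activation, 0);
}

const input = new Unit(2);  // plain unit
const gate = new Unit(0);   // same class, acting as a gate
const cell = new Unit(0);   // same class, acting as a memory cell
const gated = new Connection(input, cell, 0.5, gate);
```

With the gate's activation at 0 nothing flows into the cell; raising it to 1 lets the full weighted signal through, with no change to any unit's code.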
@cusspvz

cusspvz commented Nov 14, 2016

The advantage of this is that we have a single algorithm for all the different topologies, and this can be isolated into the backend, and be then heavily optimized, or be ported to different platforms, instead of having to do that for each layer individually

Awesome, I understood your structure just by looking at the code; it seemed very clever when I saw it!

I've also seen that you're introducing flow layers for better piping, which is awesome for assisted neural networks, but I can't see how it might help with building non-assisted ones.

A brief story to explain what I mean by "processing neurons":

I see myself as an explorer; in fact, I've been teaching myself Machine Learning for a while, and I came up with the Liquid-State Neural Network Architecture before knowing about it.

On the first versions of my architecture, the one that was like the Liquid-State NNA, I had what I call "bias/gate/weight based neurons" working slightly differently, to include neuro-evolution in a different way from what people are doing. Each one had a lifespan that would be increased on each activation; once a neuron was dead, the network would notice it was ticking with fewer neurons and would compensate with a randomly placed one.
That feature allowed me to control the "forget" aspect by passing parameters to the neural network. I hooked up the IO to my mouse cursor and it was capable of replicating my movements (without assisted training).

Note: at this point, the network was already processing asynchronously, so it didn't need inputs to do something; a simple neuron change could trigger activations through the net all the way to an Output neuron.

It worked great for directions and images but not so well for sound patterns, so I changed the network again and added more neuron types:

  • Remote neurons - neurons that allow the network to work with others in a cluster, by passing the activations through a socket
  • Input neurons - Would only activate and pass the input value.
  • Output neurons - Would trigger a cb for each activation. NOTE: async
  • Global influential/gated neurons
  • Buffered neurons
  • Processing neurons - kind of what you're doing
  • etc... (I've reached a list of 12 types)

All of this work is, by now, private and personal, but I would like to contribute or share if it could help the developments of the synaptic v2.

I could see some of the things fitting into the "Layers" structure, but I have some doubts related with it such as:

  • Does the layers pipeline flow unidirectionally?
  • Can an LSNN be created with it?

second, to give the Backend everything it needs already served, so the Backend can focus only on doing the calculations and updating the values in the Engine.
do you have examples of these kind of plugins or APIs?

a) I really like the way "babel" works out of the box using their name/prefix priorities; the same idea could be used for "backend", "layer" and so on, like:

new Network({
  // under the hood would call 'synaptic-backend-nvidia-cuda'
  backend: 'nvidia-cuda'
})

new Network({
  // under the hood would call './backends/gpu.js'
  backend: 'gpu'
})

new Network({
  // under the hood would call './backends/web-worker.js'
  backend: 'web-worker'
})

It should be easy to implement:
Note: I've attached a code snippet here, but then I've removed and placed as a gist
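One way to sketch that babel-style resolution order (module names and the `synaptic-backend-` prefix are hypothetical, following the examples above): try a bundled backend first, then fall back to an external package:

```javascript
// Resolve a backend name to candidate module paths, babel-style.
// The './backends/' directory and 'synaptic-backend-' prefix are
// assumptions for illustration, mirroring babel's plugin naming.
function backendCandidates(name) {
  return [
    `./backends/${name}.js`,     // bundled backend, e.g. 'gpu'
    `synaptic-backend-${name}`,  // third-party package, e.g. 'nvidia-cuda'
  ];
}

// requireFn is injectable so the resolution logic is testable
// without the real modules being installed.
function loadBackend(name, requireFn = require) {
  for (const candidate of backendCandidates(name)) {
    try {
      return requireFn(candidate);
    } catch (e) {
      // not found under this name, try the next candidate
    }
  }
  throw new Error(`Unknown backend: ${name}`);
}
```

With this shape, `new Network({ backend: 'gpu' })` would just call `loadBackend('gpu')` under the hood.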

b) We must have a stable API for backends and layers before the first release, which means we must start thinking about one now.

c) When you say "async", does it allow the network to trigger neurons as a callback (i.e. multiple times), or only as a promise (which resolves just once)?

Edit: Thanks for your time! :)


@Jabher

Collaborator

Jabher commented Nov 15, 2016

@cusspvz Thanks for such great feedback! And yes, actual help will be greatly appreciated. There's a lot of routine work coming (lots of layers, lots of backends), and any help will be great.

Speaking of what you're proposing:
1.
The layer-based design is actually universal - we're abstracting groups of neurons (with more or less complicated designs) into one meta-neuron with an array of outputs.
On the other hand, a specially-designed Layer with the same external API can be implemented for the purposes of a liquid state machine, an RNN, a convnet, or computation nodes.

a) Actually this is already being discussed; for now the API is something like:


import {
  AsmJS,
  TensorFlow,
  WorkerAsmJS,
  WebCL,
} from 'synaptic/optimizers';
...
await train_network.optimize(new WorkerAsmJS());

Suggested back-ends are C++-bound TensorFlow (as it supports GPGPU, multiple CPU tricks and so on), an AsmJS-compiled math engine (both concurrent via web workers and same-thread implementations), a raw JS engine, and WebCL (possibly via Webmonkeys).

b) Agree.

c) The promise way, probably. The thing is that computations will probably be working asynchronously, so any math operation will turn into an asynchronous one.


@oxygen

oxygen commented Nov 22, 2016

I'm a noob at neural networks and haven't used yours, so excuse any insults or me completely missing the boat :)

If you want to remove the Neuron to minimise memory consumption, maybe you can replace it with a lazy Neuron interface/class, which would read, on lazy instantiation (only when needed), from the better-packed representation, and could also allow you to modify all neurons at once (or maybe even individually?).
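That lazy-view idea could look roughly like this (a sketch, not any actual Synaptic API): neuron state lives in packed typed arrays, and a thin accessor object is created only on demand:

```javascript
// Packed storage: one typed array per property, instead of one
// object per neuron. (Property names are illustrative.)
function createPackedNetwork(size) {
  return {
    activations: new Float64Array(size),
    biases: new Float64Array(size),
  };
}

// Lazy neuron "view": allocated only when asked for, and its
// getters/setters read and write straight into the packed arrays.
function neuronAt(net, i) {
  return {
    get activation() { return net.activations[i]; },
    set activation(v) { net.activations[i] = v; },
    get bias() { return net.biases[i]; },
    set bias(v) { net.biases[i] = v; },
  };
}
```

Modifying all neurons at once then becomes a bulk array operation, e.g. `net.biases.fill(1)`, while individual access still works through the view.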


@menduz

Collaborator

menduz commented Nov 23, 2016

@Oxygens Thanks, neurons don't even exist in the new draft!


@frabaglia

frabaglia commented May 9, 2017

@Jabher I really get it, but it was really hard to say before talking with you.

From my side, I'll be playing around with Keras and TF in the next weeks, and I expect to be at least useful for giving some opinions on the next draft.

Going back to concurrency, this React Fiber design explanation looks interesting as a way to see how Facebook dealt with it. (Actually, they did not.)


@frabaglia

frabaglia commented May 9, 2017

I forgot the tl;dr: the concurrency topic starts at:

Aren't you just working around the lack of threads in the language?
(2nd comment)

cc @cazala


@maximilianmetti

maximilianmetti commented May 23, 2017

Hey guys, nice library! I've been using it for training some basic neural nets for some pattern recognition and wanted to point out that it's really nice to have a tool that can handle both training and activating nets in JS (only activations are currently supported in keras-js, if I understand correctly).

Training a large variety of nets is no small task; is this the main benefit of Synaptic v2 over a library like Keras.js?

@Jabher how (& when) do you see synaptic2 using florida? Is it supposed to be one of the optimizers or the core optimizer until other backends are hooked up? Maybe that's a question for @cazala

For my purposes, I have been using synaptic to train many small nets rather than a few large ones; keeping the training process quick and easy (and all in JS) was a huge benefit for me; this helped me avoid switching between js, python, c++, etc.

I'm very eager to follow any advances you guys make and would be glad to lend a hand to this project as I think internet browsers are a great environment for innovative applications.


@Bondifrench

Bondifrench commented May 25, 2017

So is Florida going to replace CPU.js in the backends of Synaptic 2?
Wouldn't it also be beneficial to separate the activation functions (Sigmoid, Tanh, etc.) into a separate module?


@Plazmaz

Plazmaz commented Jun 14, 2017

I am unsure if this is the proper place to discuss this, but it would be amazing to have stronger support for genetic-based training and NEAT models. There are some scenarios where it is simply more appropriate to use evolutionary methods over backpropagation.


@wagenaartje

Contributor

wagenaartje commented Jun 14, 2017

@Plazmaz, you might want to check out my library Neataptic - I think it is exactly what you need :).


@Plazmaz

Plazmaz commented Jun 14, 2017

@wagenaartje I was just about to update my comment with that. It still seems like something that would fit in this project as well. I'm not sure why you decided to branch off versus making a PR to this repo. It seems like you've got some great improvements.


@wagenaartje

Contributor

wagenaartje commented Jun 14, 2017

@Plazmaz I didn't create a PR because I had to change how the networks are made up entirely. But once Synaptic v2 has been released, I'll make a separate extension for it that allows neuro-evolution 👍


@playground

playground commented Jul 20, 2017

@Jabher
I just got introduced to Synaptic a few days ago and I just stumbled onto this...

c) The promise way, probably. The thing is that computations will probably be working asynchronously, so any math operation will turn into an asynchronous one.

Have you considered using rxjs (which is more like promises on steroids) over Promises? Since it can handle multiple values, is lazy in the sense that nothing gets executed until there is a subscriber, and can be cancelled or reused, it sounds like a good fit for this use case with large data and heavy computations.
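The laziness and multiple-value points can be illustrated without rxjs itself. A toy observable (not the real rxjs API): nothing runs until `subscribe` is called, and the producer may emit any number of values, unlike a promise's single resolution:

```javascript
// Toy observable: the producer function only runs on subscribe,
// and may push any number of values to the subscriber.
function observable(producer) {
  return {
    subscribe(onValue) {
      producer(onValue); // lazy: nothing happened before this call
    },
  };
}

// Hypothetical use: streaming training progress, one value per epoch,
// which a single promise could not express.
const epochs = observable(emit => {
  for (let epoch = 1; epoch <= 3; epoch++) {
    emit({ epoch, error: 1 / epoch });
  }
});

const seen = [];
epochs.subscribe(v => seen.push(v.epoch));
```

The real rxjs adds operators, cancellation via unsubscription, and scheduling on top of this core shape.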


@buckle2000

buckle2000 commented Aug 6, 2017

One question:
Why async? Computational code should run in sync for best performance.


@riatzukiza

riatzukiza commented Aug 6, 2017

@buckle2000

buckle2000 commented Aug 11, 2017

@riatzukiza

  • sync code can be run in multiple threads using fork
  • you can easily wrap a sync function to be async, but it's hard to do the opposite
  • never run computationally massive functions in the same thread as frontend code
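The second point, wrapping a sync function as async, is essentially a one-liner (a sketch; `activateSync` is a hypothetical stand-in for a synchronous backend call):

```javascript
// Wrap any synchronous function so that callers get a promise,
// matching the interface of a genuinely async backend.
const asyncify = fn => (...args) =>
  Promise.resolve().then(() => fn(...args));

// Hypothetical synchronous activation, for illustration only.
const activateSync = inputs => inputs.map(x => x * 2);

const activateAsync = asyncify(activateSync);
// activateAsync([1, 2]) now returns a Promise resolving to [2, 4]
```

Going the other way, forcing an inherently async computation to look synchronous, would require blocking the event loop, which is why the API should pick async as the common denominator.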

@playground

playground commented Aug 11, 2017

@buckle2000 if the heavy computation runs on the frontend, we can utilize a web worker.


@playground

playground commented Aug 11, 2017

Is Synaptic 2 available anytime soon for preview?


@oxygen

oxygen commented Aug 11, 2017

Here's a (young) library to seamlessly bind together master and workers:
https://github.com/bigstepinc/jsonrpc-bidirectional

It's made by me, so if you need help or features, I'm here for you.


@vickylance

vickylance commented Aug 23, 2017

I know you guys might have seen this, but I just wanted to throw it in here in case you haven't. There is a library being built in JavaScript specifically for Machine Learning, much like numpy, for accurate and big-number manipulation in JavaScript. And yup, it's a mathematical library.

https://github.com/stdlib-js/stdlib

So it can come in handy... ¯\_(ツ)_/¯


@playground

playground commented Aug 23, 2017

I started looking into synaptic and like how it's put together. I was hoping synaptic 2 will be available soon. What other JS libraries for machine learning and deep learning would you recommend?


@playground

playground commented Aug 23, 2017

Thanks @vickylance

@buckle2000

buckle2000 commented Sep 3, 2017

Also, msgpack (schema-less, but it also supports schemas) or flatbuffers (with schema) could be used to export/import data (e.g. networks).


@playground

playground commented Sep 3, 2017

@buckle2000 @vickylance do you have any working examples or repos using these libraries?


@vickylance

vickylance commented Sep 3, 2017

@buckle2000 but it will only help in shrinking the size of the JSON network file, right? So only the saving and retrieving time would be reduced, and I'm not even sure it will help with the saving time, because the encoding may take longer than JSON. Only the file size will be reduced. But it won't help much with the processing of the network; the computation will still remain the same.

Correct me if I'm wrong?


@buckle2000

buckle2000 commented Sep 3, 2017

@vickylance

vickylance commented Sep 3, 2017

@playground As of right now I don't have anything with those libraries.
@buckle2000 ok.

Also, there is a new math library being built for JavaScript machine learning, because the default Math library in JavaScript and Node.js is quite erroneous. It only offers a few mathematical functions; advanced trigonometric and n-dimensional functions are not present, and the ones that are present rely on the default JavaScript/Node.js Math library, so every math library on npm inherits those errors and is not safe for Machine Learning. They also only go up to float32, which is pretty limited when it comes to ML. So check out this math library, which is being built to an industry standard; every mathematical function has a citation paper associated with it to prove its accuracy.

https://github.com/stdlib-js/stdlib


@pandaGaume

pandaGaume commented Jan 11, 2018

Hello, and thanks for the library.
I'm porting the underlying algorithm to microcontrollers (C language), supporting Perceptron and LSTM. The goal is to build and train a network using synaptic on a decent CPU for pattern recognition and/or time-series analysis, and finally upload the trained model to the edge. In this process, the exchange format will be central. I'm quite happy with JSON, but I wanted to know what the v2 JSON will look like. Also, could you introduce a version property, such as { version: "2.x", neurons: [...], ... }?
Ready to help with some topics if you need.
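A version property in the serialized format could look like the following sketch (field names beyond `version` are hypothetical; the v2 schema isn't defined yet):

```javascript
// Hypothetical versioned export: tag the serialized network so
// consumers (e.g. a C port on a microcontroller) can dispatch on
// the format version before parsing the rest of the payload.
function exportNetwork(network) {
  return JSON.stringify({
    version: '2.0.0',              // semver of the serialization format
    neurons: network.neurons,      // illustrative field
    connections: network.connections, // illustrative field
  });
}

const json = exportNetwork({ neurons: [], connections: [] });
const parsed = JSON.parse(json);
```

A reader that only understands version 1.x could then fail fast with a clear error instead of misinterpreting the data.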


@vickylance

vickylance commented Jan 21, 2018

So this project is officially dead?


@dhrubesh

dhrubesh commented Jan 22, 2018

Can someone help me on how to run the "paint an image" demo locally on my system?


@pandaGaume

pandaGaume commented Jan 22, 2018

If I understand correctly, you want to reproduce the demo on localhost. For that purpose, you have the code of the demo at
http://caza.la/synaptic/scripts/controllers/paint-an-image.js
You simply have to embed/modify it in a local HTML page with the synaptic lib referenced locally.
The graphical presentation, or the use of any framework such as Angular, is up to you.


@freelogic

freelogic commented Mar 19, 2018

Does it support CUDA or GPU?


@DhavalW

DhavalW commented Jul 26, 2018

Would you consider making the backend / computation engine pluggable?

I'd love to integrate it with St8Flo - http://www.st8flo.com, so that processing can be distributed over clusters of machines in parallel.

St8Flo aims to enable combining IoT + big data processing + blockchain storage + ML analysis / REST interfaces etc., into a single seamless hybrid distributed system that can run across clusters of commodity computers / IoT devices - all built with JavaScript (Node.js).

It's already reached a working prototype. Not yet published / documented though.

Currently building integrations and higher-level APIs for

  • Blockchain
  • Distributed stores / DB
  • ML / Analytics
  • Existing Infrastructure interfacing

Would love to integrate neural network capabilities through your library !

