From a student's perspective: most students do not have an expensive NVIDIA GPU.
Some people even run Linux and Julia on a Samsung Galaxy phone via DeX
(connect a monitor, keyboard, and mouse, and you have a PC).
TensorFlow used all 12 of my CPU cores when training a model.
Training and running models on the CPU keeps neural network development cheap and surprisingly fast.
The super-expensive LLM approach is not the right fit for every use case.
Julia's parallelism makes it easy to build a cluster of Julia machines.
Training on 10 computers, where only 2 of them have a supported GPU, would still speed up the process.
For example, all the lab hardware could train neural network models overnight;
those computers sit unused most of the time anyway.
This pairs well with genetic algorithms for designing layer architectures and tuning configuration parameters, even dataset formatting and sizes.
A GPU won't help there.
A cluster of 10 Julia computers would speed up the genetic algorithm significantly.
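As a minimal sketch of what such a cluster could look like, here is a setup using nothing but Julia's built-in `Distributed` standard library; the host names and the `work` function are placeholders, not a recommendation from Lux:

```julia
using Distributed

addprocs(4)                                  # 4 extra worker processes on this machine
addprocs([("user@lab-pc-1", 4),              # remote workers over SSH (hypothetical hosts)
          ("user@lab-pc-2", 4)])

@everywhere work(i) = sum(abs2, rand(1000))  # stand-in for one training/evaluation job

results = pmap(work, 1:100)                  # scheduled dynamically across all workers
```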
A genetic algorithm that discovers and constructs a neural network model from scratch to solve a given problem would be a perfect topic for scientific publications as well.
Lux already uses multiple cores by default. Note that it won't always use multiple cores, since parallelizing has an inherent cost; it only parallelizes when there is a benefit.
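If Lux does not appear to use all cores, the first thing worth checking is how much parallelism the Julia session actually has. A minimal check, using only standard-library calls:

```julia
using LinearAlgebra

println("Julia threads: ", Threads.nthreads())     # controlled by `julia --threads=auto`
println("BLAS threads:  ", BLAS.get_num_threads()) # used for dense CPU matmuls

# BLAS threading can be raised independently of Julia's own threads:
BLAS.set_num_threads(Sys.CPU_THREADS)
```

A session started without `--threads` runs a single Julia thread, which is a common reason training looks single-core.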
Distributed computing is covered in this part of the documentation: https://lux.csail.mit.edu/stable/manual/distributed_utils. Note that distributing across heterogeneous hardware is not as simple as letting the job run on all the computers: you need to handle load balancing, or the computation is bottlenecked by the slowest machines.
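One simple way to get dynamic load balancing from the standard library is `pmap`, which hands tasks to workers one at a time, so faster machines naturally pick up more work. A hedged sketch, with `train_candidate` as a stand-in for a real training job:

```julia
using Distributed

@everywhere function train_candidate(config)
    sleep(rand())   # stand-in for training time that varies by machine
    return rand()   # stand-in for a validation loss
end

losses = pmap(train_candidate, 1:50)  # slow workers automatically get fewer tasks
```

This is only first-order load balancing; it does nothing about a single task that is itself too large for the slowest machine.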
> A genetic algorithm that discovers and constructs a neural network model from scratch to solve a given problem would be a perfect topic for scientific publications as well.
This particular area is called Neural Architecture Search. It is a pretty interesting domain, though I am not sure if anyone has used Lux for such applications. Would be cool to try it out for sure.
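To make the idea concrete, here is a deliberately tiny (1+1) evolutionary sketch over hidden-layer widths; `build`, `fitness`, `mutate`, and `evolve` are all illustrative names, and the fitness proxy (training loss after a few Adam steps) is just one of many possible choices:

```julia
using Lux, Random, Optimisers, Zygote

rng = Xoshiro(0)
X = rand(rng, Float32, 1, 64)        # toy 1-D regression data
Y = sin.(2f0 .* X)

build(w) = Chain(Dense(1 => w[1], tanh), Dense(w[1] => w[2], tanh), Dense(w[2] => 1))

# Fitness: training loss after a few Adam steps (a cheap proxy for trainability).
function fitness(w)
    model = build(w)
    ps, st = Lux.setup(rng, model)
    opt = Optimisers.setup(Adam(0.01f0), ps)
    loss(p) = sum(abs2, first(model(X, p, st)) .- Y) / length(Y)
    for _ in 1:50
        grads = only(Zygote.gradient(loss, ps))
        opt, ps = Optimisers.update(opt, ps, grads)
    end
    return loss(ps)
end

mutate(w) = map(x -> clamp(x + rand(rng, -4:4), 2, 64), w)

# (1+1) evolution: keep a single parent, replace it when a mutant does better.
function evolve(generations)
    best = (4, 4)
    best_f = fitness(best)
    for _ in 1:generations
        cand = mutate(best)
        f = fitness(cand)
        f < best_f && ((best, best_f) = (cand, f))
    end
    return best, best_f
end

best_widths, best_loss = evolve(20)
```

Each `fitness` call is independent of the others, so this is exactly the kind of loop that `pmap` above could spread across a lab's machines.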
I am going to close this issue; please open a new issue with a concrete example of a model where Lux isn't using multiple cores.