In multi-threaded training, some threads update the gradient g while other threads read the parameter s. With the lock in place the program runs fine, but it seems a single sample's update to the parameters could still be applied inconsistently (line 280: `mu.mtx.lock();`). Personally I think the lock should be taken outside the loop body.
You could absolutely move the lock to an outer level, even up to per-feature granularity, but that would hurt throughput. So here I chose to lock at the smallest possible granularity. Parameter servers take a similar approach: parameter updates are not strictly consistent, and practice shows this does not affect convergence at all. You will even find that, in most cases, not locking at all makes little difference.
Hi castellan, is there any proof of this? I have the same confusion.
I wrote some multi-threaded training code that locks when updating parameters. In testing, the model's AUC always fluctuated a lot, and the weights learned across runs also varied, so I gave up on the multi-threaded version; the single-threaded one has been very stable.