Test scenario: I used fashion_mnist_train_test.py. FashionMNIST was pre-downloaded locally (download=False); the remote node had not downloaded it, yet the program still ran.

My guess: does worker_loop read the local training data and then send it to the remote node? I am not very familiar with Ray, so this is only a guess. If that is really the case, the efficiency is actually quite low. The pattern that is really needed is:
I took a look: FashionMNIST preloads the data into memory, so the dataset will be very large when it is transferred. I will test the case where the data is not preloaded into memory.
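This preloading behavior can be illustrated with a small self-contained sketch (plain `pickle` standing in for Ray's serializer, and toy dataset classes standing in for torchvision's FashionMNIST — both are simplifying assumptions for illustration):

```python
import pickle

# Toy stand-in for a torchvision-style dataset that preloads every
# sample into memory at construction time (as FashionMNIST does).
class EagerDataset:
    def __init__(self, n_samples: int):
        # All samples live in memory, so they travel with the object.
        self.data = [bytes(784) for _ in range(n_samples)]  # 28x28 "images"

# A lazy variant that only remembers where the data lives and would
# load each sample on demand inside the remote worker.
class LazyDataset:
    def __init__(self, root: str, n_samples: int):
        self.root = root
        self.n_samples = n_samples

eager = EagerDataset(10_000)
lazy = LazyDataset("/data/FashionMNIST", 10_000)

# Shipping the eager dataset to a remote worker serializes every sample;
# the lazy one serializes only a path and a count.
eager_size = len(pickle.dumps(eager))
lazy_size = len(pickle.dumps(lazy))
print(eager_size, lazy_size)  # the eager payload is orders of magnitude larger
```

This is why an in-memory dataset "just works" on a remote node that never downloaded the files: the data itself rides along inside the serialized object.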
Yes, the distributed setting itself introduces network overhead, so we have to make sure that the data preprocessing time is much greater than (>>) the network time before we see any benefit; not all preprocessing is suitable for distributed preprocessing. And you are right: the FashionMNIST dataset loads the entire dataset into memory at once.
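The break-even condition above can be made concrete with a back-of-the-envelope model (the function and the timings are made up for illustration, not measured from Ray): distributed preprocessing only wins when the preprocessing time saved exceeds the added network time.

```python
def distributed_wins(t_preprocess: float, t_network: float, n_workers: int) -> bool:
    """Crude cost model: local cost is the full preprocessing time;
    distributed cost is preprocessing split across workers plus the
    one-off network transfer of the data."""
    local_cost = t_preprocess
    distributed_cost = t_preprocess / n_workers + t_network
    return distributed_cost < local_cost

# Heavy preprocessing dominates the transfer: offloading pays off.
print(distributed_wins(t_preprocess=100.0, t_network=5.0, n_workers=4))  # True
# FashionMNIST-style case: data is already in memory and cheap to
# preprocess, so the transfer dominates and distribution loses.
print(distributed_wins(t_preprocess=1.0, t_network=5.0, n_workers=4))    # False
```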