I am facing an issue with file transfers: I am unable to get maximum throughput over InfiniBand when transferring files. My Mellanox device details are CA 'mlx4_0' | CA type: MT4099.
Here is a screenshot of the client side.
I also wanted to ask whether I can increase the buffer limit to improve file throughput, because the maximum I can stretch my buffer limit to is 32.
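(For reference, adapter details like the CA name and type quoted above can be listed with `ibstat` from the infiniband-diags package; this is just a sketch of how to inspect the link, using the `mlx4_0` name from the post.)

```shell
# List the InfiniBand adapter's state, firmware, and link rate.
# 'mlx4_0' is the CA name reported above.
ibstat mlx4_0

# Show only the state and active rate of port 1, to confirm the
# link came up at the expected speed (e.g. FDR vs. a degraded rate).
ibstat mlx4_0 1 | grep -E 'State|Rate'
```

If the reported rate is lower than expected, the file-transfer throughput ceiling may be the link itself rather than the transfer tool.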
Hi @jahanxb, thanks for the post. Could you describe your source and destination node environments a little more? It would be good to know what the expected storage I/O performance is, for example. Is this an HPC cluster you are working on, a local testbed, etc.?
Without transferring files (no -f flag), are you able to saturate the network with memory-to-memory tests?
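(A common way to check raw RDMA bandwidth independently of any file-transfer tool is the perftest suite. The sketch below assumes `ib_write_bw` is installed on both nodes and uses a hypothetical server address, 10.0.0.1.)

```shell
# On the server node: start an RDMA write bandwidth test,
# bound to the mlx4_0 adapter, and wait for a client.
ib_write_bw -d mlx4_0

# On the client node: connect to the server (hypothetical
# address 10.0.0.1) and run the same memory-to-memory test.
ib_write_bw -d mlx4_0 10.0.0.1
```

If this saturates the link but file transfers do not, the bottleneck is likely storage I/O or the transfer tool's buffering rather than the network.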
Hi @disprosium8, sorry for the late reply. I am using Emulab testbeds.
I have been able to connect to the nodes without the flag, and it seems that I get the intended speed.
![G9ImLey](https://user-images.githubusercontent.com/10818137/163294879-fdbd5073-81b4-4e3f-9586-160b94590fbd.png)