Awesome stuff #5

Open
FlorianHeigl opened this issue Jul 16, 2024 · 0 comments

FlorianHeigl commented Jul 16, 2024

Thank you for not letting this rot away as a student project we'd only ever find papers about; instead, you shared it.
And it's feasible to try out, albeit at a smaller scale, once one has some pmem-enabled systems. Wonderful!

(I wish I were able to port this over to the 48-core Cavium hardware, since it also has partitioning. From the paper, you stopped measuring at 8 clients due to line-rate exhaustion; as a sysadmin, I have to say that is where it starts to get interesting, and your parallel "busy" handling looked very good. It would have been interesting to see how it holds up under very high overload, e.g. 1024 clients, since that is what real enterprise storage is expected to handle gracefully (in other words, an architecture that is load-independent, so it doesn't topple, fall over, implode, or fold into itself when queues become excessive). Your work seems to have had potential for that, even if it was of course very specific in its use of pmem. Today, with NVMe direct (GPU Direct?) reads, it would probably even be able to integrate slower storage in the same manner.

Oh well, maybe some lucky day an engineer notices this code and it becomes a thing ;-) Till then, again, thank you for sharing.)
