We kindly thank (in no particular order)
- Artem Babenko and Vladimir Aliev for helpful discussions and editorial review of the paper,
- Jacob R. Steeves for discussions on RPC frameworks, NAT traversal, and peer-to-peer technologies,
- Dmitry Afanasiev for his guidance on networking and communication technologies,
- Lidi Zheng and the grpc-aio contributors for their awesome framework and this PR,
- Brian Muller for his implementations of kademlia and rpcudp,
- Alexander Sherbakov for helpful discussions on PC and server component architecture,
- Yandex School of Data Analysis students for helping us run the first truly collaborative experiments,
- The Neuropark community for hosting early collaborative training experiments of sahajBERT with hivemind,
- Our early adopters, contributors, and conference reviewers.
In this section, we list several organizations and research projects that bring humanity closer to the dream of world-scale deep learning with volunteer computing.
- Hugging Face — an AI community with world-leading NLP research that builds collaborative hub training using hivemind.
- EYDLE — a start-up that works towards distributed deep learning on volunteer hardware using centralized infrastructure.
- BitTensor — a decentralized deep learning ecosystem with an incentive mechanism: each peer trains toward its own objective and rewards others for useful features.
- Also building collaborative deep learning? Let us know!
hivemind-team <at> hotmail.com