
To improve the stability and performance of the Hive2Hive library, several optimizations have been built in.


Concurrency

Some operations in the User Management and File Management might take quite some time due to:

  • Key Pair Generation
  • Network Latency

For this reason, all operations are executed in an asynchronous manner such that they can run in parallel. Furthermore, each operation is composed of process components of the Hive2Hive Process Framework. This allows fine-tuning the concurrency of process-internal steps even further.
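For illustration, the sketch below shows how such asynchronous, composable steps could be expressed with plain java.util.concurrent primitives. This is not the actual Process Framework API; the step contents (key generation, a stand-in for a DHT fetch) are hypothetical examples of the expensive work mentioned above.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.concurrent.CompletableFuture;

class AsyncOperationSketch {
    public static void main(String[] args) {
        // Expensive steps run asynchronously so callers are never blocked.
        CompletableFuture<KeyPair> keyGeneration = CompletableFuture.supplyAsync(() -> {
            try {
                KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
                gen.initialize(2048);
                return gen.generateKeyPair();   // CPU-bound: key pair generation
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        // Independent steps (e.g. a DHT lookup) can run in parallel ...
        CompletableFuture<String> networkFetch =
                CompletableFuture.supplyAsync(() -> "profile-from-DHT"); // stands in for network latency

        // ... and are joined only where a later step needs both results.
        keyGeneration.thenCombine(networkFetch,
                        (keys, profile) -> profile + " / " + keys.getPublic().getAlgorithm())
                .thenAccept(System.out::println)
                .join();
    }
}
```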

Caching

Oftentimes, data objects have to be fetched from the DHT for more than one operation. Instead of getting and putting such objects multiple times, they are cached locally.

User Profile Caching

As mentioned above, all operations can run in parallel. This soon led to race conditions where one process overwrote another process's changes. Although the concurrency handling described here is elaborate and trustworthy, such conflicts slow down the execution of processes dramatically because the affected processes need to roll back and restart.

Most processes need to read and / or modify the UserProfile. As a solution, a central, process-overarching mechanism was built that lets multiple processes fetch and modify the UserProfile. Processes issue read-only or read-and-write requests, which are queued in two separate queues: Qw for write requests and Qr for read-only requests. Multiple processes may read the profile in parallel, so they can be served at the same time. Allowing multiple processes to modify the UserProfile simultaneously, however, would lead to race conditions and inconsistencies. A write request therefore blocks until the change has been made (or the maximum allowed modification time has expired). As soon as the process releases the lock, the changes are applied and uploaded, the processing of the queues continues, and waiting processes are served with the updated profile.

Qw has a higher priority than Qr because every write request includes a fetch of the newest profile from the DHT, which is then modified. After this fetch, not only the foremost process in Qw but also all processes in Qr can be served with the fetched, not yet modified profile.

Each node in the network has its own local UserProfile manager, so concurrency issues among multiple peers are not reduced; concurrency within a peer, however, is eliminated. This UserProfile manager not only speeds up the processes because version conflicts are avoided, but also improves network efficiency: the UserProfile does not have to be fetched by every process that reads it, but only once by the UserProfile manager, which then distributes it among the waiting processes.
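The following is a minimal sketch of such a per-peer profile manager. The UserProfile and ProfileStore types are hypothetical stand-ins for the real Hive2Hive classes, and a fair read-write lock only approximates the Qw/Qr priority queueing described above; it is not the library's implementation.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Consumer;

class UserProfile { /* hypothetical stand-in */ }

interface ProfileStore {            // abstracts the DHT get/put
    UserProfile fetch();
    void upload(UserProfile profile);
}

class UserProfileManagerSketch {
    // Fair mode queues waiting readers and writers in order,
    // roughly approximating the Qw/Qr scheme described above.
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
    private final ProfileStore store;
    private volatile UserProfile cached;   // last profile seen by this peer

    UserProfileManagerSketch(ProfileStore store) {
        this.store = store;
    }

    /** Read-only access: many processes may hold the read lock at once. */
    UserProfile read() {
        lock.readLock().lock();
        try {
            if (cached == null)
                cached = store.fetch();    // duplicate fetches by racing readers are benign
            return cached;
        } finally {
            lock.readLock().unlock();
        }
    }

    /** Exclusive modification: fetch newest, mutate, upload, then release. */
    void modify(Consumer<UserProfile> change) {
        lock.writeLock().lock();
        try {
            cached = store.fetch();        // always modify the newest version
            change.accept(cached);         // apply the caller's modification
            store.upload(cached);          // waiting readers then see the updated profile
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```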

User Encryption Public Key Caching

Since a user's User Encryption Public Key never changes and is used frequently by other users, those users' User Clients cache this key for some time. On every use of the key, the lifespan of the cache entry is extended.

This cache is stored locally on the user client's disk so that it can be reused once the client reconnects to the network.
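A minimal sketch of such a sliding-expiration cache is shown below. The PublicKeyCache type and the TTL value are assumptions for illustration, and persistence to disk is omitted.

```java
import java.security.PublicKey;
import java.util.HashMap;
import java.util.Map;

/**
 * Sliding-expiration cache for other users' public keys:
 * every lookup that hits extends the entry's lifetime.
 * (Disk persistence, as described above, is omitted here.)
 */
class PublicKeyCache {
    private static final long TTL_MS = 10 * 60 * 1000; // assumed lifespan

    private static final class Entry {
        final PublicKey key;
        long expiresAt;
        Entry(PublicKey key, long expiresAt) {
            this.key = key;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();

    synchronized void put(String userId, PublicKey key) {
        entries.put(userId, new Entry(key, System.currentTimeMillis() + TTL_MS));
    }

    /** Returns the cached key and extends its lifespan, or null if absent/expired. */
    synchronized PublicKey get(String userId) {
        Entry e = entries.get(userId);
        long now = System.currentTimeMillis();
        if (e == null || e.expiresAt < now) {
            entries.remove(userId);
            return null;                 // caller must fetch the key from the DHT
        }
        e.expiresAt = now + TTL_MS;      // sliding expiration on each use
        return e.key;
    }
}
```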