Documentation: captain log, readme and golint fixes
License: MIT
Signed-off-by: Hector Sanjuan <hector@protocol.ai>
hsanjuan committed Feb 15, 2017
1 parent 469cf51 commit 8e45ce6
Showing 8 changed files with 110 additions and 57 deletions.
17 changes: 17 additions & 0 deletions CAPTAIN.LOG.md
@@ -1,5 +1,22 @@
# IPFS Cluster - Captain's log

## 20170215 | @hsanjuan

A global replication factor is now supported! A new configuration file option, `replication_factor`, specifies how many peers should be allocated to pin a CID. `-1` means "pin everywhere" and maintains compatibility with the previous behaviour. A pin request with a replication factor >= 1 is subject to a number of requirements:

* The CID must not be allocated already. If it is, the pin request returns with an error saying so.
* Enough peers must be found to pin it.
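
For illustration only, here is what the relevant fragment of a peer's configuration might look like with a replication factor of 2 (a sketch, not a full configuration; the surrounding field follows the README example):

```
{
  "state_sync_seconds": 60,
  "replication_factor": 2
}
```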

Deciding how content is allocated to peers has been most of the work in this feature. Three new components handle it (a rough sketch follows the list):

* An `Informer` component, used to fetch some metric from the local peer (Cluster is agnostic to what it measures). The metric has a Time-to-Live and is pushed to the Cluster leader at TTL/2 intervals.
* An `Allocator` component, which provides an `Allocate()` method. Given the current allocations, the candidate peers and the last valid metrics pushed by the `Informers`, it decides which peers should perform the pinning. For example, the metric could be the used disk space on a cluster peer, and the allocation algorithm would sort candidate peers by that metric: the peers with the least disk used come first and are chosen to perform the pin. An `Allocator` could also work on a location metric and make sure that the chosen location differs from the already existing ones, etc.
* A `PeerMonitor` component, which is in charge of logging metrics and providing the last valid ones. It will be extended in the future to detect peer failures and trigger alerts.
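
To make the division of labour concrete, here is a rough Go sketch of the three components. All names and signatures are illustrative assumptions, not the actual ipfs-cluster interfaces:

```go
package cluster

import "time"

// PeerID stands in for a libp2p peer identifier (illustrative type).
type PeerID string

// Metric is a measurement produced by an Informer. It carries a
// Time-to-Live; each peer pushes its metrics to the Cluster leader
// at TTL/2 intervals so the leader always holds a valid value.
type Metric struct {
	Name    string
	Peer    PeerID
	Value   string
	Expires time.Time
}

// Valid reports whether the metric is still within its TTL.
func (m Metric) Valid() bool { return time.Now().Before(m.Expires) }

// Informer fetches some metric from the local peer. Cluster itself
// stays agnostic to what the metric means.
type Informer interface {
	Name() string
	GetMetric() Metric
}

// Allocator decides which candidate peers should pin a CID, given
// the current allocations and the last valid metric per candidate.
type Allocator interface {
	Allocate(cid string, current, candidates map[PeerID]Metric) ([]PeerID, error)
}

// PeerMonitor logs incoming metrics and provides the last valid ones.
type PeerMonitor interface {
	LogMetric(Metric)
	LastMetrics(name string) []Metric
}
```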

The current allocation strategy is a simple one called `numpin`, which just distributes the pins according to how many CIDs each peer is already pinning. More useful strategies should come in the future (help wanted!).
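
As a hedged sketch of such a strategy, a numpin-style allocation could simply sort candidates by their current pin count, fewest first (hypothetical code reusing the illustrative `PeerID` type from the sketch above; not the actual implementation):

```go
package cluster

import "sort"

// allocateNumpin picks up to "needed" peers, preferring those that
// already pin the fewest CIDs. pinCounts maps each candidate peer to
// the number of CIDs it currently pins.
func allocateNumpin(pinCounts map[PeerID]int, needed int) []PeerID {
	peers := make([]PeerID, 0, len(pinCounts))
	for p := range pinCounts {
		peers = append(peers, p)
	}
	// Fewest pins first.
	sort.Slice(peers, func(i, j int) bool {
		return pinCounts[peers[i]] < pinCounts[peers[j]]
	})
	if needed > len(peers) {
		needed = len(peers)
	}
	return peers[:needed]
}
```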

The next steps in Cluster will be wrapping up this milestone with failure detection and re-balancing.

## 20170208 | @hsanjuan

So much for commitments... I missed last Friday's log entry. The reason is that I was busy with the implementation of [dynamic membership for IPFS Cluster](https://github.com/ipfs/ipfs-cluster/milestone/2).
12 changes: 9 additions & 3 deletions README.md
@@ -30,13 +30,14 @@ Current functionality only allows pinning in all cluster peers, but more strateg
## Table of Contents

- [Background](#background)
- [Maintainers and roadmap](#maintainers-and-roadmap)
- [Maintainers and Roadmap](#maintainers-and-roadmap)
- [Install](#install)
- [Usage](#usage)
- [`ipfs-cluster-service`](#ipfs-cluster-service)
- [`ipfs-cluster-ctl`](#ipfs-cluster-ctl)
- [Quick start: Building and updating an IPFS Cluster](#quick-start-building-and-updating-an-ipfs-cluster)
- [API](#api)
- [Architecture](#architecture)
- [Contribute](#contribute)
- [License](#license)

@@ -46,7 +47,7 @@ Since the start of IPFS it was clear that a tool to coordinate a number of diffe

`ipfs-cluster` aims to address these issues by providing an IPFS node wrapper which coordinates multiple cluster peers via a consensus algorithm. This ensures that the desired state of the system is always agreed upon and can be easily maintained by the cluster peers. Thus, every cluster node knows which content is tracked, can decide whether to ask IPFS to pin it, and can react to contingencies like node reboots.

## Maintainers and roadmap
## Maintainers and Roadmap

This project is captained by [@hsanjuan](https://github.com/hsanjuan). See the [captain's log](CAPTAIN.LOG.md) for a written summary of current status and upcoming features. You can also check out the project's [Roadmap](ROADMAP.md) for a high level overview of what's coming and the project's [Waffle Board](https://waffle.io/ipfs/ipfs-cluster) to see what issues are being worked on at the moment.

@@ -95,7 +96,8 @@ You can add the multiaddresses for the other cluster peers the `bootstrap_multia
"ipfs_proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
"ipfs_node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
"consensus_data_folder": "/home/hector/go/src/github.com/ipfs/ipfs-cluster/ipfs-cluster-service/d1/data",
"state_sync_seconds": 60
"state_sync_seconds": 60,
"replication_factor": -1
}
```

@@ -280,6 +282,10 @@ This is a quick summary of API endpoints offered by the Rest API component (thes
|POST |/pins/{cid}/recover |Recover CID|
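
For example, recovering a CID could be triggered with a plain HTTP POST. The sketch below assumes the REST API listens on localhost port 9094 and uses a placeholder CID; adjust both to your setup:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholder CID and assumed API address.
	cid := "QmExampleCid"
	url := fmt.Sprintf("http://127.0.0.1:9094/pins/%s/recover", cid)

	resp, err := http.Post(url, "", nil) // no request body needed
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```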


## Architecture

The best place to get an overview of how Cluster works and what components exist is the [architecture.md](architecture.md) doc.

## Contribute

PRs accepted.
Expand Down