Strong Deniability on content replication and consumption in P2P networks #21

gpestana opened this issue Mar 13, 2019 · 3 comments

gpestana commented Mar 13, 2019

We need last-mile P2P caching that does not disclose what content nodes are consuming. We need protocols for scalable storage and distribution in P2P networks with strong deniability, i.e. peers that perform caching can provably and strongly deny being interested in the content they are providing.

Commonly, if node_a is interested in content X stored in the network, it will 1) ask the network peers for it with lookup(X) and 2) replicate it locally. The problem with this approach: node_b can claim that node_a is interested in content X if it intercepts a lookup(X) from node_a or if node_a replicates X. In this example, node_a leaks metadata about its behaviour. In practice, it broadcasts to the world what content it is consuming, exposing people to targeted political, social and economic attacks based on their profiles.
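
For concreteness, a minimal sketch (in Go) of the naive flow, assuming a hypothetical Network interface with Lookup and Store primitives; neither call is part of an existing library, and both are observable, which is exactly where the leak happens:

// Network is an assumed interface over the P2P overlay; Lookup and Store
// are hypothetical primitives, not an existing API.
type Network interface {
  Lookup(resourceID string) ([]byte, error)   // request observable by other peers
  Store(resourceID string, data []byte) error // replication observable by other peers
}

// naiveFetch is what node_a does today: 1) lookup(X), then 2) replicate X locally.
// Any node_b observing either step can claim node_a is interested in X.
func naiveFetch(n Network, resourceID string) ([]byte, error) {
  data, err := n.Lookup(resourceID) // step 1: leaks interest in X
  if err != nil {
    return nil, err
  }
  if err := n.Store(resourceID, data); err != nil { // step 2: leaks interest in X
    return nil, err
  }
  return data, nil
}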

The generalised problem is the following: how can a network of nodes distribute and request content among themselves without leaking information about their behaviour to other peers?

Related:

gpestana commented Mar 13, 2019

One way to achieve this is to build a protocol which replicates data requested in the (network) neighbourhood regardless of whether peers are interested in that data.

Imagine a tracker which keeps a log of the content stored in the network and of which peers are storing it. A peer requests pointers for a given piece of content from the tracker. The tracker replies with a set of peer-blocks data structures, each of which maps a set of resource blocks to the peer that is caching them. The tracker constructs the set of peer-blocks so that, when the requester issues the network requests, adversaries cannot distinguish replication from consumption patterns. This also ensures that network content stays available and replicated.
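
A rough sketch of this tracker interaction, assuming a hypothetical Tracker interface and a caller-supplied fetchBlock transfer function (neither exists in any current codebase); the point is that the requester fetches every block the tracker points it to, whether or not the block belongs to the resource it wants:

// Tracker is an assumed interface: it logs which peers store which blocks and
// builds replies so that replication and consumption requests look alike.
// GetPeerBlocks returns, for a requested resource, a map from peer ID to the
// block IDs that peer should be asked for, mixing resource blocks with
// unrelated ones.
type Tracker interface {
  GetPeerBlocks(requesterID, resourceID string) (map[string][]string, error)
}

// fetchResource sketches the requester side: it requests every block the
// tracker points it to, regardless of whether the block belongs to the
// resource it actually wants.
func fetchResource(t Tracker, fetchBlock func(peerID, blockID string) error, self, resourceID string) error {
  peerBlocks, err := t.GetPeerBlocks(self, resourceID)
  if err != nil {
    return err
  }
  for peerID, blockIDs := range peerBlocks {
    for _, blockID := range blockIDs {
      if err := fetchBlock(peerID, blockID); err != nil {
        return err
      }
    }
  }
  return nil
}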

gpestana commented Apr 8, 2019

Open questions

  • How to enforce this protocol in a completely decentralised way?
  • Can it deliver the performance and latency gains that peer-CDNs do?
  • What are the optimal replication schedulers to enforce privacy?

gpestana commented Apr 8, 2019

Content Resolver with Interest Obfuscation in P2P networks (CRIOP2P)

An obfuscation scheduler is an algorithm that schedules content replication in P2P networks so that:

  1. the scheduler resolves the location of a resource in the network; while
  2. peers can strongly deny they have used or are interested in a resource they requested and cached (i.e. peers do not leak behaviour* information)
  3. requests are latency sensitive (i.e. the scheduler gives priority to peers which are closer and thus have smaller communication latency)

* behaviour is defined as what network content a peer requests from other peers based on its preferences

API and interfaces

// config parameters
const (
  // the response will include 0.5*blocks that are not part of the original
  // requested resource
  obfReplicationF = 0.5
  // each peer will be responsible for providing up to 25% of a resource
  peerSpreadingF = 0.25
)

type Scheduler struct{}

type Peer interface {
  ID() string
  LatencyBetween(Peer) (uint, error)
}

type Block interface {
  ID() string
}

// PeerBlocks keeps a mapping between a peer and a set of blocks: the peer
// with `peerId` is storing the blocks listed in `blockId`
type PeerBlocks struct {
  peerId  string
  blockId []string
}

// getProviders returns a set of PeerBlocks for the requester to resolve.
// the set of PeerBlocks is constructed based on the requester's neighbourhood,
// the content it has already cached and the resource requested.
// upon receiving a []PeerBlocks, the requester requests the blocks from the peers
// indicated in the set in order to 1) eventually acquire the requested resource while
// 2) not leaking information about which content it is actually requesting
func (s Scheduler) getProviders(neighbourPeers []Peer, cachedContent []Block, resourceId string) ([]PeerBlocks, error) {
  // TODO: scheduling logic to be defined
  return nil, nil
}
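
One possible way the config parameters above could shape the reply, written as a hypothetical helper rather than a committed design: pad the requested blocks with obfReplicationF extra unrelated blocks, and cap each peer at roughly peerSpreadingF of the resource. Latency weighting (requirement 3) and the requester's cachedContent are left out of this sketch:

// buildSchedule is a hypothetical helper, not part of the proposed API: it
// pads the requested blocks with obfuscation blocks and spreads the result
// round-robin over the neighbourhood, capping how much any one peer serves.
func buildSchedule(neighbourPeers []Peer, resourceBlocks, obfuscationPool []Block) []PeerBlocks {
  if len(neighbourPeers) == 0 || len(resourceBlocks) == 0 {
    return nil
  }

  // add obfReplicationF*|resource| unrelated blocks so observers cannot tell
  // which blocks the requester actually wants
  nObf := int(obfReplicationF * float64(len(resourceBlocks)))
  if nObf > len(obfuscationPool) {
    nObf = len(obfuscationPool)
  }
  blocks := append([]Block{}, resourceBlocks...)
  blocks = append(blocks, obfuscationPool[:nObf]...)

  // no single peer should serve more than peerSpreadingF of the resource;
  // relax the cap if the neighbourhood is too small to absorb all blocks
  maxPerPeer := int(peerSpreadingF * float64(len(resourceBlocks)))
  if maxPerPeer < 1 {
    maxPerPeer = 1
  }
  if maxPerPeer*len(neighbourPeers) < len(blocks) {
    maxPerPeer = (len(blocks) + len(neighbourPeers) - 1) / len(neighbourPeers)
  }

  // assign blocks to peers round-robin, respecting the per-peer cap
  schedule := make([]PeerBlocks, len(neighbourPeers))
  for i, p := range neighbourPeers {
    schedule[i] = PeerBlocks{peerId: p.ID()}
  }
  next := 0
  for _, b := range blocks {
    for len(schedule[next].blockId) >= maxPerPeer {
      next = (next + 1) % len(neighbourPeers)
    }
    schedule[next].blockId = append(schedule[next].blockId, b.ID())
    next = (next + 1) % len(neighbourPeers)
  }
  return schedule
}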
