Keep others’ IPNS records alive #1958

Open
ion1 opened this issue Nov 11, 2015 · 14 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature topic/ipns Topic ipns

Comments

@ion1

ion1 commented Nov 11, 2015

ipfs name keep-alive add <friend’s node id>

Periodically get and store the IPNS record and keep serving the latest seen version to the network until the record’s EOL.
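For illustration, a minimal sketch of how one could approximate this manually with the kubo CLI's routing commands (not an actual keep-alive feature): periodically fetch the friend's signed record and re-put it to the DHT. The name, file name, and interval below are placeholders, and re-putting only helps while the record's validity (EOL) is still in the future, since peers reject expired records.

```sh
# Hypothetical manual keep-alive loop for a friend's IPNS record.
FRIEND_NAME="k51q..."   # placeholder: friend's IPNS name (key / peer ID)

while true; do
  # Fetch the latest signed IPNS record from the routing system (DHT)
  if ipfs routing get "/ipns/$FRIEND_NAME" > friend.ipns-record; then
    # Re-publish the raw record so DHT nodes near the key keep a fresh copy
    ipfs routing put "/ipns/$FRIEND_NAME" friend.ipns-record
  fi
  sleep 3600              # repeat well within the ~24h DHT record expiry
done
```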

@ghost

ghost commented Nov 11, 2015

You'll be able to pin IPNS records like anything else once we have IPRS

@ion1
Author

ion1 commented Nov 11, 2015

Awesome

@daviddias daviddias added the topic/ipns Topic ipns label Jan 2, 2016
@koalalorenzo
Member

Waiting for this feature 👍

@Falsen

Falsen commented Aug 4, 2018

But doesn't it make more sense if they are automatically pinned by nodes? Or would that be too resource-heavy?

@koalalorenzo
Member

Consider that if they were pinned, they would have to be updated constantly via signatures, etc.

@Stebalien
Member

Stebalien commented Aug 6, 2018

The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key. We expire them because the DHT isn't persistent and will eventually forget these records anyways. When it does, an attacker would be able to replay an old IPNS record from any point in time.

@lockedshadow

lockedshadow commented Dec 4, 2018

When it does, an attacker would be able to replay an old IPNS record from any point in time.

Is that really considered more dangerous than the possibility of all the material published under a certain IPNS key effectively disappearing if one (just one!) publisher node with its private key disappears too? Doesn't that publisher node look like a central point of failure? Are outdated but valid records really worse than no records at all?

I think the ability to replay is not a critical security issue, at least on the condition that the user is explicitly notified that the obtained result could be outdated. After all, «it will always return valid records (even if a bit stale)», as mentioned in the 0.4.18 changelog.

So what do you think about a --show-publish-time flag on the ipfs name resolve command? Do the IPNS records themselves contain this data?
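For reference, the record itself carries a validity (EOL) field rather than a publish time. A sketch of dumping it, assuming a newer kubo release where ipfs name inspect is available (the name is a placeholder):

```sh
# Fetch a raw IPNS record and print its fields (Value, Validity/EOL, Sequence, TTL).
NAME="k51q..."   # placeholder IPNS name
ipfs routing get "/ipns/$NAME" > ipns_record
ipfs name inspect < ipns_record
```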

@Stebalien Stebalien added the kind/enhancement A net-new feature or improvement to an existing feature label Mar 22, 2019
@Stebalien
Member

@lockedshadow I've been thinking about (and discussing) this and, well, you're right. Record authors should be able to specify a timeout, but there's no reason to remove expired records from the network. Whether or not to accept an expired record would be up to the client.

@T0admomo

@Stebalien What is the best way to go about introducing this change to the protocol?

@aschmahmann
Contributor

aschmahmann commented Jan 31, 2022

@T0admomo since this is mostly a client and UX change rather than a spec change, I would propose what the UX should be, along with the various changes that would need to happen in order to enable it.

Some of the work here is in ironing out the UX, and then there's some in implementation. Discussing your proposed plan in advance makes it easier to ensure that your work is likely to be reviewed and accepted.

Some related issues: #7572 #4435 #3117

@2color
Member

2color commented Aug 4, 2022

The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key.

According to the IPNS spec, the signature is computed over the concatenated value, validity, and validityType fields.

That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.

Moreover, since validity is controlled by the key holder when they sign the record, they have the flexibility to pick any validity, at the potential cost of users getting an expired/stale record (in the case where a new record published within the validity period hasn't propagated to all nodes holding the previous one). This is arguably better than getting no resolution at all, as pointed out by @lockedshadow.

Am I understanding this correctly?
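As a concrete illustration of the publisher controlling validity, the existing --lifetime option on ipfs name publish sets exactly this field (a sketch; the CID is a placeholder):

```sh
# The key holder picks the record's validity (EOL) at publish time.
# Here the record stays valid for roughly a year, so nodes could in
# principle keep serving it that long even if the publisher goes offline.
ipfs name publish --lifetime=8760h /ipfs/bafy...placeholdercid
```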

@bertrandfalguiere

bertrandfalguiere commented Aug 4, 2022

That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.

I think this could be an attack vector, as a malicious node could publish a lot of signed records with near-infinite validity. They would accumulate on the DHT, clog it sooner or later, and never be flushed out.

So other clients need to reject very old records, even if the original publisher wanted them to have a very long validity.

(An attacker could also spawn many nodes and publish records from them, with the same effect)

@2color
Member

2color commented Aug 4, 2022

I think this could be an attack vector as a malicious node could publish a lot of signed records with infinite validity. They will accumulate on the DHT and clog it sooner or later, and never be flushed out.

I recently read that DHT nodes will drop stored values after ~24 hours, no matter what Lifetime and TTL you set. So it's not really possible to clog the DHT or use this as an attack vector.

As far as I understand, clients don't reject old records as such, since they have no way of knowing a record's age; they just drop them after 24 hours, when a record with a newer sequence number arrives, or once they expire (whichever comes first).

(An attacker could also spawn many nodes and publish records from them, with the same effect)

I believe that this is what Fierro allows you to do, though without any malicious intent.

@bertrandfalguiere

bertrandfalguiere commented Aug 4, 2022

As far as I understand, clients don't reject old records as such, since they have no way of knowing a record's age; they just drop them after 24 hours, when a record with a newer sequence number arrives, or once they expire (whichever comes first).

Yes, you're right. Dropping records is not based on age; I oversimplified. The point is that they are not in the DHT after some time if they are not republished, so they can't accumulate.

I believe that this is what Fierro allows you to do, though without any malicious intent.

Yes, but since records are dropped by clients after about 24 hours, they still can't accumulate.
