
Getting error (can't replace a newer value with an older) when publishing IPNS #6683

Open
nk-the-crazy opened this issue Sep 27, 2019 · 14 comments
Labels
kind/bug A bug in existing code (including security flaws)

Comments

@nk-the-crazy

  "Version": "0.4.22",
  "Commit": "",
  "Repo": "7",
  "System": "amd64/linux",
  "Golang": "go1.12.7"
Description

After restoring the data directory (.ipfs) from a zipped archive that was created earlier, we're unable to publish IPNS with the default key. We get this error:

{"Message":"can't replace a newer value with an older value","Code":0,"Type":"error"}

(IPFS crashed suddenly, so we removed the current data directory and restored it from a zip archive that was created 2-3 days ago. After that, we were unable to publish. This kind of restore operation had been done several times before with no problems.)

@nk-the-crazy nk-the-crazy added the kind/bug A bug in existing code (including security flaws) label Sep 27, 2019
@aschmahmann
Contributor

@nk-the-crazy I don't think what you're trying to do is supported, although perhaps it should be. I'd look at https://discuss.ipfs.io/t/name-publish-error-cant-replace-a-newer-value-with-an-older-value/3823/8 and #5816 for more information.

Your backup essentially functions the same as if IPNS allowed multiple users to publish with the same key, since restoring on the same machine is equivalent to starting up a second machine with the same key. Unfortunately, IPNS does not support this, for reasons you can read about in the linked issues and other GitHub issues. This is what's leading to your errors.

Is there any more information you can give me to replicate your scenario? When you restore from your backup, do you immediately run ipfs name publish newData, or do you run ipfs name resolve myKey first?

Possible solution idea

@Stebalien what do you think about making the datastore keys for IPNS and the DHT harmonize (either make the DHT use /namespace/base32(data) or have IPNS use base32(/ipns/key))? That looks like it should allow an external get to bump the sequence number.

https://github.com/ipfs/go-ipfs/blob/9ea7c6a11113bebd0efed9b7d5a23331ed4f93b0/namesys/publisher.go#L54-L56

https://github.com/libp2p/go-libp2p-kad-dht/blob/36578e2be34ab685bea8924fc8b898fd2cf4127f/dht.go#L399-L401
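
For illustration, here's a minimal sketch of the mismatch; the key layouts below are simplified stand-ins for the real encodings in the permalinks above:

```go
package main

import (
	"encoding/base32"
	"fmt"
)

// Simplified stand-ins for the two datastore key layouts linked above.
// The republisher and the DHT derive different keys from the same
// routing key, so a record stored by one is invisible to the other.
func main() {
	peerID := []byte("example-peer-id-bytes")
	routingKey := append([]byte("/ipns/"), peerID...)

	// namesys (publisher.go): "/ipns/" prefix plus base32 of the key data.
	namesysKey := "/ipns/" + base32.RawStdEncoding.EncodeToString(peerID)

	// DHT (dht.go): base32 of the full "/ipns/<key>" routing key, no prefix.
	dhtKey := base32.RawStdEncoding.EncodeToString(routingKey)

	fmt.Println(namesysKey)
	fmt.Println(dhtKey)
	// Harmonizing the two layouts (in either direction suggested above)
	// would let a record fetched from the network land on the same key
	// the republisher reads, bumping the local sequence number.
}
```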

@Stebalien
Member

what do you think about making the datastore keys for IPNS and the DHT harmonize

We intentionally split these:

  • /ipns/... version is used to record the fact that we've published an IPNS record, so we can republish it.
  • The DHT version is the record put by other peers.

The issue was expiration: We started deleting IPNS records after they expired and, in doing so, ended up deleting the record we were using to republish IPNS records.


What we should do is:

  1. Detect this case.
  2. Bump the sequence number.
  3. Warn the user (somehow?) as there could be newer IPNS records floating around in the network that we're not aware of.
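
A rough sketch of that recovery flow, using hypothetical stand-ins rather than the actual go-ipfs API:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sketch of the detect-and-bump recovery above; all names
// here are stand-ins, not the real namesys/routing types.

var errOlderValue = errors.New("can't replace a newer value with an older value")

type record struct {
	Sequence uint64
	Value    string
}

// putRecord mimics the routing layer's validator, which rejects records
// whose sequence number is not strictly newer than the best known one.
func putRecord(best *record, r record) error {
	if r.Sequence <= best.Sequence {
		return errOlderValue
	}
	*best = r
	return nil
}

func publishWithRecovery(best *record, value string, localSeq uint64) error {
	err := putRecord(best, record{Sequence: localSeq, Value: value})
	if !errors.Is(err, errOlderValue) {
		return err // published fine, or failed for an unrelated reason
	}
	// Step 1: detected the stale local sequence number.
	// Step 3: warn, since even newer records may exist that we can't see.
	fmt.Println("warning: local IPNS sequence was stale; bumping past the newest record we found")
	// Step 2: bump past the newest sequence number we know about and retry.
	return putRecord(best, record{Sequence: best.Sequence + 1, Value: value})
}

func main() {
	best := &record{Sequence: 7, Value: "/ipfs/...old"}
	if err := publishWithRecovery(best, "/ipfs/...new", 3); err != nil {
		fmt.Println("publish failed:", err)
		return
	}
	fmt.Printf("published %q with sequence %d\n", best.Value, best.Sequence)
}
```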

@RubenKelevra
Contributor

RubenKelevra commented Jan 24, 2020

I just ran into the same issue. Looks like there's an edge case where IPFS loses track of the correct information.

It should print a warning in this case, but should still push a new version (somehow).

I was using --enable-pubsub-experiment --enable-namesys-pubsub.

But even after deactivating pubsub I'm not able to publish anything new. I couldn't cancel the subscriptions on pubsub either...

The error on the command was

11:45:18.540 ERROR cmds/http: could not guess encoding from content type "" parse.go:200

Meanwhile, the console printed a stack trace following a nil pointer dereference:

11:45:18.539 ERROR  cmds/http: a panic has occurred in the commands handler! handler.go:92
11:45:18.539 ERROR  cmds/http: runtime error: invalid memory address or nil pointer dereference handler.go:93
11:45:18.540 ERROR  cmds/http: stack trace:
goroutine 100760 [running]:
runtime/debug.Stack(0xc0003906e0, 0xc001f38810, 0x1)
        /usr/lib/go/src/runtime/debug/stack.go:24 +0x9f
github.com/ipfs/go-ipfs-cmds/http.(*handler).ServeHTTP.func1()
        /build/go-ipfs/src/go-ipfs/vendor/github.com/ipfs/go-ipfs-cmds/http/handler.go:94 +0x121
panic(0x5592f5be2320, 0x5592f6665c20)
        /usr/lib/go/src/runtime/panic.go:522 +0x1b9
github.com/libp2p/go-libp2p-pubsub.(*Subscription).Cancel(...)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/libp2p/go-libp2p-pubsub/subscription.go:32
github.com/libp2p/go-libp2p-pubsub-router.(*PubsubValueStore).Cancel(0xc000114300, 0xc0018abfb0, 0x28, 0xc0018abf00, 0x0, 0x0)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/libp2p/go-libp2p-pubsub-router/pubsub.go:291 +0x1be
github.com/ipfs/go-ipfs/core/commands/name.glob..func6(0xc000145570, 0x5592f5dad200, 0xc0027f5f00, 0x5592f5c78760, 0xc0008ab680, 0x0, 0x0)
        /build/go-ipfs/src/go-ipfs/core/commands/name/ipnsps.go:128 +0x17a
github.com/ipfs/go-ipfs-cmds.(*Command).call(0x5592f6678fe0, 0xc000145570, 0x5592f5dad200, 0xc0027f5f00, 0x5592f5c78760, 0xc0008ab680, 0x73005592f5477c91, 0xc00198f5e0)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/ipfs/go-ipfs-cmds/command.go:106 +0x232
github.com/ipfs/go-ipfs-cmds.(*Command).Call(0x5592f6678fe0, 0xc000145570, 0x5592f5dad200, 0xc0027f5f00, 0x5592f5c78760, 0xc0008ab680)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/ipfs/go-ipfs-cmds/command.go:76 +0x72
github.com/ipfs/go-ipfs-cmds/http.(*handler).ServeHTTP(0xc000130b20, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/ipfs/go-ipfs-cmds/http/handler.go:164 +0x885
github.com/ipfs/go-ipfs-cmds/http.prefixHandler.ServeHTTP(0x5592f54b3b0b, 0x7, 0x5592f5d8c400, 0xc000130b20, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/ipfs/go-ipfs-cmds/http/apiprefix.go:24 +0xc5
github.com/rs/cors.(*Cors).Handler.func1(0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/rs/cors/cors.go:207 +0x1b1
net/http.HandlerFunc.ServeHTTP(0xc000130ea0, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
net/http.(*ServeMux).ServeHTTP(0xc0004f1cc0, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:2375 +0x1d8
github.com/ipfs/go-ipfs/core/corehttp.CheckVersionOption.func1.1(0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/core/corehttp/commands.go:176 +0x12e
net/http.HandlerFunc.ServeHTTP(0xc0001309a0, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
net/http.(*ServeMux).ServeHTTP(0xc0004f1c40, 0x7fc1d812a4d0, 0xc001a29ae0, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:2375 +0x1d8
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerResponseSize.func1(0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:196 +0xeb
net/http.HandlerFunc.ServeHTTP(0xc00015a120, 0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerRequestSize.func2(0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:170 +0x75
net/http.HandlerFunc.ServeHTTP(0xc00015a1b0, 0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerDuration.func2(0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:76 +0xb7
net/http.HandlerFunc.ServeHTTP(0xc00015a240, 0x7fc1d812a4d0, 0xc001a29a90, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1(0x5592f5da83c0, 0xc0004df880, 0xc00290af00)
        /build/go-ipfs/src/go-ipfs/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:100 +0xdc
net/http.HandlerFunc.ServeHTTP(0xc00015a390, 0x5592f5da83c0, 0xc0004df880, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:1995 +0x46
net/http.(*ServeMux).ServeHTTP(0xc0004f1c00, 0x5592f5da83c0, 0xc0004df880, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:2375 +0x1d8
net/http.serverHandler.ServeHTTP(0xc0003c6680, 0x5592f5da83c0, 0xc0004df880, 0xc00290af00)
        /usr/lib/go/src/net/http/server.go:2774 +0xaa
net/http.(*conn).serve(0xc00234bf40, 0x5592f5dacd80, 0xc001e6c700)
        /usr/lib/go/src/net/http/server.go:1878 +0x853
created by net/http.(*Server).Serve
        /usr/lib/go/src/net/http/server.go:2884 +0x2f6
 handler.go:94

@aschmahmann
Contributor

aschmahmann commented Jan 24, 2020

@RubenKelevra it's not clear from your post if your issue is the same as this one or a different one.

Could you clarify which version of IPFS you're using and if you are publishing from multiple nodes or otherwise manually manipulating the IPFS data store (e.g. backups) like the OP did?

@Stebalien
Member

The error on the command was

Just looked into this issue with @aschmahmann. This was a bug but has since been fixed.

@RubenKelevra
Contributor

RubenKelevra commented Jan 24, 2020

@RubenKelevra it's not clear from your post if your issue is the same as this one or a different one.

I'm sorry, next time I'll be more clear!

Could you clarify which version of IPFS you're using

I use the latest version (binary from the ArchLinux repo).

if you are publishing from multiple nodes

A single instance of IPFS is publishing. I've only had one key generated and wanted to publish a second time.

otherwise manually manipulating the IPFS data store (e.g. backups) like the OP did.

Nope. I had previously run a manual ipfs repo gc, which showed some errors and aborted on the first call. It sounded like timeouts for file removals because of locks, so I ran it a second time and it completed successfully.

Then I rebooted the machine. Afterwards I couldn't publish with the old key. Trying to remove the subscription showed the error mentioned above.

Then I started IPFS without pubsub to test whether I could publish without pubsub, which also failed.

I decided to remove the key. Then I restarted IPFS with pubsub again and it showed no subscriptions. I generated a new key, which worked flawlessly.

@RubenKelevra
Contributor

RubenKelevra commented Feb 25, 2020

As @Stebalien wrote:

What we should do is:

  1. Detect this case.
  2. Bump the sequence number.
  3. Warn the user (somehow?) as there could be newer IPNS records floating around in the network that we're not aware of.

Would be nice. I ran into the same issue as @nk-the-crazy, since I deleted my datastore and now want to publish a new IPNS record. I guess I now have to wait 96 hours (the expiration timeout of the older record?).

Additionally, this would allow different peers to update the IPNS record from different positions in the network, which makes sense for any redundant setup.

@aschmahmann
Contributor

aschmahmann commented Feb 25, 2020

this would allow different peers to update the IPNS record from different positions in the network

It sounds like what you're asking for here (as opposed to with your datastore issue) is really about third-party republishing of IPNS keys (essentially #1958).

It'd be great to refactor namesys (#6537) and make some API changes to make this possible, since nothing in the protocol actually prevents third-party republishing.

@RubenKelevra
Contributor

RubenKelevra commented Feb 25, 2020

It sounds like what you're asking for here (as opposed to with your datastore issue) is really about third-party republishing of IPNS keys (essentially #1958).

Nope, that's completely different:

#1958 asks for the ability to republish already available data.

I was just noting that if you have multiple servers, you probably want to publish with the same IPNS key from different servers, especially in a cluster configuration where multiple servers can alter the data.

The issue is that there's already a namesys record floating around; I've got the valid key, but my database just doesn't know the current sequence number.

Instead of asking the network which values are currently floating around, my node just publishes the default number.

So I would need to publish updates until I exceed the sequence number already present in the network to make any changes.


I solved the issue for now by generating new keys and publishing new IPNS records with them.

@aschmahmann
Contributor

aschmahmann commented Feb 25, 2020

I was just noting that if you have multiple servers, you probably want to publish with the same IPNS key from different servers, especially in a cluster configuration where multiple servers can alter the data.

The implication of "cluster" is that you have some sort of consensus among the machines you are running. Without consensus, your machines could publish values that clobber each other. There's nothing in an IPNS record that prevents you from doing this multi-machine publishing, but it definitely seems more like a feature for applications building on IPFS (e.g. ipfs-cluster) than for go-ipfs itself.

I solved the issue for now by generating new keys and publishing new IPNS records with them.

👍. We should definitely make it easier to recover from these errors than forcing a key rotation. Also, I may be mistaken, but I don't think that deleting your datastore (as opposed to just gc-ing all stored data) is actually supported.

@RubenKelevra
Contributor

RubenKelevra commented Feb 25, 2020

I was just noting that if you have multiple servers, you probably want to publish with the same IPNS key from different servers, especially in a cluster configuration where multiple servers can alter the data.

The implication of "cluster" is that you have some sort of consensus among the machines you are running. Without consensus, your machines could publish values that clobber each other. There's nothing in an IPNS record that prevents you from doing this multi-machine publishing, but it definitely seems more like a feature for applications building on IPFS (e.g. ipfs-cluster) than for go-ipfs itself.

Okay, maybe an example helps to convey my point:

Server A writes a file to the cluster, waits for the consensus to confirm that this is the new head, and publishes an IPNS record.

Server B writes a new version of the file to the cluster and waits for the consensus to confirm that this is the new head. But Server B now cannot publish a new IPNS record, despite having the private key:

If ipfs name publish is called on the command line, the local database will provide the sequence number for the IPNS record. But since Server A has published something in the meantime, that value isn't up to date anymore.

There should be an option to query the DHT for the current highest sequence number of an IPNS record, and the ability to specify the sequence value.

I don't think there's a need for an application to fiddle around in the DHT and patch the local IPFS database. This should be solved by IPFS itself.
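
A minimal sketch of that proposed flow, assuming hypothetical helpers (none of these exist in go-ipfs today):

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical sketch of the proposed flow: ask the network for the
// newest record it holds, then publish with a sequence number one past
// it. Both helpers below are assumptions, not existing go-ipfs APIs.

type ipnsRecord struct {
	Sequence uint64
	Value    string
}

// queryNetworkForNewest stands in for a DHT lookup that returns the best
// record peers currently hold for this IPNS key.
func queryNetworkForNewest(ctx context.Context, key string) (ipnsRecord, error) {
	return ipnsRecord{Sequence: 41, Value: "/ipfs/...old"}, nil
}

// publishWithSequence stands in for an `ipfs name publish` variant that
// accepts an explicit sequence number instead of the local datastore's.
func publishWithSequence(ctx context.Context, key, value string, seq uint64) error {
	fmt.Printf("publishing %s -> %s with sequence %d\n", key, value, seq)
	return nil
}

func republishPastNetwork(ctx context.Context, key, value string) error {
	current, err := queryNetworkForNewest(ctx, key)
	if err != nil {
		return err
	}
	return publishWithSequence(ctx, key, value, current.Sequence+1)
}

func main() {
	_ = republishPastNetwork(context.Background(), "self", "/ipfs/...new")
}
```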

@aschmahmann
Contributor

There should be an option to query the DHT for the current highest sequence number of an IPNS record

You could do that, but the DHT isn't like a single database you can query. What happens if there's a partition or some other network anomaly? Now Server B thinks the latest version is 5 when it's really 7, and therefore its publish (version 6) doesn't actually get propagated to the network.

and the ability to specify the sequence value.

Doing this definitely allows users to shoot themselves in the foot by messing with the values. That doesn't necessarily mean I'm opposed to it, but if this functionality were exposed, it would definitely need to come with sufficient warning lights.

@kevincox

This is biting me as well because I am trying to move the publisher from one server to another. Can we add a --trust-me-i-really-want-to-overwrite-the-previous-version flag to ignore this check? In my case I do want the check in general, but for this move it is blocking. I can also imagine cluster use cases where different nodes are regularly expected to publish, relying on some external synchronization, and will want to use this flag every time.

Also, is there any way to work around this in the short term?

@RubenKelevra
Contributor

I hit this issue again today, after converting a badgerds datastore to flatfs...

Is there any chance of adding a flag to ignore this error and bump the sequence number?
