
Cluster isn't actually pinning content added via URLStore #701

Closed
11 of 17 tasks
NatoBoram opened this issue Mar 6, 2019 · 7 comments

Comments

@NatoBoram

Pre-check

  • This is not an IPFS Cluster website content issue (file those here)
  • I read the troubleshooting section of the website and it did not help
  • I searched for similar issues in the repo without luck
  • All my peers are running the same cluster version
  • All my peers are configured using the same cluster secret

Basic information

  • Version information (mark as appropriate):
0.7.0+gitd80f3ee05d5688052b26556a5bb0d9cd2ea21880
  • Type (mark as appropriate):
    • Bug
    • Feature request
    • Enhancement
  • Operating system (mark as appropriate):
OS: Ubuntu 18.04.2 LTS x86_64 
Host: Droplet 20171212 
Kernel: 4.15.0-46-generic 
Uptime: 2 hours, 36 mins 
Packages: 677 
Shell: bash 4.4.19 
Terminal: /dev/pts/0 
CPU: Intel Xeon E5-2650L v3 (1) @ 1.797GHz 
GPU: Red Hat, Inc. QXL paravirtual graphic card 
Memory: 1089MiB / 1993MiB 
  • Installation method (mark as appropriate):
    • Binaries from dist.ipfs.io
    • Built from sources
    • Docker
    • Snap
    • Other: which?

Description

I'm trying to pin about a hundred files that were previously hashed using ipfs urlstore add, but nothing is happening.

Some of them are stuck in PIN_QUEUED, PINNING, and PINNED, which seems to indicate that work is being done. However, the disk usage stays the same.


When trying to add new pins to the queue, I receive the following error:

Failed to pin a build.
Command: ipfs-cluster-ctl pin add zdj7WcGff6a9WbgRjz37svL6BoQRyzYAe2GZLBtkPULUk6rkS --name lineage-15.1-20190221-nightly-zl1-signed.zip --replication-min 1 --replication-max 1

An error occurred:
  Code: 500
  Message: not enough peers to allocate CID. Needed at least: 1. Wanted at most: 1. Valid candidates: 0. See logs for more info.

But there's plenty of disk space.

@NatoBoram NatoBoram changed the title not enough peers to allocate CID Cluster isn't actually pinning content added via URLStore Mar 6, 2019
@NatoBoram
Author

NatoBoram commented Mar 7, 2019

Okay, so I changed things. First, I ditched the Snap version and compiled it from source. It actually helped, so I'm glad I did that. I had to migrate the repo, but that's not a problem; most things work fine.

I changed my watchdog so the urlstore command pins the content on the local IPFS node, then unpins the unnecessary files and pins the rest to the IPFS Cluster. Now, everything is appropriately pinned. However, my disk usage isn't increasing, and I've just downloaded at least 180 GB of files.

The good news is everything's pinned, and I probably goofed somewhere in my configuration.

But how did I just fit 280 GiB inside 1.4 GiB?

@lanzafame
Contributor

@NatoBoram Is the disk usage for both the ipfs-cluster node and the ipfs node? ipfs-cluster doesn't technically store anything other than the list of pins you want it to look after; the actual data is stored in ipfs. One other thing: I am not sure, but I believe storing a file via the urlstore doesn't actually download the data to your ipfs node; it sort of proxies CIDs to the URL of the file.

@NatoBoram
Author

I think I misunderstood something about the command then. When the resulting hash is pinned, all that's pinned is the URL? Or is there actual content when it's pinned?

@lanzafame
Contributor

My understanding is that the content doesn't get downloaded until actually requested; it just makes a file that is on an HTTP website available via a hash as well. The usage notes for urlstore suggest that you have full control of the website where the file is hosted, as ipfs can do nothing if that website goes down.

@hsanjuan
Collaborator

hsanjuan commented Mar 7, 2019

Yes, urlstore downloads the content, hashes it, gives you a CID, and points that CID to the original URL, discarding the content.
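To illustrate that behavior, here is a toy model in Python (purely hypothetical, not the real go-ipfs implementation): urlstore keeps only a CID-to-URL mapping, so pinning the resulting CID consumes almost no local block storage.

```python
# Toy model of the urlstore behavior described above (hypothetical,
# not the real go-ipfs code): content is fetched once to compute its
# hash, then discarded; only a CID -> URL mapping is kept locally.
import hashlib


class ToyUrlstore:
    def __init__(self):
        self.cid_to_url = {}  # the only thing stored on disk

    def add(self, url: str, content: bytes) -> str:
        # Hash the content to derive a (fake) CID, then discard the bytes.
        cid = hashlib.sha256(content).hexdigest()
        self.cid_to_url[cid] = url
        return cid

    def cat(self, cid: str) -> str:
        # Retrieval must go back to the original URL; if that website
        # goes down, the CID becomes unresolvable.
        return self.cid_to_url[cid]


store = ToyUrlstore()
cid = store.add("https://example.com/file.zip", b"file bytes")
# Local storage holds only the small mapping, never the payload,
# which explains why pinning urlstore CIDs doesn't grow disk usage.
```

This also makes clear why `ipfs-cluster-ctl add`, which actually downloads and stores the blocks, behaves differently.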

Maybe what you want to do is better served by ipfs-cluster-ctl add <url> which will download and add that content.

Secondly, about not enough peers to allocate CID. Needed at least: 1. Wanted at most: 1. Valid candidates: 0. See logs for more info. What do the cluster peer logs say?

ipfs-cluster-ctl health metrics freespace (I think, writing from memory) should show the current value of the freespace metric for all peers. These are picked from the IPFS daemon by doing StorageMax (from .ipfs/config) - UsedSpace (from ipfs repo stat). Either your ipfs daemon is using more storage than StorageMax (which is possible, as nothing prevents it), or metrics could not be fetched (if the ipfs daemon is down or something). Let us know what you find.
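The freespace calculation described above can be sketched as follows (a simplification for illustration; the function and parameter names are assumptions, not the actual cluster source):

```python
# Sketch of the freespace metric as described in the comment above:
# freespace = StorageMax (from the ipfs config) - UsedSpace (reported
# by `ipfs repo stat`). Names here are illustrative assumptions.

def freespace(storage_max_bytes: int, repo_size_bytes: int) -> int:
    """A peer is a valid allocation candidate only if this is positive."""
    return storage_max_bytes - repo_size_bytes


# Example: 10 GB StorageMax with ~1.4 GiB used leaves plenty of room,
# so the peer should be offered as an allocation candidate.
assert freespace(10 * 10**9, int(1.4 * 2**30)) > 0

# If the daemon overshoots StorageMax (nothing prevents it), the metric
# goes negative, the peer stops being a valid candidate, and you get
# "Valid candidates: 0" errors like the one reported in this issue.
assert freespace(10 * 10**9, 11 * 10**9) < 0
```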

@NatoBoram
Author

NatoBoram commented Mar 7, 2019

Thanks, I've migrated to ipfs-cluster-ctl add <url>, and now it acts as expected. I'll see what happens when it inevitably lacks storage in a few hours.


I noticed that add <url> was only available on IPFS Cluster, should it also be available to IPFS? I think it's convenient enough that I shouldn't have to run a full-blown cluster in order to use it.

As for logs, I have no idea where to find them. They don't seem to be in .ipfs-cluster or .ipfs.


Edit: Well, that happened: #709

@hsanjuan
Collaborator

hsanjuan commented Mar 8, 2019

As for logs, I have no idea where to find them. They don't seem to be in .ipfs-cluster or .ipfs.

That would be the output of the ipfs-cluster-service daemon. They're not written anywhere: if running through systemd, journalctl -u ipfs-cluster shows them; with Docker, docker logs ....

I noticed that add was only available on IPFS Cluster, should it also be available to IPFS? I think it's convenient enough that I shouldn't have to run a full-blown cluster in order to use it.

Cluster rocks, doesn't it? :) Running a single-peer cluster just as a frontend to IPFS is already handy for some things. IPFS should easily be able to implement this though, but you'll need to ask them in their repo.
