Questions after first learning about IPFS. #154
Hey Mark, thanks for the great questions! I'll try to answer them with as much detail as you asked them with :)
Thanks!
Hi @whyrusleeping, all of that is clear to me, but I still have questions about point 3. The way you describe it, there still seems to be a single point of failure, even if the site is backed by thousands of mirrors. The main entry point goes from name/certificate -> single hash. If that hash is offline, the site is offline. Sure, the owner can make the name point to another hash, but that seems like a manual task that could (right?) easily be automated. I might also be misunderstanding something. Could you elaborate on this, please? It sounds very interesting!
For a hash (site) to be 'offline', all the peers that have mirrored it would also have to go offline.
Generally speaking, you would think someone who is capable of resolving an IPNS name would also be hosting the IPFS content. I guess that's not technically necessary. Also! Generally speaking, it's very easy to rehost things on IPFS if you have the file. If I add a kitten gif, and you add the same kitten gif, it resolves to the same address. And IPNS is pretty easy to automate; all you have to do is run this command:
~ $ ipfs name publish /ipfs/QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
Published to QmRzYFGy9M5CyjEwNdh62udgBZV6BGbNZec8gMB9mXFhX6: /ipfs/QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
~ $ ipfs name resolve QmRzYFGy9M5CyjEwNdh62udgBZV6BGbNZec8gMB9mXFhX6
/ipfs/QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
The name also resolves through the gateway: https://ipfs.io/ipns/QmRzYFGy9M5CyjEwNdh62udgBZV6BGbNZec8gMB9mXFhX6
Also, I can now kill my daemon (so that I no longer answer IPNS requests) and that link will still be resolved. My understanding is that ipfs.io will be able to resolve the IPNS address for me, even though I'm not online, for another 24 hours. Just for science's sake I did this:
~ $ ipfs name publish -t 336h /ipfs/QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
Published to QmRzYFGy9M5CyjEwNdh62udgBZV6BGbNZec8gMB9mXFhX6: /ipfs/QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
That should mean other people can resolve the IPNS address for another two weeks? Wonder how long the gateway hangs on to content...
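A quick way to double-check that from a machine with no daemon at all: ask the public gateway directly, since it does the IPNS lookup on your behalf. A minimal sketch (the curl usage is my own assumption; the URL is the one above):
~ $ curl -sL https://ipfs.io/ipns/QmRzYFGy9M5CyjEwNdh62udgBZV6BGbNZec8gMB9mXFhX6
# the gateway resolves the name to its current /ipfs/ hash and returns that
# content, whether or not the node that published the name is still online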
@Ghoughpteighbteau That's all mostly correct.
Close! That actually means that the record itself is valid for two weeks. The network will only hold onto records for at most 36 hours before they are discarded, but other nodes can republish a record they have seen for as long as it remains valid.
Hmmmm. In what circumstances will they do that?
Still resolving two days later 👍 though it took ~1 minute to resolve 👎
That's because only very few nodes will still have this record after two days. What I can say is that we plan to improve both the get and put performance of IPNS.
IPNS resolution will get way better with pubsub improvements coming.
@jbenet is there any ETA for that?
A few more questions about site principles that I don't see as possible, but I hope they are, and that it's merely my lack of knowledge about IPFS that makes me think "it's impossible". (I'll just continue the numbering from the first post.)
Well, IPFS as a protocol doesn't let you think in a centralized way. Bitcoin does; that's why Bitcoin made such a splash: it managed to make decentralization act like a centralized system. It pulled off distributed consensus. If you're planning to describe a service that downloads a bunch of feeds from different websites and distributes those feeds to its clientele, then the question is: who downloads those feeds?
The problem then becomes: how do you trust the clients to do the job? They could lie; they're no longer your servers. They could lie and say: "Oh yeah, 99% Invisible totally released a whole bunch of herbal supplement pill ads. Totally. You should buy some." The only way I can see to do this is to establish a network of trust. I download some feeds, you download some feeds, and we share our data with each other because we know we're not cheating. You add someone you trust, I add someone I trust, and our network grows. That kind of thing. A system like this is pretty easy to describe with IPFS, though the details can get a little complicated. Regarding point 2: you could literally have cronjobs do it; see the sketch below. It's pretty easy to interact with IPFS, whether through an actual desktop application or the browser. That said, it's always going to be at the user's whim; there's no getting around that. You can't force people to work for the network.
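A hedged sketch of that cronjob idea, assuming a running local daemon; fetch-feeds.sh and /var/feeds are made-up placeholders, while the ipfs commands themselves are real:
# crontab entry: every hour, fetch the feeds with a hypothetical script, add
# the resulting directory to IPFS, and repoint the IPNS name at the new root
0 * * * * /home/user/fetch-feeds.sh /var/feeds && ipfs name publish /ipfs/$(ipfs add -r -Q /var/feeds)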
This issue was moved to https://discuss.ipfs.io/t/questions-after-first-learning-about-ipfs/399
Is it possible to create authenticated resources in IPFS?
Does anyone know of an IPFS implementation for Android or iOS suited to real-time communication?
Hi,
I've only known about IPFS for about a day, and everything seems really neat. Specifically, the ability to mount an IPFS site just like any local mount; that's in my opinion a great feature!
Having seen some videos and read some material about it, I'm still left with a couple of open questions. I'm sure most of them have been asked already, but I apparently lack the search skills to find them :)
So here we go.
1. If IPFS follows P2P principles (which I think it does), then visiting an IPFS site makes you a peer, and the next visitors could reach your computer and get the files required to view the site. That principle is fine by me, but I'm slightly puzzled about, for example, large files. Imagine I downloaded a Docker image via IPFS; will another user then download it from me? If that's the case, I could run into bandwidth issues, since it could suck up my upload bandwidth completely.
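As far as I know there is no built-in upload cap, but a node operator can at least watch what the daemon is actually serving. A minimal sketch, with illustrative numbers:
~ $ ipfs stats bw
Bandwidth
TotalIn: 4.6 MB
TotalOut: 1.2 MB
RateIn: 0 B/s
RateOut: 0 B/s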
2. If I look at ipfs.pics, it's only partially decentralized. The storage of the pictures is fully P2P, but the MySQL database it needs is still very much centralized. How would one make a service like that, but fully decentralized? I'm guessing one would need a decentralized database as well, like https://github.com/rqlite/rqlite, but how would one install that on IPFS? And even if it somehow magically got connected to IPFS, how would you make it secure? By secure I mean that a database used by Site X should only be accessible by Site X. Or would this involve public/private key signing of the database, which only Site X can unlock? If that is the case, where would the private key be stored (which would be needed to unlock the database)? Lots of sub-questions here :)
3. Imagine this: a popular website or service, accessible by name (thus using IPNS). As far as I understood it, IPNS points to one hash; basically a readable alias for a hash (or is that oversimplifying?). But what would happen if the hash it points to is, for whatever reason, not accessible? Would that make the site unavailable? Or does the name magically know (how?) which peers are available and just send data back from another peer, so that for a user the site would seem online?
With "the regular web" we have Google (among dozens of other search engines) to find a site. Sites in IPFS are probably indexed just fine as well if there is some public link somewhere, but what about the other stuff that isn't linked? In "the regular web" there is the concept of the "deep web" which consists of sites that are online, but no links to exist thus search engines can't find them. With IPNS everything is essentially in the DHT so doesn't that make it possible to index every IPFS site? And does something like that exist? Or is there a (technical) reason why this isn't possible?
5. From what I understand, there currently is no concept of mirroring a site in IPFS, but I wonder if "mirror" is the right term anyway. I've seen something about pointers to hashes, where a new version of something would just change the pointer. So I guess the question is something like this: how would one mirror a site's content and let the pointers (or the DHT structure) know that all or some of the data is available at other locations (hashes)? This would, for instance, be vital to big sites that rely on having their data mirrored in multiple data centers as the primary means of hosting and load balancing. A secondary layer would be the P2P structure, where individual users host individual pieces of the site. But I don't think what I describe here is possible (yet?). Or am I wrong? :)
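For what it's worth, the closest existing mechanism to "mirroring" is pinning: any node that pins a root hash keeps a complete copy and serves it to the network. A minimal sketch, reusing the hash published earlier in this thread:
~ $ ipfs pin add QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn
pinned QmNwoE1vkQeEwY3dyDdK4uyaYpm2GYTUn68mqkf4kdvXcn recursively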
6. This is basically an extension of the question above. Say you host something and it becomes outdated, for instance because a news article is posted, or a new video, or whatever your site is about. How would someone who wants to "mirror" your site get notified about a change? If IPFS has the concept of pointers, does it also have the concept of push notifications or publish/subscribe? I'd imagine it would just be a list of hashes (the subscribers) that need to be "notified" or "pinged" when the thing they point to isn't the latest version anymore. It's then up to the receiving side of that "ping" to act upon it.
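Something in this direction exists as an experiment: pubsub. A hedged sketch; the daemon has to be started with --enable-pubsub-experiment, and the topic name and new root hash below are made up for illustration:
~ $ ipfs pubsub sub site-updates
# ...a mirror now blocks, listening for messages on the topic...
~ $ ipfs pubsub pub site-updates /ipfs/QmSomeNewRootHash
# the publisher announces the new root; a mirror would pin it on receipt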
7. The presentations about IPFS claim low latency and tons of benefits. I'm sure that's all true to some extent, but what about collaborative editing of documents? Is that possible? If so, how would it work? Is there a direct connection between all connected peers that want to edit something? Even then, how is the diff made, and how are conflicts resolved? On the other hand, this might be an application-specific issue, not one for IPFS to solve.
I'll keep it at those 7 questions for now :)
Cheers,
Mark