
Remove local:: storage fallback #26325

Closed · PVince81 opened this issue Oct 10, 2016 · 19 comments · Fixed by #26855

@PVince81 (Contributor)

Kill this

```php
if (\OC\Files\Cache\Storage::exists('local::' . $user->getHome() . '/')) {
```

with a three-row-high 🔥 thrower.

Removing the fallback code would be nice.
However, if someone has an install where the repair didn't fully go through, or where warnings were missed, they are in for surprises.
Ideally we would give admins a way to make a smoother transition: first identify such remaining storages, then fix them manually, for example along the lines of the sketch below.
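
One way to surface those remaining storages, sketched with plain PDO for illustration (the connection details are placeholders, and the `LIKE` heuristic is an assumption, a first pass only, since long ids are stored md5-hashed):

```php
<?php
// List oc_storages entries that still look like "local::" storages.
// Ids longer than 64 characters are stored md5-hashed, so this prefix
// match is only a first pass, not a complete inventory.
$db = new PDO('mysql:host=localhost;dbname=owncloud', 'oc_admin', 'secret');
$rows = $db->query(
    "SELECT numeric_id, id FROM oc_storages WHERE id LIKE 'local::%'"
);
foreach ($rows as $row) {
    // The root storage ("local::/path/to/datadir/") is expected and must
    // stay; every other hit is a candidate for manual repair.
    echo $row['numeric_id'] . "\t" . $row['id'] . PHP_EOL;
}
```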

Another idea would be to log a warning, but that would pollute the log.
Yet another idea: prevent the user's storage from working at all (throw an exception), but that is likely to cause outages.
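
For the log-a-warning variant, a minimal sketch (the surrounding check is the existing fallback test quoted above; the message wording and log level are my own, assuming the `\OCP\Util` logging API of that era):

```php
$legacyId = 'local::' . $user->getHome() . '/';
if (\OC\Files\Cache\Storage::exists($legacyId)) {
    // Surface the condition instead of silently mounting the legacy storage.
    \OCP\Util::writeLog(
        'files',
        'User "' . $user->getUID() . '" still uses legacy storage "'
            . $legacyId . '", please repair manually',
        \OCP\Util::WARN
    );
}
```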

I also thought about adding an extra column "type" on oc_storages, but the question is how to populate it... it might need iterating over all users once again. It could at least make it easier to identify user homes that still have the type "local".
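
A hypothetical sketch of that idea (the "type" column, the `$homes` map, and the PDO wiring are all made up for illustration; only the md5 rule for ids over 64 characters reflects how oc_storages stores long ids, as I recall it):

```php
<?php
$db = new PDO('mysql:host=localhost;dbname=owncloud', 'oc_admin', 'secret');
$db->exec("ALTER TABLE oc_storages ADD COLUMN type VARCHAR(16) DEFAULT NULL");

// Populating the column means iterating over all users once again:
// $homes maps user id => home directory, e.g. gathered via the user backend.
$homes = ['alice' => '/var/www/owncloud/data/alice'];

$mark = $db->prepare('UPDATE oc_storages SET type = ? WHERE id IN (?, ?)');
foreach ($homes as $uid => $home) {
    $legacyId = 'local::' . $home . '/';
    $homeId = 'home::' . $uid;
    // Long ids are stored md5-hashed, so match both the raw and hashed form.
    $mark->execute(['local', $legacyId, md5($legacyId)]);
    $mark->execute(['home', $homeId, md5($homeId)]);
}
```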

@butonic @DeepDiver1975

@PVince81 (Contributor Author)

Regarding the repair step RepairLegacyStorages: the trouble is that it cannot repair conflict situations, because an admin has to investigate whether to keep the "local" or the "home" entry, so all the step does is display a warning.
Note that the step doesn't run any more; a flag needs to be reset in oc_appconfig first (see the sketch below).
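
Resetting that flag might look like this (the configkey name is from memory and may differ; verify against oc_appconfig before deleting anything):

```php
<?php
$db = new PDO('mysql:host=localhost;dbname=owncloud', 'oc_admin', 'secret');
// Dropping the "done" marker makes the repair step eligible to run again
// on the next upgrade.
$stmt = $db->prepare(
    "DELETE FROM oc_appconfig WHERE appid = 'core' AND configkey = :key"
);
$stmt->execute([':key' => 'repairlegacystoragesdone']);
```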

@butonic (Member) commented Oct 10, 2016

Well, if for some reason a local:: storage with a different physical storage location is created, the sync clients will start to delete files because the storage appears empty. This fallback has been in since OC 5 ... we should start ignoring local:: entries for home storages.

We could get rid of this together with the fstab-like user mount table, because we need to put something in there anyway; might as well kill the fallback with it. cc @DeepDiver1975

@PVince81 (Contributor Author)

Ref fstab-like user mounting: #26190

@PVince81 (Contributor Author)

And while we're at it, rename the root storage entry to "root://" instead of "local://path/to/datadir", to get rid of that last hard-coded path...

PVince81 self-assigned this Nov 21, 2016
@PVince81 (Contributor Author)

Killing the fallback code itself should be easy.
I'm still a bit worried about what would happen in concrete cases where the "local::" fallback was used all along because RepairLegacyStorages failed in the past. The environment would suddenly switch to the new "home::" storage and people would report lost shares.

@butonic (Member) commented Dec 21, 2016

@felixboehm @DeepDiver1975 can we please kill local:: storages? We need to decide how to handle this. At the very least, new installations must NEVER run the fallback code. Maybe add an app that provides it?

@PVince81 (Contributor Author)

My worry is that some admins ignored the warnings when updating from OC 8.0, and now they still have legacy storages where local:: is the correct one and home:: is the duplicate... As long as these people do not repair manually, we can't force-remove the old one, as it would lead to data loss.

I like the idea of doing it "for new installs". In that case I suggest that old installs receive an oc_appconfig setting at upgrade time that tells them to keep using the fallback. New installs won't have it, so they will always default to "home::".

However, I don't think this will solve any real problems, because people with the duplicate storages are already in a bad situation, and that situation only exists because they carried those storages over from pre-8.0.
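
A sketch of that proposal (the configkey "enable_local_storage_fallback" is invented for illustration; `getAppValue` is the real IConfig API, and `$user` is assumed from the mount setup context):

```php
// Old installs get this flag set once at upgrade time; fresh installs
// never have it, so they always resolve to "home::".
$useFallback = \OC::$server->getConfig()
    ->getAppValue('core', 'enable_local_storage_fallback', 'no') === 'yes';

$legacyId = 'local::' . $user->getHome() . '/';
if ($useFallback && \OC\Files\Cache\Storage::exists($legacyId)) {
    $storageId = $legacyId;                    // legacy fallback path
} else {
    $storageId = 'home::' . $user->getUID();   // the default for everyone else
}
```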

@butonic (Member) commented Dec 21, 2016

Well, our support has cleaned up several installations ... but the local:: storages keep popping up. Having a flag to disable the local:: storages for them would be awesome, also for admins who do make an effort to clean up ...

@Helios07

@PVince81 Your last sentence is not exactly true. In our 9.0 installation, a local storage was created by a database hiccup ("MySQL server has gone away"). This shouldn't happen, but in such a case it would be better to ignore the local storage.

@PVince81 (Contributor Author)

That can't be. The "local::" storages cannot reappear in the new code; the logic makes it use "home::" by default.

The exception, of course, is when using the "local" external storage, but that's unrelated.
Another exception is the root storage pointing to the data dir.

@PVince81 (Contributor Author)

@Helios07 a database hiccup, hmm... But even then, the logic goes as follows (from memory, sketched after this list):

  1. check if there is a "local::" storage in the database; if yes, use that one
  2. check if there is a "home::" storage in the database; if yes, use that one
  3. if no "home::" storage exists, create a "home::" storage
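
In code, that order amounts to roughly this (a paraphrase from memory, not the verbatim core code):

```php
$legacyId = 'local::' . $user->getHome() . '/';
$homeId = 'home::' . $user->getUID();

if (\OC\Files\Cache\Storage::exists($legacyId)) {
    $storageId = $legacyId;   // 1. an existing "local::" entry wins
} elseif (\OC\Files\Cache\Storage::exists($homeId)) {
    $storageId = $homeId;     // 2. otherwise an existing "home::" entry
} else {
    $storageId = $homeId;     // 3. no entry yet: "home::" is created on use
}
```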

If that really happened then I guess there must have been some crazy race conditions in there.
Will have to dig deeper into the code.

One thing I seem to remember is that sometimes if a user "suddenly doesn't exist any more" it might go into the fallback code (but I thought we added protection against these cases).

Anyway, I'm fine removing the fallback.

Should we add a check during the 10.0 upgrade that displays a warning if "local::" storages are detected?

@PVince81
Copy link
Contributor Author

Here is the code in question in OC 9.0: https://github.com/owncloud/core/blob/v9.0.6/lib/private/files/filesystem.php#L417. "If local:: storage exists, switch on legacy mode".

In later versions it moved here: https://github.com/owncloud/core/blob/master/lib/private/Files/Mount/LocalHomeMountProvider.php#L41

At first glance there doesn't seem to be a way, even with hiccups, to make that check return true, assuming the database never had such a "local::" entry in the first place. I don't think a hiccup would make it spit out a new entry that never existed.

For reference, here is the code that switches to "local::" if the legacy flag is set: https://github.com/owncloud/core/blob/v9.0.6/lib/private/files/storage/home.php#L52

Now I wonder if there was a bug with this in older OC 9.0 versions.

@PVince81 (Contributor Author)

PR to kill it: #26855

@Helios07

The incident was some weeks ago, so I do not remember exactly what happened, but it was something like:

  • A database node from the cluster was rebooted by accident
  • Some users reported a missing share (it was the same one for everyone; it was shared with multiple users)
  • The following database query gave me a hit:

```sql
SELECT s.id, p.configvalue
  FROM oc_storages s
 INNER JOIN oc_preferences p
    ON md5(CONCAT('local::', p.configvalue, '/')) = s.id;
```

  • I deleted the storages it showed (in fact, there were two of them) and the share appeared again

We have more than 20k entries in oc_share, so this really happens only very rarely. But I observed the same behavior on another occasion (again with only a handful of local storages).
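
The cleanup step described above might look like this as a one-shot statement (illustration only, not a supported repair procedure; deleting oc_storages rows orphans their oc_filecache entries, so back up first):

```php
<?php
$db = new PDO('mysql:host=localhost;dbname=owncloud', 'oc_admin', 'secret');
// Mirror of the SELECT above, turned into a DELETE (MySQL multi-table form).
$db->exec(
    "DELETE s FROM oc_storages s
      INNER JOIN oc_preferences p
         ON md5(CONCAT('local::', p.configvalue, '/')) = s.id"
);
```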

@PVince81 (Contributor Author)

Was p.configvalue the data folder or a user name?

Note that there is also a root storage "local::" + datadir, without a user name. That one always exists and is not related to lost shares. If deleted, the entry will reappear at some point.

We should also rename that storage to "root://".

@PVince81 (Contributor Author)

For reference, a legacy storage using the fallback has an ID like "local::path/to/datadir/userid". These should never reappear automatically, as per the code referenced above.
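
For illustration, how the two id forms relate, and why the earlier query matches on md5 (the 64-character hashing rule is how oc_storages stores long ids, as I recall it; the paths are placeholders):

```php
$legacyId = 'local::/path/to/datadir/userid/';  // legacy home storage id
$homeId = 'home::userid';                       // the id the new code uses

// Ids longer than 64 characters are stored md5-hashed in oc_storages,
// hence md5(concat('local::', ..., '/')) in the query further up.
$storedId = strlen($legacyId) > 64 ? md5($legacyId) : $legacyId;
```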

So maybe it was a misunderstanding :-D

@Helios07

According to my chat log with @butonic, the storage was called local::/gpfs/share/sciebo/uni-muenster/uni-muenster.de/projects/sc/scieboprojekt.pbox@uni-muenster.de/

@PVince81 (Contributor Author)

Okay, so it was indeed a user home. I can't explain why it happened, but let's leave it at that and kill the fallback anyway.
