Repair step for invalid oc_share.uid_owner values #28503

Closed
PVince81 opened this issue Aug 18, 2021 · 6 comments · Fixed by #32211
Labels: 1. to develop, bug, feature: sharing

Comments

PVince81 commented Aug 18, 2021

Steps

  1. Create users "user1", "user2", "user3"
  2. Add "user2" to "group1"
  3. Login as "user1"
  4. Create a folder "test" with some contents
  5. Share "test" with "user2"
  6. Share "test" with "group1"
  7. Search for the user share with select * from oc_share where share_type=0 and share_with='user2' and take note of the share id
  8. Set the uid_owner to another user: update oc_share set uid_owner='user3' where id=$id, replacing $id with the id from the previous query (steps 7 and 8 are consolidated into a short SQL sketch right after this list)
  9. Login as "user2"
  10. Enter the folder "test"
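
Roughly, steps 7 and 8 boil down to the following SQL (a sketch assuming the default oc_ table prefix; $id stands for the share id found in step 7):

    -- step 7: find the user share received by user2 and note its id
    select * from oc_share where share_type = 0 and share_with = 'user2';
    -- step 8: simulate the broken state by pointing uid_owner at an unrelated user
    update oc_share set uid_owner = 'user3' where id = $id;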

Expected result

Contents of "test" visible.

Actual result

Folder "test" appears empty.

Versions

The behavior was observed on Nextcloud 20.0.8, but I was also able to reproduce it on master (3ac9b56).

Analysis

Please note that this situation was reported after a transfer ownership operation that failed partway and left the oc_share table in the state created above.

Because "uid_owner" does not match the actual owner of the storage that "item_source" points to, the code apparently decides to return no data.
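
As an illustration (a sketch only, assuming the default oc_ table prefix; essentially a per-share variant of the detection query in a later comment), the mismatch for the manipulated share can be made visible by comparing the share row with the home mount of the storage it points to:

    select s.id, s.uid_owner, m.user_id as storage_owner
    from oc_share s, oc_filecache f, oc_mounts m
    where s.id = $id
      and s.file_source = f.fileid
      and f.storage = m.storage_id
      and concat('/', m.user_id, '/') = m.mount_point;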

Solutions

  1. Make the code that creates the "supershare" more robust so it tolerates wrong "uid_owner" values, but is that safe? This should probably also log a warning of some sort, without polluting the logs.

  2. Write a repair step that finds such scenarios and updates "uid_owner" to the correct value based on who the original storage owner is.
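
A very rough sketch of what solution 2 could do in plain SQL (assumptions: default oc_ table prefix, and that each affected storage has exactly one home mount row in oc_mounts; an actual repair step would go through the database abstraction layer rather than raw SQL):

    update oc_share s
    set uid_owner = (
        select m.user_id
        from oc_filecache f, oc_mounts m
        where f.fileid = s.file_source
          and f.storage = m.storage_id
          and concat('/', m.user_id, '/') = m.mount_point
    )
    where exists (
        select 1
        from oc_filecache f, oc_mounts m
        where f.fileid = s.file_source
          and f.storage = m.storage_id
          and concat('/', m.user_id, '/') = m.mount_point
          and s.uid_owner != m.user_id
    );

The exists restriction matters here: without it, shares whose storage has no "/<user>/" home mount (group folders, external storages) would get their uid_owner set to NULL.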

PVince81 added the bug, 1. to develop and feature: sharing labels on Aug 18, 2021

PVince81 commented Jan 18, 2022

I think I've got a query that should make it possible to find the mismatched share entries, the ones where the "oc_share.uid_owner" does not match the storage owner as represented in oc_mounts:

    select s.id, s.share_type, s.stime, s.uid_owner, s.uid_initiator, s.share_with, s.file_source, s.file_target,
           m.storage_id, m.user_id, m.mount_point
    from oc_share s, oc_filecache f, oc_mounts m
    where s.file_source = f.fileid
      and f.storage = m.storage_id
      and concat('/', m.user_id, '/') = m.mount_point
      and s.uid_owner != m.user_id

PVince81 commented

Then, for each entry, the repair step must update the share owner to match the one from the mounts.

If the uid_initiator is equal to the (old) owner, it should be changed as well.

I'm not sure yet what to do in the other cases, and not sure what happens when the uid_initiator belongs to a share whose owner doesn't have access any more.
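
In raw SQL terms, and per row returned by the detection query above, that would look roughly like this ($id, $old_owner and $storage_owner are placeholders for s.id, s.uid_owner and m.user_id from that query, and "the owner" is read as the old, wrong uid_owner value):

    -- fix the share owner
    update oc_share set uid_owner = $storage_owner where id = $id;
    -- if the initiator was that same (wrong) owner, fix it as well
    update oc_share set uid_initiator = $storage_owner where id = $id and uid_initiator = $old_owner;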

PVince81 added this to 🧭 Planning evaluation (don't pick) in 📁 Files team (obsolete, don't use) via automation on Jan 19, 2022
PVince81 commented

And make sure to tweak the query so it can run faster; it seems to be slow on bigger environments.
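
One idea (an untested assumption on my side): rewrite the implicit joins as explicit ones driven from oc_share, so the database can join oc_filecache via its primary key (fileid) and, ideally, an index on oc_mounts.storage_id, and then process the result in chunks of share ids rather than in one go:

    select s.id, s.uid_owner, s.uid_initiator, s.share_with, m.user_id, m.mount_point
    from oc_share s
    join oc_filecache f on f.fileid = s.file_source
    join oc_mounts m on m.storage_id = f.storage
                    and m.mount_point = concat('/', m.user_id, '/')
    where s.uid_owner != m.user_id

Whether that actually helps depends on the database and the available indexes, so it would need to be measured on a big instance.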

PVince81 commented

Or alternatively, make this a separate occ command so it doesn't run on upgrade together with the other repair steps.

PVince81 commented

  • ⚠️ Beware of group folders and system-wide external storages, where no storage owner exists; those need to be excluded somehow.
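
One possible way to do that (an assumption on my side, relying on the home-storage id formats home::<user> and object::user:<user> in oc_storages) is to join oc_storages and only look at home storages, so group folders and system-wide external storages drop out:

    select s.id, s.uid_owner, m.user_id
    from oc_share s
    join oc_filecache f on f.fileid = s.file_source
    join oc_storages st on st.numeric_id = f.storage
    join oc_mounts m on m.storage_id = f.storage
                    and m.mount_point = concat('/', m.user_id, '/')
    where s.uid_owner != m.user_id
      and (st.id like 'home::%' or st.id like 'object::user:%')

The existing concat('/', m.user_id, '/') = m.mount_point condition may already skip most of these cases, since such storages have no "/<user>/" home mount, but an explicit storage filter makes the intent clearer.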

PVince81 commented

PR here: #32211
