
Move between drives doesn't end up at the destination #411

Closed
1 of 2 tasks
gdelcroix opened this issue May 3, 2024 · 15 comments
Comments

@gdelcroix

  • Version (cloudcmd -v): 17.4.0
  • Node Version node -v: 20.12.2
  • OS (uname -a on Linux): debian 6.1.85-1
  • Browser name/version: Firefox 125.0.2
  • Used Command Line Parameters: F6 between 2 volumes
  • Changed Config:
  • I'm ready to donate on Patreon 🎁
  • I'm willing to work on this issue 💪

Moving files between sda1 (4 TB HDD) and sdb1 (4 TB HDD) in fact filled the 64 GB system drive (SD card) to capacity, in its ./srv/dev-disk-by-uuid-"destination drive" folder.

I'm now stuck with a locked server, accessible by console (SSH), but with data stuck between here and there, not where intended, and I fear losing the transferred data.

@gdelcroix
Author

I will add that copying from the NTFS-formatted sda drive to the ext4 sdb worked like a charm; things went wrong when I tried to move the data back from sdb to sda after reformatting sda to ext4. So if we don't figure out the what and the where, I may have lost data just by moving it. As a last resort I may be able to restore it with drive forensics (testdisk), if we don't mess things up before then.

@coderaiser
Owner

Could you provide more information? As I understand it, you are moving files from one directory to another? What size are the files? Do you get any errors?

@coderaiser
Owner

Here is the code: when copying is done, the files are removed from the source; that is what a move is. Are you suggesting adding an option to disable that?
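For readers following the thread: below is a minimal sketch of that copy-then-remove pattern in Node.js. It is not Cloud Commander's actual implementation, and moveAcrossDevices is a made-up name for illustration; it just shows why a cross-drive move is inherently a copy followed by a delete (a plain rename fails with EXDEV across filesystems, which is exactly the two-drive case in this issue).

```js
// Minimal sketch, assuming Node.js >= 15 (for stream/promises).
// Not Cloud Commander's code; illustrates copy-then-remove only.
const fs = require('fs');
const {pipeline} = require('stream/promises');

async function moveAcrossDevices(src, dest) {
    try {
        // Fast path: same filesystem, rename is atomic and instant.
        await fs.promises.rename(src, dest);
        return;
    } catch (error) {
        // EXDEV means src and dest live on different filesystems.
        if (error.code !== 'EXDEV')
            throw error;
    }

    // Slow path: stream the data over to the destination drive.
    await pipeline(
        fs.createReadStream(src),
        fs.createWriteStream(dest),
    );

    // Remove the source only after the copy completed without throwing.
    await fs.promises.unlink(src);
}
```

The important property is the ordering: the source is only unlinked after the copy resolves, so a failed copy should leave the source intact.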

@gdelcroix
Author

Hello, where do I need to go to find logs, if there are any? I didn't get any error; I just didn't find the files where I expected them to be. At the same time I couldn't log in to openmediavault anymore, so I investigated and found that the 64 GB system drive was full, and that some of the data I intended to transfer back from slave A to B was the culprit. At that point there were 48 GB in "./srv/disk-" and I was trying to transfer a batch of files totalling almost 1.5 TB (if I recall correctly).
My main goal is to avoid messing things up again, because I am going to run testdisk on drive B to restore the files I wanted to roll back to slave A after switching it from NTFS to ext4.

@coderaiser
Owner

coderaiser commented May 6, 2024

Record a video of what you are doing in the file manager and of what the error is. All these filesystem types and volume sizes give me no information at all.

There are no additional logs in Cloud Commander, only what it writes to stdout.
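(Worth noting for this setup, assuming the Docker runtime that the config excerpt later in the thread points to: a container's stdout is captured in its log, so `docker logs <container-name>` is where that output ends up; <container-name> stands for whatever the Cloud Commander container is called here.)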

@gdelcroix
Author

[screenshot]

@gdelcroix
Author

[screenshot]

@coderaiser
Owner

What errors did you have during the move?

@gdelcroix
Author

cloudmd.log.json

Container log; I hope it will clarify things for you. I don't remember the error message; an error may have popped up, but I don't recall.

@coderaiser
Owner

coderaiser commented May 6, 2024

What exactly went wrong? You moved files, and not all of them were copied before the source was removed?

@gdelcroix
Author

Not a single file made it to the destination.

@gdelcroix
Author

It may be related to the first drive being erased and not re-indexed properly in the container parameters, but in that case wouldn't cloudcmd have raised a "destination unreachable" error, and wouldn't I have been unable to see the destination in the GUI?

@gdelcroix
Author

gdelcroix commented May 6, 2024

I extracted config.v2.json, and it is indeed related to the container parameters:

"/dev/sda1": {
  "Source": "/srv/dev-disk-by-uuid-acfcc3d1-07a2-4eb0-a40f-b0b4c3597b6a",
  "Destination": "/dev/sda1",
  "RW": true, "Name": "", "Driver": "", "Type": "bind", "Relabel": "rw", "Propagation": "rprivate",
  "Spec": {"Type": "bind", "Source": "/srv/dev-disk-by-uuid-acfcc3d1-07a2-4eb0-a40f-b0b4c3597b6a", "Target": "/dev/sda1"},
  "SkipMountpointCreation": false},
"/dev/sdb1": {
  "Source": "/srv/dev-disk-by-uuid-6C32CA2B03A0E2C5",
  "Destination": "/dev/sdb1",
  "RW": true, "Name": "", "Driver": "", "Type": "bind", "Relabel": "rw", "Propagation": "rprivate",
  "Spec": {"Type": "bind", "Source": "/srv/dev-disk-by-uuid-6C32CA2B03A0E2C5", "Target": "/dev/sdb1"},
  "SkipMountpointCreation": false},
"/mount/fs": {
  "Source": "/",
  "Destination": "/mount/fs",
  "RW": true, "Name": "", "Driver": "", "Type": "bind", "Relabel": "rw", "Propagation": "rslave",
  "Spec": {"Type": "bind", "Source": "/", "Target": "/mount/fs"},
  "SkipMountpointCreation": false}

@gdelcroix
Author

gdelcroix commented May 6, 2024

I can't understand why the container used a "server local" copy of the real drive instead of the physical disk, rather than simply not displaying a drive that was no longer there (since the UUID changed after the ext4 formatting), and how the container and cloudcmd worked together to end up using a phantom drive and manage to write a terabyte into the phantom space.

"For I was conscious that I knew practically nothing..." (Plato, Apology 22d)

@coderaiser
Owner

It looks like it is related to the containers you are using; Cloud Commander is Node.js based, and it knows nothing about any containers.
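One remediation note, assuming Docker as above: the short -v bind-mount syntax silently creates a missing host source directory, while the long --mount type=bind syntax refuses to start the container when the source path does not exist, which would have failed fast here instead of filling the system drive. Recreating the container after a reformat, so the bind source points at the new by-uuid path, avoids the phantom directory entirely.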
