New persist #1734
Conversation
DavidVorick added some commits on Apr 21, 2017
DavidVorick added the in progress label on Apr 21, 2017
DavidVorick (Member) commented on Apr 21, 2017
This PR is intended to address the file corruption error that has occasionally been wiping out users' files after a hard shutdown. The new method has more safety around it, and notably does not use os.Rename at all.
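As a rough illustration of the approach described above, here is a minimal sketch of a rename-free, checksummed save. The function name, file layout, and package are assumptions for illustration, not the PR's actual API:

```go
package persistsketch

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"os"
)

// saveChecksummedJSON writes a hex checksum line followed by the JSON payload
// directly into the target file, then syncs before returning. Illustrative
// sketch only; the PR's real format and helpers differ.
func saveChecksummedJSON(filename string, v interface{}) error {
	data, err := json.MarshalIndent(v, "", "\t")
	if err != nil {
		return err
	}
	sum := sha256.Sum256(data)

	file, err := os.OpenFile(filename, os.O_RDWR|os.O_TRUNC|os.O_CREATE, 0600)
	if err != nil {
		return err
	}
	defer file.Close()

	// Checksum first, then the data it covers.
	if _, err := file.WriteString(hex.EncodeToString(sum[:]) + "\n"); err != nil {
		return err
	}
	if _, err := file.Write(data); err != nil {
		return err
	}
	// Sync so a hard shutdown can't leave a half-written file that still
	// looks valid; on reload, a checksum mismatch flags the corruption.
	return file.Sync()
}
```

The point of the sync-in-place pattern is that corruption becomes detectable (bad checksum) rather than silent, without relying on os.Rename semantics.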
DavidVorick added some commits on Apr 21, 2017
```go
	build.ExtendErr("unable to write version to settings temp file", err)
}
if _, err = wal.fileSettingsTmp.Write(b); err != nil {
	build.ExtendErr("unable to write data settings temp file", err)
```
DavidVorick (Member) commented on Apr 21, 2017
Not exactly sure what you're asking, but the primary difference is that Sync isn't going to be called until the next iteration of the sync loop, which could be 500ms away. Sync returns a lot faster if the data has already been written to disk, so we write it early to minimize the total amount of downtime (the contractmanager is stop-the-world while the syncs are happening; if we can do most of the disk writes ahead of time, it'll be stopped for less time).
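As a toy sketch of that write-early/sync-later pattern (the type and field names below are assumptions, not the contractmanager's real code):

```go
package walsketch

import (
	"os"
	"sync"
	"time"
)

// wal is a toy stand-in for the contractmanager's write-ahead log; it only
// illustrates the idea of writing early and syncing later.
type wal struct {
	mu   sync.Mutex
	file *os.File
}

// write pushes bytes to the OS immediately, so the kernel can begin flushing
// them in the background long before the next sync iteration.
func (w *wal) write(b []byte) error {
	_, err := w.file.Write(b)
	return err
}

// threadedSyncLoop wakes roughly every 500ms. Only the Sync call sits inside
// the stop-the-world lock, and Sync returns quickly when most of the data has
// already reached disk thanks to the early writes above.
func (w *wal) threadedSyncLoop(stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		case <-time.After(500 * time.Millisecond):
			w.mu.Lock()
			_ = w.file.Sync()
			w.mu.Unlock()
		}
	}
}
```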
```go
// Read everything else.
remainingBytes, err := ioutil.ReadAll(dec.Buffered())
if err != nil {
	return build.ExtendErr("unable to read persisted json object data", err)
```
lukechampine (Member) commented on Apr 21, 2017
oic, it's so you can calculate the checksum. Could probably achieve the same thing with an io.TeeReader, but this is fine.
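For reference, a TeeReader-based variant might look roughly like this. This is a sketch of the reviewer's suggestion, not the code in this PR, and the package and function names are made up:

```go
package loadsketch

import (
	"crypto/sha256"
	"encoding/json"
	"io"
	"os"
)

// loadWithTee decodes JSON while feeding every byte the decoder reads into a
// hash via io.TeeReader, avoiding a separate ReadAll pass. Caveat: the decoder
// reads ahead, so the hash covers whatever it pulled from the file, not just
// the decoded value.
func loadWithTee(filename string, v interface{}) ([]byte, error) {
	file, err := os.Open(filename)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	h := sha256.New()
	dec := json.NewDecoder(io.TeeReader(file, h))
	if err := dec.Decode(v); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}
```

That read-ahead caveat is one reason draining dec.Buffered() explicitly, as the PR does, is a perfectly reasonable alternative.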
```go
// Write out the data to the real file, with a sync.
err = func() (err error) {
	file, err := os.OpenFile(filename, os.O_RDWR|os.O_TRUNC|os.O_CREATE, 0600)
```
```go
		// the manual checksum.
		remainingBytes = remainingBytes[9:]
	}
}
```
lukechampine (Member) commented on Apr 21, 2017
Doesn't seem like the "checksum exists, but is wrong" case is handled explicitly.
DavidVorick (Member) commented on Apr 21, 2017
It's handled by providing the data, unchecksummed, to the JSON decoder. This is needed to preserve compatibility with old versions calling SaveFile on an object that's just a string.
Not that we used it that way anywhere, but I figured this was the best way of handling things.
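Loosely, the fallback behaviour being described might look like the sketch below. The "hex checksum line + payload" layout is an assumption for illustration, not the PR's actual on-disk format:

```go
package loadsketch

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
)

// loadJSONCompat sketches the described fallback: if a checksum line is
// present and verifies, only the checksummed payload is decoded; otherwise the
// whole blob goes to the JSON decoder, so legacy files (or files whose
// checksum does not match) still get a chance to decode.
func loadJSONCompat(raw []byte, v interface{}) error {
	if i := bytes.IndexByte(raw, '\n'); i > 0 {
		wantSum := string(raw[:i])
		payload := raw[i+1:]
		sum := sha256.Sum256(payload)
		if wantSum == hex.EncodeToString(sum[:]) {
			return json.Unmarshal(payload, v)
		}
	}
	// No checksum, or checksum mismatch: decode the raw bytes directly,
	// preserving compatibility with objects saved by older versions.
	return json.Unmarshal(raw, v)
}
```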
lukechampine merged commit d1289a7 into master on Apr 21, 2017
DavidVorick removed the in progress label on Apr 21, 2017
lukechampine deleted the new-persist branch on Apr 22, 2017
```go
err = m.saveSync()
if err != nil {
	m.log.Println("ERROR: Unable to save miner persist:", err)
}
```
avahowell (Contributor) commented on Apr 22, 2017
This was already merged, but it looks like the lock is never released here.
DavidVorick (Member) commented on Apr 22, 2017
Looks like you are right. Wish there were static tools to detect this type of mistake.
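For reference, the usual way to make this class of bug hard to write is to defer the unlock immediately after taking the lock. A toy sketch (the type and field names are assumptions, not the real Miner type):

```go
package minersketch

import (
	"log"
	"sync"
)

// miner is a toy stand-in for the real Miner type.
type miner struct {
	mu  sync.Mutex
	log *log.Logger
}

func (m *miner) saveSync() error { return nil }

// threadedSave locks and immediately defers the unlock, so every return path,
// including any early error return, releases the mutex.
func (m *miner) threadedSave() {
	m.mu.Lock()
	defer m.mu.Unlock()

	if err := m.saveSync(); err != nil {
		m.log.Println("ERROR: Unable to save miner persist:", err)
	}
}
```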
DavidVorick commented on Apr 21, 2017
Man... the diff was too aggressive. json.go is a completely new file, jsonold.go is the old json.go, and if you want to read the code in a more sane way, you should just view json.go as a standalone file.
Still writing more comprehensive tests, but I wanted to see how the checksum parser worked on Windows. I'm wondering if a different approach to newlines will screw up my byte-parsing.
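If CRLF line endings do turn out to be an issue, one way to sidestep it is to trim a trailing carriage return when reading header lines. A rough sketch, not the PR's parser:

```go
package parsesketch

import (
	"bufio"
	"strings"
)

// readHeaderLine reads one header line (for example a version or checksum
// line) and strips any trailing "\r", so a file that picked up Windows-style
// CRLF line endings parses the same as an LF-only file.
func readHeaderLine(r *bufio.Reader) (string, error) {
	line, err := r.ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.TrimRight(line, "\r\n"), nil
}
```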