feat(backups): write and merge block concurrency are now configurable #6416

Merged: 3 commits, Sep 16, 2022
Changes from 2 commits
4 changes: 2 additions & 2 deletions @xen-orchestra/backups/RemoteAdapter.js
@@ -634,13 +634,13 @@ class RemoteAdapter {
return path
}

-async writeVhd(path, input, { checksum = true, validator = noop } = {}) {
+async writeVhd(path, input, { checksum = true, validator = noop, writeBlockConcurrency } = {}) {
const handler = this._handler

if (this.#useVhdDirectory()) {
const dataPath = `${dirname(path)}/data/${uuidv4()}.vhd`
await createVhdDirectoryFromStream(handler, dataPath, input, {
-concurrency: 16,
+concurrency: writeBlockConcurrency,
compression: this.#getCompressionType(),
async validator() {
await input.task
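A hedged sketch of a call site for the updated method, assuming an already-constructed RemoteAdapter (`adapter`) and a VHD input stream (`vhdStream`); both names and the destination path are illustrative, not taken from this diff:

```js
// Sketch only: `adapter` wraps a remote handler, `vhdStream` is the VHD data.
await adapter.writeVhd('xo-vm-backups/<vm-uuid>/vdis/<vdi-uuid>/data.vhd', vhdStream, {
  checksum: false,
  // previously hard-coded to 16 inside writeVhd, now controlled by the caller
  writeBlockConcurrency: 16,
})
```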
21 changes: 18 additions & 3 deletions @xen-orchestra/backups/_cleanVm.js
@@ -37,7 +37,7 @@ const computeVhdsSize = (handler, vhdPaths) =>
)

// chain is [ ancestor, child_1, ..., child_n ]
-async function _mergeVhdChain(handler, chain, { logInfo, remove, merge }) {
+async function _mergeVhdChain(handler, chain, { logInfo, remove, merge, mergeBlockConcurrency }) {
if (merge) {
logInfo(`merging VHD chain`, { chain })

@@ -55,6 +55,7 @@ async function _mergeVhdChain(handler, chain, { logInfo, remove, merge }) {
try {
return await mergeVhdChain(handler, chain, {
logInfo,
+mergeBlockConcurrency,
onProgress({ done: d, total: t }) {
done = d
total = t
@@ -181,7 +182,15 @@ const defaultMergeLimiter = limitConcurrency(1)

exports.cleanVm = async function cleanVm(
vmDir,
-{ fixMetadata, remove, merge, mergeLimiter = defaultMergeLimiter, logInfo = noop, logWarn = console.warn }
+{
+  fixMetadata,
+  remove,
+  merge,
+  mergeBlockConcurrency,
+  mergeLimiter = defaultMergeLimiter,
+  logInfo = noop,
+  logWarn = console.warn,
+}
) {
const limitedMergeVhdChain = mergeLimiter(_mergeVhdChain)

@@ -447,7 +456,13 @@ exports.cleanVm = async function cleanVm(
const metadataWithMergedVhd = {}
const doMerge = async () => {
await asyncMap(toMerge, async chain => {
-const merged = await limitedMergeVhdChain(handler, chain, { logInfo, logWarn, remove, merge })
+const merged = await limitedMergeVhdChain(handler, chain, {
+  logInfo,
+  logWarn,
+  remove,
+  merge,
+  mergeBlockConcurrency,
+})
if (merged !== undefined) {
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // the whole chain should have the same metadata file
metadataWithMergedVhd[metadataPath] = true
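For illustration, a hedged sketch of the options cleanVm now accepts; calling it through a RemoteAdapter instance is an assumption, as is the VM directory path:

```js
// Sketch only: `adapter` is a hypothetical RemoteAdapter wrapping the remote.
await adapter.cleanVm('xo-vm-backups/<vm-uuid>', {
  remove: true,
  merge: true,
  // forwarded through _mergeVhdChain down to vhd-lib's mergeVhdChain
  mergeBlockConcurrency: 2,
  logInfo: console.log,
  logWarn: console.warn,
})
```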
1 change: 1 addition & 0 deletions @xen-orchestra/backups/writers/DeltaBackupWriter.js
@@ -204,6 +204,7 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
// merges and chainings
checksum: false,
validator: tmpPath => checkVhd(handler, tmpPath),
+writeBlockConcurrency: this._backup.config.writeBlockConcurrency,
})

if (isDelta) {
1 change: 1 addition & 0 deletions @xen-orchestra/backups/writers/_MixinBackupWriter.js
@@ -39,6 +39,7 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
Task.warning(message, data)
},
lock: false,
+mergeBlockConcurrency: this._backup.config.mergeBlockConcurrency,
})
})
} catch (error) {
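Both writers read their value from `this._backup.config`. A minimal sketch of the assumed config shape (only the two field names come from this diff; the plumbing from xo-server's config.toml into the Backup instance is not shown here):

```js
// Illustrative config fragment consumed by the two writers above.
const backupsConfig = {
  mergeBlockConcurrency: 2, // used by _MixinBackupWriter when cleaning/merging
  writeBlockConcurrency: 16, // used by DeltaBackupWriter when writing VHDs
}
```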
2 changes: 2 additions & 0 deletions CHANGELOG.unreleased.md
@@ -8,6 +8,7 @@
> Users must be able to say: “Nice enhancement, I'm eager to test it”

- [Backup] Improve listing speed by updating caches instead of regenerating them on backup creation/deletion (PR [#6411](https://github.com/vatesfr/xen-orchestra/pull/6411))
+- [Backup] Add `mergeBlockConcurrency` and `writeBlockConcurrency` to allow tuning of backup resource consumption (PR [#6416](https://github.com/vatesfr/xen-orchestra/pull/6416))

### Bug fixes

@@ -34,6 +35,7 @@
<!--packages-start-->

- @xen-orchestra/backups minor
+- vhd-lib minor
- xo-server-auth-saml patch
- xo-web patch

5 changes: 3 additions & 2 deletions packages/vhd-lib/merge.js
@@ -78,7 +78,7 @@ module.exports._cleanupVhds = cleanupVhds
module.exports.mergeVhdChain = limitConcurrency(2)(async function mergeVhdChain(
handler,
chain,
-{ onProgress = noop, logInfo = noop, removeUnused = false } = {}
+{ onProgress = noop, logInfo = noop, removeUnused = false, mergeBlockConcurrency = 2 } = {}
) {
assert(chain.length >= 2)

@@ -123,7 +123,8 @@ module.exports.mergeVhdChain = limitConcurrency(2)(async function mergeVhdChain(
childIsVhdDirectory = childVhd instanceof VhdDirectory
}

-const concurrency = parentIsVhdDirectory && childIsVhdDirectory ? 2 : 1
+// merging a VhdFile must not run concurrently with the potential block reordering after a change
+const concurrency = parentIsVhdDirectory && childIsVhdDirectory ? mergeBlockConcurrency : 1
if (mergeState === undefined) {
// merge should be along a vhd chain
assert.strictEqual(UUID.stringify(childVhd.header.parentUuid), UUID.stringify(parentVhd.footer.uuid))
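The ternary above keeps merges involving a VhdFile sequential, since writing a block into a monolithic .vhd file can reorder blocks, which is unsafe to interleave; only VhdDirectory-to-VhdDirectory merges use the tunable value. A sketch of the bounded loop this concurrency feeds, assuming the `@vates/async-each` helper and a `mergeBlock` method modeled on vhd-lib's:

```js
const { asyncEach } = require('@vates/async-each')

// Sketch: copy the child's blocks into the parent with bounded parallelism.
// `blockIds` and `mergeBlock` are assumptions modeled on vhd-lib internals.
async function copyBlocks(parentVhd, childVhd, blockIds, concurrency) {
  await asyncEach(
    blockIds,
    blockId => parentVhd.mergeBlock(childVhd, blockId),
    { concurrency } // 1 for VhdFile targets, mergeBlockConcurrency otherwise
  )
}
```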
12 changes: 12 additions & 0 deletions packages/xo-server/config.toml
@@ -78,6 +78,7 @@ throttlingDelay = '2 seconds'
[backups]
disableMergeWorker = false


# Mode to use for newly created backup directories
#
# https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation
@@ -87,8 +88,19 @@ snapshotNameLabelTpl = '[XO Backup {job.name}] {vm.name_label}'
# Delay for which backups listing on a remote is cached
listingDebounce = '1 min'

+# Settings used when storing VHDs as directories (S3, encryption)
+# Use 'none' if your file system is already compressed
+# Changing this setting will generate new full backups
vhdDirectoryCompression = 'brotli'

+# How many blocks can be merged in parallel for each running backup
+# Increase to improve performance, reduce if you have timeouts during merges
+mergeBlockConcurrency = 2

+# How many blocks can be uploaded in parallel
+# Increase to improve performance, reduce if you have timeouts or memory errors during transfer
+writeBlockConcurrency = 16

Comment on lines +90 to +102
Member:
As these settings are for a library and are used in two different apps (xo-server & xo-proxy), I feel like we should provide default values and document them somewhere else.

I think I will merge your PR as it is and then move them directly into a centralized place, would that be OK for you?

Member:
Is "2" enough as a default? I think we tried 4 with success and it was "fast enough" vs 2 (when we tested on Cécile's S3). Or maybe it was 2, I don't remember exactly 🤔

@fbeauchamp (Collaborator, Author), Sep 13, 2022:
@julien-f there are default values in the code, and documentation in the config file seems like a good start. Where do you want to put the defaults and the documentation?

@olivierlambert we tested 4, but the results came after the patch release.

Member:
Should we change that to 4 then?

@fbeauchamp (Collaborator, Author), Sep 13, 2022:
I think we should leave this as is in this PR, which only exposes existing values. It will be easier to have some users test it by changing the configuration, and to optimize the default values later.

Member:
@fbeauchamp the config file is not good, because it's in xo-server and not in the lib that really uses these entries. Also, this does not make the settings easily discoverable for xo-proxy.

I think a dedicated @xen-orchestra/backups/config.js file would be a good first answer.

@fbeauchamp (Collaborator, Author):
Yes, good idea.

# This is a work-around.
#
# See https://github.com/vatesfr/xen-orchestra/pull/4674
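Following the review discussion, a possible shape for the centralized defaults file suggested above (hypothetical; no such file is added by this PR):

```js
// Hypothetical @xen-orchestra/backups/config.js floated in the review thread;
// the values mirror the defaults already present in the code.
module.exports = {
  mergeBlockConcurrency: 2,
  writeBlockConcurrency: 16,
}
```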