
Instance: Set migration.stateful=true be default when creating a new VM #12832

Merged

Conversation

gabrielmougard
Contributor

This is the follow-up PR to #12821. This work will need a rebase once that PR is merged.

As described in the first pull request (see above), we want to continue improving the VM live migration user experience. This time, when a new VM is created (this won't apply to existing ones), we want migration.stateful=true to be explicitly set.

This work only enables the new default behaviour for instances with a shared storage backend. Another PR will be required to enable migration.stateful=true by default for local storage instances, as some investigation is needed to compute the value of the instance's root disk size.state configuration key.
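The intended default described above can be sketched as follows. This is a minimal illustration, not code from the PR: applyVMDefaults is a hypothetical helper name, and the real implementation decides shared-storage eligibility elsewhere.

```go
package main

import "fmt"

// applyVMDefaults sketches the intended behaviour: when a new VM is created on
// shared storage, default migration.stateful to "true" unless the user has set
// it explicitly. (Hypothetical helper, for illustration only.)
func applyVMDefaults(config map[string]string, sharedStorage bool) map[string]string {
	_, explicitlySet := config["migration.stateful"]
	if sharedStorage && !explicitlySet {
		config["migration.stateful"] = "true"
	}
	return config
}

func main() {
	cfg := applyVMDefaults(map[string]string{}, true)
	fmt.Println(cfg["migration.stateful"])
}
```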

@gabrielmougard gabrielmougard marked this pull request as ready for review February 7, 2024 15:27
@github-actions github-actions bot added the Documentation Documentation needs updating label Feb 7, 2024
@tomponline
Member

@gabrielmougard please can you put this back into draft given the changes discussed in the cloud meeting the other day, thanks

@tomponline
Member

@gabrielmougard you can open a separate PR with just the linting fixes though

@tomponline tomponline changed the title Feat: Set migration.stateful=true be default when creating a new VM Instance: Set migration.stateful=true be default when creating a new VM Feb 8, 2024
@gabrielmougard gabrielmougard marked this pull request as draft February 8, 2024 12:33
@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from 38f7bc4 to 5fd61b3 Compare February 8, 2024 12:52
@gabrielmougard
Contributor Author

@tomponline here are the linting fixes for both QEMU and LXD drivers as you asked: #12841

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch 2 times, most recently from b177e4e to 969d36d Compare February 8, 2024 16:55
@gabrielmougard gabrielmougard marked this pull request as ready for review February 8, 2024 16:55
@tomponline
Member

@gabrielmougard can you separate this PR from the size.state-related commits please? They are unrelated, right?

@gabrielmougard
Contributor Author

@tomponline yes, this is the commit from the first PR. I can remove it from this PR.

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from 969d36d to 24f37b8 Compare February 9, 2024 14:00
@tomponline
Member

static analysis failing

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from 24f37b8 to 53c34d0 Compare February 9, 2024 15:11
@tomponline
Member

needs a rebase by the looks of it now.

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from 53c34d0 to eb1a0a2 Compare February 12, 2024 08:14
@tomponline tomponline left a comment

also needs an API extension as you're introducing a new config key.

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from eb1a0a2 to d0a5319 Compare February 12, 2024 09:45
@github-actions github-actions bot added the API Changes to the REST API label Feb 12, 2024
@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from d0a5319 to 38083b8 Compare February 12, 2024 10:13
ru-fu
ru-fu previously approved these changes Feb 22, 2024
@gabrielmougard
Contributor Author

@tomponline actually I should have mentioned it earlier, but there is a circular dependency problem with putting ExpandInstanceConfig in the instancetype package, now that I want to import clusterConfig "github.com/canonical/lxd/lxd/cluster/config" in order to use the following:

func ExpandInstanceConfig(globalConfig *clusterConfig.Config, config map[string]string, profiles []api.Profile) map[string]string {
	expandedConfig := map[string]string{}

	// Apply global config overriding
	if globalConfig != nil {
		globalInstancesMigrationStateful := globalConfig.InstancesMigrationStateful()
		if globalInstancesMigrationStateful {
			expandedConfig["migration.stateful"] = strconv.FormatBool(globalInstancesMigrationStateful)
		}
	}

	...

so I can't really rebase easily now.

@tomponline
Member

> (quoting the circular-dependency comment above)

Where is it occurring?

@gabrielmougard
Contributor Author

gabrielmougard commented Feb 22, 2024

> Where is it occurring?

In lxd/instance/instancetype/instance_utils.go (in the ExpandInstanceConfig function definition), and also in lxd/db/cluster/instances.go (line 1249, where we need to call c := clusterConfig.Load(ctx, tx) before passing c to instancetype.ExpandInstanceConfig).

@gabrielmougard
Contributor Author

I found a way to avoid this using s.GlobalConfig.Dump() and a bit more logic in ExpandInstanceConfig to handle a map[string]any:

diff --git a/lxd/db/cluster/instances.go b/lxd/db/cluster/instances.go
index 90e35f928..3ca0e6709 100644
--- a/lxd/db/cluster/instances.go
+++ b/lxd/db/cluster/instances.go
@@ -77,7 +77,7 @@ type InstanceFilter struct {
 }
 
 // ToAPI converts the database Instance to API type.
-func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx) (*api.Instance, error) {
+func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx, globalConfig map[string]any) (*api.Instance, error) {
 	profiles, err := GetInstanceProfiles(ctx, tx, i.ID)
 	if err != nil {
 		return nil, err
@@ -108,7 +108,7 @@ func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx) (*api.Instance, error)
 		return nil, err
 	}
 
-	expandedConfig := instancetype.ExpandInstanceConfig(config, apiProfiles)
+	expandedConfig := instancetype.ExpandInstanceConfig(globalConfig, config, apiProfiles)
 
 	archName, err := osarch.ArchitectureName(i.Architecture)
 	if err != nil {
diff --git a/lxd/instance/drivers/driver_common.go b/lxd/instance/drivers/driver_common.go
index 332a216b4..2dfad0bb0 100644
--- a/lxd/instance/drivers/driver_common.go
+++ b/lxd/instance/drivers/driver_common.go
@@ -509,7 +509,7 @@ func (d *common) deviceVolatileSetFunc(devName string) func(save map[string]stri
 
 // expandConfig applies the config of each profile in order, followed by the local config.
 func (d *common) expandConfig() error {
-	d.expandedConfig = instancetype.ExpandInstanceConfig(d.state.GlobalConfig, d.localConfig, d.profiles)
+	d.expandedConfig = instancetype.ExpandInstanceConfig(d.state.GlobalConfig.Dump(), d.localConfig, d.profiles)
 	d.expandedDevices = instancetype.ExpandInstanceDevices(d.localDevices, d.profiles)
 
 	return nil
diff --git a/lxd/instance/instancetype/instance_utils.go b/lxd/instance/instancetype/instance_utils.go
index 93f0ae03e..853369b68 100644
--- a/lxd/instance/instancetype/instance_utils.go
+++ b/lxd/instance/instancetype/instance_utils.go
@@ -1,14 +1,30 @@
 package instancetype
 
 import (
+	"strconv"
+
 	deviceConfig "github.com/canonical/lxd/lxd/device/config"
 	"github.com/canonical/lxd/shared/api"
 )
 
 // ExpandInstanceConfig expands the given instance config with the config values of the given profiles.
-func ExpandInstanceConfig(config map[string]string, profiles []api.Profile) map[string]string {
+func ExpandInstanceConfig(globalConfig map[string]any, config map[string]string, profiles []api.Profile) map[string]string {
 	expandedConfig := map[string]string{}
 
+	// Apply global config overriding
+	if globalConfig != nil {
+		globalInstancesMigrationStatefulMaybe, ok := globalConfig["instances.migration.stateful"]
+		if ok {
+			globalInstancesMigrationStatefulStr, ok := globalInstancesMigrationStatefulMaybe.(string)
+			if ok {
+				globalInstancesMigrationStateful, _ := strconv.ParseBool(globalInstancesMigrationStatefulStr)
+				if globalInstancesMigrationStateful {
+					expandedConfig["migration.stateful"] = globalInstancesMigrationStatefulStr
+				}
+			}
+		}
+	}
+
 	// Apply all the profiles.
 	profileConfigs := make([]map[string]string, len(profiles))
 	for i, profile := range profiles {
diff --git a/lxd/instances_post.go b/lxd/instances_post.go
index 921e80599..231a1fc85 100644
--- a/lxd/instances_post.go
+++ b/lxd/instances_post.go
@@ -1134,7 +1134,7 @@ func instancesPost(d *Daemon, r *http.Request) response.Response {
 				Reason:        apiScriptlet.InstancePlacementReasonNew,
 			}
 
-			reqExpanded.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig, reqExpanded.Config, profiles)
+			reqExpanded.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig.Dump(), reqExpanded.Config, profiles)
 			reqExpanded.Devices = instancetype.ExpandInstanceDevices(deviceConfig.NewDevices(reqExpanded.Devices), profiles).CloneNative()
 
 			targetMemberInfo, err = scriptlet.InstancePlacementRun(r.Context(), logger.Log, s, &reqExpanded, candidateMembers, leaderAddress)
diff --git a/lxd/network/network_utils_sriov.go b/lxd/network/network_utils_sriov.go
index 60dad508c..1020d68e2 100644
--- a/lxd/network/network_utils_sriov.go
+++ b/lxd/network/network_utils_sriov.go
@@ -62,7 +62,7 @@ func SRIOVGetHostDevicesInUse(s *state.State) (map[string]struct{}, error) {
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
 		return tx.InstanceList(ctx, func(dbInst db.InstanceArgs, p api.Project) error {
 			// Expand configs so we take into account profile devices.
-			dbInst.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig, dbInst.Config, dbInst.Profiles)
+			dbInst.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig.Dump(), dbInst.Config, dbInst.Profiles)
 			dbInst.Devices = instancetype.ExpandInstanceDevices(dbInst.Devices, dbInst.Profiles)
 
 			for name, dev := range dbInst.Devices {
diff --git a/lxd/project/permissions.go b/lxd/project/permissions.go
index fdf9a1a31..89f010556 100644
--- a/lxd/project/permissions.go
+++ b/lxd/project/permissions.go
@@ -286,7 +286,17 @@ func GetImageSpaceBudget(tx *db.ClusterTx, projectName string) (int64, error) {
 		return -1, err
 	}
 
-	instances, err := expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	globalConfigStr, err := tx.Config(context.Background())
+	if err != nil {
+		return -1, err
+	}
+
+	globalConfig := make(map[string]any)
+	for k, v := range globalConfigStr {
+		globalConfig[k] = v
+	}
+
+	instances, err := expandInstancesConfigAndDevices(globalConfig, info.Instances, info.Profiles)
 	if err != nil {
 		return -1, err
 	}
@@ -328,7 +338,17 @@ func checkRestrictionsAndAggregateLimits(tx *db.ClusterTx, info *projectInfo) er
 		return nil
 	}
 
-	instances, err := expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	globalConfigStr, err := tx.Config(context.Background())
+	if err != nil {
+		return err
+	}
+
+	globalConfig := make(map[string]any)
+	for k, v := range globalConfigStr {
+		globalConfig[k] = v
+	}
+
+	instances, err := expandInstancesConfigAndDevices(globalConfig, info.Instances, info.Profiles)
 	if err != nil {
 		return err
 	}
@@ -972,7 +992,17 @@ func AllowProjectUpdate(tx *db.ClusterTx, projectName string, config map[string]
 		return err
 	}
 
-	info.Instances, err = expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	globalConfigStr, err := tx.Config(context.Background())
+	if err != nil {
+		return err
+	}
+
+	globalConfig := make(map[string]any)
+	for k, v := range globalConfigStr {
+		globalConfig[k] = v
+	}
+
+	info.Instances, err = expandInstancesConfigAndDevices(globalConfig, info.Instances, info.Profiles)
 	if err != nil {
 		return err
 	}
@@ -1200,8 +1230,19 @@ func fetchProject(tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*p
 	}
 
 	instances := make([]api.Instance, 0, len(dbInstances))
+
+	globalConfigStr, err := tx.Config(ctx)
+	if err != nil {
+		return nil, fmt.Errorf("Fetch cluster configuration: %w", err)
+	}
+
+	globalConfig := make(map[string]any)
+	for k, v := range globalConfigStr {
+		globalConfig[k] = v
+	}
+
 	for _, instance := range dbInstances {
-		apiInstance, err := instance.ToAPI(ctx, tx.Tx())
+		apiInstance, err := instance.ToAPI(ctx, tx.Tx(), globalConfig)
 		if err != nil {
 			return nil, fmt.Errorf("Failed to get API data for instance %q in project %q: %w", instance.Name, instance.Project, err)
 		}
@@ -1226,7 +1267,7 @@ func fetchProject(tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*p
 
 // Expand the configuration and devices of the given instances, taking the give
 // project profiles into account.
-func expandInstancesConfigAndDevices(instances []api.Instance, profiles []api.Profile) ([]api.Instance, error) {
+func expandInstancesConfigAndDevices(globalConfig map[string]any, instances []api.Instance, profiles []api.Profile) ([]api.Instance, error) {
 	expandedInstances := make([]api.Instance, len(instances))
 
 	// Index of all profiles by name.
@@ -1244,7 +1285,7 @@ func expandInstancesConfigAndDevices(instances []api.Instance, profiles []api.Pr
 		}
 
 		expandedInstances[i] = instance
-		expandedInstances[i].Config = instancetype.ExpandInstanceConfig(instance.Config, apiProfiles)
+		expandedInstances[i].Config = instancetype.ExpandInstanceConfig(globalConfig, instance.Config, apiProfiles)
 		expandedInstances[i].Devices = instancetype.ExpandInstanceDevices(deviceconfig.NewDevices(instance.Devices), apiProfiles).CloneNative()
 	}
 
diff --git a/lxd/project/state.go b/lxd/project/state.go
index c92a214ad..a9fe8f7e4 100644
--- a/lxd/project/state.go
+++ b/lxd/project/state.go
@@ -24,7 +24,17 @@ func GetCurrentAllocations(ctx context.Context, tx *db.ClusterTx, projectName st
 		return nil, fmt.Errorf("Project %q returned empty info struct", projectName)
 	}
 
-	info.Instances, err = expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	globalConfigStr, err := tx.Config(context.Background())
+	if err != nil {
+		return nil, fmt.Errorf("Failed to fetch global config: %w", err)
+	}
+
+	globalConfig := make(map[string]any)
+	for k, v := range globalConfigStr {
+		globalConfig[k] = v
+	}
+
+	info.Instances, err = expandInstancesConfigAndDevices(globalConfig, info.Instances, info.Profiles)
 	if err != nil {
 		return nil, err
 	}

I could not really find anything else that worked in every place where we need to pass the global config.
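The conversion loop that recurs at each call site in the diff above (widening tx.Config's map[string]string into the map[string]any that the new ExpandInstanceConfig signature accepts) could also be factored out. A minimal sketch, with toAnyMap as a hypothetical helper name:

```go
package main

import "fmt"

// toAnyMap widens a map[string]string into a map[string]any, mirroring the
// per-call-site loops in the diff above.
func toAnyMap(in map[string]string) map[string]any {
	out := make(map[string]any, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}

func main() {
	globalConfig := toAnyMap(map[string]string{"instances.migration.stateful": "true"})
	fmt.Println(globalConfig["instances.migration.stateful"])
}
```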

@tomponline
Member

> In lxd/instance/instancetype/instance_utils.go (in the ExpandInstanceConfig function definition) and in lxd/db/cluster/instances.go (line 1249, we need to call c := clusterConfig.Load(ctx, tx) before passing c to instancetype.ExpandInstanceConfig) as well

Maybe we should stop passing clusterConfig around and extract something like a map of config options. Let me have a think.

@tomponline
Member

@gabrielmougard using .Dump() seems like a good approach to me.

I think the logic for detecting whether instances.migration.stateful is set can be made slightly more concise. Also, instead of making another query with tx.Config, can you pass the state.State down into that function so we can access the in-memory config?

Thanks!
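The more concise check is possible because indexing a nil map in Go yields the zero value, so the lookup and type assertion can be combined without a separate nil guard. A small illustration of that language behaviour (not code from the PR):

```go
package main

import (
	"fmt"
	"strconv"
)

// statefulEnabled shows the concise form: globalConfig may be nil, because
// indexing a nil map returns the zero value (nil here), and asserting nil to
// string simply yields ok == false.
func statefulEnabled(globalConfig map[string]any) bool {
	s, ok := globalConfig["instances.migration.stateful"].(string)
	if !ok {
		return false
	}

	enabled, _ := strconv.ParseBool(s)
	return enabled
}

func main() {
	fmt.Println(statefulEnabled(nil)) // safe even though the map is nil
	fmt.Println(statefulEnabled(map[string]any{"instances.migration.stateful": "true"}))
}
```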

@gabrielmougard
Contributor Author

gabrielmougard commented Feb 23, 2024

> (quoting the review suggestion above)

@tomponline OK. So here is the updated patch based on what you asked (there should no longer be any tx.Config calls, and I simplified the map[string]any validation code in ExpandInstanceConfig):

diff --git a/lxd/api_project.go b/lxd/api_project.go
index 768d1aef4..b42234106 100644
--- a/lxd/api_project.go
+++ b/lxd/api_project.go
@@ -703,7 +703,7 @@ func projectChange(s *state.State, project *api.Project, req api.ProjectPut) res
 
 	// Update the database entry.
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
-		err := projecthelpers.AllowProjectUpdate(tx, project.Name, req.Config, configChanged)
+		err := projecthelpers.AllowProjectUpdate(s.GlobalConfig, tx, project.Name, req.Config, configChanged)
 		if err != nil {
 			return err
 		}
@@ -972,7 +972,7 @@ func projectStateGet(d *Daemon, r *http.Request) response.Response {
 
 	// Get current limits and usage.
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
-		result, err := projecthelpers.GetCurrentAllocations(ctx, tx, name)
+		result, err := projecthelpers.GetCurrentAllocations(s.GlobalConfig.Dump(), ctx, tx, name)
 		if err != nil {
 			return err
 		}
diff --git a/lxd/db/cluster/instances.go b/lxd/db/cluster/instances.go
index 90e35f928..3ca0e6709 100644
--- a/lxd/db/cluster/instances.go
+++ b/lxd/db/cluster/instances.go
@@ -77,7 +77,7 @@ type InstanceFilter struct {
 }
 
 // ToAPI converts the database Instance to API type.
-func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx) (*api.Instance, error) {
+func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx, globalConfig map[string]any) (*api.Instance, error) {
 	profiles, err := GetInstanceProfiles(ctx, tx, i.ID)
 	if err != nil {
 		return nil, err
@@ -108,7 +108,7 @@ func (i *Instance) ToAPI(ctx context.Context, tx *sql.Tx) (*api.Instance, error)
 		return nil, err
 	}
 
-	expandedConfig := instancetype.ExpandInstanceConfig(config, apiProfiles)
+	expandedConfig := instancetype.ExpandInstanceConfig(globalConfig, config, apiProfiles)
 
 	archName, err := osarch.ArchitectureName(i.Architecture)
 	if err != nil {
diff --git a/lxd/images.go b/lxd/images.go
index 58f0c6e0f..94d947013 100644
--- a/lxd/images.go
+++ b/lxd/images.go
@@ -1035,7 +1035,7 @@ func imagesPost(d *Daemon, r *http.Request) response.Response {
 	// allowed to use.
 	var budget int64
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
-		budget, err = projectutils.GetImageSpaceBudget(tx, projectName)
+		budget, err = projectutils.GetImageSpaceBudget(s.GlobalConfig, tx, projectName)
 		return err
 	})
 	if err != nil {
diff --git a/lxd/instance/drivers/driver_common.go b/lxd/instance/drivers/driver_common.go
index 332a216b4..2dfad0bb0 100644
--- a/lxd/instance/drivers/driver_common.go
+++ b/lxd/instance/drivers/driver_common.go
@@ -509,7 +509,7 @@ func (d *common) deviceVolatileSetFunc(devName string) func(save map[string]stri
 
 // expandConfig applies the config of each profile in order, followed by the local config.
 func (d *common) expandConfig() error {
-	d.expandedConfig = instancetype.ExpandInstanceConfig(d.state.GlobalConfig, d.localConfig, d.profiles)
+	d.expandedConfig = instancetype.ExpandInstanceConfig(d.state.GlobalConfig.Dump(), d.localConfig, d.profiles)
 	d.expandedDevices = instancetype.ExpandInstanceDevices(d.localDevices, d.profiles)
 
 	return nil
diff --git a/lxd/instance/instancetype/instance_utils.go b/lxd/instance/instancetype/instance_utils.go
index 93f0ae03e..5bdcfd3ea 100644
--- a/lxd/instance/instancetype/instance_utils.go
+++ b/lxd/instance/instancetype/instance_utils.go
@@ -1,14 +1,25 @@
 package instancetype
 
 import (
+	"strconv"
+
 	deviceConfig "github.com/canonical/lxd/lxd/device/config"
 	"github.com/canonical/lxd/shared/api"
 )
 
 // ExpandInstanceConfig expands the given instance config with the config values of the given profiles.
-func ExpandInstanceConfig(config map[string]string, profiles []api.Profile) map[string]string {
+func ExpandInstanceConfig(globalConfig map[string]any, config map[string]string, profiles []api.Profile) map[string]string {
 	expandedConfig := map[string]string{}
 
+	// Apply global config overriding
+	globalInstancesMigrationStatefulStr, ok := globalConfig["instances.migration.stateful"].(string)
+	if ok {
+		globalInstancesMigrationStateful, _ := strconv.ParseBool(globalInstancesMigrationStatefulStr)
+		if globalInstancesMigrationStateful {
+			expandedConfig["migration.stateful"] = globalInstancesMigrationStatefulStr
+		}
+	}
+
 	// Apply all the profiles.
 	profileConfigs := make([]map[string]string, len(profiles))
 	for i, profile := range profiles {
diff --git a/lxd/instance_patch.go b/lxd/instance_patch.go
index d865db8bb..9bf1bfbc0 100644
--- a/lxd/instance_patch.go
+++ b/lxd/instance_patch.go
@@ -200,7 +200,7 @@ func instancePatch(d *Daemon, r *http.Request) response.Response {
 			apiProfiles = append(apiProfiles, *apiProfile)
 		}
 
-		return projecthelpers.AllowInstanceUpdate(tx, projectName, name, req, c.LocalConfig())
+		return projecthelpers.AllowInstanceUpdate(s.GlobalConfig, tx, projectName, name, req, c.LocalConfig())
 	})
 	if err != nil {
 		return response.SmartError(err)
diff --git a/lxd/instance_put.go b/lxd/instance_put.go
index 066d22421..638af54c5 100644
--- a/lxd/instance_put.go
+++ b/lxd/instance_put.go
@@ -148,7 +148,7 @@ func instancePut(d *Daemon, r *http.Request) response.Response {
 				apiProfiles = append(apiProfiles, *apiProfile)
 			}
 
-			return projecthelpers.AllowInstanceUpdate(tx, projectName, name, configRaw, inst.LocalConfig())
+			return projecthelpers.AllowInstanceUpdate(s.GlobalConfig, tx, projectName, name, configRaw, inst.LocalConfig())
 		})
 		if err != nil {
 			return response.SmartError(err)
diff --git a/lxd/instances_post.go b/lxd/instances_post.go
index 921e80599..54787a977 100644
--- a/lxd/instances_post.go
+++ b/lxd/instances_post.go
@@ -51,7 +51,7 @@ func ensureDownloadedImageFitWithinBudget(s *state.State, r *http.Request, op *o
 
 	var budget int64
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
-		budget, err = project.GetImageSpaceBudget(tx, p.Name)
+		budget, err = project.GetImageSpaceBudget(s.GlobalConfig, tx, p.Name)
 		return err
 	})
 	if err != nil {
@@ -642,7 +642,7 @@ func createFromBackup(s *state.State, r *http.Request, projectName string, data
 			Type:        api.InstanceType(bInfo.Config.Container.Type),
 		}
 
-		return project.AllowInstanceCreation(tx, projectName, req)
+		return project.AllowInstanceCreation(s.GlobalConfig, tx, projectName, req)
 	})
 	if err != nil {
 		return response.SmartError(err)
@@ -1102,7 +1102,7 @@ func instancesPost(d *Daemon, r *http.Request) response.Response {
 		if !clusterNotification {
 			// Check that the project's limits are not violated. Note this check is performed after
 			// automatically generated config values (such as ones from an InstanceType) have been set.
-			err = project.AllowInstanceCreation(tx, targetProjectName, req)
+			err = project.AllowInstanceCreation(s.GlobalConfig, tx, targetProjectName, req)
 			if err != nil {
 				return err
 			}
@@ -1134,7 +1134,7 @@ func instancesPost(d *Daemon, r *http.Request) response.Response {
 				Reason:        apiScriptlet.InstancePlacementReasonNew,
 			}
 
-			reqExpanded.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig, reqExpanded.Config, profiles)
+			reqExpanded.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig.Dump(), reqExpanded.Config, profiles)
 			reqExpanded.Devices = instancetype.ExpandInstanceDevices(deviceConfig.NewDevices(reqExpanded.Devices), profiles).CloneNative()
 
 			targetMemberInfo, err = scriptlet.InstancePlacementRun(r.Context(), logger.Log, s, &reqExpanded, candidateMembers, leaderAddress)
diff --git a/lxd/network/network_utils_sriov.go b/lxd/network/network_utils_sriov.go
index 60dad508c..1020d68e2 100644
--- a/lxd/network/network_utils_sriov.go
+++ b/lxd/network/network_utils_sriov.go
@@ -62,7 +62,7 @@ func SRIOVGetHostDevicesInUse(s *state.State) (map[string]struct{}, error) {
 	err = s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
 		return tx.InstanceList(ctx, func(dbInst db.InstanceArgs, p api.Project) error {
 			// Expand configs so we take into account profile devices.
-			dbInst.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig, dbInst.Config, dbInst.Profiles)
+			dbInst.Config = instancetype.ExpandInstanceConfig(s.GlobalConfig.Dump(), dbInst.Config, dbInst.Profiles)
 			dbInst.Devices = instancetype.ExpandInstanceDevices(dbInst.Devices, dbInst.Profiles)
 
 			for name, dev := range dbInst.Devices {
diff --git a/lxd/profiles_utils.go b/lxd/profiles_utils.go
index 7e89dc0ee..89c7f31b8 100644
--- a/lxd/profiles_utils.go
+++ b/lxd/profiles_utils.go
@@ -17,7 +17,7 @@ import (
 func doProfileUpdate(s *state.State, p api.Project, profileName string, id int64, profile *api.Profile, req api.ProfilePut) error {
 	// Check project limits.
 	err := s.DB.Cluster.Transaction(context.TODO(), func(ctx context.Context, tx *db.ClusterTx) error {
-		return project.AllowProfileUpdate(tx, p.Name, profileName, req)
+		return project.AllowProfileUpdate(s.GlobalConfig, tx, p.Name, profileName, req)
 	})
 	if err != nil {
 		return err
diff --git a/lxd/project/permissions.go b/lxd/project/permissions.go
index fdf9a1a31..1d911bf7b 100644
--- a/lxd/project/permissions.go
+++ b/lxd/project/permissions.go
@@ -10,6 +10,7 @@ import (
 	"strings"
 
 	"github.com/canonical/lxd/lxd/auth"
+	clusterConfig "github.com/canonical/lxd/lxd/cluster/config"
 	"github.com/canonical/lxd/lxd/db"
 	"github.com/canonical/lxd/lxd/db/cluster"
 	deviceconfig "github.com/canonical/lxd/lxd/device/config"
@@ -25,8 +26,8 @@ import (
 
 // AllowInstanceCreation returns an error if any project-specific limit or
 // restriction is violated when creating a new instance.
-func AllowInstanceCreation(tx *db.ClusterTx, projectName string, req api.InstancesPost) error {
-	info, err := fetchProject(tx, projectName, true)
+func AllowInstanceCreation(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName string, req api.InstancesPost) error {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return err
 	}
@@ -80,7 +81,7 @@ func AllowInstanceCreation(tx *db.ClusterTx, projectName string, req api.Instanc
 		return err
 	}
 
-	err = checkRestrictionsAndAggregateLimits(tx, info)
+	err = checkRestrictionsAndAggregateLimits(globalConfig, tx, info)
 	if err != nil {
 		return fmt.Errorf("Failed checking if instance creation allowed: %w", err)
 	}
@@ -227,8 +228,8 @@ func checkRestrictionsOnVolatileConfig(project api.Project, instanceType instanc
 
 // AllowVolumeCreation returns an error if any project-specific limit or
 // restriction is violated when creating a new custom volume in a project.
-func AllowVolumeCreation(tx *db.ClusterTx, projectName string, req api.StorageVolumesPost) error {
-	info, err := fetchProject(tx, projectName, true)
+func AllowVolumeCreation(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName string, req api.StorageVolumesPost) error {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return err
 	}
@@ -248,7 +249,7 @@ func AllowVolumeCreation(tx *db.ClusterTx, projectName string, req api.StorageVo
 		Config: req.Config,
 	})
 
-	err = checkRestrictionsAndAggregateLimits(tx, info)
+	err = checkRestrictionsAndAggregateLimits(globalConfig, tx, info)
 	if err != nil {
 		return fmt.Errorf("Failed checking if volume creation allowed: %w", err)
 	}
@@ -260,8 +261,8 @@ func AllowVolumeCreation(tx *db.ClusterTx, projectName string, req api.StorageVo
 // for writing images.
 //
 // If no limit is in place, return -1.
-func GetImageSpaceBudget(tx *db.ClusterTx, projectName string) (int64, error) {
-	info, err := fetchProject(tx, projectName, true)
+func GetImageSpaceBudget(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName string) (int64, error) {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return -1, err
 	}
@@ -286,7 +287,7 @@ func GetImageSpaceBudget(tx *db.ClusterTx, projectName string) (int64, error) {
 		return -1, err
 	}
 
-	instances, err := expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	instances, err := expandInstancesConfigAndDevices(globalConfig.Dump(), info.Instances, info.Profiles)
 	if err != nil {
 		return -1, err
 	}
@@ -307,7 +308,7 @@ func GetImageSpaceBudget(tx *db.ClusterTx, projectName string) (int64, error) {
 
 // Check that we would not violate the project limits or restrictions if we
 // were to commit the given instances and profiles.
-func checkRestrictionsAndAggregateLimits(tx *db.ClusterTx, info *projectInfo) error {
+func checkRestrictionsAndAggregateLimits(globalConfig *clusterConfig.Config, tx *db.ClusterTx, info *projectInfo) error {
 	// List of config keys for which we need to check aggregate values
 	// across all project instances.
 	aggregateKeys := []string{}
@@ -328,7 +329,7 @@ func checkRestrictionsAndAggregateLimits(tx *db.ClusterTx, info *projectInfo) er
 		return nil
 	}
 
-	instances, err := expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	instances, err := expandInstancesConfigAndDevices(globalConfig.Dump(), info.Instances, info.Profiles)
 	if err != nil {
 		return err
 	}
@@ -857,9 +858,9 @@ func isVMLowLevelOptionForbidden(key string) bool {
 
 // AllowInstanceUpdate returns an error if any project-specific limit or
 // restriction is violated when updating an existing instance.
-func AllowInstanceUpdate(tx *db.ClusterTx, projectName, instanceName string, req api.InstancePut, currentConfig map[string]string) error {
+func AllowInstanceUpdate(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName, instanceName string, req api.InstancePut, currentConfig map[string]string) error {
 	var updatedInstance *api.Instance
-	info, err := fetchProject(tx, projectName, true)
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return err
 	}
@@ -893,7 +894,7 @@ func AllowInstanceUpdate(tx *db.ClusterTx, projectName, instanceName string, req
 		return err
 	}
 
-	err = checkRestrictionsAndAggregateLimits(tx, info)
+	err = checkRestrictionsAndAggregateLimits(globalConfig, tx, info)
 	if err != nil {
 		return fmt.Errorf("Failed checking if instance update allowed: %w", err)
 	}
@@ -903,8 +904,8 @@ func AllowInstanceUpdate(tx *db.ClusterTx, projectName, instanceName string, req
 
 // AllowVolumeUpdate returns an error if any project-specific limit or
 // restriction is violated when updating an existing custom volume.
-func AllowVolumeUpdate(tx *db.ClusterTx, projectName, volumeName string, req api.StorageVolumePut, currentConfig map[string]string) error {
-	info, err := fetchProject(tx, projectName, true)
+func AllowVolumeUpdate(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName, volumeName string, req api.StorageVolumePut, currentConfig map[string]string) error {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return err
 	}
@@ -927,7 +928,7 @@ func AllowVolumeUpdate(tx *db.ClusterTx, projectName, volumeName string, req api
 		info.Volumes[i].Config = req.Config
 	}
 
-	err = checkRestrictionsAndAggregateLimits(tx, info)
+	err = checkRestrictionsAndAggregateLimits(globalConfig, tx, info)
 	if err != nil {
 		return fmt.Errorf("Failed checking if volume update allowed: %w", err)
 	}
@@ -937,8 +938,8 @@ func AllowVolumeUpdate(tx *db.ClusterTx, projectName, volumeName string, req api
 
 // AllowProfileUpdate checks that project limits and restrictions are not
 // violated when changing a profile.
-func AllowProfileUpdate(tx *db.ClusterTx, projectName, profileName string, req api.ProfilePut) error {
-	info, err := fetchProject(tx, projectName, true)
+func AllowProfileUpdate(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName, profileName string, req api.ProfilePut) error {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, true)
 	if err != nil {
 		return err
 	}
@@ -957,7 +958,7 @@ func AllowProfileUpdate(tx *db.ClusterTx, projectName, profileName string, req a
 		info.Profiles[i].Devices = req.Devices
 	}
 
-	err = checkRestrictionsAndAggregateLimits(tx, info)
+	err = checkRestrictionsAndAggregateLimits(globalConfig, tx, info)
 	if err != nil {
 		return fmt.Errorf("Failed checking if profile update allowed: %w", err)
 	}
@@ -966,13 +967,13 @@ func AllowProfileUpdate(tx *db.ClusterTx, projectName, profileName string, req a
 }
 
 // AllowProjectUpdate checks the new config to be set on a project is valid.
-func AllowProjectUpdate(tx *db.ClusterTx, projectName string, config map[string]string, changed []string) error {
-	info, err := fetchProject(tx, projectName, false)
+func AllowProjectUpdate(globalConfig *clusterConfig.Config, tx *db.ClusterTx, projectName string, config map[string]string, changed []string) error {
+	info, err := fetchProject(globalConfig.Dump(), tx, projectName, false)
 	if err != nil {
 		return err
 	}
 
-	info.Instances, err = expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	info.Instances, err = expandInstancesConfigAndDevices(globalConfig.Dump(), info.Instances, info.Profiles)
 	if err != nil {
 		return err
 	}
@@ -1151,7 +1152,7 @@ type projectInfo struct {
 // If the skipIfNoLimits flag is true, then profiles, instances and volumes
 // won't be loaded if the profile has no limits set on it, and nil will be
 // returned.
-func fetchProject(tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*projectInfo, error) {
+func fetchProject(globalConfig map[string]any, tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*projectInfo, error) {
 	ctx := context.Background()
 	dbProject, err := cluster.GetProject(ctx, tx.Tx(), projectName)
 	if err != nil {
@@ -1201,7 +1202,7 @@ func fetchProject(tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*p
 
 	instances := make([]api.Instance, 0, len(dbInstances))
 	for _, instance := range dbInstances {
-		apiInstance, err := instance.ToAPI(ctx, tx.Tx())
+		apiInstance, err := instance.ToAPI(ctx, tx.Tx(), globalConfig)
 		if err != nil {
 			return nil, fmt.Errorf("Failed to get API data for instance %q in project %q: %w", instance.Name, instance.Project, err)
 		}
@@ -1226,7 +1227,7 @@ func fetchProject(tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*p
 
 // Expand the configuration and devices of the given instances, taking the give
 // project profiles into account.
-func expandInstancesConfigAndDevices(instances []api.Instance, profiles []api.Profile) ([]api.Instance, error) {
+func expandInstancesConfigAndDevices(globalConfig map[string]any, instances []api.Instance, profiles []api.Profile) ([]api.Instance, error) {
 	expandedInstances := make([]api.Instance, len(instances))
 
 	// Index of all profiles by name.
@@ -1244,7 +1245,7 @@ func expandInstancesConfigAndDevices(instances []api.Instance, profiles []api.Pr
 		}
 
 		expandedInstances[i] = instance
-		expandedInstances[i].Config = instancetype.ExpandInstanceConfig(instance.Config, apiProfiles)
+		expandedInstances[i].Config = instancetype.ExpandInstanceConfig(globalConfig, instance.Config, apiProfiles)
 		expandedInstances[i].Devices = instancetype.ExpandInstanceDevices(deviceconfig.NewDevices(instance.Devices), apiProfiles).CloneNative()
 	}
 
diff --git a/lxd/project/state.go b/lxd/project/state.go
index c92a214ad..63ae198a8 100644
--- a/lxd/project/state.go
+++ b/lxd/project/state.go
@@ -11,11 +11,11 @@ import (
 )
 
 // GetCurrentAllocations returns the current resource utilization for a given project.
-func GetCurrentAllocations(ctx context.Context, tx *db.ClusterTx, projectName string) (map[string]api.ProjectStateResource, error) {
+func GetCurrentAllocations(globalConfig map[string]any, ctx context.Context, tx *db.ClusterTx, projectName string) (map[string]api.ProjectStateResource, error) {
 	result := map[string]api.ProjectStateResource{}
 
 	// Get the project.
-	info, err := fetchProject(tx, projectName, false)
+	info, err := fetchProject(globalConfig, tx, projectName, false)
 	if err != nil {
 		return nil, err
 	}
@@ -24,7 +24,7 @@ func GetCurrentAllocations(ctx context.Context, tx *db.ClusterTx, projectName st
 		return nil, fmt.Errorf("Project %q returned empty info struct", projectName)
 	}
 
-	info.Instances, err = expandInstancesConfigAndDevices(info.Instances, info.Profiles)
+	info.Instances, err = expandInstancesConfigAndDevices(globalConfig, info.Instances, info.Profiles)
 	if err != nil {
 		return nil, err
 	}
diff --git a/lxd/storage/backend_lxd.go b/lxd/storage/backend_lxd.go
index aac41160f..6bd7a6d35 100644
--- a/lxd/storage/backend_lxd.go
+++ b/lxd/storage/backend_lxd.go
@@ -7101,7 +7101,7 @@ func (b *lxdBackend) CreateCustomVolumeFromISO(projectName string, volName strin
 	}
 
 	err := b.state.DB.Cluster.Transaction(b.state.ShutdownCtx, func(ctx context.Context, tx *db.ClusterTx) error {
-		return project.AllowVolumeCreation(tx, projectName, req)
+		return project.AllowVolumeCreation(b.state.GlobalConfig, tx, projectName, req)
 	})
 	if err != nil {
 		return fmt.Errorf("Failed checking volume creation allowed: %w", err)
@@ -7186,7 +7186,7 @@ func (b *lxdBackend) CreateCustomVolumeFromBackup(srcBackup backup.Info, srcData
 	}
 
 	err := b.state.DB.Cluster.Transaction(b.state.ShutdownCtx, func(ctx context.Context, tx *db.ClusterTx) error {
-		return project.AllowVolumeCreation(tx, srcBackup.Project, req)
+		return project.AllowVolumeCreation(b.state.GlobalConfig, tx, srcBackup.Project, req)
 	})
 	if err != nil {
 		return fmt.Errorf("Failed checking volume creation allowed: %w", err)
diff --git a/lxd/storage_volumes.go b/lxd/storage_volumes.go
index b88b99821..81e727728 100644
--- a/lxd/storage_volumes.go
+++ b/lxd/storage_volumes.go
@@ -704,7 +704,7 @@ func storagePoolVolumesPost(d *Daemon, r *http.Request) response.Response {
 			return err
 		}
 
-		err = project.AllowVolumeCreation(tx, projectName, req)
+		err = project.AllowVolumeCreation(s.GlobalConfig, tx, projectName, req)
 		if err != nil {
 			return err
 		}
@@ -1883,7 +1883,7 @@ func storagePoolVolumePut(d *Daemon, r *http.Request) response.Response {
 		if req.Config != nil || req.Restore == "" {
 			// Possibly check if project limits are honored.
 			err = s.DB.Cluster.Transaction(r.Context(), func(ctx context.Context, tx *db.ClusterTx) error {
-				return project.AllowVolumeUpdate(tx, projectName, volumeName, req, dbVolume.Config)
+				return project.AllowVolumeUpdate(s.GlobalConfig, tx, projectName, volumeName, req, dbVolume.Config)
 			})
 			if err != nil {
 				return response.SmartError(err)

Note that in the case of

func fetchProject(globalConfig map[string]any, tx *db.ClusterTx, projectName string, skipIfNoLimits bool) (*projectInfo, error)

I had to pass the globalConfig as a map[string]any: passing a state.State or a clusterConfig.Config leads to a circular dependency.
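A minimal sketch of why the plain map works: by accepting the already-dumped config as a `map[string]any`, the `project` package never needs to import `lxd/cluster/config`, which breaks the cycle. The helper below is illustrative, not the real LXD code, and how `Dump()` represents boolean values (typed bools vs. strings) is an assumption handled both ways.

```go
package main

import "fmt"

// statefulMigrationDefault is an illustrative stand-in, not the real LXD
// helper: it reads the server-level key straight from a dumped config map,
// so this code needs no import of the cluster/config package.
func statefulMigrationDefault(globalConfig map[string]any) bool {
	v, ok := globalConfig["instances.migration.stateful"]
	if !ok {
		return false
	}

	// The dump may hold typed values or plain strings; handle both
	// (an assumption about how Dump() represents booleans).
	switch val := v.(type) {
	case bool:
		return val
	case string:
		return val == "true"
	}

	return false
}

func main() {
	cfg := map[string]any{"instances.migration.stateful": "true"}
	fmt.Println(statefulMigrationDefault(cfg)) // prints "true"
}
```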

@tomponline
Member

I had to pass the globalConfig as a map[string]any: passing a state.State or a clusterConfig.Config leads to a circular dependency.

Cool, thanks. Please update the PR and I'll take a look Tuesday. Ta

@gabrielmougard
Contributor Author

I had to pass the globalConfig as a map[string]any: passing a state.State or a clusterConfig.Config leads to a circular dependency.

Cool, thanks. Please update the PR and I'll take a look Tuesday. Ta

Thanks, updated.

@tomponline
Member

@gabrielmougard unit tests failing

github.com/canonical/lxd/lxd/network/zone	[no test files]
# github.com/canonical/lxd/lxd/project_test [github.com/canonical/lxd/lxd/project.test]
Error: lxd/project/permissions_test.go:33:54: not enough arguments in call to project.AllowInstanceCreation
	have (*db.ClusterTx, string, api.InstancesPost)
	want (*"github.com/canonical/lxd/lxd/cluster/config".Config, *db.ClusterTx, string, api.InstancesPost)
Error: lxd/project/permissions_test.go:63:48: not enough arguments in call to project.AllowInstanceCreation
	have (*db.ClusterTx, string, api.InstancesPost)
	want (*"github.com/canonical/lxd/lxd/cluster/config".Config, *db.ClusterTx, string, api.InstancesPost)
Error: lxd/project/permissions_test.go:94:48: not enough arguments in call to project.AllowInstanceCreation
	have (*db.ClusterTx, string, api.InstancesPost)
	want (*"github.com/canonical/lxd/lxd/cluster/config".Config, *db.ClusterTx, string, api.InstancesPost)
Error: lxd/project/permissions_test.go:125:48: not enough arguments in call to project.AllowInstanceCreation
	have (*db.ClusterTx, string, api.InstancesPost)
	want (*"github.com/canonical/lxd/lxd/cluster/config".Config, *db.ClusterTx, string, api.InstancesPost)
Error: lxd/project/permissions_test.go:156:48: not enough arguments in call to project.AllowInstanceCreation
	have (*db.ClusterTx, string, api.InstancesPost)
	want (*"github.com/canonical/lxd/lxd/cluster/config".Config, *db.ClusterTx, string, api.InstancesPost)
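These errors are mechanical fallout from the widened signature: every caller, including the unit tests, now has to pass the global config as a new leading argument. A toy sketch of the pattern, using hypothetical stand-in types rather than the real LXD ones:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for the real *db.ClusterTx and *clusterConfig.Config.
type ClusterTx struct{}
type Config struct{}

// Before: AllowInstanceCreation(tx *ClusterTx, project string) error.
// After the refactor the config comes first, so old call sites (like the
// failing tests above) no longer compile until they are updated.
func AllowInstanceCreation(cfg *Config, tx *ClusterTx, project string) error {
	if cfg == nil {
		return errors.New("global config is required")
	}

	return nil
}

func main() {
	// Updated call site, passing the new leading argument.
	err := AllowInstanceCreation(&Config{}, &ClusterTx{}, "default")
	fmt.Println(err == nil) // prints "true"
}
```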

@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch 2 times, most recently from f176791 to 968b58c Compare February 26, 2024 11:30
Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
…nable a default `migration.stateful` value for all created VM instances

Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch 2 times, most recently from 5177a8b to 99288bc Compare February 26, 2024 13:00
Through the server-level `instances.migration.stateful` key, a VM can, at creation time, have its `migration.stateful` config set to the same value as the server config `instances.migration.stateful`. This default is overridden by the instance-level `migration.stateful` config, if set; otherwise it is overridden by the `migration.stateful` config at the applied-profiles level, if set.

Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
…tance living on a shared storage pool

Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
… `migration.stateful`

Signed-off-by: Gabriel Mougard <gabriel.mougard@canonical.com>
@gabrielmougard gabrielmougard force-pushed the feat/improve-vm-live-migration-part2 branch from 99288bc to 307bb7e Compare February 26, 2024 13:30
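The precedence described in the commit message above (explicit instance config first, then the applied profiles, then the server-level key) can be sketched as follows. The resolution function is illustrative, not the actual LXD implementation, and the last-profile-wins ordering is an assumption mirroring normal config expansion.

```go
package main

import "fmt"

// resolveStatefulMigration mimics the documented precedence for a newly
// created VM: instance-level migration.stateful wins, then the applied
// profiles, then the server-level instances.migration.stateful default.
func resolveStatefulMigration(instanceCfg map[string]string, profileCfgs []map[string]string, serverCfg map[string]string) string {
	if v, ok := instanceCfg["migration.stateful"]; ok {
		return v
	}

	// Profiles are applied in order, so walk them in reverse to let the
	// last applied profile win (an assumption about expansion order).
	for i := len(profileCfgs) - 1; i >= 0; i-- {
		if v, ok := profileCfgs[i]["migration.stateful"]; ok {
			return v
		}
	}

	if v, ok := serverCfg["instances.migration.stateful"]; ok {
		return v
	}

	return "false"
}

func main() {
	server := map[string]string{"instances.migration.stateful": "true"}

	// No instance or profile override: the server default applies.
	fmt.Println(resolveStatefulMigration(nil, nil, server))

	// An explicit instance setting beats everything else.
	fmt.Println(resolveStatefulMigration(map[string]string{"migration.stateful": "false"}, nil, server))
}
```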
@gabrielmougard
Contributor Author

@tomponline aside from a storage bucket error on lvm, the rest of the tests are looking good.

@tomponline
Member

@tomponline aside from a storage bucket error on lvm, the rest of the tests are looking good.

Thanks, what was the issue in the end?

@tomponline tomponline merged commit 0ff71ae into canonical:main Feb 26, 2024
25 of 27 checks passed
Labels: API (Changes to the REST API), Documentation (Documentation needs updating)