Implement 'incremental' provision support #553

Closed
SubPointSupport opened this Issue Jun 15, 2015 · 31 comments

@SubPointSupport
Contributor

SubPointSupport commented Jun 15, 2015

Work in progress

  • Implement DefaultIncrementalModelTreeTraverseService
  • Regression tests: basic API
  • Regression tests: all definitions + double deployment
  • Regression tests: all definitions + double deployment and random updates
  • Implement special provision services (both CSOM/SSOM)
  • Implement fluent API to make existing provision services work with incremental updates
  • Implement fluent API to toggle provision service mode - from default to incremental
  • Regression tests for newly created provision services and fluent API
  • Implement IncrementalModelPrettyPrintService (to output model state)
  • Model hash classes / API
  • Persistence storage base service
  • Persistence storage 'file system' implementation, tests
  • Persistence storage CSOM - DefaultCSOMWebPropertyBagStorage
  • Persistence storage SSOM - DefaultSSOMWebPropertyBagStorage
  • Persistence storage SSOM - DefaultSSOMWebApplicationPropertyBagStorage
  • Persistence storage SSOM - DefaultSSOMFarmPropertyBagStorage
  • Add option for persistence storage auto-detection

Problem

SPMeta2 can take an unreasonable amount of time on large deployments. For instance, thousands of definitions and/or terabytes of content in the farm can make provision time unreasonably long.

Solution

SPMeta2 needs to offer a concept of "incremental" or "conditional" provision. In a nutshell, SPMeta2 should be able to calculate changes and decide whether or not the current artifact should be provisioned.

Incremental provision should enable robust, fast provision on extremely large SharePoint deployments - terabytes of data and thousands of artifacts.

A potential implementation should work smoothly with already existing code, legacy code and current deployments. Adopting it should not require significant code changes or troubleshooting effort.

Implementation

Two potential implementations for calculating differences are envisioned:

  • Model based - difference between current deployed model and previously deployed model
  • SharePoint based - difference between current deployed model and actual SharePoint objects

To simplify things, the following terminology is used:

  • "model hash" - represents the model hash (model + model nodes + definitions)
  • "node hash" - represents a model node hash (node + definition)
  • "definition hash" - represents a definition hash (definition only)

With two strategies to calculate the difference, the easiest way is to implement diff calculation between models. Calculating the difference between a model and actual SharePoint objects is complex and would require a huge amount of code to be written for the CSOM, SSOM and O365 APIs.

The first draft of the "model based" diff calculation should work as follows (sketched in code below):

  • calculate the model hash of the current model
  • fetch the model hash of the previously deployed model (via an extensible provider: from a local folder, SharePoint or other storage)
  • calculate the diff between the models and mark nodes as "should provision" or "should not provision"
  • provision the model
  • expose the hashes of both the current and the previously deployed model
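
To make the flow concrete, here is a minimal C# sketch - every name in it (hashService, storage, Flatten, ShouldProvision, FindNodeHash) is hypothetical and used for illustration only, not the API that eventually shipped:

// Illustrative sketch of the diff-marking pass - all names are hypothetical.
public void DeployIncrementally(object host, ModelNode model, string modelId)
{
    var currentHash = hashService.CalculateModelHash(model);   // hash of the current model
    var previousHash = storage.LoadModelHash(modelId);         // null on the very first run

    foreach (var node in model.Flatten())
    {
        var nodeHash = hashService.CalculateNodeHash(node);
        var previousNodeHash = previousHash != null ? previousHash.FindNodeHash(node) : null;

        // provision only nodes that are new or whose hash has changed
        node.ShouldProvision = (previousNodeHash == null) || (nodeHash != previousNodeHash);
    }

    provisionService.DeployModel(host, model);
    storage.SaveModelHash(modelId, currentHash);               // persist for the next run
}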

A set of additional services is to be provided:

  • a service to calculate model hashes
  • a service to persist the model hash (OOTB implementations for folders, SharePoint CSOM, SharePoint SSOM; see the interface sketch after this list)
  • a service to output "final model" with marked nodes [will / will not be provisioned]
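
A hypothetical shape for such a hash persistence service - a sketch of the responsibility only, not the shipped interface; the real implementations referenced in the checklist are DefaultFileSystemPersistenceStorage and the CSOM/SSOM property bag storages:

// Hypothetical interface - illustrates the persistence responsibility only;
// not the actual SPMeta2 contract.
public interface IModelHashPersistenceStorage
{
    // returns null when no hash has been saved for this model id yet
    string LoadModelHash(string modelId);

    void SaveModelHash(string modelId, string modelHash);
}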

The suggested implementation has to cover both the CSOM and SSOM APIs, be covered by unit and regression tests, and offer traceability to understand which nodes are/aren't provisioned and why.

@SubPointSupport SubPointSupport self-assigned this Jun 15, 2015

@SubPointSupport SubPointSupport added this to the backlog milestone Jun 15, 2015

@SubPointSupport SubPointSupport changed the title from Consider 'smart' model update - if only definition "hash" was changed to Implement 'incremental' provision support Jul 29, 2015

@SubPointSupport SubPointSupport modified the milestones: 2015.08.03, 2015.08.10 Aug 2, 2015

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ more testing and various fixes

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Persistence storage base service
+ Persistence storage 'file system' implementation, tests

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Persistence storage base service
+ Persistence storage 'file system' implementation, tests
+ Persistence storage CSOM - DefaultCSOMWebPropertyBagStorage

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Persistence storage SSOM - DefaultSSOMWebPropertyBagStorage
+ Persistence storage SSOM - DefaultSSOMWebApplicationPropertyBagStorage
+ Persistence storage SSOM - DefaultSSOMFarmPropertyBagStorage

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Add option for persistence storage auto-detection
@rolandoldengarm

rolandoldengarm commented Mar 3, 2017

Wow you guys are amazing @SubPointSupport 👍 Looking forward to implementation.

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Add option for persistence storage auto-detection

SubPointSupport added a commit that referenced this issue Mar 3, 2017

Merge pull request #974 from SubPointSolutions/dev
Implement 'incremental' provision support #553
@SubPointSupport
Contributor

SubPointSupport commented Mar 3, 2017

Implemented. We're still testing, but beta access is available as follows:

VS2012+ NuGet feed (v2):
https://www.myget.org/F/subpointsolutions-staging/api/v2

VS2012+ NuGet feed (v3):
https://www.myget.org/F/subpointsolutions-staging/api/v3/index.json

Install-Package -Id "package-name" -Version 1.2.120-beta1
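
For instance, assuming you're after the CSOM package (SPMeta2.CSOM is the assumed package id here; substitute the package you actually use):

Install-Package -Id "SPMeta2.CSOM" -Version 1.2.120-beta1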

Both CSOM/SSOM provision with auto-detection for model hash persistence:

var incrementalProvisionConfig = new IncrementalProvisionConfig();

// in this case, M2 auto-detects the runtime/API and stores the model hash
// in the Property Bag at web, web app or farm level after a successful provision
incrementalProvisionConfig.AutoDetectSharePointPersistenceStorage = true;

// configure incremental provision as per the options
provisionService.SetIncrementalProvisionMode(incrementalProvisionConfig);

// assign an ID - the model needs it so that SPMeta2 can find/save the model hash using that key
// use "my-model", "my.model" or something like that
model.SetIncrementalProvisionModelId(incrementalModelId);

// deploy the model as usual
provisionService.DeployModel(host, model);

// get back to normal provision
provisionService.SetDefaultProvisionMode();

Trace WHAT actually was or was NOT provisioned:

provisionService.DeployModel(host, model);

var tracer = new DefaultIncrementalModelPrettyPrintService();

Console.WriteLine("Deployed model with incremental updates:");
Console.WriteLine(Environment.NewLine + tracer.PrintModel(model));

Trace in real time:

provisionService.OnModelNodeProcessed += (sender, args) =>
{
    // node-level flag set by the incremental provision engine
    var incrementalRequireSelfProcessingValue = args.CurrentNode.NonPersistentPropertyBag
        .FirstOrDefault(p => p.Name == "_sys.IncrementalRequireSelfProcessingValue");

    // true when the node was actually provisioned on this run
    bool wasDeployed = true;

    if (incrementalRequireSelfProcessingValue != null)
        wasDeployed = ConvertUtils.ToBoolWithDefault(incrementalRequireSelfProcessingValue.Value, false);

    Trace.WriteLine(
        string.Format("Processed: [{0}/{1}] - [{2}%] - [{3}] [{4}] {5}",
            new object[] {
                args.ProcessedModelNodeCount,
                args.TotalModelNodeCount,
                100d * (double)args.ProcessedModelNodeCount / (double)args.TotalModelNodeCount,
                args.CurrentNode.Value.GetType().Name,
                args.CurrentNode.Value,
                wasDeployed ? "[+]" : "[-]"
            }));
};

provisionService.DeployModel(host, model);

Manual management of the model hash (so that you can take care of storing it somewhere yourself):

provisionService.SetIncrementalProvisionMode();

// set the model hash if you have one
provisionService.SetIncrementalProvisionModelHash(hash);

// provision as usual
provisionService.DeployModel(host, model);

// get the new model hash, save it somewhere, and pass it to .SetIncrementalProvisionModelHash() next time
var newModelHash = provisionService.GetIncrementalProvisionModelHash();

Using the file system to save the model hash:

var incrementalProvisionConfig = new IncrementalProvisionConfig();

// use the DefaultFileSystemPersistenceStorage(folderPath) overload to keep state in your own folder
// the default folder is Environment.SpecialFolder.LocalApplicationData + "SPMeta2"
incrementalProvisionConfig.PersistenceStorages.Add(new DefaultFileSystemPersistenceStorage());

provisionService.SetIncrementalProvisionMode(incrementalProvisionConfig);

// set the model ID for incremental provision
model.SetIncrementalProvisionModelId(incrementalModelId);

// deploy as usual
provisionService.DeployModel(host, model);

Have fun! We'll be testing more over the weekend, maturing it for next week. Docs will be updated accordingly.

SubPointSupport added a commit that referenced this issue Mar 3, 2017

+ Implement 'incremental' provision support #553
+ Can_Provision_Incrementally_With_AutoDetection_As_SSOM fixes
@rolandoldengarm

rolandoldengarm commented Mar 4, 2017

Thanks, will implement it next week 👍

@rolandoldengarm

rolandoldengarm commented Mar 8, 2017

It works amazingly well. One deployment locally crashed halfway through, and the next run resumed where it had crashed.
Previously we worked around this by commenting out code, but that is hacky.

This is a must-have for all SPMeta2 developers.

Two comments about the example code:

  1. Maybe this can be implemented as a getter on the ModelNode class?

var incrementalRequireSelfProcessingValue = modelNode.NonPersistentPropertyBag
    .FirstOrDefault(p => p.Name == "_sys.IncrementalRequireSelfProcessingValue");

  2. The shouldDeploy/wasDeployed code did not compile. I've changed it to:

var incrementalRequireSelfProcessingValue = args.CurrentNode.NonPersistentPropertyBag
    .FirstOrDefault(p => p.Name == "_sys.IncrementalRequireSelfProcessingValue");

bool shouldDeploy = true;

if (incrementalRequireSelfProcessingValue != null)
{
    shouldDeploy = ConvertUtils.ToBoolWithDefault(incrementalRequireSelfProcessingValue.Value, false);
}

Awesome stuff @SubPointSupport, thanks for the super quick turnaround on this one. Hopefully it can be pushed to NuGet soon :)

@SubPointSupport
Contributor

SubPointSupport commented Mar 8, 2017

Thanks, @rolandoldengarm!

"Maybe this can be implemented as a getter on the ModelNode class?"
Yes, it would be wrapped into an extension method, similar to .SetIncrementalProvisionMode() and the other extensions.

"The shouldDeploy/wasDeployed code did not compile."
Correct - our bad while copy-pasting into the ticket.

1.2.120-beta1 is already on NuGet. We are running some tests in preparation for the major release early next week. Docs and guides will be updated as well.

@SubPointSupport SubPointSupport modified the milestones: 2017.03.06, 1.2.120, SPMeta2 1.3.0-alpha Mar 12, 2017

@SubPointSupport SubPointSupport referenced this issue in SubPointSolutions/MetaPack Mar 23, 2017

Open: API - add SPMeta2 incremental provision support #55

@andreasblueher
Contributor

andreasblueher commented Apr 5, 2017

Hey guys,

I haven't had the chance to look into this yet, but with close to 1000 models being deployed in some of our customer solutions, I'm more than happy about how fast things moved here.

I'm not sure I'll be able to check it out within the next 2 or 3 weeks, but since we're planning to upgrade to the latest SPMeta2 version soon (currently on 1.2.60), this will definitely be part of my tests.

@SubPointSupport
Contributor

SubPointSupport commented Apr 5, 2017

1000 models? Wow, impressive.

Incremental provision is disabled by default, with full backward compatibility across all SPMeta2 version upgrades. You'll have to enable it manually, in your code.

@andreasblueher
Contributor

andreasblueher commented Jun 23, 2017

This change has cut feature activation time for one of my solutions in half! This is a huge relief and allows us to deploy more easily. Thank you!

@SubPointSupport
Contributor

SubPointSupport commented Jun 24, 2017

Sweet! Could you give us some overall stats? What are the artifacts, how many of them are there, and what are the time benefits?

We'll be improving this feature over the following weeks, introducing a planning feature and better diff calculation. With the new calculation, a diff of the model would be calculated BEFORE deployment, which would cut deployment time even further. The approximate improvement is around 10-50x.

@andreasblueher
Contributor

andreasblueher commented Jul 3, 2017

The solution where I tested #1005 and #553 had ~500 definitions, mostly FieldDefinitions but also ListDefinitions, UserCustomActionDefinitions and ContentTypeDefinitions. Activating the feature initially took about 60 seconds. I had verbose logging activated, so maybe it could have been faster without it.

Your ideas about pre-calculating sound great, and I would love to give you more feedback on other solutions where SPMeta2 is being used (easily 1000 definitions), but I would need you to ship 1.2.130.

@andreasblueher
Contributor

andreasblueher commented Jul 12, 2017

Here are some other test results:

Deploying 189 FieldDefinitions through a SharePoint feature:
Standard: 16s
Incremental: 1.5s

Deploying 310 content types, content type links, web features and security group definitions:
Standard: 98s
Incremental: 92s
ContentTypeFieldLinkDefinitions appeared to be very slow, especially compared to FieldDefinitions. I would argue there was no real improvement at all compared to the first run.

Deploying 125 lists, list views, module files and web parts:
Standard: 18s
Incremental: 7s

2 out of 3 show a very good improvement (especially the first example), but maybe you can look into ContentTypeFieldLinkDefinitions again and find out why they perform so badly.

@SubPointSupport
Contributor

SubPointSupport commented Jul 12, 2017

This is pure gold. Thank you for the feedback - we are really excited to see this in action for you.

We are aware that ContentTypeFieldLinkDefinitions aren't playing well yet; this was raised by other people as well. We looked into it but couldn't spot anything. We're going to get back to it, rework it and test it more thoroughly.

@SubPointSupport
Contributor

SubPointSupport commented Jul 12, 2017

We bet it's all about pre-calculating the model before the actual provision.

"Your ideas about pre-calculating sound great, and I would love to give you more feedback on other solutions where SPMeta2 is being used (easily 1000 definitions), but I would need you to ship 1.2.130."

The thing is that currently incremental provision still fetches artefacts from SharePoint while skipping the update logic. That's not too bad - we don't change things we don't have to - but it's only half of the solution; we need to cover the second half and cut the artefact fetching logic. That way, SPMeta2 would skip a fair share of WithResolvingModelHost calls which, most likely, cause these time drifts across different models/artefacts.

@koltyakov

koltyakov commented Sep 21, 2017

Hey guys @SubPointSupport, first off, thanks a lot for the feature!
It has become vital on some projects with really huge models, where it's difficult to split the models up due to organizational and global team-related nuances.
Just wanted to add our five cents: improving incremental provisioning by excluding redundant data fetching is in enormous demand.
If you can put this enhancement in your product plans, we'll be super happy.

@SubPointSupport
Contributor

SubPointSupport commented Sep 21, 2017

Not a problem - thanks for raising this again, @koltyakov.

Out of curiosity, what are the challenges with the current performance and your project workflow? Pre-calculating IS on the radar - we might do it really, really soon - and we're curious about your view of the situation as well.

@koltyakov

koltyakov commented Sep 21, 2017

Well, we're in a transition and adoption stage with a remote team, where the existing project dictates some rules about how to organize and manage the artifacts. The project is rather big. A model for a web (an area for teamwork; a number of such areas can be created constantly) includes more than 3000 artifacts, and this number grows. The reasons are objective, because the app is really sophisticated.

In this process, we're helping a remote team over the ocean master M2, which is really a relief and a huge step forward in terms of improving and managing artifacts in the mentioned project.

Also, during the transition period, some team members have to deploy from their machines to servers located on a different continent (just because there is no other choice for now). The deployment process takes time. Sometimes it can fail due to network issues, and then the process starts from the beginning.

It's too early to chunk the models into small parts all the time, as it can bring extra complexity for members who are still new to M2 and can create hidden issues (e.g. something was added to a small model but was forgotten in the full solution model).

In parallel, work is in progress to push some deployments to CI/CD tools and to deploy from machines closer to the servers.

Also, the solution itself assumes definition updates. Let's say 100 webs have been created with one version; then a new release should be delivered to all of the existing webs.

In other words, speeding up the second-run (incremental) delivery in this scenario is a really huge deal for us and a reason for appreciation.

@SubPointSupport
Contributor

SubPointSupport commented Sep 21, 2017

Alright, a few points to improve and/or spin off into separate tickets.

Sometimes it can fail due to network issues, and then the process starts from the beginning.

SPMeta2 has built-in re-try logic for CSOM. It picks up disconnections, waits and re-tries. There is a pluggable API to extend the out-of-the-box handlers for your scenario, so custom offline/network issues can be handled on your side.
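
The pluggable handler API itself isn't shown here, but the idea behind the built-in re-try can be sketched like this (ExecuteWithRetry is a hypothetical helper, not part of SPMeta2):

// Hypothetical CSOM retry helper - illustrates the re-try idea only.
using System;
using System.Net;
using System.Threading;
using Microsoft.SharePoint.Client;

static void ExecuteWithRetry(ClientContext context, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            context.ExecuteQuery(); // push pending CSOM operations to the server
            return;
        }
        catch (WebException) when (attempt < maxAttempts)
        {
            // network hiccup: back off, then re-try
            Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
        }
    }
}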

It's too early to chunk the models into small parts all the time, as it can bring extra complexity for members who are still new to M2 and can create hidden issues (e.g. something was added to a small model but was forgotten in the full solution model).

Totally agree.

In other words, speeding up the second-run (incremental) delivery in this scenario is a really huge deal for us and a reason for appreciation.

Agree again, totally understood. Let us see what can be done here. It isn't a major change or effort, but it requires some internal knowledge and context. We might push something next week.

@koltyakov

koltyakov commented Sep 21, 2017

Thank you in advance! You guys are the best!

@rolandoldengarm

rolandoldengarm commented Sep 21, 2017

On a very large environment (around 30 site collections, 4TB of data, millions of documents), even with incremental support it just took too long. We've implemented a migration-style framework using https://github.com/jackawatts/ionfar-sharepoint-migration/tree/master/src

This means there is an initial migration to deploy the entire model, followed by migrations that deploy changes (like adding fields).
It requires a bit more thinking/testing, but it's very fast.

@SubPointSupport
Contributor

SubPointSupport commented Sep 21, 2017

@rolandoldengarm, some context around this:

SPMeta2 still fetches artefacts during incremental provision. While that's still much faster, it comes with a drawback. Imagine we've got a content type with 5 content type field links, and only 2 of those field links were changed. The way SPMeta2 works, it treats each content type field link as a separate entity. Here is the current flow, which would be called 5 times:

  • resolve content type
  • update content type field link
  • update content type (push changes to children)

With incremental provision, this flow would still run for the 2 changed content type field links. And, potentially, for the 3 unchanged field links as well - skipping their updates BUT still, most likely, updating the content type and pushing changes to the lists.

This is totally on SPMeta2 - internal details and historical design decisions in how it processes and handles artefacts. There are several improvements we can make:

  • Pre-calculate the "model diff" before deployment (which we are already looking into)
    That would calculate the real diff of the model, saving lookup logic and idle artefact resolutions.

  • Improve content type provision - don't trigger SPContentType.Update(true) if no changes were detected at the content type field link level (applicable only to incremental provision)

  • Batch content type field link changes (and, potentially, list field links and any other "link" artefacts) to save artefact resolutions and updates - resolve the artefact once, batch all child content type field links into one change, and then call SPContentType.Update(true) once per batch rather than once per content type field link

Such batching would improve both regular and incremental provision, but it requires some drastic internal re-engineering due to the way SPMeta2 handles parent-child artefact provision. It does not group artefacts; it always sees a "parent-child" pair, splitting all children into separate "parent-child" pairs. For most artefacts that's fine, but the content type flow works slightly differently in SharePoint - hence all the challenges we see. A rough sketch of the batching idea follows.
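
As a sketch only - changedFieldLinks below is a hypothetical, pre-computed set of changed links, and none of this is shipped SPMeta2 code:

// Batching sketch (SSOM): resolve the parent content type once,
// apply all changed field links, then push changes to children once.
var contentType = web.ContentTypes["My Content Type"];

foreach (var fieldLinkDefinition in changedFieldLinks) // only links the diff marked as changed
{
    if (contentType.FieldLinks[fieldLinkDefinition.FieldId] == null)
    {
        var field = web.Fields[fieldLinkDefinition.FieldId];
        contentType.FieldLinks.Add(new SPFieldLink(field));
    }
}

// one SPContentType.Update(true) per batch instead of one per field link
contentType.Update(true);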

All the things mentioned here feed into what @koltyakov and the other folks in this thread have experienced: pointless artefact resolution, pointless artefact updates (while resolving the parent for a child) and, finally, missing batching over "groupable" children such as content type field links.

Hope we can solve that puzzle soon; if so, it would be one of the killer features we've shipped over the last 3-4 years.

@rolandoldengarm

rolandoldengarm commented Sep 22, 2017

Thanks @SubPointSupport for the explanation, makes sense! SPMeta2 is still super awesome 👍
