diff --git a/BuildMaster/administration/agents-and-infrastructure/agents.md b/BuildMaster/administration/agents-and-infrastructure/agents.md index c3fc322d..cfe31195 100644 --- a/BuildMaster/administration/agents-and-infrastructure/agents.md +++ b/BuildMaster/administration/agents-and-infrastructure/agents.md @@ -10,19 +10,19 @@ To deploy releases to your servers, BuildMaster needs to be able to communicate ## Windows Servers {#windows data-title="Windows Servers"} -The [Inedo Agent](/support/documentation/inedoagent/overview) is generally the best way to communicate with a Windows server. It's light-weight, and uses a highly-optimized and resilient protocol built solely for this purpose, and is [quite easy to install](/support/documentation/buildmaster/installation-and-maintenance/installation-guide/agent-installation-guide). +The [Inedo Agent](/docs/inedoagent/overview) is generally the best way to communicate with a Windows server. It's light-weight, uses a highly-optimized and resilient protocol built solely for this purpose, and is [quite easy to install](/docs/buildmaster/installation-and-maintenance/installation-guide/agent-installation-guide). ## Agentless Windows Servers {#agentless data-title="Agentless Windows Servers"} -Alternatively, BuildMaster can use PowerShell Remoting to communicate with Windows servers; however, this is generally slower and less resilient than the Inedo Agent protocol. You can connect using integrated authentication (i.e. whatever account the service is running under), or with a username & password [resource credential](/support/documentation/buildmaster/administration/resource-credentials). See [Enable-PSRemoting](https://technet.microsoft.com/en-us/library/hh849694.aspx) to configures a server to receive remote commands. +Alternatively, BuildMaster can use PowerShell Remoting to communicate with Windows servers; however, this is generally slower and less resilient than the Inedo Agent protocol. You can connect using integrated authentication (i.e. whatever account the service is running under), or with a username & password [resource credential](/docs/buildmaster/administration/resource-credentials). See [Enable-PSRemoting](https://technet.microsoft.com/en-us/library/hh849694.aspx) to configure a server to receive remote commands. ## Linux Servers {#linux data-title="Linux Servers"} -To communicate with Linux servers, BuildMaster uses the lightweight, highly-optimized, and resilient protocol already enabled on nearly every Linux box: SSH and SFTP. You can connect with a private key or username & password [resource credential](/support/documentation/buildmaster/administration/resource-credentials). +To communicate with Linux servers, BuildMaster uses the lightweight, highly-optimized, and resilient protocol already enabled on nearly every Linux box: SSH and SFTP. You can connect with a private key or username & password [resource credential](/docs/buildmaster/administration/resource-credentials). ## Local Agents {#local data-title="Local Agents"} -If you're using BuildMaster to interact with the server it's installed on, you can just set it up using a local agent. This uses the same process/identity that the [service](/support/documentation/buildmaster/installation-and-maintenance/architecture/service) is hosted as, and doesn't have very many privileges by default. +If you're using BuildMaster to interact with the server it's installed on, you can just set it up using a local agent. 
This uses the same process/identity that the [service](/docs/buildmaster/installation-and-maintenance/architecture/service) is hosted as, and doesn't have very many privileges by default. ## Automatic Inedo Agent Updates {#automatic-updates data-title="Automatic Inedo Agent Updates"} diff --git a/BuildMaster/administration/agents-and-infrastructure/environments.htm b/BuildMaster/administration/agents-and-infrastructure/environments.htm index 26e9f1b2..c8158e12 100644 --- a/BuildMaster/administration/agents-and-infrastructure/environments.htm +++ b/BuildMaster/administration/agents-and-infrastructure/environments.htm @@ -14,7 +14,7 @@

- Environments are also used in security and access controls to permit and restrict users from performing various tasks. For example, you could permit “QA Users” to deploy applications to the Testing environment, while restricting them from deploying to the Production environment. + Environments are also used in security and access controls to permit and restrict users from performing various tasks. For example, you could permit “QA Users” to deploy applications to the Testing environment, while restricting them from deploying to the Production environment.

BuildMaster comes with three built-in environments that represent a very simple deployment pipeline: Integration, Testing, and Production. You can create, rename, and delete environments as needed. @@ -27,7 +27,7 @@

Nested Environments

This not only helps with visualization, but it can simplify the access controls you define. For example, if you restricted access to Production, then Prod-Main and Prod-Backup would also be restricted unless you define a more granular access control on one of the child environments.

- Configuration variables will also cascade from a parent to a child environment, which means that deployments to a child environment will have access to the parent environment’s variables if they are not defined on the child environment. + Configuration variables will also cascade from a parent to a child environment, which means that deployments to a child environment will have access to the parent environment’s variables if they are not defined on the child environment.

Multiple Environments per server

diff --git a/BuildMaster/administration/agents-and-infrastructure/server-roles.htm b/BuildMaster/administration/agents-and-infrastructure/server-roles.htm index ac33ad15..27c055f7 100644 --- a/BuildMaster/administration/agents-and-infrastructure/server-roles.htm +++ b/BuildMaster/administration/agents-and-infrastructure/server-roles.htm @@ -20,7 +20,7 @@

Creating and Assigning Server Roles

- You can create, edit, and delete server roles using the web-based interface (Admin > Infrastructure > Roles), or programmatically with the Infrastructure API. + You can create, edit, and delete server roles using the web-based interface (Admin > Infrastructure > Roles), or programmatically with the Infrastructure API.

@@ -32,11 +32,11 @@

Deplo

Pipeline Stage Target

- You can specify a role name instead of a list of servers as a stage target in a pipeline stage. When the build is deployed to that stage, the stage target's deployment plan will be run against all servers with that role and in that environment. + You can specify a role name instead of a list of servers as a stage target in a pipeline stage. When the build is deployed to that stage, the stage target's deployment plan will be run against all servers with that role and in that environment.

Server Roles and OtterScript

- You can deploy to servers in a role using the @ServersInRole function within a loop block. + You can deploy to servers in a role using the @ServersInRole function within a loop block.

         foreach server in @ServersInRole(hdars-api-host)
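The hunk above shows only the opening line of that loop. As a minimal sketch of what a complete loop block might look like (the artifact name and target path are hypothetical, and the `Deploy-Artifact` property shown should be checked against the operation's reference):

```
# run the enclosed operations on every server assigned the hdars-api-host role
foreach server in @ServersInRole(hdars-api-host)
{
    # deploy a previously captured artifact; the name and path are illustrative
    Deploy-Artifact HDars-Api
    (
        To: C:\HDars\Api
    );
}
```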
@@ -52,7 +52,7 @@ 

Role Dependencies

Role dependencies are used by Otter to model complex server and application configuration through hierarchical sets of simple roles with dependencies. This makes it much easier to share common configuration and define smaller (but related) roles.

- While you can define a role's dependencies (i.e. the other roles which are required) on the role overview page, BuildMaster won't behave any differently if a role has dependencies. They are included only so that you can synchronize your infrastructure across products. + While you can define a role's dependencies (i.e. the other roles which are required) on the role overview page, BuildMaster won't behave any differently if a role has dependencies. They are included only so that you can synchronize your infrastructure across products.

Example: Multiple Server Roles

diff --git a/BuildMaster/administration/agents-and-infrastructure/servers.md b/BuildMaster/administration/agents-and-infrastructure/servers.md index 692b42c0..9db97fd5 100644 --- a/BuildMaster/administration/agents-and-infrastructure/servers.md +++ b/BuildMaster/administration/agents-and-infrastructure/servers.md @@ -13,9 +13,9 @@ A server can be physical (bare metal), virtual, or even nonexistent (i.e. one th ### Adding Servers to BuildMaster {#adding-servers data-title="Adding Servers to BuildMaster"} -You can add a server using the web-based user interface (`Servers` > `Add Server`), or programmatically with the [infrastructure API](/support/documentation/buildmaster/reference/api/infrastructure). +You can add a server using the web-based user interface (`Servers` > `Add Server`), or programmatically with the [infrastructure API](/docs/buildmaster/reference/api/infrastructure). -BuildMaster communicates with servers using the [Inedo Agent](/support/documentation/inedoagent/overview) (for Windows) or [SSH/SFTP](https://www.ssh.com/ssh/sftp) (for Windows and Linux). +BuildMaster communicates with servers using the [Inedo Agent](/docs/inedoagent/overview) (for Windows) or [SSH/SFTP](https://www.ssh.com/ssh/sftp) (for Windows and Linux). ## Deploying to Servers {#deploying-to-servers data-title="Deploying to Servers"} @@ -23,11 +23,11 @@ In addition to using [Server Roles](server-roles), there are two ways to deploy ### Pipeline Stage Target {#pipeline-target data-title="Pipeline Stage Target"} -You can specify a list of servers as a stage target in a [pipeline stage](/support/documentation/buildmaster/verification/pipelines#pipeline-stages). When the build is deployed to that stage, the stage target's deployment plan will be run against all those servers. +You can specify a list of servers as a stage target in a [pipeline stage](/docs/buildmaster/verification/pipelines#pipeline-stages). When the build is deployed to that stage, the stage target's deployment plan will be run against all those servers. ### Servers and OtterScript {#servers-otter data-title="Servers and OtterScript"} -You can deploy to a server using a [general block](/support/documentation/executionengine/otterscript/statements-and-blocks/general-blocks) +You can deploy to a server using a [general block](/docs/executionengine/otterscript/statements-and-blocks/general-blocks) ``` for server prod-hdars-sv1 @@ -49,4 +49,4 @@ However, if your application has always been deployed to a specific server, and A resource pool is a set of servers that are used as a single, load-balanced resource. You may acquire an unused server from a resource pool, and then release it back into the pool once the needed tasks have been performed. -Visit the [Resource Pools](/support/documentation/executionengine/components/resource-pools) documentation of the Inedo Execution Engine for more information on how to configure a resource pool. +Visit the [Resource Pools](/docs/executionengine/components/resource-pools) documentation of the Inedo Execution Engine for more information on how to configure a resource pool. 
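The `for server prod-hdars-sv1` snippet above is cut off by the hunk; a minimal sketch of a complete general block, assuming the built-in `$ServerName` variable function and purely illustrative contents, might look like:

```
for server prod-hdars-sv1
{
    # everything inside this block executes on prod-hdars-sv1 through its agent
    Log-Information "Deploying to $ServerName...";

    # a deployment operation such as Deploy-Artifact would typically go here
}
```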
diff --git a/BuildMaster/administration/applications/import-export.md b/BuildMaster/administration/applications/import-export.md index db972b07..70adacbc 100644 --- a/BuildMaster/administration/applications/import-export.md +++ b/BuildMaster/administration/applications/import-export.md @@ -24,7 +24,7 @@ You will also select how to publish the package, from one of two options: - **Publish to Universal Feed** - a feed on a ProGet instance - **Save to Disk Path** - a local or network path that the BuildMaster service can write to -Before publishing to a feed, you will need to setup an Inedo Product [resource credential](/support/documentation/buildmaster/administration/resource-credentials) with the URL and optionally an API key to your ProGet server. +Before publishing to a feed, you will need to setup an Inedo Product [resource credential](/docs/buildmaster/administration/resource-credentials) with the URL and optionally an API key to your ProGet server. ## Importing Applications {#importing data-title="Importing Applications"} @@ -44,7 +44,7 @@ Before importing from a feed, you will need to setup an Inedo Product resource c ## Package File Format {#package-file-formate data-title="Package File Format"} -Applications will be exported as a standard [universal package](/support/documentation/proget/core-concepts/packages) which is essentially a zip file containing application configuration and history, along with a JSON-based manifest file (`upack.json`) that describes the contents of the package. +Applications will be exported as a standard [universal package](/docs/proget/core-concepts/packages) which is essentially a zip file containing application configuration and history, along with a JSON-based manifest file (`upack.json`) that describes the contents of the package. In addition to the standard name and version properties, BuildMaster will include `a _exportDate` and `_bmVersion` property in `upack.json`. The package contents will be a collection of JSON-formatted files: diff --git a/BuildMaster/administration/configuration-variables.htm b/BuildMaster/administration/configuration-variables.htm index 479d5f40..2a218b7a 100644 --- a/BuildMaster/administration/configuration-variables.htm +++ b/BuildMaster/administration/configuration-variables.htm @@ -8,15 +8,15 @@

- Variables allow for extreme flexibility when modeling your deployment plans, and - configuration variables allow you to have the same plan run differently across different servers and environments. + Variables allow for extreme flexibility when modeling your deployment plans, and + configuration variables allow you to have the same plan run differently across different servers and environments.

You can define configuration variables at different scopes (release, server, environment, global, etc), and then reference those variables in your plan when using - Operations, If/Else Blocks, - Loop Blocks, etc. - You can also create a runtime variable in a plan itself. + Operations, If/Else Blocks, + Loop Blocks, etc. + You can also create a runtime variable in a plan itself.
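As a minimal sketch of how such variables might be referenced in a plan (`$TargetDirectory` and `$EnableVerboseLogging` are hypothetical configuration variables, not ones BuildMaster defines for you):

```
# the closest-scoped definition of $TargetDirectory (release, server,
# environment, global, etc.) wins at execution time
Log-Information "Files will be deployed to $TargetDirectory.";

# an if/else block keyed off a hypothetical environment-scoped variable
if $EnableVerboseLogging == true
{
    Log-Debug "Verbose logging is enabled for this environment.";
}
else
{
    Log-Debug "Verbose logging is disabled.";
}
```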

Cascading Variables

@@ -38,7 +38,7 @@

Cascading Variables

Resolution Rules

The variable definition that's the "closest" match to the current context is used. This is determined as follows:

@@ -50,7 +50,7 @@

Creating a Build

  • Release – the release that the build is associated with; this will inherit the pipeline and prompt for the appropriate - release template variables upon creation + release template variables upon creation
  • Build number - this is an integer that uniquely identifies @@ -101,7 +101,7 @@

    Deploying a Build

    - You can use Security and Access Controls + You can use Security and Access Controls to determine which users can perform these actions for which environments.

    diff --git a/BuildMaster/builds/packaging/artifacts-from-drop-path.md b/BuildMaster/builds/packaging/artifacts-from-drop-path.md index e4c2510d..dc03c4c6 100644 --- a/BuildMaster/builds/packaging/artifacts-from-drop-path.md +++ b/BuildMaster/builds/packaging/artifacts-from-drop-path.md @@ -74,4 +74,4 @@ Create-Artifact MyArtifact -For more information and sample usage visit our [documentation](https://inedo.com/support/documentation/buildmaster/reference/operations/artifacts/create-artifact) \ No newline at end of file +For more information and sample usage visit our [documentation](https://inedo.com/docs/buildmaster/reference/operations/artifacts/create-artifact) diff --git a/BuildMaster/builds/packaging/artifacts.htm b/BuildMaster/builds/packaging/artifacts.htm index a1905a13..1e6d7b98 100644 --- a/BuildMaster/builds/packaging/artifacts.htm +++ b/BuildMaster/builds/packaging/artifacts.htm @@ -38,8 +38,8 @@

    Creating Artifacts

    The most common method to capture an artifact is to add the Create-Artifact operation to a - deployment plan, generally immediately after some form of 'build' operation such as MSBuild::Build-Project. - See the operation's documentation for more information. + deployment plan, generally immediately after some form of 'build' operation such as MSBuild::Build-Project. + See the operation's documentation for more information.
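A rough sketch of that pattern, assuming a hypothetical project path, output folder, and artifact name (the `MSBuild::Build-Project` property shown should be verified against the operation's documentation):

```
# compile the project to a temporary output folder
MSBuild::Build-Project ~\Source\HDars.Web\HDars.Web.csproj
(
    To: ~\Output
);

# capture the build output as an artifact named "Web"
Create-Artifact Web
(
    From: ~\Output
);
```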

    Third-Party Artifacts

    @@ -55,9 +55,9 @@

    Third-Party Artifacts

    Deploying Artifacts

    - Artifacts are deployed using the Deploy-Artifact operation in a deployment plan. + Artifacts are deployed using the Deploy-Artifact operation in a deployment plan. This operation by default is optimized to only transfer files that have been modified to - decrease deployment times of large artifacts. See the operation's documentation for more information. + decrease deployment times of large artifacts. See the operation's documentation for more information.
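A minimal sketch of that operation in a plan (the artifact name, server, and target path are hypothetical):

```
for server web-prod-1
{
    # by default, only new or modified files are transferred to the target path
    Deploy-Artifact Web
    (
        To: C:\inetpub\wwwroot\HDars
    );
}
```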

    diff --git a/BuildMaster/builds/packaging/universal-packages.md b/BuildMaster/builds/packaging/universal-packages.md index 8944d04c..46e1fc3d 100644 --- a/BuildMaster/builds/packaging/universal-packages.md +++ b/BuildMaster/builds/packaging/universal-packages.md @@ -6,7 +6,7 @@ sequence: 100 keywords: upack, universal-packages, buildmaster, packages, build, deploy --- -BuildMaster can create and manage build artifacts for any deployment target, from simple ZIP files to complex container orchestrations in the cloud. In some cases you may want to deploy a [Universal Package](https://inedo.com/support/documentation/upack/universal-packages/package-format) to a ProGet feed. This documentation will describe exactly how to accomplish this. +BuildMaster can create and manage build artifacts for any deployment target, from simple ZIP files to complex container orchestrations in the cloud. In some cases you may want to deploy a [Universal Package](https://inedo.com/docs/upack/universal-packages/package-format) to a ProGet feed. This documentation will describe exactly how to accomplish this. ###Universal Package Format The Universal Package format ([UPack](https://inedo.com/upack)) is very simple, and can be used to package applications and components built with any technology: .NET web applications, NodeJS applications, Windows services, plug-ins for your applications, system configuration scripts, and so on. It's designed for both general-purpose use, and as a platform for creating a new proprietary package format. UPack is a technology-neutral packaging platform that allows you to uniformly distribute your applications and components across environments to enable consistent deployment and testing. @@ -14,7 +14,7 @@ The Universal Package format ([UPack](https://inedo.com/upack)) is very simple, ###Create a Universal Package. To create a Universal Package you will need to: -1. In your BuildMaster plan add an [Create Package](https://inedo.com/support/documentation/buildmaster/reference/operations/general/create-package) `Create-Package` operation. +1. In your BuildMaster plan add an [Create Package](https://inedo.com/docs/buildmaster/reference/operations/general/create-package) `Create-Package` operation. 2. Enter all requried fields Sample Plan: @@ -34,9 +34,9 @@ Create-Package ###Deploy a Universal Package to a ProGet Feed. To Deploy your newly created Universal Package to a ProGet feed you'll need to complete the following tasks. -1. In the ProGet administration area (administration/api-keys) you will need to create a new [api key](https://inedo.com/support/documentation/proget/administration/security/api-keys) with _Grant access to Feed Management API_ selected. +1. In the ProGet administration area (administration/api-keys) you will need to create a new [api key](https://inedo.com/docs/proget/administration/security/api-keys) with _Grant access to Feed Management API_ selected. 2. In the BuildMaster administration area (/administration/credentials) add a new _Inedo Product_ credential and fill in all the required information that pertains and use your API key from ProGet that was created in step 1. -3. In your BuildMaster plan add a [Push Package](https://inedo.com/support/documentation/buildmaster/reference/operations/proget/push-package) `ProGet::Push-Package` operation and use the name of the credentials created in step2. The remaining fields should be customized to the package you've created in the Execute Process operation above. +3. 
In your BuildMaster plan add a [Push Package](https://inedo.com/docs/buildmaster/reference/operations/proget/push-package) `ProGet::Push-Package` operation and use the name of the credentials created in step2. The remaining fields should be customized to the package you've created in the Execute Process operation above. Sample Plan: ``` @@ -51,7 +51,7 @@ ProGet::Push-Package ``` Additional Options: -- Group, Package, Version: A universal package can be uniquely identified it's group, name, and version. These are different properties in the [manifest file](https://inedo.com/support/documentation/upack/universal-packages/metacontent-guidance/manifest-specification). +- Group, Package, Version: A universal package can be uniquely identified it's group, name, and version. These are different properties in the [manifest file](https://inedo.com/docs/upack/universal-packages/metacontent-guidance/manifest-specification). - Description - Describe the contents of this package. This field supports [Markdown syntax](https://www.markdownguide.org/basic-syntax/). - Title - Full Title of this package used when displaying a package and not part of its unique identification. - Icon - A string of an absolute url pointing to an image to be displayed in the ProGet UI (at both 64px and 128px); if package:// is used as the protocol, ProGet will search within the package and serve that image instead diff --git a/BuildMaster/builds/plans/#.md b/BuildMaster/builds/plans/#.md index b4f8538b..92761511 100644 --- a/BuildMaster/builds/plans/#.md +++ b/BuildMaster/builds/plans/#.md @@ -7,4 +7,4 @@ Build plans are the instructions that tell BuildMaster exactly how to build and They are written in language called OtterScript, and can be developed using the drag-and-drop editor. This saves you from having to learn the syntax before you can start building a plan. You can switch back-and-forth between visual and text modes to get a feel for the syntax and structure of the language pretty quickly, and master it in no time. -For more information on syntax, visit the [Deployment Plans](/support/documentation/buildmaster/deployments/plans) section. \ No newline at end of file +For more information on syntax, visit the [Deployment Plans](/docs/buildmaster/deployments/plans) section. 
diff --git a/BuildMaster/builds/platform-specific/dot-net/asp-net.md b/BuildMaster/builds/platform-specific/dot-net/asp-net.md index 7ec10543..2f7dc263 100644 --- a/BuildMaster/builds/platform-specific/dot-net/asp-net.md +++ b/BuildMaster/builds/platform-specific/dot-net/asp-net.md @@ -56,7 +56,7 @@ Create-Artifact Web ## Example: Building from Branches with Optional Tests -The following example shows how to build an ASP.NET website at the solution level, using two optional [release template variable prompts](/support/documentation/buildmaster/releases/templates#components): +The following example shows how to build an ASP.NET website at the solution level, using two optional [release template variable prompts](/docs/buildmaster/releases/templates#components): {.docs} - `$Branch` - the specified branch to pull from diff --git a/BuildMaster/builds/platform-specific/dot-net/console-app.md b/BuildMaster/builds/platform-specific/dot-net/console-app.md index a59c32eb..0c790533 100644 --- a/BuildMaster/builds/platform-specific/dot-net/console-app.md +++ b/BuildMaster/builds/platform-specific/dot-net/console-app.md @@ -41,6 +41,6 @@ You can further customize this operation by specifying values for these addition Now you're ready to further configure your plan to run tests, create artifacts, or even deploy to any on-premises or cloud server. -- [Running Tests](https://inedo.com/support/documentation/buildmaster/core-concepts/builds-and-ci/unit-tests) -- [Create Artifacts](https://inedo.com/support/documentation/buildmaster/builds/create-artifact) -- [OtterScript Overview](https://inedo.com/support/documentation/buildmaster/execution-engine/overview) \ No newline at end of file +- [Running Tests](https://inedo.com/docs/buildmaster/core-concepts/builds-and-ci/unit-tests) +- [Create Artifacts](https://inedo.com/docs/buildmaster/builds/create-artifact) +- [OtterScript Overview](https://inedo.com/docs/buildmaster/execution-engine/overview) diff --git a/BuildMaster/builds/platform-specific/dot-net/web-deploy.md b/BuildMaster/builds/platform-specific/dot-net/web-deploy.md index a19a94b8..7c8a555c 100644 --- a/BuildMaster/builds/platform-specific/dot-net/web-deploy.md +++ b/BuildMaster/builds/platform-specific/dot-net/web-deploy.md @@ -74,7 +74,7 @@ If files you would like to include into the package are in the same project and ``` #### Next Steps {#next-steps} -After the Web Deploy Package is created, you can notify any BuildMaster user or group that the package is created and needs to be deployed. When you use the [Manual Operation] (/support/documentation/buildmaster/reference/operations/buildmaster/manual-operation) in your plan, the execution will be halted until an individual completes the specified task. +After the Web Deploy Package is created, you can notify any BuildMaster user or group that the package is created and needs to be deployed. When you use the [Manual Operation] (/docs/buildmaster/reference/operations/buildmaster/manual-operation) in your plan, the execution will be halted until an individual completes the specified task. ``` Perform-ManualOperation diff --git a/BuildMaster/builds/tests/build-reports/#.htm b/BuildMaster/builds/tests/build-reports/#.htm index 0a84a206..b884bf7b 100644 --- a/BuildMaster/builds/tests/build-reports/#.htm +++ b/BuildMaster/builds/tests/build-reports/#.htm @@ -29,7 +29,7 @@

    - Note: Build reports are not persisted by application import/export. To maintain important + Note: Build reports are not persisted by application import/export. To maintain important information across this boundary, use build artifacts instead.

    @@ -39,7 +39,7 @@

Capturing Build Report

Build reports are captured using one of the following operations:

    -

    Capture-HtmlDirectoryReport

    +

    Capture-HtmlDirectoryReport

    An HTML directory report requires a specific format to be displayed correctly within BuildMaster. @@ -48,7 +48,7 @@

    Capture-FileReport

    +

    Capture-FileReport

    A file report is displayed as either plain text or HTML, depending on the format specified in the operation. @@ -61,7 +61,7 @@

    Generating Reports -

    Compare-Directories

    +

    Compare-Directories

    This operation compares two directories on the same server and highlights the following information:

    • Added files or directories
    • @@ -69,7 +69,7 @@

      Compare-Artifacts

      +

      Compare-Artifacts

      This operation works in the same manner as the Compare-Directories operation, with the following caveats:

      • Artifacts are extracted and compared on the BuildMaster server
      • @@ -78,7 +78,7 @@

        Exec operation then capture its output with one of the two built-in capturing operations.

        +

To capture and associate a custom build report, simply run a third-party tool with the Exec operation, then capture its output with one of the two built-in capturing operations.
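A very rough sketch of that Exec-plus-capture pattern; the tool path, arguments, and the property names on both operations are assumptions to be checked against each operation's reference page:

```
# run a hypothetical third-party tool that writes an HTML report to disk
Exec
(
    FileName: C:\Tools\Analyzer\analyzer.exe,
    Arguments: "--output $WorkingDirectory\report.html"
);

# capture the generated file as a build report (property names are illustrative)
Capture-FileReport
(
    Name: "Analyzer Report",
    Path: $WorkingDirectory\report.html
);
```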

        Examples

        diff --git a/BuildMaster/builds/tests/build-reports/fxcop.md b/BuildMaster/builds/tests/build-reports/fxcop.md index 8a120eb6..600ec188 100644 --- a/BuildMaster/builds/tests/build-reports/fxcop.md +++ b/BuildMaster/builds/tests/build-reports/fxcop.md @@ -10,7 +10,7 @@ keywords: buildmaster, build-reports, fxcop FxCop is a free static code analysis tool from Microsoft that checks .NET managed code assemblies for conformance to Microsoft's .NET Framework Design Guidelines. While technically speaking FxCop is [deprecated in favor of Roslyn Analyzers](https://docs.microsoft.com/en-us/visualstudio/code-quality/fxcop-analyzers?view=vs-2019), many projects still depend on it for code quality. -BuildMaster can execute FxCop in a build or deploy plan, then capture the output from the report into a [build report](/support/documentation/buildmaster/builds/tests/build-reports). +BuildMaster can execute FxCop in a build or deploy plan, then capture the output from the report into a [build report](/docs/buildmaster/builds/tests/build-reports). Example plan: diff --git a/BuildMaster/deployments/configuration-files.htm b/BuildMaster/deployments/configuration-files.htm index a1dbc767..5471271b 100644 --- a/BuildMaster/deployments/configuration-files.htm +++ b/BuildMaster/deployments/configuration-files.htm @@ -20,7 +20,7 @@

        Text-templating

        - Text-templating is the preferred method, and relies on the text templating in the Inedo Execution Engine, along with configuration variables and conditionals for environment-specific settings. + Text-templating is the preferred method, and relies on the text templating in the Inedo Execution Engine, along with configuration variables and conditionals for environment-specific settings.

        There are several advantages to using this method over configuration file assets: @@ -55,7 +55,7 @@

        Configuratio You can "manually deploy" outside the scope of a deployment
      • - Restrict certain instances from being viewed or edited using access controls + Restrict certain instances from being viewed or edited using access controls
      @@ -175,7 +175,7 @@

      XSLT

      Variable Replacement

      -

      All configuration variables, runtime variables, and parameterless variable functions in context are considered when variable replacement occurs before deployment and follows the same resolution rules as configuration variable replacement.

      +

      All configuration variables, runtime variables, and parameterless variable functions in context are considered when variable replacement occurs before deployment and follows the same resolution rules as configuration variable replacement.

      Template file variables are treated as runtime variables when replacement occurs, and therefore override any configuration variables. In practice, it is not recommended to have template instance variables override existing configuration variables or variable functions. It is also not recommended to rely on runtime variables created during a deployment.

      Empty/Missing Variables

      diff --git a/BuildMaster/deployments/patterns/blue-green.md b/BuildMaster/deployments/patterns/blue-green.md index becf0060..d135c1e5 100644 --- a/BuildMaster/deployments/patterns/blue-green.md +++ b/BuildMaster/deployments/patterns/blue-green.md @@ -47,7 +47,7 @@ These steps will be based off of this application, but BuildMaster is very flexi - Create `Production-Green` and `Production-Blue` as child environments of Production - Create a `SwapBlueGreen` plan that uses a variable called `$SwapTo` to determine whether to swap Blue to Green, or Green to Blue * this plan will issue instructions to your router or load-balancer using PowerShell or another mechanism - - Create two [pipelines](/support/documentation/buildmaster/verification/pipelines): + - Create two [pipelines](/docs/buildmaster/verification/pipelines): * Both will start with your standard pre-production stages (Build, Integration, Test, etc.) * Blue will deploy to a "Blue" stage, then a "Swap" stage * Green will deploy to a "Green" stage, then a "Swap" stage @@ -73,7 +73,7 @@ Once a build reaches the "Staging" stage, you can simply swap the releases pipel You can even simplify this process and give the user a choice when they deploy to the "Staging" stage. To do this: {.docs} - - creating a [release template](/support/documentation/buildmaster/releases/templates) called "Choose Blue or Green" + - creating a [release template](/docs/buildmaster/releases/templates) called "Choose Blue or Green" - configure a Deployment Variable Template for the "Staging" stage (List: Blue, Green) - create a plan that changes the release's pipline to the selected variable diff --git a/BuildMaster/deployments/plans/#.md b/BuildMaster/deployments/plans/#.md index 6fabc473..8803879f 100644 --- a/BuildMaster/deployments/plans/#.md +++ b/BuildMaster/deployments/plans/#.md @@ -26,17 +26,17 @@ show-headings-in-nav: true You can create a deployment plan at the application or global level. Global plans behave in exactly the same manner, except you may only reference global modules. -You may associate a deployment plan with an environment after you create it, which will allow you to restrict certain users from viewing its contents by defining an environment-scoped [access control](/support/documentation/buildmaster/administration/users-and-security). There's no good reason to do this, as you shouldn't be putting sensitive information in your deployment plans… but sometimes it can't be helped. +You may associate a deployment plan with an environment after you create it, which will allow you to restrict certain users from viewing its contents by defining an environment-scoped [access control](/docs/buildmaster/administration/users-and-security). There's no good reason to do this, as you shouldn't be putting sensitive information in your deployment plans… but sometimes it can't be helped. ### Association with Pipelines {#pipelines data-title="Association with Pipelines"} -All deployment plans are run under the context of a build and a pipeline, except those defined as the target of a Repository Hook/Monitor plan. Visit the [pipelines](/support/documentation/buildmaster/verification/pipelines) documentation for more information on how to configure pipeline stages and targets. +All deployment plans are run under the context of a build and a pipeline, except those defined as the target of a Repository Hook/Monitor plan. 
Visit the [pipelines](/docs/buildmaster/verification/pipelines) documentation for more information on how to configure pipeline stages and targets. ### OtterScript Basics {#otterscript-basics data-title="OtterScript Basics"} OtterScript is a Domain-Specific Language that was designed for high-level orchestration and automation across servers, and is used to represent build and deployment plans in BuildMaster. -To learn more, check out the [OtterScript Guide](/support/documentation/various/execution-engine/otterscript) in the Inedo Execution Engine documentation. +To learn more, check out the [OtterScript Guide](/docs/various/execution-engine/otterscript) in the Inedo Execution Engine documentation. ### PowerShell and Shell Scripting {#power-and-shell data-title="PowerShell and Shell Scripting"} @@ -44,7 +44,7 @@ OtterScript was designed to seamlessly integrate with PowerShell and Bash/Sh, an Script assets can be added to BuildMaster at the application- or global-level. Once added, your scripts will appear in the statement list in the visual plan editor, and the parameters to those scripts will be parsed and displayed as textboxes when editing in visual mode. -Visit the shared [PowerShell and Shell Scripting documentation](/support/documentation/executionengine/components/powershell-and-shell) for more information. +Visit the shared [PowerShell and Shell Scripting documentation](/docs/executionengine/components/powershell-and-shell) for more information. ### Modules {#modules data-title="Modules"} diff --git a/BuildMaster/deployments/plans/execution-engine.md b/BuildMaster/deployments/plans/execution-engine.md index c0ce79d8..a05f88d0 100644 --- a/BuildMaster/deployments/plans/execution-engine.md +++ b/BuildMaster/deployments/plans/execution-engine.md @@ -7,7 +7,7 @@ keywords: buildmaster,excecution-engine To deploy your applications and releases, BuildMaster uses an advanced execution engine that's capable of running thousands of different operations and scripts on thousands of different servers. -While BuildMaster's execution engine is an integral part of BuildMaster itself, the core engine was designed to be shared across products, and is also used in [Otter](/otter) (our Infrastructure as Code tool) and [Romp](/support/documentation/otter/reference/romp) (a stand-alone, command-line tool). +While BuildMaster's execution engine is an integral part of BuildMaster itself, the core engine was designed to be shared across products, and is also used in [Otter](/otter) (our Infrastructure as Code tool) and [Romp](/docs/otter/reference/romp) (a stand-alone, command-line tool). ## Operations {#operations data-title="Operations"} @@ -17,16 +17,16 @@ Some operations are quite simple (Sleep), while others are quite complex (Synchr ## OtterScript in BuildMaster {#otterscript data-title="OtterScript in BuildMaster"} -OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent [configuration plans](/support/documentation/otter/core-concepts/plans#configuration) and [orchestration plans](/support/documentation/otter/core-concepts/plans#orchestration) in Otter, and [deployment plans](../deployments/plans) in BuildMaster. +OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent [configuration plans](/docs/otter/core-concepts/plans#configuration) and [orchestration plans](/docs/otter/core-concepts/plans#orchestration) in Otter, and [deployment plans](../deployments/plans) in BuildMaster. 
You really don't need to learn OtterScript; it's simply the textual representation of a plan, and plans are already fully editable in the drag-and-drop plan editor. If you switch back-and-forth between visual and code modes, you'll probably learn it on your own, but it's also pretty well documented. {.docs} - - [OtterScript Overview](/support/documentation/executionengine/otterscript/overview) - - [Comments & Descriptions](/support/documentation/executionengine/otterscript/comments-and-descriptions) - - [Formal Grammar](/support/documentation/executionengine/reference/formal-grammar) - - [Formal Specification](/support/documentation/executionengine/reference/formal-specification) - - [Strings & Values](/support/documentation/executionengine/otterscript/strings-and-literals) + - [OtterScript Overview](/docs/executionengine/otterscript/overview) + - [Comments & Descriptions](/docs/executionengine/otterscript/comments-and-descriptions) + - [Formal Grammar](/docs/executionengine/reference/formal-grammar) + - [Formal Specification](/docs/executionengine/reference/formal-specification) + - [Strings & Values](/docs/executionengine/otterscript/strings-and-literals) ## Legacy Execution Engine {#legacy data-title="Legacy Execution Engine"} diff --git a/BuildMaster/deployments/plans/invoking-otterscript.md b/BuildMaster/deployments/plans/invoking-otterscript.md index 89565a55..5cd1a172 100644 --- a/BuildMaster/deployments/plans/invoking-otterscript.md +++ b/BuildMaster/deployments/plans/invoking-otterscript.md @@ -12,9 +12,9 @@ BuildMaster 5.1.6 introduced the ability to programmatically invoke OtterScript. * Invoking plans and modules from other applications These are accomplished through three operations: - * [Invoke-Plan](/support/documentation/buildmaster/reference/operations/buildmaster/invoke-plan), which executes a deployment plan stored within BuildMaster, in the current application, another application, or the global context - * [Invoke-Module](/support/documentation/buildmaster/reference/operations/buildmaster/invoke-module), which calls a module stored within BuildMaster - * [Invoke-OtterScript](/support/documentation/buildmaster/reference/operations/buildmaster/invoke-otterscript), which invokes an arbitrary string of OtterScript + * [Invoke-Plan](/docs/buildmaster/reference/operations/buildmaster/invoke-plan), which executes a deployment plan stored within BuildMaster, in the current application, another application, or the global context + * [Invoke-Module](/docs/buildmaster/reference/operations/buildmaster/invoke-module), which calls a module stored within BuildMaster + * [Invoke-OtterScript](/docs/buildmaster/reference/operations/buildmaster/invoke-otterscript), which invokes an arbitrary string of OtterScript ### Invoking plans and modules @@ -78,7 +78,7 @@ You can, however, use the `AdditionalVariables` or `Arguments` to pass in variab ### Programmatically invoking single operations -You can also use [Invoke-Operation](/support/documentation/buildmaster/reference/operations/general/invoke-operation) to execute a single operation: +You can also use [Invoke-Operation](/docs/buildmaster/reference/operations/general/invoke-operation) to execute a single operation: Invoke-Operation [Operation_Name] ( diff --git a/BuildMaster/deployments/plans/manual-steps-tasks.md b/BuildMaster/deployments/plans/manual-steps-tasks.md index aae4412c..230a1266 100644 --- a/BuildMaster/deployments/plans/manual-steps-tasks.md +++ b/BuildMaster/deployments/plans/manual-steps-tasks.md @@ -12,7 +12,7 @@ While 
OtterScript and PowerShell will allow you to automate just about anything, - Enabling a VPN to a customer site for a short time during a deployment - Running an ancient GUI-based deployment for part of an application -This is where the [Manual Operations](/support/documentation/buildmaster/reference/operations/buildmaster/manual-operation) comes in: when executed in a deployment plan, the execution will halt until an assigned person indicates that a specified task has been completed. +This is where the [Manual Operations](/docs/buildmaster/reference/operations/buildmaster/manual-operation) comes in: when executed in a deployment plan, the execution will halt until an assigned person indicates that a specified task has been completed. ## Using in a Deployment Plan {#using-plan data-title="Using in a Deployment Plan"} diff --git a/BuildMaster/deployments/targets/cloud/azure/deploy-to-azure.md b/BuildMaster/deployments/targets/cloud/azure/deploy-to-azure.md index 5a62ef90..141ac9c1 100644 --- a/BuildMaster/deployments/targets/cloud/azure/deploy-to-azure.md +++ b/BuildMaster/deployments/targets/cloud/azure/deploy-to-azure.md @@ -16,7 +16,7 @@ To deploy azure resources within buildmaster you will first need to #### Implementation -As mentioned before, in order to execute azure command line functions you will need to login to azure to associate the action you are performing to an azure subscription. We recommend that you combine the azure login **and** the azure command into one [OtterScript](https://inedo.com/support/documentation/otter/reference/otter-script) module such as this one. +As mentioned before, in order to execute azure command line functions you will need to login to azure to associate the action you are performing to an azure subscription. We recommend that you combine the azure login **and** the azure command into one [OtterScript](https://inedo.com/docs/otter/reference/otter-script) module such as this one. ##### Azure-Execute module: diff --git a/BuildMaster/deployments/targets/databases/#.md b/BuildMaster/deployments/targets/databases/#.md index d003a94b..8be5d16d 100644 --- a/BuildMaster/deployments/targets/databases/#.md +++ b/BuildMaster/deployments/targets/databases/#.md @@ -13,37 +13,37 @@ Database changes are not only more complicated to deploy than files, but they te - Oracle - Postgres -Database support is implemented through an [extensible feature](/support/documentation/buildmaster/administration/extensions), and additional databases may be supported by creating custom extensions. +Database support is implemented through an [extensible feature](/docs/buildmaster/administration/extensions), and additional databases may be supported by creating custom extensions. ## Database Connections {#database-connections data-title="Database Connections"} -Database connections are essentially named database connection strings for a specific type of database (SQL Server, MySQL, Oracle, Postgres, etc.) that are associated with an [environment](/support/documentation/buildmaster/infrastructure/environments). +Database connections are essentially named database connection strings for a specific type of database (SQL Server, MySQL, Oracle, Postgres, etc.) that are associated with an [environment](/docs/buildmaster/infrastructure/environments). 
-The [database operations](/support/documentation/buildmaster/reference/operations)rely on these connection names, whereas the environment is used for security purposes: +The [database operations](/docs/buildmaster/reference/operations) rely on these connection names, whereas the environment is used for security purposes: {.docs} - Database connections can be restricted to deployment plans targeting a particular environment. This means users who have permission to create and execute deployment plans in testing environments may not use database connections meant for production environments. -- Users may be prevented from viewing these database connection strings by defining an [access control](/support/documentation/buildmaster/administration/security) +- Users may be prevented from viewing these database connection strings by defining an [access control](/docs/buildmaster/administration/security) Database connections are also associated with a server. This server is only for manual database change script deployments. When selected, the deployment will run from the selected server, as if a "for server" statement was used in OtterScript. ## Database Operations {#database-operations data-title="Database Operations"} -BuildMaster has several built-in database operations that you can use in your [deployment plans](/support/documentation/buildmaster/core-concepts/deployment-plans). They all require a database connection to use. +BuildMaster has several built-in database operations that you can use in your [deployment plans](/docs/buildmaster/core-concepts/deployment-plans). They all require a database connection to use. -[Execute Database Statement](/support/documentation/buildmaster/reference/operations/databases/execute-database-statement) will run the specified SQL Statement against the specified connection. +[Execute Database Statement](/docs/buildmaster/reference/operations/databases/execute-database-statement) will run the specified SQL Statement against the specified connection. -[Execute Database Scripts on Disk](/support/documentation/buildmaster/reference/operations/databases/execute-database-scripts-on-disk) will read the contents of the specified files, and execute them as statements against the specified connection. +[Execute Database Scripts on Disk](/docs/buildmaster/reference/operations/databases/execute-database-scripts-on-disk) will read the contents of the specified files, and execute them as statements against the specified connection. -[Backup Database](/support/documentation/buildmaster/reference/operations/databases/backup-database) and [Restore Database](/support/documentation/buildmaster/reference/operations/databases/restore-database) will perform a backup or restore of a specific database to/from a file on disk. This run a simple SQL command specific to the database connection type (SQL Server, MySQL, Oracle, etc.), and does not offer many options. For a full-feature backup or restore, use the Execute Database Statement operation. +[Backup Database](/docs/buildmaster/reference/operations/databases/backup-database) and [Restore Database](/docs/buildmaster/reference/operations/databases/restore-database) will perform a backup or restore of a specific database to/from a file on disk. This runs a simple SQL command specific to the database connection type (SQL Server, MySQL, Oracle, etc.), and does not offer many options. For a full-featured backup or restore, use the Execute Database Statement operation. 
-[Execute Database Change Scripts](/support/documentation/buildmaster/reference/operations/databases/execute-database-change-scripts) and [Build Database Updater Executable](/support/documentation/buildmaster/reference/operations/databases/build-database-updater-executable) are described in more detail below. +[Execute Database Change Scripts](/docs/buildmaster/reference/operations/databases/execute-database-change-scripts) and [Build Database Updater Executable](/docs/buildmaster/reference/operations/databases/build-database-updater-executable) are described in more detail below. ## Object Deployment {#object-deployment data-title="Object Deployment"} In order to deploy database objects (stored procedures, views, etc.) with BuildMaster, you should first put these objects in source control. This is accomplished by generating SQL Script files that drop and create the object, as prescribed in Database Changes Done Right. -Next, you can use the source control operations to get files on disk, and then use the [Execute Database Scripts on Disk](/support/documentation/buildmaster/reference/operations/databases/execute-database-scripts-on-disk) to execute those scripts. +Next, you can use the source control operations to get files on disk, and then use the [Execute Database Scripts on Disk](/docs/buildmaster/reference/operations/databases/execute-database-scripts-on-disk) to execute those scripts. ## Change Scripts (DDL/DML) {#change-scripts data-title="Change Scripts (DDL/DML)"} @@ -51,7 +51,7 @@ Some scripts that you run against a database can make irrevocable changes to you Because these types of scripts can be run once (and only once), and cannot be undone without running another script or restoring a database backup, they can be particularly challenging to incorporate into your release process. BuildMaster can help manage this process with database change script assets. -You can add a database change script asset by going to the application’s Assets > SQL Scripts, and adding a script. These scripts may then be manually executed from the user interface, or deployed within a deployment plan using the [Execute Database Change Scripts](/support/documentation/buildmaster/reference/operations/databases/execute-database-change-scripts) operation. +You can add a database change script asset by going to the application’s Assets > SQL Scripts, and adding a script. These scripts may then be manually executed from the user interface, or deployed within a deployment plan using the [Execute Database Change Scripts](/docs/buildmaster/reference/operations/databases/execute-database-change-scripts) operation. ### Release Number Ordering {#release-number data-title="Release Number Ordering"} @@ -74,7 +74,7 @@ To manually deploy a database change script, you can click the deploy button on In addition to running change scripts from within BuildMaster, sometimes it is necessary to deploy change scripts to a system not associated with the installation. BuildMaster is able to generate a simple command-line executable that has no dependencies (other than .NET 4.5) and may be used to execute change scripts against any arbitrary database, just as if this were done from within BuildMaster. -You can use the [Build Database Updater Executable](/support/documentation/buildmaster/reference/operations/databases/build-database-updater-executable) operation within a deployment plan to generate this executable, or do it from the database change scripts page in the user interface. 
+You can use the [Build Database Updater Executable](/docs/buildmaster/reference/operations/databases/build-database-updater-executable) operation within a deployment plan to generate this executable, or do it from the database change scripts page in the user interface. ### Using the Change Script Package {#change-script-package data-title="Using the Change Script Package"} diff --git a/BuildMaster/installation-and-maintenance/architecture/#.md b/BuildMaster/installation-and-maintenance/architecture/#.md index 8ff5783a..1c57ce81 100644 --- a/BuildMaster/installation-and-maintenance/architecture/#.md +++ b/BuildMaster/installation-and-maintenance/architecture/#.md @@ -16,7 +16,7 @@ tweak the application pool settings as needed. Note that modifying the ```web.co ## Service {#service data-title="Service"} -This runs in the background and actually runs your [deployment plans](/support/documentation/buildmaster/deployments/plans) using the [execution engine](/support/documentation/buildmaster/deployments/plans/execution-engine). It's a standard [Windows Service Application](https://msdn.microsoft.com/en-us/library/windows/desktop/ms685141(v=vs.85)), and may be managed and [configured](architecture/service#configuration-options) using the Windows Service Manager or ```sc.exe```as you see fit. +This runs in the background and actually runs your [deployment plans](/docs/buildmaster/deployments/plans) using the [execution engine](/docs/buildmaster/deployments/plans/execution-engine). It's a standard [Windows Service Application](https://msdn.microsoft.com/en-us/library/windows/desktop/ms685141(v=vs.85)), and may be managed and [configured](architecture/service#configuration-options) using the Windows Service Manager or ```sc.exe```as you see fit. Note that modifying the ```BuildMaster.Service.config``` file for any purpose other than to change the database connection string or encryption key is not at all supported. diff --git a/BuildMaster/installation-and-maintenance/architecture/service.htm b/BuildMaster/installation-and-maintenance/architecture/service.htm index 486a032d..8e1c8c14 100644 --- a/BuildMaster/installation-and-maintenance/architecture/service.htm +++ b/BuildMaster/installation-and-maintenance/architecture/service.htm @@ -8,7 +8,7 @@

      The BuildMaster Service is key component of BuildMaster's architecture, and is - what actually runs your deployment plans using the + what actually runs your deployment plans using the execution engine. It's a standard Windows Service Application, and may be managed and configured using the Windows Service Manager or sc.exe as you see fit. diff --git a/BuildMaster/installation-and-maintenance/backing-up.md b/BuildMaster/installation-and-maintenance/backing-up.md index 4be64cc5..dd58c4b3 100644 --- a/BuildMaster/installation-and-maintenance/backing-up.md +++ b/BuildMaster/installation-and-maintenance/backing-up.md @@ -9,11 +9,11 @@ BuildMaster should be backed up frequently, and as a .NET- and SQL Server-based {.docs} - **BuildMaster Database** - a SQL Server database that contains all of BuildMaster's configuration data -- **Encryption key** - the value stored in both the web.config and BuildMaster.Service.config that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/support/documentation/buildmaster/administration/resource-credentials) -- **Shared Configuration** (as of v6.0.7) - the file `%PROGRAMDATA%\Inedo\SharedConfig\BuildMaster.config` contains the encryption key that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/support/documentation/buildmaster/administration/resource-credentials) +- **Encryption key** - the value stored in both the web.config and BuildMaster.Service.config that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/docs/buildmaster/administration/resource-credentials) +- **Shared Configuration** (as of v6.0.7) - the file `%PROGRAMDATA%\Inedo\SharedConfig\BuildMaster.config` contains the encryption key that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/docs/buildmaster/administration/resource-credentials) - **Artifact Library Files** - a path on disk (defined in Artifacts.BasePath setting) that contains all the files for artifacts you created within BuildMaster -You may also back up your [extensions](/support/documentation/buildmaster/reference/extensions) folder, which is stored in the path defined in the Extensions.ExtensionsPath advanced configuration setting. This will make restoring to a new server as easy as possible, in that you'll just need to copy the backup files to the same location on the new server. +You may also back up your [extensions](/docs/buildmaster/reference/extensions) folder, which is stored in the path defined in the Extensions.ExtensionsPath advanced configuration setting. This will make restoring to a new server as easy as possible, in that you'll just need to copy the backup files to the same location on the new server. ## Database Backup {#database-backup data-title="Database Backup"} diff --git a/BuildMaster/installation-and-maintenance/installation-guide/#.htm b/BuildMaster/installation-and-maintenance/installation-guide/#.htm index cb52ccb1..e579a839 100644 --- a/BuildMaster/installation-and-maintenance/installation-guide/#.htm +++ b/BuildMaster/installation-and-maintenance/installation-guide/#.htm @@ -114,7 +114,7 @@

      10. Web Server

      11. User Account

      By default, the installer will use the NetworkService account - to run the BuildMaster Service and Web Application. + to run the BuildMaster Service and Web Application. We recommend sticking with this, and changing the account later if you need to.

      BuildMaster User account diff --git a/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm b/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm index d5d984de..2435de71 100644 --- a/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm +++ b/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm @@ -16,13 +16,13 @@

To facilitate communication between BuildMaster and the Windows servers you want to deploy to and orchestrate, BuildMaster uses a lightweight agent with a highly-optimized and resilient protocol. Installing agents is even easier than - installing BuildMaster, and the Agent Installation Guide will provide step-by-step instructions, as well as provide some detail as to what's + installing BuildMaster, and the Agent Installation Guide will provide step-by-step instructions, as well as provide some detail as to what's happening behind the scenes.

      - Shared Documentation Note - we've kept the Agent Installation details in the Various Documentation, see Adding the Server to BuildMaster below: + Shared Documentation Note - we've kept the Agent Installation details in the Various Documentation, see Adding the Server to BuildMaster below:
      diff --git a/BuildMaster/installation-and-maintenance/installation-guide/silent-installation.htm b/BuildMaster/installation-and-maintenance/installation-guide/silent-installation.htm index d3069d6f..d5f6cf4e 100644 --- a/BuildMaster/installation-and-maintenance/installation-guide/silent-installation.htm +++ b/BuildMaster/installation-and-maintenance/installation-guide/silent-installation.htm @@ -168,7 +168,7 @@

      Agent Silent Install Options

      Shared Documentation Note - the silent installation arguments for the Inedo Agent can be found in the - Agent Installation documentation. + Agent Installation documentation.
      diff --git a/BuildMaster/installation-and-maintenance/license.md b/BuildMaster/installation-and-maintenance/license.md index 0f91fb53..501ed22a 100644 --- a/BuildMaster/installation-and-maintenance/license.md +++ b/BuildMaster/installation-and-maintenance/license.md @@ -12,7 +12,7 @@ BuildMaster is licensed by number of users on an annual or perpetual basis. BuildMaster Free has no user limit and includes all the features of BuildMaster Enterprise with two important differences: 1. Every authenticated user in the system is authorized to perform any function, effectively making every authenticated user a System Administrator -2. "View-only" access for all unauthenticated users (see [Specific Guest Account Tasks](/support/documentation/buildmaster/administration/security/free-edition#guest-tasks) for complete list of granted task attributes) +2. "View-only" access for all unauthenticated users (see [Specific Guest Account Tasks](/docs/buildmaster/administration/security/free-edition#guest-tasks) for complete list of granted task attributes) BuildMaster Enterprise is restricted only by the number of licensed users. @@ -22,7 +22,7 @@ Named users are listed on the "Licensing & Activation" page within the software, ## License Keys {#keys data-title="License Keys"} -For more information on License Key Management and Activation, visit the docs [here](/support/documentation/various/licensing/management). +For more information on License Key Management and Activation, visit the docs [here](/docs/various/licensing/management). ## Previous and Unsupported Versions {#pre-unsupported data-title="Previous and Unsupported Versions"} diff --git a/BuildMaster/overview.md b/BuildMaster/overview.md index 39fb8c8f..917f7b39 100644 --- a/BuildMaster/overview.md +++ b/BuildMaster/overview.md @@ -18,7 +18,7 @@ In this tutorial, you will learn how to deploy a simple ASP.NET web application *View the full step-by-step tutorial [here](/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis).* -BuildMaster will install on any supported version of Windows; simply [download](/buildmaster/download), and click through the installer to get BuildMaster up and running in minutes. Through the installer, you select the edition you wish to install; a trial, the free edition, or enter a license key. Review the [step-by-step Installation Guide](/support/documentation/buildmaster/installation/windows-guide) for details as to what's happening behind the scenes. +BuildMaster will install on any supported version of Windows; simply [download](/buildmaster/download), and click through the installer to get BuildMaster up and running in minutes. Through the installer, you select the edition you wish to install; a trial, the free edition, or enter a license key. Review the [step-by-step Installation Guide](/docs/buildmaster/installation/windows-guide) for details as to what's happening behind the scenes. ### Agents and Servers in BuildMaster @@ -29,5 +29,5 @@ The Inedo Agent is generally the best way to communicate with a *Windows server* To communicate with Linux servers, BuildMaster uses *SSH* and *SFTP*, and if you're using BuildMaster to interact with the server it is installed on, you can just set it up using a *local agent*. 
:::circle-button-set -[Learn More About Agents & Servers](/support/documentation/buildmaster/administration/agents-and-infrastructure){.doc-button} +[Learn More About Agents & Servers](/docs/buildmaster/administration/agents-and-infrastructure){.doc-button} ::: diff --git a/BuildMaster/reference/api/#.md b/BuildMaster/reference/api/#.md index 761e9d9c..ccbf9b1e 100644 --- a/BuildMaster/reference/api/#.md +++ b/BuildMaster/reference/api/#.md @@ -19,8 +19,8 @@ There are various API endpoints you can use to programmatically query or configu ## API Keys -Access to any BuildMaster API requires an API key. Refer to the [API Keys documentation](/support/documentation/buildmaster/administration/security/api-keys) for more information. +Access to any BuildMaster API requires an API key. Refer to the [API Keys documentation](/docs/buildmaster/administration/security/api-keys) for more information. ## BuildMaster Native API -The BuildMaster Native API is a lower-level API that effectively wraps the BuildMaster data layer. It's not terribly intuitive, but its documented here: [BuildMaster Native API Reference](/support/documentation/buildmaster/reference/api/native). \ No newline at end of file +The BuildMaster Native API is a lower-level API that effectively wraps the BuildMaster data layer. It's not terribly intuitive, but its documented here: [BuildMaster Native API Reference](/docs/buildmaster/reference/api/native). diff --git a/BuildMaster/reference/api/ci-badge.htm b/BuildMaster/reference/api/ci-badge.htm index 3174602c..d346ab62 100644 --- a/BuildMaster/reference/api/ci-badge.htm +++ b/BuildMaster/reference/api/ci-badge.htm @@ -28,7 +28,7 @@

      API Usage

Each of these endpoints requires that an API Key with CI Badge access is supplied with each request. The examples use a query string for simplicity, since most badge URLs will be rendered directly on a web page, but you could also use a header, form value, or JSON property. See - API Access & API Keys for more information. + API Access & API Keys for more information.

      diff --git a/BuildMaster/reference/api/variables.htm b/BuildMaster/reference/api/variables.htm index 2172ce45..406e1c78 100644 --- a/BuildMaster/reference/api/variables.htm +++ b/BuildMaster/reference/api/variables.htm @@ -32,8 +32,8 @@

      Data Specification Variable Value Strings

If a variable value string starts with a `, @(, %(, then the value will be evaluated as a literal_expression - (see formal grammar), which means you'll need - to treat the value as a proper string literal, and escape $ and other + (see formal grammar), which means you'll need + to treat the value as a proper string literal, and escape $ and other characters if you don't want them expanded into variables at runtime... or cause an error when they can't expand.
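For illustration only (the exact payload shape is defined by the data specification above, and the names here are placeholders), a set of variable value strings might look like the following. The second value begins with @( and would therefore be evaluated as a literal expression, producing a two-item vector, while any unescaped $ inside a value would be expanded, or raise an error, at runtime.

```
{
  "ApplicationName": "HDars",
  "TargetServers": "@(hdars-web-1, hdars-web-2)",
  "ReleaseNotesUrl": "https://example.org/hdars/notes"
}
```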

      diff --git a/BuildMaster/reference/extensions.md b/BuildMaster/reference/extensions.md index 7c5022ee..c7151e1b 100644 --- a/BuildMaster/reference/extensions.md +++ b/BuildMaster/reference/extensions.md @@ -22,7 +22,7 @@ A connection to inedo.com is recommended, but not required. If your BuildMaster ## Creating your Own Extensions {#creating-extension data-title="Creating your Own Extensions"} -You can extend BuildMaster's functionality by [creating an extension](/support/documentation/various/inedo-sdk/creating) that's built against the [Inedo SDK](/support/documentation/various/inedo-sdk/the-sdk). This extension can also be used in Inedo's other tools. Here's what you can extend in BuildMaster: +You can extend BuildMaster's functionality by [creating an extension](/docs/various/inedo-sdk/creating) that's built against the [Inedo SDK](/docs/various/inedo-sdk/the-sdk). This extension can also be used in Inedo's other tools. Here's what you can extend in BuildMaster: | Name | Description | | ---- | ----------- | diff --git a/BuildMaster/releases/notes.htm b/BuildMaster/releases/notes.htm index 3f78e495..016ebe80 100644 --- a/BuildMaster/releases/notes.htm +++ b/BuildMaster/releases/notes.htm @@ -9,7 +9,7 @@

      - There are times when you'll want to attach additional information to a release or build, usually to document something for later auditing purposes, or to share information with another team member. + There are times when you'll want to attach additional information to a release or build, usually to document something for later auditing purposes, or to share information with another team member.

      For example, you might want to document some of the following: @@ -37,7 +37,7 @@

      • Corrective steps automatically executed using a try/catch statement after an error occurred during a deployment
      • -
      • The ID of an issue created using an issue tracking tool operation
      • +
      • The ID of an issue created using an issue tracking tool operation
      • The specific server used in a resource pool
      diff --git a/BuildMaster/releases/overview.md b/BuildMaster/releases/overview.md index 1ed76980..86d80871 100644 --- a/BuildMaster/releases/overview.md +++ b/BuildMaster/releases/overview.md @@ -5,7 +5,7 @@ sequence: 100 keywords: buildmaster, releases, pipelines show-headings-in-nav: true --- -A release is an event where a planned set of changes are tested and delivered to production, or more specifically, a final pipeline stage. Releases can vary in conceptual size, from a major product launch, to a single-line change rushed to production in an emergency. The deployment unit that is promoted through a pipeline in order to effectively deploy a release is referred to as a [build](/support/documentation/buildmaster/builds/overview). +A release is an event where a planned set of changes are tested and delivered to production, or more specifically, a final pipeline stage. Releases can vary in conceptual size, from a major product launch, to a single-line change rushed to production in an emergency. The deployment unit that is promoted through a pipeline in order to effectively deploy a release is referred to as a [build](/docs/buildmaster/builds/overview). In addition to the application code you want to deploy, releases have several associated features available to automate and coordinate the software release process: @@ -18,7 +18,7 @@ In addition to the application code you want to deploy, releases have several as ## Creating Releases {#creating-releases data-title="Creating Releases"} -After creating an [application](/support/documentation/buildmaster/modeling-applications/applications) and a [pipeline](/support/documentation/buildmaster/core-concepts/pipelines), a release may be created for an application. A release has the following initial properties: +After creating an [application](/docs/buildmaster/modeling-applications/applications) and a [pipeline](/docs/buildmaster/core-concepts/pipelines), a release may be created for an application. A release has the following initial properties: {.docs} - **Release template** - if one exists, a [release template](templates) may be selected to determine variable prompts, default pipeline, and more @@ -26,7 +26,7 @@ After creating an [application](/support/documentation/buildmaster/modeling-appl - **Release name** - is an optional alias you can use to create a friendlier release identifier; it is not unique within an application - **Pipeline** - is the sequence of stages and approvals that builds are promoted through -Once a release is created, you can add [configuration variables](/support/documentation/buildmaster/administration/configuration-variables) that can be used by deployment plans at runtime, target dates that will be show on a [calendar](calendars), and so on. +Once a release is created, you can add [configuration variables](/docs/buildmaster/administration/configuration-variables) that can be used by deployment plans at runtime, target dates that will be show on a [calendar](calendars), and so on. 
## Status and Lifecycles {#status-lifestyles data-title="Status and Lifecycles"} diff --git a/BuildMaster/releases/rollback.md b/BuildMaster/releases/rollback.md index fbcf764c..3e83ab02 100644 --- a/BuildMaster/releases/rollback.md +++ b/BuildMaster/releases/rollback.md @@ -30,7 +30,7 @@ You'll have the option to deploy the artifact immediately, which is common for m ![](/resources/tutorials/roll-back/deploy-now.png){.screenshot} -The actions in a [deployment plan](/support/documentation/buildmaster/deployments/plans) are designed to look at the *execution context* to determine what to do. In this case the plan will deploy the artifact associated with Release 1.0.2 Build 3\. This will ensure that whatever files were deployed with Release 1.0.2 Build 3 will _**always**_ be deployed with Release 1.0.2 Build 3. +The actions in a [deployment plan](/docs/buildmaster/deployments/plans) are designed to look at the *execution context* to determine what to do. In this case the plan will deploy the artifact associated with Release 1.0.2 Build 3\. This will ensure that whatever files were deployed with Release 1.0.2 Build 3 will _**always**_ be deployed with Release 1.0.2 Build 3. Once the Release 1.0.2 Build 3 has been re-deployed, that information is reflected in BuildMaster. diff --git a/BuildMaster/releases/templates.htm b/BuildMaster/releases/templates.htm index 374c3c1c..479dd3bb 100644 --- a/BuildMaster/releases/templates.htm +++ b/BuildMaster/releases/templates.htm @@ -21,9 +21,9 @@

      Template Variables and Confi

      Hierarchy

      Release template variables are configurable at any of 3 levels:

        -
      1. Release creation - when creating a new release, inputs will appear for the specified variables and once the release is created, corresponding Release Variables will be created. These variable values may be edited later on the Release Details page.
      2. +
      3. Release creation - when creating a new release, inputs will appear for the specified variables and once the release is created, corresponding Release Variables will be created. These variable values may be edited later on the Release Details page.
      4. Build creation - when creating a new build of a release with a template assigned, inputs will appear for the specified variables and once the build is created, corresponding Build Variables will be created. These variable values may be edited later on the Build Details page.
      5. -
      6. Deployment to a pipeline stage - when deploying a build of a release with a template assigned, inputs will appear on the "Deploy to Stage" page and once deployment begins, corresponding Execution Variables will be injected into the deployment and are not editable (though they may be overridden within a deployment plan using the Set Variable statement).
      7. +
      8. Deployment to a pipeline stage - when deploying a build of a release with a template assigned, inputs will appear on the "Deploy to Stage" page and once deployment begins, corresponding Execution Variables will be injected into the deployment and are not editable (though they may be overridden within a deployment plan using the Set Variable statement).

      Types

      There are 4 types of Template Variables:

      @@ -38,12 +38,12 @@

      Common Template Variable Configuration

      • Initial value - the pre-populated value that will appear in a rendered variable prompt
      • Required - indicates that a value must be present to continue creating the release/build or to deploy
      • -
      • Obscure the value of this variable from casual viewing - indicates that when a corresponding release or build variable is created, it is hidden from the UI and will require additional edit permissions to view it. Note that this is not a means to "secure" a variable value, which is handled through Resource Credentials.
      • +
      • Obscure the value of this variable from casual viewing - indicates that when a corresponding release or build variable is created, it is hidden from the UI and will require additional edit permissions to view it. Note that this is not a means to "secure" a variable value, which is handled through Resource Credentials.

      Creating a Release Template

      To create a release template, in the context of an application, select the Releases tab under the Releases project sub-navigation menu. While the UI editor is the recommended method to create release templates, it is also possible to create them from a JSON object.

      Using a Release Template

      -

      Once a release template is created, it can be used in one of two places: either creating a new release from the UI, or the Create Release from Template API endpoint.

      +

      Once a release template is created, it can be used in one of two places: either creating a new release from the UI, or the Create Release from Template API endpoint.

If there is only a single release template for an application, that release template is selected by default when creating a new release. Once a release is assigned a release template, any of the 3 variable properties that contain template variable configurations will then prompt for values as per the Hierarchy section above.

A release may be assigned a different release template at any time while the release is active; however, note that this can change the variable prompts whose values may be expected by a deployment plan.

      JSON Format

      diff --git a/BuildMaster/verification/issue-tracking/#.htm b/BuildMaster/verification/issue-tracking/#.htm index 1b15c1f7..8afdbfbc 100644 --- a/BuildMaster/verification/issue-tracking/#.htm +++ b/BuildMaster/verification/issue-tracking/#.htm @@ -31,12 +31,12 @@

    - The Issue tracking integration is implemented through an extensible feature, and additional tools may be supported by building extensions. + The Issue tracking integration is implemented through an extensible feature, and additional tools may be supported by building extensions.

    Issue Tracker Automation

- Most issue tracking tool extensions have operations that can be used in a deployment plan to create, query, and modify issues. Each issue tracking tool is different, but there are generally four operations available in each extension: + Most issue tracking tool extensions have operations that can be used in a deployment plan to create, query, and modify issues. Each issue tracking tool is different, but there are generally four operations available in each extension:

      @@ -146,7 +146,7 @@

      Issues in BuildM

      Issues Sources

      - The Inedo Execution Engine uses issue sources to perform background synchronizations with external issue tracking tools. Issue sources act as a filter that lists issues that are relevant for a particular release. Depending on the type of issue tracking tool, each issue source will have different fields that are specific to the tool. + The Inedo Execution Engine uses issue sources to perform background synchronizations with external issue tracking tools. Issue sources act as a filter that lists issues that are relevant for a particular release. Depending on the type of issue tracking tool, each issue source will have different fields that are specific to the tool.

      \ No newline at end of file diff --git a/Hedgehog/deliver-deploy/deployment-plans.htm b/Hedgehog/deliver-deploy/deployment-plans.htm index 0b2251b1..9f549a3c 100644 --- a/Hedgehog/deliver-deploy/deployment-plans.htm +++ b/Hedgehog/deliver-deploy/deployment-plans.htm @@ -37,7 +37,7 @@

      Getting Started: Visual Plan Editor

      - OtterScript is a Domain-Specific Language that was designed in tandem with the Inedo execution engine to represent configuration plans and orchestration plans in Otter, and deployment plans in Hedgehog and BuildMaster. + OtterScript is a Domain-Specific Language that was designed in tandem with the Inedo execution engine to represent configuration plans and orchestration plans in Otter, and deployment plans in Hedgehog and BuildMaster.

      diff --git a/Hedgehog/deliver-deploy/execution-engine.htm b/Hedgehog/deliver-deploy/execution-engine.htm index db424c2b..3a0af758 100644 --- a/Hedgehog/deliver-deploy/execution-engine.htm +++ b/Hedgehog/deliver-deploy/execution-engine.htm @@ -30,10 +30,10 @@

      OtterScript in Hedgehog

      - OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent - configuration plans and - orchestration plans in Otter, - and deployment plans in Hedgehog. + OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent + configuration plans and + orchestration plans in Otter, + and deployment plans in Hedgehog.

      You really don't need to learn OtterScript; it's simply the textual representation of a plan, and plans are already fully editable in the drag-and-drop plan editor. @@ -42,12 +42,12 @@

      OtterScript in Hedgehog

      diff --git a/Hedgehog/deliver-deploy/pipelines/#.htm b/Hedgehog/deliver-deploy/pipelines/#.htm index e1a605f3..83f81e1e 100644 --- a/Hedgehog/deliver-deploy/pipelines/#.htm +++ b/Hedgehog/deliver-deploy/pipelines/#.htm @@ -57,7 +57,7 @@

      Deployment Targets

      Deployment Plan

This is the actual OtterScript that will be run in order to deploy the release package. It can reference a project-level plan, a parent project's plan, or a global plan. Plan names should be referenced by simple names, but may also be accessed via - normal raft resolution rules. + normal raft resolution rules.

      @@ -169,7 +169,7 @@

      Auto-De

      Pipeline Variables

- You can define key/value pairs on pipelines and stages. These behave just like configuration variables, in that you can use these variables within deployment plans that are executed through the pipeline. + You can define key/value pairs on pipelines and stages. These behave just like configuration variables, in that you can use these variables within deployment plans that are executed through the pipeline.

      However, pipeline variables are not actually configuration variables: you can't create multi-scoped variables, or modify them through the variables API. diff --git a/Hedgehog/global-components/executions/#.htm b/Hedgehog/global-components/executions/#.htm index ec8f15e9..2ceb0f7f 100644 --- a/Hedgehog/global-components/executions/#.htm +++ b/Hedgehog/global-components/executions/#.htm @@ -14,7 +14,7 @@

      - An execution represents a deployment, orchestration, routine task, synchronization, or any other type of job that is run by the service using the Inedo Execution Engine. + An execution represents a deployment, orchestration, routine task, synchronization, or any other type of job that is run by the service using the Inedo Execution Engine.

      All execution records are stored in the database, have scoped logs, and the following properties: diff --git a/Hedgehog/global-components/executions/dispatching-and-running.htm b/Hedgehog/global-components/executions/dispatching-and-running.htm index 30f6a85f..656bc091 100644 --- a/Hedgehog/global-components/executions/dispatching-and-running.htm +++ b/Hedgehog/global-components/executions/dispatching-and-running.htm @@ -9,10 +9,10 @@

- The service uses the ExecutionDispatcher Task Runner to query the database for executions with a Run State of "Pending" and a Start Date that is before or equal to the current DateTime. For each suitable execution found, a new background task is used to run the execution. + The service uses the ExecutionDispatcher Task Runner to query the database for executions with a Run State of "Pending" and a Start Date that is before or equal to the current DateTime. For each suitable execution found, a new background task is used to run the execution.

      - The Service administration page will display the currently running background tasks that were created by the ExecutionDispatcher, and provide links to the appropriate Execution in Progress page. + The Service administration page will display the currently running background tasks that were created by the ExecutionDispatcher, and provide links to the appropriate Execution in Progress page.

      When the execution is complete, the background task will terminate; you can view all executions (regardless of state) using the Executions administration page, although it will probably be easier to find the specific execution record using a more specific context (such as package deployment history). diff --git a/Hedgehog/global-components/executions/types.htm b/Hedgehog/global-components/executions/types.htm index 0d2485ae..deca3cd3 100644 --- a/Hedgehog/global-components/executions/types.htm +++ b/Hedgehog/global-components/executions/types.htm @@ -57,7 +57,7 @@

      Infrastructure Import Execution

      Automatic Approval Gate Execution

- TBD. This may or may not be needed, because the task runner probably won't fail; it would be dispatched from the web front-end for diagnostic purposes, similar to how issue import works in BuildMaster. + TBD. This may or may not be needed, because the task runner probably won't fail; it would be dispatched from the web front-end for diagnostic purposes, similar to how issue import works in BuildMaster.

      Deployment Execution Types

      @@ -71,14 +71,14 @@

      Package Deployment Execution

This is a Deployment Execution that is initiated through the Quick Deploy workflow. A Server Target is always specified, and the deployment plan is stored either within Hedgehog or within the package (install.otter).

      - Once loaded and compiled, the actual plan is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name). The plan is then wrapped using the Server Targeting described below. + Once loaded and compiled, the actual plan is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name). The plan is then wrapped using the Server Targeting described below.

      Pipeline Targeted Execution

      A Pipeline Targeted Execution is dispatched by a Pipeline Stage Execution, and allows for deploying multiple packages in a package set.

      - Each package in the set is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name), and that plan is then wrapped using the Server Targeting described below. + Each package in the set is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name), and that plan is then wrapped using the Server Targeting described below.

      Server Targeting

      @@ -86,18 +86,18 @@

      Server Targeting

      Direct Targeting

- A single Context Iteration Statement will be created with the Source set to a literal expression of the server names (e.g. @(Server1, Server2, Server3)). The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan. + A single Context Iteration Statement will be created with the Source set to a literal expression of the server names (e.g. @(Server1, Server2, Server3)). The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.

      Roles + Environment Targeting

      - For every role targeted, a Set Context Statement (with a ContextType of role, and ContextValue of the role name) will be created. The Body of those statements will be comprised of a single Context Iteration Statement with the Source set to a literal expression of the servers in that role and environment. The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan. + For every role targeted, a Set Context Statement (with a ContextType of role, and ContextValue of the role name) will be created. The Body of those statements will be comprised of a single Context Iteration Statement with the Source set to a literal expression of the servers in that role and environment. The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.

      If no servers were in any of the role iterations, then a Log-Warning statement will be appended to the wrapped plan, as the actual plan will not execute.

      Pre-execution Failure

      - This execution invokes the OtterScript runtime to execute either the specified plan directly, or a wrapped version of the plan. + This execution invokes the OtterScript runtime to execute either the specified plan directly, or a wrapped version of the plan.

If an error condition occurs before the runtime is invoked, such as a compilation error or a pipeline resolution error, the error message will be written to logs, the Execution Status will be set to Error, and the Run State will be set to Completed. diff --git a/Hedgehog/global-components/rafts.htm b/Hedgehog/global-components/rafts.htm index b4d34a60..7ff73af1 100644 --- a/Hedgehog/global-components/rafts.htm +++ b/Hedgehog/global-components/rafts.htm @@ -20,7 +20,7 @@

      The Default Raft

      - Because you likely won't need multiple rafts right away, a raft named "Default" is automatically created when you install Hedgehog. This is a Database raft and is backed up when you do a regular Back-up of Hedgehog. + Because you likely won't need multiple rafts right away, a raft named "Default" is automatically created when you install Hedgehog. This is a Database raft and is backed up when you do a regular Back-up of Hedgehog.

      If you only have a single raft configured, and that raft is named "Default", then the ability to filter or select rafts will not be exposed on plan, project, asset, etc. pages. @@ -31,11 +31,11 @@

      Creating, Managing, and Downloading Rafts

      Rafts and Projects

      - When you specify a raft for a project, Hedgehog will always search within the associated raft for content, using the Project Content Name Resolution search. Only the associated raft will be searched, which means that if a parent project uses a different raft, content may never be located. If no raft is specified, then the "Default" raft (if one is named that) is used. + When you specify a raft for a project, Hedgehog will always search within the associated raft for content, using the Project Content Name Resolution search. Only the associated raft will be searched, which means that if a parent project uses a different raft, content may never be located. If no raft is specified, then the "Default" raft (if one is named that) is used.

      Raft Repository Types

      - Rafts rely on an Extensible Raft Provider to retrieve and store raft data. There are three built-in raft types: + Rafts rely on an Extensible Raft Provider to retrieve and store raft data. There are three built-in raft types:

        @@ -132,11 +132,11 @@

        Project-specific Content

        Raft Variables

        - Variables persisted within a raft are not currently displayed anywhere in the UI, and are intended to be used for storing default or fallback values for plans stored in portable/reusable rafts. They are the lowest scope, and are only used if a Configuration Variable of the same name does not exist. + Variables persisted within a raft are not currently displayed anywhere in the UI, and are intended to be used for storing default or fallback values for plans stored in portable/reusable rafts. They are the lowest scope, and are only used if a Configuration Variable of the same name does not exist.

        Raft Asset Resolution

- Hedgehog can automatically resolve names using the Project Content Name Resolution system, but sometimes it's convenient to specify an explicit, fully-qualified path. You can do this with a combination of raft names and Project Paths. + Hedgehog can automatically resolve names using the Project Content Name Resolution system, but sometimes it's convenient to specify an explicit, fully-qualified path. You can do this with a combination of raft names and Project Paths.

          diff --git a/Hedgehog/global-components/resource-credentials.htm b/Hedgehog/global-components/resource-credentials.htm index ba73ca6c..6e3ecb2c 100644 --- a/Hedgehog/global-components/resource-credentials.htm +++ b/Hedgehog/global-components/resource-credentials.htm @@ -67,7 +67,7 @@ * indicates an encrypted/sensitive field

          - Resource credentials are implemented through an extensible feature, and several extensions (VSTS, GitHub, etc.) will add types that are necessary for the extension's components. + Resource credentials are implemented through an extensible feature, and several extensions (VSTS, GitHub, etc.) will add types that are necessary for the extension's components.

          Creating and Editing Credentials

          @@ -100,7 +100,7 @@

          Limiting Credential Access

          - You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment. + You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment.

          There are two task attributes you can use to control this access: @@ -119,7 +119,7 @@

          Creden Any operation that uses passwords, API keys, or sensitive information will give you the option to use a resource credential instead of needing to put those values directly in your OtterScript.

          - For example, consider the Ensure-AppPool operation: + For example, consider the Ensure-AppPool operation:

          Ensure apppool operation

          diff --git a/Hedgehog/packages/creating.htm b/Hedgehog/packages/creating.htm index 0096e1b5..b8d01a98 100644 --- a/Hedgehog/packages/creating.htm +++ b/Hedgehog/packages/creating.htm @@ -13,16 +13,16 @@ There's not a whole lot to a package: it's just a zip file containing the files you actually want to distribute, as well as a manifest file that describes the package itself. There are a lot of options for creating and publishing packages to a feed, either from a developer's workstation, a build server, or anywhere else:

          - You can also use Hedgehog's advanced execution engine to help you create packages by setting up a release and pipeline that imports build artifacts from Jenkins, grabs files from a network drive, or even creates them directly using msbuild. + You can also use Hedgehog's advanced execution engine to help you create packages by setting up a release and pipeline that imports build artifacts from Jenkins, grabs files from a network drive, or even creates them directly using msbuild.

          Deploying Packages

          @@ -30,7 +30,7 @@

          Deploying Packages

          • Quick Deploy; an ad-hoc deployment of a single package to any number of servers
          • -
          • Deployment Sets; a multi-stage deployment of one or more packages that can use approvals, templates, and project based security
          • +
          • Deployment Sets; a multi-stage deployment of one or more packages that can use approvals, templates, and project based security
          option, which means it does not add the package to the local registry.

          +

          This runs the romp install method, but specifies an option, which means it does not add the package to the local registry.

          Push

          @@ -63,7 +63,7 @@

          Push

          Other Notes

          -

Romp for Visual Studio uses a .upack/ folder, located at a project's root, that is laid out exactly like a regular romp package, except without a package/ subfolder. Instead, MSBuild's output folder is used to create the contents at packing time.

          +

Romp for Visual Studio uses a .upack/ folder, located at a project's root, that is laid out exactly like a regular romp package, except without a package/ subfolder. Instead, MSBuild's output folder is used to create the contents at packing time.

          The plugin uses an embedded version of Romp (the version is indicated in the notes), but you can configure it to point to another installation on your machine if desired.

          diff --git a/Romp/installation/configuration.htm b/Romp/installation/configuration.htm index cb90d469..a98b1623 100644 --- a/Romp/installation/configuration.htm +++ b/Romp/installation/configuration.htm @@ -63,7 +63,7 @@

          Parameter Reference

          secure-credentials - A Boolean which indicates, when true, that credentials must be entered interactively when using the store command and may not be displayed with the display command. This defaults to false. + A Boolean which indicates, when true, that credentials must be entered interactively when using the store command and may not be displayed with the display command. This defaults to false. diff --git a/Romp/installation/installation-guide.htm b/Romp/installation/installation-guide.htm index 3f6e64dc..a7591ed6 100644 --- a/Romp/installation/installation-guide.htm +++ b/Romp/installation/installation-guide.htm @@ -26,7 +26,7 @@

          Simple Romp Installer

        • Add romp to the PATH environment variable, so that you can use it in any directory
        • -

          If you specify user-level installation, the files will instead be extracted to %UserProfile%\.romp folder, and the userLevel configuration value will be set to true in the configuration file.

          +

          If you specify user-level installation, the files will instead be extracted to %UserProfile%\.romp folder, and the userLevel configuration value will be set to true in the configuration file.

Note that Romp does not have an uninstaller, so to uninstall, just remove it from your PATH and delete its files.

          diff --git a/Romp/installation/maintenance.htm b/Romp/installation/maintenance.htm index cf4ba688..02795a7f 100644 --- a/Romp/installation/maintenance.htm +++ b/Romp/installation/maintenance.htm @@ -19,7 +19,7 @@

          Local Package Registry

          -

          Romp uses the standardized local package registry specification, which allows Romp and other tools to see which packages are installed on the machine. By default, a Machine-level registry is used, which is stored in %ProgramData%\upack.

          +

          Romp uses the standardized local package registry specification, which allows Romp and other tools to see which packages are installed on the machine. By default, a Machine-level registry is used, which is stored in %ProgramData%\upack.
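As a rough sketch of what a registered package entry might look like (the field names below are recalled from the registry specification and should be treated as illustrative rather than authoritative; the values are placeholders):

```
{
  "group": "hdars",
  "name": "HDars.Web",
  "version": "1.0.2",
  "path": "C:\\HDars\\App",
  "installationDate": "2019-04-02T10:15:00Z",
  "installationReason": "Initial install",
  "installedUsing": "romp",
  "installedBy": "SYSTEM"
}
```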

          Local Data Store

          diff --git a/Romp/overview.htm b/Romp/overview.htm index 2e8116bd..d7d7aed2 100644 --- a/Romp/overview.htm +++ b/Romp/overview.htm @@ -20,20 +20,20 @@

          Getting Started

          - Romp is primarily a command-line tool that lets you create and install packages. It’s really easy to get started: + Romp is primarily a command-line tool that lets you create and install packages. It’s really easy to get started:

          1. Download the Romp Installer
          2. -
          3. Follow the Creating and Publishing an IIS Website tutorial
          4. +
          5. Follow the Creating and Publishing an IIS Website tutorial

          Romp and the Inedo Execution Engine

- Romp uses the Inedo Execution Engine, which was created exclusively for infrastructure automation and orchestration. The Inedo Execution Engine lets you use a combination of OtterScript, PowerShell, Text Templating, Operations, and Functions to accomplish virtually any kind of deployment or infrastructure configuration. + Romp uses the Inedo Execution Engine, which was created exclusively for infrastructure automation and orchestration. The Inedo Execution Engine lets you use a combination of OtterScript, PowerShell, Text Templating, Operations, and Functions to accomplish virtually any kind of deployment or infrastructure configuration.

The easiest way to learn OtterScript is with the Visual Plan Editor. This drag-and-drop editor allows you to switch back-and-forth between visual and text modes to get a feel for the syntax and structure of the language pretty quickly. @@ -45,10 +45,10 @@

          Romp and ProGet

          - Romp packages are extensions of universal packages, which means you can host your packages in a ProGet Universal Feed and use any of the UPack tools or libraries to interact with them. + Romp packages are extensions of universal packages, which means you can host your packages in a ProGet Universal Feed and use any of the UPack tools or libraries to interact with them.

          - Romp uses a package source to securely store a connection to a universal feed. You can also use the --source argument to specify a feed url (see common configuration). + Romp uses a package source to securely store a connection to a universal feed. You can also use the --source argument to specify a feed url (see common configuration).

          diff --git a/Romp/romp-packages/layout/#.htm b/Romp/romp-packages/layout/#.htm index 6d72d7df..18044c49 100644 --- a/Romp/romp-packages/layout/#.htm +++ b/Romp/romp-packages/layout/#.htm @@ -14,11 +14,11 @@ }

          - A Romp Package is a special Universal Package that contains everything Romp will need to deploy an application and/or infrastructure configuration. + A Romp Package is a special Universal Package that contains everything Romp will need to deploy an application and/or infrastructure configuration.

          - Romp Packages at a minimum must include a standard installation and configuration script (install.otter), and a metadata file (upack.json). + Romp Packages at a minimum must include a standard installation and configuration script (install.otter), and a metadata file (upack.json).

          - Aside from the primary configuration script, and the required metadata file, packages can also contain variables, credentials, extensions, Otter rafts, and additional metadata (rompPackage.json). + Aside from the primary configuration script, and the required metadata file, packages can also contain variables, credentials, extensions, Otter rafts, and additional metadata (rompPackage.json).

          A Romp Package has the following layout: @@ -41,10 +41,10 @@

          /uninstall.otter

  • -

    The rafts/ directory contains rafts which are used by the install script. Each subdirectory in this directory is equivalent to a named filesystem raft.

    +

    The rafts/ directory contains rafts which are used by the install script. Each subdirectory in this directory is equivalent to a named filesystem raft.

    The install.otter and uninstall.otter files are standard - OtterScript plans + OtterScript plans that will be run when the package is installed or uninstalled. They may use any of the resources contained in the embedded rafts.

    diff --git a/Romp/romp-packages/layout/credentials.htm b/Romp/romp-packages/layout/credentials.htm index 9a480ff0..a94ebedd 100644 --- a/Romp/romp-packages/layout/credentials.htm +++ b/Romp/romp-packages/layout/credentials.htm @@ -22,7 +22,7 @@

    - Like variables, the required credentials are defined in a packageCredentials.json file in the package root. It is an array of objects that describe credentials. Each object has the following properties. + Like variables, the required credentials are defined in a packageCredentials.json file in the package root. It is an array of objects that describe credentials. Each object has the following properties.
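As a minimal sketch, assuming a built-in UsernamePassword credential type and illustrative property names (the property reference that follows is authoritative), a packageCredentials.json file might look something like:

```
[
  {
    "type": "UsernamePassword",
    "name": "HDarsServiceAccount",
    "description": "Account used to run the HDars Windows service."
  }
]
```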

    @@ -90,7 +90,7 @@

    Example:

    Locally Stored Credentials

    -

Romp can store named credentials in an encrypted manner in its local configuration database, meaning you won't have to enter them at installation time or pass them in as arguments. See the romp store credentials command for more details.

    +

Romp can store named credentials in an encrypted manner in its local configuration database, meaning you won't have to enter them at installation time or pass them in as arguments. See the romp store credentials command for more details.

    Prompting for Credentials

    diff --git a/Romp/romp-packages/layout/metadata.htm b/Romp/romp-packages/layout/metadata.htm index f394366e..453cdfc9 100644 --- a/Romp/romp-packages/layout/metadata.htm +++ b/Romp/romp-packages/layout/metadata.htm @@ -17,7 +17,7 @@ At a minimum, a romp package must contain a upack.json file.

    - See the Universal Package Metadata Specification for a list of upack.json properties. + See the Universal Package Metadata Specification for a list of upack.json properties.
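For example, a minimal upack.json for a hypothetical package (all values here are placeholders) might look like:

```
{
  "group": "hdars",
  "name": "HDars.Web",
  "version": "1.0.2",
  "title": "HDars Web Application",
  "description": "Deployable web content for the HDars application."
}
```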

    A Romp package may also include an additional metadata file, rompPackage.json, that defines package behavior. diff --git a/Romp/romp-packages/layout/package-installation.htm b/Romp/romp-packages/layout/package-installation.htm index c44918e9..ff69c5a2 100644 --- a/Romp/romp-packages/layout/package-installation.htm +++ b/Romp/romp-packages/layout/package-installation.htm @@ -17,7 +17,7 @@ A romp package must contain an install.otter file.

    The install.otter file in the package root contains the OtterScript of the plan that will be executed when - the install command is invoked.

    + the install command is invoked.

    Basic Package File Installation

    diff --git a/Romp/romp-packages/layout/package-uninstallation.htm b/Romp/romp-packages/layout/package-uninstallation.htm index ff892248..00420bf1 100644 --- a/Romp/romp-packages/layout/package-uninstallation.htm +++ b/Romp/romp-packages/layout/package-uninstallation.htm @@ -18,7 +18,7 @@ then the package does not support uninstallation.

    The uninstall.otter file in the package root contains the OtterScript of the plan that will be executed when - the uninstall command is invoked.

    + the uninstall command is invoked.

    Basic Package File Uninstallation

    diff --git a/Romp/romp-packages/layout/rafts.htm b/Romp/romp-packages/layout/rafts.htm index 18df02fc..90ce5417 100644 --- a/Romp/romp-packages/layout/rafts.htm +++ b/Romp/romp-packages/layout/rafts.htm @@ -15,7 +15,7 @@

    A raft is used to store plans, templates, variables, and asset files. If a Romp Package has any dependency on raft data, - the entire raft can be included in the package. Further information about rafts can be found in the Otter documentation. + the entire raft can be included in the package. Further information about rafts can be found in the Otter documentation.

    \ No newline at end of file diff --git a/Romp/romp-packages/layout/variables.htm b/Romp/romp-packages/layout/variables.htm index 1f9a9c46..905f1de7 100644 --- a/Romp/romp-packages/layout/variables.htm +++ b/Romp/romp-packages/layout/variables.htm @@ -78,7 +78,7 @@

    Example: packageVariables.json file
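A rough sketch of such a file, assuming a simple map of variable names to either a literal default value or a definition object; the exact property names are defined by the packageVariables.json schema and may differ from this sketch:

```
{
  "TargetDirectory": "C:\\HDars",
  "ServiceName": {
    "description": "Name of the Windows service to create.",
    "required": true
  }
}
```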

In the event of variables with the same name, package variables specified as command-line arguments take precedence, followed by variables in packageVariables.json, followed by raft variables.

    -

    You should not store sensitive passwords or other secrets in variables. They are not secure, at all. Instead, consider resource credentials.

    +

    You should not store sensitive passwords or other secrets in variables. They are not secure, at all. Instead, consider resource credentials.

    Prompting for Required Variables

    diff --git a/Romp/using/creating-publishing.htm b/Romp/using/creating-publishing.htm index 94174b7c..50e59099 100644 --- a/Romp/using/creating-publishing.htm +++ b/Romp/using/creating-publishing.htm @@ -42,7 +42,7 @@

    Package Structure

    - A romp package is simply a upack with additional layout requirements. + A romp package is simply a upack with additional layout requirements.

    @@ -86,7 +86,7 @@

    Deploying the Package

    What did Romp just do?

- It may seem like magic, but Romp is simply invoking the execution engine to perform the steps outlined in the install.otter plan. To gather more information about what just happened, you can use the jobs subcommand: + It may seem like magic, but Romp is simply invoking the execution engine to perform the steps outlined in the install.otter plan. To gather more information about what just happened, you can use the jobs subcommand:

    PS C:\tmp\romp> romp jobs
    diff --git a/UPack/feed-api/endpoints.htm b/UPack/feed-api/endpoints.htm index de80e0c2..486c45c9 100644 --- a/UPack/feed-api/endpoints.htm +++ b/UPack/feed-api/endpoints.htm @@ -306,7 +306,7 @@

    Latest Version

    @@ -358,7 +358,7 @@

    Content Type

    Using application/zip Content-Type

    You must send the raw bytes of a ZIP file as the body of your request. If the archive doesn't conform to the - universal package format, ProGet will convert it for you, if you supply the required metadata via query string parameters. + universal package format, ProGet will convert it for you, if you supply the required metadata via query string parameters.

If the archive is already in the .upack format, you can specify additional metadata parameters via the querystring. @@ -366,7 +366,7 @@

    Using application/zip Content-Type

    Metadata Parameters

- Any of the following parameter fields may be specified through querystring or content; the format must follow a valid metadata format specification. + Any of the following parameter fields may be specified through querystring or content; the format must follow a valid metadata format specification.

    diff --git a/UPack/overview.htm b/UPack/overview.htm index 26796684..4baf4931 100644 --- a/UPack/overview.htm +++ b/UPack/overview.htm @@ -76,19 +76,19 @@

    UPack Specifications:

    All of the UPack tools adhere to the following set of specifications. The specifications were designed to be easy to understand and implement for your own use cases.

    -

    Universal Package

    +

    Universal Package

    A ZIP archive containing any sort of content, along with a simple, JSON-formatted manifest file describing that content using built-in or additional metadata.

    -

    Virtual Package

    +

    Virtual Package

    A JSON-formatted file that behaves like a universal package, but with contents that are downloaded and assembled at install time.

    -

    Universal Package Registry

    +

    Universal Package Registry

    A JSON-formatted file that describes what packages are installed in a particular context (on a server, in an application), as well as when it was installed, who installed it, and why it was installed. It also provides for a package cache, in case packages need to be reinstalled or audited.

    -

    Universal Feed

    +

    Universal Feed

    An HTTP-based API used to list, download, and publish universal package files to a web-based package manager like ProGet.

    diff --git a/UPack/universal-package-registry/specifications.htm b/UPack/universal-package-registry/specifications.htm index 069505ee..fcce4a99 100644 --- a/UPack/universal-package-registry/specifications.htm +++ b/UPack/universal-package-registry/specifications.htm @@ -24,7 +24,7 @@

- Like with universal packages, you can add any number of files or directories outside of these minimal requirements. However, we strongly recommend that you prefix these files and folders with an underscore (_) so as not to clash with files or folders that are added in a future version of the specification. + Like with universal packages, you can add any number of files or directories outside of these minimal requirements. However, we strongly recommend that you prefix these files and folders with an underscore (_) so as not to clash with files or folders that are added in a future version of the specification.

    Interacting with a Universal Package Registry and the Lock File

    @@ -57,19 +57,19 @@

    Universa

    diff --git a/UPack/universal-package-registry/what-is.htm b/UPack/universal-package-registry/what-is.htm index 68a163ed..6e7c18d1 100644 --- a/UPack/universal-package-registry/what-is.htm +++ b/UPack/universal-package-registry/what-is.htm @@ -9,7 +9,7 @@

    - Universal Feeds and Packages are "lightweight" and, on their own, have very few built-in features. This design has allowed them to be utilized for all sorts of packaging problems such as application delivery, Inedo's product extensions, and even private Bower packages. + Universal Feeds and Packages are "lightweight" and, on their own, have very few built-in features. This design has allowed them to be utilized for all sorts of packaging problems such as application delivery, Inedo's product extensions, and even private Bower packages.

    When developing a package-based solution, one question that often comes up is "what packages are installed or used in this particular context?" That's where the Universal Package Registry comes in; it's a local package registry designed specifically for Universal Packages. @@ -17,7 +17,7 @@

    Background: Other Local Package Registries

- All packages, universal format or otherwise, are essentially zip files with content (i.e. the actual files you want packaged) and a metadata file that describes the content. Once the contents of a package are extracted and installed to a directory, there’s no easy way to know which package those contents came from, or if they came from a package at all. This is where a local package registry comes in. + All packages, universal format or otherwise, are essentially zip files with content (i.e. the actual files you want packaged) and a metadata file that describes the content. Once the contents of a package are extracted and installed to a directory, there’s no easy way to know which package those contents came from, or if they came from a package at all. This is where a local package registry comes in.

    Different package managers use different mechanisms to represent a local registry. Some store the entire package, others store only metadata about the package, and each approach has advantages and disadvantages. diff --git a/UPack/universal-packages/metacontent-guidance/manifest-specification.htm b/UPack/universal-packages/metacontent-guidance/manifest-specification.htm index e2155932..2a118fcb 100644 --- a/UPack/universal-packages/metacontent-guidance/manifest-specification.htm +++ b/UPack/universal-packages/metacontent-guidance/manifest-specification.htm @@ -97,7 +97,7 @@

    Additional D
  • «group»/«package-name»:«version»:«sha-hash»
  • When the version is not specified, the latest is used. If a hash is specified, the client may use it to verify a downloaded package; - see package identification. + see package identification.
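For instance, a upack.json dependencies list using this format might look as follows (the package names and hash are placeholders):

```
{
  "dependencies": [
    "hdars/hdars-common",
    "hdars/hdars-api:2.1.0",
    "hdars/hdars-assets:2.1.0:2d6a2a3e88e0e1f8d9c3b1a4f5e6d7c8b9a0f1e2"
  ]
}
```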

    diff --git a/UPack/universal-packages/package-format.htm b/UPack/universal-packages/package-format.htm index 3346bdfd..bbce4432 100644 --- a/UPack/universal-packages/package-format.htm +++ b/UPack/universal-packages/package-format.htm @@ -14,7 +14,7 @@

    diff --git a/UPack/universal-packages/package-identification.md b/UPack/universal-packages/package-identification.md index 99c8cfaf..dce2ef3e 100644 --- a/UPack/universal-packages/package-identification.md +++ b/UPack/universal-packages/package-identification.md @@ -4,7 +4,7 @@ subtitle: Package Identification sequence: 20 keywords: proget, universal-packages --- -A universal package can be uniquely identified it's group, name, and version. These are different properties in the [manifest file](/support/documentation/upack/universal-packages/metacontent-guidance/manifest-specification). +A universal package can be uniquely identified it's group, name, and version. These are different properties in the [manifest file](/docs/upack/universal-packages/metacontent-guidance/manifest-specification). diff --git a/UPack/universal-packages/validation-security/#.md b/UPack/universal-packages/validation-security/#.md index ef5c82f5..dd2ebc93 100644 --- a/UPack/universal-packages/validation-security/#.md +++ b/UPack/universal-packages/validation-security/#.md @@ -12,7 +12,7 @@ This is where cryptographic hashing comes in. It is a small string of text that Because a package's hash is calculated from the bytes of the package file, it is impossible to store a package's hash inside of that package, since changing the package would change its hash. For this reason, a trusted package source should be used to verify the hash of the package. -However, a package [manifest file](/support/documentation/upack/universal-packages/metacontent-guidance/manifest-specification) may reference other packages' hashes in the `dependencies` and `repackageHistory` properties. +However, a package [manifest file](/docs/upack/universal-packages/metacontent-guidance/manifest-specification) may reference other packages' hashes in the `dependencies` and `repackageHistory` properties. #### Package Hash Format {#package-hash-format} diff --git a/UPack/universal-packages/validation-security/repackaging-auditing.md b/UPack/universal-packages/validation-security/repackaging-auditing.md index 82ad1a5a..be82af36 100644 --- a/UPack/universal-packages/validation-security/repackaging-auditing.md +++ b/UPack/universal-packages/validation-security/repackaging-auditing.md @@ -28,6 +28,6 @@ Given these rules, it seems nearly impossible to create a logical pipeline that This is where repackaging comes in. Repackaging is process where that creates a new package from an existing package using exactly the same content while maintaining the integrity of the original package. This also needs to adhere to a very secure work flow to prevent content tampering, while also generating an audit trail that proves it was done according to the guidelines. -The universal packaging [manifest file](/support/documentation/upack/universal-packages/metacontent-guidance/manifest-specification) allows for storing a chain of repackaging events that allow you to verify each preceding package. [ProGet](/support/documentation/proget/advanced/repackaging) also supports this as a feature, and eliminates the complex and tedious manual steps involved in repackaging, and ensures security. +The universal packaging [manifest file](/docs/upack/universal-packages/metacontent-guidance/manifest-specification) allows for storing a chain of repackaging events that allow you to verify each preceding package. [ProGet](/docs/proget/advanced/repackaging) also supports this as a feature, and eliminates the complex and tedious manual steps involved in repackaging, and ensures security. 
diff --git a/UPack/virtual-packages/package-format/#.htm b/UPack/virtual-packages/package-format/#.htm index fb4230ab..04977291 100644 --- a/UPack/virtual-packages/package-format/#.htm +++ b/UPack/virtual-packages/package-format/#.htm @@ -12,7 +12,7 @@ There's not much to a virtual package; it's just a JSON-based text file with a .vpack file extension.

- The file is formatted just like a universal package manifest file (e.g. upack.json), and
+ The file is formatted just like a universal package manifest file (e.g. upack.json), and follows the same specification, except that there are two additional properties:

group - see package metadata specs
name (required) - see package metadata specs
version (required) - see package metadata specs
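The rows above list the virtual package's identifying metadata, which follows the same rules as the universal package metadata specs. Below is a minimal sketch of writing that metadata to a .vpack file; the values are invented, the "required" reading of the markers above is an assumption, and only these three properties are shown, so consult the specification for the full format.

```python
import json

# Minimal sketch of the identifying metadata in a .vpack file, which is
# formatted like a upack.json manifest. All values are hypothetical.
vpack = {
    "group": "initech/utilities",  # same rules as the package metadata specs
    "name": "console-host",        # required (assumption based on the table above)
    "version": "2.0.0",            # required (assumption based on the table above)
}

with open("console-host.vpack", "w") as f:
    json.dump(vpack, f, indent=2)
```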
@@ -31,13 +31,13 @@ An array of at least one item, containing any of the following:

diff --git a/UPack/virtual-packages/package-format/specifications.htm b/UPack/virtual-packages/package-format/specifications.htm index 1f9519fe..1397c784 100644 --- a/UPack/virtual-packages/package-format/specifications.htm +++ b/UPack/virtual-packages/package-format/specifications.htm @@ -47,7 +47,7 @@

    Virtual Content Object

    @@ -126,7 +126,7 @@

    File Source Object

    diff --git a/UPack/virtual-packages/packages-feeds.htm b/UPack/virtual-packages/packages-feeds.htm index a0ed64eb..2a25e82d 100644 --- a/UPack/virtual-packages/packages-feeds.htm +++ b/UPack/virtual-packages/packages-feeds.htm @@ -15,10 +15,10 @@

diff --git a/Various/ldap/combining-with-built-in.md b/Various/ldap/combining-with-built-in.md index fdc7778f..2853fae0 100644 --- a/Various/ldap/combining-with-built-in.md +++ b/Various/ldap/combining-with-built-in.md @@ -20,13 +20,13 @@ BuildMaster supports the following types of user directories, any of which can b

{.docs}
- **Built-in** - define users and groups in BuildMaster
- **LDAP / Active Directory** - define permissions against existing users or groups in an existing Windows AD forest or generic LDAP directory
- - **Custom** - because user groups are extensible, a custom user directory built against the [Inedo SDK](/support/documentation/inedosdk/overview) is eligible to be referenced in a hybrid user directory
+ - **Custom** - because user groups are extensible, a custom user directory built against the [Inedo SDK](/docs/inedosdk/overview) is eligible to be referenced in a hybrid user directory

## Configuring a Hybrid User Directory {#configuring}

To edit or add a hybrid user directory, go to the Administration section under Security & Authentication, then click Change User Directory (LDAP) and click the **Advanced** button. To edit an existing hybrid user directory, select its name in the user directory table. To add a new hybrid directory, click the **Add Hybrid Directory** button, and then choose any combination of existing non-hybrid user directories to be included in the hybrid configuration.

-Once the directory has been created, [privileges must be assigned to it](/support/documentation/buildmaster/administration/security#adding-permissions-and-restrictions) because privileges for the contained user directories are not considered when resolving permissions.
+Once the directory has been created, [privileges must be assigned to it](/docs/buildmaster/administration/security#adding-permissions-and-restrictions) because privileges for the contained user directories are not considered when resolving permissions.

## Switching to a Hybrid User Directory {#switching}

diff --git a/Various/ldap/integrated-authentication.md b/Various/ldap/integrated-authentication.md index e561cf17..e8e33361 100644 --- a/Various/ldap/integrated-authentication.md +++ b/Various/ldap/integrated-authentication.md @@ -6,7 +6,7 @@ keywords: ldap,active-directory

Integrated Authentication is an option that allows transparent user authentication, removing the web-based login prompt. When this option is enabled and working, whatever account you signed on to Windows with will be used as your login.

-**TROUBLESHOOTING**: [Receiving 401 (Not Authorized), always prompted for credentials.](/support/documentation/various/ldap/troubleshooting#authentication-not-working) {.info}
+**TROUBLESHOOTING**: [Receiving 401 (Not Authorized), always prompted for credentials.](/docs/various/ldap/troubleshooting#authentication-not-working) {.info}

## Web Server Configuration {#web-server-config}

diff --git a/Various/ldap/ldap-active-directory.md b/Various/ldap/ldap-active-directory.md index 41f14046..25df86f3 100644 --- a/Various/ldap/ldap-active-directory.md +++ b/Various/ldap/ldap-active-directory.md @@ -10,7 +10,7 @@ In addition to its own built-in user module, Inedo's Active Directory Domain int

The LDAP/AD integration can be enabled from the Administration page via the *Change User Directory* link under the *Security & Authentication* heading. Selecting the *Active Directory (New)* option will present a page that requires entering credentials for an administrator.
-**TROUBLESHOOTING**: [Active Directory (New) is not in the list.](/support/documentation/various/ldap/troubleshooting#active-directory-new) {.info}
+**TROUBLESHOOTING**: [Active Directory (New) is not in the list.](/docs/various/ldap/troubleshooting#active-directory-new) {.info}

In new installations, you will need to configure privileges first by clicking the *assign privileges* link (or from the Administration page, clicking *Manage Users & Tasks*, selecting the *Tasks* tab, and ensuring the dropdown user directory is set to *Active Directory (New)*).

@@ -23,7 +23,7 @@ On the assign privileges page, select *Add Permission*, and in the *Principles*

Once the switch button is clicked, the web server will restart, causing a brief outage in your Inedo product. Refreshing the page will eventually redirect to the login page. Here, enter the credentials of the administrator configured in the previous step, and you will be logged in as an administrator.

-**TROUBLESHOOTING**: [Log in fails, can't remember the password, etc.](/support/documentation/various/ldap/troubleshooting#locked-out) {.info}
+**TROUBLESHOOTING**: [Log in fails, can't remember the password, etc.](/docs/various/ldap/troubleshooting#locked-out) {.info}

## Virtual Privilege Assignment {#virtual-privilege-assignment}

diff --git a/Various/ldap/troubleshooting.md b/Various/ldap/troubleshooting.md index f3f8ef27..0b9abd02 100644 --- a/Various/ldap/troubleshooting.md +++ b/Various/ldap/troubleshooting.md @@ -55,7 +55,7 @@ The next time you visit after running these commands, there may be stale authent

The *Active Directory (New)* user directory is loaded from the InedoCore extension. Visit the extensions page (Administration › Extensions) to verify that the InedoCore extension is loaded on that page.

-An instance of the user directory should be visible on the *Manage User Directories* page (Administration › Change User Directory › Advanced). If there isn't one (e.g. after a reset), click the *Add User Directory* button and select *Active Directory (New)*. Configure the user directory as detailed in the [Advanced Configuration](/support/documentation/various/ldap/advanced) section, and then click Save. This will restore the user directory on the *Change User Directory* page.
+An instance of the user directory should be visible on the *Manage User Directories* page (Administration › Change User Directory › Advanced). If there isn't one (e.g. after a reset), click the *Add User Directory* button and select *Active Directory (New)*. Configure the user directory as detailed in the [Advanced Configuration](/docs/various/ldap/advanced) section, and then click Save. This will restore the user directory on the *Change User Directory* page.

## Privileges assigned to the Domain Users group not working {#privileges-not-working data-title="Group-based Privileges Not Working"}
One of the following values:

@@ -90,7 +90,7 @@

    Package Source Object

hash - A package hash string used to verify the package file
hash - A hash string with the same format as a package hash string, used to verify the file
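Per the specification excerpts above, both the File Source Object and the Package Source Object carry a `hash` property whose value is a package-hash-formatted string used to verify the file or package they point to. The sketch below is illustrative only; every property name other than `hash`, and every value, is hypothetical, so refer to the virtual package specification for the real object shapes.

```python
import json

# Illustrative only: a content entry whose source is verified by a hash string
# in the package hash format. "source" and all values are hypothetical; only
# the "hash" property name is taken from the specification excerpts above.
content_item = {
    "source": "https://files.example.com/console-host.zip",
    "hash": "9e2459bbccdd36a46e35fdd64c5bd5cb2e7c04b4",
}
print(json.dumps(content_item, indent=2))
```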