diff --git a/BuildMaster/administration/agents-and-infrastructure/agents.md b/BuildMaster/administration/agents-and-infrastructure/agents.md index c3fc322d..cfe31195 100644 --- a/BuildMaster/administration/agents-and-infrastructure/agents.md +++ b/BuildMaster/administration/agents-and-infrastructure/agents.md @@ -10,19 +10,19 @@ To deploy releases to your servers, BuildMaster needs to be able to communicate ## Windows Servers {#windows data-title="Windows Servers"} -The [Inedo Agent](/support/documentation/inedoagent/overview) is generally the best way to communicate with a Windows server. It's light-weight, and uses a highly-optimized and resilient protocol built solely for this purpose, and is [quite easy to install](/support/documentation/buildmaster/installation-and-maintenance/installation-guide/agent-installation-guide). +The [Inedo Agent](/docs/inedoagent/overview) is generally the best way to communicate with a Windows server. It's lightweight, uses a highly-optimized and resilient protocol built solely for this purpose, and is [quite easy to install](/docs/buildmaster/installation-and-maintenance/installation-guide/agent-installation-guide). ## Agentless Windows Servers {#agentless data-title="Agentless Windows Servers"} -Alternatively, BuildMaster can use PowerShell Remoting to communicate with Windows servers; however, this is generally slower and less resilient than the Inedo Agent protocol. You can connect using integrated authentication (i.e. whatever account the service is running under), or with a username & password [resource credential](/support/documentation/buildmaster/administration/resource-credentials). See [Enable-PSRemoting](https://technet.microsoft.com/en-us/library/hh849694.aspx) to configures a server to receive remote commands. +Alternatively, BuildMaster can use PowerShell Remoting to communicate with Windows servers; however, this is generally slower and less resilient than the Inedo Agent protocol. 
You can connect using integrated authentication (i.e. whatever account the service is running under), or with a username & password [resource credential](/docs/buildmaster/administration/resource-credentials). See [Enable-PSRemoting](https://technet.microsoft.com/en-us/library/hh849694.aspx) to configure a server to receive remote commands. ## Linux Servers {#linux data-title="Linux Servers"} -To communicate with Linux servers, BuildMaster uses the lightweight, highly-optimized, and resilient protocol already enabled on nearly every Linux box: SSH and SFTP. You can connect with a private key or username & password [resource credential](/support/documentation/buildmaster/administration/resource-credentials). +To communicate with Linux servers, BuildMaster uses the lightweight, highly-optimized, and resilient protocol already enabled on nearly every Linux box: SSH and SFTP. You can connect with a private key or username & password [resource credential](/docs/buildmaster/administration/resource-credentials). ## Local Agents {#local data-title="Local Agents"} -If you're using BuildMaster to interact with the server it's installed on, you can just set it up using a local agent. This uses the same process/identity that the [service](/support/documentation/buildmaster/installation-and-maintenance/architecture/service) is hosted as, and doesn't have very many privileges by default. +If you're using BuildMaster to interact with the server it's installed on, you can just set it up using a local agent. This uses the same process/identity that the [service](/docs/buildmaster/installation-and-maintenance/architecture/service) is hosted as, and doesn't have very many privileges by default. 
## Automatic Inedo Agent Updates {#automatic-updates data-title="Automatic Inedo Agent Updates"} diff --git a/BuildMaster/administration/agents-and-infrastructure/environments.htm b/BuildMaster/administration/agents-and-infrastructure/environments.htm index 26e9f1b2..c8158e12 100644 --- a/BuildMaster/administration/agents-and-infrastructure/environments.htm +++ b/BuildMaster/administration/agents-and-infrastructure/environments.htm @@ -14,7 +14,7 @@
- Environments are also used in security and access controls to permit and restrict users from performing various tasks. For example, you could permit “QA Users” to deploy applications to the Testing environment, while restricting them from deploying to the Production environment. + Environments are also used in security and access controls to permit and restrict users from performing various tasks. For example, you could permit “QA Users” to deploy applications to the Testing environment, while restricting them from deploying to the Production environment.
BuildMaster comes with three built-in environments that represent a very simple deployment pipeline: Integration, Testing, and Production. You can create, rename, and delete environments as needed. @@ -27,7 +27,7 @@
- Configuration variables will also cascade from a parent to a child environment, which means that deployments to a child environment will have access to the parent environment’s variables if they are not defined on the child environment. + Configuration variables will also cascade from a parent to a child environment, which means that deployments to a child environment will have access to the parent environment’s variables if they are not defined on the child environment.
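To illustrate cascading (a sketch; the variable name `$SmtpHost` and the environment names are hypothetical): if `$SmtpHost` is defined on a parent `Production` environment but not on its child `Production-EU`, a deployment targeting `Production-EU` still resolves the parent's value:

```
# Sketch: $SmtpHost is defined only on the parent (Production) environment,
# but still resolves during a deployment to the child (Production-EU) environment.
Log-Information Sending deployment notifications via $SmtpHost;
```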
diff --git a/BuildMaster/administration/agents-and-infrastructure/server-roles.htm b/BuildMaster/administration/agents-and-infrastructure/server-roles.htm index ac33ad15..27c055f7 100644 --- a/BuildMaster/administration/agents-and-infrastructure/server-roles.htm +++ b/BuildMaster/administration/agents-and-infrastructure/server-roles.htm @@ -20,7 +20,7 @@
- You can create, edit, and delete server roles using the web-based interface (Admin > Infrastructure > Roles), or programmatically with the Infrastructure API. + You can create, edit, and delete server roles using the web-based interface (Admin > Infrastructure > Roles), or programmatically with the Infrastructure API.
@@ -32,11 +32,11 @@
- You can specify a role name instead of a list of servers as a stage target in a pipeline stage. When the build is deployed to that stage, the stage target's deployment plan will be run against all servers with that role and in that environment. + You can specify a role name instead of a list of servers as a stage target in a pipeline stage. When the build is deployed to that stage, the stage target's deployment plan will be run against all servers with that role and in that environment.
- You can deploy to servers in a role using the @ServersInRole
function within a loop block.
+ You can deploy to servers in a role using the @ServersInRole
function within a loop block.
foreach server in @ServersInRole(hdars-api-host) @@ -52,7 +52,7 @@Role Dependencies
Role dependencies are used by Otter to model complex server and application configuration through hierarchical sets of simple roles with dependencies. This makes it much easier to share common configuration and define smaller (but related) roles.- While you can define a role's dependencies (i.e. the other roles which are required) on the role overview page, BuildMaster won't behave any differently if a role has dependencies. They are included only so that you can synchronize your infrastructure across products. + While you can define a role's dependencies (i.e. the other roles which are required) on the role overview page, BuildMaster won't behave any differently if a role has dependencies. They are included only so that you can synchronize your infrastructure across products.
Example: Multiple Server Roles
diff --git a/BuildMaster/administration/agents-and-infrastructure/servers.md b/BuildMaster/administration/agents-and-infrastructure/servers.md index 692b42c0..9db97fd5 100644 --- a/BuildMaster/administration/agents-and-infrastructure/servers.md +++ b/BuildMaster/administration/agents-and-infrastructure/servers.md @@ -13,9 +13,9 @@ A server can be physical (bare metal), virtual, or even nonexistent (i.e. one th ### Adding Servers to BuildMaster {#adding-servers data-title="Adding Servers to BuildMaster"} -You can add a server using the web-based user interface (`Servers` > `Add Server`), or programmatically with the [infrastructure API](/support/documentation/buildmaster/reference/api/infrastructure). +You can add a server using the web-based user interface (`Servers` > `Add Server`), or programmatically with the [infrastructure API](/docs/buildmaster/reference/api/infrastructure). -BuildMaster communicates with servers using the [Inedo Agent](/support/documentation/inedoagent/overview) (for Windows) or [SSH/SFTP](https://www.ssh.com/ssh/sftp) (for Windows and Linux). +BuildMaster communicates with servers using the [Inedo Agent](/docs/inedoagent/overview) (for Windows) or [SSH/SFTP](https://www.ssh.com/ssh/sftp) (for Windows and Linux). ## Deploying to Servers {#deploying-to-servers data-title="Deploying to Servers"} @@ -23,11 +23,11 @@ In addition to using [Server Roles](server-roles), there are two ways to deploy ### Pipeline Stage Target {#pipeline-target data-title="Pipeline Stage Target"} -You can specify a list of servers as a stage target in a [pipeline stage](/support/documentation/buildmaster/verification/pipelines#pipeline-stages). When the build is deployed to that stage, the stage target's deployment plan will be run against all those servers. +You can specify a list of servers as a stage target in a [pipeline stage](/docs/buildmaster/verification/pipelines#pipeline-stages). 
When the build is deployed to that stage, the stage target's deployment plan will be run against all those servers. ### Servers and OtterScript {#servers-otter data-title="Servers and OtterScript"} -You can deploy to a server using a [general block](/support/documentation/executionengine/otterscript/statements-and-blocks/general-blocks) +You can deploy to a server using a [general block](/docs/executionengine/otterscript/statements-and-blocks/general-blocks) ``` for server prod-hdars-sv1 @@ -49,4 +49,4 @@ However, if your application has always been deployed to a specific server, and A resource pool is a set of servers that are used as a single, load-balanced resource. You may acquire an unused server from a resource pool, and then release it back into the pool once the needed tasks have been performed. -Visit the [Resource Pools](/support/documentation/executionengine/components/resource-pools) documentation of the Inedo Execution Engine for more information on how to configure a resource pool. +Visit the [Resource Pools](/docs/executionengine/components/resource-pools) documentation of the Inedo Execution Engine for more information on how to configure a resource pool. diff --git a/BuildMaster/administration/applications/import-export.md b/BuildMaster/administration/applications/import-export.md index db972b07..70adacbc 100644 --- a/BuildMaster/administration/applications/import-export.md +++ b/BuildMaster/administration/applications/import-export.md @@ -24,7 +24,7 @@ You will also select how to publish the package, from one of two options: - **Publish to Universal Feed** - a feed on a ProGet instance - **Save to Disk Path** - a local or network path that the BuildMaster service can write to -Before publishing to a feed, you will need to setup an Inedo Product [resource credential](/support/documentation/buildmaster/administration/resource-credentials) with the URL and optionally an API key to your ProGet server. 
+Before publishing to a feed, you will need to set up an Inedo Product [resource credential](/docs/buildmaster/administration/resource-credentials) with the URL and optionally an API key to your ProGet server. ## Importing Applications {#importing data-title="Importing Applications"} @@ -44,7 +44,7 @@ Before importing from a feed, you will need to set up an Inedo Product resource c ## Package File Format {#package-file-formate data-title="Package File Format"} -Applications will be exported as a standard [universal package](/support/documentation/proget/core-concepts/packages) which is essentially a zip file containing application configuration and history, along with a JSON-based manifest file (`upack.json`) that describes the contents of the package. +Applications will be exported as a standard [universal package](/docs/proget/core-concepts/packages) which is essentially a zip file containing application configuration and history, along with a JSON-based manifest file (`upack.json`) that describes the contents of the package. In addition to the standard name and version properties, BuildMaster will include an `_exportDate` and `_bmVersion` property in `upack.json`. The package contents will be a collection of JSON-formatted files: diff --git a/BuildMaster/administration/configuration-variables.htm b/BuildMaster/administration/configuration-variables.htm index 479d5f40..2a218b7a 100644 --- a/BuildMaster/administration/configuration-variables.htm +++ b/BuildMaster/administration/configuration-variables.htm @@ -8,15 +8,15 @@
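As a sketch, a minimal `upack.json` for an exported application might look like the following; all values are illustrative, and only the `name`, `version`, `_exportDate`, and `_bmVersion` properties are taken from the description above:

```
{
  "name": "HDars",
  "version": "1.0.0",
  "_exportDate": "2019-06-01T00:00:00Z",
  "_bmVersion": "6.1.8"
}
```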
- Variables allow for extreme flexibility when modeling your deployment plans, and - configuration variables allow you to have the same plan run differently across different servers and environments. + Variables allow for extreme flexibility when modeling your deployment plans, and + configuration variables allow you to have the same plan run differently across different servers and environments.
You can define configuration variables at different scopes (release, server, environment, global, etc), and then reference those variables in your plan when using - Operations, If/Else Blocks, - Loop Blocks, etc. - You can also create a runtime variable in a plan itself. + Operations, If/Else Blocks, + Loop Blocks, etc. + You can also create a runtime variable in a plan itself.
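For example (a sketch; `$DeployPath` is a hypothetical environment-scoped configuration variable, while `$TargetDirectory` is a runtime variable created in the plan itself):

```
# Sketch: combine an environment-scoped variable with built-in variables.
set $TargetDirectory = $DeployPath\$ApplicationName;

if $EnvironmentName == Production
{
    Log-Information Deploying $ApplicationName to $TargetDirectory;
}
```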
Cascading Variables
@@ -38,7 +38,7 @@Cascading Variables
Resolution Rules
The variable definition that's the "closest" match to the current context is used. This is determined as follows:
The secure option for storing passwords, connection strings, or other values intended to be encrypted is - Resource Credentials, as they require special permissions to manage, + Resource Credentials, as they require special permissions to manage, and (as per Inedo Extensibility guidelines) should never be written to any logs.
@@ -128,7 +128,7 @@- Variables are configurable using the Variables Management API. + Variables are configurable using the Variables Management API.
- Event listeners can be added either as a post deployment step in a pipeline or specific to user via the My Event Listener user drop down. + Event listeners can be added either as a post-deployment step in a pipeline, or per user via the My Event Listener option in the user drop-down.
diff --git a/BuildMaster/administration/resource-credentials.htm b/BuildMaster/administration/resource-credentials.htm index b8b67067..c82847c9 100644 --- a/BuildMaster/administration/resource-credentials.htm +++ b/BuildMaster/administration/resource-credentials.htm @@ -68,7 +68,7 @@ * indicates an encrypted/sensitive field
- Resource credentials are implemented through an extensible feature, and several extensions (JIRA,VSTS, GitHub, etc.) will add types that are necessary for the extension's components. + Resource credentials are implemented through an extensible feature, and several extensions (JIRA, VSTS, GitHub, etc.) will add types that are necessary for the extension's components.
@@ -101,7 +101,7 @@
- You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment. + You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment.
There are two task attributes you can use to control this access: @@ -120,7 +120,7 @@
- For example, consider the Ensure-AppPool
operation:
+ For example, consider the Ensure-AppPool
operation:
@@ -138,7 +138,7 @@
- You can enable variable usage on a credential-by-credential basis. When configured, you can use the $CredentialProperty variable function to extract any property value. For example, + You can enable variable usage on a credential-by-credential basis. When configured, you can use the $CredentialProperty variable function to extract any property value. For example,
set $HDarsUser = $CredentialProperty(UsernamePassword::HDarsUser, UserName); diff --git a/BuildMaster/administration/security/#.md b/BuildMaster/administration/security/#.md index 79526572..49fb2f55 100644 --- a/BuildMaster/administration/security/#.md +++ b/BuildMaster/administration/security/#.md @@ -104,7 +104,7 @@ Task permissions and restrictions are associated with a user directory, which me ::: -Directories are exclusive; meaning you can only use one at a time. For this reason, it's important to make sure you will have sufficient administrator permissions in BuildMaster for the user directory you are switching to. If you do accidentally lock yourself out, don't worry; you can [reset the password](/support/documentation/various/ldap/troubleshooting#locked-out). +Directories are exclusive; meaning you can only use one at a time. For this reason, it's important to make sure you will have sufficient administrator permissions in BuildMaster for the user directory you are switching to. If you do accidentally lock yourself out, don't worry; you can [reset the password](/docs/various/ldap/troubleshooting#locked-out). ### Built-In Directory @@ -112,7 +112,7 @@ BuildMaster's built-in user directory is used by default and initially contains ### Active Directory LDAP -This is common to all our products; check out the [shared documentation](/support/documentation/various/ldap/ldap-active-directory). +This is common to all our products; check out the [shared documentation](/docs/various/ldap/ldap-active-directory). ## Virtual Privilege Assignments @@ -126,4 +126,4 @@ As of BuildMaster v6.1.8, privileges may be granted or restricted to the followi As of BuildMaster v6.1.8, a hybrid user directory may be used to combine multiple user directories together, resolving principals from one or more existing user directories. 
This allows BuildMaster administrators to configure the system such that, for example, user accounts can be defined in BuildMaster with a fallback to Active Directory. -Visit the [Hybrid User Directories](/support/documentation/various/ldap/combining-with-built-in) documentation for more information. \ No newline at end of file +Visit the [Hybrid User Directories](/docs/various/ldap/combining-with-built-in) documentation for more information. diff --git a/BuildMaster/administration/security/api-keys.md b/BuildMaster/administration/security/api-keys.md index 542d588f..190e8a45 100644 --- a/BuildMaster/administration/security/api-keys.md +++ b/BuildMaster/administration/security/api-keys.md @@ -20,13 +20,13 @@ The **Description** field is used for a human-friendly name, and can be used to A key can only be used for the API endpoints that you specify: {.docs} -- [Release & Build Deployment API](/support/documentation/buildmaster/reference/api/release-and-build) -- [Infrastructure Management API](/support/documentation/buildmaster/reference/api/infrastructure) -- [Variables Management API](/support/documentation/buildmaster/reference/api/variables) -- [CI Badge API](/support/documentation/buildmaster/reference/api/ci-badge) +- [Release & Build Deployment API](/docs/buildmaster/reference/api/release-and-build) +- [Infrastructure Management API](/docs/buildmaster/reference/api/infrastructure) +- [Variables Management API](/docs/buildmaster/reference/api/variables) +- [CI Badge API](/docs/buildmaster/reference/api/ci-badge) :::attention {.analogy} -Granting access to the [Native API](/support/documentation/buildmaster/reference/api#native) will effectively allow for full control of the instance. +Granting access to the [Native API](/docs/buildmaster/reference/api#native) will effectively allow for full control of the instance. 
::: ### Logging Options @@ -52,4 +52,4 @@ When a user name is omitted, the user is treated as an administrator (also known #### Windows Integrated Authentication and API Keys {#integrated-windows-auth} -If you've configured [Windows Integrated Authentication](/support/documentation/various/ldap/integrated-authentication), the client will first need to authenticate with an Active Directory account, which may make API-key based authentication redundant. +If you've configured [Windows Integrated Authentication](/docs/various/ldap/integrated-authentication), the client will first need to authenticate with an Active Directory account, which may make API-key based authentication redundant. diff --git a/BuildMaster/administration/security/creating-tasks.md b/BuildMaster/administration/security/creating-tasks.md index 8cf5ae63..b32d3618 100644 --- a/BuildMaster/administration/security/creating-tasks.md +++ b/BuildMaster/administration/security/creating-tasks.md @@ -55,7 +55,7 @@ Note that manage is a proxy attribute for all application-related attributes (pi ### Infrastructure (Manage) -This allows users to create, edit, and delete [infrastructure](/support/documentation/buildmaster/administration/agents-and-infrastructure) (servers, roles, and environments). +This allows users to create, edit, and delete [infrastructure](/docs/buildmaster/administration/agents-and-infrastructure) (servers, roles, and environments). ::: {.attention .best-practice} This can provide transitive access to all connected servers and should be granted with extreme care. @@ -63,11 +63,11 @@ This can provide transitive access to all connected servers and should be grante ### Pipelines (ManageA) -This allows users to create, edit, and delete [pipelines](/support/documentation/buildmaster/verification/pipelines). This provides transitive access to deploy to force packages to environments, as a user can simply edit a pipeline to remove any gates or deployment windows. 
+This allows users to create, edit, and delete [pipelines](/docs/buildmaster/verification/pipelines). This effectively provides transitive access to force packages to any environment, as a user can simply edit a pipeline to remove any gates or deployment windows. ### Plans (View ContentsA & ManageA) -View allows users to open [deployment plans](/support/documentation/buildmaster/deployments/plans) and view the contents. +View allows users to open [deployment plans](/docs/buildmaster/deployments/plans) and view the contents. ::: {.attention .best-practice} You shouldn't put sensitive information (like passwords) in your plans, so there's no good reason to restrict this. @@ -77,7 +77,7 @@ Manage will allow users to create, edit, and delete deployment plans. Note that ### Builds (CreateA, DeployAE, ForceAE, ManageA, View Deployment Logs/DebugAE) -Create allows users to only create new [builds](/support/documentation/buildmaster/builds/overview). +Create allows users to only create new [builds](/docs/buildmaster/builds/overview). Deploy allows users to promote a build to a stage that it is eligible to deploy to (i.e. passed gates, deployment windows, etc). @@ -89,19 +89,19 @@ View Deployment Logs and View Deployment Debug Logs are primarily designed to re ### Releases (ManageA) -This allows users to create [releases](/support/documentation/buildmaster/releases/overview), change variables, and change release status. +This allows users to create [releases](/docs/buildmaster/releases/overview), change variables, and change release status. ### Notes (ManageA) -This allows users to create, edit, or delete [release/build notes](/support/documentation/buildmaster/releases/notes). +This allows users to create, edit, or delete [release/build notes](/docs/buildmaster/releases/notes). 
### Issues (ManageA) -This allows users to create, edit, or delete [built-in BuildMaster issues](/support/documentation/buildmaster/verification/issue-tracking), and configure external issue sources for an application. +This allows users to create, edit, or delete [built-in BuildMaster issues](/docs/buildmaster/verification/issue-tracking), and configure external issue sources for an application. ### Script Assets (ManageA, View ContentsA) -View allows users to open [script assets](/support/documentation/executionengine/components/powershell-and-shell) and view the contents. +View allows users to open [script assets](/docs/executionengine/components/powershell-and-shell) and view the contents. ::: {.attention .best-practice} You shouldn't put sensitive information (like passwords) in your scripts, so there's no _good_ reason to restrict this. @@ -115,7 +115,7 @@ Because deployment plans can already run arbitrary commands, and editing script ### Credentials (Manage and View PasswordsE) -View Passwords will allow users to view sensitive (encrypted) fields on [resource credentials](/support/documentation/buildmaster/administration/resource-credentials). +View Passwords will allow users to view sensitive (encrypted) fields on [resource credentials](/docs/buildmaster/administration/resource-credentials). ::: {.attention .best-practice} This attribute primarily exists to allow some users to view credentials only associated with certain environments. @@ -129,11 +129,11 @@ This could provide transitive access to connected systems, so be very careful wh ### Calendars (ManageA) -This allows users to create, edit, and delete system [calendars](/support/documentation/buildmaster/releases/calendars). +This allows users to create, edit, and delete system [calendars](/docs/buildmaster/releases/calendars). 
### Configuration Files (ViewAE, EditAE, DeployAE) -View allows users to open [configuration files](/support/documentation/buildmaster/deployments/configuration-files) and view the contents for a specific environment. +View allows users to open [configuration files](/docs/buildmaster/deployments/configuration-files) and view the contents for a specific environment. Edit will allow users to create, edit, and delete configuration file instances in a specific environment. @@ -141,7 +141,7 @@ Deploy will allow users to deploy configuration file instances to a specific env ### SQL Scripts (ViewA, ManageA, DeployAE) -View allows users to view [SQL change scripts](/support/documentation/buildmaster/deployments/targets/databases) for an application. +View allows users to view [SQL change scripts](/docs/buildmaster/deployments/targets/databases) for an application. Manage allows users to edit, delete, and change releases for SQL change scripts in an application, but also manage database connections. diff --git a/BuildMaster/administration/value-renderers.htm b/BuildMaster/administration/value-renderers.htm index f74b2a1b..85f3ad25 100644 --- a/BuildMaster/administration/value-renderers.htm +++ b/BuildMaster/administration/value-renderers.htm @@ -233,7 +233,7 @@Examples:
Visit the - Variable Value Renderers documentation + Variable Value Renderers documentation for more information.
diff --git a/BuildMaster/builds/continuous-integration/#.md b/BuildMaster/builds/continuous-integration/#.md index cc47243c..89f9015e 100644 --- a/BuildMaster/builds/continuous-integration/#.md +++ b/BuildMaster/builds/continuous-integration/#.md @@ -9,6 +9,6 @@ Continuous Integration (CI) is the concept of automatically creating a build imm - **[Repository monitors](continuous-integration/repository-monitors)** – supports polling a generic repository for changes - **[Repository hooks](continuous-integration/repository-hooks)** - supports immediate build creation from GitHub or GitLab, but may require support for incoming push notification requests from the internet - - **[Release & Build API](/support/documentation/buildmaster/reference/api/release-and-build)** - the most general method to create builds from a third-party system, whether TeamCity, Jenkins, TFS, or others + - **[Release & Build API](/docs/buildmaster/reference/api/release-and-build)** - the most general method to create builds from a third-party system, whether TeamCity, Jenkins, TFS, or others diff --git a/BuildMaster/builds/continuous-integration/badges.htm b/BuildMaster/builds/continuous-integration/badges.htm index 05ecf6a3..224435fb 100644 --- a/BuildMaster/builds/continuous-integration/badges.htm +++ b/BuildMaster/builds/continuous-integration/badges.htm @@ -37,7 +37,7 @@Link:
The badge specifiers are not limited to applications and may be further filtered based on commit IDs, branches, and more. For more information on how to configure and query a badge, visit - the CI Badge API documentation. + the CI Badge API documentation.
diff --git a/BuildMaster/builds/continuous-integration/source-control/svn.md b/BuildMaster/builds/continuous-integration/source-control/svn.md index e61f99aa..bb424788 100644 --- a/BuildMaster/builds/continuous-integration/source-control/svn.md +++ b/BuildMaster/builds/continuous-integration/source-control/svn.md @@ -58,7 +58,7 @@ Svn-Checkout( ); ``` -A more general approach can be taken by using BuildMaster functions (e.g. `SourcePath: branches/$ReleaseNumber`), configuration variables, or even [release template variable prompts](/support/documentation/buildmaster/releases/templates#components) that enable the branch to be selected at build time. +A more general approach can be taken by using BuildMaster functions (e.g. `SourcePath: branches/$ReleaseNumber`), configuration variables, or even [release template variable prompts](/docs/buildmaster/releases/templates#components) that enable the branch to be selected at build time. ### Branching and Tagging in Subversion @@ -104,13 +104,13 @@ svn.exe copy trunk tags/Release-1.2.5 BuildMaster supports automatically monitoring a Subversion repository for changes, no matter where it is hosted. -To automatically create builds when developers commit to a Subversion repository, simply configure a [Repository Monitor](/support/documentation/buildmaster/builds/continuous-integration/repository-monitors). +To automatically create builds when developers commit to a Subversion repository, simply configure a [Repository Monitor](/docs/buildmaster/builds/continuous-integration/repository-monitors). {.attention .technical} Note: a Subversion repository monitor requires BuildMaster v6.1 or later in combination with v1.1.0 or later of the Subversion extension. 
#### Available Variables -When using a [repository monitor plan](/support/documentation/buildmaster/builds/continuous-integration/repository-monitors#ci-plans), the following variables are available: +When using a [repository monitor plan](/docs/buildmaster/builds/continuous-integration/repository-monitors#ci-plans), the following variables are available: - `$Branch` - the full path of the branch, e.g. `branches/develop-1.2.3` - `$RevisionNumber` - the highest integer revision number of any file within the specified path @@ -119,4 +119,4 @@ When using a [repository monitor plan](/support/documentation/buildmaster/builds BuildMaster's Subversion integration supports username/password authentication over HTTPS. This is the simplest and recommended method to authenticate with a Subversion repository. -While each SVN operation supports supplying a repository URL, username, and password, it is recommended to create a [Resource Credential](/support/documentation/buildmaster/administration/resource-credentials) for Subversion that includes the repository name, username, and password. This is not only more secure, but more convenient as the credentials are stored in one location. \ No newline at end of file +While each SVN operation supports supplying a repository URL, username, and password, it is recommended to create a [Resource Credential](/docs/buildmaster/administration/resource-credentials) for Subversion that includes the repository name, username, and password. This is not only more secure, but more convenient as the credentials are stored in one location. 
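For instance (a sketch; `Svn-Checkout` and its `SourcePath` parameter appear in the example earlier on this page), a repository monitor's CI plan could check out the branch that triggered it:

```
# Sketch: $Branch is supplied by the repository monitor,
# e.g. branches/develop-1.2.3
Svn-Checkout
(
    SourcePath: $Branch
);
```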
diff --git a/BuildMaster/builds/external-systems/jenkins-import.md b/BuildMaster/builds/external-systems/jenkins-import.md index 3295ff86..5882c73d 100644 --- a/BuildMaster/builds/external-systems/jenkins-import.md +++ b/BuildMaster/builds/external-systems/jenkins-import.md @@ -41,10 +41,10 @@ Jenkins::Import-Artifact - Set build number to variable: You can set any variable to the build number value returned by Jenkins. Defaults to `$JenkinsBuildNumber` #### Further Integration -Another recommended usage for the Jenkins extension is to incorporate Jenkins with our [release templates](https://inedo.com/support/documentation/buildmaster/releases/templates). You can set up a template variable as a _dynamic list_ by using your Jenkins connection as the source. Then you'll be able to select the exact build directly from Jenkins in real-time. This saves time and gives you more control over what is being imported. [See the full documentation](https://inedo.com/support/tutorials/buildmaster/jenkins/choosing-specific-artifact-from-jenkins). +Another recommended usage for the Jenkins extension is to incorporate Jenkins with our [release templates](https://inedo.com/docs/buildmaster/releases/templates). You can set up a template variable as a _dynamic list_ by using your Jenkins connection as the source. Then you'll be able to select the exact build directly from Jenkins in real-time. This saves time and gives you more control over what is being imported. [See the full documentation](https://inedo.com/support/tutorials/buildmaster/jenkins/choosing-specific-artifact-from-jenkins). #### Next Steps -After you have imported your artifact from Jenkins, BuildMaster can now [deploy](https://inedo.com/support/documentation/buildmaster/reference/operations/artifacts/deploy-artifact) it to any number of servers or targets.
+After you have imported your artifact from Jenkins, BuildMaster can now [deploy](https://inedo.com/docs/buildmaster/reference/operations/artifacts/deploy-artifact) it to any number of servers or targets. #### Reference back to Jenkins -Once you have queued a build and imported the artifact from Jenkins you can use the `$JenkinsBuildNumber` variable to link directly back to jenkins with a [custom value renderer](https://inedo.com/support/documentation/buildmaster/administration/value-renderers). This is a convenient way you to quickly browse to Jenkins to review the build that was imported. +Once you have queued a build and imported the artifact from Jenkins, you can use the `$JenkinsBuildNumber` variable to link directly back to Jenkins with a [custom value renderer](https://inedo.com/docs/buildmaster/administration/value-renderers). This is a convenient way for you to quickly browse to Jenkins to review the build that was imported. diff --git a/BuildMaster/builds/external-systems/teamcity.md b/BuildMaster/builds/external-systems/teamcity.md index 3ce3fbd5..e5668e1d 100644 --- a/BuildMaster/builds/external-systems/teamcity.md +++ b/BuildMaster/builds/external-systems/teamcity.md @@ -35,12 +35,12 @@ TeamCity::Import-Artifact ``` #### Next Steps -After you have imported your artifact from TeamCity, BuildMaster can now [deploy](/support/documentation/buildmaster/reference/operations/artifacts/deploy-artifact) it to any number of servers or targets. +After you have imported your artifact from TeamCity, BuildMaster can now [deploy](/docs/buildmaster/reference/operations/artifacts/deploy-artifact) it to any number of servers or targets. #### Self-Service TeamCity Builds -With [release templates](/support/documentation/buildmaster/releases/templates), you can build an easy-to-use deployment pipeline using data directly from TeamCity.
You can set up a template variable as a _dynamic list_ and use the TeamCity resource credentials as the source and select the build configurations directly from TeamCity in real-time. This saves time and gives you more control over what is being imported. Refer to the [Choosing a specific TeamCity Build with BuildMaster](/support/tutorials/buildmaster/teamcity/choosing-specific-artifact-from-teamcity) tutorial for more information. +With [release templates](/docs/buildmaster/releases/templates), you can build an easy-to-use deployment pipeline using data directly from TeamCity. You can set up a template variable as a _dynamic list_ and use the TeamCity resource credentials as the source and select the build configurations directly from TeamCity in real-time. This saves time and gives you more control over what is being imported. Refer to the [Choosing a specific TeamCity Build with BuildMaster](/support/tutorials/buildmaster/teamcity/choosing-specific-artifact-from-teamcity) tutorial for more information. #### Capturing TeamCity Build Numbers -Once you have queued a build and imported the artifact from TeamCity you can use the `$TeamCityBuildNumber` variable in conjunction with a [custom value renderer](/support/documentation/buildmaster/administration/value-renderers). This is a convenient way you to quickly link back to the TeamCity server to get an overview of the build that was imported. +Once you have queued a build and imported the artifact from TeamCity, you can use the `$TeamCityBuildNumber` variable in conjunction with a [custom value renderer](/docs/buildmaster/administration/value-renderers). This is a convenient way for you to quickly link back to the TeamCity server to get an overview of the build that was imported.
diff --git a/BuildMaster/builds/external-systems/trigger-via-api.md b/BuildMaster/builds/external-systems/trigger-via-api.md index 54a4ca84..7faf79f0 100644 --- a/BuildMaster/builds/external-systems/trigger-via-api.md +++ b/BuildMaster/builds/external-systems/trigger-via-api.md @@ -8,7 +8,7 @@ sequence: 700 BuildMaster manages the CI/CD process for applications of all types and is designed to integrate with any existing process. Sometimes, this process includes initiating a build through a deployment pipeline from an existing CI tool (e.g. TeamCity, Jenkins) or package manager such as [ProGet](/proget). -Builds may be created in BuildMaster by sending an HTTP request to the [Release & Build Deployment API](/support/documentation/buildmaster/reference/api/release-and-build-deployment). Common use-cases for this include: +Builds may be created in BuildMaster by sending an HTTP request to the [Release & Build Deployment API](/docs/buildmaster/reference/api/release-and-build-deployment). Common use-cases for this include: {.docs} - automatically triggering a BuildMaster build or deployment from an external tool @@ -27,7 +27,7 @@ In order to create a build via the API, it must be enabled. To do so, visit the ### 2. Call the API: -Because the API is called via HTTP, it can be accessed in a variety of ways. For this example, we will use a [ProGet "package-added" webhook](/support/documentation/proget/advanced/webhooks) to initiate a build in BuildMaster. In ProGet, the webhook is configured as follows: +Because the API is called via HTTP, it can be accessed in a variety of ways. For this example, we will use a [ProGet "package-added" webhook](/docs/proget/advanced/webhooks) to initiate a build in BuildMaster. 
In ProGet, the webhook is configured as follows: | Setting | Value | |---|---| diff --git a/BuildMaster/builds/overview.htm b/BuildMaster/builds/overview.htm index 37cff791..8a659fcf 100644 --- a/BuildMaster/builds/overview.htm +++ b/BuildMaster/builds/overview.htm @@ -10,7 +10,7 @@A build in BuildMaster is the fundamental unit of deployment - under the context of a release that advances through a sequence of pipeline + under the context of a release that advances through a sequence of pipeline stages in order to effectively deploy a release. Its components may consist of any or all of the following:
@@ -19,8 +19,8 @@
- You can use Security and Access Controls + You can use Security and Access Controls to determine which users can perform these actions for which environments.
diff --git a/BuildMaster/builds/packaging/artifacts-from-drop-path.md b/BuildMaster/builds/packaging/artifacts-from-drop-path.md index e4c2510d..dc03c4c6 100644 --- a/BuildMaster/builds/packaging/artifacts-from-drop-path.md +++ b/BuildMaster/builds/packaging/artifacts-from-drop-path.md @@ -74,4 +74,4 @@ Create-Artifact MyArtifact -For more information and sample usage visit our [documentation](https://inedo.com/support/documentation/buildmaster/reference/operations/artifacts/create-artifact) \ No newline at end of file +For more information and sample usage visit our [documentation](https://inedo.com/docs/buildmaster/reference/operations/artifacts/create-artifact) diff --git a/BuildMaster/builds/packaging/artifacts.htm b/BuildMaster/builds/packaging/artifacts.htm index a1905a13..1e6d7b98 100644 --- a/BuildMaster/builds/packaging/artifacts.htm +++ b/BuildMaster/builds/packaging/artifacts.htm @@ -38,8 +38,8 @@
The most common method to capture an artifact is to add the Create-Artifact
operation to a
- deployment plan, generally immediately after some form of 'build' operation such as MSBuild::Build-Project
.
- See the operation's documentation for more information.
+ deployment plan, generally immediately after some form of 'build' operation such as MSBuild::Build-Project
.
+ See the operation's documentation for more information.
@@ -55,9 +55,9 @@
- Artifacts are deployed using the Deploy-Artifact
operation in a deployment plan.
+ Artifacts are deployed using the Deploy-Artifact
operation in a deployment plan.
This operation by default is optimized to only transfer files that have been modified to
- decrease deployment times of large artifacts. See the operation's documentation for more information.
+ decrease deployment times of large artifacts. See the operation's documentation for more information.
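Putting the two halves together, a build plan captures an artifact and a deployment plan later deploys it. This is a hedged sketch: the artifact name and directory parameters are assumptions for illustration, so verify them against each operation's documentation.

```
# build plan: capture the build output directory as an artifact
Create-Artifact MyWebApp
(
    From: ~\Output\MyWebApp   # assumed source directory
);

# deployment plan: deploy the captured artifact; by default only
# modified files are transferred
Deploy-Artifact MyWebApp
(
    To: C:\WebApps\MyWebApp   # assumed target directory
);
```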
- Note: Build reports are not persisted by application import/export. To maintain important + Note: Build reports are not persisted by application import/export. To maintain important information across this boundary, use build artifacts instead.
Capture-HtmlDirectoryReport
Capture-HtmlDirectoryReport
An HTML directory report requires a specific format to be displayed correctly within BuildMaster. @@ -48,7 +48,7 @@
Capture-FileReport
Capture-FileReport
A file report is displayed as either plain text or HTML, depending on the format specified in the operation. @@ -61,7 +61,7 @@
Compare-Directories
Compare-Directories
This operation compares two directories on the same server and highlights the following information:
Compare-Artifacts
Compare-Artifacts
This operation works in the same manner as the Compare-Directories operation, with the following caveats:
To capture and associate a custom build report, simply run a third-party tool with the Exec
operation then capture its output with one of the two built-in capturing operations.
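For example, the Exec-then-capture pattern might look like the following sketch. The tool path, arguments, and report parameter names are hypothetical, chosen only to illustrate the sequence; consult the capturing operations' documentation for actual parameters.

```
# run a hypothetical third-party analysis tool...
Exec
(
    FileName: C:\Tools\analyzer.exe,   # hypothetical tool
    Arguments: --output report.html
);

# ...then capture its output file as a build report
Capture-FileReport
(
    Name: Analysis Report,             # assumed parameter names
    FilePath: report.html
);
```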
- Text-templating is the preferred method, and relies on the text templating in the Inedo Execution Engine, along with configuration variables and conditionals for environment-specific settings. + Text-templating is the preferred method, and relies on the text templating in the Inedo Execution Engine, along with configuration variables and conditionals for environment-specific settings.
There are several advantages to using this method over configuration file assets: @@ -55,7 +55,7 @@
All configuration variables, runtime variables, and parameterless variable functions in context are considered when variable replacement occurs before deployment and follows the same resolution rules as configuration variable replacement.
+All configuration variables, runtime variables, and parameterless variable functions in context are considered when variable replacement occurs before deployment and follows the same resolution rules as configuration variable replacement.
Template file variables are treated as runtime variables when replacement occurs, and therefore override any configuration variables. In practice, it is not recommended to have template instance variables override existing configuration variables or variable functions. It is also not recommended to rely on runtime variables created during a deployment.
The BuildMaster Service is a key component of BuildMaster's architecture, and is
- what actually runs your deployment plans using the
+ what actually runs your deployment plans using the
execution engine. It's a standard
Windows Service Application,
and may be managed and configured using the Windows Service Manager or sc.exe
as you see fit.
diff --git a/BuildMaster/installation-and-maintenance/backing-up.md b/BuildMaster/installation-and-maintenance/backing-up.md
index 4be64cc5..dd58c4b3 100644
--- a/BuildMaster/installation-and-maintenance/backing-up.md
+++ b/BuildMaster/installation-and-maintenance/backing-up.md
@@ -9,11 +9,11 @@ BuildMaster should be backed up frequently, and as a .NET- and SQL Server-based
{.docs}
- **BuildMaster Database** - a SQL Server database that contains all of BuildMaster's configuration data
-- **Encryption key** - the value stored in both the web.config and BuildMaster.Service.config that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/support/documentation/buildmaster/administration/resource-credentials)
-- **Shared Configuration** (as of v6.0.7) - the file `%PROGRAMDATA%\Inedo\SharedConfig\BuildMaster.config` contains the encryption key that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/support/documentation/buildmaster/administration/resource-credentials)
+- **Encryption key** - the value stored in both the web.config and BuildMaster.Service.config that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/docs/buildmaster/administration/resource-credentials)
+- **Shared Configuration** (as of v6.0.7) - the file `%PROGRAMDATA%\Inedo\SharedConfig\BuildMaster.config` contains the encryption key that's used to encrypt/decrypt sensitive data, most notably [resource credentials](/docs/buildmaster/administration/resource-credentials)
- **Artifact Library Files** - a path on disk (defined in Artifacts.BasePath setting) that contains all the files for artifacts you created within BuildMaster
-You may also back up your [extensions](/support/documentation/buildmaster/reference/extensions) folder, which is stored in the path defined in the Extensions.ExtensionsPath advanced configuration setting. This will make restoring to a new server as easy as possible, in that you'll just need to copy the backup files to the same location on the new server.
+You may also back up your [extensions](/docs/buildmaster/reference/extensions) folder, which is stored in the path defined in the Extensions.ExtensionsPath advanced configuration setting. This will make restoring to a new server as easy as possible, in that you'll just need to copy the backup files to the same location on the new server.
## Database Backup {#database-backup data-title="Database Backup"}
diff --git a/BuildMaster/installation-and-maintenance/installation-guide/#.htm b/BuildMaster/installation-and-maintenance/installation-guide/#.htm
index cb52ccb1..e579a839 100644
--- a/BuildMaster/installation-and-maintenance/installation-guide/#.htm
+++ b/BuildMaster/installation-and-maintenance/installation-guide/#.htm
@@ -114,7 +114,7 @@
By default, the installer will use the NetworkService account - to run the BuildMaster Service and Web Application. + to run the BuildMaster Service and Web Application. We recommend sticking with this, and changing the account later if you need to.
diff --git a/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm b/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm index d5d984de..2435de71 100644 --- a/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm +++ b/BuildMaster/installation-and-maintenance/installation-guide/agent-installation-guide.htm @@ -16,13 +16,13 @@To facilitate communication between BuildMaster and the Windows servers you want to deploy to and orchestrate, BuildMaster uses a lightweight agent with a highly-optimized and resilient protocol. Installing agents is even easier than - installing BuildMaster, and the Agent Installation Guide will provide step-by-step instructions, as well as provide some detail as to what's + installing BuildMaster, and the Agent Installation Guide will provide step-by-step instructions, as well as provide some detail as to what's happening behind the scenes.
If a variable value string starts with a `
, @(
, %(
, then the value will be evaluated as a literal_expression
- (see formal grammar), which means you'll need
- to treat the value as a proper string literal, and escape $
and other
+ (see formal grammar), which means you'll need
+ to treat the value as a proper string literal, and escape $
and other
characters if you don't want them expanded into variables at runtime... or cause an error when they can't expand.
- There are times when you'll want to attach additional information to a release or build, usually to document something for later auditing purposes, or to share information with another team member. + There are times when you'll want to attach additional information to a release or build, usually to document something for later auditing purposes, or to share information with another team member.
For example, you might want to document some of the following: @@ -37,7 +37,7 @@
Release template variables are configurable at any of 3 levels:
There are 4 types of Template Variables:
@@ -38,12 +38,12 @@To create a release template, in the context of an application, select the Releases tab under the Releases project sub-navigation menu. While the UI editor is the recommended method to create release templates, it is also possible to create them from a JSON object.
Once a release template is created, it can be used in one of two places: either creating a new release from the UI, or the Create Release from Template API endpoint.
+Once a release template is created, it can be used in one of two places: either creating a new release from the UI, or the Create Release from Template API endpoint.
If there is only a single release template for an application, that release template is selected by default when creating a new release. Once a release is assigned a release template, any of the 3 variable properties that contain template variable configurations will then prompt for values as per the Hierarchy section above.
A release may be assigned a different release template at any time while the release is active; note, however, that this can change the variable prompts whose values may be expected by a deployment plan.
- The Issue tracking integration is implemented through an extensible feature, and additional tools may be supported by building extensions. + The Issue tracking integration is implemented through an extensible feature, and additional tools may be supported by building extensions.
- Most issue tracking tool extensions have operations that can be use in a deployment plan to create, query, and modify issues. Each issue tracking tool is different, but there are generally four operations available in each extension: + Most issue tracking tool extensions have operations that can be used in a deployment plan to create, query, and modify issues. Each issue tracking tool is different, but there are generally four operations available in each extension:
- The Inedo Execution Engine uses issue sources to perform background synchronizations with external issue tracking tools. Issue sources act as a filter that lists issues that are relevant for a particular release. Depending on the type of issue tracking tool, each issue source will have different fields that are specific to the tool. + The Inedo Execution Engine uses issue sources to perform background synchronizations with external issue tracking tools. Issue sources act as a filter that lists issues that are relevant for a particular release. Depending on the type of issue tracking tool, each issue source will have different fields that are specific to the tool.
\ No newline at end of file diff --git a/Hedgehog/deliver-deploy/deployment-plans.htm b/Hedgehog/deliver-deploy/deployment-plans.htm index 0b2251b1..9f549a3c 100644 --- a/Hedgehog/deliver-deploy/deployment-plans.htm +++ b/Hedgehog/deliver-deploy/deployment-plans.htm @@ -37,7 +37,7 @@- OtterScript is a Domain-Specific Language that was designed in tandem with the Inedo execution engine to represent configuration plans and orchestration plans in Otter, and deployment plans in Hedgehog and BuildMaster. + OtterScript is a Domain-Specific Language that was designed in tandem with the Inedo execution engine to represent configuration plans and orchestration plans in Otter, and deployment plans in Hedgehog and BuildMaster.
- OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent - configuration plans and - orchestration plans in Otter, - and deployment plans in Hedgehog. + OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent + configuration plans and + orchestration plans in Otter, + and deployment plans in Hedgehog.
You really don't need to learn OtterScript; it's simply the textual representation of a plan, and plans are already fully editable in the drag-and-drop plan editor. @@ -42,12 +42,12 @@
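For readers who have never seen the textual form, a minimal deployment plan in OtterScript might look roughly like the sketch below. The artifact name and target path are assumptions for illustration only; `Log-Information` and `Deploy-Artifact` are standard operations, but verify their parameters in the operations reference.

```
# log a message using built-in release variables...
Log-Information Deploying $ApplicationName $ReleaseNumber...;

# ...then deploy the captured artifact to an assumed target path
Deploy-Artifact MyArtifact
(
    To: C:\Target\MyApplication   # assumed target directory
);
```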
This is the actual OtterScript that will be run in order to deploy the release package. It can reference a project-level plan, a parent project's plan, or a global plan. Plan names should be referenced by simple names, but may also be accessed via - normal raft resolution rules. + normal raft resolution rules.
@@ -169,7 +169,7 @@
- You can define key/value pairs on pipelines and stages. These behave just like configuration variables, in that you can use these variables within deployment plans that are executed through the plan. + You can define key/value pairs on pipelines and stages. These behave just like configuration variables, in that you can use these variables within deployment plans that are executed through the plan.
However, pipeline variables are not actually configuration variables: you can't create multi-scoped variables, or modify them through the variables API. diff --git a/Hedgehog/global-components/executions/#.htm b/Hedgehog/global-components/executions/#.htm index ec8f15e9..2ceb0f7f 100644 --- a/Hedgehog/global-components/executions/#.htm +++ b/Hedgehog/global-components/executions/#.htm @@ -14,7 +14,7 @@
- An execution represents a deployment, orchestration, routine task, synchronization, or any other type of job that is run by the service using the Inedo Execution Engine. + An execution represents a deployment, orchestration, routine task, synchronization, or any other type of job that is run by the service using the Inedo Execution Engine.
All execution records are stored in the database, have scoped logs, and the following properties: diff --git a/Hedgehog/global-components/executions/dispatching-and-running.htm b/Hedgehog/global-components/executions/dispatching-and-running.htm index 30f6a85f..656bc091 100644 --- a/Hedgehog/global-components/executions/dispatching-and-running.htm +++ b/Hedgehog/global-components/executions/dispatching-and-running.htm @@ -9,10 +9,10 @@
- The service uses the ExecutionDispatcher
Task Runner to query the database for executions with a Run State of "Pending" to a Start Date that is before or equal to the current DateTime. For each suitable execution found, a new background task is used to run the execution.
+ The service uses the ExecutionDispatcher
Task Runner to query the database for executions with a Run State of "Pending" and a Start Date that is before or equal to the current DateTime. For each suitable execution found, a new background task is used to run the execution.
- The Service administration page will display the currently running background tasks that were created by the ExecutionDispatcher, and provide links to the appropriate Execution in Progress page. + The Service administration page will display the currently running background tasks that were created by the ExecutionDispatcher, and provide links to the appropriate Execution in Progress page.
When the execution is complete, the background task will terminate; you can view all executions (regardless of state) using the Executions administration page, although it will probably be easier to find the specific execution record using a more specific context (such as package deployment history). diff --git a/Hedgehog/global-components/executions/types.htm b/Hedgehog/global-components/executions/types.htm index 0d2485ae..deca3cd3 100644 --- a/Hedgehog/global-components/executions/types.htm +++ b/Hedgehog/global-components/executions/types.htm @@ -57,7 +57,7 @@
- TBD. This may or may not be needed, because the task runner probably won't fail; it would be dispatched from the web front for diagnostic purposes, similar to how issue import works in buildmaster. + TBD. This may or may not be needed, because the task runner probably won't fail; it would be dispatched from the web front end for diagnostic purposes, similar to how issue import works in BuildMaster.
@@ -71,14 +71,14 @@
install.otter
).
- Once loaded and compiled, the actual plan is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name). The plan is then wrapped using the Server Targeting described below. + Once loaded and compiled, the actual plan is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name). The plan is then wrapped using the Server Targeting described below.
A Pipeline Targeted Execution is dispatched by a Pipeline Stage Execution, and allows for deploying multiple packages in a package set.
- Each package in the set is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name), and that plan is then wrapped using the Server Targeting described below. + Each package in the set is wrapped in a Set Context Statement (with a ContextType of package, and ContextValue of the package name), and that plan is then wrapped using the Server Targeting described below.
@@ -86,18 +86,18 @@
- A single Context Iteration Statement will be created with the Source set to a literal expression of the server names (e.g. @(Server1
, Server2
, Server3
). The Body contain an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.
+ A single Context Iteration Statement will be created with the Source set to a literal expression of the server names (e.g. @(Server1
, Server2
, Server3
). The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.
- For every role targeted, a Set Context Statement (with a ContextType of role
, and ContextValue of the role name) will be created. The Body of those statements will be comprised of a single Context Iteration Statement with the Source set to a literal expression of the servers in that role and environment. The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.
+ For every role targeted, a Set Context Statement (with a ContextType of role
, and ContextValue of the role name) will be created. The Body of those statements will be comprised of a single Context Iteration Statement with the Source set to a literal expression of the servers in that role and environment. The Body contains an Execution Directive Statement with an Asynchronous flag, and the Body of that will contain the actual plan.
If no servers were in any of the role iterations, then a Log-Warning statement will be appended to the wrapped plan, as the actual plan will not execute.
- This execution invokes the OtterScript runtime to execute either the specified plan directly, or a wrapped version of the plan. + This execution invokes the OtterScript runtime to execute either the specified plan directly, or a wrapped version of the plan.
If an error condition occurs before the runtime is invoked, such as a complication error or a pipeline resolution error, the error message will be written to logs, the Execution Status will be set to Error, and the Run State will be set to Completed. diff --git a/Hedgehog/global-components/rafts.htm b/Hedgehog/global-components/rafts.htm index b4d34a60..7ff73af1 100644 --- a/Hedgehog/global-components/rafts.htm +++ b/Hedgehog/global-components/rafts.htm @@ -20,7 +20,7 @@
- Because you likely won't need multiple rafts right away, a raft named "Default" is automatically created when you install Hedgehog. This is a Database raft and is backed up when you do a regular Back-up of Hedgehog. + Because you likely won't need multiple rafts right away, a raft named "Default" is automatically created when you install Hedgehog. This is a Database raft and is backed up when you do a regular Back-up of Hedgehog.
If you only have a single raft configured, and that raft is named "Default", then the ability to filter or select rafts will not be exposed on plan, project, asset, etc. pages. @@ -31,11 +31,11 @@
- When you specify a raft for a project, Hedgehog will always search within the associated raft for content, using the Project Content Name Resolution search. Only the associated raft will be searched, which means that if a parent project uses a different raft, content may never be located. If no raft is specified, then the "Default" raft (if one is named that) is used. + When you specify a raft for a project, Hedgehog will always search within the associated raft for content, using the Project Content Name Resolution search. Only the associated raft will be searched, which means that if a parent project uses a different raft, content may never be located. If no raft is specified, then the "Default" raft (if one is named that) is used.
- Rafts rely on an Extensible Raft Provider to retrieve and store raft data. There are three built-in raft types: + Rafts rely on an Extensible Raft Provider to retrieve and store raft data. There are three built-in raft types:
- Variables persisted within a raft are not currently displayed anywhere in the UI, and are intended to be used for storing default or fallback values for plans stored in portable/reusable rafts. They are the lowest scope, and are only used if a Configuration Variable of the same name does not exist. + Variables persisted within a raft are not currently displayed anywhere in the UI, and are intended to be used for storing default or fallback values for plans stored in portable/reusable rafts. They are the lowest scope, and are only used if a Configuration Variable of the same name does not exist.
- Hedgehog can automatically resolve names using the Project Content Name Resolution system, sometimes it's convenient to specify an explicit, fully-qualified path. You can do this with a combination of raft names and Project Paths. + While Hedgehog can automatically resolve names using the Project Content Name Resolution system, it is sometimes convenient to specify an explicit, fully-qualified path. You can do this with a combination of raft names and Project Paths.
- Resource credentials are implemented through an extensible feature, and several extensions (VSTS, GitHub, etc.) will add types that are necessary for the extension's components. + Resource credentials are implemented through an extensible feature, and several extensions (VSTS, GitHub, etc.) will add types that are necessary for the extension's components.
@@ -100,7 +100,7 @@
- You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment. + You may want to permit or restrict certain users from accessing certain credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment.
There are two task attributes you can use to control this access:
@@ -119,7 +119,7 @@
- For example, consider the Ensure-AppPool operation:
+ For example, consider the Ensure-AppPool operation:
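As a hedged sketch of how such an operation might reference a credential (the pool and credential names here are hypothetical, and the `Credentials` property name is an assumption to be checked against the InedoCore documentation):

```
# hypothetical sketch: Name is the app pool to ensure;
# the Credentials property name is an assumption
IIS::Ensure-AppPool
(
    Name: HDarsAppPool,
    Credentials: HDarsUser
);
```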
diff --git a/Hedgehog/packages/creating.htm b/Hedgehog/packages/creating.htm
index 0096e1b5..b8d01a98 100644
--- a/Hedgehog/packages/creating.htm
+++ b/Hedgehog/packages/creating.htm
@@ -13,16 +13,16 @@
There's not a whole lot to a package: it's just a zip file containing the files you actually want to distribute, as well as a manifest file that describes the package itself. There are a lot of options for creating and publishing packages to a feed, either from a developer's workstation, a build server, or anywhere else:
- You can also use Hedgehog's advanced execution engine to help you create packages by setting up a release and pipeline that imports build artifacts from Jenkins, grabs files from a network drive, or even creates them directly using msbuild.
+ You can also use Hedgehog's advanced execution engine to help you create packages by setting up a release and pipeline that imports build artifacts from Jenkins, grabs files from a network drive, or even creates them directly using msbuild.
@@ -30,7 +30,7 @@
This runs the romp install method, but specifies an option, which means it does not add the package to the local registry.
- Romp for Visual Studio uses a .upack/ folder, located at a project's root, that is laid out exactly like a regular romp package, exception without a package/ subfolder. Instead, MSBuild's output folder is used to create the contents at packing time.
+ Romp for Visual Studio uses a .upack/ folder, located at a project's root, that is laid out exactly like a regular romp package, except without a package/ subfolder. Instead, MSBuild's output folder is used to create the contents at packing time.
The plugin uses an embedded version of Romp (the version is indicated in the notes), but you can configure it to point to another installation on your machine if desired.
diff --git a/Romp/installation/configuration.htm b/Romp/installation/configuration.htm
index cb90d469..a98b1623 100644
--- a/Romp/installation/configuration.htm
+++ b/Romp/installation/configuration.htm
@@ -63,7 +63,7 @@ secure-credentials
- If you specify user-level installation, the files will instead be extracted to %UserProfile%\.romp folder, and the userLevel configuration value will be set to true in the configuration file.
+ If you specify user-level installation, the files will instead be extracted to the %UserProfile%\.romp folder, and the userLevel configuration value will be set to true in the configuration file.
Note that Romp does not have an uninstaller, so to uninstall, just remove it from your path and delete its files.
diff --git a/Romp/installation/maintenance.htm b/Romp/installation/maintenance.htm
index cf4ba688..02795a7f 100644
--- a/Romp/installation/maintenance.htm
+++ b/Romp/installation/maintenance.htm
@@ -19,7 +19,7 @@
- Romp uses the standardized local package registry specification, which allows Romp and other tools to see which packages are installed on the machine. By default, a Machine-level registry is used, which is stored in %ProgramData%\upack.
+ Romp uses the standardized local package registry specification, which allows Romp and other tools to see which packages are installed on the machine. By default, a Machine-level registry is used, which is stored in %ProgramData%\upack.
- Romp is primarily a command-line tool that lets you create and install packages. It’s really easy to get started:
+ Romp is primarily a command-line tool that lets you create and install packages. It’s really easy to get started:
- Romp uses the Inedo Execution Engine, which was created exclusively for infrastructure automation and orchestration. The Inedo Execution Engine lets you uses a combination of OtterScript, PowerShell, Text Templating, Operations, and Functions to accomplish virtually any kind of deployment or infrastructure configuration.
+ Romp uses the Inedo Execution Engine, which was created exclusively for infrastructure automation and orchestration. The Inedo Execution Engine lets you use a combination of OtterScript, PowerShell, Text Templating, Operations, and Functions to accomplish virtually any kind of deployment or infrastructure configuration.
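For instance, a trivial OtterScript fragment mixing a runtime variable with an operation might look like the following sketch (the variable name and value are illustrative; Log-Information is a built-in InedoCore operation):

```
# set a runtime variable, then pass it to an operation
set $AppName = HDars;
Log-Information Deploying $AppName...;
```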
The easiest way to learn OtterScript is with the Visual Plan Editor. This drag-and-drop editor allows you to switch back-and-forth between visual and text modes to get a feel for the syntax and structure of the language pretty quickly.
@@ -45,10 +45,10 @@
- Romp packages are extensions of universal packages, which means you can host your packages in a ProGet Universal Feed and use any of the UPack tools or libraries to interact with them.
+ Romp packages are extensions of universal packages, which means you can host your packages in a ProGet Universal Feed and use any of the UPack tools or libraries to interact with them.
- Romp uses a package source to securely store a connection to a universal feed. You can also use the --source argument to specify a feed url (see common configuration).
+ Romp uses a package source to securely store a connection to a universal feed. You can also use the --source argument to specify a feed URL (see common configuration).
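As an illustrative (untested) command line with a hypothetical package name and feed URL; see the common configuration page for the exact syntax:

```
romp install HDars-1.0.0.upack --source=https://proget.example.com/upack/MyFeed
```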
- A Romp Package is a special Universal Package that contains everything Romp will need to deploy an application and/or infrastructure configuration.
+ A Romp Package is a special Universal Package that contains everything Romp will need to deploy an application and/or infrastructure configuration.
- Romp Packages at a minimum must include a standard installation and configuration script (install.otter), and a metadata file (upack.json).
+ Romp Packages at a minimum must include a standard installation and configuration script (install.otter), and a metadata file (upack.json).
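A minimal upack.json needs only a name and version per the universal package manifest specification (the values here are hypothetical):

```json
{
  "name": "HDars",
  "version": "1.0.0"
}
```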
- Aside from the primary configuration script, and the required metadata file, packages can also contain variables, credentials, extensions, Otter rafts, and additional metadata (rompPackage.json).
+ Aside from the primary configuration script and the required metadata file, packages can also contain variables, credentials, extensions, Otter rafts, and additional metadata (rompPackage.json).
- The rafts/ directory contains rafts which are used by the install script. Each subdirectory in this directory is equivalent to a named filesystem raft.
+ The rafts/ directory contains rafts which are used by the install script. Each subdirectory in this directory is equivalent to a named filesystem raft.
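As a sketch, a package containing one named raft might be laid out like this (the raft and plan names are hypothetical):

```
rafts/
  Scripts/              <- equivalent to a filesystem raft named "Scripts"
    backup-site.otter   <- a plan the install script can reference
```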
- The install.otter and uninstall.otter files are standard OtterScript plans that will be run when the package is installed or uninstalled. They may use any of the resources contained in the embedded rafts.
+ The install.otter and uninstall.otter files are standard OtterScript plans that will be run when the package is installed or uninstalled. They may use any of the resources contained in the embedded rafts.
diff --git a/Romp/romp-packages/layout/credentials.htm b/Romp/romp-packages/layout/credentials.htm
index 9a480ff0..a94ebedd 100644
--- a/Romp/romp-packages/layout/credentials.htm
+++ b/Romp/romp-packages/layout/credentials.htm
@@ -22,7 +22,7 @@
- Like variables, the required credentials are defined in a packageCredentials.json file in the package root. It is an array of objects that describe credentials. Each object has the following properties.
+ Like variables, the required credentials are defined in a packageCredentials.json file in the package root. It is an array of objects that describe credentials. Each object has the following properties.
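As a loose sketch only — the authoritative property list is documented with this section, and the property names shown here are assumptions:

```json
[
  {
    "type": "UsernamePassword",
    "name": "HDarsUser",
    "description": "account the application runs as"
  }
]
```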