diff --git a/content/admin/configuration/configuring-your-enterprise/command-line-utilities.md b/content/admin/configuration/configuring-your-enterprise/command-line-utilities.md index 81e5573b49b9..9d68bbcd30bf 100644 --- a/content/admin/configuration/configuring-your-enterprise/command-line-utilities.md +++ b/content/admin/configuration/configuring-your-enterprise/command-line-utilities.md @@ -237,7 +237,7 @@ ghe-motd ### ghe-nwo -This utility returns a repository's name and owner based on the repository ID. +This utility returns a repository's name and owner based on the repository ID. ```shell ghe-nwo REPOSITORY_ID @@ -511,7 +511,7 @@ ghe-ssl-ca-certificate-install -c CERTIFICATE_PATH ### ghe-ssl-certificate-setup -This utility allows you to update an SSL certificate for {% data variables.location.product_location %}. +This utility allows you to update an SSL certificate for {% data variables.location.product_location %}. For more information about this command or for additional options, use the `-h` flag. @@ -613,16 +613,6 @@ To send a bundle to {% data variables.contact.github_support %} and associate th $ ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -t TICKET_ID' ``` -{% ifversion ghes %} -### ghe-cluster-failover - -Fail over from active cluster nodes to passive cluster nodes. For more information, see "[Initiating a failover to your replica cluster](/enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster)." - -```shell -ghe-cluster-failover -``` -{% endif %} - ### ghe-dpages This utility allows you to manage the distributed {% data variables.product.prodname_pages %} server. 
diff --git a/content/admin/enterprise-management/configuring-clustering/about-clustering.md b/content/admin/enterprise-management/configuring-clustering/about-clustering.md index 030322cf6edc..32e6934664a2 100644 --- a/content/admin/enterprise-management/configuring-clustering/about-clustering.md +++ b/content/admin/enterprise-management/configuring-clustering/about-clustering.md @@ -7,6 +7,12 @@ redirect_from: - /enterprise/admin/clustering/clustering-overview - /enterprise/admin/enterprise-management/about-clustering - /admin/enterprise-management/about-clustering + - /enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster + - /admin/enterprise-management/configuring-high-availability-replication-for-a-cluster + - /admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster + - /enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster + - /admin/enterprise-management/initiating-a-failover-to-your-replica-cluster + - /admin/enterprise-management/configuring-clustering/initiating-a-failover-to-your-replica-cluster versions: ghes: '*' type: overview diff --git a/content/admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster.md b/content/admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster.md deleted file mode 100644 index a4a6b010755c..000000000000 --- a/content/admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster.md +++ /dev/null @@ -1,363 +0,0 @@ ---- -title: Configuring high availability replication for a cluster -intro: 'You can configure a passive replica of your entire {% data variables.product.prodname_ghe_server %} cluster in a different location, allowing your cluster to fail over to redundant nodes.' 
-miniTocMaxHeadingLevel: 3 -redirect_from: - - /enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster - - /admin/enterprise-management/configuring-high-availability-replication-for-a-cluster -versions: - ghes: '*' -type: how_to -topics: - - Clustering - - Enterprise - - High availability - - Infrastructure -shortTitle: Configure HA replication ---- -## About high availability replication for clusters - -You can configure a cluster deployment of {% data variables.product.prodname_ghe_server %} for high availability, where an identical set of passive nodes sync with the nodes in your active cluster. If hardware or software failures affect the datacenter with your active cluster, you can manually fail over to the replica nodes and continue processing user requests, minimizing the impact of the outage. - -In high availability mode, each active node syncs regularly with a corresponding passive node. The passive node runs in standby and does not serve applications or process user requests. - -We recommend configuring high availability as a part of a comprehensive disaster recovery plan for {% data variables.product.prodname_ghe_server %}. We also recommend performing regular backups. For more information, see "[Configuring backups on your appliance](/enterprise/admin/configuration/configuring-backups-on-your-appliance)." - -## Prerequisites - -### Hardware and software - -For each existing node in your active cluster, you'll need to provision a second virtual machine with identical hardware resources. For example, if your cluster has 11 nodes and each node has 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage, you must provision 11 new virtual machines that each have 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage. - -On each new virtual machine, install the same version of {% data variables.product.prodname_ghe_server %} that runs on the nodes in your active cluster. 
You don't need to upload a license or perform any additional configuration. For more information, see "[Setting up a {% data variables.product.prodname_ghe_server %} instance](/enterprise/admin/installation/setting-up-a-github-enterprise-server-instance)." - -{% note %} - -**Note**: The nodes that you intend to use for high availability replication should be standalone {% data variables.product.prodname_ghe_server %} instances. Don't initialize the passive nodes as a second cluster. - -{% endnote %} - -### Network - -You must assign a static IP address to each new node that you provision, and you must configure a load balancer to accept connections and direct them to the nodes in your cluster's front-end tier. - -{% data reusables.enterprise_clustering.network-latency %} For more information about network connectivity between nodes in the passive cluster, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration)." - -## Creating a high availability replica for a cluster - -- [Assigning active nodes to the primary datacenter](#assigning-active-nodes-to-the-primary-datacenter) -- [Adding passive nodes to the cluster configuration file](#adding-passive-nodes-to-the-cluster-configuration-file) -- [Example configuration](#example-configuration) - -### Assigning active nodes to the primary datacenter - -Before you define a secondary datacenter for your passive nodes, ensure that you assign your active nodes to the primary datacenter. - -{% data reusables.enterprise_clustering.ssh-to-a-node %} - -{% data reusables.enterprise_clustering.open-configuration-file %} - -3. Note the name of your cluster's primary datacenter. The `[cluster]` section at the top of the cluster configuration file defines the primary datacenter's name, using the `primary-datacenter` key-value pair. By default, the primary datacenter for your cluster is named `default`. 
- - ```shell - [cluster] - mysql-master = HOSTNAME - redis-master = HOSTNAME - primary-datacenter = default - ``` - - - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`. - -4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node. - - ``` - datacenter = default - ``` - - When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %} - - ```shell - [cluster "HOSTNAME"] - datacenter = default - hostname = HOSTNAME - ipv4 = IP-ADDRESS - ... - ... - ``` - - {% note %} - - **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node. - - ``` - consul-datacenter = primary - ``` - - {% endnote %} - -{% data reusables.enterprise_clustering.apply-configuration %} - -{% data reusables.enterprise_clustering.configuration-finished %} - -After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished assigning your nodes to the cluster's primary datacenter. - -### Adding passive nodes to the cluster configuration file - -To configure high availability, you must define a corresponding passive node for every active node in your cluster. The following instructions create a new cluster configuration that defines both active and passive nodes. 
You will: - -- Create a copy of the active cluster configuration file. -- Edit the copy to define passive nodes that correspond to the active nodes, adding the IP addresses of the new virtual machines that you provisioned. -- Merge the modified copy of the cluster configuration back into your active configuration. -- Apply the new configuration to start replication. - -For an example configuration, see "[Example configuration](#example-configuration)." - -1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)." - - {% note %} - - **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead. - - {% endnote %} - -{% data reusables.enterprise_clustering.ssh-to-a-node %} - -3. Back up your existing cluster configuration. - - ``` - cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup - ``` - -4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`). - - ``` - grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf - ``` - -5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step. - - ``` - git config -f ~/cluster-passive.conf --remove-section cluster - ``` - -6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose. 
- - ```shell - sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf - ``` - -7. Decide on a pattern for the passive nodes' hostnames. - - {% warning %} - - **Warning**: Hostnames for passive nodes must be unique and differ from the hostname for the corresponding active node. - - {% endwarning %} - -8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim. - - ```shell - sudo vim ~/cluster-passive.conf - ``` - -9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %} - - - Change the quoted hostname in the section heading and the value for `hostname` within the section to the passive node's hostname, per the pattern you chose in step 7 above. - - Add a new key named `ipv4`, and set the value to the passive node's static IPv4 address. - - Add a new key-value pair, `replica = enabled`. - - ```shell - [cluster "NEW PASSIVE NODE HOSTNAME"] - ... - hostname = NEW PASSIVE NODE HOSTNAME - ipv4 = NEW PASSIVE NODE IPV4 ADDRESS - replica = enabled - ... - ... - ``` - -10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file. - - ```shell - cat ~/cluster-passive.conf >> /data/user/common/cluster.conf - ``` - -11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passives node that you provisioned to match your existing MySQL and Redis primaries. - - ```shell - git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA-MYSQL-PRIMARY-HOSTNAME - git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA-REDIS-PRIMARY-HOSTNAME - ``` - - {% warning %} - - **Warning**: Review your cluster configuration file before proceeding. 
- - - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover. - - In each section for an active node named [cluster "ACTIVE NODE HOSTNAME"], double-check the following key-value pairs. - - `datacenter` should match the value of `primary-datacenter` in the top-level `[cluster]` section. - - `consul-datacenter` should match the value of `datacenter`, which should be the same as the value for `primary-datacenter` in the top-level `[cluster]` section. - - Ensure that for each active node, the configuration has **one** corresponding section for **one** passive node with the same roles. In each section for a passive node, double-check each key-value pair. - - `datacenter` should match all other passive nodes. - - `consul-datacenter` should match all other passive nodes. - - `hostname` should match the hostname in the section heading. - - `ipv4` should match the node's unique, static IPv4 address. - - `replica` should be configured as `enabled`. - - Take the opportunity to remove sections for offline nodes that are no longer in use. - - To review an example configuration, see "[Example configuration](#example-configuration)." - - {% endwarning %} - -13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %} - - ```shell - ghe-cluster-config-init - ``` - -14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message. - - ```shell - Finished cluster initialization - ``` - -{% data reusables.enterprise_clustering.apply-configuration %} - -{% data reusables.enterprise_clustering.configuration-finished %} - -17. Configure a load balancer that will accept connections from users if you fail over to the passive nodes. 
For more information, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration#configuring-a-load-balancer)." - -You've finished configuring high availability replication for the nodes in your cluster. Each active node begins replicating configuration and data to its corresponding passive node, and you can direct traffic to the load balancer for the secondary datacenter in the event of a failure. For more information about failing over, see "[Initiating a failover to your replica cluster](/enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster)." - -### Example configuration - -The top-level `[cluster]` configuration should look like the following example. - -```shell -[cluster] - mysql-master = HOSTNAME-OF-ACTIVE-MYSQL-MASTER - redis-master = HOSTNAME-OF-ACTIVE-REDIS-MASTER - primary-datacenter = PRIMARY-DATACENTER-NAME - mysql-master-replica = HOSTNAME-OF-PASSIVE-MYSQL-MASTER - redis-master-replica = HOSTNAME-OF-PASSIVE-REDIS-MASTER - mysql-auto-failover = false -... -``` - -The configuration for an active node in your cluster's storage tier should look like the following example. - -```shell -... -[cluster "UNIQUE ACTIVE NODE HOSTNAME"] - datacenter = default - hostname = UNIQUE-ACTIVE-NODE-HOSTNAME - ipv4 = IPV4-ADDRESS - consul-datacenter = default - consul-server = true - git-server = true - pages-server = true - mysql-server = true - elasticsearch-server = true - redis-server = true - memcache-server = true - metrics-server = true - storage-server = true - vpn = IPV4 ADDRESS SET AUTOMATICALLY - uuid = UUID SET AUTOMATICALLY - wireguard-pubkey = PUBLIC KEY SET AUTOMATICALLY -... -``` - -The configuration for the corresponding passive node in the storage tier should look like the following example. - -- Important differences from the corresponding active node are **bold**. 
-- {% data variables.product.prodname_ghe_server %} assigns values for `vpn`, `uuid`, and `wireguard-pubkey` automatically, so you shouldn't define the values for passive nodes that you will initialize. -- The server roles, defined by `*-server` keys, match the corresponding active node. - -```shell -... -[cluster "UNIQUE PASSIVE NODE HOSTNAME"] - replica = enabled - ipv4 = IPV4 ADDRESS OF NEW VM WITH IDENTICAL RESOURCES - datacenter = SECONDARY DATACENTER NAME - hostname = UNIQUE PASSIVE NODE HOSTNAME - consul-datacenter = SECONDARY DATACENTER NAME - consul-server = true - git-server = true - pages-server = true - mysql-server = true - elasticsearch-server = true - redis-server = true - memcache-server = true - metrics-server = true - storage-server = true - vpn = DO NOT DEFINE - uuid = DO NOT DEFINE - wireguard-pubkey = DO NOT DEFINE -... -``` - -## Monitoring replication between active and passive cluster nodes - -Initial replication between the active and passive nodes in your cluster takes time. The amount of time depends on the amount of data to replicate and the activity levels for {% data variables.product.prodname_ghe_server %}. - -You can monitor the progress on any node in the cluster, using command-line tools available via the {% data variables.product.prodname_ghe_server %} administrative shell. For more information about the administrative shell, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh)." - -- Monitor replication of databases: - - ``` - /usr/local/share/enterprise/ghe-cluster-status-mysql - ``` - -- Monitor replication of repository and Gist data: - - ``` - ghe-spokes status - ``` - -- Monitor replication of attachment and LFS data: - - ``` - ghe-storage replication-status - ``` - -- Monitor replication of Pages data: - - ``` - ghe-dpages replication-status - ``` - -You can use `ghe-cluster-status` to review the overall health of your cluster. 
For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)." - -## Reconfiguring high availability replication after a failover - -After you fail over from the cluster's active nodes to the cluster's passive nodes, you can reconfigure high availability replication in two ways. - -### Provisioning and configuring new passive nodes - -After a failover, you can reconfigure high availability in two ways. The method you choose will depend on the reason that you failed over, and the state of the original active nodes. - -1. Provision and configure a new set of passive nodes for each of the new active nodes in your secondary datacenter. - -2. Use the old active nodes as the new passive nodes. - -The process for reconfiguring high availability is identical to the initial configuration of high availability. For more information, see "[Creating a high availability replica for a cluster](#creating-a-high-availability-replica-for-a-cluster)." - - -## Disabling high availability replication for a cluster - -You can stop replication to the passive nodes for your cluster deployment of {% data variables.product.prodname_ghe_server %}. - -{% data reusables.enterprise_clustering.ssh-to-a-node %} - -{% data reusables.enterprise_clustering.open-configuration-file %} - -3. In the top-level `[cluster]` section, delete the `redis-master-replica`, and `mysql-master-replica` key-value pairs. - -4. Delete each section for a passive node. For passive nodes, `replica` is configured as `enabled`. - -{% data reusables.enterprise_clustering.apply-configuration %} - -{% data reusables.enterprise_clustering.configuration-finished %} - -After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished disabling high availability replication. 
diff --git a/content/admin/enterprise-management/configuring-clustering/index.md b/content/admin/enterprise-management/configuring-clustering/index.md index 3738fe80e661..5b8aa341a921 100644 --- a/content/admin/enterprise-management/configuring-clustering/index.md +++ b/content/admin/enterprise-management/configuring-clustering/index.md @@ -20,7 +20,4 @@ children: - /monitoring-cluster-nodes - /replacing-a-cluster-node - /evacuating-a-cluster-node - - /configuring-high-availability-replication-for-a-cluster - - /initiating-a-failover-to-your-replica-cluster --- - diff --git a/content/admin/enterprise-management/configuring-clustering/initiating-a-failover-to-your-replica-cluster.md b/content/admin/enterprise-management/configuring-clustering/initiating-a-failover-to-your-replica-cluster.md deleted file mode 100644 index fe887227a248..000000000000 --- a/content/admin/enterprise-management/configuring-clustering/initiating-a-failover-to-your-replica-cluster.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Initiating a failover to your replica cluster -intro: 'If your {% data variables.product.prodname_ghe_server %} cluster fails, you can fail over to the passive replica .' -redirect_from: - - /enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster - - /admin/enterprise-management/initiating-a-failover-to-your-replica-cluster -versions: - ghes: '*' -type: how_to -topics: - - Clustering - - Enterprise - - High availability - - Infrastructure -shortTitle: Initiate a failover to replica ---- -## About failover to your replica cluster - -In the event of a failure at your primary datacenter, you can fail over to the replica nodes in the secondary datacenter if you configure a passive replica node for each node in your active cluster. - -The time required to fail over depends on how long it takes to manually promote the replica cluster and redirect traffic. 
- -Promoting a replica cluster does not automatically set up replication for the existing cluster. After promoting a replica cluster, you can reconfigure replication from the new active cluster. For more information, see "[Configuring high availability for a cluster](/enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster#reconfiguring-high-availability-replication-after-a-failover)." - -## Prerequisites - -To fail over to passive replica nodes, you must have configured high availability for your cluster. For more information, see "[Configuring high availability for a cluster](/enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster)." - -## Initiating a failover to your replica cluster - -1. SSH into any passive node in the secondary datacenter for your cluster. For more information, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh#enabling-access-to-the-administrative-shell-via-ssh)." - -2. Initialize the failover to the secondary cluster and configure it to act as the active nodes. - - ```shell - ghe-cluster-failover - ``` - -{% data reusables.enterprise_clustering.configuration-finished %} - -3. Update the DNS record to point to the IP address of the load balancer for your passive cluster. Traffic is directed to the replica after the TTL period elapses. - -After {% data variables.product.prodname_ghe_server %} returns you to the prompt and your DNS updates have propagated, you've finished failing over. Users can access {% data variables.product.prodname_ghe_server %} using the usual hostname for your cluster. 
diff --git a/content/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise.md b/content/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise.md index 7ab4228a6581..b705b353b972 100644 --- a/content/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise.md +++ b/content/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise.md @@ -41,13 +41,17 @@ By default, each member must create a personal account on {% data variables.loca If you configure additional SAML access restriction, each member must create and manage a personal account on {% data variables.location.product_location %}. You grant access to your enterprise, and the member can access your enterprise's resources after both signing into the account on {% data variables.location.product_location %} and successfully authenticating with your SAML identity provider (IdP). The member can contribute to other enterprises, organizations, and repositories on {% data variables.location.product_location %} using their personal account. For more information about requiring SAML authentication for all access your enterprise's resources, see "[About SAML for enterprise IAM](/admin/identity-and-access-management/using-saml-for-enterprise-iam/about-saml-for-enterprise-iam)." -#### Considerations for enabling SAML for an enterprise or organization +You can choose between configuring SAML at the enterprise level, which applies the same SAML configuration to all organizations within the enterprise, and configuring SAML separately for individual organizations. -You can configure SAML authentication for every organization in your enterprise, or for individual organizations. 
If you use a standalone organization with {% data variables.product.product_name %}, or if you don't want to use SAML authentication for every organization in your enterprise, you may want to configure SAML for an individual organization instead of your enterprise. For more information, see "[About identity and access management with SAML single sign-on](/organizations/managing-saml-single-sign-on-for-your-organization/about-identity-and-access-management-with-saml-single-sign-on)." +#### Deciding whether to configure SAML at the enterprise level or the organization level -If some groups within your enterprise must use different SAML authentication providers to grant access to your resources on {% data variables.location.product_location %}, you can configure SAML for individual organizations. You can implement SAML for your organizations over time by allowing users to gradually authenticate using SAML. Alternatively, you can require SAML authentication by a certain date. Organization members who do not authenticate using SAML by this date will be removed. +If some groups within your enterprise must use different SAML authentication providers to grant access to your resources on {% data variables.location.product_location %}, you can configure SAML for individual organizations. You can implement SAML for your organizations over time by allowing users to gradually authenticate using SAML. Alternatively, you can require SAML authentication by a certain date. Organization members who do not authenticate using SAML by this date will be removed. For more information about organization-level SAML, see "[About identity and access management with SAML single sign-on](/organizations/managing-saml-single-sign-on-for-your-organization/about-identity-and-access-management-with-saml-single-sign-on)." -If you need to enforce a consistent authentication experience for every organization in your enterprise, you can configure SAML authentication for your enterprise account. 
The SAML configuration for your enterprise overrides any SAML configuration for individual organizations, and organizations cannot override the enterprise configuration. After you configure SAML for your enterprise, organization members must authenticate with SAML before accessing organization resources. SCIM is not available for enterprise accounts. Team synchronization is only available for SAML at the enterprise level if you use Azure AD as an IdP. For more information, see "[Managing team synchronization for organizations in your enterprise](/admin/identity-and-access-management/using-saml-for-enterprise-iam/managing-team-synchronization-for-organizations-in-your-enterprise)." +If you configure SAML at the organization level, members are not required to authenticate via SAML to access internal repositories. For more information about internal repositories, see "[About repositories](/repositories/creating-and-managing-repositories/about-repositories#about-internal-repositories)." + +If you need to protect internal repositories or enforce a consistent authentication experience for every organization in your enterprise, you can configure SAML authentication for your enterprise account instead. The SAML configuration for your enterprise overrides any SAML configuration for individual organizations, and organizations cannot override the enterprise configuration. After you configure SAML for your enterprise, organization members must authenticate with SAML before accessing organization resources, including internal repositories. + +SCIM is not available for enterprise accounts, and team synchronization is only available for SAML at the enterprise level if you use Azure AD as an IdP. For more information, see "[Managing team synchronization for organizations in your enterprise](/admin/identity-and-access-management/using-saml-for-enterprise-iam/managing-team-synchronization-for-organizations-in-your-enterprise)." 
Regardless of the SAML implementation you choose, you cannot add external collaborators to organizations or teams. You can only add external collaborators to individual repositories. diff --git a/content/admin/identity-and-access-management/using-ldap-for-enterprise-iam/using-ldap.md b/content/admin/identity-and-access-management/using-ldap-for-enterprise-iam/using-ldap.md index 29ea2a82d2dd..f308370e5725 100644 --- a/content/admin/identity-and-access-management/using-ldap-for-enterprise-iam/using-ldap.md +++ b/content/admin/identity-and-access-management/using-ldap-for-enterprise-iam/using-ldap.md @@ -131,10 +131,10 @@ After you enable LDAP sync, a synchronization job will run at the specified time - If one or more restricted user groups are configured on the instance, the corresponding LDAP entry is in one of these groups, and _Reactivate suspended users_ is enabled in the Admin Center, unsuspend the user. - If the corresponding LDAP entry includes a `name` attribute, update the user's profile name. - If the corresponding LDAP entry is in the Administrators group, promote the user to site administrator. -- If the corresponding LDAP entry is not in the Administrators group, demote the user to a normal account. +- If the corresponding LDAP entry is not in the Administrators group, demote the user to a normal account, unless the account is suspended. Suspended administrators will not be demoted and will remain listed on the "Site admins" and "Enterprise owners" pages. - If an LDAP User field is defined for emails, synchronize the user's email settings with the LDAP entry. Set the first LDAP `mail` entry as the primary email. -- If an LDAP User field is defined for SSH public keys, synchronize the user's public SSH keys with the LDAP entry. -- If an LDAP User field is defined for GPG keys, synchronize the user's GPG keys with the LDAP entry. +- If an LDAP User field is defined for SSH public keys, synchronize the user's public SSH keys with the LDAP entry. 
+- If an LDAP User field is defined for GPG keys, synchronize the user's GPG keys with the LDAP entry. {% note %} diff --git a/content/admin/identity-and-access-management/using-saml-for-enterprise-iam/about-saml-for-enterprise-iam.md b/content/admin/identity-and-access-management/using-saml-for-enterprise-iam/about-saml-for-enterprise-iam.md index 1b0acbc62c0f..b604d71dbc9c 100644 --- a/content/admin/identity-and-access-management/using-saml-for-enterprise-iam/about-saml-for-enterprise-iam.md +++ b/content/admin/identity-and-access-management/using-saml-for-enterprise-iam/about-saml-for-enterprise-iam.md @@ -75,6 +75,8 @@ Your IdP does not communicate with {% data variables.product.product_name %} aut {% data reusables.enterprise_user_management.external_auth_disables_2fa %} +After you configure SAML, people who use {% data variables.location.product_location %} must use a {% data variables.product.pat_generic %} to authenticate API requests. For more information, see "[Creating a {% data variables.product.pat_generic %}](/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)." + {% data reusables.enterprise_user_management.built-in-authentication %} {% endif %}