[k8s-extension] Release v1.0.1 with fix for AzureML (Azure#4177)
* Create pull.yml

* Update pull.yml

* Update azure-pipelines.yml

* Initial commit of k8s-extension

* Update pipelines file

* Update CODEOWNERS

* Update private preview pipelines

* Remove open service mesh from public release

* Update pipeline files

* Update public extension pipeline

* Change condition variable

* Add version to public preview/private preview

* Update pipelines

* Add different testing based on private branch

* Add annotations to extension model

* Update k8s-custom-pipelines.yml

* Update SDKs with Updated Swagger Spec for 2020-07-01-preview (#13)

* Update sdks with updated swagger spec

* Update version and history rst

* Reorder release history timeline

* Fix ExtensionInstanceForCreate for import

* remove py2 bdist support

* Add custom table formatting

* Remove unnecessary files

* Fix style issues

* Fix branch based on comments

* Update identity piece manually

* Don't handle defaults at the CLI level

* Remove defaults from CLI client

* Check null target namespace with namespace scope

* Update style

* Add cassandra operator and location to model

* Stage Public Version of k8s-extension 0.2.0 for official release (#15)

* Create pull.yml

* Update pull.yml

* Update azure-pipelines.yml

* Initial commit of k8s-extension

* Update pipelines file

* Update CODEOWNERS

* Update private preview pipelines

* Remove open service mesh from public release

* Update pipeline files

* Update public extension pipeline

* Change condition variable

* Add version to public preview/private preview

* Update pipelines

* Add different testing based on private branch

* Add annotations to extension model

* Update k8s-custom-pipelines.yml

* Update SDKs with Updated Swagger Spec for 2020-07-01-preview (#13)

* Update sdks with updated swagger spec

* Update version and history rst

* Reorder release history timeline

* Fix ExtensionInstanceForCreate for import

* remove py2 bdist support

* Add custom table formatting

* Remove unnecessary files

* Fix style issues

* Fix branch based on comments

* Update identity piece manually

* Don't handle defaults at the CLI level

* Remove defaults from CLI client

* Check null target namespace with namespace scope

* Update style

* Add cassandra operator and location to model

Co-authored-by: action@github.com <Action - Fork Sync>

* Remove custom pipelines file

* Update extension description, remove private const

* Update pipeline file

* Disable refs docs

* Update to include better create warning logs and remove update context (#20)

* Update to include better create warning logs and remove update context

* Remove help text for update

* Fix spelling error

* Update message

* Fix k8s-extension conflict with private version

* Fix style errors

* Fix filename

* add customization for microsoft.azureml.kubernetes (#23)

* add customization for microsoft.azureml.kubernetes

* Update release history

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: jonathan-innis <jonathan.innis.ji@gmail.com>

* Add E2E Testing from Separate branch into internal code (#26)

* Add internal e2e testing

* Change to testing folder

* Inference CLI validation for Scoring FE (#24)

* cli validation starter

* added the call to the fe validation function

* nodeport validation not required

* test fix

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* legal warning added (#27)

* Remove deprecated method logger.warn

* Update k8s-custom-pipelines.yml for Azure Pipelines

* Update k8s-custom-pipelines.yml for Azure Pipelines

* Add Azure Defender to E2E testing (#28)

* Add azure defender testing to e2e

* Remove the debug flag

* Add configuration testing

* Fix pipeline failures

* Make test script more intuitive

* Remove parameter from testing

* Fix wrong location for k8s config whl

* Fix pip upgrade issue

* Fix pip install upgrade issue

* Fix pip install issue

* delete resource in testcase (#29)

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Check Provider is Registered with Subscription Before Making Requests (#18)

* Add check for KubernetesConfiguration

* Disable pylint and rename

* Update provider registration link

* Update version

* Remove extra blank line

* Fix bug in import

* only validate scoring fe when inference is enabled (#31)

* only validate scoring fe when inference is enabled

* Fix versioning

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: jonathan-innis <jonathan.innis.ji@gmail.com>

* Provider registration case insensitive

* do not validate against scoring fe if inference is not enabled. (#33)

* do not validate against scoring fe if inference is not enabled.

* add inference enabled scenario

* refine

* increase sleeping time

* fix

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Add OSM as Public Preview Extension (#34)

* Add OSM as public preview extension

* Add osm testing

* Add release train to tests

* Fix failing osm test

* Upgrade pip in integration testing

* Remove unused import

* Fix release train check in update

* Parallelize E2E Testing (#36)

* Add OSM as public preview extension

* Add osm testing

* Update test logic to parallelize

* Fix test success checking

* Parallelize extension testing

* Better error checking logic

* Fix azureml deletion

* Fix private build (#40)

* change amlk8s to amlarc (#42)

Co-authored-by: Yue Yu <yuyu3@microsoft.com>

* Servicebus client model changes (#44)

* Servicebus client model changes

* Fix testing script

* Update history file and pipeline

* Update min cli core version for track 2 updates

* Read SSL cert and key from files (#38)

* first sketch of the change

fixes

removed extra blank lines

changes regarding param renaming

added ssl tests

added more detail to the unit test

additional import

moved pem files out of public folder

fixed import

changed import

changed import

unit tests fix

unit test fix

fixed unit tests

fixed unit test

unit test fix

changed integration test cert and key

* test protected config

* fix test typo

* temporary changes reverted

* fixing tests

* fixed file paths

* removed accidentally added file

* changes according to review comments

* more changes according to review comments

* changes according to review comments

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Upgrade release version

* Liakaz/inference read ssl from file (#47)

* first sketch of the change

fixes

removed extra blank lines

changes regarding param renaming

added ssl tests

added more detail to the unit test

additional import

moved pem files out of public folder

fixed import

changed import

changed import

unit tests fix

unit test fix

fixed unit tests

fixed unit test

unit test fix

changed integration test cert and key

* test protected config

* fix test typo

* temporary changes reverted

* fixing tests

* fixed file paths

* removed accidentally added file

* changes according to review comments

* more changes according to review comments

* changes according to review comments

* fixed decode error

* renamed the experimental param

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Fix style issues (#51)

* Fixed scoring fe related extension param names (#49)

* fixed scoring fe related extension params

* bug fix and style fixes

* variable rename

* fixed the error type

* set cluster to prod by default

* Add distro validation for osm-arc (#50)

* Add distro validation for osm-arc

* fixed indentation

* Fix linting

* Resolve comments

* Add unit test

* fix lint

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Add distro validation for osm-arc (#50)

* Add distro validation for osm-arc

* fixed indentation

* Fix linting

* Resolve comments

* Add unit test

* fix lint

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Add distro validation for osm-arc (#53)

removed release-train logic

* Add Custom Delete Logic for Partners (#54)

* Add custom delete logic

* Fix failing unit tests

* Add warning message when deleting amlarc extension (#55)

* add warning message

* fix indentation

* Update release version

* Remove Pyhelm from OSM customization (#58)

* Fix OSM pyhelm bug

* Debug bootstrap error

* Update release message

* Remove pyhelm dependency

* Update tests to only check extensionconfig creation (#61)

* Update tests to only check extensionconfig creation

* Single set of CRUD for AzureML

* Debug logs for connectedk8s

* Increase open service mesh version number

* Update k8s-extension Models to Track2 (#64)

* Update k8s-extension models to track2

* Add debug for failed cleanup

* Increase version number

* Exit 0 on failed cleanup

* Fix identity in wrong place in model (#66)

* Readd osm-arc distro validation (#62)

* Add distro validation for osm-arc

removed release-train logic

* Readd osm_arc distro validation

* Fix style

* Rm space

* Edit test

* Fixed tests and error logic

* Remove dependency

* Add delete method

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Don't Send Identity Headers If In DF (#67)

* Don't send identity for clusters in dogfood

* Add location to model for identity

* Add identity validation to testing

* Use default extension with identity instead of Cassandra specific (#69)

* Remove the identity check for now

* Add -t for clusterType parameter (#71)

* Adding a flag for AKS to AMLARC migration and set up corresponding FE… (#65)

* Adding a flag for AKS to AMLARC migration and set up corresponding FE helm values

* Remove one extra line

* Adding Scoring FE IS_AKS_MIGRATION check logic for helm values

Co-authored-by: Harry Yang <huayang@microsoft.com>
Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* remove version requirement and auto upgrade minor version check (#72)

* Custom User Confirmation for Partners (#70)

* Custom user confirmation

* Check for disable confirm prompt for confirmation

* Add yes to delete command

* Code cleanup and style fixes (#73)

* Enabled identity by default (#74)

* Increase version

* Fix df check and add unit test (#77)

* Bump extension version

* Pin helm version

* Extensions GA changes into Public Branch (#79)

* Add openservicemesh back

* OpenServiceMesh import

* Update osm with new extension model

* Add back private file

* Add Azure ML to list of private extensions (#16)

* Update k8s-custom-pipelines.yml

* Add Microsoft.PolicyInsights extension (#17)

* Add Policy extension

* Update comment

* Update args

* Fix linting errors

Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Add HISTORY_private file for private preview

* Change versioning scheme

* Update the code for supporting both extensions at once

* Fix style issue

* Remove old consts file

* change the resource tag from managed_by:amlk8s to created_by:amlk8s-e… (#22)

* change the resource tag from managed_by:amlk8s to created_by:amlk8s-extension

* remove the lock when creating resources

* fix lint

* update version and HISTORY_private.rst

* change error message

Co-authored-by: Yue Yu <yuyu3@microsoft.com>

* Update the beta version with upstream

* Update the private history file

* Add upgrade pip to pipeline

* Move pip install within virtualenv

* Merge in k8s-extension/public (0.3.1) (#32)

* delete resource in testcase (#29)

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Check Provider is Registered with Subscription Before Making Requests (#18)

* Add check for KubernetesConfiguration

* Disable pylint and rename

* Update provider registration link

* Update version

* Remove extra blank line

* Fix bug in import

* only validate scoring fe when inference is enabled (#31)

* only validate scoring fe when inference is enabled

* Fix versioning

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: jonathan-innis <jonathan.innis.ji@gmail.com>

* Update private release

Co-authored-by: yuyue9284 <15863499+yuyue9284@users.noreply.github.com>
Co-authored-by: Yue Yu <yuyu3@microsoft.com>

* Release Version 0.4.0-b1 (#37)

* Merge k8s-extension/public into k8s-extension/private

* Update the version

* Fix testing concurrency

* K8s extension/private 0.4.0b2 (#41)

* Fix private build (#40)

* Update version

* Upgrade to v0.5.2

* Fix policy bug

* Increase private version

* Update consts_private.py

* Increase private version

* Increase version with public

* Add flux to private version

* Update models for 2021-05-01-preview

* Add async models to version

* Add no wait to delete and create

* support managed cluster

* Bump version

* Pin helm version

* Add cmd to delete call

* Add force deletion

* add dapr extension (#78)

Signed-off-by: Ji An Liu <jiliu8@microsoft.com>

* Fix failing integration tests

* Adding the GA changes for private branch

* Fix confirm prompt

* Fix update E2E tests

Co-authored-by: jonathan-innis <jonathan.innis.ji@gmail.com>
Co-authored-by: action@github.com <Action - Fork Sync>
Co-authored-by: nreisch <noahreisch4@gmail.com>
Co-authored-by: yuyue9284 <15863499+yuyue9284@users.noreply.github.com>
Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: anagg929 <59664801+anagg929@users.noreply.github.com>
Co-authored-by: Ji'an Liu <jiliu8@microsoft.com>
Co-authored-by: nanthi <nanthi@NANTHI01>

* Fix configuration settings in update

* Only provide confirmation when specifying settings

* Fix style issues

* Cassandra tests with update (#81)

* Add Microsoft.PolicyInsights extension for public preview (#83)

* Add Azure Policy

* Remove custom configuration and update tests

* Yuyu3/fix upgrade public (#85)

* populate configuration protected settings for azureml

bump version && add log

fetch connection string only if configuration protected settings are set

update ssl key

* bump the version

* revert changes on version and HISTORY.rst

* inferenceLoadBalancerHA

Co-authored-by: Yue Yu <yuyu3@microsoft.com>

* Remove Parallel Powershell Jobs (#82)

* Unparallelize tests

* Moved location of pipeline file

* Remove the parallel invoke expression calls

* Add templates to testing

* Remove policy update test from extension E2E (#88)

* feIsNodePort, feIsInternalLoadBalancer (#87)

Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: Jonathan Innis <jonathan.innis.ji@gmail.com>

* Fix history file

* remove unneeded files

Co-authored-by: action@github.com <Action - Fork Sync>
Co-authored-by: yuyue9284 <15863499+yuyue9284@users.noreply.github.com>
Co-authored-by: Yue Yu <yuyu3@microsoft.com>
Co-authored-by: Lia Kazakova <58274127+liakaz@users.noreply.github.com>
Co-authored-by: Niranjan Shankar <nshankar@microsoft.com>
Co-authored-by: jingyizhu99 <83610845+jingyizhu99@users.noreply.github.com>
Co-authored-by: Harry Yang <huaimingyang@hotmail.com>
Co-authored-by: Harry Yang <huayang@microsoft.com>
Co-authored-by: Thomas Stringer <thomas@trstringer.com>
Co-authored-by: NarayanThiru <nanthi@microsoft.com>
Co-authored-by: nreisch <noahreisch4@gmail.com>
Co-authored-by: anagg929 <59664801+anagg929@users.noreply.github.com>
Co-authored-by: Ji'an Liu <jiliu8@microsoft.com>
Co-authored-by: nanthi <nanthi@NANTHI01>
14 people committed Dec 2, 2021
1 parent fa998ce commit e95a41f
Showing 6 changed files with 165 additions and 87 deletions.
3 changes: 3 additions & 0 deletions src/k8s-extension/HISTORY.rst
@@ -2,6 +2,9 @@
Release History
===============
1.0.1
++++++++++++++++++
* microsoft.azureml.kubernetes: Retrieve relay and service bus connection strings when updating the configuration protected settings of the extension.

1.0.0
++++++++++++++++++
2 changes: 1 addition & 1 deletion src/k8s-extension/azext_k8s_extension/custom.py
@@ -203,7 +203,7 @@ def update_k8s_extension(cmd, client, resource_group_name, cluster_name, name, c
# Get the extension class based on the extension type
extension_class = ExtensionFactory(extension_type_lower)

upd_extension = extension_class.Update(auto_upgrade_minor_version, release_train, version,
upd_extension = extension_class.Update(cmd, resource_group_name, cluster_name, auto_upgrade_minor_version, release_train, version,
config_settings, config_protected_settings)

return sdk_no_wait(no_wait, client.begin_update, resource_group_name, cluster_rp, cluster_type,
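
The extra arguments threaded through this call mean each partner extension's Update hook now receives the command context and cluster identity, not just the patch fields. A minimal sketch of the new shape (illustrative only: MyPartnerExtension is a hypothetical class, and the absolute import path is assumed from the relative ..vendored_sdks.models import used later in this diff):

from azext_k8s_extension.vendored_sdks.models import PatchExtension

class MyPartnerExtension:
    def Update(self, cmd, resource_group_name, cluster_name,
               auto_upgrade_minor_version, release_train, version,
               configuration_settings, configuration_protected_settings):
        # cmd, resource_group_name and cluster_name are newly available here, so a
        # partner can look up cluster-scoped resources before building the patch.
        return PatchExtension(auto_upgrade_minor_version=auto_upgrade_minor_version,
                              release_train=release_train,
                              version=version,
                              configuration_settings=configuration_settings,
                              configuration_protected_settings=configuration_protected_settings)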
@@ -20,7 +20,7 @@
import azure.mgmt.storage.models
import azure.mgmt.loganalytics
import azure.mgmt.loganalytics.models
from azure.cli.core.azclierror import InvalidArgumentValueError, MutuallyExclusiveArgumentError
from azure.cli.core.azclierror import AzureResponseError, InvalidArgumentValueError, MutuallyExclusiveArgumentError, ResourceNotFoundError
from azure.cli.core.commands.client_factory import get_mgmt_service_client, get_subscription_id
from azure.mgmt.resource.locks.models import ManagementLockObject
from knack.log import get_logger
@@ -31,7 +31,8 @@
from ..vendored_sdks.models import (
Extension,
Scope,
ScopeCluster
ScopeCluster,
PatchExtension
)

logger = get_logger(__name__)
@@ -45,7 +46,6 @@ def __init__(self):
# constants for configuration settings.
self.DEFAULT_RELEASE_NAMESPACE = 'azureml'
self.RELAY_CONNECTION_STRING_KEY = 'relayserver.relayConnectionString'
self.RELAY_CONNECTION_STRING_DEPRECATED_KEY = 'RelayConnectionString' # for 3rd party deployment, will be deprecated
self.HC_RESOURCE_ID_KEY = 'relayserver.hybridConnectionResourceID'
self.RELAY_HC_NAME_KEY = 'relayserver.hybridConnectionName'
self.SERVICE_BUS_CONNECTION_STRING_KEY = 'servicebus.connectionString'
@@ -86,7 +86,7 @@ def __init__(self):

# reference mapping
self.reference_mapping = {
self.RELAY_SERVER_CONNECTION_STRING: [self.RELAY_CONNECTION_STRING_KEY, self.RELAY_CONNECTION_STRING_DEPRECATED_KEY],
self.RELAY_SERVER_CONNECTION_STRING: [self.RELAY_CONNECTION_STRING_KEY],
self.SERVICE_BUS_CONNECTION_STRING: [self.SERVICE_BUS_CONNECTION_STRING_KEY],
'cluster_name': ['clusterId', 'prometheus.prometheusSpec.externalLabels.cluster_name'],
}
@@ -164,6 +164,82 @@ def Delete(self, cmd, client, resource_group_name, cluster_name, name, cluster_t
"Please try to reinstall device plugins to fix this issue.")
user_confirmation_factory(cmd, yes)

def Update(self, cmd, resource_group_name, cluster_name, auto_upgrade_minor_version, release_train, version, configuration_settings,
configuration_protected_settings):
self.__normalize_config(configuration_settings, configuration_protected_settings)

if len(configuration_protected_settings) > 0:
subscription_id = get_subscription_id(cmd.cli_ctx)

if self.AZURE_LOG_ANALYTICS_CONNECTION_STRING not in configuration_protected_settings:
try:
_, shared_key = _get_log_analytics_ws_connection_string(
cmd, subscription_id, resource_group_name, cluster_name, '', True)
configuration_protected_settings[self.AZURE_LOG_ANALYTICS_CONNECTION_STRING] = shared_key
logger.info("Get log analytics connection string succeeded.")
except azure.core.exceptions.HttpResponseError:
logger.info("Failed to get log analytics connection string.")

if self.RELAY_SERVER_CONNECTION_STRING not in configuration_protected_settings:
try:
relay_connection_string, _, _ = _get_relay_connection_str(
cmd, subscription_id, resource_group_name, cluster_name, '', self.RELAY_HC_AUTH_NAME, True)
configuration_protected_settings[self.RELAY_SERVER_CONNECTION_STRING] = relay_connection_string
logger.info("Get relay connection string succeeded.")
except azure.mgmt.relay.models.ErrorResponseException as ex:
if ex.response.status_code == 404:
raise ResourceNotFoundError("Relay server not found.") from ex
raise AzureResponseError("Failed to get relay connection string.") from ex

if self.SERVICE_BUS_CONNECTION_STRING not in configuration_protected_settings:
try:
service_bus_connection_string, _ = _get_service_bus_connection_string(
cmd, subscription_id, resource_group_name, cluster_name, '', {}, True)
configuration_protected_settings[self.SERVICE_BUS_CONNECTION_STRING] = service_bus_connection_string
logger.info("Get service bus connection string succeeded.")
except azure.core.exceptions.HttpResponseError as ex:
if ex.response.status_code == 404:
raise ResourceNotFoundError("Service bus not found.") from ex
raise AzureResponseError("Failed to get service bus connection string.") from ex

configuration_protected_settings = _dereference(self.reference_mapping, configuration_protected_settings)

if self.sslKeyPemFile in configuration_protected_settings and \
self.sslCertPemFile in configuration_protected_settings:
logger.info(f"Both {self.sslKeyPemFile} and {self.sslCertPemFile} are set, update ssl key.")
self.__set_inference_ssl_from_file(configuration_protected_settings)

return PatchExtension(auto_upgrade_minor_version=auto_upgrade_minor_version,
release_train=release_train,
version=version,
configuration_settings=configuration_settings,
configuration_protected_settings=configuration_protected_settings)

def __normalize_config(self, configuration_settings, configuration_protected_settings):
# inference
isTestCluster = _get_value_from_config_protected_config(
self.inferenceLoadBalancerHA, configuration_settings, configuration_protected_settings)
if isTestCluster is not None:
isTestCluster = str(isTestCluster).lower() == 'false'
if isTestCluster:
configuration_settings['clusterPurpose'] = 'DevTest'
else:
configuration_settings['clusterPurpose'] = 'FastProd'

feIsNodePort = _get_value_from_config_protected_config(
self.privateEndpointNodeport, configuration_settings, configuration_protected_settings)
if feIsNodePort is not None:
feIsNodePort = str(feIsNodePort).lower() == 'true'
configuration_settings['scoringFe.serviceType.nodePort'] = feIsNodePort

feIsInternalLoadBalancer = _get_value_from_config_protected_config(
self.privateEndpointILB, configuration_settings, configuration_protected_settings)
if feIsInternalLoadBalancer is not None:
feIsInternalLoadBalancer = str(feIsInternalLoadBalancer).lower() == 'true'
configuration_settings['scoringFe.serviceType.internalLoadBalancer'] = feIsInternalLoadBalancer
logger.warning(
'Internal load balancer only supported on AKS and AKS Engine Clusters.')

def __validate_config(self, configuration_settings, configuration_protected_settings):
# perform basic validation of the input config
config_keys = configuration_settings.keys()
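
The three branches of __normalize_config above fold the AzureML-specific flags into scoring front-end settings. A rough standalone illustration of the same mapping (a sketch, not repo code; it assumes the constants such as self.inferenceLoadBalancerHA hold the literal key names used below and it ignores protected settings for brevity):

def normalize_inference_settings(settings):
    # Mirror of the mapping in __normalize_config, simplified to plain settings.
    out = dict(settings)
    ha = settings.get('inferenceLoadBalancerHA')
    if ha is not None:
        out['clusterPurpose'] = 'DevTest' if str(ha).lower() == 'false' else 'FastProd'
    node_port = settings.get('privateEndpointNodeport')
    if node_port is not None:
        out['scoringFe.serviceType.nodePort'] = str(node_port).lower() == 'true'
    ilb = settings.get('privateEndpointILB')
    if ilb is not None:
        out['scoringFe.serviceType.internalLoadBalancer'] = str(ilb).lower() == 'true'
    return out

normalize_inference_settings({'inferenceLoadBalancerHA': 'False'})
# -> {'inferenceLoadBalancerHA': 'False', 'clusterPurpose': 'DevTest'}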
@@ -241,24 +317,27 @@ def __validate_scoring_fe_settings(self, configuration_settings, configuration_p
logger.warning(
'Internal load balancer only supported on AKS and AKS Engine Clusters.')

def __set_inference_ssl_from_file(self, configuration_protected_settings):
import base64
feSslCertFile = configuration_protected_settings.get(self.sslCertPemFile)
feSslKeyFile = configuration_protected_settings.get(self.sslKeyPemFile)
with open(feSslCertFile) as f:
cert_data = f.read()
cert_data_bytes = cert_data.encode("ascii")
ssl_cert = base64.b64encode(cert_data_bytes).decode()
configuration_protected_settings['scoringFe.sslCert'] = ssl_cert
with open(feSslKeyFile) as f:
key_data = f.read()
key_data_bytes = key_data.encode("ascii")
ssl_key = base64.b64encode(key_data_bytes).decode()
configuration_protected_settings['scoringFe.sslKey'] = ssl_key

def __set_up_inference_ssl(self, configuration_settings, configuration_protected_settings):
allowInsecureConnections = _get_value_from_config_protected_config(
self.allowInsecureConnections, configuration_settings, configuration_protected_settings)
allowInsecureConnections = str(allowInsecureConnections).lower() == 'true'
if not allowInsecureConnections:
import base64
feSslCertFile = configuration_protected_settings.get(self.sslCertPemFile)
feSslKeyFile = configuration_protected_settings.get(self.sslKeyPemFile)
with open(feSslCertFile) as f:
cert_data = f.read()
cert_data_bytes = cert_data.encode("ascii")
ssl_cert = base64.b64encode(cert_data_bytes).decode()
configuration_protected_settings['scoringFe.sslCert'] = ssl_cert
with open(feSslKeyFile) as f:
key_data = f.read()
key_data_bytes = key_data.encode("ascii")
ssl_key = base64.b64encode(key_data_bytes).decode()
configuration_protected_settings['scoringFe.sslKey'] = ssl_key
self.__set_inference_ssl_from_file(configuration_protected_settings)
else:
logger.warning(
'SSL is not enabled. Allowing insecure connections to the deployed services.')
@@ -335,83 +414,82 @@ def _lock_resource(cmd, lock_scope, lock_level='CanNotDelete'):


def _get_relay_connection_str(
cmd, subscription_id, resource_group_name, cluster_name, cluster_location, auth_rule_name) -> Tuple[str, str, str]:
cmd, subscription_id, resource_group_name, cluster_name, cluster_location, auth_rule_name, get_key_only=False) -> Tuple[str, str, str]:
relay_client: azure.mgmt.relay.RelayManagementClient = get_mgmt_service_client(
cmd.cli_ctx, azure.mgmt.relay.RelayManagementClient)

cluster_id = '{}-{}-{}-relay'.format(cluster_name, subscription_id, resource_group_name)
# create namespace
relay_namespace_name = _get_valid_name(
cluster_id, suffix_len=6, max_len=50)
relay_namespace_params = azure.mgmt.relay.models.RelayNamespace(
location=cluster_location, tags=resource_tag)

async_poller = relay_client.namespaces.create_or_update(
resource_group_name, relay_namespace_name, relay_namespace_params)
while True:
async_poller.result(15)
if async_poller.done():
break

# create hybrid connection
hybrid_connection_name = cluster_name
hybrid_connection_object = relay_client.hybrid_connections.create_or_update(
resource_group_name, relay_namespace_name, hybrid_connection_name, requires_client_authorization=True)

# relay_namespace_ojbect = relay_client.namespaces.get(resource_group_name, relay_namespace_name)
# relay_namespace_resource_id = relay_namespace_ojbect.id
# _lock_resource(cmd, lock_scope=relay_namespace_resource_id)

# create authorization rule
auth_rule_rights = [azure.mgmt.relay.models.AccessRights.manage,
azure.mgmt.relay.models.AccessRights.send, azure.mgmt.relay.models.AccessRights.listen]
relay_client.hybrid_connections.create_or_update_authorization_rule(
resource_group_name, relay_namespace_name, hybrid_connection_name, auth_rule_name, rights=auth_rule_rights)
hc_resource_id = ''
if not get_key_only:
# create namespace
relay_namespace_params = azure.mgmt.relay.models.RelayNamespace(
location=cluster_location, tags=resource_tag)

async_poller = relay_client.namespaces.create_or_update(
resource_group_name, relay_namespace_name, relay_namespace_params)
while True:
async_poller.result(15)
if async_poller.done():
break

# create hybrid connection
hybrid_connection_object = relay_client.hybrid_connections.create_or_update(
resource_group_name, relay_namespace_name, hybrid_connection_name, requires_client_authorization=True)
hc_resource_id = hybrid_connection_object.id

# create authorization rule
auth_rule_rights = [azure.mgmt.relay.models.AccessRights.manage,
azure.mgmt.relay.models.AccessRights.send, azure.mgmt.relay.models.AccessRights.listen]
relay_client.hybrid_connections.create_or_update_authorization_rule(
resource_group_name, relay_namespace_name, hybrid_connection_name, auth_rule_name, rights=auth_rule_rights)

# get connection string
key: azure.mgmt.relay.models.AccessKeys = relay_client.hybrid_connections.list_keys(
resource_group_name, relay_namespace_name, hybrid_connection_name, auth_rule_name)
return f'{key.primary_connection_string}', hybrid_connection_object.id, hybrid_connection_name
return f'{key.primary_connection_string}', hc_resource_id, hybrid_connection_name
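
With the new get_key_only flag (passed as True from the AzureML Update path earlier in this diff), the helper skips creating the relay namespace, hybrid connection and authorization rule and only reads the existing connection string back. A hedged usage sketch (the surrounding variables are assumed to come from the command context, as in the partner extension code):

# Create path: provision relay resources, then return the connection string.
conn_str, hc_resource_id, hc_name = _get_relay_connection_str(
    cmd, subscription_id, resource_group_name, cluster_name,
    cluster_location, auth_rule_name)

# Update path: resources already exist, so only list the keys
# (hc_resource_id comes back as an empty string in this mode).
conn_str, _, _ = _get_relay_connection_str(
    cmd, subscription_id, resource_group_name, cluster_name,
    '', auth_rule_name, get_key_only=True)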


def _get_service_bus_connection_string(cmd, subscription_id, resource_group_name, cluster_name, cluster_location,
topic_sub_mapping: Dict[str, str]) -> Tuple[str, str]:
topic_sub_mapping: Dict[str, str], get_key_only=False) -> Tuple[str, str]:
service_bus_client: azure.mgmt.servicebus.ServiceBusManagementClient = get_mgmt_service_client(
cmd.cli_ctx, azure.mgmt.servicebus.ServiceBusManagementClient)
cluster_id = '{}-{}-{}-service-bus'.format(cluster_name,
subscription_id, resource_group_name)
service_bus_namespace_name = _get_valid_name(
cluster_id, suffix_len=6, max_len=50)

# create namespace
service_bus_sku = azure.mgmt.servicebus.models.SBSku(
name=azure.mgmt.servicebus.models.SkuName.standard.name)
service_bus_namespace = azure.mgmt.servicebus.models.SBNamespace(
location=cluster_location,
sku=service_bus_sku,
tags=resource_tag)
async_poller = service_bus_client.namespaces.begin_create_or_update(
resource_group_name, service_bus_namespace_name, service_bus_namespace)
while True:
async_poller.result(15)
if async_poller.done():
break

for topic_name, service_bus_subscription_name in topic_sub_mapping.items():
# create topic
topic = azure.mgmt.servicebus.models.SBTopic(max_size_in_megabytes=5120, default_message_time_to_live='P60D')
service_bus_client.topics.create_or_update(
resource_group_name, service_bus_namespace_name, topic_name, topic)

# create subscription
sub = azure.mgmt.servicebus.models.SBSubscription(
max_delivery_count=1, default_message_time_to_live='P14D', lock_duration='PT30S')
service_bus_client.subscriptions.create_or_update(
resource_group_name, service_bus_namespace_name, topic_name, service_bus_subscription_name, sub)
if not get_key_only:
# create namespace
service_bus_sku = azure.mgmt.servicebus.models.SBSku(
name=azure.mgmt.servicebus.models.SkuName.standard.name)
service_bus_namespace = azure.mgmt.servicebus.models.SBNamespace(
location=cluster_location,
sku=service_bus_sku,
tags=resource_tag)
async_poller = service_bus_client.namespaces.begin_create_or_update(
resource_group_name, service_bus_namespace_name, service_bus_namespace)
while True:
async_poller.result(15)
if async_poller.done():
break

for topic_name, service_bus_subscription_name in topic_sub_mapping.items():
# create topic
topic = azure.mgmt.servicebus.models.SBTopic(max_size_in_megabytes=5120, default_message_time_to_live='P60D')
service_bus_client.topics.create_or_update(
resource_group_name, service_bus_namespace_name, topic_name, topic)

# create subscription
sub = azure.mgmt.servicebus.models.SBSubscription(
max_delivery_count=1, default_message_time_to_live='P14D', lock_duration='PT30S')
service_bus_client.subscriptions.create_or_update(
resource_group_name, service_bus_namespace_name, topic_name, service_bus_subscription_name, sub)

service_bus_object = service_bus_client.namespaces.get(resource_group_name, service_bus_namespace_name)
service_bus_resource_id = service_bus_object.id
# _lock_resource(cmd, service_bus_resource_id)

# get connection string
auth_rules = service_bus_client.namespaces.list_authorization_rules(
@@ -423,26 +501,23 @@ def _get_service_bus_connection_string(cmd, subscription_id, resource_group_name


def _get_log_analytics_ws_connection_string(
cmd, subscription_id, resource_group_name, cluster_name, cluster_location) -> Tuple[str, str]:
cmd, subscription_id, resource_group_name, cluster_name, cluster_location, get_key_only=False) -> Tuple[str, str]:
log_analytics_ws_client: azure.mgmt.loganalytics.LogAnalyticsManagementClient = get_mgmt_service_client(
cmd.cli_ctx, azure.mgmt.loganalytics.LogAnalyticsManagementClient)

# create workspace
cluster_id = '{}-{}-{}'.format(cluster_name, subscription_id, resource_group_name)
log_analytics_ws_name = _get_valid_name(cluster_id, suffix_len=6, max_len=63)
log_analytics_ws = azure.mgmt.loganalytics.models.Workspace(location=cluster_location, tags=resource_tag)
async_poller = log_analytics_ws_client.workspaces.begin_create_or_update(
resource_group_name, log_analytics_ws_name, log_analytics_ws)
customer_id = ''
# log_analytics_ws_resource_id = ''
while True:
log_analytics_ws_object = async_poller.result(15)
if async_poller.done():
customer_id = log_analytics_ws_object.customer_id
# log_analytics_ws_resource_id = log_analytics_ws_object.id
break

# _lock_resource(cmd, log_analytics_ws_resource_id)
if not get_key_only:
log_analytics_ws = azure.mgmt.loganalytics.models.Workspace(location=cluster_location, tags=resource_tag)
async_poller = log_analytics_ws_client.workspaces.begin_create_or_update(
resource_group_name, log_analytics_ws_name, log_analytics_ws)
while True:
log_analytics_ws_object = async_poller.result(15)
if async_poller.done():
customer_id = log_analytics_ws_object.customer_id
break

# get workspace shared keys
shared_key = log_analytics_ws_client.shared_keys.get_shared_keys(