---
title: Enterprise PKS Release Notes
owner: PKS
topictype: releasenotes
---
<strong><%= modified_date %></strong>
This topic contains release notes for <%= vars.product_full %> v1.5.0.
## <a id="v1.5.0"></a>v1.5.0
**Release Date**: August 20, 2019
### <a id="v1.5.0-features"></a>Features
New features and changes in this release:
* Cluster administrators and managers can use the PKS CLI command `pks cluster CLUSTER-NAME --details` to view details about the named cluster, including Kubernetes nodes and NSX-T network details. See [Viewing Cluster Details](./view-cluster-details.html).
* Cluster administrators can define a network profile to use a third-party load balancer for Kubernetes services of type LoadBalancer.
See [Load Balancer Configuration](./network-profiles-ncp-lb.html) for details.
* Cluster administrators can define a network profile to use a third-party ingress controller for Pod ingress traffic.
See [Ingress Controller Configuration](./network-profiles-ncp-ingress.html) for details.
* Cluster administrators can define a network profile to configure section markers for explicit distributed firewall rule placement.
See [DFW Section Marking](./network-profiles-ncp-dfw.html) for details.
* Cluster administrators can define a network profile to configure NCP logging.
See [Configure NCP Logging](./network-profiles-ncp-logerr.html) for details.
* Cluster administrators can define a network profile to configure DNS lookup of the IP addresses for the Kubernetes API load balancer and the ingress controller. See [Configure DNS lookup Kubernetes API and Ingress Controllers](./network-profiles-dns.html) for details.
* Cluster administrators can provision a Windows worker-based Kubernetes cluster on vSphere with Flannel.
Windows worker-based clusters in <%= vars.product_short %> v1.5 currently do not support NSX-T integration.
For more information, see [Configuring Windows Worker-Based Clusters (Beta)](windows-pks-beta.html)
and [Deploying and Exposing Windows Workloads (Beta)](deploy-windows-workloads-beta.html).
* Operators can set the lifetime for the refresh and access tokens for Kubernetes clusters.
You can configure the token lifetimes to meet your organization's security and compliance needs.
For instructions about configuring the access and refresh token for your Kubernetes clusters,
see the [UAA](./installing-pks-vsphere.html#uaa) section in the _Installing_ topic for your IaaS.
* Operators can configure prefixes for OpenID Connect (OIDC) users
and groups to avoid name conflicts with existing Kubernetes system users.
Pivotal recommends adding prefixes to ensure OIDC users and groups
do not gain unintended privileges on clusters. For instructions about configuring OIDC prefixes,
see the [Configure OpenID Connect](./installing-pks-vsphere.html#configure-oidc) section
in the _Installing_ topic for your IaaS.
* Operators can configure an external SAML identity provider for user authentication and authorization.
For instructions about configuring an external SAML identity provider,
see the [Configure SAML as an Identity Provider](./installing-pks-vsphere.html#configure-saml)
section in the _Installing_ topic for your IaaS.
* Operators can upgrade Kubernetes clusters separately from the <%= vars.product_tile %> tile.
For instructions on upgrading Kubernetes clusters, see [Upgrading Clusters](./upgrade-clusters.html).
* Operators can configure the Telegraf agent to send master/etcd node metrics
to a third-party monitoring service. For more information,
see [Monitoring Master/etcd Node VMs](./monitor-etcd.html).
* Operators can configure the default node drain behavior.
You can use this feature to resolve hanging or failed cluster upgrades.
For more information about configuring node drain behavior,
see [Worker Node Hangs Indefinitely](./troubleshoot-issues.html#upgrade-drain-hangs) in _Troubleshooting_
and [Configure Node Drain Behavior](./checklist.html#configure-node-drain)
in _Upgrade Preparation Checklist for <%= vars.product_short %> v1.5_.
* App developers can create metric sinks for namespaces within a Kubernetes cluster.
For more information, see [Creating Sink Resources](create-sinks.html).
* VMware's Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry)
are now enabled in <%= vars.product_short %> by default.
This includes both new installations and upgrades.
For information about configuring CEIP and Telemetry in the <%= vars.product_tile %> tile,
see [CEIP and Telemetry](installing-pks-vsphere.html#telemetry) in the _Installing_ topic for your IaaS.
* Adds a beta release of VMware <%= vars.product_short %> Management Console,
which provides a graphical interface for deploying and managing <%= vars.product_short %> on vSphere. For more information, see [Using the <%= vars.product_short %> Management Console](./console/console-index.html).
### <a id="v1.5.0-snapshot"></a>Product Snapshot
<table class="nice">
<th>Element</th>
<th>Details</th>
<tr>
<td>Version</td>
<td>v1.5.0</td>
</tr>
<tr>
<td>Release date</td>
<td>August 20, 2019</td>
</tr>
<tr>
<td>Compatible Ops Manager versions <sup>*</sup> </td>
<td><%= vars.ops_man_version_2_5 %> or <%= vars.ops_man_version_2_6 %></td>
</tr>
<tr>
<td>Xenial stemcell version</td>
<td>v315.81</td>
</tr>
<tr>
<td>Kubernetes version</td>
<td>v1.14.5</td>
</tr>
<tr>
<td>On-Demand Broker version</td>
<td>v0.29.0</td>
</tr>
<tr>
<td>NSX-T versions</td>
<td>v2.4.0.1, v2.4.1, v2.4.2 (<a href="./release-notes.html#nsxt-242">see below</a>)</td>
</tr>
<tr>
<td>NCP version</td>
<td>v2.5.0</td>
</tr>
<tr>
<td>Docker version</td>
<td>v18.09.8<br><a href="https://github.com/cloudfoundry-incubator/docker-boshrelease/">CFCR</a></td>
</tr>
<tr>
<td>Backup and Restore SDK version</td>
<td>v1.17.0</td>
</tr>
</table>
<sup>*</sup> If you want to use Windows workers in <%= vars.product_short %> v1.5,
you must install Ops Manager <%= vars.ops_man_version_2_6 %>. <%= vars.product_short %> does not support
this feature on Ops Manager v2.5. For more information about Ops Manager <%= vars.ops_man_version_2_6 %>,
see [PCF Ops Manager v2.6 Release Notes](https://docs.pivotal.io/pivotalcf/2-6/pcf-release-notes/opsmanager-rn.html).
### <a id="console-snapshot"></a>VMware Enterprise PKS Management Console Product Snapshot
<p class="note"><strong>Note</strong>: The Management Console BETA provides an opinionated installation of <%= vars.product_short %>. The supported versions list may differ from or be more limited than what is generally supported by <%= vars.product_short %>.</p>
<table class="nice">
<th>Element</th>
<th>Details</th>
<tr>
<td>Version</td>
<td>v0.9 - This feature is a beta component and is intended for evaluation and test purposes only.</td>
</tr>
<tr>
<td>Release date</td>
<td>August 22, 2019</td>
</tr>
<tr>
<td>Installed Enterprise PKS version</td>
<td>v1.5.0</td>
</tr>
<tr>
<td>Installed Ops Manager version</td>
<td>v2.6.5</td>
</tr>
<tr>
<td>Installed Kubernetes version</td>
<td>v1.14.5</td>
</tr>
<tr>
<td>Supported NSX-T versions</td>
<td>v2.4.1, v2.4.2 (<a href="./release-notes.html#nsxt-242">see below</a>)</td>
</tr>
<tr>
<td>Installed Harbor Registry version</td>
<td>v1.8.1</td>
</tr>
</table>
### <a id='vsphere-reqs'></a> vSphere Version Requirements
For <%= vars.product_short %> installations on vSphere or on vSphere with NSX-T Data Center, refer to the <a href="https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&356=&175=&1=">VMware Product Interoperability Matrices</a>.
### <a id="v1.5.0-upgrade"></a>Upgrade Path
The supported upgrade paths to <%= vars.product_short %> v1.5.0 are from <%= vars.product_short %> v1.4.0 and later.
**Exception:** If you are running <%= vars.product_short %> v1.4.0 with NSX-T v2.3.x, follow these steps:
1. Upgrade to PKS v1.4.1.
1. Upgrade to NSX-T v2.4.1.
1. Upgrade to PKS v1.5.0.
For detailed instructions, see [Upgrading <%= vars.product_short %>](upgrade-pks.html) and [Upgrading <%= vars.product_short %> with NSX-T](upgrade-pks-nsxt.html).
### <a id="v1.5.0-breaking-changes"></a>Breaking Changes
<%= vars.product_short %> v1.5.0 has the following breaking changes:
#### <a id='nsxt-242'></a> Announcing Support for NSX-T v2.4.2 with Known Issue and Workaround
<%= vars.product_short %> v1.5 supports NSX-T v2.4.2. However, there is a known issue with NSX-T v2.4.2 that can affect new and upgraded
installations of <%= vars.product_short %> v1.5 that use a [NAT topology](./nsxt-topologies.html#topology-nat).
For NSX-T v2.4.2, the PKS Management Plane must be deployed on a Tier-1 distributed router (DR). If the PKS Management Plane is deployed on a Tier-1 service router (SR), the router needs to be converted. To convert an SR to a DR, refer to the following KB article: [East-West traffic between workloads behind different T1 is impacted, when NAT is configured on T0 (71363)](https://kb.vmware.com/s/article/71363)
This issue will be addressed in a subsequent release of NSX-T, after which the Tier-1 router can be either a DR or an SR.
#### <a id='oidc'></a> New OIDC Prefixes Break Existing Cluster Role Bindings
In <%= vars.product_short %> v1.5, operators can configure prefixes for OIDC usernames and groups.
If you add OIDC prefixes, you must manually change any existing role bindings that bind to a username or group.
If you do not change your role bindings, developers cannot access Kubernetes clusters.
For instructions about creating a role binding, see [Managing Cluster Access and Permissions](./manage-cluster-permissions.html).
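For illustration, a cluster role binding updated for a username prefix might look like the following. This is a minimal sketch, assuming a configured OIDC username prefix of `oidc:`; the binding name, role, and user are placeholders.

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: "oidc:alana@example.com"   # username now includes the configured prefix
```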
#### <a id='sink-api-changes'></a> New API Group Name for Sink Resources
The `apps.pivotal.io` API group name for sink resources is no longer supported.
The new API group name is `pksapi.io`.
When creating a sink resource,
your sink resource YAML definition must start with `apiVersion: pksapi.io/v1beta1`.
All existing sinks are migrated automatically.
For more information about defining and managing sink resources, see [Creating Sink Resources](create-sinks.html).
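For example, the opening lines of a sink definition under the new API group might look like this (a minimal sketch; the resource kind and name are placeholders):

```yaml
apiVersion: pksapi.io/v1beta1
kind: LogSink
metadata:
  name: example-log-sink
```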
#### <a id='log-sink-changes'></a> Log Sink Changes
<%= vars.product_short %> v1.5.0 adds the following log sink changes:
* The `ClusterSink` log sink resource has been renamed to `ClusterLogSink` and the `Sink` log sink resource has been renamed to `LogSink`.
* When you create a log sink resource with YAML, you must use one of the new names in your sink resource YAML definition. For example, specify `kind: ClusterLogSink` to define a cluster log sink. All existing sinks are migrated automatically.
* When managing your log sink resources through kubectl, you must use the new log sink resource names.
For example, if you want to delete a cluster log sink, run `kubectl delete clusterlogsink` instead of `kubectl delete clustersink`.
* Log transport now requires a secure connection. When creating a `ClusterLogSink` or `LogSink` resource,
you must include `enable_tls: true` in your sink resource YAML definition.
All existing sinks are migrated automatically.
For more information about defining and managing sink resources, see [Creating Sink Resources](create-sinks.html).
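As an illustration, a cluster log sink definition that reflects these changes might look like the following. This is a minimal sketch; the host, port, and resource name are placeholders, and the full field reference is in [Creating Sink Resources](create-sinks.html).

```yaml
apiVersion: pksapi.io/v1beta1     # new API group
kind: ClusterLogSink              # formerly ClusterSink
metadata:
  name: example-cluster-log-sink
spec:
  host: logs.example.com
  port: 514
  enable_tls: true                # secure transport is now required
```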
#### <a id='sink-cli-deprecation'></a> Deprecation of Sink Commands in the PKS CLI
The following <%= vars.product_short %> Command Line Interface (PKS CLI) commands are deprecated
and will be removed in a future release:
* `pks create-sink`
* `pks sinks`
* `pks delete-sink`
You can use the following Kubernetes CLI commands instead:
* `kubectl apply -f MY-SINK.yml`
* `kubectl get clusterlogsinks`
* `kubectl delete clusterlogsink YOUR-SINK`
For more information about defining and managing sink resources,
see [Creating Sink Resources](create-sinks.html).
### <a id="v1.5.0-known-issues"></a> Known Issues
<%= vars.product_short %> v1.5.0 has the following known issues:
#### <a id="security-group"></a>Azure Default Security Group Is Not Automatically Assigned to Cluster VMs
**Symptom**
You experience issues when configuring a load balancer for a multi-master Kubernetes cluster or creating a service of type `LoadBalancer`.
Additionally, in the Azure portal, the **VM** > **Networking** page does not display
any inbound and outbound traffic rules for your cluster VMs.
**Explanation**
As part of configuring the <%= vars.product_tile %> tile for Azure, you enter **Default Security Group** in the **Kubernetes Cloud Provider** pane.
When you create a Kubernetes cluster, <%= vars.product_short %> automatically assigns this security group to each VM in the cluster.
However, on Azure the automatic assignment may not occur.
As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.
**Workaround**
If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
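If you prefer the Azure CLI to the Azure portal, the manual assignment might look like the following (a hypothetical sketch; the resource group, NIC, and security group names are placeholders):

```bash
# Attach the default security group to one cluster VM NIC; repeat for each NIC in the cluster.
az network nic update \
  --resource-group example-resource-group \
  --name example-cluster-vm-nic \
  --network-security-group example-default-security-group
```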
#### <a id='first-az'></a>Cluster Creation Fails When First AZ Runs out of Resources
**Symptom**
If the first availability zone (AZ) used by a plan with multiple AZs runs out of
resources, cluster creation fails with an error like the following:
<pre class="terminal">
Error: CPI error 'Bosh::Clouds::CloudError' with message 'No valid placement found for requested memory: 4096
</pre>
**Explanation**
BOSH creates VMs for your <%= vars.product_short %> deployment using a round-robin
algorithm, creating the first VM in the first AZ that your plan uses.
If the AZ runs out of resources, cluster creation fails because BOSH cannot create
the cluster VM.
For example, if your three AZs each have enough resources for ten VMs, and you
create two clusters with four worker VMs each, BOSH creates VMs in the
following AZs:
<table>
<tr>
<th></th>
<th>AZ1</th>
<th>AZ2</th>
<th>AZ3</th>
</tr>
<tr>
<th>Cluster 1</th>
<td>Worker VM 1</td>
<td>Worker VM 2</td>
<td>Worker VM 3</td>
</tr>
<tr>
<td></td>
<td>Worker VM 4</td>
<td></td>
<td></td>
</tr>
<tr>
<th>Cluster 2</th>
<td>Worker VM 1</td>
<td>Worker VM 2</td>
<td>Worker VM 3</td>
</tr>
<tr>
<td></td>
<td>Worker VM 4</td>
<td></td>
<td></td>
</tr>
</table>
In this scenario, AZ1 has twice as many VMs as AZ2 or AZ3.
#### <a id='azure-worker-comm'></a>Azure Worker Node Communication Fails after Upgrade
**Symptom**
Outbound communication from a worker node VM fails after upgrading <%= vars.product_short %>.
**Explanation**
<%= vars.product_short %> uses Azure Availability Sets to improve the uptime of workloads and worker nodes in the event of Azure platform failures. Worker node
VMs are distributed evenly across Availability Sets.
Azure Standard SKU Load Balancers are recommended for the Kubernetes control plane and Kubernetes ingress and egress. This load balancer type provides an IP address for outbound communication using SNAT.
During an upgrade, when BOSH rebuilds a given worker instance in an Availability Set,
Azure can time out while re-attaching the worker node network interface to the
back-end pool of the Standard SKU Load Balancer.
For more information, see [Outbound connections in Azure](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections) in the Azure documentation.
**Workaround**
You can manually re-attach the worker instance to the back-end pool of the Azure Standard SKU Load Balancer in your Azure console.
#### <a id='no-password-om'></a> Passwords Not Supported for Ops Manager VM on vSphere
Starting in Ops Manager v2.6, you can only SSH onto the Ops Manager VM in a vSphere deployment with a private SSH key. You cannot SSH onto the Ops Manager VM with a password.
To avoid upgrade failure and errors when authenticating, add a public key in the **Customize Template** screen of the OVF template for the Ops Manager VM. Then, use the private key to SSH onto the Ops Manager VM.
<p class="note warning"><strong>Warning</strong>: You cannot upgrade to Ops Manager v2.6 successfully without adding a public key. If you do not add a key, Ops Manager shuts down automatically because it cannot find a key and may enter a reboot loop.</p>
For more information about adding a public key to the OVF template, see [Deploy Ops Manager](https://docs.pivotal.io/pivotalcf/om/vsphere/deploy.html#deploy) in _Deploying Ops Manager on vSphere_.
#### <a id='timeout'></a> Error During Individual Cluster Upgrades
**Symptom**
While submitting a large number of cluster upgrade requests using the `pks upgrade-cluster` command, some of your Kubernetes clusters are marked as failed.
**Explanation**
BOSH upgrades Kubernetes clusters in parallel with a limit of up to four concurrent cluster upgrades by default.
If you schedule more than four cluster upgrades,
<%= vars.product_short %> queues the upgrades and waits for BOSH to finish the last upgrade.
When BOSH finishes the last upgrade, it starts working on the next upgrade request.
If you submit too many cluster upgrades to BOSH, an error may occur in which some of the clusters are marked as FAILED because BOSH could not start the upgrade within the specified timeout. The timeout is set to 168 hours.
However, BOSH does not remove the task from the queue or stop working on the upgrade if it has been picked up.
**Solution**
If you expect that upgrading all of your Kubernetes clusters takes more than 168 hours,
do not use a script that submits upgrade requests for all of your clusters at once.
For information about upgrading Kubernetes clusters provisioned by <%= vars.product_short %>,
see [Upgrading Clusters](./upgrade-clusters.html).
#### <a id="kubectl-azs"></a> Kubectl CLI Commands Do Not Work after Changing an Existing Plan to a Different AZ
**Symptom**
After you reconfigure the AZ of an existing plan, kubectl CLI commands do not work in the plan's existing clusters.
**Explanation**
This issue occurs in IaaS environments which either limit or prevent attaching a disk across multiple AZs.
BOSH supports creating new VMs and attaching existing disks to VMs. BOSH cannot "move" VMs.
If the plan for an existing cluster is changed to a different AZ,
the cluster's new "intended" state is for the cluster to be hosted within the new AZ.
To migrate the cluster from its original state to its intended state,
BOSH will create new VMs for the cluster within the designated AZ and remove the cluster's original VMs from the original AZ.
On an IaaS where attaching VM disks across AZs is not supported, the disks attached to the newly created VMs will not have the original disks' content.
**Workaround**
If you have reconfigured the AZ of an existing cluster and afterwards cannot run kubectl CLI commands, contact Support for assistance.
#### <a name='telemetry-prefs'></a> HTTP 500 Internal Server Error When Saving Telemetry Preferences
**Symptom**
You receive an `HTTP 500 Internal Server Error` when saving the Telemetry preferences form.
**Explanation**
When using Ops Manager v2.5, you may receive an `HTTP 500 Internal Server Error` if you attempt to
save Telemetry preferences without configuring all of the form's required settings.
**Solution**
Use your browser's `Back` function to return to the Telemetry preference configuration form.
Configure all of the form's required settings. To submit your Telemetry preferences, click `Save`.
#### <a name='uuid-length'></a> One Plan ID Is Longer Than Other Plan IDs
**Symptom**
One of your Plan IDs is one character longer than your other Plan IDs.
**Explanation**
Each Plan has a unique Plan ID. A Plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens.
The **Plan 4** Plan ID is instead a UUID consisting of 33 alphanumeric characters and 4 hyphens.
You can safely configure and use **Plan 4**.
The length of the **Plan 4** Plan ID does not affect the functionality of **Plan 4** clusters.
If you require all Plan IDs to have identical length, do not activate or use **Plan 4**.
#### <a id="metricsink"></a><%= vars.product_short %> The metric sink fails to send to secure connections.
**Symptom**
If you attempt to use a `MetricSink` or `ClusterMetricSink` resource over a secure connection, Telegraf rejects the TLS handshake.
**Explanation**
This occurs because CA certificates are missing from the Telegraf container images included in this tile version.
**Workaround**
A patch is in development. Until the patch is published, you cannot send metrics over secure connections.
### <a id="v1.5.0-known-issues--mgmt-console"></a> Enterprise PKS Management Console Known Issues
The following additional known issues are specific to the Enterprise PKS Management Console v0.9.0 appliance and user interface.
#### <a id="console-notifications"></a><%= vars.product_short %> Management Console Notifications Persist
**Symptom**
In the **Enterprise PKS** view of <%= vars.product_short %> Management Console, error notifications sometimes persist in memory on the **Clusters** and **Nodes** pages after you clear those notifications.
**Explanation**
After you click the **X** button to clear a notification, the notification is removed, but when you navigate back to those pages, it might appear again.
**Workaround**
Use shift+refresh to reload the page.
#### <a id="console-delete"></a>Cannot Delete <%= vars.product_short %> Deployment from Management Console
**Symptom**
In the **Enterprise PKS** view of <%= vars.product_short %> Management Console, you cannot use the **Delete Enterprise PKS Deployment** option even after you have removed all clusters.
**Explanation**
The option to delete the deployment only becomes available in the management console a short time after the clusters are deleted.
**Workaround**
After removing clusters, wait for a few minutes before attempting to use the **Delete Enterprise PKS Deployment** option again.
#### <a id="console-vli-port"></a>Configuring <%= vars.product_short %> Management Console Integration with VMware vRealize Log Insight
**Symptom**
Enterprise PKS Management Console appliance sends logs to VMware vRealize Log Insight over HTTP, not HTTPS.
**Explanation**
When you deploy the Enterprise PKS Management Console appliance from the OVA, if you require log forwarding to vRealize Log Insight, you must provide the port on the vRealize Log Insight server on which it listens for HTTP traffic. Do not provide the HTTPS port.
**Workaround**
Set the vRealize Log Insight port to the HTTP port. This is typically 9000.
#### <a id="console-nsxt-flannel-error"></a>Deploying <%= vars.product_short %> to an Unprepared NSX-T Data Center Environment Results in Flannel Error
**Symptom**
When using the management console to deploy <%= vars.product_short %> in **NSX-T Data Center (Not prepared for PKS)** mode, if an error occurs during the network configuration, the message `Unable to set flannel environment` is displayed in the deployment progress page.
**Explanation**
The network configuration has failed, but the error message is incorrect.
**Workaround**
To see the correct reason for the failure, see the server logs. For instructions about how to obtain the server logs, see [Troubleshooting Enterprise PKS Management Console](./console/console-troubleshooting.html).
#### <a id="console-bosh-cli"></a>Using BOSH CLI from Operations Manager VM
**Symptom**
The BOSH CLI client bash command that you obtain from the **Deployment Metadata** view does not work when logged in to the Operations Manager VM.
**Explanation**
The BOSH CLI client bash command from the **Deployment Metadata** view is intended to be used from within the <%= vars.product_short %> Management Console appliance.
**Workaround**
To use the BOSH CLI from within the Operations Manager VM, see [Connect to Operations Manager](./console/console-login-opsmanager.html).
From the Ops Manager VM, use the BOSH CLI client bash command from the **Deployment Metadata** page, with the following modifications:
* Remove the clause `BOSH_ALL_PROXY=xxx`
* Replace the `BOSH_CA_CERT` section with `BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate`
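For example, the modified command run from the Ops Manager VM might look like the following (a hypothetical sketch; the director address and client secret are placeholders):

```bash
# BOSH_ALL_PROXY is removed; BOSH_CA_CERT points to the root CA file on the Ops Manager VM.
BOSH_CLIENT=ops_manager \
BOSH_CLIENT_SECRET=EXAMPLE-SECRET \
BOSH_ENVIRONMENT=10.0.0.3 \
BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
bosh deployments
```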
#### <a id="console-pks-cli"></a>Run `pks` Commands against the PKS API Server
**Explanation**
The PKS CLI is available in the <%= vars.product_short %> Management Console appliance.
**Workaround**
To be able to run `pks` commands against the PKS API Server, you must first log in to PKS using the following command syntax: `pks login -a fqdn_of_pks …`.
To do this, you must ensure one of the following:
* The FQDN configured for the PKS Server is resolvable by the DNS server configured for the Enterprise PKS Management Console appliance, or
* An entry that maps the floating IP address assigned to the PKS Server to the FQDN exists in `/etc/hosts` on the appliance. For example: `192.168.160.102 api.pks.local`.
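For example, assuming the floating IP and FQDN shown above (a hypothetical sketch; the credentials are placeholders and `-k` skips TLS certificate verification):

```bash
# Map the PKS API floating IP to its FQDN on the appliance, then log in.
echo "192.168.160.102 api.pks.local" | sudo tee -a /etc/hosts
pks login -a api.pks.local -u example-admin -p 'EXAMPLE-PASSWORD' -k
```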