
OpenShift 4.1 Release Notes Tracker #12487

Closed
deads2k opened this issue Oct 15, 2018 · 20 comments


@deads2k commented Oct 15, 2018

No description provided.

@deads2k commented Oct 15, 2018

SecurityContextConstraints only exist in the security.openshift.io group.

@kalexand-rh commented Oct 18, 2018

Pods have to change the CA cert bundle differently: #12563

@mrogers950 commented Oct 18, 2018

@kalexand-rh A note for #12563:
Pods that currently consume the service-serving CA bundle from /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt should migrate to obtaining the CA bundle from a ConfigMap annotated with "service.alpha.openshift.io/inject-cabundle=true".

The /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file is alpha and deprecated. It will be removed in a future release.
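A minimal sketch of the new pattern. The ConfigMap name is illustrative, not something from this thread; the annotation is the one quoted above:

```yaml
# ConfigMap that requests injection of the service-serving CA bundle.
# The service CA operator populates it with a service-ca.crt key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-ca                      # illustrative name
  annotations:
    service.alpha.openshift.io/inject-cabundle: "true"
```

Pods would then mount this ConfigMap as a volume and read service-ca.crt from it, rather than relying on the deprecated /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt path.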

@ariordan-redhat commented Feb 4, 2019

https://jira.coreos.com/browse/OSDOCS-178
Comment from SDN team for 4.0:

We do install multus, but anything that actually utilizes it will be tech preview
So we should document that multus is available, and point to the tech preview documentation for how to install the additional networks on top of multus.
We should also document how to disable multus, in case people want it gone for whatever reason.
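As a sketch of how disabling Multus is expressed through the Cluster Network Operator configuration (the disableMultiNetwork field is an assumption based on the operator's config schema, not something confirmed in this thread):

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  disableMultiNetwork: true   # assumption: disables Multus / additional networks
```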

@sheriff-rh commented Feb 5, 2019

We do install multus, but anything that actually utilizes it will be tech preview
So we should document that multus is available, and point to the tech preview documentation for how to install the additional networks on top of multus.

I have included this in the tech preview chart portion of the release notes, which includes a link to the definition/support meanings of TP. Thanks @ariordan-redhat !

@bergerhoffer commented Feb 21, 2019

Deprecation notice to add:

The OpenShift Service Broker and Service Catalog are being replaced over the course of several future OpenShift 4 releases. Red Hat will deprecate the Template Service Broker and Ansible Service Broker once the important dependent content has been ported to new Operator-driven solutions. Users should be aware that equivalent and better functionality is present in the Operator Framework and Operator Lifecycle Manager (OLM). Use this notice to begin researching how your services can take advantage of these new technologies, which are now available in OpenShift 4.

@huffmanca commented Feb 25, 2019

https://jira.coreos.com/browse/AUTH-184
This issue needs to be included in the release notes, as several of the oc adm commands have been deprecated. The new certificate details are being tracked under OSDOCS-315.

@vikram-redhat commented Mar 5, 2019

@bergerhoffer commented Mar 14, 2019

@ahardin-rh @sheriff-rh Here's a new section for the change in behavior for the service catalog:

The service catalog is not installed by default in OpenShift Container Platform 4.0. You must install it if you plan on using any of the services from the OpenShift Ansible broker or template service broker. In OpenShift Container Platform 4.0, the service catalog API server is installed into the openshift-service-catalog-apiserver namespace and the service catalog controller manager is installed into the openshift-service-catalog-controller-manager namespace. In OpenShift Container Platform 3.11, both of these components were installed into the kube-service-catalog namespace.

@bergerhoffer commented Mar 14, 2019

@ahardin-rh @sheriff-rh Can you also make updates to the existing service catalog/brokers deprecation notice in the release notes?

Use the changes suggested in this Google doc under "Deprecation notice".

The paragraph above it might eventually get rolled in, but we're still working it out.

@ahardin-rh ahardin-rh changed the title OpenShift 4.0 Release Notes Tracker OpenShift 4.1 Release Notes Tracker Apr 2, 2019
@yufchang commented Apr 3, 2019

@ahardin-rh @sheriff-rh Hi, will a UHC (https://cloud.openshift.com) section be added to the release notes in the future? I don't see this part yet. Thanks.

@sichvoge commented Apr 8, 2019

For the "Cluster Monitoring" section, the following three new features are missing:

  1. A new Alerting UI natively integrated into the OpenShift console. You can now view cluster-level alerts and alerting rules from a single place, as well as configure silences.
  2. Telemeter, which collects anonymized cluster-related metrics to proactively help customers with their OpenShift clusters.
  3. Horizontal pod autoscaling based on the resource metrics API.

For telemeter, we try to achieve the following additional use cases:

  • Gather crucial (health) metrics of OpenShift installation
  • Enable viable feedback loop of OpenShift upgrades
  • Enable metering

For autoscaling based on the resource metrics API, you could say something along these lines:

By default, OpenShift Cluster Monitoring exposes CPU and memory utilization through the Kubernetes resource metrics API. There is no longer a need to install a separate metrics server.
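The behavior described above can be sketched as a HorizontalPodAutoscaler that scales on CPU utilization served by the resource metrics API. This is a minimal illustration; the object and target names (example-hpa, example-app) and the utilization threshold are assumptions, not values from this thread:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app      # illustrative target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu            # memory is also exposed via the resource metrics API
      targetAverageUtilization: 75
```

Because cluster monitoring serves the resource metrics API out of the box, an object like this works without deploying a separate metrics server.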

@jboxman commented Apr 17, 2019

@ahardin-rh @sheriff-rh -- Not sure if this is the proper channel for this, but with regard to Networking, I think it makes sense to mention the Network Operator somewhere. @pecameron said:

network operator is new in 4.1, 3.11 uses ansible playbooks.
In addition to installing, the network operator does upgrades and monitors networking.

I'm not sure if it also needs a mention under OperatorHub, possibly as Network management or similar.

@yapei commented May 10, 2019

@ahardin-rh @sheriff-rh I'm not sure about this statement:

Now with Operators, you can pick and choose what functionality you want to enable, allowing the user to customize the interface.

AFAIK, we only support logoutURL, brand, and documentationBaseURL. Can we say "you can pick and choose what functionality you want to enable"?

@michaelgugino commented May 20, 2019

I put both of these at 'medium' as they may impact users and have negative effects, but would not prevent normal cluster operations.

This one may need further consideration for a higher level: https://bugzilla.redhat.com/show_bug.cgi?id=1712056

https://bugzilla.redhat.com/show_bug.cgi?id=1712068

@adambkaplan commented May 21, 2019

Builds with shell substitution for env vars may fail.

https://bugzilla.redhat.com/show_bug.cgi?id=1712245
Docs PR is up: #14977

@weinliu commented May 22, 2019

Autoscaling for memory utilization is not working (https://bugzilla.redhat.com/show_bug.cgi?id=1707785). Shall we list it as a known issue?

@michaelgugino commented May 22, 2019

@ahardin-rh https://bugzilla.redhat.com/show_bug.cgi?id=1712056 no longer applies, further testing revealed it's not a bug.

@michaelgugino commented May 22, 2019

This is an issue, confirmed today: https://bugzilla.redhat.com/show_bug.cgi?id=1713061

When deleting a machine-object (either directly or by scaling down the owning machine-set), if the associated node has already been deleted somehow (possibly by a cluster admin), the machine-controller will fail to successfully delete the backing cloud instance, and the machine-object will be stuck in 'deleting' status.

@kalexand-rh commented Jun 7, 2019

It looks like @ahardin-rh incorporated all of the issues that were raised here, so I'm going to close this issue. Thanks everyone!

@kalexand-rh kalexand-rh closed this Jun 7, 2019