Note: With our new infrastructure services, we no longer launch a network agent and have moved to a specific HAProxy image for load balancers. Therefore, we no longer need rancher/agent-instance.
Starting with version 1.2.0, Rancher no longer supports AWS ELBs and only supports AWS ALB (Application Load Balancers). Please update your high availability setups to use ALBs.
New Features (since v1.1.4)
- Kubernetes 1.4.6 Support - In addition to adding support for the latest k8s upstream distribution, Rancher now offers the following support:
- Users can now select AWS as a Cloud Provider in addition to the default Rancher option.
- Added support for kubectl exec, logs, and attach.
- Added support for k8s node labeling.
- Added support for managing stateful applications through the use of PetSet objects.
- Added support for upgrading k8s cluster within an environment.
- Added Rancher UI support for the concept of a Stack to manage k8s templates as individual applications. Stacks can be upgraded and deleted as an application.
- Added Rancher UI support for k8s Deployments and ReplicaSet.
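With kubectl support in place, the usual upstream workflows apply. A quick sketch of the newly supported commands (pod and node names are placeholders):

```shell
# Stream logs from a pod
kubectl logs my-pod

# Open an interactive shell inside a running container
kubectl exec -it my-pod -- /bin/sh

# Attach to the main process of a running container
kubectl attach my-pod

# Label a node so workloads can be scheduled against it
kubectl label node my-node disk=ssd
```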
- Docker 1.12.x Support [#5179] - Docker 1.12.x is now supported in Rancher, along with a few enhancements:
- Docker Swarm mode is now available as an option for orchestration frameworks when creating environments.
- Cattle now supports all docker run options as of Docker 1.12.3. Please see [#4708] for a more detailed list of the updated run options.
- Improved Network Support [#5256, #5276] - Rancher now adds the ability to manage the lifecycle, distribution, and update management of custom network plugins written for the Container Network Interface (CNI) specification.
- The current Rancher IPSec managed networking option has been completely rewritten as a CNI plugin and is available to both Cattle and K8s orchestration frameworks.
- Rancher now offers a VXLAN CNI plugin as an alternative unencrypted cross-host managed networking option, with a promise of better performance.
- Improved Load Balancer v2 Support [#2179] - The Rancher LB Service has been completely rewritten with the following support:
- SNI Routing is now available in LB v2.
- HAProxy logging is now available in LB v2. [#2414]
- Users can now add custom configuration to HAProxy configurations for frontend and backend in addition to the already supported global and default sections. [#2171, #1871]
- Users can now add a selector with hostname routing rules [#2288]
- Users now have more flexibility in defining port to service mappings in v2. In v1, ports had to be mapped to all services.
- Users can now implement and add their own custom LB service using their favorite LB engine (e.g. nginx) by integrating it with Rancher’s metadata service to determine when a container requests to be registered with a LB.
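As a sketch of the new configuration surface, a v2 load balancer defined in `rancher-compose.yml` might look like the following. Service names, ports, and the hostname are illustrative, and the exact schema may differ from this sketch; please check the Rancher docs.

```yaml
lb:
  scale: 1
  lb_config:
    port_rules:
      # Route requests for app.example.com arriving on port 80
      # to port 8080 of the "web" service (SNI/hostname routing)
      - source_port: 80
        target_port: 8080
        hostname: app.example.com
        service: web
    # Custom HAProxy configuration can be supplied in addition to
    # the generated global/defaults/frontend/backend sections
    config: |-
      frontend 80
        maxconn 2000
```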
- Improved Storage Support - Rancher now adds the ability to manage the lifecycle, distribution, and update management of both custom K8s flexvolume and Docker volume plugins.
- Cattle now supports volume plugins for NFS. Please be aware that going forward in 1.2, RancherNFS will be the supported NFS Docker plugin solution. Convoy-NFS will no longer be available as an option in 1.2 and will no longer be supported in 1.3+.
- [Experimental] Rancher now has support for EBS and EFS.
- Improved Authentication Support [#5265] - The authentication framework has been re-written to provide more flexibility in adding new authentication/authorization services into Rancher.
- Shibboleth v3 is now a new authentication provider option in Rancher written for SAML 2.0 support.
- General Performance/Scale Improvements - Various enhancements have been added to improve general performance and scalability in Rancher.
- UI Infrastructure view has been changed to accommodate more hosts and containers per environment.
- Container deployment performance has been improved through scheduling enhancements and by allowing containers to launch in parallel.
- Rancher CLI - Rancher now ships with a new rancher CLI with the following support:
- Native Docker CLI interactions with your managed hosts
- Environment management
- Stack management
- Service management
- Host management
- SSH access to your managed host
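A quick sketch of typical CLI usage follows. The server URL and API keys are placeholders, and exact subcommand flags may differ from this sketch; see the CLI docs for details.

```shell
# Point the CLI at your Rancher server (URL and keys are placeholders)
export RANCHER_URL=http://rancher.example.com:8080
export RANCHER_ACCESS_KEY=<access-key>
export RANCHER_SECRET_KEY=<secret-key>

# List hosts and SSH into one
rancher hosts
rancher ssh my-host

# Inspect services, tail logs, and exec into a container
rancher ps
rancher logs my-stack/my-service
rancher exec -it my-container /bin/sh
```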
- Resource Scheduling - Cattle now supports scheduling containers via resource constraints on CPU and memory. Admins can also now set CPU/memory resource limits on a per-host basis.
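A minimal sketch of what resource-aware scheduling can look like in a compose file. The reservation field name and the scheduler label shown here are assumptions based on Rancher's compose conventions; consult the scheduling docs for the exact syntax.

```yaml
web:
  image: nginx
  # Reserve memory (in bytes) so the scheduler only places this
  # container on hosts with enough free capacity -- field name is
  # an assumption, verify against the docs
  mem_reservation: 268435456   # 256 MiB
  labels:
    # Only schedule onto hosts carrying this host label
    io.rancher.scheduler.affinity:host_label: env=prod
```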
- Environment Templates - Rancher now supports the ability for users to launch their environment based on a template that describes all their required infrastructure services (i.e. LB, Storage, Networking, etc.).
- Rancher will still have default primary templates so users can quickly get their environments up and running.
- Users will now have the ability to create and manage environment templates that describe the infrastructure services to be deployed upon environment creation.
- Users can now leverage both community contributed services or create their own to be deployed and managed by Rancher.
- Improved HA Support - Rancher HA has been drastically simplified to configure and manage. Redis and ZooKeeper have been removed as requirements for multi-node Rancher deployments.
- And many more...
- Rancher now supports the ability for admins to whitelist the Docker registries available to each Rancher deployment.
- Rancher allows admins to set a default Docker registry to be used for images that have no registry prefix.
- Catalog bindings support for ports and labels.
- Catalog support for handling git branches.
- Rancher agent is now rewritten in Go.
- Docker Machine was updated, and Azure has a brand new driver with an updated UI.
- Support for RancherOS 0.6.0+
- Added Service Log journaling to give users better logging when services do not launch properly.
How to upgrade to 1.2
Rancher 1.2 introduces some major changes to how networking is managed, most notably the refactoring of the IPSec networking into a proper CNI plugin and the introduction of a new v2 LB service that provides users with more flexibility in HAProxy configurations. Due to these changes, the upgrade process
will result in network downtime and will require you to upgrade each environment to restore connectivity. The upgrade process has been split into a Rancher server update followed by individual environment updates, so please follow the instructions below to properly upgrade your current version to 1.2:
Rancher Server Upgrade
Make sure you back up your database. You will not be able to roll back a database that has been used with Rancher 1.2.0 to a previous version; if you need to roll back, you will only be able to use a snapshot of a database taken while it was running your previous version. Once you have created a backup of your database, please proceed with the normal upgrade process as per the docs.
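For a typical MySQL-backed installation, the backup can be as simple as the following sketch. The host, user, and database name are placeholders for your own setup:

```shell
# Dump the Rancher database to a dated file before upgrading
# (assumes a MySQL backend; adjust host/user/database to your setup)
mysqldump -h db-host -u rancher -p --single-transaction \
  cattle > rancher-backup-$(date +%F).sql
```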
Note: If you are using AWS security groups, please make sure ICMP is enabled in your security group.
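For reference, an ICMP rule can be added with the AWS CLI along these lines (the security group ID is a placeholder for the group your hosts share):

```shell
# Allow all ICMP traffic between hosts in the same security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol icmp --port -1 \
  --source-group sg-xxxxxxxx
```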
After your Rancher server has been upgraded, you will no longer be able to access your environments until you successfully upgrade them to 1.2. Due to the network and LB changes in 1.2, the upgrade process will require downtime until the new network components have been updated and migrated. As a convenience, you can decide when to update each environment: an "Upgrade Now" screen is displayed for every environment that requires an update. Until you perform the update, existing containers should remain functional, but no management capability will be available, and anything that triggers recreation of containers (e.g. health checks) may cause those containers to stop working. We highly suggest you upgrade each environment promptly, as any functionality that requires Rancher involvement, such as HA, DNS programming, and health checks, may not work properly until the upgrade has been completed.
Once you click "Upgrade Now", Rancher will proceed to upgrade the environment. Please be patient as this can take up to 10-20 minutes depending on the size of your environment. The environment will be successfully updated when all the stacks found under Stacks -> Infrastructure are in the active state.
For those of you that have Kubernetes environments, you must upgrade your existing k8s v1.2.6 stack to v1.4.6 after all infrastructure services are in an active state. Note: When upgrading k8s, please be aware that there is a known issue that may cause existing pods to be deleted and recreated. If a pod is not part of a replication controller, it will not be recreated, so please plan accordingly. Again, the upgrade process may take upwards of 5-10 minutes depending on your environment and will be completed when the stack is in an active state.
Known Limitations with Upgrade
- Upgrading Swarm environments is not supported. Because Docker moved Swarm into the Docker engine itself, we have updated our original Swarm support to the Docker 1.12 Swarm mode.
- Some catalog entries have been moved to different folders so that they show up as environment template options. These catalog entries do not support rolling back to the old entries. Examples include, but are not limited to, Kubernetes and all External DNS entries.
- During the upgrade from v1 load balancers to v2 load balancers, any rules using selectors will not be upgraded. These rules would need to be added into the load balancer after the environment upgrade.
- Starting with 1.2, Rancher will no longer pull stats from cadvisor but rather from docker stats. Please be aware that this will cause existing catalog items that rely on cadvisor such as Prometheus to no longer work until they have been fixed to rely on docker stats instead.
Known Major Issues
- Individual container links are not resolvable. Note that this applies only to container links; service links still work as expected. [#6584]
- Self signed certs do not work with Rancher server [#6122]
- boot2docker hosts are known to have issues with rancher/plugin-manager:v0.2.12; a newer network services image, rancher/network-manager:v0.2.13, is available. If you see an "Upgrade Available" button next to the Network Services stack, please upgrade. [#6874]
- v1.2.0 only works if your docker bridge is docker0 [#6896] and docker must be installed at
- Hosts in AWS created in a prior release using the UI (aka docker-machine) are not cleaned up properly when deleted from the UI [#6750]
Major Bug Fixes since v1.1.4
- Please see our 1.2 milestone for a comprehensive list of the issues that were resolved.
Here are the previous pre-releases that are included in v1.2.0: