IML 5.0.0
We are pleased to announce general availability of IML 5.0.0
IML 5 adds support for Lustre 2.12.1 and ZFS 0.7.13, and includes a number of enhancements and bug fixes.
General documentation can be found here.
Issues can be reported here.
Special thanks to all the contributors involved in this release:
@utopiabound
@johnsonw
@peterjonescumberland
@liy106
@mdiep25
@AlexTalker
@tanabarr
@brianjmurrell
@chrisgearing
How to get it
IML 5 can be installed either as a set of RPMs via Fedora Copr or as a Docker Stack deployment.
RPM Install
To install the software via RPM, follow these steps.
Docker Stack Install
To install the software via Docker Stack, follow these steps.
Upgrade to new RPMs
To upgrade from IEEL 2.4.x / 3.1.x or IML 4.0.x to IML 5, follow these steps.
Upgrades from IEEL versions must first go through IML 4 before moving to IML 5.
For instance:
- To upgrade from IEEL 2.4 to IML 5, upgrade IEEL 2.4 -> IML 4.0.10 -> IML 5
- To upgrade from IEEL 3.1 to IML 5, upgrade IEEL 3.1 -> IML 4.0.10 -> IML 5
Enhancements
- Lustre 2.12.1 / ZFS 0.7.13 Support
- IML 5 adds support for Lustre 2.12.1 and can deploy managed filesystems in both patched and patchless configurations, for both ldiskfs and ZFS.
- In addition, IML 5 supports monitoring Lustre 2.10 installs.
- ZED integration (#536)
- IML now uses ZED (the ZFS Event Daemon) to be notified of changes to pools, datasets, properties, and vdevs.
- This results in lower resource utilization and greater responsiveness, as IML no longer polls CLI output for this state.
- libzfs bindings and integration (#535)
- In addition to gathering data via ZED, IML talks directly to libzfs to gather ancillary data.
- RPM delivery and updates (#534)
- IML is now delivered entirely via Fedora Copr; there is no tarball installer.
- Components are modular and can be updated individually.
- Bugfix and other non-breaking updates will be delivered in band for existing releases as updated RPMs in the Copr repo.
- HA Improvements (#738)
- IML 5 now uses the upstream Lustre resource agent (RA) from the Lustre repo
- IML 5 also uses the upstream ClusterLabs ZFS RA
- IML 5 adds support for running within Docker stack
- Run the manager on any Docker-supported platform
- The IML manager can be collocated with otherwise conflicting services
- Reactive Architecture (#533)
- IML is moving to a push-based architecture utilizing udev + ZED.
- Instead of polling CLI output in a blocking fashion, events from storage servers are used to update state.
- This both lowers resource usage and greatly improves responsiveness and scalability.
- Modularization
- IML now uses systemd units for orchestration, and environment files and variables for configuration.
- We are striving for smaller, modular services that follow 12-factor app principles.
- Performance Improvements
- IML 5 adds performance improvements that should lead to lower resource utilization and faster response times.
- Add patchless server profile (#887)
- Add monitoring action for fencing (#79)
- Switch ring0 and ring1 interfaces (#70)
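The modularization item above (systemd units for orchestration, environment files for configuration) follows a common 12-factor pattern, sketched below. The unit and file names are hypothetical; only iml-manager.target appears elsewhere in these notes.

```ini
# /etc/systemd/system/example-iml-service.service
# Illustrative unit only -- not an actual IML unit file.
[Unit]
Description=Example IML component
PartOf=iml-manager.target

[Service]
# Configuration comes from the environment, not from the unit itself.
EnvironmentFile=/etc/iml/example-service.conf
ExecStart=/usr/bin/example-iml-service

[Install]
WantedBy=iml-manager.target

# /etc/iml/example-service.conf (hypothetical environment file)
# LISTEN_PORT=8080
# LOG_LEVEL=info
```

With this layout, a component can be reconfigured by editing its environment file and restarted individually, or together with its siblings via the shared target.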
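The ZED integration described above can be illustrated with a small "zedlet": ZED runs executables placed in /etc/zfs/zed.d and exports event details as ZEVENT_* environment variables, so state changes are pushed to the agent instead of scraped from CLI output. The script below is a hedged sketch, not the actual IML agent code; the message format is invented for illustration.

```shell
#!/bin/sh
# Hypothetical zedlet sketch. ZED invokes scripts like this for each ZFS
# event, with details exported as ZEVENT_* environment variables
# (e.g. ZEVENT_CLASS, ZEVENT_POOL).

format_zevent() {
    # Format the event as a one-line message that a monitoring agent
    # could forward; the "zfs-event" format here is made up.
    printf 'zfs-event class=%s pool=%s\n' \
        "${ZEVENT_CLASS:-unknown}" "${ZEVENT_POOL:-n/a}"
}

# In a real zedlet this message would be handed to the local agent;
# here it is simply printed.
format_zevent
```

Because ZED fires these scripts as events occur, the agent reacts immediately rather than re-running `zpool`/`zfs` commands on a timer.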
Bug Fixes
- Stop running partprobe on storage servers on an interval (#331)
- Remove modprobe as a method of verifying modules are running (#41)
- Ensure stats get stored correctly in monitor mode failover (#554)
- Ensure Robinhood is functional with Lustre 2.12 (#731)
- Do not import pools on hosts to discover state
- Remove dependency on DNF
- Remove usage of ZFS lockfile for bookkeeping imports
- Ensure dashboard displays target names instead of ids
Known Issues
These are known issues that are being actively worked on. Fixes will be pushed to the IML 5 repo as they land.
- RabbitMQ may not start upon reboot (#480)
- Workaround: If the manager is not running after reboot, issue the following command to restart RabbitMQ and all IML services:
systemctl restart iml-manager.target
- ZFS backed nodes may not reboot upon power loss
- Workaround: Boot the node
- Zpools may appear on hosts where they are not available.
- This should be a display-only bug and will not affect filesystem creation, failover, or failback.