Remove the CachingScheduler
The CachingScheduler has been deprecated since Pike [1].
It does not use the placement service, and as more of nova
relies on placement for managing resource allocations,
maintaining compatibility for the CachingScheduler is
prohibitively expensive.

The release note in this change goes into much more detail
about why the FilterScheduler + Placement should be a
sufficient replacement for the original justification
for the CachingScheduler along with details on how to migrate
from the CachingScheduler to the FilterScheduler.
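The migration itself boils down to a one-line configuration change. A minimal sketch of the relevant ``nova.conf`` section, using the driver names mentioned in this change:

```ini
[scheduler]
# Before (deprecated since Pike, removed by this commit):
# driver = caching_scheduler
# After: the FilterScheduler, which uses the placement service:
driver = filter_scheduler
```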

Since the [scheduler]/driver configuration option does allow
loading out-of-tree drivers and the scheduler driver interface
does have the USES_ALLOCATION_CANDIDATES variable, it is
possible that there are drivers being used which are also not
using the placement service. The release note also explains this
but warns against it. However, as a result some existing
functional tests, which were using the CachingScheduler, are
updated to still test scheduling without allocations being
created in the placement service.

Over time we will likely remove the USES_ALLOCATION_CANDIDATES
variable in the scheduler driver interface along with the
compatibility code associated with it, but that is left for
a later change.
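The opt-out mechanism for out-of-tree drivers is the class-level flag on the driver interface described above. The following standalone sketch illustrates the pattern; the base class here is a simplified stand-in for ``nova.scheduler.driver.Scheduler`` (method signature abbreviated), and ``NoPlacementScheduler`` is a hypothetical driver, not part of nova:

```python
class Scheduler(object):
    """Simplified stand-in for the nova.scheduler.driver.Scheduler base."""

    # When True, the driver obtains allocation candidates from the
    # Placement API and the scheduler creates allocations from them.
    # Compatibility code in nova keys off this flag for drivers that
    # opt out, and the flag is slated for eventual removal.
    USES_ALLOCATION_CANDIDATES = True

    def select_destinations(self, context, spec_obj, instance_uuids,
                            alloc_reqs_by_rp_uuid, provider_summaries):
        raise NotImplementedError()


class NoPlacementScheduler(Scheduler):
    """Hypothetical out-of-tree driver that does not use Placement.

    The release note warns against running such a driver, since no
    allocations are created in Placement for its scheduling decisions.
    """
    USES_ALLOCATION_CANDIDATES = False

    def select_destinations(self, context, spec_obj, instance_uuids,
                            alloc_reqs_by_rp_uuid, provider_summaries):
        # A real driver would pick hosts from its own view of the
        # system; this sketch simply selects nothing.
        return []
```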

[1] Ia7ff98ff28b7265058845e46b277317a2bfc96d2

Change-Id: I1832da2190be5ef2b04953938860a56a43e8cddf
mriedem committed Oct 18, 2018
1 parent ce520ee commit 25dadb9
Showing 23 changed files with 113 additions and 664 deletions.
10 changes: 0 additions & 10 deletions .zuul.yaml
@@ -71,14 +71,6 @@
tox_envlist: functional-py35
timeout: 3600

- job:
name: nova-caching-scheduler
parent: nova-dsvm-base
description: |
Run non-slow Tempest API and scenario tests using the CachingScheduler.
run: playbooks/legacy/nova-caching-scheduler/run.yaml
post-run: playbooks/legacy/nova-caching-scheduler/post.yaml

- job:
name: nova-cells-v1
parent: nova-dsvm-base
@@ -246,7 +238,5 @@
irrelevant-files: *dsvm-irrelevant-files
- neutron-tempest-dvr-ha-multinode-full:
irrelevant-files: *dsvm-irrelevant-files
- nova-caching-scheduler:
irrelevant-files: *dsvm-irrelevant-files
- os-vif-ovs:
irrelevant-files: *dsvm-irrelevant-files
40 changes: 0 additions & 40 deletions contrib/profile_caching_scheduler.sh

This file was deleted.

30 changes: 0 additions & 30 deletions devstack/tempest-dsvm-caching-scheduler-rc

This file was deleted.

21 changes: 9 additions & 12 deletions doc/source/admin/configuration/schedulers.rst
@@ -311,10 +311,9 @@ CoreFilter

``CoreFilter`` is deprecated since the 19.0.0 Stein release. VCPU
filtering is performed natively using the Placement service when using the
``filter_scheduler`` driver. Users of the ``caching_scheduler`` driver may
still rely on this filter but the ``caching_scheduler`` driver is itself
deprecated. Furthermore, enabling CoreFilter may incorrectly filter out
`baremetal nodes`_ which must be scheduled using custom resource classes.
``filter_scheduler`` driver. Furthermore, enabling CoreFilter may
incorrectly filter out `baremetal nodes`_ which must be scheduled using
custom resource classes.

Only schedules instances on hosts if sufficient CPU cores are available. If
this filter is not set, the scheduler might over-provision a host based on
@@ -390,10 +389,9 @@ DiskFilter

``DiskFilter`` is deprecated since the 19.0.0 Stein release. DISK_GB
filtering is performed natively using the Placement service when using the
``filter_scheduler`` driver. Users of the ``caching_scheduler`` driver may
still rely on this filter but the ``caching_scheduler`` driver is itself
deprecated. Furthermore, enabling DiskFilter may incorrectly filter out
`baremetal nodes`_ which must be scheduled using custom resource classes.
``filter_scheduler`` driver. Furthermore, enabling DiskFilter may
incorrectly filter out `baremetal nodes`_ which must be scheduled using
custom resource classes.

Only schedules instances on hosts if there is sufficient disk space available
for root and ephemeral storage.
@@ -640,10 +638,9 @@ RamFilter

``RamFilter`` is deprecated since the 19.0.0 Stein release. MEMORY_MB
filtering is performed natively using the Placement service when using the
``filter_scheduler`` driver. Users of the ``caching_scheduler`` driver may
still rely on this filter but the ``caching_scheduler`` driver is itself
deprecated. Furthermore, enabling RamFilter may incorrectly filter out
`baremetal nodes`_ which must be scheduled using custom resource classes.
``filter_scheduler`` driver. Furthermore, enabling RamFilter may
incorrectly filter out `baremetal nodes`_ which must be scheduled using
custom resource classes.

.. _baremetal nodes: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html

2 changes: 0 additions & 2 deletions nova/conductor/tasks/live_migrate.py
@@ -196,8 +196,6 @@ def _check_destination_has_enough_memory(self):
# TODO(mriedem): This method can be removed when the forced host
# scenario is calling select_destinations() in the scheduler because
# Placement will be used to filter allocation candidates by MEMORY_MB.
# We likely can't remove it until the CachingScheduler is gone though
# since the CachingScheduler does not use Placement.
compute = self._get_compute_info(self.destination)
free_ram_mb = compute.free_ram_mb
total_ram_mb = compute.memory_mb
8 changes: 2 additions & 6 deletions nova/conf/scheduler.py
@@ -34,17 +34,13 @@
Other options are:
* 'caching_scheduler' which aggressively caches the system state for better
individual scheduler performance at the risk of more retries when running
multiple schedulers. [DEPRECATED]
* 'fake_scheduler' which is used for testing.
Possible values:
* Any of the drivers included in Nova:
* filter_scheduler
* caching_scheduler
* fake_scheduler
* You may also set this to the entry point name of a custom scheduler driver,
@@ -62,8 +58,8 @@
This value controls how often (in seconds) to run periodic tasks in the
scheduler. The specific tasks that are run for each period are determined by
the particular scheduler being used. Currently the only in-tree scheduler
driver that uses this option is the ``caching_scheduler``.
the particular scheduler being used. Currently there are no in-tree scheduler
drivers that use this option.
If this is larger than the nova-service 'service_down_time' setting, the
ComputeFilter (if enabled) may think the compute service is down. As each
107 changes: 0 additions & 107 deletions nova/scheduler/caching_scheduler.py

This file was deleted.

6 changes: 6 additions & 0 deletions nova/scheduler/driver.py
@@ -32,6 +32,12 @@
class Scheduler(object):
"""The base class that all Scheduler classes should inherit from."""

# TODO(mriedem): We should remove this flag now so that all scheduler
# drivers, both in-tree and out-of-tree, must rely on placement for
# scheduling decisions. We're likely going to have more and more code
# over time that relies on the scheduler creating allocations and it
# will not be sustainable to try and keep compatibility code around for
# scheduler drivers that do not create allocations in Placement.
USES_ALLOCATION_CANDIDATES = True
"""Indicates that the scheduler driver calls the Placement API for
allocation candidates and uses those allocation candidates in its
4 changes: 1 addition & 3 deletions nova/scheduler/filters/core_filter.py
@@ -83,9 +83,7 @@ def __init__(self):
LOG.warning('The CoreFilter is deprecated since the 19.0.0 Stein '
'release. VCPU filtering is performed natively using the '
'Placement service when using the filter_scheduler '
'driver. Users of the caching_scheduler driver may still '
'rely on this filter but the caching_scheduler driver is '
'itself deprecated. Furthermore, enabling CoreFilter '
'driver. Furthermore, enabling CoreFilter '
'may incorrectly filter out baremetal nodes which must be '
'scheduled using custom resource classes.')

10 changes: 4 additions & 6 deletions nova/scheduler/filters/disk_filter.py
@@ -33,12 +33,10 @@ def __init__(self):
LOG.warning('The DiskFilter is deprecated since the 19.0.0 Stein '
'release. DISK_GB filtering is performed natively '
'using the Placement service when using the '
'filter_scheduler driver. Users of the '
'caching_scheduler driver may still rely on this '
'filter but the caching_scheduler driver is itself '
'deprecated. Furthermore, enabling DiskFilter may '
'incorrectly filter out baremetal nodes which must be '
'scheduled using custom resource classes.')
'filter_scheduler driver. Furthermore, enabling '
'DiskFilter may incorrectly filter out baremetal '
'nodes which must be scheduled using custom resource '
'classes.')

def _get_disk_allocation_ratio(self, host_state, spec_obj):
return host_state.disk_allocation_ratio
7 changes: 2 additions & 5 deletions nova/scheduler/filters/ram_filter.py
@@ -73,11 +73,8 @@ def __init__(self):
LOG.warning('The RamFilter is deprecated since the 19.0.0 Stein '
'release. MEMORY_MB filtering is performed natively '
'using the Placement service when using the '
'filter_scheduler driver. Users of the '
'caching_scheduler driver may still rely on this '
'filter but the caching_scheduler driver is itself '
'deprecated. Furthermore, enabling RamFilter may '
'incorrectly filter out baremetal nodes which must be '
'filter_scheduler driver. Furthermore, enabling RamFilter '
'may incorrectly filter out baremetal nodes which must be '
'scheduled using custom resource classes.')

def _get_ram_allocation_ratio(self, host_state, spec_obj):
10 changes: 0 additions & 10 deletions nova/test.py
@@ -433,19 +433,9 @@ def restart_compute_service(self, compute):
def restart_scheduler_service(scheduler):
"""Restart a scheduler service in a realistic way.
Deals with resetting the host state cache in the case of using the
CachingScheduler driver.
:param scheduler: The nova-scheduler service to be restarted.
"""
scheduler.stop()
if hasattr(scheduler.manager.driver, 'all_host_states'):
# On startup, the CachingScheduler runs a periodic task to pull
# the initial set of compute nodes out of the database which it
# then puts into a cache (hence the name of the driver). This can
# race with actually starting the compute services so we need to
# restart the scheduler to refresh the cache.
scheduler.manager.driver.all_host_states = None
scheduler.start()

def assertJsonEqual(self, expected, observed, message=''):
17 changes: 10 additions & 7 deletions nova/tests/functional/regressions/test_bug_1671648.py
@@ -79,9 +79,7 @@ def setUp(self):
self.addCleanup(fake.restore_nodes)
self.start_service('compute', host='host2')

# Start the scheduler after the compute nodes are created in the DB
# in the case of using the CachingScheduler.
self.start_service('scheduler')
self.scheduler_service = self.start_service('scheduler')

self.useFixture(cast_as_call.CastAsCall(self))

@@ -153,9 +151,14 @@ def test_retry_build_on_compute_error(self):
self.assertEqual(2, self.attempts)


class TestRetryBetweenComputeNodeBuildsCachingScheduler(
class TestRetryBetweenComputeNodeBuildsNoAllocations(
TestRetryBetweenComputeNodeBuilds):
"""Tests the reschedule scenario using the CachingScheduler."""
"""Tests the reschedule scenario using a scheduler driver which does
not use Placement.
"""
def setUp(self):
self.flags(driver='caching_scheduler', group='scheduler')
super(TestRetryBetweenComputeNodeBuildsCachingScheduler, self).setUp()
super(TestRetryBetweenComputeNodeBuildsNoAllocations, self).setUp()
# We need to mock the FilterScheduler to not use Placement so that
# allocations won't be created during scheduling.
self.scheduler_service.manager.driver.USES_ALLOCATION_CANDIDATES = \
False
