cpufreq: interactive: New 'interactive' governor

New interactive governor.

This governor is designed for latency-sensitive workloads, for example UI
interaction.

Advantages:
+ significantly more responsive at ramping the cpu up when required (UI
interaction)
+ more consistent ramping: existing governors do their cpu load sampling in a
workqueue context, while the 'interactive' governor does this in a timer
context, which gives more consistent cpu load sampling.
+ higher priority for cpu frequency increases: an rt_workqueue is used for
scaling up, giving the remaining tasks the cpu performance benefit, unlike
existing governors which schedule ramp-up work to occur after your performance
starved tasks have completed.

Existing governors sample cpu load at a particular rate, typically
every X ms, which can lead to under-powering UI threads when the user has
interacted with an idle system, until the next sample period happens.

The 'interactive' governor takes a different approach. Instead of sampling the
cpu at a specified rate, the governor scales the cpu frequency up when coming
out of idle. When the cpu comes out of idle, a timer is configured to fire
within 1-2 ticks. If the cpu is 100% busy from exiting idle to when the timer
fires, then we assume the cpu is underpowered and ramp to MAX speed.
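The idle-exit ramp-up rule can be sketched as a plain user-space function (a simplified model of the decision described above, not the kernel code; the names and values are illustrative):

```c
#include <stddef.h>

/* Ramp-up decision: if the CPU accumulated no idle time between exiting
 * idle and the timer firing (delta_idle == 0), it was 100% busy for the
 * whole window, so jump straight to the maximum frequency. Otherwise
 * keep the current frequency and leave the decision to the ramp-down
 * path, which evaluates load over a longer window. */
unsigned int rampup_target(unsigned long long delta_idle_us,
                           unsigned int cur_khz, unsigned int max_khz)
{
    if (delta_idle_us == 0)
        return max_khz;
    return cur_khz;
}
```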

If the cpu was not 100% busy, the governor evaluates the cpu load over the
last 'min_sample_time' (default 50000 uS) to determine the cpu speed to ramp
down to.
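The ramp-down evaluation is a simple proportional calculation: measure the busy fraction of the sample window and scale the current frequency by it (an illustrative sketch in microseconds, not the kernel code itself):

```c
/* Ramp-down target: cpu_load is the percentage of the sample window
 * spent busy; the new frequency is the current frequency scaled by
 * that load. A CPU 50% busy at 1 GHz would target 500 MHz. */
unsigned int ramp_down_freq(unsigned int delta_time_us,
                            unsigned int idle_time_us,
                            unsigned int cur_khz)
{
    unsigned int cpu_load =
        100 * (delta_time_us - idle_time_us) / delta_time_us;
    return cur_khz * cpu_load / 100;
}
```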

There is only one tuneable for this governor:
/sys/devices/system/cpu/cpufreq/interactive/min_sample_time:
	The minimum amount of time to spend at the current frequency before
	ramping down. This is to ensure that the governor has seen enough
	historic cpu load data to determine the appropriate workload.
	Default is 50000 uS.
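The gating effect of min_sample_time can be modeled as (a hypothetical helper for illustration, not a function in the patch):

```c
/* Only permit a ramp down once at least min_sample_time_us has elapsed
 * since the last frequency change, so the governor has seen enough
 * load history at the current frequency. */
int may_ramp_down(unsigned long long now_us,
                  unsigned long long last_freq_change_us,
                  unsigned long min_sample_time_us)
{
    return (now_us - last_freq_change_us) >= min_sample_time_us;
}
```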

Signed-off-by: Mike Chan <mike@android.com>
Change-Id: I686d2f57b0ed9cbb73217403b7438be5719588d2
commit 255f13bf41f368aa51638a854ed69cfc60f39120 1 parent 60ae2d9
authored June 22, 2010 cyanogen committed July 24, 2010
24  Documentation/cpu-freq/governors.txt
@@ -28,6 +28,7 @@ Contents:
 2.3  Userspace
 2.4  Ondemand
 2.5  Conservative
+2.6  Interactive
 
 3.   The Governor Interface in the CPUfreq Core
 
@@ -182,6 +183,29 @@ governor but for the opposite direction.  For example when set to its
 default value of '20' it means that if the CPU usage needs to be below
 20% between samples to have the frequency decreased.
 
+
+2.6 Interactive
+---------------
+
+The CPUfreq governor "interactive" is designed for low latency,
+interactive workloads. This governor sets the CPU speed depending on
+usage, similar to the "ondemand" and "conservative" governors. However,
+there is no polling or 'sample_rate' required to scale the CPU up.
+
+Sampling CPU load every X ms can lead to under powering the CPU
+for X ms, leading to dropped framerate, stuttering UI, etc.
+
+Scaling the CPU up is done when coming out of idle, and like "ondemand",
+scaling up will always go to MAX, then step down based off of cpu load.
+
+There is only one tuneable value for this governor:
+
+min_sample_time: The amount of time the CPU must spend (in uS)
+at the current frequency before scaling DOWN. This is done to
+more accurately determine the cpu workload and the best speed for that
+workload. The default is 50ms.
+
+
 3. The Governor Interface in the CPUfreq Core
 =============================================
 
16  drivers/cpufreq/Kconfig
@@ -110,6 +110,15 @@ config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
	  Be aware that not all cpufreq drivers support the conservative
	  governor. If unsure have a look at the help section of the
	  driver. Fallback governor will be the performance governor.
+
+config CPU_FREQ_DEFAULT_GOV_INTERACTIVE
+	bool "interactive"
+	select CPU_FREQ_GOV_INTERACTIVE
+	help
+	  Use the 'interactive' governor as default. This provides full cpu
+	  frequency scaling for workloads that are latency sensitive, typically
+	  interactive workloads.
+
 endchoice
 
 config CPU_FREQ_GOV_PERFORMANCE
@@ -167,6 +176,13 @@ config CPU_FREQ_GOV_ONDEMAND
 
	  If in doubt, say N.
 
+config CPU_FREQ_GOV_INTERACTIVE
+	tristate "'interactive' cpufreq policy governor"
+	help
+	  'interactive' - This driver adds a dynamic cpufreq policy governor.
+	  Designed for low latency, bursty workloads. Scaling is done when
+	  coming out of idle instead of by polling.
+
 config CPU_FREQ_GOV_CONSERVATIVE
	tristate "'conservative' cpufreq governor"
	depends on CPU_FREQ
1  drivers/cpufreq/Makefile
@@ -9,6 +9,7 @@ obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE)	+= cpufreq_powersave.o
 obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE)	+= cpufreq_userspace.o
 obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND)	+= cpufreq_ondemand.o
 obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE)	+= cpufreq_conservative.o
+obj-$(CONFIG_CPU_FREQ_GOV_INTERACTIVE)	+= cpufreq_interactive.o
 
 # CPUfreq cross-arch helpers
 obj-$(CONFIG_CPU_FREQ_TABLE)		+= freq_table.o
323  drivers/cpufreq/cpufreq_interactive.c
@@ -0,0 +1,323 @@
+/*
+ * drivers/cpufreq/cpufreq_interactive.c
+ *
+ * Copyright (C) 2010 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Author: Mike Chan (mike@android.com)
+ *
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/cpufreq.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/tick.h>
+#include <linux/timer.h>
+#include <linux/workqueue.h>
+
+#include <asm/cputime.h>
+
+static void (*pm_idle_old)(void);
+static atomic_t active_count = ATOMIC_INIT(0);
+
+static DEFINE_PER_CPU(struct timer_list, cpu_timer);
+
+static DEFINE_PER_CPU(u64, time_in_idle);
+static DEFINE_PER_CPU(u64, idle_exit_time);
+
+static struct cpufreq_policy *policy;
+static unsigned int target_freq;
+
+/* Workqueues handle frequency scaling */
+static struct workqueue_struct *up_wq;
+static struct workqueue_struct *down_wq;
+static struct work_struct freq_scale_work;
+
+static u64 freq_change_time;
+static u64 freq_change_time_in_idle;
+
+static cpumask_t work_cpumask;
+
+/*
+ * The minimum amount of time to spend at a frequency before we can ramp down,
+ * default is 50ms.
+ */
+#define DEFAULT_MIN_SAMPLE_TIME 50000
+static unsigned long min_sample_time;
+
+static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
+		unsigned int event);
+
+#ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE
+static
+#endif
+struct cpufreq_governor cpufreq_gov_interactive = {
+	.name = "interactive",
+	.governor = cpufreq_governor_interactive,
+	.max_transition_latency = 10000000,
+	.owner = THIS_MODULE,
+};
+
+static void cpufreq_interactive_timer(unsigned long data)
+{
+	u64 delta_idle;
+	u64 update_time;
+	u64 *cpu_time_in_idle;
+	u64 *cpu_idle_exit_time;
+	struct timer_list *t;
+
+	u64 now_idle = get_cpu_idle_time_us(data, &update_time);
+
+	cpu_time_in_idle = &per_cpu(time_in_idle, data);
+	cpu_idle_exit_time = &per_cpu(idle_exit_time, data);
+
+	if (update_time == *cpu_idle_exit_time)
+		return;
+
+	delta_idle = cputime64_sub(now_idle, *cpu_time_in_idle);
+
+	/* Scale up if there were no idle cycles since coming out of idle */
+	if (delta_idle == 0) {
+		if (policy->cur == policy->max)
+			return;
+
+		if (nr_running() < 1)
+			return;
+
+		target_freq = policy->max;
+		cpumask_set_cpu(data, &work_cpumask);
+		queue_work(up_wq, &freq_scale_work);
+		return;
+	}
+
+	/*
+	 * There is a window where cpu utilization can go from low to high
+	 * between timer expirations: delta_idle will be > 0 and the cpu will
+	 * be 100% busy, preventing idle from running, and this timer from
+	 * firing. So set up another timer to fire and check cpu utilization.
+	 * Do not set up the timer if there is no scheduled work.
+	 */
+	t = &per_cpu(cpu_timer, data);
+	if (!timer_pending(t) && nr_running() > 0) {
+		*cpu_time_in_idle = get_cpu_idle_time_us(
+				data, cpu_idle_exit_time);
+		mod_timer(t, jiffies + 2);
+	}
+
+	if (policy->cur == policy->min)
+		return;
+
+	/*
+	 * Do not scale down unless we have been at this frequency for the
+	 * minimum sample time.
+	 */
+	if (cputime64_sub(update_time, freq_change_time) < min_sample_time)
+		return;
+
+	target_freq = policy->min;
+	cpumask_set_cpu(data, &work_cpumask);
+	queue_work(down_wq, &freq_scale_work);
+}
+
+static void cpufreq_idle(void)
+{
+	struct timer_list *t;
+	u64 *cpu_time_in_idle;
+	u64 *cpu_idle_exit_time;
+
+	pm_idle_old();
+
+	if (!cpumask_test_cpu(smp_processor_id(), policy->cpus))
+		return;
+
+	/* Timer to fire in 1-2 ticks, jiffie aligned. */
+	t = &per_cpu(cpu_timer, smp_processor_id());
+	cpu_idle_exit_time = &per_cpu(idle_exit_time, smp_processor_id());
+	cpu_time_in_idle = &per_cpu(time_in_idle, smp_processor_id());
+
+	if (timer_pending(t) == 0) {
+		*cpu_time_in_idle = get_cpu_idle_time_us(
+				smp_processor_id(), cpu_idle_exit_time);
+		mod_timer(t, jiffies + 2);
+	}
+}
+
+/*
+ * Choose the cpu frequency based off the load. For now choose the minimum
+ * frequency that will satisfy the load, which is not always the lowest power.
+ */
+static unsigned int cpufreq_interactive_calc_freq(unsigned int cpu)
+{
+	unsigned int delta_time;
+	unsigned int idle_time;
+	unsigned int cpu_load;
+	u64 current_wall_time;
+	u64 current_idle_time;
+
+	current_idle_time = get_cpu_idle_time_us(cpu, &current_wall_time);
+
+	idle_time = (unsigned int) current_idle_time - freq_change_time_in_idle;
+	delta_time = (unsigned int) current_wall_time - freq_change_time;
+
+	cpu_load = 100 * (delta_time - idle_time) / delta_time;
+
+	return policy->cur * cpu_load / 100;
+}
+
+/* We use the same work function to scale up and down */
+static void cpufreq_interactive_freq_change_time_work(struct work_struct *work)
+{
+	unsigned int cpu;
+	cpumask_t tmp_mask = work_cpumask;
+	for_each_cpu(cpu, tmp_mask) {
+		if (target_freq == policy->max) {
+			if (nr_running() == 1) {
+				cpumask_clear_cpu(cpu, &work_cpumask);
+				return;
+			}
+
+			__cpufreq_driver_target(policy, target_freq,
+					CPUFREQ_RELATION_H);
+		} else {
+			target_freq = cpufreq_interactive_calc_freq(cpu);
+			__cpufreq_driver_target(policy, target_freq,
+					CPUFREQ_RELATION_L);
+		}
+		freq_change_time_in_idle = get_cpu_idle_time_us(cpu,
+				&freq_change_time);
+
+		cpumask_clear_cpu(cpu, &work_cpumask);
+	}
+}
+
+static ssize_t show_min_sample_time(struct kobject *kobj,
+				struct attribute *attr, char *buf)
+{
+	return sprintf(buf, "%lu\n", min_sample_time);
+}
+
+static ssize_t store_min_sample_time(struct kobject *kobj,
+			struct attribute *attr, const char *buf, size_t count)
+{
+	return strict_strtoul(buf, 0, &min_sample_time);
+}
+
+static struct global_attr min_sample_time_attr = __ATTR(min_sample_time, 0644,
+		show_min_sample_time, store_min_sample_time);
+
+static struct attribute *interactive_attributes[] = {
+	&min_sample_time_attr.attr,
+	NULL,
+};
+
+static struct attribute_group interactive_attr_group = {
+	.attrs = interactive_attributes,
+	.name = "interactive",
+};
+
+static int cpufreq_governor_interactive(struct cpufreq_policy *new_policy,
+		unsigned int event)
+{
+	int rc;
+	switch (event) {
+	case CPUFREQ_GOV_START:
+		if (!cpu_online(new_policy->cpu))
+			return -EINVAL;
+
+		/*
+		 * Do not register the idle hook and create sysfs
+		 * entries if we have already done so.
+		 */
+		if (atomic_inc_return(&active_count) > 1)
+			return 0;
+
+		rc = sysfs_create_group(cpufreq_global_kobject,
+				&interactive_attr_group);
+		if (rc)
+			return rc;
+
+		pm_idle_old = pm_idle;
+		pm_idle = cpufreq_idle;
+		policy = new_policy;
+		break;
+
+	case CPUFREQ_GOV_STOP:
+		if (atomic_dec_return(&active_count) > 1)
+			return 0;
+
+		sysfs_remove_group(cpufreq_global_kobject,
+				&interactive_attr_group);
+
+		pm_idle = pm_idle_old;
+		del_timer(&per_cpu(cpu_timer, new_policy->cpu));
+		break;
+
+	case CPUFREQ_GOV_LIMITS:
+		if (new_policy->max < new_policy->cur)
+			__cpufreq_driver_target(new_policy,
+					new_policy->max, CPUFREQ_RELATION_H);
+		else if (new_policy->min > new_policy->cur)
+			__cpufreq_driver_target(new_policy,
+					new_policy->min, CPUFREQ_RELATION_L);
+		break;
+	}
+	return 0;
+}
+
+static int __init cpufreq_interactive_init(void)
+{
+	unsigned int i;
+	struct timer_list *t;
+	min_sample_time = DEFAULT_MIN_SAMPLE_TIME;
+
+	/* Initialize per-cpu timers */
+	for_each_possible_cpu(i) {
+		t = &per_cpu(cpu_timer, i);
+		init_timer_deferrable(t);
+		t->function = cpufreq_interactive_timer;
+		t->data = i;
+	}
+
+	/* Scale up is high priority */
+	up_wq = create_rt_workqueue("kinteractive_up");
+	down_wq = create_workqueue("kinteractive_down");
+
+	INIT_WORK(&freq_scale_work, cpufreq_interactive_freq_change_time_work);
+
+	return cpufreq_register_governor(&cpufreq_gov_interactive);
+}
+
+#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE
+pure_initcall(cpufreq_interactive_init);
+#else
+module_init(cpufreq_interactive_init);
+#endif
+
+static void __exit cpufreq_interactive_exit(void)
+{
+	cpufreq_unregister_governor(&cpufreq_gov_interactive);
+	destroy_workqueue(up_wq);
+	destroy_workqueue(down_wq);
+}
+
+module_exit(cpufreq_interactive_exit);
+
+MODULE_AUTHOR("Mike Chan <mike@android.com>");
+MODULE_DESCRIPTION("'cpufreq_interactive' - A cpufreq governor for "
+	"latency sensitive workloads");
+MODULE_LICENSE("GPL");
3  include/linux/cpufreq.h
@@ -339,6 +339,9 @@ extern struct cpufreq_governor cpufreq_gov_ondemand;
 #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE)
 extern struct cpufreq_governor cpufreq_gov_conservative;
 #define CPUFREQ_DEFAULT_GOVERNOR	(&cpufreq_gov_conservative)
+#elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE)
+extern struct cpufreq_governor cpufreq_gov_interactive;
+#define CPUFREQ_DEFAULT_GOVERNOR	(&cpufreq_gov_interactive)
 #endif
 
 
