#!/usr/bin/env python
"""
Stolen from Unladen Swallow
(http://unladen-swallow.googlecode.com/svn/tests/perf.py). I'm just using
some parts of this -- stats results, t-tests, etc. -- but I've snagged
the whole file because I'm lazy. --JKM
----
Tool for comparing the performance of two Python implementations.
Typical usage looks like
./perf.py -b 2to3,django control/python experiment/python
This will run the 2to3 and Django template benchmarks, using `control/python`
as the baseline and `experiment/python` as the experiment. The --fast and
--rigorous options can be used to vary the duration/accuracy of the run. Run
--help to get a full list of options that can be passed to -b.
perf.py will run Student's two-tailed T test on the benchmark results at the 95%
confidence level to indicate whether the observed difference is statistically
significant.
Omitting the -b option will result in the default group of benchmarks being run.
This currently consists of: 2to3, django, nbody, rietveld, slowspitfire,
slowpickle, slowunpickle, spambayes. Omitting -b is the same as specifying
`-b default`.
To run every benchmark perf.py knows about, use `-b all`. To see a full list of
all available benchmarks, use `--help`.
Negative benchmark specifications are also supported: `-b -2to3` will run every
benchmark in the default group except for 2to3 (this is the same as
`-b default,-2to3`). `-b all,-django` will run all benchmarks except the Django
templates benchmark. Negative groups (e.g., `-b -default`) are not supported.
Positive benchmarks are parsed before the negative benchmarks are subtracted.
If --track_memory is passed, perf.py will continuously sample the benchmark's
memory usage, then give you the maximum usage and a link to a Google Chart of
the benchmark's memory usage over time. This currently only works on Linux
2.6.16 and higher or Windows with PyWin32. Because --track_memory introduces
performance jitter while collecting memory measurements, only memory usage is
reported in the final report.
If --args is passed, it specifies extra arguments to pass to the test
python binaries. For example,
perf.py --args="-A -B,-C -D" base_python changed_python
will run benchmarks like
base_python -A -B the_benchmark.py
changed_python -C -D the_benchmark.py
while
perf.py --args="-A -B" base_python changed_python
will pass the same arguments to both pythons:
base_python -A -B the_benchmark.py
changed_python -A -B the_benchmark.py
"""
from __future__ import division, with_statement
__author__ = "jyasskin@google.com (Jeffrey Yasskin)"
import contextlib
import logging
import math
import optparse
import os
import os.path
import platform
import re
import shutil
import subprocess
import sys
import tempfile
import time
import threading
import urllib2
try:
import multiprocessing
except ImportError:
multiprocessing = None
try:
import win32api
import win32con
import win32process
import pywintypes
except ImportError:
win32api = None
info = logging.info
def avg(seq):
return sum(seq) / float(len(seq))
def SampleStdDev(seq):
"""Compute the standard deviation of a sample.
Args:
seq: the numeric input data sequence.
Returns:
The standard deviation as a float.
"""
mean = avg(seq)
squares = ((x - mean) ** 2 for x in seq)
return math.sqrt(sum(squares) / (len(seq) - 1))
# A table of 95% confidence intervals for a two-tailed t distribution, as a
# function of the degrees of freedom. For larger degrees of freedom, we
# approximate. While this may look less elegant than simply calculating the
# critical value, those calculations suck. Look at
# http://www.math.unb.ca/~knight/utility/t-table.htm if you need more values.
T_DIST_95_CONF_LEVELS = [0, 12.706, 4.303, 3.182, 2.776,
2.571, 2.447, 2.365, 2.306, 2.262,
2.228, 2.201, 2.179, 2.160, 2.145,
2.131, 2.120, 2.110, 2.101, 2.093,
2.086, 2.080, 2.074, 2.069, 2.064,
2.060, 2.056, 2.052, 2.048, 2.045,
2.042]
def TDist95ConfLevel(df):
"""Approximate the 95% confidence interval for Student's T distribution.
Given the degrees of freedom, returns an approximation to the 95%
confidence interval for the Student's T distribution.
Args:
df: An integer, the number of degrees of freedom.
Returns:
A float.
"""
df = int(round(df))
highest_table_df = len(T_DIST_95_CONF_LEVELS)
if df >= 200: return 1.960
if df >= 100: return 1.984
if df >= 80: return 1.990
if df >= 60: return 2.000
if df >= 50: return 2.009
if df >= 40: return 2.021
if df >= highest_table_df:
return T_DIST_95_CONF_LEVELS[highest_table_df - 1]
return T_DIST_95_CONF_LEVELS[df]
def PooledSampleVariance(sample1, sample2):
"""Find the pooled sample variance for two samples.
Args:
sample1: one sample.
sample2: the other sample.
Returns:
Pooled sample variance, as a float.
"""
deg_freedom = len(sample1) + len(sample2) - 2
mean1 = avg(sample1)
squares1 = ((x - mean1) ** 2 for x in sample1)
mean2 = avg(sample2)
squares2 = ((x - mean2) ** 2 for x in sample2)
return (sum(squares1) + sum(squares2)) / float(deg_freedom)
def TScore(sample1, sample2):
"""Calculate a t-test score for the difference between two samples.
Args:
sample1: one sample.
sample2: the other sample.
Returns:
The t-test score, as a float.
"""
assert len(sample1) == len(sample2)
error = PooledSampleVariance(sample1, sample2) / len(sample1)
return (avg(sample1) - avg(sample2)) / math.sqrt(error * 2)
def IsSignificant(sample1, sample2):
"""Determine whether two samples differ significantly.
This uses a Student's two-sample, two-tailed t-test at the 95% confidence level.
Args:
sample1: one sample.
sample2: the other sample.
Returns:
(significant, t_score) where significant is a bool indicating whether
the two samples differ significantly; t_score is the score from the
two-sample T test.
"""
deg_freedom = len(sample1) + len(sample2) - 2
critical_value = TDist95ConfLevel(deg_freedom)
t_score = TScore(sample1, sample2)
return (abs(t_score) >= critical_value, t_score)
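# Illustrative usage of the statistics helpers above (a sketch with made-up
# timings, not output from a real benchmark run):
#
#   base_times = [1.02, 1.05, 0.99, 1.01, 1.03]
#   changed_times = [0.89, 0.91, 0.90, 0.92, 0.88]
#   significant, t_score = IsSignificant(base_times, changed_times)
#
# Here deg_freedom is 5 + 5 - 2 = 8, so the critical value is
# T_DIST_95_CONF_LEVELS[8] (2.306); the difference is reported as significant
# when abs(t_score) reaches that threshold.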
### Code to parse Linux /proc/%d/smaps files.
### See http://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html for
### a quick introduction to smaps.
def _ParseSmapsData(smaps_data):
"""Parse the contents of a Linux 2.6 smaps file.
Args:
smaps_data: the smaps file contents, as a string.
Returns:
The size of the process's private data, in kilobytes.
"""
total = 0
for line in smaps_data.splitlines():
# Include both Private_Clean and Private_Dirty sections.
if line.startswith("Private_"):
parts = line.split()
total += int(parts[1])
return total
def _ReadSmapsFile(pid):
"""Read the Linux smaps file for a pid.
Args:
pid: the process id to retrieve smaps data for.
Returns:
The data from the smaps file, as a string.
Raises:
IOError if the smaps file for the given pid could not be found.
"""
with open("/proc/%d/smaps" % pid) as f:
return f.read()
# Code to sample memory usage on Win32
def _GetWin32MemorySample(process_handle):
"""Gets the amount of memory in use by a process on Win32
Args:
process_handle: handle to the process to get the memory usage for
Returns:
The size of the process's private data, in kilobytes.
"""
pmi = win32process.GetProcessMemoryInfo(process_handle)
return pmi["PagefileUsage"] // 1024
@contextlib.contextmanager
def _OpenWin32Process(pid):
"""Open a process on Win32 and close it when done
Args:
pid: the process id of the process to open
Yields:
A handle to the process
Raises:
pywintypes.error if the process does not exist or the user
does not have sufficient privileges to open it
Example:
with _OpenWin32Process(pid) as process_handle:
...
"""
h = win32api.OpenProcess(
win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ,
0,
pid)
try:
yield h
finally:
win32api.CloseHandle(h)
def CanGetMemoryUsage():
"""Returns True if MemoryUsageFuture is supported on this platform."""
if win32api:
try:
with _OpenWin32Process(win32process.GetCurrentProcessId()):
return True
except pywintypes.error:
pass
try:
_ReadSmapsFile(pid=1)
except IOError:
pass
else:
return True
return False
class MemoryUsageFuture(threading.Thread):
"""Continuously sample a process's memory usage for its lifetime.
Example:
future = MemoryUsageFuture(some_pid)
...
usage = future.GetMemoryUsage()
print max(usage)
Note that calls to GetMemoryUsage() will block until the process exits.
"""
def __init__(self, pid):
super(MemoryUsageFuture, self).__init__()
self._pid = pid
self._usage = []
self._done = threading.Event()
self.start()
def run(self):
if win32api:
with _OpenWin32Process(self._pid) as process_handle:
while (win32process.GetExitCodeProcess(process_handle) ==
win32con.STILL_ACTIVE):
sample = _GetWin32MemorySample(process_handle)
self._usage.append(sample)
time.sleep(0.001)
else:
while True:
try:
sample = _ParseSmapsData(_ReadSmapsFile(self._pid))
self._usage.append(sample)
except IOError:
# Once the process exits, its smaps file will go away,
# leading _ReadSmapsFile() to raise IOError.
break
self._done.set()
def GetMemoryUsage(self):
"""Get the memory usage over time for the process being sampled.
This will block until the process has exited.
Returns:
A list of all memory usage samples, in kilobytes.
"""
self._done.wait()
return self._usage
class RawData(object):
"""Raw data from a benchmark run.
Attributes:
runtimes: list of floats, one per iteration.
mem_usage: list of ints, memory usage in kilobytes.
inst_output: output from Unladen's --with-instrumentation build. This is
the empty string if there was no instrumentation output.
"""
def __init__(self, runtimes, mem_usage, inst_output=""):
self.runtimes = runtimes
self.mem_usage = mem_usage
self.inst_output = inst_output
class BenchmarkResult(object):
"""An object representing data from a succesful benchmark run."""
def __init__(self, min_base, min_changed, delta_min, avg_base,
avg_changed, delta_avg, t_msg, std_base, std_changed,
delta_std, timeline_link):
self.min_base = min_base
self.min_changed = min_changed
self.delta_min = delta_min
self.avg_base = avg_base
self.avg_changed = avg_changed
self.delta_avg = delta_avg
self.t_msg = t_msg
self.std_base = std_base
self.std_changed = std_changed
self.delta_std = delta_std
self.timeline_link = timeline_link
def get_timeline(self):
if self.timeline_link is None:
return ""
return "\nTimeline: %(timeline_link)s" % self.__dict__
def __str__(self):
return (("Min: %(min_base)f -> %(min_changed)f:" +
" %(delta_min)s\n" +
"Avg: %(avg_base)f -> %(avg_changed)f:" +
" %(delta_avg)s\n" + self.t_msg +
"Stddev: %(std_base).5f -> %(std_changed).5f:" +
" %(delta_std)s" + self.get_timeline())
% self.__dict__)
class BenchmarkError(object):
"""Object representing the error from a failed benchmark run."""
def __init__(self, e):
self.msg = str(e)
def __str__(self):
return self.msg
class MemoryUsageResult(object):
"""Memory usage data from a successful benchmark run."""
def __init__(self, max_base, max_changed, delta_max, timeline_link):
self.max_base = max_base
self.max_changed = max_changed
self.delta_max = delta_max
self.timeline_link = timeline_link
def get_usage_over_time(self):
if self.timeline_link is None:
return ""
return "\nUsage over time: %(timeline_link)s" % self.__dict__
def __str__(self):
return (("Mem max: %(max_base).3f -> %(max_changed).3f:" +
" %(delta_max)s" + self.get_usage_over_time())
% self.__dict__)
class SimpleBenchmarkResult(object):
"""Object representing result data from a successful benchmark run."""
def __init__(self, base_time, changed_time, time_delta):
self.base_time = base_time
self.changed_time = changed_time
self.time_delta = time_delta
def __str__(self):
return ("%(base_time)f -> %(changed_time)f: %(time_delta)s"
% self.__dict__)
class InstrumentationResult(object):
"""Object respresenting a --diff_instrumentation result."""
def __init__(self, inst_diff, options):
self.inst_diff = inst_diff
self._control_label = options.control_label
self._experiment_label = options.experiment_label
def __str__(self):
if not self.inst_diff:
return "No difference in instrumentation"
output = []
for header, (control, exp) in self.inst_diff.items():
output.append(header)
output.append(self._control_label)
output.append(control or "No data")
output.append("")
output.append(self._experiment_label)
output.append(exp or "No data")
output.append("\n")
return "\n".join(output).strip()
def CompareMemoryUsage(base_usage, changed_usage, options):
"""Like CompareMultipleRuns, but for memory usage.
Args:
base_usage: list of the memory usage numbers for the control.
changed_usage: list of the memory usage numbers for the experiment.
options: optparse.Values instance.
Returns:
A MemoryUsageResult object.
"""
max_base, max_changed = max(base_usage), max(changed_usage)
delta_max = QuantityDelta(max_base, max_changed)
if options.disable_timelines:
chart_link = None
else:
chart_link = GetChart(SummarizeData(base_usage),
SummarizeData(changed_usage),
options,
title=options.benchmark_name,
y_label="Memory+(kb)")
return MemoryUsageResult(max_base, max_changed, delta_max, chart_link)
### Utility functions
def _FormatPerfDataForTable(base_label, changed_label, results):
"""Prepare performance data for tabular output.
Args:
base_label: label for the control binary.
changed_label: label for the experimental binary.
results: iterable of (bench_name, result) 2-tuples where bench_name is
the name of the benchmark being reported; and result is a
BenchmarkResult object.
Returns:
A list of 6-tuples, where each tuple corresponds to a row in the output
table, and each item in the tuples corresponds to a cell in the output
table.
"""
table = [("Benchmark", base_label, changed_label, "Change", "Significance",
"Timeline")]
for (bench_name, result) in results:
table.append((bench_name,
# Limit the precision for conciseness in the table.
str(round(result.avg_base, 2)),
str(round(result.avg_changed, 2)),
result.delta_avg,
result.t_msg.strip(),
result.timeline_link))
return table
def _FormatMemoryUsageForTable(base_label, changed_label, results):
"""Prepare memory usage data for tabular output.
Args:
base_label: label for the control binary.
changed_label: label for the experimental binary.
results: iterable of (bench_name, result) 2-tuples where bench_name is
the name of the benchmark being reported; and result is a
MemoryUsageResult object.
Returns:
A list of 5-tuples, where each tuple corresponds to a row in the output
table, and each item in the tuples corresponds to a cell in the output
table.
"""
table = [("Benchmark", base_label, changed_label, "Change", "Timeline")]
for (bench_name, result) in results:
table.append((bench_name,
# We don't care about fractional kilobytes.
str(int(result.max_base)),
str(int(result.max_changed)),
result.delta_max,
result.timeline_link))
return table
def FormatOutputAsTable(base_label, changed_label, results):
"""Format a benchmark result in a PEP-fiendly ASCII-art table.
Args:
base_label: label to use for the baseline binary.
changed_label: label to use for the experimental binary.
results: list of (bench_name, result) 2-tuples, where bench_name is the
name of the just-run benchmark; and result is a BenchmarkResult
object.
Returns:
A string holding the desired ASCII-art table.
"""
if isinstance(results[0][1], BenchmarkResult):
table = _FormatPerfDataForTable(base_label, changed_label, results)
elif isinstance(results[0][1], MemoryUsageResult):
table = _FormatMemoryUsageForTable(base_label, changed_label, results)
else:
raise TypeError("Unknown result type: %r" % type(results[0][1]))
col_widths = [0] * len(table[0])
for row in table:
for col, val in enumerate(row):
col_widths[col] = max(col_widths[col], len(val))
outside_line = "+"
header_sep_line = "+"
for width in col_widths:
width += 2 # Compensate for the left and right padding spaces.
outside_line += "-" * width + "+"
header_sep_line += "=" * width + "+"
output = [outside_line]
for row_i, row in enumerate(table):
output_row = []
for col_i, val in enumerate(row):
output_row.append("| " + val.ljust(col_widths[col_i]) + " ")
output.append("".join(output_row) + "|")
if row_i > 0:
output.append(outside_line)
output.insert(2, header_sep_line)
return "\n".join(output)
def _SegmentInstrumentation(inst_output):
"""Cut --with-instrumentation output into its component sections.
Instrumentation sections are separated by two newlines, and begin with a
header that ends in a colon and a newline.
Args:
inst_output: text holding full --with-instrumentation output.
Returns:
Dict mapping string section headers to section output text.
"""
if not inst_output:
return {}
sections = {}
text_sections = [s.strip() for s in inst_output.split("\n\n")]
for section in text_sections:
header, lines = section.split("\n", 1)
if header.endswith(":"):
sections[header] = lines
return sections
def DiffInstrumentation(control_inst_output, exp_inst_output):
"""Compare the instrumentation output from two Unladen Swallow binaries.
These binaries should have been configured with Unladen's
--with-instrumentation flag.
Args:
control_inst_output: string; the control binary's instrumentation data.
exp_inst_output: string; the experimental binary's instrumentation data.
Returns:
Dict mapping section headers to (control, exp) 2-tuples, where `control`
is the output section from control binary, and `exp` is the output
section from the experimental binary. If either `control` or `exp` is
the empty string, that binary did not emit the section.
"""
control_sections = _SegmentInstrumentation(control_inst_output)
exp_sections = _SegmentInstrumentation(exp_inst_output)
control_keys = set(control_sections)
exp_keys = set(exp_sections)
diff = {}
for section in (control_keys - exp_keys):
diff[section] = (control_sections[section], "")
for section in (exp_keys - control_keys):
diff[section] = ("", exp_sections[section])
for section in (exp_keys & control_keys):
if control_sections[section] != exp_sections[section]:
diff[section] = (control_sections[section], exp_sections[section])
return diff
def SimpleBenchmark(benchmark_function, base_python, changed_python, options,
*args, **kwargs):
"""Abstract out the body for most simple benchmarks.
Example usage:
def BenchmarkSomething(*args, **kwargs):
return SimpleBenchmark(MeasureSomething, *args, **kwargs)
The *args, **kwargs style is recommended so as to minimize the number of
places that have to be changed if we update benchmark arguments.
Args:
benchmark_function: callback that takes (python_path, options) and
returns a RawData instance.
base_python: path to the reference Python binary.
changed_python: path to the experimental Python binary.
options: optparse.Values instance.
*args, **kwargs: will be passed through to benchmark_function.
Returns:
A BenchmarkResult object if the benchmark runs succeeded.
A BenchmarkError object if either benchmark run failed.
"""
try:
changed_data = benchmark_function(changed_python, options,
*args, **kwargs)
base_data = benchmark_function(base_python, options,
*args, **kwargs)
except subprocess.CalledProcessError, e:
return BenchmarkError(e)
return CompareBenchmarkData(base_data, changed_data, options)
def _FormatData(num):
return str(round(num, 2))
def GetChart(base_data, changed_data, options, title, y_label,
chart_margin=100):
"""Build a Google Chart API URL for the given data.
Args:
base_data: data points for the base binary.
changed_data: data points for the changed binary.
options: optparse.Values instance.
title: title for the chart.
y_label: label for Y axis on the chart.
chart_margin: optional integer margin to add/sub from the max/min.
Returns:
Google Chart API URL as a string; or None, if options.disable_timelines
is true.
"""
if options.disable_timelines:
return None
# We use these to scale the graph.
max_data = max(max(base_data), max(changed_data)) + chart_margin
min_data = min(min(base_data), min(changed_data)) - chart_margin
if min_data < 0:
min_data = 0
# Google-bound data, formatted as desired by the Chart API.
data_for_google = (",".join(map(_FormatData, base_data)) + "|" +
",".join(map(_FormatData, changed_data)))
# Come up with labels for the X axis; not too many, though, or they'll be
# unreadable.
max_len = max(len(base_data), len(changed_data))
if max_len <= 20:
points = range(1, max_len + 1)
else:
points = SummarizeData(range(1, max_len + 1), points=5)
if points[0] != 1:
points.insert(0, 1)
x_axis_labels = "".join("|%d" % i for i in points)
# Parameters for the Google Chart API. See
# http://code.google.com/apis/chart/ for more details.
# cht=lc: line graph with visible axes.
# chs: dimensions of the graph, in pixels.
# chdl: labels for the graph lines.
# chco: colors for the graph lines.
# chds: minimum and maximum values for the vertical axis.
# chxr: minimum and maximum values for the vertical axis labels.
# chd=t: the data sets, |-separated.
# chxt: which axes to draw.
# chxl: labels for the axes.
# chtt: chart title, using + for space and | for line breaks
control_label = options.control_label
experiment_label = options.experiment_label
title = title.replace(' ', '+').replace('\n', '|')
raw_url = ("http://chart.apis.google.com/chart?cht=lc&chs=700x400&"
"chxt=x,y,x,y&"
"chxr=1,%(min_data)s,%(max_data)s&chco=FF0000,0000FF&"
"chdl=%(control_label)s|%(experiment_label)s&"
"chds=%(min_data)s,%(max_data)s&chd=t:%(data_for_google)s&"
"chxl=0:%(x_axis_labels)s|2:||Iteration|3:||%(y_label)s&"
"chtt=%(title)s"
% locals())
return ShortenUrl(raw_url)
def ShortenUrl(url):
"""Shorten a given URL using tinyurl.com.
Args:
url: url to shorten.
Returns:
Shorter url. If tinyurl.com is not available, returns the original
url unaltered.
"""
tinyurl_api = "http://tinyurl.com/api-create.php?url="
try:
url = urllib2.urlopen(tinyurl_api + url).read()
except urllib2.URLError:
info("failed to call out to tinyurl.com")
return url
def SummarizeData(data, points=100, summary_func=max):
"""Summarize a large data set using a smaller number of points.
This will divide up the original data set into `points` windows,
using `summary_func` to summarize each window into a single point.
Args:
data: the original data set, as a list.
points: optional; how many summary points to take. Default is 100.
summary_func: optional; function to use when summarizing each window.
Default is the max() built-in.
Returns:
List of summary data points.
"""
window_size = int(math.ceil(len(data) / points))
if window_size == 1:
return data
summary_points = []
start = 0
while start < len(data):
end = min(start + window_size, len(data))
summary_points.append(summary_func(data[start:end]))
start = end
return summary_points
@contextlib.contextmanager
def ChangeDir(new_cwd):
former_cwd = os.getcwd()
os.chdir(new_cwd)
try:
yield
finally:
os.chdir(former_cwd)
def RemovePycs():
if sys.platform == "win32":
for root, dirs, files in os.walk('.'):
for name in files:
if name.endswith('.pyc') or name.endswith('.pyo'):
os.remove(os.path.join(root, name))
else:
subprocess.check_call(["find", ".", "-name", "*.py[co]",
"-exec", "rm", "-f", "{}", ";"])
def Relative(path):
return os.path.join(os.path.dirname(__file__), path)
def LogCall(command):
command = map(str, command)
info("Running %s", " ".join(command))
return command
try:
import resource
except ImportError:
# Approximate child time using wall clock time.
def GetChildUserTime():
return time.time()
else:
def GetChildUserTime():
return resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
@contextlib.contextmanager
def TemporaryFilename(prefix):
fd, name = tempfile.mkstemp(prefix=prefix)
os.close(fd)
try:
yield name
finally:
os.remove(name)
def TimeDelta(old, new):
if old == 0 or new == 0:
return "incomparable (one result was zero)"
if new > old:
return "%.4fx slower" % (new / old)
elif new < old:
return "%.4fx faster" % (old / new)
else:
return "no change"
def QuantityDelta(old, new):
if old == 0 or new == 0:
return "incomparable (one result was zero)"
if new > old:
return "%.4fx larger" % (new / old)
elif new < old:
return "%.4fx smaller" % (old / new)
else:
return "no change"
def BuildEnv(env=None, inherit_env=[]):
"""Massage an environment variables dict for the host platform.
Massaging performed (in this order):
- Add any variables named in inherit_env.
- Copy PYTHONPATH to JYTHONPATH to support Jython.
- Win32 requires certain env vars to be set.
Args:
env: optional; environment variables dict. If this is omitted, start
with an empty environment.
inherit_env: optional; iterable of strings, each the name of an
environment variable to inherit from os.environ.
Returns:
A copy of `env`, possibly with modifications.
"""
if env is None:
env = {}
fixed_env = env.copy()
for varname in inherit_env:
fixed_env[varname] = os.environ[varname]
if "PYTHONPATH" in fixed_env:
fixed_env["JYTHONPATH"] = fixed_env["PYTHONPATH"]
if sys.platform == "win32":
# Win32 requires certain environment variables be present
for k in ("COMSPEC", "SystemRoot"):
if k in os.environ and k not in fixed_env:
fixed_env[k] = os.environ[k]
return fixed_env
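# Illustrative call (hypothetical paths; assumes HOME is set in os.environ):
#   BuildEnv({"PYTHONPATH": "/opt/benchlibs"}, inherit_env=["HOME"])
# returns a dict containing PYTHONPATH, a matching JYTHONPATH, and the
# inherited HOME, plus COMSPEC/SystemRoot when running on win32.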
def CompareMultipleRuns(base_times, changed_times, options):
"""Compare multiple control vs experiment runs of the same benchmark.
Args:
base_times: iterable of float times (control).
changed_times: iterable of float times (experiment).
options: optparse.Values instance.
Returns:
A BenchmarkResult object, summarizing the difference between the two
runs; or a SimpleBenchmarkResult object, if there was only one data
point per run.
"""
assert len(base_times) == len(changed_times)
if len(base_times) == 1:
# With only one data point, we can't do any of the interesting stats
# below.
base_time, changed_time = base_times[0], changed_times[0]
time_delta = TimeDelta(base_time, changed_time)
return SimpleBenchmarkResult(base_time, changed_time, time_delta)
# Create a chart showing iteration times over time. We round the times so
# as not to exceed the GET limit for Google's chart server.
timeline_link = GetChart(SummarizeData(base_times),
SummarizeData(changed_times),
options,
title=options.benchmark_name,
y_label="Time+(secs)",
chart_margin=1)
base_times = sorted(base_times)
changed_times = sorted(changed_times)
min_base, min_changed = base_times[0], changed_times[0]
avg_base, avg_changed = avg(base_times), avg(changed_times)
std_base = SampleStdDev(base_times)
std_changed = SampleStdDev(changed_times)
delta_min = TimeDelta(min_base, min_changed)
delta_avg = TimeDelta(avg_base, avg_changed)
delta_std = QuantityDelta(std_base, std_changed)
t_msg = "Not significant\n"
significant, t_score = IsSignificant(base_times, changed_times)
if significant:
t_msg = "Significant (t=%f)\n" % t_score
return BenchmarkResult(min_base, min_changed, delta_min, avg_base,
avg_changed, delta_avg, t_msg, std_base,
std_changed, delta_std, timeline_link)
def CompareBenchmarkData(base_data, exp_data, options):
"""Compare performance and memory usage.
Args:
base_data: RawData instance for the control binary.
exp_data: RawData instance for the experimental binary.
options: optparse.Values instance.
Returns:
Something that implements a __str__() method:
- BenchmarkResult: summarizes the difference between the two runs.
- SimpleBenchmarkResult: if there was only one data point per run.
- InstrumentationResult: if --diff_instrumentation was given.
- MemoryUsageResult: if --track_memory was given.
- BenchmarkError: if something went wrong.
"""
# We suppress performance data when running with --track_memory or
# --diff_instrumentation.
if options.track_memory:
if base_data.mem_usage is not None:
assert exp_data.mem_usage is not None
return CompareMemoryUsage(base_data.mem_usage, exp_data.mem_usage,
options)
return BenchmarkError("Benchmark does not report memory usage yet")
if options.diff_instrumentation:
inst_diff = DiffInstrumentation(base_data.inst_output,
exp_data.inst_output)
return InstrumentationResult(inst_diff, options)
return CompareMultipleRuns(base_data.runtimes, exp_data.runtimes, options)
def CallAndCaptureOutput(command, env=None, track_memory=False, inherit_env=[]):
"""Run the given command, capturing stdout.
Args:
command: the command to run as a list, one argument per element.