
Initial import

git-svn-id: https://oltpbenchmark.googlecode.com/svn/trunk@2 8c9255d6-0b70-036a-fcb6-8a999947b34c
curino committed Aug 19, 2011
1 parent 2fb3c0b commit 6ba5e13fab569af7fa748e47115f7d8981a5ed76
Showing with 3,492 additions and 0 deletions.
  1. +1 −0 README
  2. +66 −0 README.original
  3. +36 −0 build.xml
  4. +7 −0 classpath.sh
  5. +2 −0 config/config.txt
  6. +27 −0 config/config.xml
  7. +11 −0 config/profile.txt
  8. +27 −0 latency.py
  9. +3 −0 launch_profiled_Test.sh
  10. BIN lib/commons-cli-1.2.jar
  11. BIN lib/commons-collections-3.2.1.jar
  12. BIN lib/commons-configuration-1.6.jar
  13. BIN lib/commons-lang-2.6.jar
  14. BIN lib/commons-logging-1.1.1.jar
  15. BIN lib/edb-jdbc14-8_0_3_14.jar
  16. BIN lib/ganymed-ssh2-build250.jar
  17. BIN lib/hsqldb.jar
  18. BIN lib/mysql-connector-java-5.1.10-bin.jar
  19. BIN lib/ojdbc14-10.2.jar
  20. BIN lib/postgresql-8.0.309.jdbc3.jar
  21. +6 −0 libs
  22. +3 −0 manifest.mf
  23. +4 −0 matlab/README
  24. +66 −0 matlab/fast_deverticalize_align.m
  25. +1 −0 matlab/go.m
  26. +67 −0 matlab/load_and_plot.m
  27. +483 −0 nbproject/build-impl.xml
  28. +31 −0 nbproject/genfiles.properties
  29. +27 −0 nbproject/private/private.properties
  30. +27 −0 nbproject/private/private.xml
  31. +77 −0 nbproject/project.properties
  32. +37 −0 nbproject/project.xml
  33. +64 −0 recoverytest.py
  34. +34 −0 relcloud_tpcc.sh
  35. +6 −0 run/classpath.sh
  36. +10 −0 run/createtrace_fromdb.sh
  37. +6 −0 run/diskStress
  38. +130 −0 run/distribution.py
  39. +27 −0 run/enterprisedb.properties
  40. +7 −0 run/launchHSQLDBtest
  41. +7 −0 run/launchHSQLDBtest2
  42. +8 −0 run/launchMySQLTest
  43. +1 −0 run/launchScalingTest
  44. +1 −0 run/loadData.sh
  45. +40 −0 run/mysql-remote.properties
  46. +27 −0 run/oracle.properties
  47. +105 −0 run/plot_raw.py
  48. +7 −0 run/plotme
  49. +7 −0 run/plotme2
  50. +27 −0 run/postgres.properties
  51. +27 −0 run/relcloud.properties
  52. +8 −0 run/reset.sql
  53. +1 −0 run/runBenchmark.sh
  54. +3 −0 run/runHeadless.bash
  55. +1 −0 run/runSQL.sh
  56. +2 −0 run/runWikipediaBenchmark.sh
  57. +53 −0 run/sampledata_fromlargerdb.sh
  58. +9 −0 run/sql/clear.sql
  59. +60 −0 run/sql/sqlIndexCreates
  60. +39 −0 run/sql/sqlIndexDrops
  61. +42 −0 run/sql/sqlTableCopies
  62. +148 −0 run/sql/sqlTableCreates
  63. +27 −0 run/sql/sqlTableDrops
  64. +18 −0 run/sql/sqlTableTruncates
  65. +28 −0 run/sql/warm.sql
  66. +494 −0 run/stupidplot.py
  67. +1,000 −0 run/wikipediabenchmark_1000pages_trace.txt
  68. +6 −0 run/wikipediareadonlytrace.txt
  69. +3 −0 src/manifest.mf
1 README
@@ -0,0 +1 @@
+empty for now..
66 README.original
@@ -0,0 +1,66 @@
+This is the original README included in this repository. We believe that our
+version is "better," but these directions still apply for generating the
+TPC-C data.
+
+
+
+Instructions for building
+-------------------------
+
+Use of JDK 1.5 is recommended; build with "ant jar" from the command line of
+the base directory, or use your favorite IDE such as NetBeans or Eclipse.
+
+
+Instructions for running
+------------------------
+The scripts below all use relative paths, but they depend on the JAVA_HOME
+environment variable being set so that the correct runtime can be found.
+
+JDBC drivers and sample "?.properties" files are included to make it extremely easy
+for you to test out the performance of EnterpriseDB, PostgreSQL, MySQL, Oracle, and
+SQL Server in your environment.
+
+
+1. Go to the 'run/scripts' directory, edit the appropriate "??????.properties" file to
+ point to the database instance you'd like to test. Of course you'll substitute in
+ the name of your appropriate config file in the command lines below.
+
+
+2. Run the "sqlTableCreates" to create the base tables.
+
+ - runSQL EnterpriseDB.properties sqlTableCreates
+
+ Note: "sqlTableCreates" will truncate all the tables so you can start over clean.
+ There is also a "sqlTableDrops" script if you need it.
+
+
+3. Run the "loadData" command file to load all of the default data for a benchmark:
+
+
+ A.) Approximately half a million rows in total will be loaded across 9 tables
+ per Warehouse. (The default is numWarehouses=1) A decent test size of data
+ totaling about 1 GB is 10 warehouses as follows:
+ $ loadData EnterpriseDB.properties numWarehouses=10
+
+ B.) Alternatively, you may choose to generate the test data out to CSV files that
+ can be bulk loaded as follows:
+ $ loadData EnterpriseDB.properties numWarehouses=10 fileLocation=c:/temp/
+
+ These CSV files can be bulk loaded into EDB-Postgres via the following:
+ $ runSQL EnterpriseDB.properties sqlTableCopies
+
+ You may clean out the data in the tables without dropping them via:
+ $ runSQL EnterpriseDB.properties sqlTableTruncates
+
+
+4. Run the "runSQL" command file to execute the SQL script "sqlIndexCreates" to
+ create the primary keys & other indexes on the tables.
+
+ - runSQL EnterpriseDB.properties sqlIndexCreates
+
+
+5. Run the "runBenchmark" command file to execute the Swing GUI application to
+   test the database. Don't forget to set the number of warehouses equal to the
+   number you created in step 3.
+
+ - runBenchmark EnterpriseDB.properties
36 build.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- WARNING: Eclipse auto-generated file.
+ Any modifications will be overwritten.
+ To include a user specific buildfile here, simply create one in the same
+ directory with the processing instruction <?eclipse.ant.import?>
+ as the first entry and export the buildfile again. -->
+<project basedir="." default="build" name="benchmark">
+ <property environment="env"/>
+ <property name="ECLIPSE_HOME" value="../../../../Applications/eclipse3.4"/>
+ <property name="debuglevel" value="source,lines,vars"/>
+ <property name="target" value="1.6"/>
+ <property name="source" value="1.6"/>
+
+
+ <target name="build">
+ <fileset dir="src" excludes="**/*.launch, **/*.java"/>
+ <echo message="${ant.project.name}: ${ant.file}"/>
+ <mkdir dir="build/classes"/>
+ <javac debug="true" debuglevel="${debuglevel}" destdir="build/classes" source="${source}" target="${target}">
+ <src path="src"/>
+ <classpath>
+ <pathelement path="classes"/>
+ <fileset dir="lib">
+ <include name="**/*.jar"/>
+ </fileset>
+ </classpath>
+
+
+ </javac>
+ </target>
+
+
+ <target name="clean" description="Destroys all generated files and dirs.">
+ <delete dir="build/classes"/>
+ </target>
+</project>
7 classpath.sh
@@ -0,0 +1,7 @@
+BASEDIR=/Users/Djellel/Documents/workspace
+echo -n "./bin"
+for i in ./lib/*.jar
+do
+  echo -n ":$i"
+done
+
2 config/config.txt
@@ -0,0 +1,2 @@
+warmUpSeconds=0
+measureSeconds=120
27 config/config.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0"?>
+<parameters>
+ <driver>com.mysql.jdbc.Driver</driver>
+ <DBUrl>jdbc:mysql://localhost/test</DBUrl>
+ <DBName>test</DBName>
+ <username>root</username>
+ <password>hello</password>
+ <terminals>2</terminals>
+ <numWarehouses>10</numWarehouses>
+ <works>
+ <work>
+ <time>5</time>
+ <rate>100</rate>
+ <weights>0,0,0,0,100</weights>
+ </work>
+ <work>
+ <time>2</time>
+ <rate>200</rate>
+ <weights>20,20,20,20,20</weights>
+ </work>
+ <work>
+ <time>10</time>
+ <rate>20</rate>
+ <weights>30,10,0,40,20</weights>
+ </work>
+ </works>
+</parameters>
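The `<works>` list above defines consecutive workload phases: each `<work>` runs for `<time>` units at `<rate>` transactions per second, with `<weights>` giving the percentage mix over the five TPC-C transaction types. A minimal sketch of reading such a config with Python's standard `xml.etree` — `parse_works` is a hypothetical helper, not part of this repository, and the time unit is not specified by the file itself:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the config above (two of the three phases).
CONFIG = """<?xml version="1.0"?>
<parameters>
  <works>
    <work><time>5</time><rate>100</rate><weights>0,0,0,0,100</weights></work>
    <work><time>2</time><rate>200</rate><weights>20,20,20,20,20</weights></work>
  </works>
</parameters>"""

def parse_works(xml_text):
    # Return one (time, rate, [weights]) tuple per <work> phase.
    root = ET.fromstring(xml_text)
    phases = []
    for work in root.findall("./works/work"):
        t = int(work.findtext("time"))
        r = int(work.findtext("rate"))
        w = [int(x) for x in work.findtext("weights").split(",")]
        phases.append((t, r, w))
    return phases

print(parse_works(CONFIG))
```

Note that the weights of a well-formed phase sum to 100, which makes a handy sanity check before running a benchmark.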
11 config/profile.txt
@@ -0,0 +1,11 @@
+30 10000 20 20 20 20 20
+0 50 100 0 0 0 0
+10 50 100 0 0 0 0
+10 50 0 100 0 0 0
+30 10000 20 20 20 20 20
+10 50 0 0 100 0 0
+30 10000 20 20 20 20 20
+0 50 0 0 0 100 0
+10 50 0 0 0 100 0
+10 50 0 0 0 0 100
+
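The profile file carries no header, so the column meaning is an assumption here: by analogy with config.xml, each row appears to be a phase duration, a target rate, and one weight per transaction type. A sketch under that assumption (`parse_profile` is a hypothetical helper, not part of this repository):

```python
def parse_profile(text):
    # Assumed row layout: duration, target rate, then per-type weights.
    # The file itself has no header, so this interpretation is a guess.
    phases = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        nums = [int(f) for f in fields]
        phases.append({"duration": nums[0], "rate": nums[1], "weights": nums[2:]})
    return phases

sample = "30 10000 20 20 20 20 20\n0 50 100 0 0 0 0\n"
print(parse_profile(sample))
```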
27 latency.py
@@ -0,0 +1,27 @@
+#!/usr/bin/python
+
+"""Averages the per transaction latency for TPC-C traces."""
+
+import sys
+
+if __name__ == "__main__":
+ start = False
+
+    count = 0
+    total = 0  # avoid shadowing the built-in sum()
+    for line in sys.stdin:
+        if not start:
+            start = line.startswith("Transaction Number")
+        else:
+            if line == "\n": break
+            parts = line.split("\t")
+            try:
+                latency = int(parts[3])
+            except (IndexError, ValueError):
+                print repr(line)
+                raise
+
+            total += latency
+            count += 1
+
+    print "%d / %d = %f average" % (total, count, float(total) / count)
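The script's core logic — skip until the "Transaction Number" header, then average the tab-separated latency column until the first blank line — can be expressed as a small testable function. A sketch in modern Python 3 (`average_latency` and the sample column names are illustrative, not from the repository):

```python
def average_latency(lines):
    # Accumulate the latency column (index 3, tab-separated) of every
    # row between the "Transaction Number" header and the first blank line.
    started = False
    total = count = 0
    for line in lines:
        if not started:
            started = line.startswith("Transaction Number")
            continue
        if line == "\n":
            break
        total += int(line.split("\t")[3])
        count += 1
    return total / count if count else 0.0

trace = ["Transaction Number\tA\tB\tLatency\n",
         "1\tx\ty\t40\n",
         "2\tx\ty\t60\n",
         "\n"]
print(average_latency(trace))  # 50.0
```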
3 launch_profiled_Test.sh
@@ -0,0 +1,3 @@
+SERVER=127.0.0.1
+PORT=3306
+java -cp ./build/classes:./lib/commons-lang-2.6.jar:./lib/mysql-connector-java-5.1.10-bin.jar -Dnwarehouses=32 -Dnterminals=160 -Ddriver=com.mysql.jdbc.Driver -Dconn=jdbc:mysql://$SERVER:$PORT/tpcc -Duser=root client.TPCCRateLimitedFromFile config/config.txt $1 > transactions.csv
Binary files in lib/ not shown.
6 libs
@@ -0,0 +1,6 @@
+$ sha1sum lib/*
+c80f0febf8437d5cd0424e90651ae15348b0ddce lib/edb-jdbc14-8_0_3_14.jar
+f7e56f837b00e827136f66dacf94c525029a7dfa lib/hsqldb.jar
+b83574124f1a00d6f70d56ba64aa52b8e1588e6d lib/mysql-connector-java-5.1.10-bin.jar
+2d79529d51134ab5c4395d84d3b8f3b0aaea845d lib/ojdbc14-10.2.jar
+d0c03073350b4b5a182ef0875af158c580a3f932 lib/postgresql-8.0.309.jdbc3.jar
3 manifest.mf
@@ -0,0 +1,3 @@
+Manifest-Version: 1.0
+X-COMMENT: Main-Class will be added automatically by build
+
4 matlab/README
@@ -0,0 +1,4 @@
+fast_deverticalize_align.m: the script doing the bucketing and alignment; it starts from transactions.csv and spits out latencies, counts, and percentiles
+load_and_plot: a simple visualization script...
+go.m: a simple main launching the fast_deverticalize_align.m
+
66 matlab/fast_deverticalize_align.m
@@ -0,0 +1,66 @@
+% The function receives as input a data file, a window size in seconds, and
+% the number of transaction types, and returns two matrices containing the
+% number of transactions per type per window and the average latency per
+% window.
+
+
+
+%filename = 'coefs-7200-20-20-20-20-3000';
+%winSize=1;
+%numVariable=5;
+function [counts latencies monitor] = fast_deverticalize_align(indir, outdir, proxyfilename, monitorfilename, winSize, numVariable)
+
+ tic;
+ % load the file skipping the first 4 lines
+ temp = csvread(horzcat(indir,proxyfilename),4);
+ monitor = csvread(horzcat(indir,monitorfilename),2);
+
+    %bring measures into time windows (if winSize is 1 then they are in seconds)
+ temp(:,2) = temp(:,2)/(winSize);
+ temp(:,3) = temp(:,3)/(winSize*1000000);
+
+ fprintf(1, 'READ FROM DISK TIME:');
+ toc;
+
+ tic;
+
+
+
+ counts = zeros(size(monitor,1),numVariable+1);
+ latencies = zeros(size(monitor,1),numVariable+1);
+ latenciesPCtile = zeros(size(monitor,1),numVariable+1,8);
+% locktimes = zeros(size(monitor,1),numVariable+1);
+
+
+ for i=1:size(counts,1)-1
+ for j=1:numVariable
+ counts(i,j+1) = sum(temp(:,1)==j & temp(:,2)+temp(:,3)>=monitor(i,1) & temp(:,2)+temp(:,3)<monitor(i+1,1));
+ latencies(i,j+1) = sum(sum(temp(find(temp(:,1)==j & temp(:,2)+temp(:,3)>=monitor(i,1) & temp(:,2)+temp(:,3)<monitor(i+1,1)),3)));
+ % locktimes(i,j+1) = sum(sum(temp(find(temp(:,1)==j & temp(:,2)+temp(:,3)>=monitor(i,1) & temp(:,2)+temp(:,3)<monitor(i+1,1)),4)));
+ latenciesPCtile(i,j+1,:) = prctile(temp(find(temp(:,1)==j & temp(:,2)+temp(:,3)>=monitor(i,1) & temp(:,2)+temp(:,3)<monitor(i+1,1)),3),[10, 25, 50, 75, 90, 95, 99, 99.9]);
+ end
+
+ end
+
+ latencies = latencies./counts; % actually compute latency AVG
+ % locktimes = locktimes./counts;
+ counts(:,1) = monitor(:,1);
+ latencies(:,1) = counts(:,1); % reset first cell
+ % locktimes(:,1) = counts(:,1);
+ latencies(isnan(latencies))=0; % remove NaN
+ %locktimes(isnan(locktimes))=0; % remove NaN
+ latenciesPCtile(:,1) = counts(:,1);
+
+
+ fprintf(1, 'CRUNCH TIME:');
+ toc;
+
+ fprintf(1, 'SAVE TIME:');
+ tic;
+ save(horzcat(outdir,proxyfilename,'_rough_trans_count.al'), 'counts','-ascii','-double');
+ save(horzcat(outdir,proxyfilename,'_avg_latency.al'), 'latencies','-ascii','-double');
+ %save(horzcat(outdir,proxyfilename,'_locktimes.al'), 'locktimes','-ascii','-double');
+ save(horzcat(outdir,proxyfilename,'_prctile_latencies.mat'), 'latenciesPCtile');
+ toc;
+
+end
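The inner loop above buckets each transaction into the monitor window containing its completion time (the MATLAB code uses start time plus latency, `temp(:,2)+temp(:,3)`), then accumulates per-type counts and latency sums. A simplified Python analogue of that bucketing — `bucket` is a hypothetical illustration, not a port of the full script (it omits the percentile output):

```python
def bucket(events, boundaries, num_types):
    # events: (type_id, completion_time, latency) triples.
    # boundaries: window edge times, as taken from the monitor log.
    # Returns per-window counts and average latencies: one row per
    # window, one column per transaction type.
    nwin = len(boundaries) - 1
    counts = [[0] * num_types for _ in range(nwin)]
    sums = [[0.0] * num_types for _ in range(nwin)]
    for typ, ts, lat in events:
        for w in range(nwin):
            if boundaries[w] <= ts < boundaries[w + 1]:
                counts[w][typ] += 1
                sums[w][typ] += lat
                break
    # Average; empty buckets become 0.0 (the MATLAB code zeroes NaNs).
    avgs = [[s / c if c else 0.0 for s, c in zip(srow, crow)]
            for srow, crow in zip(sums, counts)]
    return counts, avgs

events = [(0, 0.5, 0.1), (1, 0.2, 0.2), (0, 1.5, 0.3)]
boundaries = [0, 1, 2]
counts, avgs = bucket(events, boundaries, 2)
print(counts, avgs)
```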
1 matlab/go.m
@@ -0,0 +1 @@
+[counts latencies monitor] = fast_deverticalize_align('./test1/', './test1/', 'transactions.csv','log_exp_1.csv', 1, 6);
67 matlab/load_and_plot.m
@@ -0,0 +1,67 @@
+
+function load_and_plot(monitorfile, transactionprefix)
+
+monitor = csvread(monitorfile,2);
+avglat = load(horzcat(transactionprefix,'_avg_latency.al'));
+prclat = load(horzcat(transactionprefix,'_prctile_latencies.mat'));
+%locktime = load(horzcat(transactionprefix,'percona_transactions.csv_locktimes.al'));
+counts = load(horzcat(transactionprefix,'percona_transactions.csv_rough_trans_count.al'));
+
+Com_commit_index = 164;
+disk_write_index = 108;
+cpu_indexes = [5 11 17 23 29 35 41 47 53 59 65 71 77 83 89 95 6 12 18 24 30 36 42 48 54 60 66 72 78 84 90 96];
+
+tstart=2;
+tend=size(monitor,1);
+
+figure;
+subplot(3,2,1);
+plot(monitor(tstart:tend,cpu_indexes));
+title('cpu usage');
+xlabel('time');
+ylabel('cpu usage (% of core)');
+grid on;
+%axis([0 500 0 100]);
+
+
+subplot(3,2,3);
+plot(monitor(tstart:tend,disk_write_index)./1024./1024);
+title('disk usage');
+xlabel('time');
+ylabel('disk usage (MB/sec)');
+grid on;
+
+
+subplot(3,2,5);
+plot(diff(monitor(tstart:tend,Com_commit_index)));
+hold on;
+plot(sum(counts(tstart:tend,2:end)'),'r');
+grid;
+title('transactions');
+xlabel('time (sec)');
+ylabel('transactions (tps)');
+legend('Comcommit','perconaLog');
+grid on;
+
+
+
+subplot(3,2,2);
+plot(sum(avglat(tstart:tend,2:end)'));
+hold on;
+plot(sum(prclat.latenciesPCtile(tstart:tend,2:end,6)'),'r'); % showing 95%tile
+title('latency');
+xlabel('time');
+ylabel('latency (sec)');
+grid on;
+
+%subplot(3,2,4);
+%plot(locktime(tstart:tend,2:end));
+%title('locktime');
+%xlabel('time');
+%ylabel('lock time (sec)');
+%grid on;
+
+
+set(gcf,'Color','w');
+
+end
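The transactions subplot above compares two throughput estimates: successive differences of MySQL's cumulative Com_commit status counter versus per-window counts from the client trace. A sketch of the first estimate, assuming one counter sample per second (`throughput_from_counter` is an illustrative helper, not from the repository):

```python
def throughput_from_counter(samples):
    # Com_commit is a cumulative counter, so successive differences
    # give commits per sampling interval (i.e. tps at a 1 Hz sample rate).
    return [b - a for a, b in zip(samples, samples[1:])]

print(throughput_from_counter([100, 150, 230, 230]))  # [50, 80, 0]
```

If the two curves diverge, commits are happening outside the benchmark client (or trace lines are being dropped), which is exactly what plotting them together is meant to reveal.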
