README.md typo fixes and minor edits

commit f397a7dac7d3da16213193ed858ba2b7ce9a4744 (1 parent: 15c198a)
jbayer authored

Showing 1 changed file with 31 additions and 75 deletions.

... ... @@ -1,6 +1,6 @@
1 1 # Stac2
2 2
3   -Stac2 is a load generation system for Cloud Foundry. The system is made up for several Cloud Foundry applications. The instance
  3 +Stac2 is a load generation system for Cloud Foundry. The system is made up of several Cloud Foundry applications. The instance
4 4 count of some of the applications gates the amount of load that can be generated, and depending on the size and complexity of your system,
5 5 you will have to size these for your Cloud Foundry instance using the "vmc scale --instances" command.
6 6
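As a concrete example, assuming the default app names from the manifest and a target concurrency of 64, the scaling step might look like the following (the instance counts here are illustrative, and the exact invocation can vary with your vmc version):

    vmc scale nabv --instances 32
    vmc scale nabh --instances 16
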
@@ -34,7 +34,7 @@ ugly edit form. What can I say. Lazy hack session one night when I just wanted t
34 34 At this point, assuming your cloud config file is correct, you should be able to run some load. Select the sys_info workload in the workload selector, validate
35 35 that the cloud selector correctly selected your cloud, and then click the light gray 100% load button. You should immediately see the blue load lights
36 36 across the bottom of the screen peg to your max concurrency and you should see the counters in the main screen for the info call show activity. For a reasonable
37   -cloud with reasonable stac2 concurrency seeing ~1,000 CC API calls per second (the yellow counter) should be easily in reach. You should see a screen similar to this:
  37 +cloud with reasonable stac2 concurrency seeing ~1,000 CC API calls per second (the yellow counter) should be easily within reach. You should see a screen similar to this:
38 38
39 39 <p/>
40 40 ![Stac2 in Action](https://github.com/cloudfoundry/stac2/raw/master/images/stac2_home.png)
@@ -57,35 +57,27 @@ on/off control is light gray and none of the blue lights are on in the light gri
57 57 * **cloud selector** - leave this alone
58 58 * **http request counter** - this green counter on the upper right shows the amount of http traffic sent to apps created by stac2 workloads, or in the case of static loads like xnf_*, traffic to existing apps.
59 59 * **cc api calls** - this yellow counter shows the number of cloud foundry api calls executed per second
60   -* **results table** - this large tabluar region in the center of the screen shows the various api calls used by their scenarios displayed as ~equivalent vmc commands.
  60 +* **results table** - this large tabular region in the center of the screen shows the various api calls used by the scenarios, displayed as ~equivalent vmc commands.
61 61 * *total* - this column is the raw number of operations for this api row
62 62 * *<50ms* - this column shows the % of api calls that executed in less than 50ms
63 63 * *<100ms* - this column shows the % of api calls that executed in less than 100ms but more than 50ms. The column header misleads you to believe that this should include all of the <50ms calls as well, but that's not the intent. Read this column as >50ms & <= 100ms.
64 64 * *<200ms, <400ms,...>3s* - these work as you'd expect per the above definitions
65 65 * *avg(ms)* - the average speed of the calls executed in this row
66   - * *err* - the % of api calls for this row that failed, note, failures are displayed as a running log under the light grid. On ccng based cloud controllers the host IP in the display is a hyperlink that takes you to a
67   - detail page showing all api calls and request id's that occurred during the mix that failed. The request id can be used for log correlation while debugging the failure.
  66 + * *err* - the % of api calls for this row that failed. Note: failures are displayed as a running log under the light grid. On ccng-based cloud controllers the host IP in the display is a hyperlink that takes you to a detail page showing all api calls and request ids that occurred during the mix that failed. The request id can be used for log correlation while debugging the failure.
68 67 * **results table http-request row** - this last row in the main table is similar to the previous description, but here the "api" is any http request to an app created by a scenario or a static app in the case of xnf-based loads.
69 68 * **results table http-status row** - this row shows the breakdown of http response statuses (200's, 400's, 500's) as well as those that timed out.
70   -* **email button** - if email is enabled in your cloud config, this button will serialize the results table and error log and send an email summary. Note, for non-email enabled clouds, the stac2 front end entrypoing "/ss2?cloud=cloud-name" will produce a full JSON dump as well.
71   -* **dirty users?** - great name... If you hit reset during an active run, or bounced your cloud under heavy activity, or restarted/repushed nabv or nabh during heavy activity, you likely left the system in a state where there
72   -are applications, services, routes, etc. that have not been properly cleared. If you see quota errors, or blue lights stuck on, thats another clue. Use this button on an idle system to ask stac2 to clean up these zombied apps and services.
  69 +* **email button** - if email is enabled in your cloud config, this button will serialize the results table and error log and send an email summary. Note: for non-email-enabled clouds, the stac2 front-end entrypoint "/ss2?cloud=cloud-name" will produce a full JSON dump as well.
  70 +* **dirty users?** - great name... If you hit reset during an active run, or bounced your cloud under heavy activity, or restarted/repushed nabv or nabh during heavy activity, you likely left the system in a state where there are applications, services, routes, etc. that have not been properly cleared. If you see quota errors, or blue lights stuck on, that's another clue. Use this button on an idle system to ask stac2 to clean up these zombied apps and services.
73 71 Under normal operation you will not need to use this, as the system self-cleans anywhere it can.
74   -* **reset** - this button flushes the redis instance at the center of the system wiping all stats, queues, error logs, etc. always use between runs on a fully quiet system. if you click on this where there is heavy activity, you will
75   -more than likely strand an application or two. If your system is seriously hosed then click reset, then vmc restart nabh; vmc restart nabv; then click reset again. This very hard reset yanks redis and restarts all workers.
76   -* **light grid** - each light, when on, represents an active worker in the nabv app currently running an instance of the selected load. For instance if a load is designed to simulate login, push, create-service, bind-service, delete, delete-service, if one worker is currently running
77   -the load, one light will be on. If 100 lights are on, then 100 workers are simultaneously executing the load. Since stac2 is designed to be able to mimic what a typical developer is doing in front of the system, you can think if the lights as representing
78   -how many simultaneously active users the system is servicing. Active means really active though so 100 active active users can easily mean 10,000 normal users.
  72 +* **reset** - this button flushes the redis instance at the center of the system, wiping all stats, queues, error logs, etc. Always use it between runs on a fully quiet system. If you click on this while there is heavy activity, you will more than likely strand an application or two. If your system is seriously hosed, click reset, then vmc restart nabh; vmc restart nabv; then click reset again. This very hard reset yanks redis and restarts all workers.
  73 +* **light grid** - each light, when on, represents an active worker in the nabv app currently running an instance of the selected load. For instance, if a load is designed to simulate login, push, create-service, bind-service, delete, delete-service, and one worker is currently running the load, one light will be on. If 100 lights are on, then 100 workers are simultaneously executing the load. Since stac2 is designed to be able to mimic what a typical developer is doing in front of the system, you can think of the lights as representing how many simultaneously active users the system is servicing. Active means really active though, so 100 active users can easily mean 10,000 normal users.
79 74
80 75 # Components
81 76
82   -Stac2 consists of several, statically defined Cloud Foundry applications and services. The [manifest.yml](https://github.com/cloudfoundry/stac2/blob/master/manifest.yml) is used by the master vmc push
83   -command to create and update a stac2 cluster. The list below describes each of the static components as well as their relationship to one and other.
  77 +Stac2 consists of several statically defined Cloud Foundry applications and services. The [manifest.yml](https://github.com/cloudfoundry/stac2/blob/master/manifest.yml) is used by the master vmc push command to create and update a stac2 cluster. The list below describes each of the static components as well as their relationship to one another.
84 78
85 79 ## stac2
86   -The [stac2](https://github.com/cloudfoundry/stac2/tree/master/stac2) application is responsible for presenting the user interface of stac2. It's a simple, single-instance ruby/sinatra app. Multiple browser sessions may have this app open and each sessions will see
87   -the same data and control a single instance of a stac2 cluster. The bulk of the UI is written in JS which the page layout and template done in haml. When a stac2 run is in progress a lot of data is generated
88   -and the UI is supposed to feel like a realtime dashboard. As a result, there is very active JS based polling going on (10x/s) so its best have only a small number of browser sessions open at a time.
  80 +The [stac2](https://github.com/cloudfoundry/stac2/tree/master/stac2) application is responsible for presenting the user interface of stac2. It's a simple, single-instance ruby/sinatra app. Multiple browser sessions may have this app open, and each session will see the same data and control a single instance of a stac2 cluster. The bulk of the UI is written in JS, with the page layout and template done in haml. When a stac2 run is in progress a lot of data is generated, and the UI is supposed to feel like a realtime dashboard. As a result, there is very active JS-based polling going on (10x/s), so it's best to have only a small number of browser sessions open at a time.
89 81
90 82 When a workload is started, stac2 communicates the desired load and settings to the system by making http requests to the nabh server.
91 83
@@ -94,75 +86,52 @@ One key entrypoint exposed by stac2 is the ability to capture all of the data re
94 86 The stac2 app is bound to the stac2-redis redis service-instance as well as the stac2-mongo mongodb service-instance.
95 87
96 88 ## nabv
97   -The [nabv](https://github.com/cloudfoundry/stac2/tree/master/nabv) application is the main work horse of the system for executing VMC style commands (a.k.a., Cloud Controller API calls). Note: The nab* prefix came from an earlier project around auto-scaling where
98   -what I needed was an apache bench like app that I could drive programatically. I built a node.JS app called nab (network ab) and this prefix stuck as I morphed the code into two components...
  89 +The [nabv](https://github.com/cloudfoundry/stac2/tree/master/nabv) application is the main workhorse of the system for executing VMC-style commands (a.k.a., Cloud Controller API calls). Note: The nab* prefix came from an earlier project around auto-scaling where what I needed was an apache bench-like app that I could drive programmatically. I built a node.JS app called nab (network ab) and this prefix stuck as I morphed the code into two components...
99 90
100   -The nabv app is a ruby/sinatra app that makes heavy use of the cfoundry gem/object model to make synchronous calls into cloud foundry. It's because of the synchronous nature of cfoundry that this app
101   -has so many instances. It is multi-threaded, so each instance drives more than one cfoundry client, but at this point given ruby's simplistic threading system nabv does not tax this at all...
  91 +The nabv app is a ruby/sinatra app that makes heavy use of the cfoundry gem/object model to make synchronous calls into cloud foundry. It's because of the synchronous nature of cfoundry that this app has so many instances. It is multi-threaded, so each instance drives more than one cfoundry client, but at this point, given ruby's simplistic threading system, nabv does not tax this at all...
102 92
103   -The app receives all of its work via a set of work queue lists in the stac2-redis service-instance. The list is fed by the nabh server. Each worker does a blpop for a work item, and each work item represents scenario that the worker is supposed to run. The scenarios
104   -are described below but in a nutshell they are sets of commands/cloud controller APIs and http calls that should be made in order to simulate developer activity.
  93 +The app receives all of its work via a set of work queue lists in the stac2-redis service-instance. The list is fed by the nabh server. Each worker does a blpop for a work item, and each work item represents a scenario that the worker is supposed to run. The scenarios are described below, but in a nutshell they are sets of commands/cloud controller APIs and http calls that should be made in order to simulate developer activity.
105 94
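As a rough sketch of this queue protocol (not the actual nabv code; the list name, work-item format, and run_scenario helper below are invented for illustration), a worker loop using the ruby redis gem might look like:

    require 'redis'
    require 'json'

    # stand-in for nabv's real scenario runner
    def run_scenario(workload, work_item)
      puts "running #{workload} with #{work_item.inspect}"
    end

    redis = Redis.new  # in nabv this connection comes from the stac2-redis service binding

    loop do
      # block until a work item arrives on the (hypothetical) work list fed by nabh
      _list, raw = redis.blpop('stac2:work:vmc')
      work_item = JSON.parse(raw)
      run_scenario(work_item['workload'], work_item)
    end
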
106   -When a nabv worker is active, a blue light is on in the light grid. Since nabv is threaded, this means that a thread, in a nabv instance is actively running a workload. If anything goes wrong during a run (an exception, an API failure, etc.)
107   -the system is designed to abort the current scenario and clean up any resources (e.g., delete services, apps, routes, spaces) that may have been created by the partial scenario. Normally this is very robust, BUT IF you manually restart
108   -nabv (e.g., vmc restart nabv, or vmc push, or vmc stop/start) or do anything to manually interfere with the operation of the system, resources can be left behind. To clean these up, quiesce the app by turning stac2 off, let all of the workloads drain,
109   -and then click on the "dirty users?" button. This will iterate through the user accounts used by stac2 looking for users that are "dirty" (e.g., they have resources assigned to them that stac2 has forgotten about) and it will delete these resources.
  95 +When a nabv worker is active, a blue light is on in the light grid. Since nabv is threaded, this means that a thread in a nabv instance is actively running a workload. If anything goes wrong during a run (an exception, an API failure, etc.) the system is designed to abort the current scenario and clean up any resources (e.g., delete services, apps, routes, spaces) that may have been created by the partial scenario. Normally this is very robust, BUT IF you manually restart nabv (e.g., vmc restart nabv, or vmc push, or vmc stop/start) or do anything to manually interfere with the operation of the system, resources can be left behind. To clean these up, quiesce the app by turning stac2 off, let all of the workloads drain, and then click on the "dirty users?" button. This will iterate through the user accounts used by stac2 looking for users that are "dirty" (e.g., they have resources assigned to them that stac2 has forgotten about) and it will delete these resources.
110 96
111 97 The maximum concurrency of a run is determined by the instance count of the nabv application. If your cloud configuration has a desired maximum concurrency of 64 (e.g., cmax: 64), then make sure your nabv instance count is at least 32.
112 98
113   -The nabv application is connected to the stac2-redis and stac2-mongo services. It is a very active writer into redis. For each operation it counts the operation, errors, increments the counters used in aggregate rate calculations, etc. When an
114   -exception occurs it spills the exception into the exception log list, it uses redis to track active workers, and as discussed before, it uses a redis list to recieve work items requesting it to run various scenarios.
  99 +The nabv application is connected to the stac2-redis and stac2-mongo services. It is a very active writer into redis. For each operation it counts the operation and any errors, increments the counters used in aggregate rate calculations, etc. When an exception occurs it spills the exception into the exception log list; it uses redis to track active workers; and, as discussed before, it uses a redis list to receive work items requesting it to run various scenarios.
115 100
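A sketch of this style of accounting, with invented key names (the actual key layout in stac2-redis is not documented here), might look like:

    require 'redis'

    redis = Redis.new

    def record_operation(redis, op, elapsed_ms, error = nil)
      redis.incr("stac2:ops:#{op}:total")              # raw operation count for the results table
      redis.incrby("stac2:ops:#{op}:ms", elapsed_ms)   # feeds the average/rate calculations
      if error
        redis.incr("stac2:ops:#{op}:err")
        redis.lpush('stac2:errors', "#{op}: #{error}") # running error log shown under the light grid
      end
    end

    record_operation(redis, 'start_app', 120)
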
116 101 ## nabh
117   -The [nabh](https://github.com/cloudfoundry/stac2/tree/master/nabh) application is a node.JS app and internally is made up of two distinct servers. One server, the http-server, is responsible for accepting work requests via http calls from either stac2, or from nabv. From stac2, the work requests are
118   -of the form: "run the sys_info scenario, across N clients, Y times". The nabh app takes this sort of request and turns this into N work items. These are pushed onto the appropriate work list in stac2-redis where they are eventually picked by by
119   -nabv. From nabv, the work requests are of the form: "run 5000 HTTP GET from N concurrent connections to the following URL". The nabh app takes this sort of requests and turns this into N http work items. These are also pushed into stac2-redis, but in
120   -this case, this work list is not processed by nabv. Instead it's processed by the other server, the http worker.
  102 +The [nabh](https://github.com/cloudfoundry/stac2/tree/master/nabh) application is a node.JS app and internally is made up of two distinct servers. One server, the http-server, is responsible for accepting work requests via http calls from either stac2, or from nabv. From stac2, the work requests are of the form: "run the sys_info scenario, across N clients, Y times". The nabh app takes this sort of request and turns it into N work items. These are pushed onto the appropriate work list in stac2-redis where they are eventually picked up by nabv. From nabv, the work requests are of the form: "run 5000 HTTP GETs from N concurrent connections to the following URL". The nabh app takes this sort of request and turns it into N http work items. These are also pushed into stac2-redis, but in this case, this work list is not processed by nabv. Instead it's processed by the other server, the http worker.
121 103
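The fan-out from an api request into work items is simple; here is a Ruby-flavored sketch of the idea (the real nabh server is node.JS, and the list name and item format are the same invented ones used in the worker sketch above):

    require 'redis'
    require 'json'

    # "run the <workload> scenario, across N clients, Y times" becomes N work items,
    # each asking one nabv worker to run the scenario Y times
    def enqueue_scenario(redis, workload, clients, iterations)
      clients.times do
        item = { 'workload' => workload, 'iterations' => iterations }
        redis.rpush('stac2:work:vmc', JSON.generate(item))
      end
    end

    enqueue_scenario(Redis.new, 'sys_info', 64, 10)
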
122   -The http worker is responsible for picking up http request work items and executing the requests as quickly and efficiently as possible, performing all accounting and recording results in redis, etc. Since node.JS is an excellent platform for
123   -asynch programming this is a very efficient process. A small pool of 16-32 nabh clients can generate an enormous amount of http traffic.
  104 +The http worker is responsible for picking up http request work items and executing the requests as quickly and efficiently as possible, performing all accounting and recording results in redis, etc. Since node.JS is an excellent platform for async programming, this is a very efficient process. A small pool of 16-32 nabh clients can generate an enormous amount of http traffic.
124 105
125   -The nabh app has a split personality. One half of the app acts as an API head. This app processes the API requests and converts these into workitems that are delivered to the workers via queues in redis. One pool of workers execute within the nabv app,
126   -the other pool of workers lives as a separate server running in the context of the nabh app.
  106 +The nabh app has a split personality. One half of the app acts as an API head. This half processes the API requests and converts them into work items that are delivered to the workers via queues in redis. One pool of workers executes within the nabv app; the other pool of workers lives as a separate server running in the context of the nabh app.
127 107
128 108 The nabh app is connected to the stac2-redis service where it both reads/writes the work queues and where it does heavy recording of stats and counters related to http calls.
129 109
130 110 ## nf
131   -The [nf](https://github.com/cloudfoundry/stac2/tree/master/nf) app is not a core piece of stac2. Instead its an extra app that is used by some of the heavy http oriented workloads; specifically, those starting with **xnf_**. The app is created staticaly and because of that,
132   - heavy http scenarios can throw load at it without first launching an instance. Within the Cloud Foundry team, we use this app and the related **xnf_** workloads to stress test the firewalls, load balancers, and cloud foundry
133   - routing layer.
  111 +The [nf](https://github.com/cloudfoundry/stac2/tree/master/nf) app is not a core piece of stac2. Instead, it's an extra app that is used by some of the heavy http-oriented workloads; specifically, those starting with **xnf_**. The app is created statically and because of that, heavy http scenarios can throw load at it without first launching an instance. Within the Cloud Foundry team, we use this app and the related **xnf_** workloads to stress test the firewalls, load balancers, and cloud foundry routing layer.
134 112
135   -The nf app is a very simple node.JS http server that exposes two entrypoints: /fast-echo, an entrypoint that returns immediately with an optional response body equal to the passed "echo" argument's value, /random-data, an entrypoint that
136   -returns 1k - 512k of random data. One good scenario we use pretty extensively is the **xnf_http_fb_data** scenario. Of course this is out of date on each rev of Facebook, but when building this scenario, we did a clean
137   -load of logged in facebook.com page and observed a mix of ~130 requests that transferred ~800k of data. The **xnf_http_fb_data** scenario is an attempt to mimic this pattern by performing:
  113 +The nf app is a very simple node.JS http server that exposes two entrypoints: /fast-echo, an entrypoint that returns immediately with an optional response body equal to the passed "echo" argument's value, and /random-data, an entrypoint that returns 1k - 512k of random data. One good scenario we use pretty extensively is the **xnf_http_fb_data** scenario. Of course this is out of date on each rev of Facebook, but when building this scenario, we did a clean load of a logged-in facebook.com page and observed a mix of ~130 requests that transferred ~800k of data. The **xnf_http_fb_data** scenario is an attempt to mimic this pattern by performing:
138 114 * 50 requests for 1k of data
139 115 * 50 requests for 2k of data
140 116 * 20 requests for 4k of data
141 117 * 10 requests for 32k of data
142 118 * 2 requests for 128k of data
143 119
144   -Each run of the scenario does the above 4 times, waiting for all requests to complete before starting the next iteration. This and the **xnf_http** or **xnf_http_1k** are great ways to stress your
145   -serving infrastructure and to tune your Cloud Foundry deployment.
  120 +Each run of the scenario does the above 4 times, waiting for all requests to complete before starting the next iteration. This and the **xnf_http** or **xnf_http_1k** are great ways to stress your serving infrastructure and to tune your Cloud Foundry deployment.
146 121
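For reference, the behavior of the two nf entrypoints is roughly the following. The real nf app is a node.JS server, so this ruby/sinatra sketch is only an approximation, and the "kb" size parameter on /random-data is a hypothetical stand-in for however nf actually chooses a size between 1k and 512k:

    require 'sinatra'

    # returns immediately, echoing back the optional "echo" argument
    get '/fast-echo' do
      params['echo'].to_s
    end

    # returns a block of random data between 1k and 512k
    get '/random-data' do
      kb = (params['kb'] || 1).to_i.clamp(1, 512)   # hypothetical size parameter
      Random.new.bytes(kb * 1024)
    end
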
147   -The **sys_http** scenario does not use the statically created version of nf. Instead each iteration of the scenario launches an instance of nf and then directs a small amount of load to the instance just
148   -launched.
  122 +The **sys_http** scenario does not use the statically created version of nf. Instead each iteration of the scenario launches an instance of nf and then directs a small amount of load to the instance just launched.
149 123
150 124 ## stac2-redis
151 125
152   -The stac2-redis service instance is a Redis 2.2 service instance that is shared by all core components (stac2, nabv, nabh). All of the runtime stats, counters, and exception logs are stored in redis. In addition,
153   -the instance is used as globally visible storage for the workload data thats stored/updated in Mongodb. When this mongo based data changes, or on boot, the redis based version of the data is updated.
  126 +The stac2-redis service instance is a Redis 2.2 service instance that is shared by all core components (stac2, nabv, nabh). All of the runtime stats, counters, and exception logs are stored in redis. In addition, the instance is used as globally visible storage for the workload data that's stored/updated in Mongodb. When this mongo-based data changes, or on boot, the redis-based version of the data is updated.
154 127
155 128 ## stac2-mongodb
156 129
157   -The stac2-mongo service instance is a MongoDB 2.0 service instance that us shared by the stac2 and nabv components. It's primary function is to act as the persistent storage for the workload definitions. Initiall
158   -the service is empty and the edit/reset workloads/main sequence highlighted in the beginning of this document is used to initialize the workloads collection. Workloads can also be added/modified/removed using the
159   -UI on the workload edit page and these go straight into stac2-mongo (and then from there, into stac2-redis for global, high speed availability).
  130 +The stac2-mongo service instance is a MongoDB 2.0 service instance that is shared by the stac2 and nabv components. Its primary function is to act as the persistent storage for the workload definitions. Initially the service is empty and the edit/reset workloads/main sequence highlighted in the beginning of this document is used to initialize the workloads collection. Workloads can also be added/modified/removed using the UI on the workload edit page and these go straight into stac2-mongo (and then from there, into stac2-redis for global, high speed availability).
160 131
161 132 # Workloads
162 133
163   -The default workloads are defined in [stac2/config/workloads](https://github.com/cloudfoundry/stac2/tree/master/stac2/config/workloads). Each workload file is a yaml file containing one or more named workloads.
164   -Using the workload management interface ("edit" link next to workload selector), workload files and all associated workloads may be deleted or added to the system. The default set of workloads can also be
165   -re-established from this page by clicking on the "reset workloads" link.
  134 +The default workloads are defined in [stac2/config/workloads](https://github.com/cloudfoundry/stac2/tree/master/stac2/config/workloads). Each workload file is a yaml file containing one or more named workloads. Using the workload management interface (the "edit" link next to the workload selector), workload files and all associated workloads may be deleted or added to the system. The default set of workloads can also be re-established from this page by clicking on the "reset workloads" link.
166 135
167 136 A workload is designed to mimic the activity of a single developer sitting in front of her machine ready to do a little coding. A typical vmc session might be:
168 137
@@ -179,10 +148,7 @@ A workload is designed to mimic the activity of a single developer sitting in fr
179 148
180 149 The workload grammar is designed to let you easily express scenarios like this and then use the stac2 framework to execute a few hundred active developers running this scenario non-stop.
181 150
182   -In Cloud Foundry, applications and services are named objects and the names are scoped to a user account. This means that within an account, application names must be unique, and service names must be unique.
183   -With the second generation cloud controller, an additional named object, the "space" object is introduced. Application names and service names must be unqie within a space but multiple users may access and manipulate
184   -the objects. In order to reliably support this, stac2 workloads that manipulate named objects tell the system that they are going to use names and the system generates unique names for each workload in action. The workload's
185   -then refer to these named objects using indirection. E.g.:
  151 +In Cloud Foundry, applications and services are named objects and the names are scoped to a user account. This means that within an account, application names must be unique, and service names must be unique. With the second generation cloud controller, an additional named object, the "space" object, is introduced. Application names and service names must be unique within a space, but multiple users may access and manipulate the objects. In order to reliably support this, stac2 workloads that manipulate named objects tell the system that they are going to use names and the system generates unique names for each workload in action. The workloads then refer to these named objects using indirection. E.g.:
186 152
187 153 appnames:
188 154 - please-compute
@@ -191,8 +157,7 @@ then refer to these named objects using indirection. E.g.:
191 157 - action: start_app
192 158 appname: 0
193 159
194   -In the above fragment, the "appnames" defines an array. The value "please-compute" is a signal to the system to generate two unique appnames that will be used by an instance of the running workload.
195   -Later on, the "start_app" action (an action that roughly simulates vmc start *name-of-app*) specifies via the "appname: 0" key that it wants to use the first generated appname.
  160 +In the above fragment, the "appnames" key defines an array. The value "please-compute" is a signal to the system to generate two unique appnames that will be used by an instance of the running workload. Later on, the "start_app" action (an action that roughly simulates vmc start *name-of-app*) specifies via the "appname: 0" key that it wants to use the first generated appname.
196 161
197 162 The name generation logic is applied to application (via appnames:), services (via servicenames:), and spaces (via spacenames:). A sample fragment using these looks like:
198 163
@@ -211,9 +176,7 @@ The name generation logic is applied to application (via appnames:), services (v
211 176 operations:
212 177 ...
213 178
214   -The value "please-compute" is the key to dynamic name generation. If you recall that the "nf" app is a built in app that's typically used for high http load scenarios, workloads referencing this app still use
215   -still use the appnames construct, but use static appnames. Note the full workload below that uses the existing nf app. In this workload there is a call to vmc info, and then a loop of two iterations where each iteration
216   -does 400 http GET's to the nf app's /random-data entrypoint from 4 concurrent clients. At the end of each loop iteration, the scenario waits for all outstanding http operations to complete before moving on.
  179 +The value "please-compute" is the key to dynamic name generation. If you recall that the "nf" app is a built in app that's typically used for high http load scenarios, workloads referencing this app still use still use the appnames construct, but use static appnames. Note the full workload below that uses the existing nf app. In this workload there is a call to vmc info, and then a loop of two iterations where each iteration does 400 http GET's to the nf app's /random-data entrypoint from 4 concurrent clients. At the end of each loop iteration, the scenario waits for all outstanding http operations to complete before moving on.
217 180
218 181 xnf_http_1k:
219 182 display: heavy http load targeting static nf, 1k transfers
@@ -574,15 +537,8 @@ Note: this repo is managed via classic GitHub pull requests, not front ended by
574 537
575 538 # Trivia
576 539
577   -Where did the name stac2 come from? The Cloud Foundry project started with a codename of b20nine. This code name was inspired by Robert Scoble's [building43](http://www.building43.com/) a.k.a., a place where
578   -all the cool stuff at Google happened. The b20nine moniker was a mythical place on the Microsoft Campus (NT was mostly built in building 27, 2 away from b20nine)... Somewhere along the way, the b20nine long form
579   -was shortened to B29, which unfortunately was the name of a devastating machine of war. In an effort to help prevent Paul Maritz from making an embarrassing joke using the B29 code name, a generic codename of "appcloud"
580   -was used briefly.
  540 +Where did the name stac2 come from? The Cloud Foundry project started with a codename of b20nine. This code name was inspired by Robert Scoble's [building43](http://www.building43.com/) a.k.a., a place where all the cool stuff at Google happened. The b20nine moniker was a mythical place on the Microsoft Campus (NT was mostly built in building 27, 2 away from b20nine)... Somewhere along the way, the b20nine long form was shortened to B29, which unfortunately was the name of a devastating machine of war. In an effort to help prevent Paul Maritz from making an embarrassing joke using the B29 code name, a generic codename of "appcloud" was used briefly.
581 541
582   -The original STAC system was developed by Peter Kukol during the "appcloud" era and in his words: "My current working acronym for the load testing harness is “STAC” which stands for the (hopefully) obvious
583   -name (“Stress-Testing of/for AppCloud”); I think that this is a bit better than ACLH, which doesn’t quite roll off the keyboard. If you can think of a better name / acronym, though, I’m certainly all ears."
  542 +The original STAC system was developed by Peter Kukol during the "appcloud" era and in his words: "My current working acronym for the load testing harness is “STAC” which stands for the (hopefully) obvious name (“Stress-Testing of/for AppCloud”); I think that this is a bit better than ACLH, which doesn’t quite roll off the keyboard. If you can think of a better name / acronym, though, I’m certainly all ears."
584 543
585   -In March 2012, I decided to revisit the load generation framework. Like any good software engineer, I looked Peter's original work and declared it "less than optimal" (a.k.a., a disaster) and decided that the only way to "fix it" was to do a complete and total re-write. This work was done
586   -during a 3 day non-stop coding session where I watched the sunrise twice before deciding I'm too old for this... The name Stac2 represents the second shot at building a system for "Stress-Testing of/for AppCloud". I suppose
587   -I could have just as easily named it STCF but given rampant dyslexia in the developer community I was worried about typos like STFU, etc... So I just did the boring thing and named it Stac2. Given that
588   -both Peter and I cut our teeth up north in Redmond, I'm confident that there will be a 3rd try coming. Maybe that's a good time for a new name...
  544 +In March 2012, I decided to revisit the load generation framework. Like any good software engineer, I looked at Peter's original work and declared it "less than optimal" (a.k.a., a disaster) and decided that the only way to "fix it" was to do a complete and total re-write. This work was done during a 3-day non-stop coding session where I watched the sunrise twice before deciding I'm too old for this... The name Stac2 represents the second shot at building a system for "Stress-Testing of/for AppCloud". I suppose I could have just as easily named it STCF but given rampant dyslexia in the developer community I was worried about typos like STFU, etc... So I just did the boring thing and named it Stac2. Given that both Peter and I cut our teeth up north in Redmond, I'm confident that there will be a 3rd try coming. Maybe that's a good time for a new name...

0 comments on commit f397a7d

mark lucovsky

I know it's not intentional, but .md diffs are a pain in the ass when all goes well. changing the line breaks like this makes them impossible to read :( Please try to preserve the line breaks next time...
