Changes in BFG (Tank version 1.8.11)

BFG was and still is an experimental gun, and we are striving to improve the experience of working with it. The old version was not as stable as we wanted it to be, and you had to include timing measurement code in your test scenarios, which was inconvenient. There were some other rough edges as well. So we are introducing the following changes:

  • your custom gun's shoot function (or your scenarios for the scenario gun) now receives a measuring context as a parameter instead of a results queue (see the example below)

  • there are no worker threads anymore, only worker processes; this makes BFG more stable

  • sqlalchemy is now imported during SQL gun initialization, so you don't need this dependency unless you actually use the SQL gun

  • BFG now counts busy instances and reports that number to the data collector

  • the measuring context catches all exceptions raised from your code and, in that case, sets the proto code to 500 and the net code to 1 (see the sketch after this list)

  • the BFG module name now has its first letter capitalized (for the sake of consistency): yandextank.plugins.Bfg
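
For example, a minimal sketch of the new exception handling behaviour (request() below is a hypothetical stand-in for your test action; only the shoot signature and the measure parameter come from this page):

def shoot(missile, marker, measure):
    # measure is the measuring context passed in by BFG
    with measure(marker):
        # request() is hypothetical; if it raises, the measuring context
        # catches the exception and records the sample with
        # proto_code set to 500 and net_code set to 1
        request(missile)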

You should change your config file a bit:

Old config:

[tank]
plugin_phantom=
plugin_bfg=yandextank.plugins.bfg

New config:

[tank]
plugin_phantom=
plugin_bfg=yandextank.plugins.Bfg

And your scenario code becomes simpler:

Old code:

import sys
import os
from Queue import Queue

import logging
import time

from contextlib import contextmanager
from collections import namedtuple

Sample = namedtuple(
        'Sample', 'marker,threads,overallRT,httpCode,netCode,sent,received,connect,send,latency,receive,accuracy')

# this context manager is now inside tank's code
@contextmanager
def measure(marker, queue):
    start_ms = time.time()

    resp_code = 0
    try:
        yield
    except Exception as e:
        print marker, e
        resp_code = 110

    response_time = int((time.time() - start_ms) * 1000)

    # in the old approach you measured these yourself inside the scenario;
    # here they are simply stubbed with zeroes
    connect_time = 0
    latency_time = 0

    data_item = Sample(
            marker,         # tag
            1,              # threads
            response_time,  # overallRT
            200,            # httpCode
            resp_code,      # netCode
            0,              # sent
            0,              # received
            connect_time,   # connect
            0,              # send
            latency_time,   # latency
            0,              # receive
            0,              # accuracy
    )
    queue.put((int(time.time()), data_item), timeout=5)
    if resp_code != 0:
        raise RuntimeError


def shoot(missile, marker, results):
    with measure("markerOfRequest", results):
        <...your test actions...>
        # if there was an exception, the context manager (above)
        # will set response code to 110

New code:

import sys
import os

import logging

def shoot(missile, marker, measure):
    with measure("markerOfRequest") as di:
        try:
            <...your test actions...>
        except RuntimeError as e:
            # set your exit code
            di['proto_code'] = 500
        finally:
            <...some finishing work...>

The measuring context yields a data item dict; you can set its fields as you like:

data_item = {
    "send_ts": start_time, # the time when you entered context
    "tag": marker,         # the marker you passed to the context
    "interval_real": None, # will be set automatically on context exit, if you leave None here
    "connect_time": 0,
    "send_time": 0,
    "latency": 0,
    "receive_time": 0,
    "interval_event": 0,
    "size_out": 0,
    "size_in": 0,
    "net_code": 0,         # should be int
    "proto_code": 200,     # should be int
}
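
For example, a sketch of a shoot function that fills in some of these fields itself (do_request() and the size calculations are hypothetical; only the dict keys come from the list above):

def shoot(missile, marker, measure):
    with measure(marker) as di:
        # do_request() is a hypothetical helper returning the response body
        response = do_request(missile)
        di["size_out"] = len(missile)   # bytes sent
        di["size_in"] = len(response)   # bytes received
        di["proto_code"] = 200          # protocol code, should be int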

Happy performance measuring!