diff --git a/atom.xml b/atom.xml
index b7ecc3b40..1fc078487 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
- 2015-07-01T12:17:16+03:00
+ 2015-07-27T13:34:41+03:00
http://atodorov.org/
@@ -13,6 +13,53 @@
Octopress
+
+
+
+ 2015-07-27T13:04:00+03:00
+ http://atodorov.org/blog/2015/07/27/call-for-ideas-graphical-test-coverage-reports
+ If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don’t use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
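+
+
For reference, a minimal sketch of how such an aggregate report can be produced
+with coverage.py (the test file names below are placeholders):
+
+# each run in parallel mode writes its own .coverage.<host>.<pid> data file
+coverage run --parallel-mode test_case_1.py
+coverage run --parallel-mode test_case_2.py
+
+# merge all data files into a single .coverage file and report on it
+coverage combine
+coverage report -m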
+
+
As far as I can tell Python’s coverage doesn’t tell you how many times a particular
+line of code has been executed. It also doesn’t tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
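+
+
A condensed sketch of the idea (not the actual script linked above; the
+{"file": {"line": hits}} input layout is only an assumption for illustration):
+
+import json, sys
+from collections import defaultdict
+
+# sum per-line hit counts across all JSON files given on the command line
+counts = defaultdict(lambda: defaultdict(int))
+for path in sys.argv[1:]:
+    with open(path) as f:
+        for fname, lines in json.load(f).items():
+            for lineno, hits in lines.items():
+                counts[fname][int(lineno)] += hits
+
+print("<html><body>")
+for fname, lines in sorted(counts.items()):
+    max_hits = max(lines.values())
+    print("<p><b>%s</b></p>" % fname)
+    for lineno in sorted(lines):
+        # darker background means more executions; the title attribute
+        # holds the actual count because there are no visible hints yet
+        alpha = float(lines[lineno]) / max_hits
+        print('<span title="%d" style="background: rgba(35, 154, 59, %.2f)">%d</span>'
+              % (lines[lineno], alpha, lineno))
+print("</body></html>")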
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
+]]>
+
+
@@ -2173,63 +2220,6 @@ the system has. I’ve still not figured that out entirely.
-]]>
-
-
-
-
-
- 2015-03-16T23:16:00+02:00
- http://atodorov.org/blog/2015/03/16/mining-e-mail-identities-with-gravatar
- Recently I’ve laid my hands on a list of a little over 7000 email addresses.
-This raises the question: how many of them are still in use, and what for?
-
-
My data is not fresh so I’ve uploaded the list to Facebook and created a custom
-audience. 2400 of 7129 addresses were recognized - 30% of these addresses are
-on Facebook and easy to target! I need to figure out which ones.
-
-
I could have tried some sort of batch search combined with the custom audience
-functionality but I didn’t find an API for that and decided not to bother.
-Instead I’ve opted for Gravatar.
Feed gravatars.sh with the email list and it will download all images to the
-current working directory and use the address as the file name. After
-md5sum *@* | cut -f1 -d' ' | sort | uniq -c I quickly noticed the following:
-
-
-
4563 addresses have the a1719586837f0fdac8835f74cf4ef04a checksum; these are
-not found on Gravatar.
-
2400 addresses have the d5fe5cbcc31cff5f8ac010db72eb000c checksum. These are
-addresses which are registered with Gravatar but didn’t bother to change the default
-image.
-
166 remaining addresses, each with a different checksum. These have their custom
-pictures uploaded to Gravatar and are probably much more actively used.
-
-
-
-
A second check with Facebook reveals 900 out of these 2566 addresses were recognized.
-This raises the question: is Facebook showing incorrect stats, or are there 1500 addresses
-using Gravatar (or that have used it at some point) which are not on Facebook?
-
-
At least some of the remaining 4000 addresses are still active and used to send emails.
-Next I will be looking for ways to identify them. Any suggestions and comments are more
-than welcome!
If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don’t use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
+
+
As far as I can tell Python’s coverage doesn’t tell you how many times a particular
+line of code has been executed. It also doesn’t tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
diff --git a/blog/categories/django/atom.xml b/blog/categories/django/atom.xml
index b8c038c64..92813c60c 100644
--- a/blog/categories/django/atom.xml
+++ b/blog/categories/django/atom.xml
@@ -4,7 +4,7 @@
- 2015-07-01T12:17:16+03:00
+ 2015-07-27T13:34:41+03:00
http://atodorov.org/
@@ -13,6 +13,53 @@
Octopress
+
+
+
+ 2015-07-27T13:04:00+03:00
+ http://atodorov.org/blog/2015/07/27/call-for-ideas-graphical-test-coverage-reports
+ If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don't use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
+
+
As far as I can tell Python's coverage doesn't tell you how many times a particular
+line of code has been executed. It also doesn't tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
+]]>
+
+
@@ -242,85 +289,6 @@ not spend anymore time on this problem soon.
current limitations by using Kombu directly
(see this gist) with a transport that
uses either a UNIX domain socket or a name pipe (FIFO) file.
-]]>
-
-
-
-
-
- 2014-11-07T15:48:00+02:00
- http://atodorov.org/blog/2014/11/07/speeding-up-celery-backends-part-2
- In the first part of this
-post I looked at a few celery backends and discovered they didn't meet my needs.
-Why is the Celery stack slow? How slow is it actually?
-
-
How slow is Celery in practice
-
-
-
Queue: 500`000 msg/sec
-
Kombu: 14`000 msg/sec
-
Celery: 2`000 msg/sec
-
-
-
-
Detailed test description
-
-
There are three main components of the Celery stack:
-
-
-
Celery itself
-
Kombu which handles the transport layer
-
Python's Queue(), underlying everything
-
-
-
-
Using the Queue and Kombu tests
-run for 1 000 000 messages I got the following results:
-
-
-
Raw Python Queue: Msgs per sec: 500`000
-
Raw Kombu without Celery where kombu/utils/__init__.py:uuid() is set to return 0
-
-
-
with json serializer: Msgs per sec: 5`988
-
with pickle serializer: Msgs per sec: 12`820
-
with the custom mem_serializer from part 1:
-Msgs per sec: 14`492
-
-
-
-
-
-
Note: when the test is executed with 100K messages mem_serializer yielded
-25`000 msg/sec, after which the performance saturates. I've observed similar behavior
-with raw Python Queue()s. I saw some cache buffers being managed internally to avoid OOM
-exceptions. This is probably the main reason performance becomes saturated over a longer
-execution.
-
-
-
Using celery_load_test.py modified to
-loop 1 000 000 times I got 1908.0 tasks created per sec.
-
-
-
-
Another interesting thing worth outlining - in the kombu test there are these lines:
-
with producers[connection].acquire(block=True) as producer:
-    for j in range(1000000):
-        ...
-
-
If we swap them the performance drops down to 3875 msg/sec which is comparable with the
-Celery results. Indeed, inside Celery there's the same with producer.acquire(block=True)
-construct, which is executed every time a new task is published. Next I will be looking
-into this to figure out exactly where the slowness comes from.
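-
-
In other words the slow variant re-acquires the producer on every iteration. A sketch
-of the swapped construct (the publishing body is elided):
-
-for j in range(1000000):
-    with producers[connection].acquire(block=True) as producer:
-        ...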
diff --git a/blog/categories/fedora-planet/atom.xml b/blog/categories/fedora-planet/atom.xml
index 1bc77c586..82991777b 100644
--- a/blog/categories/fedora-planet/atom.xml
+++ b/blog/categories/fedora-planet/atom.xml
@@ -4,7 +4,7 @@
- 2015-07-01T12:17:16+03:00
+ 2015-07-27T13:34:41+03:00
http://atodorov.org/
@@ -13,6 +13,53 @@
Octopress
+
+
+
+ 2015-07-27T13:04:00+03:00
+ http://atodorov.org/blog/2015/07/27/call-for-ideas-graphical-test-coverage-reports
+ If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don't use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
+
+
As far as I can tell Python's coverage doesn't tell you how many times a particular
+line of code has been executed. It also doesn't tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
+]]>
+
+
@@ -164,54 +211,6 @@ but I have no idea what the status is. For more info see:
-]]>
-
-
-
-
-
- 2015-05-04T22:27:00+03:00
- http://atodorov.org/blog/2015/05/04/thunderbolt-to-ethernet-adapter-on-linux
- As it seems my
-Thunderbolt to gigabit Ethernet adapter
-works with
-RHEL 7 on a MacBook Air
-despite some reports it may not.
-
-
After plugging it in, the device is automatically recognized and the tg3 driver is loaded.
-Detailed lspci below:
-
-
0a:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet PCIe
- Subsystem: Apple Inc. Device 00f6
- Physical Slot: 9
- Flags: bus master, fast devsel, latency 0, IRQ 19
- Memory at cd800000 (64-bit, prefetchable) [size=64K]
- Memory at cd810000 (64-bit, prefetchable) [size=64K]
- [virtual] Expansion ROM at cd820000 [disabled] [size=64K]
- Capabilities: [48] Power Management version 3
- Capabilities: [50] Vital Product Data
- Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
- Capabilities: [a0] MSI-X: Enable+ Count=6 Masked-
- Capabilities: [ac] Express Endpoint, MSI 00
- Capabilities: [100] Advanced Error Reporting
- Capabilities: [13c] Device Serial Number 00-00-ac-87-a3-25-20-33
- Capabilities: [150] Power Budgeting <?>
- Capabilities: [160] Virtual Channel
- Capabilities: [1b0] Latency Tolerance Reporting
- Kernel driver in use: tg3
-
-
-
Unplugging the network cable and plugging it back in works as expected.
-I did see my computer freeze 2 out of 10 times when I've unplugged the Thunderbolt
-adapter but couldn't reproduce it reliably or grab more info.
-
-
For the record this is with kernel 3.10.0-229.1.2.el7.x86_64 which is missing
-this
-upstream commit.
-I'm not sure why it works though.
-
-
If I remember correctly tg3 is available during installation so you should
-be able to use the Thunderbolt adapter instead of WiFi as well.
diff --git a/blog/categories/fedora/atom.xml b/blog/categories/fedora/atom.xml
index a224bf4d9..20c060379 100644
--- a/blog/categories/fedora/atom.xml
+++ b/blog/categories/fedora/atom.xml
@@ -4,7 +4,7 @@
- 2015-07-01T12:17:16+03:00
+ 2015-07-27T13:34:41+03:00
http://atodorov.org/
@@ -13,6 +13,53 @@
Octopress
+
+
+
+ 2015-07-27T13:04:00+03:00
+ http://atodorov.org/blog/2015/07/27/call-for-ideas-graphical-test-coverage-reports
+ If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don't use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
+
+
As far as I can tell Python's coverage doesn't tell you how many times a particular
+line of code has been executed. It also doesn't tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
+]]>
+
+
@@ -222,341 +269,6 @@ without removing the default IPv4 one targetcli will throw an error:For more information about targetcli usage see my previous post
How to Configure iSCSI Target on Red Hat Enterprise Linux 7.
-]]>
-
-
-
-
-
- 2015-04-07T15:52:00+03:00
- http://atodorov.org/blog/2015/04/07/how-to-configure-iscsi-target-on-red-hat-enterprise-linux-7
- Linux-IO (LIO) Target is an open-source implementation of the SCSI target that
-has become the standard one included in the Linux kernel and the one present in
-Red Hat Enterprise Linux 7. The popular scsi-target-utils package is replaced
-by the newer targetcli which makes configuring a software iSCSI target quite
-different.
-
-
In earlier versions one had to edit the /etc/tgtd/targets.conf file and
-run service tgtd restart. Here is an example configuration:
targetcli can be used either as an interactive shell or as standalone commands.
-Here is an example shell session which creates a file-based disk image. Comments are
-provided inline:
-
-# yum install -y targetcli
-# systemctl enable target
-# targetcli
-
-# first create a disk image with the name of disk1. All files are sparsely created.
-/> backstores/fileio create disk1 /var/lib/libvirt/images/disk1.img 10G
-Created fileio disk1 with size 10737418240
-
-# create an iSCSI target. NB: this only defines the target
-/> iscsi/ create iqn.2015-04.com.example:target1
-Created target iqn.2015-04.com.example:target1.
-Created TPG 1.
-Global pref auto_add_default_portal=true
-Created default portal listening on all IPs (0.0.0.0), port 3260.
-
-# TPGs (Target Portal Groups) allow the iSCSI to support multiple complete
-# configurations within one target. This is useful for complex quality-of-service
-# configurations. targetcli will automatically create one TPG when the target
-# is created, and almost all setups only need one.
-
-# switch to TPG definition for our target
-/> cd iscsi/iqn.2015-04.com.example:target1/tpg1
-
-# list the contents
-/iscsi/iqn.20...:target1/tpg1> ls
-o- tpg1 ............................................. [no-gen-acls, no-auth]
-  o- acls ........................................................ [ACLs: 0]
-  o- luns ........................................................ [LUNs: 0]
-  o- portals .................................................. [Portals: 1]
-    o- 0.0.0.0:3260 ................................................... [OK]
-
-# create a portal, aka IP:port pairs which expose the target on the network
-/iscsi/iqn.20...:target1/tpg1> portals/ create
-Using default IP port 3260
-Binding to INADDR_ANY (0.0.0.0)
-This NetworkPortal already exists in configFS.
-
-# create logical units (LUNs) aka disks inside our target,
-# in other words bind the target to its on-disk storage
-/iscsi/iqn.20...:target1/tpg1> luns/ create /backstores/fileio/disk1
-Created LUN 0.
-
-# disable authentication
-/iscsi/iqn.20...:target1/tpg1> set attribute authentication=0
-Parameter authentication is now '0'.
-
-# enable read/write mode
-/iscsi/iqn.20...:target1/tpg1> set attribute demo_mode_write_protect=0
-Parameter demo_mode_write_protect is now '0'.
-
-# Enable generate_node_acls mode. This can be thought of as
-# "ignore ACLs mode" -- both authentication and LUN mapping
-# will then use the TPG settings.
-/iscsi/iqn.20...:target1/tpg1> set attribute generate_node_acls=1
-Parameter generate_node_acls is now '1'.
-
-/iscsi/iqn.20...:target1/tpg1> ls
-o- tpg1 ................................................ [gen-acls, no-auth]
-  o- acls ........................................................ [ACLs: 0]
-  o- luns ........................................................ [LUNs: 1]
-  | o- lun0 ............. [fileio/disk1 (/var/lib/libvirt/images/disk1.img)]
-  o- portals .................................................. [Portals: 1]
-    o- 0.0.0.0:3260 ................................................... [OK]
-
-# exit or Ctrl+D will save the configuration under /etc/target/saveconfig.json
-/iscsi/iqn.20...:target1/tpg1> exit
-Global pref auto_save_on_exit=true
-Last 10 configs saved in /etc/target/backup.
-Configuration saved to /etc/target/saveconfig.json
-
-# after creating a second target the layout looks like this
-/> ls
-o- / .................................................................. [...]
-  o- backstores ....................................................... [...]
-  | o- block .......................................... [Storage Objects: 0]
-  | o- fileio ......................................... [Storage Objects: 2]
-  | | o- disk1 . [/var/lib/libvirt/images/disk1.img (10.0GiB) write-back activated]
-  | | o- disk2 . [/var/lib/libvirt/images/disk2.img (10.0GiB) write-back activated]
-  | o- pscsi .......................................... [Storage Objects: 0]
-  | o- ramdisk ........................................ [Storage Objects: 0]
-  o- iscsi ..................................................... [Targets: 2]
-  | o- iqn.2015-04.com.example:target1 ............................ [TPGs: 1]
-  | | o- tpg1 .......................................... [gen-acls, no-auth]
-  | |   o- acls .................................................. [ACLs: 0]
-  | |   o- luns .................................................. [LUNs: 1]
-  | |   | o- lun0 ....... [fileio/disk1 (/var/lib/libvirt/images/disk1.img)]
-  | |   o- portals ............................................ [Portals: 1]
-  | |     o- 0.0.0.0:3260 ............................................. [OK]
-  | o- iqn.2015-04.com.example:target2 ............................ [TPGs: 1]
-  |   o- tpg1 .......................................... [gen-acls, no-auth]
-  |     o- acls .................................................. [ACLs: 0]
-  |     o- luns .................................................. [LUNs: 1]
-  |     | o- lun0 ....... [fileio/disk2 (/var/lib/libvirt/images/disk2.img)]
-  |     o- portals ............................................ [Portals: 1]
-  |       o- 0.0.0.0:3260 ............................................. [OK]
-  o- loopback .................................................. [Targets: 0]
-
-# enable CHAP and Reverse CHAP (mutual) for both discovery and login authentication
-# discovery authentication is enabled under the global iscsi node
-/> cd /iscsi
-/iscsi> set discovery_auth enable=1
-/iscsi> set discovery_auth userid=IncomingUser
-/iscsi> set discovery_auth password=SomePassword1
-/iscsi> set discovery_auth mutual_userid=OutgoingUser
-/iscsi> set discovery_auth mutual_password=AnotherPassword2
-
-# login authentication is enabled either under the TPG node or under ACLs
-/iscsi> cd iqn.2015-04.com.example:target1/tpg1
-/iscsi/iqn.20...:target1/tpg1> set attribute authentication=1
-/iscsi/iqn.20...:target1/tpg1> set auth userid=IncomingUser2
-/iscsi/iqn.20...:target1/tpg1> set auth password=SomePassword3
-/iscsi/iqn.20...:target1/tpg1> set auth mutual_userid=OutgoingUser2
-/iscsi/iqn.20...:target1/tpg1> set auth mutual_password=AnotherPassword4
-/iscsi/iqn.20...:target1/tpg1> exit
-
-
Hints:
-
-
-
activating the target service at boot is mandatory, otherwise your configuration won’t be read after a reboot
-
if you type cd, targetcli will display an interactive node tree
-
after configuration is saved you don't need to restart anything
-
the old scsi-target-utils doesn't support discovery authentication
-
targetcli allows kernel memory to be shared as a block SCSI device via the
-ramdisk backstore. It also supports "nullio" mode, which discards all writes, and returns all-zeroes for reads.
-
I'm having trouble configuring portals to listen on both any IPv4 address and any IPv6 address
-the system has. I've still not figured that out entirely.
diff --git a/blog/categories/qa/atom.xml b/blog/categories/qa/atom.xml
index 4ed49054a..ff37cd5bc 100644
--- a/blog/categories/qa/atom.xml
+++ b/blog/categories/qa/atom.xml
@@ -4,7 +4,7 @@
- 2015-07-01T12:17:16+03:00
+ 2015-07-27T13:34:41+03:00
http://atodorov.org/
@@ -13,6 +13,53 @@
Octopress
+
+
+
+ 2015-07-27T13:04:00+03:00
+ http://atodorov.org/blog/2015/07/27/call-for-ideas-graphical-test-coverage-reports
+ If you are working with Python and writing unit tests, chances are you are
+familiar with the coverage reporting
+tool. However, there are testing scenarios in which we either don't use unit tests
+or execute different code paths (test cases) independently of each other.
+
+
For example, this is the case with installation testing in Fedora. Because anaconda,
+the installer, is very complex, the easiest way is to test it live, not with unit tests.
+Even though we can get a coverage report (anaconda is written in Python) it reflects
+only the test case it was collected from.
+
+
coverage combine can be used to combine several data files and produce an aggregate
+report. This can tell you how much test coverage you have across all your tests.
+
+
As far as I can tell Python's coverage doesn't tell you how many times a particular
+line of code has been executed. It also doesn't tell you which test cases executed
+a particular line
+(see PR #59).
+In the Fedora example, I have the feeling many of our tests are touching the same
+code base and not contributing that much to the overall test coverage.
+So I started working on these items.
+
+
I imagine a script which will read coverage data from several test executions
+(preferably in JSON format,
+PR #60) and produce a
+graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions, lighter for fewer
+executions. Check the HTML for the actual numbers because there are no hints yet.
+The input JSON files are
+here and
+the script to generate the above HTML is at
+GitHub.
+
+
Now I need your ideas and comments!
+
+
What kinds of coverage reports are you using in your job? How do you generate them?
+What do they look like?
+]]>
+
+
@@ -189,33 +236,6 @@ level 137 in the Owl part of the game (recorded by somebody else):
-]]>
-
-
-
-
-
- 2014-12-22T15:46:00+02:00
- http://atodorov.org/blog/2014/12/22/blackberry-z10-is-killing-my-wifi-router
- A few days ago I've resurrected my BlackBerry Z10 only to find out that it kills
-my WiFi router shortly after connecting to the network.
-It looks like many people are having the same problem with BlackBerry but most forum
-threads don't offer a meaningful solution so I did some tests.
-
-
Everything works fine when WiFi mode is set to either 11bgn mixed or 11n only and
-WiFi security is disabled.
-
-
When using WPA2/Personal security mode and AES encryption the problem occurs
-regardless of which WiFi mode is used. There is another type of encryption called TKIP
-but the device itself warns that this is not supported by the 802.11n specification
-(all my devices use it anyway).
-
-
So to recap:
-BlackBerry Z10 causes my TP-Link router to die if using WPA2/Personal security with
-AES Encryption. Switching to open network with MAC address filtering works fine!
-
-
I haven't had the time to upgrade the firmware of this router and see if the problem persists.
-Most likely I'll just go ahead and flash it with OpenWRT.
Celery is an asynchronous task queue/job queue
+based on distributed message passing. You can define tasks as Python functions,
+execute them in the background and in a periodic fashion.
+Difio uses Celery for virtually everything.
+Some of the tasks are scheduled after some event takes place (like a user pressing a button),
+others are scheduled periodically.
+
+
Celery provides several components of which celerybeat is the periodic task scheduler.
+When combined with Django it gives you a very nice admin interface
+which allows periodic tasks to be added to the scheduler.
+
+
Why change
+
+
Difio has relied on celerybeat for a couple of months. Back then, when Difio launched,
+there was no cron support for OpenShift so running celerybeat sounded reasonable.
+It used to run on a dedicated virtual server and for most of the time that was fine.
+
+
There were a number of issues which Difio faced during its first months:
+
+
+
celerybeat would sometimes die due to no free memory on the virtual instance.
+When that happened no new tasks were scheduled and data was left unprocessed.
+Not to mention that a higher-memory instance and the processing power which comes with it
+cost extra money.
+
Difio is split into several components which need to have the same code base
+locally - the most important are database settings and the periodic tasks
+code. On at least one occasion celerybeat failed to start because of buggy
+task code. The offending code was fixed in the application server on OpenShift but
+not properly synced to the celerybeat instance. Keeping code in sync is a priority
+for distributed projects which rely on Celery.
+
Celery and django-celery seem to be updated quite often. This poses a significant risk
+of ending up with different versions on the scheduler, worker nodes and the app server. This will
+bring the whole application to a halt if at some point a backward incompatible change is introduced
+and not properly tested and updated. Keeping infrastructure components in sync can be a big challenge
+and I try to minimize this effort as much as possible.
+
Having to navigate to the admin pages every time I add a new task or want to change the execution
+frequency doesn’t feel very natural for a console user like myself and IMHO is less productive.
+For the record I primarily use mcedit. I wanted to have something closer to the
+write, commit and push work-flow.
+
+
+
+
The take over
+
+
It’s been some time since OpenShift
+introduced
+the cron cartridge and I decided to give it a try.
+
+
The first thing I did was to write a simple script which can execute any task from the difio.tasks module
+by piping it to the Django shell (a Python shell actually).
#!/bin/bash
+#
+# Copyright (c) 2012, Alexander Todorov <atodorov@nospam.otb.bg>
+#
+# This script is symlinked to from the hourly/minutely, etc. directories
+#
+# SYNOPSIS
+#
+# ./run_celery_task cron_search_dates
+#
+# OR
+#
+# ln -s run_celery_task cron_search_dates
+# ./cron_search_dates
+#
+
+TASK_NAME=$1
+[ -z "$TASK_NAME" ] && TASK_NAME=$(basename $0)
+
+if [ -n "$OPENSHIFT_APP_DIR" ]; then
+    source $OPENSHIFT_APP_DIR/virtenv/bin/activate
+    export PYTHON_EGG_CACHE=$OPENSHIFT_DATA_DIR/.python-eggs
+    REPO_DIR=$OPENSHIFT_REPO_DIR
+else
+    REPO_DIR=$(dirname $0)"/../../.."
+fi
+
+echo "import difio.tasks; difio.tasks.$TASK_NAME.delay()" | $REPO_DIR/wsgi/difio/manage.py shell
+
+
+
+
This is a multicall script which allows symlinks with different names to point to it.
+Thus to add a new task to cron I just need to make a symlink to the script from one of the
+hourly/, minutely/, daily/, etc. directories under cron/.
+
+
The script accepts a parameter as well which allows me to execute it locally for debugging purposes
+or to schedule some tasks out of band.
+This is how it looks on the file system:
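+
A hypothetical sketch (the directory names follow the .openshift/cron layout
+convention of the cron cartridge; the original listing is assumed, not reproduced):
+
+# run_celery_task lives under cron/, symlinks live in the schedule directories
+.openshift/
+`-- cron/
+    |-- run_celery_task
+    `-- hourly/
+        `-- cron_search_dates -> ../run_celery_task
+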
After having done these preparations I only had to embed the cron cartridge and git push to OpenShift:
+
+
rhc-ctl-app -a difio -e add-cron-1.4 && git push
+
+
+
What’s next
+
+
At present OpenShift can schedule your jobs every minute, hour, day, week or month and does so using the
+run-parts script. You can’t schedule a script to execute at 4:30 every Monday or every 45 minutes for example.
+See rhbz #803485 if you want to follow the
+progress. Luckily Difio doesn’t use this sort of job scheduling for the moment.
+
+
Difio is scheduling periodic tasks from OpenShift cron for a few days already.
+It seems to work reliably and with no issues. One less component to maintain and worry about.
+More time to write code.
Recently I’ve laid my hands on a list of a little over 7000 email addresses.
+This raises the question: how many of them are still in use, and what for?
+
+
My data is not fresh so I’ve uploaded the list to Facebook and created a custom
+audience. 2400 of 7129 addresses were recognized - 30% of these addresses are
+on Facebook and easy to target! I need to figure out which ones.
+
+
I could have tried some sort of batch search combined with the custom audience
+functionality but I didn’t find an API for that and decided not to bother.
+Instead I’ve opted for Gravatar.
Feed gravatars.sh with the email list and it will download all images to the
+current working directory and use the address as the file name. After
+md5sum *@* | cut -f1 -d' ' | sort | uniq -c I quickly noticed the following:
+
+
+
4563 addresses have the a1719586837f0fdac8835f74cf4ef04a checksum; these are
+not found on Gravatar.
+
2400 addresses have the d5fe5cbcc31cff5f8ac010db72eb000c checksum. These are
+addresses which are registered with Gravatar but didn’t bother to change the default
+image.
+
166 remaining addresses, each with a different checksum. These have their custom
+pictures uploaded to Gravatar and are probably much more actively used.
+
+
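A minimal sketch of what such a fetching script could look like (a hypothetical
+reconstruction, not the original gravatars.sh; it assumes Gravatar’s
+md5-of-the-lowercased-address URL scheme):
+
+#!/bin/bash
+# usage: gravatars.sh email-list.txt
+while read email; do
+    # Gravatar keys avatars on the md5 hash of the lowercased address
+    hash=$(echo -n "$email" | tr '[:upper:]' '[:lower:]' | md5sum | cut -f1 -d' ')
+    # save the image under the address itself as file name
+    curl -s -o "$email" "https://www.gravatar.com/avatar/$hash"
+done < "$1"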
+
+
A second check with Facebook reveals 900 out of these 2566 addresses were recognized.
+This raises the question: is Facebook showing incorrect stats, or are there 1500 addresses
+using Gravatar (or that have used it at some point) which are not on Facebook?
+
+
At least some of the remaining 4000 addresses are still active and used to send emails.
+Next I will be looking for ways to identify them. Any suggestions and comments are more
+than welcome!
Recently I’ve purchased a
-wireless range extender
-like the one shown here. It had trouble connecting to the upstream Wi-Fi router
-because the router used MAC filtering instead of password security. Luckily there was
-a forum thread which helped
-me figure it out.
-
-
DAP 1320 uses two MAC addresses
-
-
Everything was working just fine with MAC filtering disabled on the upstream
-router but failed miserably when enabled. I thought the MAC address provided
-on the DAP 1320 packaging was wrong.
-
-
It turned out the device had 2 addresses.
-The one on the packaging is 70:62:B8:07:0B:76 and it didn’t matter if that
-was enabled or disabled in the router settings. The second MAC is used when
-forwarding connections through the router. The two addresses differ only in the
-second hex digit, by a value of 2 - likely the locally administered bit being
-set on the second one. So I’ve enabled 72:62:B8:07:0B:76
-in the router settings and everything worked like a charm.
-
-
Other findings
-
-
Unfortunately if a device is connected to the wifi extender’s network it will
-bypass the MAC filtering employed on the upstream wifi router. As much as I dislike
-using passwords for Wi-Fi I had to configure one for the extended network.
-
-
I’ve also found that when you save the configuration file from the device on your
-hard drive it comes in a base64-encoded-line-by-line format. Pretty awkward.
-
-
Another pleasant (but not entirely surprising) finding was that D-Link included
-a written acknowledgment of using open source components and an offer to provide
-source code upon request.
Recently I’ve purchased a
+wireless range extender
+like the one shown here. It had trouble connecting to the upstream Wi-Fi router
+because the router used MAC filtering instead of password security. Luckily there was
+a forum thread which helped
+me figure it out.
+
+
DAP 1320 uses two MAC addresses
+
+
Everything was working just fine with MAC filtering disabled on the upstream
+router but failed miserably when enabled. I thought the MAC address provided
+on the DAP 1320 packaging was wrong.
+
+
It turned out the device had 2 addresses.
+The one on the packaging is 70:62:B8:07:0B:76 and it didn’t matter if that
+was enabled or disabled in the router settings. The second MAC is used when
+forwarding connections through the router. The two addresses differ only in the
+second hex digit, by a value of 2 - likely the locally administered bit being
+set on the second one. So I’ve enabled 72:62:B8:07:0B:76
+in the router settings and everything worked like a charm.
+
+
Other findings
+
+
Unfortunately if a device is connected to the wifi extender’s network it will
+bypass the MAC filtering employed on the upstream wifi router. As much as I dislike
+using passwords for Wi-Fi I had to configure one for the extended network.
+
+
I’ve also found that when you save the configuration file from the device on your
+hard drive it comes in a base64-encoded-line-by-line format. Pretty awkward.
+
+
Another pleasant (but not entirely surprising) finding was that D-Link included
+a written acknowledgment of using open source components and an offer to provide
+source code upon request.
Join upstream and create a test suite for a package you find interesting;
-
Provide patches - first patch
-came in less than 30 minutes of initial announcement :);
-
Review packages in the wiki and help identify false negatives;
-
Forward to people who may be interested to work on these items;
-
Share and promote in your local open source and developer communities;
-
-
-
-
Auto BuildRequires
-
-
Auto-BuildRequires
-is a simple set of scripts which complements rpmbuild by
-automatically suggesting BuildRequires lines for the just built package.
-
-
It would be interesting to have this integrated into Koji and/or a
-continuous integration environment and compare the output between every two
-consecutive builds (i.e. older and newer package versions). It sounds like a
-good way to identify newly added or removed dependencies and update the package
-specs accordingly.
I’ve come across a few fonts packages (amiri-fonts, gnu-free-fonts and thai-scalable-fonts)
-which seem to have some sort of test suites but I don’t know how they work or
-what type of problems they test for. On top of that all three have a different
-way of doing things (e.g. not using a standardized test framework or a variation of such).
-
-
I’ll keep you posted on this once I manage to get more info from upstream developers.
-
-
Is URL Field in RPM Useless
-
-
So is it? Opinions here differ from totally useless to “don’t remove it, I need it”.
-However I ran a small test and out of 2574 RPMs on the source DVD around
-40% returned “something different than HTTP 200 OK”. This means 40% potentially broken URLs!
-
-
The majority are responses in the 3XX range and fewer than 10% are
-actual errors (4XX, 5XX, missing URLs or connection errors).
-
-
It will be interesting to see if this can be removed from rpm altogether.
-I don’t think it will happen soon but if we don’t use it why have it there?
Join upstream and create a test suite for a package you find interesting;
+
Provide patches - first patch
+came in less than 30 minutes of initial announcement :);
+
Review packages in the wiki and help identify false negatives;
+
Forward to people who may be interested to work on these items;
+
Share and promote in your local open source and developer communities;
+
+
+
+
Auto BuildRequires
+
+
Auto-BuildRequires
+is a simple set of scripts which complements rpmbuild by
+automatically suggesting BuildRequires lines for the just built package.
+
+
It would be interesting to have this integrated into Koji and/or a
+continuous integration environment and compare the output between every two
+consecutive builds (i.e. older and newer package versions). It sounds like a
+good way to identify newly added or removed dependencies and update the package
+specs accordingly.
I’ve come across a few fonts packages (amiri-fonts, gnu-free-fonts and thai-scalable-fonts)
+which seem to have some sort of test suites but I don’t know how they work or
+what type of problems they test for. On top of that all three have a different
+way of doing things (e.g. not using a standardized test framework or a variation of such).
+
+
I’ll keep you posted on this once I manage to get more info from upstream developers.
+
+
Is URL Field in RPM Useless
+
+
So is it? Opinions here differ from totally useless to “don’t remove it, I need it”.
+However I ran a small test and out of 2574 RPMs on the source DVD around
+40% returned “something different than HTTP 200 OK”. This means 40% potentially broken URLs!
+
+
The majority are responses in the 3XX range and fewer than 10% are
+actual errors (4XX, 5XX, missing URLs or connection errors).
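+
+
A sketch of how such a check could be run (a hypothetical reconstruction, not
+the exact commands used for the test above):
+
+# query the URL tag of every source RPM and count the HTTP status codes
+for pkg in *.src.rpm; do
+    url=$(rpm -qp --qf '%{URL}' "$pkg" 2>/dev/null)
+    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url")
+    echo "$code $url"
+done | cut -f1 -d' ' | sort | uniq -c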
+
+
It will be interesting to see if this can be removed from rpm altogether.
+I don’t think it will happen soon but if we don’t use it why have it there?
In the last few weeks I’ve been working together with
-Tim Flink and
-Kamil Paral from the Fedora QA
-team on bringing some installation testing expertise to Fedora and establishing
-an open source test lab
-to perform automated testing in. The infrastructure is
-already in relatively usable condition so I’ve decided to share this information
-with the community.
-
-
Beaker is Running Our Test Lab
-
-
Beaker is the software suite that powers the test
-lab infrastructure. It is quite complex, with many components and sometimes not
-very straightforward to set up. Tim has been working on that with me giving it
-a try and reporting issues as they have been discovered and fixed.
-
-
In the process of working on this I’ve managed to create a
-couple of patches
-against Beaker and friends. They are still pending release in a future version
-because of more urgent bug fixes which need to be released first.
-
-
SNAKE is The Kickstart Template Server
-
-
SNAKE is a client/server Python framework used
-to support Anaconda installations. It supports plain text ks.cfg files, IIRC those
-were static templates, no variable substitution.
-
-
The other possibility is Python templates based on Pykickstart:
At the moment SNAKE is essentially abandoned but feature complete.
-I’m thinking about adopting the project just in case we need to make some fixes.
-Will let you know more about this when it happens.
-
-
Open Source Test Suite
-
-
I have been working on opening up several test cases for what we (QE) call
-a tier #1 installation test suite. They can be found in
-git.
-The tests are based on beakerlib and
-the legacy RHTS framework which is now part of Beaker.
-
-
This effort has been coordinated with Kamil as part of a pilot
-project he’s responsible for. I’ve been executing the same test suite against
-earlier Fedora 20 snapshots but using an internal environment. Now everything
-is going out in the open.
-
-
Executing The Tests
-
-
Well you can’t do that - YET! There are command line client tools for Fedora
-but Beaker and SNAKE are not well suited for use outside a restricted network
-like LAN or VPN. There are issues with authentication most notably for SNAKE.
-
-
At the moment I have to ssh through two different systems to get proper access.
-However this is being worked on. I’ve read about a rewrite which will allow Beaker
-to utilize a custom authentication framework like FAS for example. Hopefully that
-will be implemented soon enough.
-
-
I would also like to see the test systems have direct access to the Internet for
-various reasons but this is not without its risks either. This is still to be
-decided.
-
-
If you are interested anyway see the kick-tests.sh file in the test suite for
-examples and command line options.
-
-
Test Results
-
-
The first successfully completed
-test jobs are jobs 50 to 58.
-There’s a failure in one of the test cases, namely SELinux related
-RHBZ #1027148.
-
-
From what I can tell the lab is now working as expected and we can start doing
-some testing against Fedora development snapshots.
-
-
Ping me or join #fedora-qa on irc.freenode.net if you’d like to join Fedora QA!
In the last few weeks I’ve been working together with
+Tim Flink and
+Kamil Paral from the Fedora QA
+team on bringing some installation testing expertise to Fedora and establishing
+an open source test lab
+to perform automated testing in. The infrastructure is
+already in relatively usable condition so I’ve decided to share this information
+with the community.
+
+
Beaker is Running Our Test Lab
+
+
Beaker is the software suite that powers the test
+lab infrastructure. It is quite complex, with many components and sometimes not
+very straightforward to set up. Tim has been working on that with me giving it
+a try and reporting issues as they have been discovered and fixed.
+
+
In the process of working on this I’ve managed to create a
+couple of patches
+against Beaker and friends. They are still pending release in a future version
+because of more urgent bug fixes which need to be released first.
+
+
SNAKE is The Kickstart Template Server
+
+
SNAKE is a client/server Python framework used
+to support Anaconda installations. It supports plain text ks.cfg files, IIRC those
+were static templates, no variable substitution.
+
+
The other possibility is Python templates based on Pykickstart:
At the moment SNAKE is essentially abandoned but feature complete.
+I’m thinking about adopting the project just in case we need to make some fixes.
+Will let you know more about this when it happens.
+
+
Open Source Test Suite
+
+
I have been working on opening up several test cases for what we (QE) call
+a tier #1 installation test suite. They can be found in
+git.
+The tests are based on beakerlib and
+the legacy RHTS framework which is now part of Beaker.
+
+
This effort has been coordinated with Kamil as part of a pilot
+project he’s responsible for. I’ve been executing the same test suite against
+earlier Fedora 20 snapshots but using an internal environment. Now everything
+is going out in the open.
+
+
Executing The Tests
+
+
Well you can’t do that - YET! There are command line client tools for Fedora
+but Beaker and SNAKE are not well suited for use outside a restricted network
+like LAN or VPN. There are issues with authentication most notably for SNAKE.
+
+
At the moment I have to ssh through two different systems to get proper access.
+However this is being worked on. I’ve read about a rewrite which will allow Beaker
+to utilize a custom authentication framework like FAS for example. Hopefully that
+will be implemented soon enough.
+
+
I would also like to see the test systems have direct access to the Internet for
+various reasons but this is not without its risks either. This is still to be
+decided.
+
+
If you are interested anyway see the kick-tests.sh file in the test suite for
+examples and command line options.
+
+
Test Results
+
+
The first successfully completed
+test jobs are jobs 50 to 58.
+There’s a failure in one of the test cases, namely SELinux related
+RHBZ #1027148.
+
+
From what I can tell the lab is now working as expected and we can start doing
+some testing against Fedora development snapshots.
+
+
Ping me or join #fedora-qa on irc.freenode.net if you’d like to join Fedora QA!
As this year’s GUADEC is coming to an end
-I’m publishing an interesting update from
-Petr Muller for
-those who were not able to attend.
-Petr is a Senior Quality Engineer at Red Hat. His notes were
-sent to an internal QE mailing list and re-published with permission.
-
-
As this year’s GUADEC happened in the same building where I have my other office, I decided to attend. I’m sharing my notes from the two sessions I consider to be especially interesting for the audience of this mailing list:
== How to not report your UX bug ==
Speaker: Fabiana Simões
Blog: http://fabianapsimoes.wordpress.com/
Twitter: https://twitter.com/fabianapsimoes

Do not do this stuff:
* Do not simply present a preferred solution, but describe a problem (a difficulty you are having, etc.)
* Do not use the “This sucks” idiom, not even hidden in false niceties like “It’s not user friendly”
* Do not talk for the majority when you are not entitled to (“most users would like”)
* Do not consider all UX issues as minor: an inability to do stuff is not a minor issue

What is actually interesting for the designer in a report?
* What were you trying to do?
* Why did you want to do it?
* What did you do?
* What happened?
* What were your expectations?

More notes:
* Write as much as needed
* Describe what you see, what you did and *how you felt*
* Print screen is your friend!
* *Give praise*

== Extreme containment measures: keeping bug reports under control ==
Speaker: Jean-Francois Fortin Tam
Homepage: http://jeff.ecchi.ca
Twitter: https://twitter.com/nekohayo

Discussed the problem a lot of OS projects are having: lots of useless (old, irrelevant, waiting for a decision no one wants to make) bug/rfe reports in their bug tracking systems. Lots of food for thought about our own projects, internal or external. Clever applications of principles from personal productivity systems such as GTD and Inbox Zero for bug tracking.

The talk was mostly an applied version of this blog post, which is worth reading: http://jeff.ecchi.ca/blog/2012/10/08/reducing-our-core-apps-software-inventory/
-
-
-
I particularly like the UX bug reporting guidelines. Need to take those into
-account when reporting UI issues.
-
-
I still haven’t read the second blog post which also looks interesting although
-not very applicable to me. After all I’m the person reporting bugs not the one
-who decides what and when gets fixed.
As this year’s GUADEC is coming to an end
+I’m publishing an interesting update from
+Petr Muller for
+those who were not able to attend.
+Petr is a Senior Quality Engineer at Red Hat. His notes were
+sent to an internal QE mailing list and re-published with permission.
+
+
As this year’s GUADEC happened in the same building where I have my other office, I decided to attend. I’m sharing my notes from the two sessions I consider to be especially interesting for the audience of this mailing list:
== How to not report your UX bug ==
Speaker: Fabiana Simões
Blog: http://fabianapsimoes.wordpress.com/
Twitter: https://twitter.com/fabianapsimoes

Do not do this stuff:
* Do not simply present a preferred solution, but describe a problem (a difficulty you are having, etc.)
* Do not use the “This sucks” idiom, not even hidden in false niceties like “It’s not user friendly”
* Do not talk for the majority when you are not entitled to (“most users would like”)
* Do not consider all UX issues as minor: an inability to do stuff is not a minor issue

What is actually interesting for the designer in a report?
* What were you trying to do?
* Why did you want to do it?
* What did you do?
* What happened?
* What were your expectations?

More notes:
* Write as much as needed
* Describe what you see, what you did and *how you felt*
* Print screen is your friend!
* *Give praise*

== Extreme containment measures: keeping bug reports under control ==
Speaker: Jean-Francois Fortin Tam
Homepage: http://jeff.ecchi.ca
Twitter: https://twitter.com/nekohayo

Discussed the problem a lot of OS projects are having: lots of useless (old, irrelevant, waiting for a decision no one wants to make) bug/rfe reports in their bug tracking systems. Lots of food for thought about our own projects, internal or external. Clever applications of principles from personal productivity systems such as GTD and Inbox Zero for bug tracking.

The talk was mostly an applied version of this blog post, which is worth reading: http://jeff.ecchi.ca/blog/2012/10/08/reducing-our-core-apps-software-inventory/
+
+
+
I particularly like the UX bug reporting guidelines. Need to take those into
+account when reporting UI issues.
+
+
I still haven’t read the second blog post which also looks interesting although
+not very applicable to me. After all I’m the person reporting bugs not the one
+who decides what and when gets fixed.
I will donate an
-Asus
-eeePC and a
-Fujitsu
-laptop plus all books
-from my Give Away List, which are not currently taken.
-Because this is not much I have an offer for everyone else, who would like to help.
-
-
-
-
-
What is the offer
-
-
Give a book or your old laptop and get a new one with a discount!
-
-
My company
-Open Technologies Bulgaria, Ltd. is an authorized reseller of Vali Computers and
-Fujitsu. Hardware reselling is not the main company activity but a backup in case a customer
-wants to purchase an entire solution from one vendor.
-
-
I will not charge the standard reseller’s discount (between 5% and 10%) if you drop off your books
-or old laptops with me and agree to donate them to children.
-The offer is valid as long as the donation campaign is (I don’t know for how long, but it looks ongoing).
-
-
You can select anything from http://www.vali.bg with the reseller’s discount off!
-Delivery or pick-up is on you though.
-
-
If you want to participate use the comments below and I will get in touch with you.
I will donate an
+Asus
+eeePC and a
+Fujitsu
+laptop plus all books
+from my Give Away List, which are not currently taken.
+Because this is not much I have an offer for everyone else, who would like to help.
+
+
+
+
+
What is the offer
+
+
Give a book or your old laptop and get a new one with a discount!
+
+
My company
+Open Technologies Bulgaria, Ltd. is an authorized reseller of Vali Computers and
+Fujitsu. Hardware reselling is not the main company activity but a backup in case a customer
+wants to purchase an entire solution from one vendor.
+
+
I will not charge the standard reseller’s discount (between 5% and 10%) if you drop off your books
+or old laptops with me and agree to donate them to children.
+The offer is valid as long as the donation campaign is (I don’t know for how long, but it looks ongoing).
+
+
You can select anything from http://www.vali.bg with the reseller’s discount off!
+Delivery or pick-up is on you though.
+
+
If you want to participate use the comments below and I will get in touch with you.
Elsys (in Bulgarian TUES) is a technology school in Sofia.
-It is a subsidiary of Technical University of Sofia and this week they’ve celebrated their 25th anniversary.
-Elsys is not an ordinary school, they teach computer science to these young kids.
-And they do it pretty damn well. At the moment it’s the best
-school to study IT (software, hardware, networks) in the country, contrary to what TU Sofia has
-become :(.
-
-
As one of the school sponsors I met lots of the students
-and want to show everyone else what they are doing. I have no doubts we will be hearing more about
-them in the future.
-
-
Robots first
-
-
So these boys and girls make robots. I was there when the first image was taken.
-It was this week, on Thursday, April 25th, at an educational fair. All visitors were
-fascinated by the robots and stopped by to watch and play with them. I personally
-wanted to see and play with the quadcopter shown above but it was not available that
-day.
-
-
While I was there, a guy approached the kids and said his
-company wants to fund development of another quadcopter. He wanted a bigger one, which
-is able to carry equipment for aerial photographs.
-
-
What shook me was that
-this is a rare occasion where a local business wants to fund R&D activities.
-Not to mentions these are school boys, not university students or research fellows
-where this is more
-common. And this happened days after the news about the quadcopter has been released
-in the social media.
-
-
-
-
-
Elsys also teaches Arduino classes where students play with home made robots. I personally
-have attended a robots competition held in the school where these small robots compete
-and sometimes fight with one another.
-
-
Did I mention they take part in First Lego League too? Just see the
-photos.
-
-
Open source
-
-
When not making robots students from Elsys hack open source and as it happened,
-one of them won the grand prize in Google Code-In 2012
-(article in Bulgarian).
-For the last few years kids from Elsys are taking part in Google Code-In and according
-to the school website
-they’ve made $7300 from Google :). Over 40
-boys and girls took part in the first
-edition of Google Code-In. That’s 10% of all participants.
-
-
I’m sure Google and others were impressed by the fact so many good developers
-are coming from a single school. Aren’t you?
As I said I’m a school sponsor. Probably the smallest one. If you want to help
-these kids and their school just let me know. I will put you in touch with the
-principal.
-
-
Alternatively you can donate your time and knowledge and start teaching an interesting
-class at school!
-
-
Or you can donate high quality IT books if you have such. Anything helps.
During the past month one of my cell phones, a Nokia 5800 XpressMusic, was not showing the caller name when a friend was calling. The number in the contacts list was correct but the name wasn't showing, nor was the custom assigned ringtone. It turned out to be a bug!

The story behind this is that the same number had accidentally been saved again in the contacts list, but without a name assigned to it. The software was matching the latter entry, so no custom ringtone and no name were shown. Removing the duplicate entry fixed the issue. The software version of this phone is

v 21.0.025
RM-356
02-04-09

I wondered what would happen with multiple duplicates and whether this was fixed in a later software version, so I tested with another phone, a Nokia 6303. Its software version is

V 07.10
25-03-10
RM-638

Step 0 - add the number to the contacts list, with the name Buddy 1.
Step 1 - add the same number to the contacts, with an empty name.
Result: you get a warning that this number is already present for Buddy 1! When receiving a call, Buddy 1 is displayed.
Step 2 - edit the empty name contact and change the name to Buddy 2.
Result: when receiving a call, Buddy 2 is displayed.
Step 3 - add the same number again, with the name Buddy 0. This is the latest entry but it is sorted before the previous two (this is important).
Result: you get a warning that this number is already present for Buddy 1 and Buddy 2. When receiving a call, Buddy 0 is displayed.

Summary: it looks like Nokia fixed the issue with empty names by simply ignoring them, but when multiple duplicate contacts are present it displays the name of the entry that was entered last, regardless of name sort order.
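To make that behaviour easier to reason about, here is a toy model of the lookup. This is my own sketch for illustration only; Nokia's actual firmware logic is of course unknown:

# Toy model of the caller-ID lookup behaviour observed on the Nokia 6303.
# An illustration only - the real firmware logic is unknown.
def caller_name(contacts, number):
    """contacts is a list of (name, number) tuples in insertion order."""
    match = None
    for name, num in contacts:
        if num == number and name:  # empty names are ignored (the 6303 fix)
            match = name            # later entries win, regardless of sort order
    return match

contacts = [
    ('Buddy 1', '555-1234'),
    ('Buddy 2', '555-1234'),  # the renamed empty-name duplicate
    ('Buddy 0', '555-1234'),  # added last, sorts first
]
print(caller_name(contacts, '555-1234'))  # prints Buddy 0, as in Step 3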
Later today or tomorrow I will test on a Nokia 700, which runs Symbian OS, and will update this post with more results.

Updated on 2013-03-19 23:50

Finally managed to test on a Nokia 700. The software version is:

Release: Nokia Belle Feature Pack 1
Software version: 112.010.1404
Software version date: 2012-03-30
Type: RM-670

Result: if a duplicate contact entry is present, it doesn't matter whether the name is empty or not. Both times no name was displayed when receiving a call. It looks like Nokia is not paying attention to regressions at all.

Android and iPhone

I don't own any Android or iPhone devices so I'm not able to test on them. If you have one, please let me know if this bug is still present and how the software behaves when multiple contacts share the same number or have empty names! Thanks!
Celery is an asynchronous task queue/job queue based on distributed message passing. You can define tasks as Python functions and execute them in the background or in a periodic fashion. Difio uses Celery for virtually everything. Some of the tasks are scheduled after some event takes place (like a user pressing a button), others are scheduled periodically.
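For readers who haven't used Celery, defining and triggering a background task looks roughly like this. A minimal sketch with illustrative names, not Difio's actual code; the broker URL is an assumption:

# tasks.py - a minimal Celery task definition
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')  # illustrative broker URL

@app.task
def send_notification(user_id):
    # runs in the background on a worker node
    print('notifying user %s' % user_id)

# callers schedule it without blocking:
# send_notification.delay(42)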
Celery provides several components, of which celerybeat is the periodic task scheduler. When combined with Django it gives you a very nice admin interface which allows periodic tasks to be added to the scheduler.
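Besides the database-backed scheduler behind the admin interface, celerybeat can also read a static schedule from the Django settings. A sketch; the schedule name is made up and the task path is borrowed from the script below:

# settings.py - a static celerybeat schedule, an alternative to the admin UI
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'search-dates-every-hour': {
        'task': 'difio.tasks.cron_search_dates',
        'schedule': timedelta(hours=1),
    },
}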
Why change

Difio had relied on celerybeat for a couple of months. Back then, when Difio launched, there was no cron support on OpenShift, so running celerybeat sounded reasonable. It used to run on a dedicated virtual server and most of the time that was fine.

There were a number of issues which Difio faced during its first months:

celerybeat would sometimes die due to no free memory on the virtual instance. When that happened no new tasks were scheduled and data was left unprocessed. Not to mention that a higher-memory instance, and the processing power which comes with it, costs extra money.
Difio is split into several components which need to have the same code base locally - the most important parts being the database settings and the periodic tasks code. On at least one occasion celerybeat failed to start because of buggy task code. The offending code was fixed on the application server on OpenShift but not properly synced to the celerybeat instance. Keeping code in sync is a priority for distributed projects which rely on Celery.
Celery and django-celery seem to be updated quite often. This poses a significant risk of ending up with different versions on the scheduler, the worker nodes and the app server. That would bring the whole application to a halt if at some point a backward-incompatible change is introduced and not properly tested and rolled out everywhere. Keeping infrastructure components in sync can be a big challenge and I try to minimize this effort as much as possible.
Having to navigate to the admin pages every time I add a new task or want to change the execution frequency doesn't feel very natural for a console user like myself and IMHO is less productive. For the record, I primarily use mcedit. I wanted something closer to the write, commit and push work-flow.

The take over

It's been some time since OpenShift introduced the cron cartridge and I decided to give it a try.

The first thing I did was to write a simple script which can execute any task from the difio.tasks module by piping it to the Django shell (a Python shell actually).
#!/bin/bash
#
# Copyright (c) 2012, Alexander Todorov <atodorov@nospam.otb.bg>
#
# This script is symlinked to from the hourly/minutely, etc. directories
#
# SYNOPSIS
#
# ./run_celery_task cron_search_dates
#
# OR
#
# ln -s run_celery_task cron_search_dates
# ./cron_search_dates
#

TASK_NAME=$1
[ -z "$TASK_NAME" ] && TASK_NAME=$(basename $0)

if [ -n "$OPENSHIFT_APP_DIR" ]; then
    source $OPENSHIFT_APP_DIR/virtenv/bin/activate
    export PYTHON_EGG_CACHE=$OPENSHIFT_DATA_DIR/.python-eggs
    REPO_DIR=$OPENSHIFT_REPO_DIR
else
    REPO_DIR=$(dirname $0)"/../../.."
fi

echo "import difio.tasks; difio.tasks.$TASK_NAME.delay()" | $REPO_DIR/wsgi/difio/manage.py shell
This is a multicall script which allows symlinks with different names to point to it. Thus to add a new task to cron I just need to make a symlink to the script from one of the hourly/, minutely/, daily/, etc. directories under cron/.

The script accepts a parameter as well, which allows me to execute it locally for debugging purposes or to schedule some tasks out of band. On the file system this is simply a set of symlinks, e.g. cron/hourly/cron_search_dates pointing back to run_celery_task.

After having done these preparations I only had to embed the cron cartridge and git push to OpenShift:

rhc-ctl-app -a difio -e add-cron-1.4 && git push
What's next

At present OpenShift can schedule your jobs every minute, hour, day, week or month and does so using the run-parts script. You can't schedule a script to execute at 4:30 every Monday, or every 45 minutes, for example. See rhbz #803485 if you want to follow the progress. Luckily Difio doesn't use this sort of job scheduling for the moment.

Difio has been scheduling periodic tasks from OpenShift cron for a few days already. It seems to work reliably and with no issues. One less component to maintain and worry about. More time to write code.