This repo covers setting up a basic testing suite with GitHub badges for a C/C++ library. It's not meant to be a deep tutorial on testing, just to cover the basics of setting up unit tests, coverage tests, and continuous integration (in this case using Travis-CI). The repo doesn't have a lot of code - there is a simple library which is tested for coverage and integration.

### Motivation
I just wanted to make a small standalone test project to see the tools and workflow for C (or C++) language testing.

copyright (C) <2016 and onward> <M. A. Chatterjee> <deftio [at] deftio [dot] com>
version 1.0.1 (updated for travis-ci.com transition) M. A. Chatterjee

## Features

### Quick Overview of Testing

There are many different phases of testing. Here are a few areas, phrased as questions.

Common testing "questions" about a project:
* Does it run as intended? (Is it functionally correct? Does it do what it's supposed to do?)
* Does it have side effects when running? (Are resources tied up, such as ports blocked or thread contention? Are other programs or services affected unintentionally?)
* Are all the possible permutations of execution tested? (Code coverage: is every piece of code, every if-then branch, exercised?)
* How much memory or other resources are used? (Is memory used efficiently? Is memory freed correctly after use? When the program is complete, does it leave things intact?)
* Does it exit gracefully? (Are any requested resources released before the code exits?)



### Unit Testing

Unit Testing is the practice of writing small tests to check that a piece of code, typically a full module or library, passes a set of tests showing it runs as intended. Simple unit tests are written right after writing a function. We then make a small program (the unit test program) which calls our new function with as many different example parameters as we think are appropriate to make sure it works correctly. If the results returned match the expected results, we can say the function passes the test. If the results for a given set of parameters don't agree, we call an assertion (usually via a special ASSERT type macro) which records the failure and attempts to keep running the remaining tests in our test program. The goal is to craft a set of these tests which exercises all the possible paths of execution in our code and passes all the tests.

Note that it's not the goal to create a test that covers every possible permutation of the input parameters - this could be an impossibly large number of variations even for just a few parameters. The idea of testing all the possible paths of execution is called code coverage. Code coverage is measured with tools which check whether the test program has successfully "challenged" the target library code by examining whether each execution path (or line of code) has been run.

For example, if there is a function like this:

```C
int add5ifGreaterThan2 (int a) {
    int r;

    if (a < 2)
        r = a;          // values less than 2 are returned unchanged
    else
        r = a + 5;      // otherwise add 5

    return r;
}
```
Our test program for add5ifGreaterThan2() needs to supply values of a that are both less than and greater than 2 so that both paths of the if statement
```C
if (a<2)
```

are tested.

We do this with test code such as this:

```C
//code in test program ...
ASSERT (add5ifGreaterThan2(1) == 1) // supplies value of 'a' that tests the if (a<2) case and tests for a result
ASSERT (add5ifGreaterThan2(3) == 8) // supplies value of 'a' that tests the if (a>2) case and tests for a result

```
Of course this example is very simple, but it gives a general idea of how the parameters need to be chosen to make sure both sides of the if clause in the example are run by the test program. The ASSERT macro checks whether the result is logically true. If it is not, it triggers an error-handling process. Depending on the testing framework used, different types of logging and output can be examined when a statement fails.

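ASSERT-style macros differ between frameworks. As a minimal sketch of the idea (illustrative only, not the macro any particular framework uses), a plain-C version might evaluate the condition, print the failing expression with its file and line, and count the failure so the rest of the tests keep running:

```C
#include <stdio.h>

static int gFailCount = 0;   /* running count of failed assertions */

/* minimal illustrative ASSERT: report a failure and keep running the tests */
#define ASSERT(cond)                                                    \
    do {                                                                \
        if (!(cond)) {                                                  \
            printf("FAIL: %s (%s:%d)\n", #cond, __FILE__, __LINE__);    \
            gFailCount++;                                               \
        }                                                               \
    } while (0)
```

Real frameworks add richer reporting, per-test setup and teardown, and summaries, but the check-report-continue loop is the same basic idea.
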
#### More info
Here is a link to the Wikipedia article for more depth on unit testing practice and history: [Unit testing](https://en.wikipedia.org/wiki/Unit_testing).
### Frameworks
To make unit testing easier to automate, unit testing frameworks have been written to help test results from function calls, gather statistics about passing/failing test cases, and report the results in a consistent way.

Unit testing frameworks are available in most languages, and many have names ending in "Unit" (JUnit for Java, CppUnit for C++, etc.).

We'll be using Google Test (called gtest) here so first we need to install it.
Here is the link to the project source: [Google Test](https://github.com/google/googletest).

Examples here are built using Ubuntu Linux, but should apply to most other operating systems.
On Ubuntu Linux you can install gtest using the commands below. If you are developing on another system, refer to the documentation link for install procedures. Other than installing, all of the commands and test procedures we'll be using later will be the same (whether Windows / MacOS / POSIX / Linux).
```bash
sudo apt-get install libgtest-dev
sudo apt-get install cmake # install cmake
# libgtest-dev installs the gtest sources; build the static libraries with cmake
cd /usr/src/gtest
sudo cmake CMakeLists.txt
sudo make

# make the built libraries visible to the linker
sudo cp *.a /usr/lib
sudo mkdir -p /usr/local/lib/gtest
sudo ln -s /usr/lib/libgtest.a /usr/local/lib/gtest/libgtest.a
sudo ln -s /usr/lib/libgtest_main.a /usr/local/lib/gtest/libgtest_main.a
```

You can read more about the Google Test project here: [Testing Primer](https://github.com/google/googletest/blob/master/googletest/docs/Primer.md)
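
For a sense of what this looks like in practice, here is a minimal sketch of a gtest test file for the add5ifGreaterThan2() example from earlier. The file name and build command are assumptions for illustration, not files from this repo:

```cpp
// test_lib.cpp - hypothetical gtest version of the earlier ASSERT examples
#include <gtest/gtest.h>

extern "C" {
    int add5ifGreaterThan2(int a);   // C function under test (see the example above)
}

TEST(Add5IfGreaterThan2, ValueBelowTwoIsUnchanged) {
    EXPECT_EQ(add5ifGreaterThan2(1), 1);   // exercises the if (a < 2) branch
}

TEST(Add5IfGreaterThan2, LargerValueGetsFiveAdded) {
    EXPECT_EQ(add5ifGreaterThan2(3), 8);   // exercises the else branch
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();   // returns 0 only if every test passed
}
```

Building it would look something like `gcc -c lib.c && g++ test_lib.cpp lib.o -lgtest -lpthread -o test_lib.out` (the exact flags depend on how gtest was installed).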


===========================
The lib.h / lib.c files are broken out as examples of testing an embedded library. Most of the projects I work on are for embedded systems so I wanted a way to get a build badge for these embedded projects. Since many of those compilers and environments are not on Linux I wanted just a simple abstraction of how the Travis build project works without all the complexities of a "real" project.


## Testing vs Continuous Integration

In this demo project there is a C library (could also be C++ etc). The library code is just a few demo functions which are in the lib.h and lib.c files. They don't really do anything but allow for simple abstraction of what is necessary to build a larger project.


Once you've made unit tests and gotten your code to pass the local test suite, the next step starts. How does an *outsider* know your code passes tests? This is where continuous integration (CI) comes in. CI uses services (such as Travis-CI, Circle-CI, Jenkins, and many others) to automatically run your test suites and then report the result. When a CI service runs your test suite it can be configured to accept or reject your code based on whether the tests pass. This in turn can be used to automatically deploy your code, which is called Continuous Deployment (CD) or pipelines. CD and pipelines are beyond the scope of this repo and example.

## Using Travis-CI as an example of build-badge and CI

Travis-CI looks in the .travis.yml file (note: that is dot travis dot yml) to see how to build and run the code. In this case it first calls make, which compiles lib.c and example.c into lib.o and example.o and then links them to produce the final executable called example.out. If you look inside the file example.c you will see there are a few hand-written test cases. They are not meant to be a complete example of how to write test cases, just a simple view of how the tests will be run in general. The main() function calls the local function run_tests(), which in turn calls each individual test case. Rather than link in a larger test framework such as CppUnit, there is a trivial set of test functions, one for each function in the lib.c library. If run_tests() is able to run all the tests successfully it will return to main() with a value of S_OK; otherwise it will return a failure code. This value is then returned from the main() program in example.out on exit.

Travis-CI then runs example.out and looks at the exit code from the main() function. Being a POSIX-style system, an exit code of zero from example.out is considered passing, so Travis-CI will declare the build passing. If a non-zero value is returned, Travis-CI will declare the build failing. To sum up, the primary means for Travis-CI to know whether the test suite passes is the exit code returned by the test suite executable, which in our case is example.out.
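
Below is a minimal sketch of that pattern. The real example.c may be organized differently, and the names other than main(), run_tests(), and S_OK are assumptions made for illustration:

```C
#include <stdio.h>

#define S_OK   0   /* all tests passed                              */
#define S_FAIL 1   /* illustrative failure code (name assumed here) */

/* stand-in for the library function under test (see earlier example) */
static int add5ifGreaterThan2(int a) { return (a < 2) ? a : a + 5; }

/* one hand-written test per library function */
static int test_add5ifGreaterThan2(void) {
    if (add5ifGreaterThan2(1) != 1) return S_FAIL;
    if (add5ifGreaterThan2(3) != 8) return S_FAIL;
    return S_OK;
}

/* call each test case in turn and report the overall result */
static int run_tests(void) {
    if (test_add5ifGreaterThan2() != S_OK) return S_FAIL;
    /* ... more per-function tests would be called here ... */
    return S_OK;
}

int main(void) {
    int result = run_tests();
    printf(result == S_OK ? "all tests passed\n" : "tests FAILED\n");
    return result;   /* exit code 0 tells the CI service the build passes */
}
```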

## Code Coverage
Code coverage is achieved using gcov, which is part of the gcc toolchain. The example.out test program is compiled with the flags -ftest-coverage -fprofile-arcs. To see the code coverage run:

```bash
make clean
make
./test-library.out
gcov lib.c
```

which will generate the file

```bash
lib.c.gcov
```
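
Opening lib.c.gcov shows the source annotated with execution counts. As a rough illustration only (the counts and line numbers below are made up, as if only add5ifGreaterThan2(3) had been called), executed lines are prefixed with how many times they ran, non-executable lines show a dash, and lines marked ##### were never executed and still need a test:

```
        -:   10:int add5ifGreaterThan2 (int a) {
        1:   13:    if (a < 2)
    #####:   14:        r = a;
        1:   16:        r = a + 5;
        1:   18:    return r;
```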

The repo also includes a helper script, run_coverage_test.sh, which wraps these steps:

```bash
#!/bin/bash

# this shell script calls the code coverage testing program gcov (part of the gcc suite)
# you can run each command on your own at the command line

# first clean all object files
make clean
```
