diff --git a/docs/Header-Only-Apps.md b/docs/Header-Only-Apps.md new file mode 100644 index 0000000..ff679c6 --- /dev/null +++ b/docs/Header-Only-Apps.md @@ -0,0 +1,19 @@ +Header-Only Apps +================ +The modern buzz in library development is the 'header-only' format, in which the goal is to provide the code in inline headers rather than separating declarations and definitions into separate files. Taking this one step further, I submit that the benefits of header-only designs extend to application development as well and that adopting them is a wise practice. The purpose of separating declarations from definitions in classic C++ was to work around resource limitations of computers in an era that has long since passed. Old habits die hard, but the separation is no longer necessary. Arguably, splitting code into .h and .c or .hpp and .cpp files is a bad practice by today's standards. + +The basic C++ compilation process passes individual implementation units through the preprocessor to generate a temporary compilation unit. The compilation unit is then passed through the compiler to produce an intermediate object. Typically, all of the application's .c/.cpp files are processed this way to produce a number of intermediate files which are finally collected by the linker to produce the resulting binary. It was necessary in ancient times to break down compilation this way because systems simply didn't have enough resources to compile all the code in a project at once. + +Each file that's included in a compilation unit must pass through the tokenizer and parser before reaching the compiler. Tokenizing and parsing can account for the majority of compile time, depending on the project, so precompiled headers were introduced to help out. PCH is a solution to a problem that was created as the result of a solution to another problem which no longer exists. And though PCH can help improve compile times in binaries with separate declarations and definitions, neither is necessary by today's standards.
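As a concrete, minimal sketch of the style being argued for (the file and type names here are hypothetical, not from any real project), a header-only unit keeps every definition inline at the point of declaration:

```cpp
// widget.hpp -- a hypothetical header-only unit: declaration and
// definition live together, so an interface change is made in one place
#include <cassert>

class Widget {
public:
  void resize(int w, int h) { m_width = w; m_height = h; }
  // defined inline where it is declared; no companion widget.cpp exists
  int area() const { return m_width * m_height; }
private:
  int m_width = 1;
  int m_height = 1;
};
```

A main.cpp holding the entry point would simply `#include "widget.hpp"`, making it the application's single compilation unit.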
+ +I submit that the best format for today's application code is a single .cpp file which contains the application's entry point, with all other code contained in inline headers. Adopting this practice feels a bit strange at first but the gains quickly make it preferable to the classic format. + +Reduced code is always a win. When all the code is contained in inline headers the separate declarations are eliminated, which amounts to reduced technical debt. If an interface or parameter needs to change then it only needs to change in one place. + +With the reduction in code comes a reduction in files: half of them, to be exact. There's no longer a need to bounce around between multiple files to modify a single logical unit of code. Classic C++ suggested declarations in one file and definitions in another. Halving the file maintenance is a big win in the technical debt department. + +Compile times can be significantly faster than with separate declarations and definitions. A single compilation unit also builds faster than a PCH-enabled binary. PCH works by caching a copy of the AST to disk after a compilation unit has passed through the tokenizer and parser, then re-using the AST when another compilation unit requests the same headers. This skips the tokenizing and parsing passes for code that has already been encountered. A single compilation unit gets tokenized, parsed and passed through the compiler only once, so there's no room for PCH to improve on it. Enabling PCH on such a binary would only add the step of writing out an AST cache that is never used. + +Increased productivity comes with less code, fewer files and faster compile times. The bean counters are always happy about increased output, and further productivity is gained from the faster binaries that result from the build. + +A final relic which is no longer needed is 'link-time code generation' (LTCG), another 'fix' for the original problem which no longer exists.
Multiple compilation units mean inefficiencies in the generated intermediate code. Modern linkers try to get around some of these inefficiencies by using LTCG to recompile portions of the intermediate objects. Many inefficiencies are reduced with LTCG but many more can still remain and wind up in the resulting executable binary. The most efficient binary results from giving everything to the compiler in a single compilation unit. When the compiler has complete visibility into all the executable code it can optimize more aggressively, and the resulting binary is often smaller too. diff --git a/docs/LICENSE.md.md b/docs/LICENSE.md.md new file mode 100644 index 0000000..44da875 --- /dev/null +++ b/docs/LICENSE.md.md @@ -0,0 +1,23 @@ +# Boost Software License - Version 1.0 - August 17th, 2003 + +Permission is hereby granted, free of charge, to any person or organization +obtaining a copy of the software and accompanying documentation covered by +this license (the "Software") to use, reproduce, display, distribute, +execute, and transmit the Software, and to prepare derivative works of the +Software, and to permit third-parties to whom the Software is furnished to +do so, all subject to the following: + +The copyright notices in the Software and this entire statement, including +the above license grant, this restriction and the following disclaimer, +must be included in all copies of the Software, in whole or in part, and +all derivative works of the Software, unless such copies or derivative +works are solely in the form of machine-executable object code generated by +a source language processor. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT.
IN NO EVENT +SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE +FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/docs/Parsing.md b/docs/Parsing.md new file mode 100644 index 0000000..3f63488 --- /dev/null +++ b/docs/Parsing.md @@ -0,0 +1,77 @@ +Parsing +======= + +Parsing is an extensive subject and a frequent stumbling block for even experienced engineers. Due to the repetitive nature of parsing code, the sheer number of points of failure and the frequently changing format of inputs, parsing code is often a significant resource sink for software projects. + +Parsing is frequently outlined with a grammar specification in [Backus-Naur Form (BNF)](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form) which defines the terminals and non-terminals that must be tagged in the input stream. Some tools accept the BNF grammar directly as input and generate all the parsing code as output. These tools are called parser-generators and they can save a lot of pain and time. Though some parser-generators accept custom language specifications or decorate BNF with various text-handling code rather than accepting BNF exclusively, the basic product is normally the same: a language specification is taken as input and all the boilerplate parsing code is generated as output. In most cases the resulting generated parser includes interfaces for the library consumer to interrogate the parsed sources. Frequently, the parsed inputs are then fed as input to another stage which produces an [Abstract Syntax Tree (AST)](https://en.wikipedia.org/wiki/Abstract_syntax_tree). The AST is most often the primary model that applications use to work with the inputs. + +Some applications use neither generated parsers nor an AST.
Some apps do not even abstract the parsing code from the processing code and instead parse the inputs directly in-line with application logic. For all except the most trivial applications this is a recipe for disaster. Abstracting the two tasks is almost always the preferred method. However, doing things 'right' and adhering to best practice is also time-consuming. For example, using the traditional GNU tools to handle parsing is an entire discipline of its own that complicates the build process and requires a modicum of domain expertise before any application logic is ever addressed. Modern tools such as Boost::Spirit and ANTLR suffer from the same problem of requiring a detailed study of the library to get a simple parse task accomplished. + +When the introduction of a tool that is intended to simplify a problem makes the problem more complicated, the tool loses its utility and usefulness. Since these tools require a significant investment to learn and use properly, it's no surprise that so many developers opt to hand-roll a parser rather than learn yet another library. With all that said, I introduce yet another library. + +Introduction +------------ + +XTL::parse uses template meta-programming techniques to generate parse trees from a grammar specification. The grammar specification is written in C++ templates. The library is header-only and the parse trees are generated at compile time so there are no libraries to link and no external tools to run as part of the build process. A unique feature of XTL::parse is that the grammar specification gets instantiated as the AST when the parse is successful, which eliminates the need for the additional import step common to similar tools. XTL::parse is a simple LL(k) parser that encourages embedding the grammar specification in the AST for simplicity and clarity.
+ +To illustrate the entire process, here's a simple BNF grammar describing the command line syntax of example_parse1.cpp: + +~~~{.cpp} +//terminals +<red> := 'red' +<green> := 'green' +<blue> := 'blue' +<one> := '1' +<three> := '3' +<five> := '5' +<dash_color> := '--color=' +<dash_prime> := '--prime=' +//rules +<rgb> := <red> | <green> | <blue> +<prime_num> := <one> | <three> | <five> +<color_param> := <dash_color> <rgb> +<prime_param> := <dash_prime> <prime_num> +<parameter> := <color_param> | <prime_param> +~~~ +> If the intention of this BNF syntax is unclear there are plenty of tutorials on the web. + +This is the sort of BNF that is frequently encountered in RFCs, white papers, and programming language and protocol specifications. It maps into an XTL::parse grammar specification as: + +~~~{.cpp} +//terminals +STRING(red, "red"); +STRING(green, "green"); +STRING(blue, "blue"); +STRING(one, "1"); +STRING(three, "3"); +STRING(five, "5"); +STRING(dash_color, "--color="); +STRING(dash_prime, "--prime="); +//rules +using rgb = or_<red, green, blue>; +using prime_num = or_<one, three, five>; +using color_param = and_<dash_color, rgb>; +using prime_param = and_<dash_prime, prime_num>; +using parameter = or_<color_param, prime_param>; +~~~ + +The `parameter` may be either `--prime=<prime_num>` or `--color=<rgb>`. `<rgb>` and `<prime_num>` may be `red`, `green` or `blue` and `1`, `3` or `5` respectively. This example uses a mixture of preprocessor macros and template aliases to define the grammar. The C++ representation is more verbose due to C++ language requirements but it maps to the BNF line-for-line. + +Using this specification to parse command line parameters is a matter of passing the start rule and the string to a parser: + +~~~{.cpp} +int main(int argc, char * argv[]){ + std::string sParam = argv[1]; + auto oAST = xtd::parser<parameter>::parse(sParam.begin(), sParam.end()); + if (!oAST){ + //parse failed, show usage or error + }else{ + //work with parsed parameters + } +} +~~~ +Done and done. + +AST Generation +-------------- +An AST is instantiated and returned by `xtd::parser<>::parse()` when the parse is successful. The AST is an object model that represents the parsed grammar.
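The internals of XTL::parse are beyond the scope of this page, but the core idea of encoding alternation (`or_`) and sequence (`and_`) rules as templates can be sketched in a few lines. This is a hedged illustration only: the names mirror the grammar above, yet none of this is the actual xtd implementation.

```cpp
#include <cstring>
#include <string>

// consume a literal if the input starts with it at position p
inline bool lit(const std::string& s, std::size_t& p, const char* t){
  std::size_t n = std::strlen(t);
  if (s.compare(p, n, t) == 0) { p += n; return true; }
  return false;
}
// terminals: each grammar rule is a type with a static match() member
struct red        { static bool match(const std::string& s, std::size_t& p){ return lit(s, p, "red"); } };
struct green      { static bool match(const std::string& s, std::size_t& p){ return lit(s, p, "green"); } };
struct blue       { static bool match(const std::string& s, std::size_t& p){ return lit(s, p, "blue"); } };
struct dash_color { static bool match(const std::string& s, std::size_t& p){ return lit(s, p, "--color="); } };

// or_ succeeds when any alternative matches, rewinding between attempts
template <typename...> struct or_;
template <typename T> struct or_<T> {
  static bool match(const std::string& s, std::size_t& p){ return T::match(s, p); }
};
template <typename T, typename... R> struct or_<T, R...> {
  static bool match(const std::string& s, std::size_t& p){
    std::size_t save = p;
    if (T::match(s, p)) return true;
    p = save;
    return or_<R...>::match(s, p);
  }
};
// and_ succeeds when both parts match in sequence
template <typename A, typename B> struct and_ {
  static bool match(const std::string& s, std::size_t& p){
    std::size_t save = p;
    if (A::match(s, p) && B::match(s, p)) return true;
    p = save;
    return false;
  }
};

using rgb = or_<red, green, blue>;
using color_param = and_<dash_color, rgb>;
```

Each rule is a type, so the grammar is composed and checked entirely at compile time; only the matching itself happens at run time.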
diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000..ce58873 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,96 @@ +eXtended Template Library +========================= +[![Open Hub project report](https://www.openhub.net/p/libxtl/widgets/project_thin_badge.gif)](https://www.openhub.net/p/libxtl) +[![Travis](https://img.shields.io/travis/djmott/xtl.svg?style=plastic)](https://travis-ci.org/djmott/xtl) +[![Coveralls branch](https://img.shields.io/coveralls/djmott/xtl.svg?style=plastic)](https://coveralls.io/github/djmott/xtl) +[![SonarQube Tech Debt](https://img.shields.io/sonar/https/sonarqube.com/xtl/tech_debt.svg)](https://sonarqube.com/overview?id=xtl) +[![SonarQube Quality Gate](http://nemo.sonarqube.org/api/badges/gate?key=xtl&blinking=true)](https://sonarqube.com/overview?id=xtl) +[![Boost License](https://img.shields.io/badge/license-Boost_Version_1.0-green.svg?style=plastic)](http://www.boost.org/LICENSE_1_0.txt) + +XTL is a public release of portions from a much larger private set of libraries which I've maintained over the years and used in a number of projects. It's primarily a series of C++ template metaprogramming patterns, idioms, algorithms and libraries that solve a variety of programming tasks. It supplements, extends and cooperates with the STL by providing some frequently used components that are otherwise absent from the standard. 
A short list of some of the more notable headers: + +|Header |Description| |--------------------|-----------| |callback.hpp |single producer notifies multiple consumers of an event| |dynamic_library.hpp |load and invoke methods in a dynamic library| |parse.hpp |text parsing and AST generation| |socket.hpp |general purpose socket communication| |source_location.hpp |maintains info about locations within source code| |spin_lock.hpp |simple user mode spin lock based on std::atomic| |string.hpp |advanced and common string handling| |tuple.hpp |manipulate and generate tuples| |unique_id.hpp |global unique identifier / universal unique identifier data type| |var.hpp |multi-type variant using type-erasure| + +### Getting started + +XTL works with modern C++11 compilers and has been tested with MinGW, GCC, Intel C++, Cygwin and Microsoft Visual C++. The library can be used out-of-the-box in many cases by simply including the desired header since most components are header-only. A few components require linking against a compiled run-time component, so they will need to be compiled. + +### Requirements + +* [CMake](http://www.cmake.org) is required to configure +* [libiconv](https://www.gnu.org/software/libiconv/) is optional for Unicode support on Posix platforms. +* [libuuid](https://sourceforge.net/projects/libuuid/) is optional for UUID/GUID support on Posix platforms. (This library has bounced around to several locations over the years. Some documentation says it's included in modern Linux kernel code while others say it's included in the e2fsprogs package. Most modern Linux distros support some version in their respective package managers.) + +### Obtaining + +XTL is hosted on GitHub and is available at https://github.com/djmott/xtl +Check out the repo with git: + +``` +git clone https://github.com/djmott/xtl.git +``` + +### Compiling + +For the most part XTL is a 'header-only' library so compilation isn't necessary.
Nonetheless, it must be configured for use with the compiler and operating system with [CMake](https://cmake.org/). From within the top-level directory: + +``` +mkdir build +cd build +cmake .. +``` +The compilation step is not always necessary depending on the required components that will be used. The method used to compile the run-time code is platform, toolchain and CMake configuration specific. For Linux, Cygwin and MinGW makefiles, just run `make`. +### Using +Several configuration options are available with CMake. For most purposes the default configuration should work fine. Applications should add the `include` folder to the search path. CMake detects the compiler toolchain and target operating system then produces the primary include file. For most applications just including the project header will go a long way: +```{.cpp} + #include <xtd/xtd.hpp> +``` + +### Testing + +XTL uses the [Google Test](https://github.com/google/googletest) framework for unit tests and system tests. From within the build directory: +``` +make unit_tests +``` +The unit tests and system tests are contained in the same resulting binary at `tests/unit_tests`. The `coverage_tests` build target is only available for GCC: +``` +make coverage_tests +``` +This will produce the binary `tests/coverage_tests` which is identical to the `tests/unit_tests` binary but has additional instrumentation enabled for gcov. + +### Documentation + +[Doxygen](http://www.doxygen.org) is used to generate source documentation. The code is fairly well marked up for doxygen generation. After the project has been configured with CMake, generate the documentation with: + +``` +make docs +``` +This will extract the source comments and generate nice documentation in the `docs/html` folder. Also available is the [wiki](https://github.com/djmott/xtl/wiki). + +### Feedback and Issues + +Submit a [ticket](https://github.com/djmott/xtl/issues) on GitHub if a bug is found.
Effort will be made to fix it ASAP. + +### Contributing + +Contributions are appreciated. To contribute, fork +the project, add some code and submit a [pull request](https://github.com/djmott/xtl/pulls). In general, contributions should: +* Clear around 80% in code coverage tests +* Pass the SonarQube quality gate +* Pass unit and system tests +* Pass tests through Valgrind memcheck or some other dynamic analysis with no resource leaks or other significant issues + +### License + +XTL is copyright David Mott and licensed under the Boost Version 1.0 license agreement. See [LICENSE.md](LICENSE.md) or [http://www.boost.org/LICENSE_1_0.txt](http://www.boost.org/LICENSE_1_0.txt) for license details. diff --git a/docs/Sockets.md b/docs/Sockets.md new file mode 100644 index 0000000..b40a708 --- /dev/null +++ b/docs/Sockets.md @@ -0,0 +1,17 @@ +Sockets +======= +The XTL socket library provides low-level and high-level abstractions around sockets that can be mixed and matched to achieve a multitude of interfaces. The various socket behaviors are decomposed into independent policies which are composed at compile time into complex concrete types. Sockets are well suited for a hierarchy generation pattern because various socket types share behaviors in unique ways. The hierarchy generation pattern permits the composition of constituent behaviors without resorting to multiple inheritance. It also permits concrete compositions to contain only the interface elements that make sense and should be present for a particular socket type. + +This concept is probably best explained from the highest level interfaces that will most commonly be used in applications. Here's a pre-defined typedef for an IPv4 UDP socket: + +```{.cpp} +using ipv4_udp_socket = socket_base<ipv4address, socket_type::datagram, socket_protocol::udp, ip_options>; +``` + +The four constituent components that compose an `ipv4_udp_socket` are `ipv4address`, `socket_type::datagram`, `socket_protocol::udp` and `ip_options`.
The `socket_base` template composes these individual behavioral components in a linear object hierarchy that avoids multiple inheritance. Some of these components are used in the IPv4 TCP socket: + +```{.cpp} +using ipv4_tcp_stream = socket_base<ipv4address, socket_type::stream, socket_protocol::tcp, ip_options, connectable_socket, bindable_socket, listening_socket>; +``` + +Additional behavioral policies can be added or removed as desired to achieve a variety of custom interfaces. For example, the `connectable_socket` behavior produces a `connect` method, typically for TCP clients, while the `bindable_socket` and `listening_socket` behaviors provide `bind` and `listen` respectively, typically for TCP servers. So, the predefined `ipv4_tcp_stream` type can be used as both a client and server. If so desired, these behaviors could be declared in separate interfaces to produce independent client and server socket types. diff --git a/docs/TMP-Techniques.md b/docs/TMP-Techniques.md new file mode 100644 index 0000000..b84511c --- /dev/null +++ b/docs/TMP-Techniques.md @@ -0,0 +1,67 @@ +TMP Techniques +============== +There are a few patterns in widespread use across the TMP landscape and it's worth recognizing them. Here are a few of the more common patterns/idioms with examples. + +### Use before declaration +Templates are parsed at compile time but not instantiated until they are used in run-time code. If a template is not used in run-time code then it can contain a myriad of errors that depend on its parameters and the application will compile just fine, since it's never instantiated. This permits a template to contain code that operates on its template parameters before the concrete types are declared, effectively decoupling two class declarations in ways that aren't possible in classic C++. For example, the following will not compile: +```{.cpp} +struct Cat{ + Mouse m_Mouse; +}; +struct Mouse{ + int m_Cheese; +}; + +int main(){ + return 0; +} +``` +Neither Cat nor Mouse is used in run-time code but it won't compile because Cat references Mouse before it is declared, and a forward declaration won't help.
However, consider the following: +```{.cpp} +template <typename _MemberT> struct Cat{ + _MemberT m_Mouse; +}; +struct Mouse{ + int m_Cheese; +}; +int main(){ + Cat<Mouse> oCat; +} +``` +The templated version compiles just fine because the template isn't fully resolved until it's used in the run-time code in `main` (`Cat<Mouse> oCat;`). At this point the template becomes instantiated. All the types are fully defined at the time they're used in run-time code so everything works. + +Expanding on this idiom, a template can interrogate the parameter: +```{.cpp} +#include <iostream> +template <typename _MouseT> struct Cat{ + static bool Catch(){ + return _MouseT::Slow; + } +}; +struct MickeyMouse{ + static const bool Slow = true; +}; +struct MightyMouse{ + static const bool Slow = false; +}; +int main(){ + std::cout << "Catch Mickey " << std::boolalpha << Cat<MickeyMouse>::Catch() << std::endl; + std::cout << "Catch Mighty " << std::boolalpha << Cat<MightyMouse>::Catch() << std::endl; +} +``` +Here again `Cat` uses `_MouseT` but also uses a member named `Slow`. Using members of a template parameter like this requires that any type passed to the `Cat` template have a `Slow` member. In other words, the `_MouseT` parameter must adhere to a _concept_ that `Cat` requires. As long as the type passed as `_MouseT` to the `Cat` template adheres to the _interface concept_ the code is valid. + +Performing similar feats in classic C++ normally requires that a generic interface such as `IMouse` be declared with some pure virtual members that subclasses implement. + +### Curiously Recurring Template Pattern +This one looks odd at first glance and seems that it shouldn't compile. Here's a common use case with the STL: +```{.cpp} +struct MyStruct : std::enable_shared_from_this<MyStruct> ... +``` +`MyStruct` passes itself as a template argument to specialize `std::enable_shared_from_this` before its own definition is complete. It doesn't look like it will compile but it does. This pattern is often used to give the base class visibility into the subclass, something that cannot be easily done in classic OO.
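A runnable sketch of the pattern (the types here are illustrative, not from XTL): the base class casts itself to the derived type it was specialized with and calls a method that only the subclass defines, with no virtual functions involved.

```cpp
#include <cassert>
#include <string>

// CRTP: Animal<_DerivedT> can call into its subclass without virtual dispatch
template <typename _DerivedT> struct Animal {
  std::string speak_twice() {
    // the cast is safe because _DerivedT inherits from Animal<_DerivedT>
    _DerivedT& self = static_cast<_DerivedT&>(*this);
    return self.speak() + " " + self.speak();
  }
};

struct Dog : Animal<Dog> {            // Dog passes itself, mid-definition
  std::string speak() { return "woof"; }
};
```

The base template is only instantiated when used, so `Animal<Dog>` can reference `Dog::speak` even though `Dog` wasn't complete when it subclassed `Animal<Dog>`.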
+ +### Parameterized Base Class +This one also appears curious at first: +```{.cpp} +template <typename _BaseT> struct MyStruct : _BaseT ... +``` +MyStruct is a template that subclasses its template parameter. diff --git a/docs/Template-Meta-Programming.md b/docs/Template-Meta-Programming.md new file mode 100644 index 0000000..fc00990 --- /dev/null +++ b/docs/Template-Meta-Programming.md @@ -0,0 +1,177 @@ +Template Meta-Programming +========================= +The progression of modern C++ can be described as evolving through two distinct paradigms. The first was from legacy C to object-oriented C++ with classes; the modern paradigm is a shift toward [Template Meta Programming](https://en.wikipedia.org/wiki/Template_metaprogramming). + +Meta-Programming differs from conventional programming in a number of ways. A prominent distinction is that meta-programming is more accurately instructing the compiler how and what to compile, rather than instructing a processor what to execute. Meta-programming closely interacts with the compiler, typically before machine code is generated. It is this close interaction with the compiler that makes meta-programming the preferred platform of generic library developers. + +> It's worth noting that I make a distinction between generic and object-oriented. In this context, generic programming may or may not be OO while OO is not generic. OO refers to concrete or abstract class types while generic programming is more or less type-less. + +The C++ template engine is a [Turing Complete](https://en.wikipedia.org/wiki/Turing_completeness) "language" within C++ that continues to grow and yield powerful new language extensions with each adopted revision of the language standard. Because the features were an afterthought and more or less discovered by accident, the syntax is cumbersome, unfamiliar and non-intuitive. It requires significant dedication to navigate the plethora of caveats but the payoffs can be significant.
+ +The goal of this article is to shed light on the C++ template engine and present the topic in an easy-to-understand format with descriptive examples, each step building on the prior. This work will by no means be a complete corpus on this lengthy topic. The intent is to be a springboard by which some basic knowledge can be derived. See the 'Suggested Reading' section at the bottom of this page for more detailed information. + +Template Basics +--------------- +At its most basic level, templates permit a generic or type-less algorithm to work with various concrete types. When a template is used the compiler will qualify the generic types into the concrete types required to run. For example, here's a simple function template: +~~~{.c} +template <typename _Ty> _Ty square(_Ty src){ return src * src; } +~~~ +This function template can be reused to square shorts, ints, longs, doubles or any other type that supports `operator*`. The compiler will substitute the `_Ty` parameter with other concrete types at compile time to _generate_ the various overloaded versions of the function. Consider the following usage scenarios: + +~~~{.c} +char cVal = 8; +int iVal = 123; +double dVal = 3.14; + +std::cout << square<char>(cVal) << std::endl; +std::cout << square<int>(iVal) << std::endl; +std::cout << square<double>(dVal) << std::endl; +~~~ + +The compiler will substitute the generic `_Ty` parameter with a char, int and double respectively to generate the following specializations: + +~~~{.c} +char square(char src){ return src * src; } +int square(int src){ return src * src; } +double square(double src){ return src * src; } +~~~ +The above three *specializations* are generated by the compiler based on the usage scenarios despite the single instance of the `square` function template in code. This feature can obviously save a lot of typing and technical debt when a single definition needs maintaining instead of three. The notable take-away from TMP at this point is **code generation**.
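The generation step can be observed directly. The following sketch repeats the `square` template from above and uses `static_assert` to confirm that each qualified use yields its own concrete specialization whose return type tracks the parameter type:

```cpp
#include <cassert>
#include <type_traits>

// single definition, multiple compiler-generated specializations
template <typename _Ty> _Ty square(_Ty src){ return src * src; }

// the return type of each generated specialization matches the parameter type
static_assert(std::is_same<decltype(square<char>('\2')), char>::value,
              "square<char> returns char");
static_assert(std::is_same<decltype(square<double>(3.0)), double>::value,
              "square<double> returns double");
```

The `decltype` expressions are unevaluated, so this check costs nothing at run time.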
+ + +The above specializations are qualified with the type surrounded in angle brackets. (e.g. square<**char**>) Function templates have a special ability to deduce certain parameter types based on usage scenarios. The above template can be simplified: +~~~{.c} +template <typename _Ty> auto square(_Ty src) -> _Ty { return src * src; } +~~~ +In this example, the `auto` keyword is a placeholder directing the compiler to evaluate the type at a later time. This new version of `square` can be used without the type qualifiers: +~~~{.c} +char cVal = 8; +int iVal = 123; +double dVal = 3.14; + +std::cout << square(cVal) << std::endl; +std::cout << square(iVal) << std::endl; +std::cout << square(dVal) << std::endl; +~~~ +Here the types are unambiguous because they can be deduced based on the usage scenario so there's no need to qualify them. If a type cannot be deduced from the usage scenario it must be qualified in the template parameter list. Within the definition of the specialization the parameters must be listed in the order that they're declared. +### Class Templates +The previous examples demonstrate _function templates_. _Class templates_ are another template variation: +~~~{.c} +#include <cassert> +template <typename _Ty> struct CheckedPointer{ + CheckedPointer(_Ty * newval) : _Ptr(newval){} + _Ty * operator->() { assert(_Ptr); return _Ptr; } + void delete_it(){ + delete _Ptr; + _Ptr = nullptr; + } + _Ty * _Ptr; +}; +~~~ +This is a very simple class template that checks that the contained pointer to type `_Ty` is not null during every access. This template could be used in code like: +~~~{.c} +int main(){ + struct user{ + int userid; + int lastlogin; + }; + CheckedPointer<user> oUser(new user); + oUser->userid = 1234; + oUser.delete_it(); + oUser->lastlogin = 111980; //assert +} +~~~ +### Type and Non-Type Parameters +Templates can accept type and non-type parameters. Type parameters refer to all the compiler intrinsic and user defined types such as classes and structures.
Compiler intrinsics are identified with keywords such as `int`, `short`, `double`, etc. Non-type parameters include values that can be evaluated at compile time, such as integral constants and static pointers and references. + +Here's an example of a class template that accepts type and non-type template parameters: +~~~{.c} +template <typename _Ty, int _Dims> struct StaticArray{ + _Ty value[_Dims]; +}; +~~~ +This very simply declares a fixed sized array. It's of little use but can be used in code as: +~~~{.c} +int main(){ + StaticArray<int, 3> oArr; + oArr.value[0] = 123; + return oArr.value[0]; +} +~~~ +The first parameter is a type parameter, the second is a non-type parameter. Here's another example that pre-computes a value: +~~~{.c} +template <typename _Ty, int _Dims> struct StaticArray{ + static const bool EvenNumberOfElements = (0 == (_Dims % 2)); + ... +}; +~~~ +Here the template engine performs the modulo and comparison at compile time and stores the result in the static constant. This consumes no clock cycles at run-time to calculate. Integral values can be calculated with ease by the template engine and leveraged in various ways to achieve significant run-time performance improvements. This is the name of the game with TMP: interacting with the compiler to generate, pre-compute and pre-compile as much as possible to reduce run-time overhead and reduce the volume of maintainable code. +Specialization +-------------- +The generic form of a template is declared with the following convention: + +| |keyword |input params |type | name | +|----------------:|:--------:|:-----------:|:-------:|:-----------:| +|function template|`template`|`<int _N>` | `int` |`fnTemplate` | +|class template |`template`|`<int _N>` | `struct`|`CTemplate` | + + +The input parameters on the generic form appear on the left-hand-side of the template name. + +A class or function template is specialized when a generic parameter is explicitly qualified in a declaration.
For example, assume a compile-time constant is calculated: +~~~{.c} +template <int _Factor> int Factoral(){ + return _Factor * Factoral<_Factor - 1>(); +} +~~~ +This is a meta-function that calculates a factorial at compile time by recursively multiplying `_Factor` by `Factoral<_Factor - 1>()`. At first glance this may seem correct, however, there are at least two problems with it. First, at some point during the recursive calculation the value of `_Factor` will reach zero and invalidate the computation (anything multiplied by zero is zero). Next, there is no terminating condition for the recursive calls so compilation will fail at some point due to infinite recursion. For this meta-function to operate as expected a specialization is introduced that terminates the recursion and corrects the calculation: +~~~{.c} +template <> int Factoral<1>(){ + return 1; +} +~~~ +The fully specialized form of a template is declared with the following convention: + +| |keyword |empty input params|type | name |output parameters| +|----------------:|:--------:|:----------------:|:---------:|:----------:|:---------------:| +|function template|`template`|`<>` | `int` |`fnTemplate`|<0> | +|class template |`template`|`<>` | `struct`|`CTemplate` |<5> | + +Notice the specialized values appear on the right-hand-side of the template name. This distinguishes specialized templates from their generic form. Input parameters appear to the left of the name while output parameters appear to the right. + +During compilation, the compiler chooses the most specialized version of a template over a lesser specialized version whenever it's encountered. In this case, the specialized `template <> int Factoral<1>()` is more specialized than the generic `template <int _Factor> int Factoral()` so the compiler chooses it while recursively calculating the value.
Assume this function template is used as: +~~~{.c} +int main(){ + return Factoral<5>(); +} +~~~ +The compiler generates four specializations before instantiating the explicit `Factoral<1>` specialization. Here is an excerpt of the listing file generated by MSVC14 with debugging enabled: +~~~{.asm} +PUBLIC ??$Factoral@$04@@YAHXZ ; Factoral<5> +PUBLIC ??$Factoral@$03@@YAHXZ ; Factoral<4> +PUBLIC ??$Factoral@$02@@YAHXZ ; Factoral<3> +PUBLIC ??$Factoral@$01@@YAHXZ ; Factoral<2> +PUBLIC ??$Factoral@$00@@YAHXZ ; Factoral<1> +~~~ +`Factoral<5>` through `Factoral<2>` are compiler-generated specializations that will be called in the expected order. The beauty of TMP can be demonstrated in the listing of the release build. Since the values are deterministic at compile time, the optimizer eliminates it all: +~~~{.asm} +; int main() { +; return Factoral<5>(); +00E11000 mov eax,78h +00E11005 ret +~~~ + + + +Suggested Reading +----------------- + + + * [Advanced Metaprogramming in Classic C++](https://www.amazon.com/Advanced-Metaprogramming-Classic-Davide-Gennaro/dp/1484210115) + * [C++ Templates: The Complete Guide](https://www.amazon.com/Templates-Complete-Guide-David-Vandevoorde/dp/0201734842) + * [C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond](https://www.amazon.com/Template-Metaprogramming-Concepts-Techniques-Beyond/dp/0321227255) + * [Effective Modern C++: 42 Specific Ways to Improve Your Use of C++11 and C++14](https://www.amazon.com/Effective-Modern-Specific-Ways-Improve/dp/1491903996) + * [Modern C++ Design: Generic Programming and Design Patterns Applied](https://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315) + + + +Copyright :copyright: David Mott 2016